Humans bring gender bias to their interactions with AI – new study
Humans bring gender biases to their interactions with Artificial Intelligence (AI), according to new research from Trinity College Dublin and Ludwig-Maximilians-Universität (LMU) Munich.
The study, involving 402 participants, found that people exploited female-labelled AI and distrusted male-labelled AI to much the same extent as they do human partners bearing the same gender labels.
Notably, exploitation of female-labelled AI was even more prevalent in the human-AI setting than with human partners bearing the same gender label.
This is the first study to examine the role of machine gender in human-AI collaboration using a systematic, empirical approach.
The findings show that gendered expectations from human-human settings extend to human-AI cooperation. This has significant implications for how organisations design, deploy, and regulate interactive AI systems, according to the authors.
The study, led by sociologists in Trinity’s School of Social Sciences and Philosophy, has just been published in the journal iScience.
Key findings:
- Patterns of exploitation and distrust toward AI agents mirrored those seen with human partners carrying the same gender labels.
- Participants were more likely to exploit AI agents labelled female and more likely to distrust AI agents labelled male.
- Assigning gender to AI agents can shape cooperation, trust, and misuse, with implications for product design, workplace deployment, and governance.
Sepideh Bazazi, first author of the study and Visiting Research Fellow at the School of Social Sciences and Philosophy, Trinity, explained: “As AI becomes part of everyday life, our findings that gendered expectations spill into human-AI cooperation underscore the importance of carefully considering gender representation in AI design, for example, to maximise people’s engagement and build trust in their interactions with automated systems.
“Designers of interactive AI agents should recognise and mitigate biases in human interactions to prevent reinforcing harmful gender discrimination and to create trustworthy, fair, and socially responsible AI systems.”
Taha Yasseri, co-author of the study and Director of the Centre for Sociology of Humans and Machines (SOHAM) at Trinity, said: “Our results show that simply assigning a gender label to an AI can change how people treat it. If organisations give AI agents human-like cues, including gender, they should anticipate downstream effects on trust and cooperation.”
Jurgis Karpus, co-author of the study and Postdoctoral Researcher at Ludwig-Maximilians-Universität (LMU) Munich, added: “This study raises an important dilemma. Giving AI agents human-like features can foster cooperation between people and AI, but it also risks transferring and reinforcing unwelcome existing gender biases from people’s interactions with fellow humans.”
The article, ‘AI’s assigned gender affects human–AI cooperation’ by Sepideh Bazazi (TCD); Jurgis Karpus (LMU); Taha Yasseri (TCD, TU Dublin) can be read on the journal iScience website.
More about the study:
In this experimental study, participants played repeated rounds of the Prisoner’s Dilemma, a classic game in behavioural game theory and economics used to study cooperation and defection. Partners were labelled human or AI, and each partner was further labelled male, female, non-binary, or gender-neutral. The team analysed motives for cooperation and defection, distinguishing exploitation (taking advantage of a cooperative partner) from distrust (defecting pre-emptively). The findings show that gender labelling can reproduce gendered patterns of cooperation with AI. The 402 participants were recruited in the UK, and the experiment was conducted online.
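For readers unfamiliar with the game, the short sketch below illustrates a standard Prisoner’s Dilemma payoff matrix and the exploitation-versus-distrust distinction described above. It is not the authors’ code: the payoff values, function names, and the “expectation” variable are illustrative assumptions only.

```python
# Illustrative sketch (not from the study): a Prisoner's Dilemma payoff matrix
# and the motive distinction used in the text. All values are assumptions.

# Payoffs (player, partner) for each pair of moves: C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # player is exploited by a defecting partner
    ("D", "C"): (5, 0),  # player exploits a cooperative partner
    ("D", "D"): (1, 1),  # mutual defection
}

def classify_defection(expects_partner_to_cooperate: bool) -> str:
    """Label the motive behind a defection, following the study's distinction:
    exploitation = defecting against a partner believed to cooperate;
    distrust = defecting pre-emptively against a partner believed to defect."""
    return "exploitation" if expects_partner_to_cooperate else "distrust"

if __name__ == "__main__":
    my_move, partner_move = "D", "C"
    print("Payoffs:", PAYOFFS[(my_move, partner_move)])          # (5, 0)
    print("Motive:", classify_defection(expects_partner_to_cooperate=True))
```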
Journal: iScience
Method of Research: Experimental study
Subject of Research: People
Article Title: AI’s assigned gender affects human-AI cooperation
Article Publication Date: 2-Nov-2025