Do People Project Gender Biases onto AI Managers?

Published: Wednesday 11 March 2026 - 07:00

New research shows that human perceptions of AI decision-makers may be shaped by gender stereotypes.
Participants in a recent study reacted more negatively when a female AI manager decided not to give them a reward, even when the decision was identical to one made by a male AI manager.
The study, conducted by Dr Hao Cui and Professor Taha Yasseri from the Centre for the Sociology of Humans and Machines (SOHAM) and published in Computers in Human Behavior Reports, explores how people evaluate decisions made by AI managers compared with human managers.

Speaking about the paper, Dr Hao Cui, a Research Fellow at SOHAM and the lead author of the study, said:

As artificial intelligence systems begin to take on roles traditionally held by human managers, it becomes increasingly important to understand how people respond to their decisions. Our study shows that people may project existing gender stereotypes onto AI systems, meaning that identical decisions can be judged differently depending on whether the AI manager is presented as male or female.

In an experiment, participants worked in teams of three to solve a short puzzle. After completing the task, a manager selected one player to receive a small bonus. The researchers found that participants reacted more negatively when a female AI manager decided not to give an award, even though the decision itself was identical and the manager conditions were randomly assigned to participants. The findings suggest that gender biases commonly observed in evaluations of human leaders may also influence how people judge AI decision-makers, with potential implications for future human–AI interactions.

Read the full paper here. 

Speaking about the findings, Professor Taha Yasseri, the director of SOHAM and co-author of the article, said:

Our findings have important implications for the design and regulation of agentic AI systems. While research has largely focused on improving and developing agentic AI, far less is known about human behaviour when interacting with these agents.

The findings highlight an emerging challenge as AI systems increasingly take on roles that involve evaluating people, allocating rewards, or making workplace decisions. Ensuring fairness in AI will therefore require more than improving algorithms; it also means understanding how human perceptions and social stereotypes shape interactions with these systems. The researchers suggest that designers and organisations deploying AI should consider how gender cues and other human-like features may influence users’ reactions. Addressing these behavioural dynamics early could help build AI systems that are not only technically fair but also trusted and accepted by the people who work with them. 

The research conducted in this publication was funded by Research Ireland under grant number IRCLA/2022/3217, ANNETTE (Artificial Intelligence Enhanced Collective Intelligence). 


  • Dr Hao Cui is a Research Fellow at the Centre for the Sociology of Humans and Machines (SOHAM), where she explores how AI can enhance human collective intelligence.
  • Prof Taha Yasseri is Workday Chair and Professor of Technology and Society at Technological University Dublin and Trinity College Dublin, where he directs SOHAM.
  • The Centre for the Sociology of Humans and Machines (SOHAM) is a joint research centre at TU Dublin and Trinity College Dublin, dedicated to understanding how AI and automation are reshaping society. Through interdisciplinary research, international collaboration, and public engagement, we investigate the social, ethical, and technological dimensions of human-machine interactions.