AI seems to be well on its way to becoming pervasive. You hear rumbles of AI being used, somewhere behind the scenes, at your doctor’s office. You suspect it may have played a role in hiring decisions during your last job search. Sometimes – maybe even often – you use it yourself.
And yet, while AI now influences high-stakes decisions such as what kinds of medical care people receive, who gets hired and what news people see, these decisions are not always made equitably. Research has shown that algorithmic bias often harms marginalized groups. Facial recognition systems often misclassify transgender and nonbinary people, AI used in law enforcement can lead to the unwarranted arrest of Black people at disproportionately high rates, and algorithmic diagnostic systems can prevent disabled people from accessing necessary health care.
These inequalities raise a question: Do gender minorities, racial minorities and disabled people hold more negative attitudes toward AI than the general U.S. population?
I’m a social computing scholar who studies how marginalized people and communities use social technologies. In a new study, my colleagues Samuel Reiji Mayworm, Alexis Shore Ingber, Nazanin Andalibi and I surveyed over 700 people in the U.S., including a nationally representative sample and an intentional oversample of trans, nonbinary, disabled and racial minority individuals. We asked participants about their general attitudes toward AI: whether they believed it would improve their lives or work, whether they viewed it positively, and whether they expected to use it themselves in the future.
The results reveal a striking divide. Transgender, nonbinary and disabled participants reported, on average, significantly more negative attitudes toward AI than their cisgender and nondisabled counterparts. These results indicate that when gender minorities and disabled people are required to use AI systems, such as in workplace or health care settings, they may be doing so while harboring serious concerns or hesitations. These findings challenge the prevailing tech industry narrative that AI systems are inevitable and will benefit everyone.
Public perception plays a powerful role in shaping how AI is developed, adopted and regulated. The vision of AI as a social good falls apart if it mostly benefits those who already hold power. When people are required to use AI while simultaneously disliking or distrusting it, it can limit participation, erode trust and compound inequities.
Gender, disability and AI attitudes
Nonbinary people in our study had the most negative AI attitudes. Transgender people overall, including trans men and trans women, also expressed significantly more negative AI attitudes. Among cisgender people – those whose gender identity matches the sex they were assigned at birth – women reported more negative attitudes than men, a trend echoing previous research, but our study adds an important dimension by examining nonbinary and…