A Digital World without Women: How Visual Stereotypes Harm AI Development

Machine learning based on "gender-skewed" data calls into question the inclusiveness of neural networks.

An international team of researchers analyzed more than a million images from Google, Wikipedia, and IMDb, classifying them by gender association across 3,495 categories, and found that women are significantly underrepresented in these images.

The study, led by researchers from the Haas School of Business at the University of California, Berkeley, also analyzed billions of words from the same platforms. The results showed that gender bias is far more pronounced in images than in text, even for categories traditionally associated with both women and men.

According to the authors of the study, published in the scientific journal Nature, people now spend less time reading and more time viewing images on the Internet, and this shift amplifies gender bias by increasing both its prevalence and its psychological impact.

The researchers also noted that the underrepresentation of women in their data is far more severe than previously recorded in public opinion polls and other sources.

Experts warn that generative artificial intelligence models trained on vast numbers of "gender-skewed" images may further reinforce gender, racial, and other social stereotypes.

"Addressing the social consequences of this large-scale shift to visual communication will be essential for the development of a fair and inclusive future of the Internet," the authors of the study noted.