Announcement

New Report Highlights Urgent Need for Addressing Gender Bias in AI Systems

Published on March 7, 2024


The International Research Centre on Artificial Intelligence (IRCAI), under the auspices of UNESCO and in collaboration with UNESCO HQ, has released a comprehensive report titled “Challenging Systematic Prejudices: An Investigation into Gender Bias in Large Language Models”. This groundbreaking study sheds light on the persistent issue of gender bias in artificial intelligence and emphasizes the importance of implementing normative frameworks to mitigate these risks and ensure fairness in AI systems globally. We are pleased to announce this new in-depth report, prepared in partnership with a number of authors and set for release on International Women’s Day, March 8, 2024.

The UNESCO Recommendation on the Ethics of AI has long advocated for minimizing discriminatory or biased outcomes in AI applications. Yet, despite these guidelines, AI systems continue to perpetuate human, structural, and social biases, leading to potential harm at various levels of society. This report focuses on biases found in prominent large language models (LLMs) such as OpenAI’s GPT-2 and ChatGPT, along with Meta’s Llama 2, and examines their impact on decision-making systems and conversational agents.

Key findings reveal troubling gendered word associations and biased content generation within these models, highlighting an urgent need for scalable, objective methods to assess and correct bias. For instance, certain LLMs were more likely to associate traditional roles with gendered names and to generate sexist or misogynistic content. The research also points to negative portrayals of gay people and cultural stereotyping, underlining the need for continued research and policy intervention.

The report calls for a multi-stakeholder approach to AI development and deployment, emphasizing the pivotal roles of governments, policymakers, and technology companies in establishing frameworks and guidelines that prioritize inclusivity, accountability, and fairness. It stresses the importance of transparent AI algorithms, equitable training data collection, and the integration of diverse perspectives to combat stereotypical narratives.

Furthermore, the report recommends that technology companies invest in research exploring the impacts of AI across different demographic groups, ensuring that ethical considerations guide AI development. It also advocates for public awareness and education on AI ethics and biases, empowering users to engage critically with AI technologies and advocate for their rights.

This comprehensive study by IRCAI serves as a call to action for the global community to address and mitigate gender bias in AI, ensuring technology advances in a manner that is equitable, inclusive, and reflective of our diverse society.

Link to report: https://ircai.org/project/challenging-systematic-prejudices/

RECENT POSTS

Georgia DIP and AI Research Project

The International Telecommunication Union (ITU) has published the Digital Innovation Profile for Georgia, a document analysing the country’s digital innovation ecosystem with a focus on the opportunities and challenges in the field of artificial intelligence (AI). The document was prepared in collaboration with IRCAI experts.

CONTACT

International Research Centre
on Artificial Intelligence (IRCAI)
under the auspices of UNESCO 

Jožef Stefan Institute
Jamova cesta 39
SI-1000 Ljubljana

info@ircai.org
ircai.org


