The International Research Centre on Artificial Intelligence (IRCAI), which operates under the auspices of UNESCO, has released, in collaboration with UNESCO Headquarters, a comprehensive report titled “Challenging Systematic Prejudices: An Investigation into Gender Bias in Large Language Models”. This groundbreaking study sheds light on the persistent issue of gender bias within artificial intelligence, emphasizing the importance of normative frameworks to mitigate these risks and ensure fairness in AI systems globally. We are excited to share this new in-depth report, produced in partnership with a number of authors and released on International Women’s Day, March 8, 2024.
The UNESCO Recommendation on the Ethics of AI has long advocated for minimizing discriminatory or biased outcomes in AI applications. Yet, despite these guidelines, AI systems continue to perpetuate human, structural, and social biases, leading to potential harm at many levels of society. This report focuses on biases found in widely used large language models (LLMs) such as OpenAI’s GPT-2 and ChatGPT, along with Meta’s Llama 2, showing their impact on decision-making systems and conversational agents.
Key findings reveal troubling gendered word associations and biased content generation within these models, highlighting an urgent need for scalable, objective methods to assess and correct biases. For instance, certain LLMs were more likely to associate traditional roles with gendered names and to generate sexist or misogynistic content. The research also points to negative portrayals of gay people and cultural stereotyping, underscoring the need for continuous research and policy intervention.
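To make the idea of measuring gendered word associations concrete, here is a minimal, purely illustrative sketch of a template-based association probe. The probability table and the names and roles below are invented stand-ins, not data or methodology from the report; a real evaluation would query an actual model's next-token probabilities over many templates.

```python
import math

# Toy stand-in for an LLM's next-token probabilities, keyed by prompt.
# All numbers here are invented for illustration only.
TOY_MODEL = {
    "Alice works as a": {"nurse": 0.30, "engineer": 0.10},
    "Bob works as a":   {"nurse": 0.05, "engineer": 0.35},
}

def association_bias(name_a, name_b, role, model=TOY_MODEL):
    """Log-ratio of how strongly a model associates a role with two names.

    Positive values mean the role is more strongly associated with
    name_a; negative values mean it is more associated with name_b.
    """
    p_a = model[f"{name_a} works as a"][role]
    p_b = model[f"{name_b} works as a"][role]
    return math.log(p_a / p_b)

# In this toy table, "nurse" skews toward Alice and "engineer" toward Bob.
print(round(association_bias("Alice", "Bob", "nurse"), 2))     # → 1.79
print(round(association_bias("Alice", "Bob", "engineer"), 2))  # → -1.25
```

A score near zero across many such name/role pairs would indicate weak association bias; consistently large magnitudes in one direction are the kind of pattern the report flags.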
The report calls for a multi-stakeholder approach to AI development and deployment, emphasizing the pivotal roles of governments, policymakers, and technology companies in establishing frameworks and guidelines that prioritize inclusivity, accountability, and fairness. It stresses the importance of transparent AI algorithms, equitable training data collection, and the integration of diverse perspectives to combat stereotypical narratives.
Furthermore, the report recommends that technology companies invest in research to explore the impacts of AI across different demographic groups, ensuring that ethical considerations guide AI development. It advocates for public awareness and education on AI ethics and biases, empowering users to engage critically with AI technologies and advocate for their rights.
This comprehensive study by IRCAI serves as a call to action for the global community to address and mitigate gender bias in AI, ensuring technology advances in a manner that is equitable, inclusive, and reflective of our diverse society.
Link to report: https://ircai.org/project/challenging-systematic-prejudices/