Global Forum on the Ethics of AI (GFEAI) 2025

Thank you to all applicants!
We are grateful to have received so many high-quality submissions from researchers around the world in response to our Call for Papers. A total of 36 papers were selected for presentation at the Forum, held from 24-27 June 2025 in Bangkok, Thailand. We are now pleased to announce the 12 outstanding papers selected as the best papers of the Forum. These papers represent the most original, rigorous and impactful research presented at GFEAI 2025.
Selection of the 12 best GFEAI papers

1st Prize

Upholding Academic Integrity through AI Literacy in the Age of Generative AI in Education by Nancy Kwangwa

As the use of generative artificial intelligence (AI) tools becomes increasingly common in universities, they present both opportunities and challenges. This study, conducted at the Women’s University in Africa, introduced an AI literacy training programme to help students and academics understand how AI works, how to use it ethically, and how to avoid misuse such as plagiarism and overdependence. The findings show that shared learning between students and staff about the risks and benefits of generative AI fosters more responsible and transparent use of AI. The paper recommends integrating AI literacy into university curricula to ensure that education evolves with technology while preserving core values of integrity, transparency and accountability.

“While generative AI holds great potential to advance higher education goals, its benefits will only be fully realised if accompanied by deliberate investment in AI literacy training. Without it, we risk deepening existing inequalities and compromising the integrity and quality of academic practice.”

Nancy Kwangwa is Deputy University Librarian at the Women’s University in Africa and a leading advocate for AI literacy, digital equity, and ethical information access. She holds a PhD from the University of Cape Town and has contributed to global initiatives such as UNESCO’s Abuja Declaration, while leading digital inclusion efforts across Zimbabwe.

Carrot or Stick: A Comparative Study of AI Ethics Policies Across Countries by Danxia Chen, Cicely Jefferson, Jon K. Reid and Joanne Hix

AI technology and AI applications are dramatically impacting humanity and society, offering immense opportunities and significant risks. As Saveliev and Zhurenkov note: “AI is often understood as a double-edged sword of modern science.” This study compares AI ethics policies across different countries, filling a gap in the current literature on ethical AI and sustainable development. By examining various nations’ approaches to regulation, social responsibility, and governance, the research identifies best practices and offers recommendations for policymakers, developers, users, and global AI leaders. The authors emphasize the urgent need for global collaboration in shaping ethical AI frameworks.

“We need both carrot and stick — policies that enable free and unfettered innovation while striking the right balance between AI development and control.” Danxia Chen

Danxia Chen is a professor of data science and statistics at Dallas Baptist University, certified by NVIDIA, Harvard Business School, Deep Learning AI Institute and IBM, with expertise in ethical AI and research in education.

Cicely Jefferson is Dean of the Carter School of Business at Dallas Baptist University, with a background in law, ministry and business.

Jon K. Reid is a licensed counselor and former psychology professor with 30 years’ experience.

Joanne Hix teaches management at Dallas Baptist University with corporate strategy experience from Southwest Airlines and Mobil.

From Vision to Action: Egypt’s Hybrid Model for Responsible AI Governance by Yomna Ahmed Shawky Omran

As part of its Vision 2030 digital transformation, Egypt has positioned AI as a catalyst for national development in sectors like health and education. To address ethical and societal risks, the country has adopted a hybrid AI governance model that combines regulatory measures, like the Personal Data Protection Law and upcoming AI legislation, with soft tools such as the 2023 Egyptian Charter for Responsible AI. Central to this model is the Responsible AI Center, established to coordinate ethical AI practices across sectors. Egypt’s approach emphasizes inclusive, values-driven, and adaptable policies aligned with international frameworks (e.g., OECD, UNESCO). Supported by the ongoing UNESCO AI Readiness Assessment, it offers a blueprint for how Global South countries can foster trustworthy AI ecosystems that align innovation with ethics and local priorities.

“There is no ‘one-size-fits-all’ model for AI governance. Egypt’s experience demonstrates how a hybrid and adaptive governance model can embed ethics and inclusivity at the core.”

Yomna Omran is the AI Governance and Coordination Lead at Egypt’s Ministry of Communications and Information Technology, specializing in the global governance of emerging technologies and the intersection of AI, ethics, and international relations.

2nd Prize

The AI Strategy Compass: A Strategic Change Framework for Institution-Wide AI Implementation in Higher Education by Ines Springael

As AI transforms education, most universities struggle with scattered AI experiments that fail to create real change. This paper introduces the AI Strategy Compass (AISC), a practical framework successfully implemented at Breda University of Applied Sciences to guide comprehensive AI adoption across institutions. The AISC consists of six interconnected components: creating urgency around AI’s importance, setting clear strategies, building a team of AI pioneers, taking a coordinated programmatic approach, maintaining open communication, and fostering cultural change. Rather than treating AI as merely a technical upgrade, the framework recognizes that successful implementation requires changing people’s behaviour: how we work, learn, and relate to technology. Early insights show that meaningful transformation happens when institutions prioritize human behaviour over technology, empower knowledgeable staff to lead change, and create space for experimentation and learning. The AISC offers educational leaders a roadmap for moving beyond fragmented pilots toward institution-wide AI transformation that enhances teaching, research, and operations.

“If organisations want to create impact with ethical and inclusive AI, they must treat it as a whole—not as a set of isolated projects or fragmented actions—and invest in people, not just policies.”

Ines Springael is AI Programme Manager at Breda University of Applied Sciences, where she leads institution-wide AI adoption through a human-centered framework that treats AI implementation as a cultural and strategic transformation.

Advancing Ethical and Inclusive Artificial Intelligence Education for Sustainable Development in Bahrain by Raghu Dhumpati

This paper highlights Bahrain’s pioneering efforts in promoting ethical and inclusive AI education through a national AI framework aligned with UNESCO’s values. Supported by the government, the initiative integrates responsible AI principles into school curricula, emphasizes teacher training, and ensures broad student engagement. It has already reached thousands of learners and achieved gender balance in AI certification. Bahrain’s experience offers a compelling example of how smaller nations can make significant steps in preparing youth for an AI-driven future—ethically and equitably.

“Ethical and inclusive AI education is not a luxury—it’s a necessity. With the right policy commitment, gender parity, national certification, and ethical alignment, even small nations can scale responsible AI learning for all.”

Dr. Raghu Dhumpati is a Professor of Computer Science at Bahrain Polytechnic and an expert in AI, cybersecurity, and cloud computing.

The SEA AI Policy Observatory: Foundational Intelligence for Ethical AI Governance by Shi Hao Lee and Supheakmungkol Sarin

Southeast Asia, home to over 600 million people across 11 diverse nations, faces unique challenges and opportunities in shaping ethical AI governance. Yet, the region’s evolving AI policy landscape remains underexplored due to fragmented information and limited visibility. To address this, AI Safety Asia (AISA) developed the Southeast Asia AI Policy Observatory—the region’s first digital platform dedicated to mapping AI governance initiatives across all 11 countries. Built through extensive engagement, including six roundtables with over 1,000 participants and more than 30 stakeholder interviews across three beta cycles (Sep 2024–Mar 2025), the Observatory provides essential regional insight to complement global monitoring efforts. The beta version is accessible at https://seaobservatory.com/.

“SEA AI Observatory is not just a repository, but a tool to help policymakers, researchers, and civil society track, compare, and engage with AI policies across 11 countries. More than comparison, I hope it fosters regional collaboration and drives ethical, inclusive AI governance that reflects Southeast Asia’s diverse needs and aspirations.”

Shi Hao Lee holds an M.S. in Computer Science from Cornell University and a B.A. from UC Berkeley. He is the former Technical Lead at AI Safety Asia, where he led the development of the SEA AI Observatory tracking AI policies across Southeast Asia.

3rd Prize

Toward Human-Centered AI for Older Adults: A Preliminary Study Integrating UCD, Google PAIR, and Stanford HAI Principles by Aung Pyae

As AI becomes part of everyday life, it must also serve older adults, who are often left behind by technology. This paper outlines three key insights for designing ethical, inclusive AI systems. First, older users value explainability and control—they’re more likely to trust and use AI when they understand how it works and can shape its behavior. Second, empathy and emotional alignment matter just as much as functionality: AI that responds to users’ feelings and routines can enhance well-being and reduce isolation. Third, combining human-centered design methods with ethical AI frameworks (such as Google’s PAIR and Stanford’s HAI) creates more inclusive systems that respect dignity and autonomy. This research demonstrates that older adults are not passive users—they want to be heard, involved, and supported. Trustworthy AI begins by listening, adapting, and designing with people—not just for them.

“Older adults are not passive recipients of technology—they have unique needs, preferences, and lived experiences that, when incorporated into AI design, lead to more effective, ethical, and inclusive systems. I hope policymakers recognize that creating human-centered AI requires active engagement with end-users, particularly those at risk of digital marginalization, to ensure that AI technologies support autonomy, dignity, and social inclusion in ageing societies.”

Dr. Aung Pyae is a Lecturer at Chulalongkorn University, Thailand, specializing in Human-Centered AI, Human-Computer Interaction, and Health Informatics. He has contributed extensively to both academic research and applied innovation in generative AI, UX design, and technology for aging populations.

Monitoring AI for Ethical Governance and Sustainable Development: The Brazilian Observatory (OBIA) Experience by Isabella Ferreira Lopes

This paper presents the Brazilian Artificial Intelligence Observatory (OBIA), a national initiative to monitor AI deployment in line with the Brazilian AI Strategy and Brazilian Artificial Intelligence Plan (PBIA). OBIA aims to centralize key data and indicators for more transparent, ethical, and sustainable AI governance. The paper focuses on two core research areas: federal government investments in intelligent systems and judicial cases regarding AI use. Preliminary findings from both strands show how OBIA combines quantitative and qualitative insights to map Brazil’s evolving AI landscape. By tracking AI use across sectors, OBIA strengthens oversight and contributes to evidence-based policymaking for responsible AI development.

Isabella Ferreira Lopes is a Computer Engineer and AI ethics researcher pursuing a Master’s in Artificial Intelligence at the University of São Paulo, focusing on data mining, machine learning, AI ethics and regulation.

Tagtime Medicare – Enhancing Medication Adherence in the Elderly via Mobile Technology and RFID Integration by Nuttapat Ngernshoosri, Jittaphanu Koomdee, Nuttawut Sroidokson and Porawat Visutsak

Managing medications can be challenging for older adults, often leading to missed doses and health risks. Tagtime Medicare is a mobile application developed to help elderly adults and their caregivers manage medications more safely and efficiently. It features customizable reminders and integrates RFID tags attached to pill bottles, enabling users to scan and instantly access accurate dosage instructions. With promising pilot testing results (a 9.3% drop in missed doses, a 72% reduction in time spent on medication tasks, and a 90% user satisfaction rate), Tagtime Medicare demonstrates potential to improve health outcomes in ageing populations.

“We wanted to create a practical, compassionate digital tool that simplifies medication management for older adults. Our research shows that even simple, accessible technology can make a measurable difference in people’s lives.”

Nuttapat Ngernshoosri is a Computer Science student at King Mongkut’s University of Technology North Bangkok (KMUTNB), dedicated to creating human-centered, ethical healthcare technologies like Tagtime Medicare.

Jittaphanu Koomdee is a Computer Science student at KMUTNB, focused on developing ethical, user-friendly digital tools for healthcare and elder care.

Nuttawut Sroidokson is Deputy Director of Digital Technology at the Institute of Computer and Information Technology, KMUTNB.

Porawat Visutsak is an Associate Professor at the Faculty of Applied Science, KMUTNB, and a former Senior Visiting Scholar at Beijing Institute of Technology.

Honorable Mention

The Ethics of Artificial Intelligence in University Student Assessment: Towards Fair and Responsible Educational Practices by Ayoub Mohammed Albalushi

This study explores the growing use of AI in university student assessments, such as automated grading and plagiarism detection, and raises concerns about fairness, transparency, and accountability. While these technologies can enhance efficiency, they may also produce biased or unfair outcomes. The paper urges higher education institutions to adopt ethical guidelines, ensure human oversight, and regularly audit AI tools to ensure innovative, equitable and responsible educational systems.

“Fairness in student assessment must not be sacrificed for efficiency. Ethical AI systems should be designed to enhance—not replace—human judgment, and must be regularly audited for bias, transparency, and accountability.”

Ayoub Mohammed Suleiman Albalushi is a lecturer at the University of Technology and Applied Sciences in Oman, specializing in educational psychology and the ethical use of technology in education.

From Compliance to Character: Leveraging UNESCO’s EIA through Exemplarist Ethics to Foster Virtuous AI Development Cultures Against Vectorialism by Juan Manuel Martínez García

This paper explores how Ethical Impact Assessments (EIAs) for AI can move beyond their typical function as risk management tools to become catalysts of ethical culture in technology development. Drawing on Linda Zagzebski’s theory of virtue ethics and exemplarism, it proposes reimagining UNESCO’s EIA framework as a source of moral inspiration, encouraging AI development teams to learn from ethical role models, or “exemplars,” rather than relying solely on compliance with rules. By recognizing individuals in tech who embody integrity and responsibility, organizations can foster a culture of intrinsic motivation for ethical innovation. This approach is particularly relevant in fast-paced environments where speed and control overshadow ethical reflection. Reframing EIAs as tools for moral learning can help cultivate a genuine commitment to trustworthy AI within the identity and everyday practices of tech organizations.

“Moral imagination is our capacity to envision the ethical dimensions and potential consequences of technology. By connecting this human capacity with an AI framework grounded in moral exemplars, we can move beyond mere compliance and begin to shape systems that have a genuine orientation toward virtuous outcomes. This fusion of imagination and exemplarism is our most promising path forward.”

Juan Manuel Martínez is a philosopher, lecturer, and researcher at the Universidad Tecnológica de Pereira and Universidad de Caldas in Colombia, focusing on the philosophy of technology and the ethics of AI.

Addressing Intersectional Bias in AI-Driven Recruitment Using the HITHIRE Model: A Fair, Transparent, and Ethical Framework for Saudi Arabia’s Vision 2030 by Elham Albaroudi, Taha Mansouri, Mohammad Hatamleh and Ali Alameer

This paper introduces the HitHire governance framework, designed to ensure ethical, fair, and culturally grounded use of AI in recruitment, particularly within the context of Saudi Arabia’s Vision 2030. As AI tools increasingly influence hiring decisions, concerns around bias, lack of transparency, and accountability continue to grow. HitHire integrates global ethical principles, legal standards, and Islamic values into the design and oversight of recruitment technologies. It promotes fairness, transparency, and institutional responsibility while fostering trust and social justice. The framework provides practical guidance for developers, HR leaders, and policymakers working to align AI deployment with both innovation and equity. HitHire offers a scalable model for ethical AI adoption across the MENA region and other culturally diverse contexts.

“I was struck by how many recruitment AI systems overlook intersectional bias, especially in contexts like Saudi Arabia, where gender and nationality interact uniquely. HitHire doesn’t just detect bias; it actively respects diversity goals set by initiatives like Vision 2030.” Elham Albaroudi

Dr. Elham Albaroudi is a PhD candidate at the University of Salford, specializing in AI governance and data science.

Dr. Taha Mansouri is a Lecturer in AI at the University of Salford, focusing on computer vision, large multimodal models, explainable AI, and AI ethics.

Mr. Mohammad Hatamleh is Executive Director at Bariah Business Company, with over a decade of experience in digital strategy, AI, and data consulting across the MENA region.

Dr. Ali Alameer is a Lecturer in AI at the University of Salford whose research focuses on ethical and trustworthy AI, computer vision, and large vision and language models.

The selected papers will be featured in the official proceedings of the Global Forum on the Ethics of AI 2025. Authors of the selected papers will be contacted directly with further information regarding the publication process and next steps.
About the Call

UNESCO and partner organizations are pleased to announce a call for papers for the upcoming Global Forum on the Ethics of Artificial Intelligence, to be held in Bangkok, Thailand from 24-27 June 2025. This call is dedicated to exploring the rapidly changing landscape of AI policy in the context of UNESCO’s Recommendation on the Ethics of Artificial Intelligence, the only globally agreed normative instrument in the domain of AI ethics, which has been adopted by 194 countries. The Global Forum acts as a venue for policymakers, civil society, academia and the private sector to share their experiences and good practices for implementing the Recommendation. With this call, we invite scholars, practitioners, policymakers, and students from around the globe to contribute to the Forum with their insights and research.

Presentation at the Forum and publication

Authors of accepted abstracts will be invited to present and discuss their research in a workshop at the Global Forum on the Ethics of AI in Bangkok on 24-27 June 2025. The costs associated with travel and participation in the Forum will be subsidized by the organizers. Authors may choose to have either an extended abstract or full working paper subsequently published in the proceedings of the Global Forum on the Ethics of AI. For those interested, we will seek to organize one or more special issues of relevant academic journals following a process of peer review and revision.

Submission Themes

This call consists of two tracks. The first track is for law, public policy, ethics, social sciences, economics, and related disciplines. For this track, we are seeking original and unpublished papers focusing on the following sub-themes linked to Policy Areas of the UNESCO Recommendation.

Ethical Governance and Stewardship

  • Local / national / regional / global approaches to addressing AI governance, including policies, strategies, collaborations, interoperability of governance mechanisms, and good practices for institutional design of governmental agencies.
  • Optimal institutional design for AI supervision in the public sector, including structures and frameworks for effective AI oversight and regulation.
  • The role of civil society in setting the agenda for and participating in AI governance.
  • Approaches to the ethical design, development and deployment of AI technologies.
Communication and Information

  • Social, political and ethical implications of AI systems, including challenges related to the amplification of hate speech, disinformation, bias and all forms of discrimination as well as potential benefits for addressing the UN Sustainable Development Goals (SDGs).
Environment and Ecosystems

  • Measuring, monitoring and mitigating the impact of AI technologies on the environment, including relevant tools, methodologies, practices and standards.
Culture

  • Ethical design, use and governance of AI systems in domains of languages, arts, religion, cultural heritage, and indigenous knowledges.
Ethical Impact Assessment

  • Frameworks and good practices in assessing the ethical impacts of AI technologies throughout the AI life cycle, particularly in relation to the UNESCO Ethical Impact Assessment tool.
Gender

  • Examining the impact of AI technologies on the gender equality agenda.
Health and Social Well-Being

  • Challenges and opportunities in advancing inclusive AI systems for and with persons with disabilities, and ensuring accessibility, fairness, and non-discrimination in AI design and deployment.

We welcome research focused on aspects of implementation as well as critical theoretical approaches to the topics listed above. Priority in the selection process will be given to evidenced papers that are cross-disciplinary.

The second track is for computer science, data science, and related disciplines. For this track, we are seeking original and unpublished papers focusing on the following sub-themes:

E-Government

Policy Analytics

Digital Rights Management

Digital Citizenship, Digital Identity

AI and Machine Learning

Personal Data Protection and Privacy

Innovation in Public Management

Sustainable Technology

Policy Standards and Guidelines

Key Dates

Deadline for submission of abstract: 1 May 2025.
Notification of acceptance for presentation and/or publication: 31 May 2025.
Publication of extended abstract / working paper in proceedings of Global Forum: July 2025.
Publication of full papers following a round of peer review and revisions: late 2025 / early 2026.

Submission Guidelines

Abstracts must be submitted in English and should be between 500 and 1,000 words, clearly outlining the research focus, methodology / theoretical framework, and preliminary findings or arguments.

Each submission should include the author(s)’s name, affiliation, and contact information.

Submissions should be original work, not previously published or under consideration for publication elsewhere.

A maximum of two abstracts may be submitted per person.

All submissions will be peer reviewed.

We look forward to receiving your submissions and to your valuable contributions to this critical discourse on the ethics and governance of AI.

For more information, please contact ai-ethics@unesco.org.

Join us in shaping the future of AI governance!

CONTACT

International Research Centre
on Artificial Intelligence (IRCAI)
under the auspices of UNESCO 

Jožef Stefan Institute
Jamova cesta 39
SI-1000 Ljubljana

info@ircai.org
ircai.org

The designations employed and the presentation of material throughout this website do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, territory, city or area of its authorities, or concerning the delimitation of its frontiers or boundaries.
