SDG 8: Decent Work and Economic Growth
2. Project Details
Company or Institution
Northeastern University and Universidad Nacional Autonoma de Mexico (UNAM)
A.I. For Good Framework to Empower Digital Workers
General description of the AI solution
The A.I. industry has powered a futuristic reality of self-driving cars and voice assistants that help us with almost any need. However, it has also created systemic challenges. For instance, while it has led to platforms where workers label data to improve machine learning algorithms, our research has uncovered that these workers earn less than minimum wage. We are also seeing a surge of A.I. algorithms that privilege certain populations and racially exclude others. If we were able to fix these challenges, we could create greater societal justice and enable A.I. that better addresses people’s needs, especially groups we have traditionally excluded.
To address this problem, we propose an “A.I. For Good” framework. Our framework uses value sensitive design to understand people’s values and rectify harm. We have been using the framework to design A.I. systems that improve the labor conditions of the digital workers operating behind the scenes in our A.I. industry. We have shown that our framework helps digital workers increase their wages, develop their skills, and create overall fairer digital workspaces.
Excellence and Scientific Quality: Please detail the improvements made by the nominee or the nominees’ team, or yourself if you are applying for the award, and why they have been a success.
We have been designing human-centered A.I. that addresses the needs and challenges workers face, such as low wages, limited opportunities for self-development, unfair evaluations, and poor overall labor conditions. We use different types of machine learning models depending on the challenges we identify. For instance, through interviews and surveys with digital workers we found that workers faced numerous unfair evaluations from employers, which resulted in termination and the loss of their wages and jobs. Based on this finding, we designed an intelligent system that uses deep learning to detect when a review a worker receives from an employer is unfair. Similarly, through interviews and surveys we identified that workers had limited opportunities to develop their skills. We therefore developed an intelligent tool that uses reinforcement learning to recommend tasks that help workers develop their skills (becoming faster and better at their jobs).
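As a rough illustration of the kind of reinforcement-learning recommender described above, the sketch below frames task recommendation as an epsilon-greedy multi-armed bandit. This is a deliberate simplification: the class, task names, and bandit formulation are illustrative assumptions, not the deployed system.

```python
import random

class TaskRecommender:
    """Toy epsilon-greedy bandit: recommends the task type whose
    observed skill gain has been highest so far, while occasionally
    exploring other task types. (A simplification of an RL recommender.)"""

    def __init__(self, task_types, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.gains = {t: 0.0 for t in task_types}   # running mean skill gain
        self.counts = {t: 0 for t in task_types}    # observations per task type

    def recommend(self):
        if self.rng.random() < self.epsilon:          # explore
            return self.rng.choice(list(self.gains))
        return max(self.gains, key=self.gains.get)    # exploit best so far

    def record(self, task_type, skill_gain):
        # incremental update of the mean observed skill gain
        self.counts[task_type] += 1
        n = self.counts[task_type]
        self.gains[task_type] += (skill_gain - self.gains[task_type]) / n

rec = TaskRecommender(["image_labeling", "audio_transcription", "text_review"])
rec.record("audio_transcription", 0.8)  # hypothetical measured skill gains
rec.record("image_labeling", 0.2)
print(rec.recommend())
```

The epsilon parameter balances recommending the historically best task against trying tasks the worker has not yet explored; a production system would use a richer state and reward model.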
We have conducted real-world deployments and field experiments with our tools to validate that they work as intended.
All of the A.I.-based tools and systems we have developed are novel and push forward the state of the art. We have published our tools at top conferences and in top journals. Our work has also been covered by the BBC, the New York Times, and Deutsche Welle. All our tools and systems are open source.
Scaling of impact to SDGs: Please detail how many citizens/communities and/or researchers/businesses this has had or can have a positive impact on, including particular groups where applicable and to what extent.
Our work relates to the following UN Sustainable Development Goals:
-Decent Work and Economic Growth
-Peace, Justice, and Strong Institutions
-Industry, Innovation, and Infrastructure
Our A.I.-based tools also have measurable ways to show progress. We are quantifying: (a) workers’ skill development; (b) workers’ hourly wages; (c) the integration of play and creativity into workers’ everyday labor life (here we connect to wellness); (d) the amount of invisible labor workers are doing (our goal is to reduce the amount of invisible labor that workers must perform on digital labor platforms); and (e) the amount of unfair evaluations digital workers receive from employers. Note that we have developed specific tools that allow us to quantify all of these values, which was not possible before.
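To make concrete how metrics such as (b) and (d) could be computed from task logs, here is a minimal sketch. The log schema and field names are hypothetical assumptions for illustration, not our actual instrumentation.

```python
def quantify_work(task_log):
    """Compute an effective hourly wage and the invisible-labor share
    from a worker's task log. Each entry (illustrative schema):
      pay            -- payment received for the task, in USD
      paid_minutes   -- minutes spent on the task itself
      unpaid_minutes -- minutes of invisible labor (searching for tasks,
                        disputing rejections, unpaid qualification tests)"""
    pay = sum(t["pay"] for t in task_log)
    paid = sum(t["paid_minutes"] for t in task_log)
    unpaid = sum(t["unpaid_minutes"] for t in task_log)
    total_hours = (paid + unpaid) / 60
    return {
        # wage per hour of ALL time worked, including invisible labor
        "effective_hourly_wage": round(pay / total_hours, 2),
        # fraction of working time that goes unpaid
        "invisible_labor_share": round(unpaid / (paid + unpaid), 2),
    }

log = [  # hypothetical log entries
    {"pay": 1.50, "paid_minutes": 10, "unpaid_minutes": 5},
    {"pay": 2.00, "paid_minutes": 15, "unpaid_minutes": 6},
]
print(quantify_work(log))  # {'effective_hourly_wage': 5.83, 'invisible_labor_share': 0.31}
```

Counting unpaid minutes in the denominator is what makes invisible labor visible: the effective wage here (USD 5.83/h) is lower than the naive pay-per-paid-minute figure would suggest.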
For our solution, we have also teamed up with public rural libraries, which have provided us with the infrastructure to give displaced rural workers access to the computers they need to develop their skills and access new jobs. We have also been working closely with federal governments (such as Mexico’s Ministry of Foreign Affairs) to design technology that can empower nations to improve the labor conditions of their digital workers. We are also collaborating with the Toloka platform to design A.I.-based tools that can be used across nations.
We believe our A.I.-based tools are advancing sustainable development because they provide ways to directly quantify the labor conditions of digital workers and then act on the problems we measure, something that was not possible before. This matters for bringing change to digital platforms: if policy makers cannot quantify what is happening inside digital labor platforms, they cannot create policy for fairer and better digital workspaces.
Scaling of AI solution: Please detail what proof of concept or implementations you can show now in terms of its efficacy, how the solution can be scaled to provide a global impact, and how realistic that scaling is.
We have tested our A.I.-based tools with hundreds of workers and have seen that the tools can impact their lives by increasing their wages, facilitating skill development, and creating fairer workspaces for digital workers. We have been teaming up with industry partners such as Toloka, federal governments such as Mexico’s Ministry of Foreign Affairs, and different NGOs in order to deploy our tools at scale. One problem we have faced in scaling up is that building the partnerships needed to reach the communities we want to impact takes time (fortunately, we have already built several of these relationships, which will facilitate deploying our tools at scale). Additionally, in rural regions it is always difficult to gain traction and recruit a large mass of workers to use our tools. Here we have started to partner with NGOs that have decades of experience working with rural workforces, in order to facilitate the connection with rural workers.
Our A.I.-based tools have also focused on helping digital workers develop their creativity and entrepreneurship (if they want to be entrepreneurs). We have developed tools that empower anyone to create their own digital marketplace without any prior technical knowledge. These tools are helping the growth of new A.I. companies in rural regions and the Global South.
We work closely with NGOs, federal governments, and industry and academic actors to help build a vibrant AI-for-SDGs ecosystem. We have been operating within the US and Mexico, and, thanks to our industry collaborators, have also started to expand into Africa and the rest of Latin America.
All of our tools are open source and the university has helped us to ensure our tools are GDPR compliant.
Ethical aspect: Please detail the way the solution addresses any of the main ethical aspects, including trustworthiness, bias, gender issues, etc.
In order to ensure that we are developing ethical A.I., we use critical theory. We draw on the work of Herbert Marcuse, a theorist from the Frankfurt School of Critical Theory, to help us design A.I. that drives societal change in an ethical way. Marcuse argues that one way to engage in critical analysis and propose designs that address systemic challenges is through artistic creativity, which facilitates developing new designs that are not confined by the current reality of what is possible. Based on this, we have designed tools and interventions that engage workers in “creative artistic co-design” sessions. This also allows us to design new ethical A.I. tools that address power structures and systemic challenges. We have also teamed up with NGOs and federal governments to design interventions through which we can reach underserved populations (such as certain rural communities) to ensure the technology we design does not exacerbate harms.
To further ensure ethics in our A.I. pipeline, we also follow efforts from the machine learning community (such as the datasheets work of Gebru et al., 2018) to ensure the datasets our A.I. tools generate about workers are accompanied by a datasheet that describes the dataset’s motivation, composition, collection process, recommended uses, and any other details the community should be aware of. We have also explored developing model cards (following Mitchell et al., 2018) to document benchmarked stress-testing evaluations of our A.I. conducted under different conditions, as well as the intended uses of the models, to limit their application in contexts for which they are not well suited.
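A lightweight way to keep such documentation in sync with a model is to generate it in code. The sketch below is a minimal model-card structure in the spirit of Mitchell et al.; the specific fields, model name, and values are illustrative assumptions, not a published schema.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal model card, loosely following Mitchell et al. (2018).
    Field names and example values are illustrative placeholders."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list
    evaluation_conditions: dict = field(default_factory=dict)

    def to_json(self):
        # serialize so the card can ship alongside the model artifact
        return json.dumps(asdict(self), indent=2)

card = ModelCard(
    model_name="unfair-review-detector",  # hypothetical model
    intended_use="Flag potentially unfair employer reviews for human audit",
    out_of_scope_uses=["automatic termination or payment decisions"],
    evaluation_conditions={"languages": ["en", "es"], "setting": "crowd work"},
)
print(card.to_json())
```

Because the card is machine-readable, a deployment pipeline can refuse to serve a model whose declared out-of-scope uses match the requesting application.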
Additionally, we draw upon Vallor’s care ethics and Herkert’s microethics approach to create training around A.I. ethics for researchers, students, and collaborators working with us on these A.I. projects.