
SDG Meter


1. General


SDG 1: No Poverty

SDG 2: Zero Hunger

SDG 3: Good Health and Well-being

SDG 4: Quality Education

SDG 5: Gender Equality

SDG 6: Clean Water and Sanitation

SDG 7: Affordable and Clean Energy

SDG 8: Decent Work and Economic Growth

SDG 9: Industry, Innovation and Infrastructure

SDG 10: Reduced Inequality

SDG 11: Sustainable Cities and Communities

SDG 12: Responsible Consumption and Production

SDG 13: Climate Action

SDG 14: Life Below Water

SDG 15: Life on Land

SDG 16: Peace, Justice and Strong Institutions

SDG 17: Partnerships for the Goals



Please describe Other

All industries, no limitation

2. Project Details

Company or Institution



SDG Meter

General description of the AI solution

Our web platform allows any user (with or without prior experience of the SDGs) to automatically link a text of their choice to the SDGs it deals with, and to quantify the degree to which the text belongs to each of those SDGs. A user can thus discover to what extent each SDG is covered by their text. The platform also has an informative aspect, providing links to the full description of every SDG. Our design deliberately excludes any form of human intervention from the process of associating SDGs with texts. This saves UN experts a great deal of time and also removes the bias that expert annotation can introduce: because UNEP relies on experts specialized in particular topics, interlinkages with SDGs outside an expert's field can be missed. The SDG Meter is based on BERT (Bidirectional Encoder Representations from Transformers), introduced by Google research teams in 2018. BERT is currently one of the most powerful and efficient models in the field of natural language processing (NLP), and in our various tests it has shown excellent results.
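The scoring step described above can be sketched as follows. This is a minimal illustration, not the SDG Meter's actual code: we assume the BERT encoder ends in one logit per SDG, and that an independent sigmoid turns each logit into the "degree of belonging" percentage shown to the user.

```python
import math

# Hypothetical post-processing step: the encoder produces one raw logit per
# SDG; independent sigmoids turn them into per-SDG membership scores.
SDG_LABELS = [f"SDG {i}" for i in range(1, 18)]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def score_sdgs(logits, threshold=0.5):
    """Map 17 raw logits to {label: percentage} for SDGs above the threshold."""
    scores = {label: sigmoid(z) for label, z in zip(SDG_LABELS, logits)}
    return {label: round(100 * s, 1) for label, s in scores.items() if s >= threshold}

# Example: a text strongly about climate (SDG 13) and weakly about energy (SDG 7).
logits = [-4.0] * 17
logits[12] = 3.0   # SDG 13
logits[6] = 0.2    # SDG 7
print(score_sdgs(logits))  # {'SDG 7': 55.0, 'SDG 13': 95.3}
```

Because each sigmoid is independent, several SDGs can score highly at once, which is what lets the tool report more than one goal per text.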



UNEP / Sorbonne University

3. Aspects

Excellence and Scientific Quality: Please detail the improvements made by the nominee or the nominees’ team, or yourself if you are applying for the award, and why they have been a success.

The solutions prior to our SDG Meter are limited in various ways: some detect at most one SDG per text (OECD)(1), and others require human intervention in the classification process (Pathfinder)(2). None of the previous techniques can establish the degree to which an SDG belongs to a text. Another method, named "OSDG"(3), also analyses which SDGs a text deals with; it ranks them by importance category but does not provide a precise quantification. We therefore developed the SDG Meter to allow fully automatic classification: it recognizes all the SDGs a text deals with and quantifies the degree of relationship between each SDG and the text, expressed as a percentage. Our method is based on the BERT algorithm, a deep learning method developed by Google research teams(4) that is known to be effective on many natural language processing tasks; we adapted it to multi-label SDG classification. We have conducted various tests and comparisons of our solution under the supervision of linguistic experts, which demonstrated the effectiveness and accuracy of our approach. The solution is entirely open source: the code and documentation will be public (GitHub and public conferences), and the classification tool will be accessible via a web page currently being deployed (link to the visual driver in the form).
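The multi-label adaptation mentioned above can be illustrated with its training objective. This is our own sketch under a common assumption, not the authors' code: multi-label classification replaces the usual single-class softmax with an independent binary cross-entropy term per SDG, so several labels can be active for one text.

```python
import math

# Illustrative multi-label objective: one binary cross-entropy term per SDG,
# averaged over the 17 labels, so each label is learned independently.
def multilabel_bce(probs, targets):
    """Mean binary cross-entropy over the 17 SDG labels."""
    eps = 1e-12  # numerical guard against log(0)
    terms = [
        -(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))
        for p, t in zip(probs, targets)
    ]
    return sum(terms) / len(terms)

# A text annotated with SDG 13 only; the model is confident and mostly correct.
targets = [0.0] * 17
targets[12] = 1.0
probs = [0.05] * 17
probs[12] = 0.9
loss = multilabel_bce(probs, targets)
```

A softmax would force the 17 scores to sum to one and thus penalize a text that genuinely covers several SDGs; the per-label formulation avoids that.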


Scaling of impact to SDGs: Please detail how many citizens/communities and/or researchers/businesses this has had or can have a positive impact on, including particular groups where applicable and to what extent.

Our contribution has several impacts:
1) This work has been done jointly with UNEP. The solution will help UNEP experts in their task of classifying and linking all submitted texts (public policies, projects, annual reports, etc.) to the SDGs. Manual classification is very time-consuming and can also carry bias, as the annotation depends on the point of view of each expert. Our solution automates this process and bases the annotation of all texts on a single central approach, which reduces this bias. Experts can then spend their time on more rewarding tasks than manual text classification.
2) The second benefit concerns a wide range of users from various domains, such as regional and international organizations, national governments, NGOs, and research institutions. The people or entities using our platform can bring their projects closer to the SDGs, get to know the SDGs, and better orient their projects according to the SDGs and their targets. Our tool can also serve as an educational tool to familiarize a wide audience with the SDGs.
In summary, our method shows the ability of artificial intelligence to facilitate text processing, a notoriously complex and tedious task, and to improve the consistency of classification by eliminating most of the biases a manual process introduces.
Making the platform widely available would allow any project with an interest in the environment to be brought closer to the SDGs, potentially multiplying the ideas contributing to their success.
To gather feedback from all users, our web platform offers a survey and a contact form, so that we can follow end-user satisfaction and evaluate our solution on a large number of real tests.
By providing our code and documentation in a GitHub repository, we encourage contributions that improve the features and efficiency of the tool.

Scaling of AI solution: Please detail what proof of concept or implementations you can show now in terms of its efficacy, how the solution can be scaled to provide a global impact, and how realistic that scaling is.

For the test part, we asked expert linguists at UNEP to choose different texts and annotate the SDGs each text deals with (in order of importance). These experts proceeded in two different ways:
– Annotation by themselves
– Annotation with the help of the linguistic tool "Lexico", which establishes various statistics about the texts (a non-automatic tool: expert intervention is still required to choose the SDGs)
We also selected different texts from the IISD-SDG website, which provides news-type texts annotated by experts (with order of importance).
We ran our SDG Meter on all of these texts and obtained excellent results, almost identical to the expert annotations.
On 400 texts, our method reaches an accuracy of 98%, i.e. 98 of every 100 texts correctly classified.
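The reported accuracy can be computed with a simple exact-match metric. This helper is our assumption about how "correctly classified" is counted (the predicted SDG set must equal the expert set exactly), not the authors' evaluation script.

```python
# Hypothetical evaluation helper: a text counts as correctly classified only
# when the predicted set of SDGs equals the expert-annotated set.
def exact_match_accuracy(predicted_sets, expert_sets):
    correct = sum(1 for p, e in zip(predicted_sets, expert_sets) if p == e)
    return correct / len(expert_sets)

# Toy check on 4 texts, 3 of which match the expert annotation exactly.
predicted = [{13}, {1, 10}, {7}, {2, 3}]
expert = [{13}, {1, 10}, {7, 8}, {2, 3}]
print(exact_match_accuracy(predicted, expert))  # 0.75
```

Exact matching is the strictest choice for multi-label data; looser per-label metrics (precision/recall per SDG) would give a more detailed picture.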
Our website (under development, available within a month) will first be made available to UNEP experts, who will evaluate in real time the capacity of our method to meet their daily needs. The second phase will make the website available to the general public, adding among other things traffic management and optimization of the processing time per text (currently under 15 seconds); we estimate a maximum of one month for this extension work.
Communication has already been launched about the recent technologies developed at UNEP in the framework of the AI for Planet program, as well as to UNEP partners who are awaiting the public availability of our method.
We have also received requests to replicate our method to classify texts not by SDG but, for example, by impact category (related to SDG 12).

Our method, based on the very powerful BERT algorithm, is however limited to a maximum of 512 tokens per input (Google research teams are working on extending this limit with a model named "SMITH" that handles up to 2,048 tokens, and public research is under way to remove length limits in text-processing algorithms altogether), which restricts analysis to roughly one-page documents (a short text or summary). We are therefore awaiting future advances on this limitation, which is an open challenge in AI research. Our method nonetheless remains very useful: for example, submissions of public policy projects to UNEP always include a mandatory summary, and that part has the required format for running our SDG Meter.
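A common workaround for the 512-token limit, sketched below under our own assumptions (this is not the deployed pipeline), is to split a long text into overlapping token windows, score each window, and keep the maximum score per SDG across windows.

```python
# Sliding-window chunking: split a long token sequence into overlapping
# windows that each fit BERT's 512-token budget.
def chunk_tokens(tokens, window=512, stride=256):
    """Yield overlapping token windows of at most `window` tokens."""
    if len(tokens) <= window:
        yield tokens
        return
    start = 0
    while start < len(tokens):
        yield tokens[start:start + window]
        if start + window >= len(tokens):
            break
        start += stride

chunks = list(chunk_tokens(list(range(1200)), window=512, stride=256))
print([len(c) for c in chunks])  # [512, 512, 512, 432]
```

The overlap (stride smaller than the window) reduces the risk of an SDG-relevant passage being cut in half at a window boundary.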

Ethical aspect: Please detail the way the solution addresses any of the main ethical aspects, including trustworthiness, bias, gender issues, etc.

Our method easily meets ethical and GDPR requirements because the website collects no information: no submitted text is kept, and no user information is stored except in the contact form, where a user chooses to submit feedback.


International Research Centre in Artificial Intelligence
under the auspices of UNESCO (IRCAI)

Jožef Stefan Institute
Jamova cesta 39
SI-1000 Ljubljana

