SDG 16: Peace, Justice and Strong Institutions
Civil Servants/Public Officials
2. Project Details
Company or Institution
General description of the AI solution
Founded in 2017 by MIT and Cambridge alum Lyric Jain, Logically combines advanced AI with one of the world’s largest dedicated fact-checking teams to help government bodies uncover and address harmful misinformation and deliberate disinformation. The company’s mission is to enhance civic discourse, protect democratic debate and process, and provide access to trustworthy information.
This year, Logically launched its new threat intelligence platform, Logically Intelligence (LI). Logically Intelligence brings together Logically’s capabilities in at-scale analysis, classification and detection to help its partners monitor the online media landscape for the spread of damaging activity and narratives. Built on cutting-edge AI and secure, scalable cloud infrastructure, it is a powerful tool that can identify problematic content, actors, and activity online that may cause real-world harm. Logically Intelligence also hosts a suite of countermeasures to tackle problematic content, including priority flags and takedown notices to platforms and deep-dive investigative reports by its OSINT team into high-priority issues identified by its systems. It is one of the few platforms to integrate both analytical capabilities and countermeasure deployment to tackle mis- and disinformation.
Countries around the world are starting to experience the real harms posed by misinformation, from doubt over election results and low uptake of the Covid vaccine to violent protests. Between January 2020 and February 2021, Logically Intelligence detected 3,425,102 pieces of high-threat content in the US, with 1,748,787 pieces occurring in the September – November election period alone.
It is currently being used by governments and public sector entities across the world to mitigate the impact of mis- and disinformation on democratic processes, national security and public safety, including protecting election integrity and vaccination rollout programs.
Excellence and Scientific Quality: Please detail the improvements made by the nominee or the nominee’s team, or by yourself if you are applying for the award, and why they have been a success.
Logically Intelligence (LI) is a threat intelligence platform that brings together Logically’s capabilities in at-scale analysis, classification and detection to monitor the online media landscape for the spread of damaging activity and narratives. The platform is primarily designed for governments and public sector entities concerned about the impact of mis- and disinformation on democratic processes, national security or public safety e.g. vaccination programs.
LI monitors publicly available, multilingual content from digital and social media channels across the world. Its machine learning pipelines assess source and content credibility and post veracity, and identify propaganda and the mechanisms used to disseminate damaging narratives. It also applies advanced natural language processing (NLP) to detect and analyse clusters of threats and emerging narratives, and can identify which demographics or groups a narrative is targeting.
Together, these capabilities yield powerful insights into disinformation campaigns, their levels of social engagement and their potential for harm.
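As an illustration only, the kind of narrative clustering described above can be sketched as a toy similarity-based grouping of posts. This is a deliberate simplification under invented assumptions (the function names, Jaccard similarity measure and threshold are ours, not Logically’s actual pipeline, which uses advanced NLP at far greater scale):

```python
# Toy sketch of narrative clustering: group posts whose word overlap
# (Jaccard similarity) exceeds a threshold. Hypothetical illustration only,
# not Logically's implementation.

def tokens(text):
    """Lowercased word set with trailing punctuation stripped."""
    return set(w.lower().strip(".,!?:") for w in text.split())

def jaccard(a, b):
    """Jaccard similarity of two token sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_posts(posts, threshold=0.3):
    """Greedy single-pass clustering: a post joins the first cluster
    whose seed post is sufficiently similar, else it starts a new one."""
    clusters = []  # list of (seed_token_set, member_posts)
    for post in posts:
        t = tokens(post)
        for seed, members in clusters:
            if jaccard(t, seed) >= threshold:
                members.append(post)
                break
        else:
            clusters.append((t, [post]))
    return [members for _, members in clusters]

posts = [
    "Vaccine microchips are real, share this now",
    "share this: the vaccine contains microchips",
    "Local election results delayed by weather",
]
for group in cluster_posts(posts):
    print(len(group), group[0])
```

In practice the grouping would be done with learned embeddings rather than word overlap, but the principle is the same: near-duplicate narratives are surfaced as clusters whose size and growth can then be tracked.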
Beyond its monitoring capabilities, LI is distinctive in offering a suite of countermeasures, including priority flags and takedown notices to platforms and deep-dive investigative reports into high-priority issues identified by our systems.
After being tested in beta by a battleground state looking to protect election integrity during the 2020 US Presidential election, LI is now used by governments and organisations across the globe to identify and combat misinformation, including protecting elections and the COVID vaccine rollout. It was recently shortlisted for Artificial Intelligence Solution of the Year at the UK’s National Technology Awards and Best AI Product in Cyber Security at the 2021 CogX awards, as well as being central to Logically winning the Rising Star in Tech award at the CogX awards and Best AI Startup at the AI Breakthrough Awards.
Scaling of impact to SDGs: Please detail how many citizens/communities and/or researchers/businesses this has had or can have a positive impact on, including particular groups where applicable and to what extent.
With so much conflicting information in the public domain, it is important to know how accurate and reliable the information we consume online is, so that we can make informed decisions in our day-to-day lives. This is particularly true during key events such as elections or global health crises. The Covid-19 pandemic has highlighted the real-world harms that misinformation can cause in democratic nations, and we continue to see false information about Covid-19 circulating and threatening citizens’ health, as well as vaccine take-up.
Logically Intelligence provides a scalable way of addressing these issues, supporting the mission encapsulated by Sustainable Development Goal 16. Citizens need access to reliable information in order to make informed decisions, whether during elections or for their own health and safety. Policymakers and government organisations, meanwhile, must base key policy and governance decisions on accurate information, and must be able to stop harmful or misleading content, which can have real-world effects on national security and citizen wellbeing, in its tracks.
Scaling the fight against mis- and disinformation is integral to its success. Identifying harmful and misleading content among millions of online posts before it goes viral and has real-world consequences is key. The use of AI allows for preventative action: once content has gone viral, much of the damage has already been done. LI can identify these kinds of harmful and false narratives in the pre-virality phase, as well as the actors starting them. The combination of AI with human analysts has proven an effective way to provide scale and speed, as well as the nuance required to understand emerging and complex geopolitical situations.
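The idea of pre-virality detection can be sketched in the simplest possible terms: flag posts whose engagement is accelerating quickly while their absolute reach is still small. The toy model below is a hypothetical illustration, not Logically’s method; the thresholds, function names and sample data are invented:

```python
# Toy pre-virality flagging: a post is interesting if it is still below a
# "viral" absolute engagement level but its engagement is growing fast.
# Hypothetical sketch only; not Logically's actual detection logic.

def engagement_velocity(counts):
    """Average growth in engagement per interval, given a time series
    of cumulative share counts for one post."""
    if len(counts) < 2:
        return 0.0
    return (counts[-1] - counts[0]) / (len(counts) - 1)

def flag_pre_viral(posts, velocity_threshold=50, viral_floor=10000):
    """Return post ids that are accelerating (velocity above threshold)
    but not yet viral (latest count below the floor)."""
    flagged = []
    for post_id, counts in posts.items():
        if counts[-1] < viral_floor and \
                engagement_velocity(counts) >= velocity_threshold:
            flagged.append(post_id)
    return flagged

posts = {
    "a": [10, 80, 400],          # small but accelerating -> flag early
    "b": [5, 6, 7],              # slow growth -> ignore
    "c": [20000, 21000, 22000],  # already viral -> handled differently
}
print(flag_pre_viral(posts))
```

A real system would of course use far richer signals (network structure, actor behaviour, content classification), but the sketch captures why acting on growth rate rather than absolute reach enables preventative action.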
Scaling of AI solution: Please detail what proof of concept or implementations you can show now in terms of its efficacy, how the solution can be scaled to provide a global impact, and how realistic that scaling is.
Logically Intelligence has had demonstrable successes supporting governments and our growing client roster is testament to this. For a battleground state during the 2020 US Presidential election, LI ingested millions of pieces of content, identifying and analyzing 40,000 threats to election integrity and public safety. Elsewhere, for one state in India, LI has improved law enforcement’s violation detection capabilities by over 200%.
Scaling is key and Logically is motivated by the benefits that can be achieved from public-private sector collaboration, as disinformation thrives in fractured environments.
As well as engaging with policymakers, regulators and governments, we are working with research institutions globally to improve methodologies for combating misinformation. We are partnering with Delhi’s Indraprastha Institute of Information Technology to explore the provenance, motivations and psychology of online misinformation sharing, to better identify actors and networks, and to more accurately predict what type of content will go viral. Logically is also collaborating with the UK’s University of Sheffield to develop intelligence tools to better detect and combat misleading online multimedia content, which increasingly threatens the integrity of information and is a key application area for growth.
In June 2021, Logically Health launched in India, an initiative using Logically Intelligence to combat harmful health-related misinformation. Logically is also creating an ecosystem of medical experts and running a certification programme for knowledge-sharing purposes. Both are helping us scale our efforts and reach under-served communities – essential for having the greatest positive societal impact.
Logically also runs media literacy workshops; our Covishaala Initiative with NewsMobile is a pan-India training campaign aiming to reach 50 million people. The workshops will ‘train the trainers’ to educate people about identifying and tackling Covid misinformation – delivering an impactful and scalable training course. Our aim is to roll out this programme more widely around the world.
Ethical aspect: Please detail the way the solution addresses any of the main ethical aspects, including trustworthiness, bias, gender issues, etc.
We are committed to ensuring our work is as free from bias and partisan interest as possible. Logically's AI models are trained on data sets that are sampled representatively to encompass diverse patterns. Our human-in-the-loop approach means our analysts regularly train and evaluate data using principles of fair machine learning to safeguard against bias and ensure model prediction quality.
We are a verified signatory of the IFCN’s code of principles, and as such we hold ourselves to the standards set by that organisation. This includes a commitment to non-partisanship and fairness, transparency of sources, and transparency of funding and organisation. Logically’s teams also have high levels of gender and ethnic diversity. The team’s own ethnic diversity and multilingual capabilities, and the translation technologies that Logically Intelligence uses, allow for its deployment across the world: it is currently being deployed by government agencies across three continents.
We support free speech and oppose censorious approaches to individual expression online. However, where misinformation has the ability to cause real-world harm, or where disinformation is being used for profit or political gain, this activity needs to be addressed. We have built industry-leading products and services that can help do this, but we will not provide our services to anyone who does not meet the same high standards of integrity and accountability to which we hold ourselves. We will not enter into any contract that would undermine our commitment to political non-partisanship; that is incompatible with our mission of enhancing civic discourse, protecting democratic debate and process, and providing access to trustworthy information; or where there is a reasonable likelihood the client would use the information that we find to cause undue harm to any person or group, or to threaten to undermine the human rights of any person or group.