2021 | Online Child Safety – SafetyTech | Promising | SDG11 | SDG16 | SDG3 | United Kingdom
Safeguarding children online in realtime

1. General

Category

SDG 3: Good Health and Well-being

SDG 11: Sustainable Cities and Communities

SDG 16: Peace, Justice and Strong Institutions

Category

Other

Please describe Other

Online child safety – safetytech

2. Project Details

Company or Institution

SafeToNet Ltd

Project

Safeguarding children online in realtime

General description of the AI solution

SafeToNet is a UK SafetyTech company that focuses on the development and distribution of realtime solutions to online child safety. The internet and most social media services were not designed with children in mind. The Unseen Teen report from the Data & Society Research Institute suggests that the service design practices of social media providers, in some instances, deliberately ignore the needs of this vulnerable age group. Age gates are not robust enough to prevent underage, and even more vulnerable, children from using these services.

SafeToNet’s on-device AI is designed to be platform agnostic and device independent. It provides a realtime safeguarding layer that can be used on iOS, Android, macOS and Windows devices. It allows children to go online and benefit from all that the internet and associated public online spaces offer, while being able to exercise their digital rights as outlined in UN General Comment No. 25.

For legal and technical reasons, SafeToNet’s AI operates entirely on the device and within the technical constraints these environments provide, such as memory usage, storage requirements and battery life.

SafeToNet’s AI is designed to address a number of online harms in realtime: text-based conversations that lead to cyberbullying, sexting and sextortion, and “dark thoughts” – self-harm and suicide ideation. It comprises an AI-based keyboard that intervenes in realtime, nudges the child’s behaviour, and provides realtime advice and guidance so that they make safer digital decisions. In addition, the AI (SafeToWatch) reacts in realtime to what the device’s camera sees; if it detects child nudity, it redacts the image in realtime and renders it useless (a minimal sketch of this flow follows).
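
SafeToNet’s on-device pipeline is proprietary, so the following is only a minimal sketch of the detect-and-redact flow described above; the classifier stub, function names and threshold are assumptions, not the actual implementation:

```python
# Minimal sketch of a realtime detect-and-redact flow (illustrative only;
# score_frame is a stand-in for an on-device nudity classifier).
import cv2
import numpy as np

RISK_THRESHOLD = 0.9  # assumed confidence cut-off, not a published value

def score_frame(frame: np.ndarray) -> float:
    """Placeholder: a real deployment would run on-device model inference here."""
    return 0.0

def redact_if_risky(frame: np.ndarray) -> np.ndarray:
    """Blur the frame beyond recovery when the classifier flags it."""
    if score_frame(frame) >= RISK_THRESHOLD:
        return cv2.GaussianBlur(frame, (151, 151), 0)  # render the image useless
    return frame
```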

SafeToNet’s AI-based safetytech provides social media companies with tools to help them deliver on their Duty of Care as outlined in the UK’s Online Safety Bill and similar legislation from around the world.

Website

https://www.safetonet.com/

Organisation

SafeToNet

3. Aspects

Excellence and Scientific Quality: Please detail the improvements made by the nominee or the nominee’s team, or yourself if you’re applying for the award, and why they have been a success.

We use an on-device, privacy-preserving AI approach that utilizes a safe continual-learning and knowledge-transfer scheme, enabling the models to keep learning over time. This improves the security of our models, leverages visual and acoustic knowledge to enable interpretability, improves accuracy, and reduces the risk of false positives.
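
SafeToNet’s exact continual-learning scheme is not published; purely as an illustration of the knowledge-transfer idea, a standard teacher-student distillation step might look like the sketch below (the model names and hyperparameters are assumptions):

```python
# Generic knowledge-distillation step in PyTorch (an illustration of
# knowledge transfer, not SafeToNet's actual scheme).
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, x, y, optimizer, T=2.0, alpha=0.5):
    """The student learns from the labels and from the frozen teacher's
    softened predictions; T and alpha are assumed hyperparameters."""
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    hard_loss = F.cross_entropy(student_logits, y)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    loss = alpha * hard_loss + (1 - alpha) * soft_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```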

The use of adversarial learning during training helps safeguard the model against perturbations in the data, as sketched below.
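
As a concrete, generic example of adversarial learning, a single FGSM-style training step in PyTorch could look as follows; this is a sketch assuming a standard classifier, not SafeToNet’s training code:

```python
# One FGSM adversarial-training step: perturb the batch in the direction
# of the loss gradient, then train on clean and perturbed examples together.
import torch
import torch.nn.functional as F

def adversarial_step(model, x, y, optimizer, epsilon=0.01):
    # Craft an adversarially perturbed copy of the batch (FGSM).
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Optimise on the clean and adversarial batches jointly.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```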

Our evaluation process comprises two stages: data conditioning and model evaluation.

In data conditioning, we evaluate the quality of the data being labelled according to a defined schema. We analyse inter-annotator agreement and test for data bias among our annotators. The agreement metrics include the kappa score, Cronbach's alpha, and the level and category distribution per group of annotators. Bias and fairness metrics include disparity and parity constraints, i.e. statistical parity, equality of odds, and equality of opportunity. A sketch of these checks appears below.
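
A minimal sketch of these data-conditioning checks, using standard formulas on toy data (the labels and the annotator grouping are invented for illustration):

```python
# Inter-annotator agreement (Cohen's kappa, Cronbach's alpha) and a simple
# statistical-parity check, on toy data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: n_examples x n_annotators matrix of numeric labels."""
    k = ratings.shape[1]
    annotator_var = ratings.var(axis=0, ddof=1).sum()  # per-annotator variances
    total_var = ratings.sum(axis=1).var(ddof=1)        # variance of summed scores
    return k / (k - 1) * (1 - annotator_var / total_var)

def statistical_parity_gap(labels: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-label rate across groups."""
    rates = [labels[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Two annotators labelling the same six items (toy example).
a1 = np.array([0, 1, 1, 0, 1, 0])
a2 = np.array([0, 1, 0, 0, 1, 0])
print("kappa:     ", cohen_kappa_score(a1, a2))
print("alpha:     ", cronbach_alpha(np.stack([a1, a2], axis=1)))
print("parity gap:", statistical_parity_gap(a1, np.array([0, 0, 0, 1, 1, 1])))
```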
For model evaluation, we use binary and multi-class (categorical) models. Categorical models are measured with the same metrics as binary models, averaged over all categories.

For binary models, we use standard industry accuracy metrics: Area Under the (Receiver Operating Characteristic) Curve (AUC) and the F1 score. AUC measures the predictive ability and accuracy of our model before setting the optimal threshold. F1 assists in evaluating the internal quality of the model. For categorical models, we use the macro-average F1 score, which is the unweighted average of F1 over all categories.

For both model types, we measure precision (the fraction of predicted positives that are truly positive) and recall (the fraction of actual positives that are correctly predicted), as illustrated below.
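
For concreteness, these metrics can be computed with scikit-learn as in the toy example below (the data and the 0.5 decision threshold are illustrative only):

```python
# Binary metrics (AUC, F1, precision, recall) and macro-average F1 for
# a categorical model, on toy data.
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score

y_true  = [0, 0, 1, 1, 1, 0]
y_score = [0.2, 0.6, 0.8, 0.4, 0.9, 0.3]           # model probabilities
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]  # illustrative threshold

print("AUC:      ", roc_auc_score(y_true, y_score))   # threshold-free
print("F1:       ", f1_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))  # correct / predicted positives
print("Recall:   ", recall_score(y_true, y_pred))     # correct / actual positives

# Categorical model: unweighted (macro) average of F1 over all categories.
y_true_c = [0, 1, 2, 2, 1, 0]
y_pred_c = [0, 1, 1, 2, 1, 0]
print("Macro F1: ", f1_score(y_true_c, y_pred_c, average="macro"))
```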

We have implemented and validated the technology on iOS and Android. Our results are validated and tested against different in-house and public datasets. We will publish results obtained with our model on public datasets at the corresponding conferences.

Scaling of impact to SDGs: Please detail how many citizens/communities and/or researchers/businesses this has had or can have a positive impact on, including particular groups where applicable and to what extent.

For UN SDG 16.2, “End abuse, exploitation, trafficking and all forms of violence against and torture of children”, to be fully met, children online MUST be included. To be effective and to deliver on the promise of the UN CRC Optional Protocols and General Comment No. 25, online “safetytech” must be privacy-preserving, proactive and realtime, and therefore on the child’s device. Backhauling to a server for retrospective analysis is neither effective nor timely.

The more children go online (SDG 9.1), the more children are exposed to grooming by anyone, from anywhere, at any time. Grooming for child sexual exploitation (CSE) often results in a child taking and sharing intimate images, a growing phenomenon as reported by the UK’s Internet Watch Foundation.

SafeToNet uses the AI described above in its SafeToWatch product to analyse, in realtime, what the child’s smartphone camera sees, preventing an intimate image of the child from being taken for onward sharing. It also intercedes in realtime with an AI-powered keyboard to disrupt the sexualised text-based conversations that lead to the taking of these images.

The global impact could be enormous. In the UK alone, the NSPCC puts the cost of child sexual abuse at up to £3.2bn per annum. Online CSE is a contributing factor in Adverse Childhood Experiences (ACEs), which represent a UK cost of £42bn, or £1,800 per household, per annum (BMJ). UNICEF says there are 750 million children online globally.

The UK’s Online Safety Bill defines “harm” as “content that has an adverse impact on the physical or mental wellbeing of a child…” SafeToNet’s AI also helps protect children from cyberbullying, self-harm and suicide ideation.

Progress is measured in the number of downloads in the markets in which SafeToNet operates, currently the UK, the US, Germany and 106 other countries; based on the UK figures above, the global added value is immense.

Scaling of AI solution: Please detail what proof of concept or implementations you can show now in terms of its efficacy, how the solution can be scaled to provide a global impact, and how realistic that scaling is.

SafeToNet’s AI is designed to eliminate, in realtime, the production of child sexual abuse material (CSAM). The IWF found 126,000 URLs containing over 93,000 illegal images, mostly of girls aged 11-13, and over 33,000 of children aged between 7 and 10. SafeToNet’s AI contextualises children’s conversations and activities online and prevents the self-production and streaming of intimate images of children.

SafeToWatch and SafeToNet, two products based on SafeToNet’s AI, are engineered to operate on the child’s smartphone, within and despite all the technical constraints that apply. As a result, there are no scalability issues. SafeToNet and SafeToWatch can be pre-installed so that phones are “safe out of the box”, or can be installed by the child’s parent.

SafeToWatch and SafeToNet are on-device realtime safetytech solutions that demonstrate the art of the possible. SafeToWatch is being developed as an SDK so that third-party developers can incorporate it into their social apps and enhance in-app safety by, for example, switching off the camera if it detects an intimate image of a child (a hypothetical integration is sketched below). We believe this deep-rooted safetytech presents growth opportunities for social media service providers the world over, as there is an increasing backlash against devices that are unsafe for children.
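
The SafeToWatch SDK’s interface has not been published; purely as an illustration of the integration pattern, a camera kill-switch callback might be wired up along these lines (every name and the threshold below are invented):

```python
# Hypothetical in-app safety callback: the host app supplies a handler that
# is invoked when the on-device model flags a risky camera frame.
from typing import Callable

class CameraSafetyGuard:
    def __init__(self, on_risk: Callable[[], None], threshold: float = 0.9):
        self.on_risk = on_risk      # app-supplied handler, e.g. disable the camera
        self.threshold = threshold  # assumed risk cut-off

    def on_frame_scored(self, risk_score: float) -> None:
        # In a real SDK the score would come from the on-device model.
        if risk_score >= self.threshold:
            self.on_risk()

guard = CameraSafetyGuard(on_risk=lambda: print("camera switched off"))
guard.on_frame_scored(0.95)  # triggers the handler
```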

The SafeToWatch SDK will encourage “AI for Good”. It is a ready-made solution for social media service providers and app developers. Cohort-level reports can be derived from the system’s back end so that, for example, the number of risks filtered in realtime can be compared and contrasted across different regions of the world. In addition, SafeToNet’s Safety-by-Design AI complements Privacy-by-Design.

SafeToNet is fully compliant with GDPR and with our obligations for Special Category Data (sexual, political and religious). Automated Decision Making (ADM) is a key component of GDPR, and SafeToNet’s AI complies with its requirements. The Investigatory Powers Act, the Computer Misuse Act and the Defamation Act also all apply.

Ethical aspect: Please detail the way the solution addresses any of the main ethical aspects, including trustworthiness, bias, gender issues, etc.

The business model of social media companies is to monetise algorithmically produced content, irrespective of what that content is. In an apparent attempt to maximise their revenues, they seemingly misuse legislation such as Section 230 of the CDA and avoid age verification technologies, so that children much younger than 13, the minimum age set in most of their own terms and conditions, can use their services. They also claim a “legitimate interest” to sidestep “consent” as defined in COPPA. In these respects, social media companies present themselves as unethical. Twitter, for example, is being sued by a 16-year-old boy for monetising intimate images taken of him when he was 13.

We believe SafeToNet’s safetytech AI provides an ethical counterweight to the seemingly unethical business model of most social media service providers around the world, where algorithmically driven content can lead to child suicide through sextortion and cyberbullying.

SafeToNet’s AI has been developed to comply with all relevant laws, especially but not limited to GDPR and our obligations for Special Category Data (sexual, political and religious), Automated Decision Making (ADM), the Investigatory Powers Act, the Computer Misuse Act and the Defamation Act. SafeToNet’s safetytech AI resides entirely on the child’s smartphone; it is robust and self-contained. Backhauling content to a server for analysis is too slow, too unpredictable and, with some content, illegal.

The specific added value of SafeToNet’s AI to social media operators is that it safeguards, in realtime, children using their services from content that has an adverse impact on their physical and psychological wellbeing. It provides them with tools to meet their Duty of Care as defined in the UK’s Online Safety Bill. SafeToNet’s AI works for all children equally, regardless of race or gender. Unlike most current social media services, it is intrinsically designed as Tech for Good.
