2021 | Banks | Commercial | Promising | SDG10 | SDG9 | United Kingdom
Zupervise, AI Risk Governance Platform

1. General

Category

SDG 9: Industry, Innovation and Infrastructure

SDG 10: Reduced Inequality

Category

Banks, Commercial

2. Project Details

Company or Institution

Zupervise Limited

Project

Zupervise, AI Risk Governance Platform

General description of the AI solution

Zupervise is a unified risk transparency platform to govern AI in the regulated enterprise.

Organisations have become increasingly reliant on AI for outcomes ranging from improving operational efficiency to managing risk exposure and exceeding client expectations. This subjects them to various regulatory regimes that set out specific guidelines for business processes, model governance and systems of record, with substantial penalties and fines for non-compliance. In addition, privacy directives impose specific requirements on personally identifiable data, which fuels most AI models in consumer-oriented use-cases, creating further security, ethical and legal risks. Increased scrutiny by regulators, and more recently by shareholders, media and society, adds reputational risk. All of this necessitates robust risk governance of AI within the enterprise.

Zupervise offers a single-pane-of-glass dashboard with source, risk and operational data integration capabilities for improving transparency in automation deployments and outcomes. You can now manage risks across multiple layers of AI: models, training data, inputs and outputs, as well as human outcomes (ethics). For each AI risk we monitor multiple signals, including changes in attributes, to forecast a material effect on risk appetite.
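As an illustration of the signal monitoring described above, the following is a minimal sketch, not the Zupervise implementation, of how several monitored signals for a single AI risk might be combined into a rating and compared against a risk appetite threshold; the signal names, weights and threshold are hypothetical.

```python
# Hypothetical sketch (not the Zupervise API): aggregating several monitored
# signals for one AI risk into a rating and checking it against risk appetite.
from dataclasses import dataclass

@dataclass
class RiskSignal:
    name: str        # e.g. "training-data drift", "input attribute change"
    value: float     # normalised signal reading in [0, 1]
    weight: float    # relative importance assigned by the risk practitioner

def risk_rating(signals: list[RiskSignal]) -> float:
    """Weighted average of normalised signals, scaled to a 0-100 rating."""
    total_weight = sum(s.weight for s in signals)
    return 100 * sum(s.value * s.weight for s in signals) / total_weight

signals = [
    RiskSignal("training-data drift", 0.42, weight=0.5),
    RiskSignal("input attribute change", 0.18, weight=0.3),
    RiskSignal("output distribution shift", 0.65, weight=0.2),
]

RISK_APPETITE_THRESHOLD = 35.0  # illustrative appetite limit for this risk
rating = risk_rating(signals)
if rating > RISK_APPETITE_THRESHOLD:
    print(f"Rating {rating:.1f} exceeds appetite - flag for review")
```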

The European Commission has put forward a draft legislative framework for a coordinated approach to the human and ethical implications of AI. With Zupervise, enterprises can finally articulate algorithmic decision provenance to executive stakeholders and regulators on demand. The platform introduces capabilities to delineate accountability and makes it easier to place trust in AI investments with data-driven insights into emerging risks.

Zupervise is designed exclusively for AI governance, built for practitioners who need a simpler, domain-specific platform without complicated rollouts and with quicker time to value.

Website

https://zupervise.com/

Organisation

Zupervise Limited

3. Aspects

Excellence and Scientific Quality: Please detail the improvements made by the nominee or the nominees’ team or yourself if you’re applying for the award, and why they have been a success.

The Zupervise platform consists of curated and pre-identified risks and controls across the most common enterprise AI use-cases. We also integrate with risk register systems, including enterprise GRC platforms. We have built a guided workflow within the platform to identify and mitigate risks in data, processes, algorithms, machine-learning models and human outcomes. With our platform, risk practitioners can build their own AI risk and controls taxonomy, or re-use our artefacts, templates and libraries to develop forward-looking internal controls. Our platform also enables visibility and transparency with an out-of-the-box AI Risk Management charter that helps balance risk appetite against automation experimentation.
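By way of illustration only, here is a minimal sketch of what a practitioner-defined risk and controls taxonomy could look like in code; the risk identifiers, layers and controls are hypothetical and do not reflect Zupervise's internal data model.

```python
# Hypothetical sketch: a minimal AI risk-and-controls taxonomy of the kind a
# practitioner might define, mapping risks at each AI layer to internal controls.
from dataclasses import dataclass, field

@dataclass
class Control:
    control_id: str
    description: str

@dataclass
class Risk:
    risk_id: str
    layer: str          # e.g. "training data", "model", "inputs/outputs", "human outcomes"
    description: str
    controls: list[Control] = field(default_factory=list)

taxonomy = [
    Risk("R-001", "training data", "Historical bias present in labels",
         [Control("C-010", "Pre-deployment bias audit on protected attributes")]),
    Risk("R-002", "model", "Performance degradation after deployment",
         [Control("C-021", "Scheduled retraining and champion/challenger review")]),
    Risk("R-003", "human outcomes", "Unclear accountability for automated decisions",
         [Control("C-030", "Documented decision provenance and human sign-off")]),
]

# Aggregate view, e.g. for a dashboard: how many controls cover each AI layer.
coverage: dict[str, int] = {}
for risk in taxonomy:
    coverage[risk.layer] = coverage.get(risk.layer, 0) + len(risk.controls)
print(coverage)
```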

A simple configuration of the risk alerting engine enables dashboards with predefined AI risk reports. The solution calls out specific risks, such as multiple forms of bias or over-dependence on specific features for decision-making. Users can visualise AI risks across the board and aggregate them by entity, division, operating unit or function. In addition, we are developing a unique strategic early warning system designed for AI risk sensing: identifying the probability of a control breakdown based on increases in the AI risk rating and predicting AI-related loss events ahead of time.
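As a hedged illustration of the early warning idea, the sketch below flags a potential control breakdown when the risk rating for a use-case rises faster than a configured tolerance; the function name, look-back window and threshold are assumptions, not the platform's actual alerting engine.

```python
# Hypothetical sketch of an early-warning rule: alert when an AI risk rating
# trends upward faster than a configured tolerance over a look-back window.
# Names, window size and threshold are illustrative only.

def early_warning(ratings: list[float], window: int = 4, max_increase: float = 10.0) -> bool:
    """Return True if the rating rose by more than `max_increase` within the window."""
    if len(ratings) < window:
        return False
    recent = ratings[-window:]
    return (recent[-1] - recent[0]) > max_increase

history = [22.0, 24.5, 31.0, 38.5]   # periodic risk ratings for one AI use-case
if early_warning(history):
    print("Early warning: rising risk rating - possible control breakdown ahead")
```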

The solution is currently being trialled by multiple enterprises.

Scaling of impact to SDGs: Please detail how many citizens/communities and/or researchers/businesses this has had or can have a positive impact on, including particular groups where applicable and to what extent.

Automation is part of the financial services industry today in many forms: automated fraud detection, robo-advisors, credit risk evaluations, dynamic pricing, claims handling in insurance, anti-money-laundering efforts in banking; the list is endless. At a human level, decisions are increasingly made by algorithms, and the decision outcomes affect ordinary lives. For example, access to credit cards, loans and mortgages is, at a minimum, influenced by machines at statistical scale. Similarly, health and life cover decisions within insurance are now being automated.

AI is no longer the exclusive preserve of large technology giants. With easier avenues to build, buy or partner, AI is already part of the value chain in various industries today, with machine learning emerging as the clear choice for multiple use cases. With the pandemic, stretched operational capacities and cost optimisation mandate less dependency on humans. Automation enabled by artificial intelligence is now almost a boardroom imperative at most enterprises. This urgency destabilises the ability to effectively control and govern newer technology risks. Existing risk control frameworks need significant improvements to be effective.

In addition, AI models developed with data from a pre-Covid timeline are no longer entirely accurate. Such algorithms need to be retrained to take newer data patterns into account, and they present performance risks. Every so often we hear yet another story of an algorithm going awry, from re-calibrated exam results to a controversial use of AI to process visa applications, alongside widely published AI failures in mass surveillance and bias in speech recognition. Black Lives Matter and its campaigns have brought to the fore greater awareness of societal injustices, exacerbated by human biases, which find their way into AI algorithms in systems that influence everything from hiring to welfare benefits and criminal justice outcomes.

The unregulated roll-out of experimental AI poses risks to the achievement of the UN Sustainable Development Goals (SDGs), with particular vulnerability for developing countries. The goal of financial inclusion is threatened by the imperfect and ungoverned design and implementation of AI decision-making software that makes important financial decisions affecting customers. Automated decision-making algorithms have displayed evidence of bias, lack ethical governance, and limit transparency in the basis for their decisions, causing unfair outcomes and amplifying unequal access to finance. [1]

[1] Truby J. Governing Artificial Intelligence to benefit the UN Sustainable Development Goals. Sustainable Development. 2020;28:946–959. https://doi.org/10.1002/sd.2048

Scaling of AI solution: Please detail what proof of concept or implementations you can show now in terms of its efficacy, how the solution can be scaled to provide a global impact and how realistic that scaling is.

At Zupervise, we are solving an algorithmic hygiene problem for enterprise AI frameworks. The enterprise risk function need not necessarily stifle innovation. An effectively balanced AI risk appetite that supports model experimentation can make a huge difference to competitive advantage.

Questions we help AI stakeholders answer span the diverse implications of AI risks (see the sketch after this list):

* Legal – will the AI comply with regulatory policy standards & legislation frameworks?
* Cyber – is the AI model secure from newer & as yet unknown attack vectors?
* Privacy – does the AI process personally identifiable data for automated decisions?
* Third Party – can you effectively vet your technology vendor’s AI deployments?
* Conduct – how is the AI decision outcome interpreted by humans in the loop?
* ESG – is the AI ethical, responsible, accountable & trustworthy?
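For illustration, the dimensions above could be captured as a simple assessment checklist; the structure and wording below are a hypothetical sketch, not a Zupervise artefact.

```python
# Hypothetical sketch: the risk dimensions above captured as a simple
# assessment checklist a stakeholder could work through; names are illustrative.
AI_RISK_CHECKLIST = {
    "Legal": "Does the AI comply with regulatory policy standards and legislation frameworks?",
    "Cyber": "Is the AI model secure from newer and as-yet-unknown attack vectors?",
    "Privacy": "Does the AI process personally identifiable data for automated decisions?",
    "Third Party": "Can the technology vendor's AI deployments be effectively vetted?",
    "Conduct": "How is the AI decision outcome interpreted by humans in the loop?",
    "ESG": "Is the AI ethical, responsible, accountable and trustworthy?",
}

# Record answers per dimension and surface the ones still open.
answers = {"Legal": "yes", "Privacy": "yes"}          # illustrative partial review
open_items = [dim for dim in AI_RISK_CHECKLIST if dim not in answers]
print("Dimensions still to assess:", ", ".join(open_items))
```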

Ethical aspect: Please detail the way the solution addresses any of the main ethical aspects, including trustworthiness, bias, gender issues, etc.

At Zupervise, our mission is to champion governance for enterprise AI technologies. Our vision is of a world where AI is deployed responsibly and with transparency, our prime value driver, embedded into the way we work.

The Zupervise AI Risk Governance platform enables the following:
* Definition of the role of data and model transparency in driving efficiency and scaling enterprise AI;
* AI risk governance essentials, including the framework, taxonomy, model and metrics;
* Out-of-the-box bias audits and regulatory inspections (a minimal sketch of such a check follows this list);
* Structured pre-deployment algorithmic risk assessments and post-hoc impact evaluations;
* Transparency, process governance, responsible delivery and ethical implementation;
* Continuous improvement with AI risk monitoring.
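As one concrete example of what an out-of-the-box bias audit might check, the sketch below computes a demographic parity difference between two groups of automated decisions; the data, group labels and tolerance are illustrative assumptions rather than the platform's actual audit.

```python
# Hypothetical sketch of a pre-deployment bias check: demographic parity
# difference between two groups. Data, groups and tolerance are illustrative.

def selection_rate(decisions: list[int]) -> float:
    """Share of positive (e.g. 'approve') decisions."""
    return sum(decisions) / len(decisions)

group_a = [1, 0, 1, 1, 0, 1, 1, 0]   # model decisions for group A
group_b = [0, 0, 1, 0, 0, 1, 0, 0]   # model decisions for group B

parity_gap = abs(selection_rate(group_a) - selection_rate(group_b))
print(f"Demographic parity difference: {parity_gap:.2f}")
if parity_gap > 0.10:                # illustrative tolerance
    print("Potential bias flagged for review before deployment")
```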
