MIM5 - Transparency
OASC MIM5: Fair and Transparent Artificial Intelligence
Status
Supporting automated decision-making using algorithmic systems that are trusted, fair and transparent.
Background
Governments are increasingly seeking to capture the opportunities offered by automated decision-making algorithmic systems, to improve their services. However, government agencies and the general public have justified concerns over bias, privacy, accountability, and transparency of such automated decision-making processes. New examples continue to emerge of potential negative consequences from the inappropriate use of ('black box') algorithms.
Here we define "Algorithmic System" as: "software that automatically makes predictions, makes decisions and/or gives advice by using data analysis, statistics and/or self-learning logic."
An automated decision-making algorithmic system does not necessarily require any form of self-learning logic (such as machine learning). In actual practice, software is often used that does not contain any self-learning logic, but the application of which may have great and sometimes unknown or unintended impact on citizens.
This is an increasingly important issue as cities and communities use ever more complex modelling to support their decision-making and move towards the implementation of local digital twins.
To provide citizens and governments with a proper process to mitigate risk, Amsterdam city council, the original champion of MIM5, is working with other cities to develop a European standard for procurement rules for government agencies to use when procuring algorithmic systems to support automated decision-making. Alongside this, guidance is being developed regarding the actions that government agencies themselves need to take to make sure that automated decision-making is trusted, fair and transparent. This will include providing channels for citizens to query the decision-making process and involving citizens in co-designing the algorithmic systems. Most importantly there is the need to ensure that the data used by those systems is accurate and appropriate.
In addition, there are some useful checklists that have been developed elsewhere, and the UK has developed a Framework on Fair AI for the Public Sector and an Algorithmic Transparency standard (links available in References).
MIM5 is developed based on the Y.MIM structure mandated by the ITU: it is scoped through the process of identifying the objective, the capabilities, the requirements, and the mechanism based on which to provide interoperability guidance and support the subsequent conformance and compliance testing. The Y.MIM process is designed to ensure that each MIM is developed as a minimal but sufficient set of capabilities needed to achieve a given objective, the set of functional and quality requirements needed to achieve those capabilities, and one or more mechanisms to satisfy those requirements, together with guidance to enable a useful level of interoperability between those mechanisms.
As a starting point, in general, there are two main interoperability angles that should be embedded in the MIM5 focus.
From a more technical perspective, the first interoperability angle is to identify the minimal yet sufficient interoperability approach to ensure that the process of training AI algorithms is transparent and fair.
From a more user/social perspective, the second angle focuses more on the minimal but sufficient approach to ensure that AI algorithms can be understood more transparently and fairly by the public and users.
Given the growing discussion on AI-related standardisation, a third angle may also be worth exploring: testing and verification standards/mechanisms.
Objectives
The Objectives section in the Y.MIM structure sets the scope for a MIM by describing at a high level what it will enable a city or community to achieve; in this case, the MIM in question is MIM5.
The objective of MIM5 is to identify a set of minimal, but sufficient, mechanisms to help cities and municipalities have the technical capabilities to verify that the algorithmic systems offered by providers to make decisions about the services provided to citizens are fair, trustworthy and transparent.
This includes the models used, the datasets on which the algorithms are trained, and the data that the models use to make decisions.
The current starting point for assessing the fairness, trustworthiness and transparency of those algorithms will be the AI procurement clauses originally developed by Amsterdam.
Capabilities
Based on the Y.MIM structure, the capabilities section provides a short description of a minimal but sufficient set of business requirements needed to enable the objective to be achieved to a good enough extent. These will normally be listed as bullet points or as a set of short paragraphs, each pointing to a particular capability. This section also helps make the case to key decision and policy makers.
In order to match the procurement norm being developed, the following is the set of minimal requirements from Amsterdam for suppliers of algorithmic systems, to ensure that these systems are fair, trustworthy and transparent. Below is the working version:
PT01
Procedural Transparency
Full disclosure of the type of choices made, parties involved, risks and mitigation actions in the process of creating an algorithmic model.
TT01
Technical Transparency
Full disclosure of the source code and model to allow the buyer to explain the model to citizens or other stakeholders.
TT02
Technical Transparency
Access to the learnings of the model, ideally structured using MIM2, to prevent vendor lock-in.
TT03
Technical Transparency
Clarity about the process by which an algorithmic system makes decisions in an overall system, i.e. the optimisation goals and outcomes of an algorithm.
TE01
Technical Explainability
Ability to explain on an individual level how a model creates certain outcomes.
TE02
Technical Explainability
Ability to specify any restrictions on the audiences to whom the information may be disclosed: public servants, other experts, etc.
SP01
Security and Privacy
Ability to verify the robustness and security of the algorithmic system and its compliance with privacy legislation.
FR01
Fairness
Ensuring that the algorithmic system does not systematically disadvantage, show bias against, or even discriminate against, different social groups and demographics.
CX01
Context
The assessment of fairness depends on facts, events, and goals, and therefore has to be understood as situation- or task-specific and necessarily addressed within the scope of practice. For instance, there may be an explicit goal to address a historic imbalance, where positive discrimination is considered appropriate; here the aspect of “fairness” needs to be seen in this wider context.
AC01
Accountability
Accountability for the supplier to create algorithms that respect human digital rights, and that are compliant with national and local anti-discrimination laws and regulations.
AC02
Accountability
Agencies should not procure algorithms that are shielded from an independent validation and public review because of trade-secret or confidentiality claims.
DE01
Upgradability
AI Models used should be upgradable and maintainable.
DE02
Continuity & Upgrades
AI Model owners should have the capability to maintain and upgrade the AI Models when required. The process of upgrading and releasing new versions should be transparent and reproducible.
It should be noted that these capabilities should be applied differently to different systems depending on the nature, context and goals of the algorithmic system.
Technically, these capabilities can be translated into a metadata API that every vendor supplying high-impact algorithms to cities would provide, and which buyers could include in their requirements when procuring.
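As an illustration of what such a vendor metadata API might return, the sketch below models a machine-readable record whose fields map loosely onto the capability IDs above. All field names, values and the example endpoint content are assumptions for illustration only; they are not part of MIM5 or of Amsterdam's procurement clauses.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AlgorithmMetadata:
    """Illustrative vendor-supplied metadata record.

    Field names are assumptions, mapping loosely onto MIM5 capability IDs.
    """
    system_name: str
    version: str
    # PT01: summary of (or link to) the choices, parties and risks
    # involved in creating the model
    development_process: str
    # TT01: how the buyer can access source code and model
    source_code_access: str            # e.g. "open-source", "escrow", "on-request"
    # TT02: datasets used to train/develop the model
    training_datasets: list = field(default_factory=list)
    # TT03: optimisation goals of the algorithm
    optimisation_goals: list = field(default_factory=list)
    # TE01: method used to explain individual outcomes
    explanation_method: str = "none"
    # TE02: who may receive the explanation
    disclosure_audience: str = "public"
    # SP01: privacy legislation the system complies with
    privacy_compliance: list = field(default_factory=list)
    # FR01/AC01: fairness assessments carried out
    fairness_assessments: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the record, e.g. as the body of a metadata API response."""
        return json.dumps(asdict(self), indent=2)

# Hypothetical example record for a city service:
meta = AlgorithmMetadata(
    system_name="parking-permit-triage",
    version="1.4.0",
    development_process="https://example.org/model-documentation",
    source_code_access="on-request",
    training_datasets=["permit applications 2019-2023 (anonymised)"],
    optimisation_goals=["minimise manual review queue"],
    explanation_method="feature-attribution report per decision",
    privacy_compliance=["GDPR"],
    fairness_assessments=["demographic parity audit, 2024-03"],
)
print(meta.to_json())
```

A real metadata API would of course need an agreed schema and vocabulary; the point of the sketch is only that each capability can be expressed as a concrete, machine-readable field a buyer can require at procurement time.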
Requirements
The Requirements section identifies the functional and quality requirements needed to achieve the capabilities described in the section above. Below is a working draft, based on previous MIM5 working group meetings, that requires further review:
PT01
PT01-R1
Use a machine-readable, standard, formal and clear language to describe the whole process of creating the algorithmic model (this also includes visual descriptions, e.g. UML)
PT01
PT01-R2
The supplier should maintain comprehensive documentation of the algorithm's development and decisions.
TT01
TT01-R1
Use a standard, formal and clear language to describe the algorithmic model to different types of stakeholders (at different levels of complexity)
TT01
TT01-R2
The datasets used to train the model should be transparent and should not violate any copyright
TT02
TT02-R1
Use of open standards and publicly available data models/ontologies for the training datasets
TT02
TT02-R2
Prefer the use of publicly available and validated datasets to train/develop the model
TT02
TT02-R3
Allow access to the datasets used to train/develop the model, or at least to an extensive summary of them
TT03
TT03-R1
Use a machine-readable, standard, formal and clear language to describe how the algorithmic model makes decisions and produces outcomes
TE01
TE01-R1
Provide a level of description appropriate to the complexity and the context/domain of the algorithm's application (e.g. managing sensitive information requires a more detailed explanation)
TE02
TE02-R1
Implement mechanisms to obtain explicit consent for sharing classified information.
TE02
TE02-R2
Maintain transparency in how classified information is handled and shared.
SP01
SP01-R1
Compliance with GDPR and national privacy regulations
SP01
SP01-R2
Adoption of standard security procedures
SP01
SP01-R3
Provision of security certifications and external audits
SP01
SP01-R4
The supplier must implement continuous monitoring of the AI algorithm's performance and security
FR01
FR01-R1
The algorithmic system should use publicly available and validated datasets to train/develop the model
FR01
FR01-R2
The algorithmic system should be validated against specific test cases created to check for bias and discrimination, and more generally to validate the outcomes of the algorithm
CX01
CX01-R1
The algorithmic system must be contextually aware and capable of recognizing the specific facts, events, and goals relevant to the task.
CX01
CX01-R2
The algorithmic system should support customizable fairness metrics that can be tailored to the specific situation or task.
CX01
CX01-R3
The algorithmic system should adhere to a set of ethical guidelines that prioritize fairness and equitable outcomes.
AC01
AC01-R1
The algorithmic system must comply with all relevant national and local anti-discrimination laws and regulations.
AC01
AC01-R2
The supplier must conduct regular fairness assessments of the AI algorithm.
AC01
AC01-R3
The supplier must follow ethical data collection and usage practices
AC01
AC01-R4
The supplier should provide regular reports on the AI algorithm's compliance with anti-discrimination laws
AC02
AC02-R1
The algorithmic system's information (including the model description, training datasets, etc.) has to be publicly available so that it can be validated by any independent entity
DE01
DE01-R1
The AI Model should be upgradable and maintainable by any community capable of doing so; this capability should not be limited to the owner of the AI Model. The training and release methodology of the AI Model should be transparent.
DE02
DE02-R1
The AI Model owner should have sufficient capabilities to upgrade and maintain the AI Model
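FR01-R2 and CX01-R2 above call for validating the system against bias test cases, using fairness metrics that can be tailored to the situation. As a minimal sketch of what such a test case might look like, the example below checks demographic parity of a system's positive-outcome rate across groups; the metric choice, the 10% threshold and the decision data are illustrative assumptions, not requirements of MIM5.

```python
def positive_rate(outcomes):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

def passes_parity_test(outcomes_by_group, max_gap=0.1):
    """True if no group's positive rate differs from another's by more
    than max_gap. The max_gap threshold is the 'customisable fairness
    metric' parameter a buyer could set per context (CX01-R2)."""
    return demographic_parity_gap(outcomes_by_group) <= max_gap

# Hypothetical decisions of an algorithmic system, split by demographic group:
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],  # 70% positive outcomes
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 40% positive outcomes
}
print(round(demographic_parity_gap(decisions), 2))  # 0.3
print(passes_parity_test(decisions))                # False: gap exceeds 0.1
```

In practice a supplier's fairness assessment would use richer metrics (e.g. error-rate balance) and real protected attributes, but the shape of the test case is the same: defined inputs per group, a chosen metric, and a context-specific threshold agreed at procurement.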
Mechanisms (to be identified)
This section is to identify technical solutions that address the set of requirements: specifically, how each of one or more alternative sets of tried and tested technical solutions can deliver the functional and quality requirements covered in MIM5. This is a core focus of MIM5 in 2024.
Interoperability Guidance (to be identified)
This section describes options to support interoperability between the different mechanisms identified.
MIM5 2024 Focus
MIM5 in 2024 aims to follow up on the progress made in 2023 and link it to real use cases within the ongoing projects for further development. Specifically, through engagement with cities and communities, the work will focus on further scoping the existing identified objectives, capabilities and requirements with the needs of cities and communities. Based on these engagements and the feedback gathered, the aim is to progress with the identification of specific sets of mechanisms and to start work on the interoperability guidelines.