References
References used and produced in the context of MIM5 development.
Amsterdam’s generalized procurement conditions, together with their explanatory guide, the White Paper on Public AI Registers, and the deliverables of the AI HLEG listed under the “Specifications” section, provide an excellent overview of the requirements for fair, trustworthy and transparent automated decision-making using algorithmic systems.
Danish Standards DS/PAS 2500-1:2020, Artificial Intelligence – Part 1: Transparency; DS/PAS 2500-2:2020, Artificial Intelligence – Part 2: Decision-support usage in public administration.
UK Algorithmic Transparency standard: https://www.gov.uk/government/collections/algorithmic-transparency-standard
Algorithm registers
Algorithm registers offer a standardised, searchable and archivable way to document the decisions and assumptions that were made in the process of developing, implementing, managing, and ultimately dismantling algorithmic applications in public organisations.
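To illustrate the kind of documentation such a register could hold, the sketch below models a single hypothetical register entry. The field names (name, purpose, lifecycle_stage, decisions_and_assumptions, contact) are illustrative assumptions only, not a schema prescribed by MIM5 or by any existing register.

```python
import json

# A hypothetical algorithm-register entry. The field names are
# illustrative assumptions, not a prescribed MIM5 schema.
entry = {
    "name": "Parking permit triage",
    "purpose": "Prioritise the processing of parking permit applications",
    # e.g. "development", "in production", "dismantled"
    "lifecycle_stage": "in production",
    "decisions_and_assumptions": [
        "Model retrained quarterly on anonymised application data",
    ],
    "contact": "algorithms@example.city",
}

# Serialising entries to a structured format such as JSON is what keeps
# a register standardised, searchable and archivable.
print(json.dumps(entry, indent=2))
```

A real register would define and version such a schema so that entries from different organisations remain comparable.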
Description language
MIM2
Guidelines and catalog of minimum common data models in different verticals to enable interoperability for applications and systems among different cities.
Data models/vocabularies
ISO/IEC TS 5723:2022
This document provides a definition of trustworthiness for systems and their associated services, along with a selected set of their characteristics.
ITU-T F.748.12
Deep learning software framework evaluation methodology
Evaluation methodology
CAN/CIOSC 101
Ethical design and use of automated decision systems
Design requirements
IEEE P3119
Standard for the Procurement of Artificial Intelligence and Automated Decision Systems
Definitions and process model
ISO/IEC 12792 Information technology — Artificial intelligence — Transparency taxonomy of AI systems
This document defines a taxonomy of information elements to assist AI stakeholders with identifying and addressing the needs for transparency of AI systems. The document describes the semantics of the information elements and their relevance to the various objectives of different AI stakeholders. This document uses a horizontal approach and is applicable to any kind of organization and application involving AI.
Taxonomy
ISO/IEC 6254 Information technology — Artificial intelligence — Explainable artificial intelligence for trustworthiness — Part 1: Overview and requirements
This document describes approaches and methods that can be used to achieve the explainability objectives of stakeholders with regard to ML models and AI systems’ behaviours, outputs, and results. Stakeholders include, but are not limited to, academia, industry, policy makers, and end users. It provides guidance concerning the applicability of the described approaches and methods to the identified objectives throughout the AI system’s life cycle, as defined in ISO/IEC 22989.
ISO/IEC 24028 Information technology — Artificial intelligence — Overview of trustworthiness in Artificial Intelligence
This document surveys topics related to trustworthiness in AI systems, including the following:
— approaches to establish trust in AI systems through transparency, explainability, controllability, etc.;
— engineering pitfalls and typical associated threats and risks to AI systems, along with possible mitigation techniques and methods; and
— approaches to assess and achieve availability, resiliency, reliability, accuracy, safety, security and privacy of AI systems.
The specification of levels of trustworthiness for AI systems is out of the scope of this document.