AI Standards Hub Collaborates with OECD on AI Standards
The AI Standards Hub and the OECD are collaborating to enhance the OECD Catalogue of Tools and Metrics for Trustworthy AI by incorporating AI-specific standards from the AI Standards Hub’s Standards Database. The partnership aims to enrich the OECD Catalogue as new standards become available, fostering international cooperation by establishing common frameworks for responsible AI governance. Standards development organisations such as the International Organization for Standardization (ISO) play a crucial role in setting global best practices for AI governance.
Standards are essential to the AI governance ecosystem, serving as benchmarks that support the implementation of AI governance regimes across jurisdictions. For instance, CEN-CENELEC is developing standards to help implement the EU AI Act, which will be pivotal for organizations seeking to comply with the Act’s requirements. Similar trends are emerging in other jurisdictions, underscoring the role of standards as AI governance tools.
The AI Standards Hub provides a comprehensive overview of AI standards, addressing various parts of the AI development lifecycle, including foundational standards, process guidelines, technical measurement standards, and interfacing architectures. The Hub’s ‘Standards at a Glance’ offers insights into how these standards are developed and utilized.
The AI Standards Hub will contribute published standards focused on AI technologies to the OECD Catalogue; standards still under development will be excluded. The Catalogue will be updated continuously as new standards are published, ensuring users can access the most current and relevant information. The AI Standards Hub, a partnership between The Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory, offers a range of resources, including an advanced search and filter function for its Standards Database.