On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure and trustworthy AI. This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”).* Organizations can choose to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework on a voluntary basis. This reporting framework will allow participating organizations that comply with the HAIP Code of Conduct to showcase the efforts they have made towards ensuring responsible AI practices – in a way that is standardized and comparable with the reports of other companies.
Organizations that choose to report under the HAIP reporting framework complete a questionnaire that contains the following seven sections:
- Risk identification and evaluation – includes questions regarding, among others, how the organization classifies risk, identifies and evaluates risks, and conducts testing.
- Risk management and information security – includes questions regarding, among others, how the organization promotes data quality, protects intellectual property and privacy, and implements AI-specific information security practices.
- Transparency reporting on advanced AI systems – includes questions regarding, among others, reports, technical documentation, and transparency practices.
- Organizational governance, incident management, and transparency – includes questions regarding, among others, organizational governance, staff training, and AI incident response processes.
- Content authentication & provenance mechanisms – includes questions regarding mechanisms to inform users that they are interacting with an AI system, and the organization’s use of mechanisms such as labelling or watermarking to enable users to identify AI-generated content.
- Research & investment to advance AI safety & mitigate societal risks – includes questions regarding, among others, how the organization participates in projects, collaborations and investments regarding research on various facets of AI, such as AI safety, security, trustworthiness, risk mitigation tools, and environmental risks.
- Advancing human and global interests – includes questions regarding, among others, how the organization seeks to support digital literacy and human-centric AI, and to drive positive change through AI.
Organizations that are developing advanced AI systems are invited to submit their first reports by April 15, 2025. Once submitted, reports will be publicly available on a dedicated OECD website, and organizations are invited to update their reports annually. The OECD Secretariat will verify that all questions are answered and that supporting materials (e.g., links) are accessible, but the Secretariat will not assess or verify the substance of submissions. Organizations that commit to the HAIP Code of Conduct and complete the reporting framework will be listed under the HAIP Brand and mentioned on the OECD.AI webpage. However, listing under the HAIP Brand will not constitute an official endorsement of the organization’s practices or the AI systems it develops or uses, nor does it constitute a certification of compliance with the HAIP Code of Conduct.
A number of companies have already pledged to complete the inaugural reports. Once they are available in April 2025, these inaugural reports are likely to provide insights into how those companies are operationalizing AI governance and risk management systems.
Neither the HAIP Code of Conduct nor the reporting framework is grounded in binding laws like the EU’s AI Act or U.S. state AI legislation – but there are thematic similarities between what such laws may require (in terms of transparency, risk management, etc.) and the voluntary commitments in the HAIP Code of Conduct. This is because the OECD AI Principles (adopted in 2019 and updated in May 2024) have informed both the HAIP Code of Conduct and many AI-related legislative proposals developed in the last few years – including the EU AI Act and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (discussed in our previous blog post here).
*The HAIP Code of Conduct was drafted by the G7 nations with input from the OECD and stakeholders from the private sector, academia and civil society. The HAIP Code of Conduct is intended to be endorsed by organizations across all stages of the AI lifecycle, including during the design, development, and deployment of advanced AI systems.
The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation, or other tech regulatory matters, we are happy to assist with any queries.