On February 7, 2025, the OECD launched a voluntary framework for companies to report on their efforts to promote safe, secure and trustworthy AI.  This global reporting framework is intended to monitor and support the application of the International Code of Conduct for Organisations Developing Advanced AI Systems delivered by the 2023 G7 Hiroshima AI Process (“HAIP Code of Conduct”).*  Organizations may choose, on a voluntary basis, to comply with the HAIP Code of Conduct and participate in the HAIP reporting framework.  The reporting framework will allow participating organizations to showcase the efforts they have made towards ensuring responsible AI practices – in a standardized format that is comparable across companies.

Organizations that choose to report under the HAIP reporting framework complete a questionnaire containing the following seven sections:

  1. Risk identification and evaluation – includes questions regarding, among others, how the organization classifies risk, identifies and evaluates risks, and conducts testing.
  2. Risk management and information security – includes questions regarding, among others, how the organization promotes data quality, protects intellectual property and privacy, and implements AI-specific information security practices.
  3. Transparency reporting on advanced AI systems – includes questions regarding, among others, reports, technical documentation, and transparency practices.
  4. Organizational governance, incident management, and transparency – includes questions regarding, among others, organizational governance, staff training, and AI incident response processes.
  5. Content authentication & provenance mechanisms – includes questions regarding mechanisms to inform users that they are interacting with an AI system, and the organization’s use of mechanisms such as labelling or watermarking to enable users to identify AI-generated content.
  6. Research & investment to advance AI safety & mitigate societal risks – includes questions regarding, among others, how the organization participates in projects, collaborations and investments regarding research on various facets of AI, such as AI safety, security, trustworthiness, risk mitigation tools, and environmental risks.
  7. Advancing human and global interests – includes questions regarding, among others, how the organization seeks to support digital literacy and human-centric AI, and to drive positive change through AI.

Organizations that are developing advanced AI systems are invited to submit their first reports by April 15, 2025.  Once submitted, reports will be publicly available on a dedicated OECD website, and organizations are invited to update their report annually.  The OECD Secretariat will verify that all questions are answered and that supporting materials (e.g., links) are accessible, but the Secretariat will not assess or verify the substance of submissions.  Organizations that commit to the HAIP Code of Conduct and complete the reporting framework will be listed under the HAIP Brand and mentioned on the OECD.AI webpage.  However, listing under the HAIP Brand will not constitute an official endorsement of the organization’s practices or the AI system it develops or uses, nor does it constitute a certification of compliance with the HAIP Code of Conduct.

A number of companies have already pledged to complete the inaugural reports.  Once they are available in April 2025, these inaugural reports are likely to provide insights into how those companies are operationalizing AI governance and risk management systems.

Neither the HAIP Code of Conduct nor the reporting framework is grounded in binding laws like the EU’s AI Act or U.S. state AI legislation – but there are thematic similarities between what such laws may require (in terms of transparency, risk management, etc.) and the voluntary commitments in the HAIP Code of Conduct.  This is because the OECD AI Principles (adopted in 2019 and updated in May 2024) have informed both the HAIP Code of Conduct and many AI-related legislative proposals developed in the last few years – including the EU AI Act and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (discussed in our previous blog post here).

*The HAIP Code of Conduct was drafted by the G7 nations with input from the OECD and stakeholders from the private sector, academia and civil society.  The HAIP Code of Conduct is intended to be endorsed by organizations across all stages of the AI lifecycle, including during the design, development, and deployment of advanced AI systems.

The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets.  If you have questions about AI regulation or other tech regulatory matters, we are happy to assist.

Dan Cooper

Daniel Cooper is co-chair of Covington’s Data Privacy and Cyber Security Practice, and advises clients on information technology regulatory and policy issues, particularly data protection, consumer protection, AI, and data security matters. He has over 20 years of experience in the field, representing clients in regulatory proceedings before privacy authorities in Europe and counseling them on their global compliance and government affairs strategies. Dan regularly lectures on the topic, and was instrumental in drafting the privacy standards applied in professional sport.

According to Chambers UK, his “level of expertise is second to none, but it’s also equally paired with a keen understanding of our business and direction.” It was noted that “he is very good at calibrating and helping to gauge risk.”

Dan is qualified to practice law in the United States, the United Kingdom, Ireland and Belgium. He has also been appointed to the advisory and expert boards of privacy NGOs and agencies, such as the IAPP’s European Advisory Board, Privacy International and the European security agency, ENISA.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising leading companies in the technology, life sciences and gaming sectors on regulatory, compliance and policy issues relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.