On 13 October 2023, members of the G7 released a set of draft guiding principles (“Principles”) for organizations developing advanced AI systems, including generative AI and foundation models.
In parallel, the European Commission launched a stakeholder survey (“Survey”) on the Principles, inviting any interested parties to comment by 20 October 2023. After the Survey is complete, G7 members intend to compile a voluntary code of conduct that will provide guidance for AI developers. The Principles and voluntary code of conduct will complement the legally binding rules that EU co-legislators are currently finalizing under the EU AI Act (for further details on the AI Act, see our blog post here).
The Principles build on the OECD AI Principles adopted in May 2019 (see our blog post here), updating them in response to recent developments in advanced AI systems. They would apply to all participants in the AI value chain, including those responsible for the design, development, deployment, and use of AI systems.
A summary of the Principles is set out below.
The Principles
The G7 has developed eleven draft guiding principles, which are non-exhaustive and subject to stakeholder consultation:
- Safety Measures: Take appropriate measures, including prior to and throughout the deployment and placement on the market of AI systems, to identify and mitigate risks across the AI lifecycle. Such measures should include testing and mitigation techniques, as well as traceability in relation to datasets, processes, and decisions made during system development;
- Vulnerabilities: Identify and mitigate vulnerabilities relating to AI systems, including by facilitating third-party and user discovery and reporting of issues after deployment;
- Transparency Reports: Publicly report meaningful information detailing an AI system’s capabilities, limitations and domains of appropriate and inappropriate use;
- Information Sharing: Share information on safety and security risks with other organizations developing advanced AI systems, as well as with industry, governments, civil society, and academia;
- Risk Management Policies: Develop, implement, and disclose AI governance and risk management policies, grounded in a risk-based approach; this includes disclosing, where appropriate, privacy policies and mitigation measures, including for personal data, user prompts and advanced AI system outputs;
- Security Controls: Invest in and implement robust security controls, which may include securing model weights, algorithms, and servers through operational security measures for information security and cyber/physical access controls;
- Content Authentication And Provenance: Develop and deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content;
- Research: Prioritize research to mitigate societal, safety, and security risks, and invest in effective mitigation measures, including research that advances AI safety and security and addresses key risks;
- Global Benefit: Prioritize the development of advanced AI systems to address the world’s greatest challenges, including to support progress on the United Nations Sustainable Development Goals;
- Technical Standards: Advance development of international technical standards and best practices, including for watermarking; and
- Safeguards: Implement appropriate data input controls and audits, including by committing to apply appropriate safeguards throughout the AI lifecycle, particularly before and during training, to the use of personal data, data protected by intellectual property rights, and other data that could give rise to potentially harmful model capabilities.
The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation or other tech regulatory matters, we are happy to assist.