On 13 October 2023, members of the G7 released a set of draft guiding principles (“Principles”) for organisations developing advanced AI systems, including generative AI and foundation models.

In parallel, the European Commission launched a stakeholder survey (“Survey”) on the Principles, inviting any interested parties to comment by 20 October 2023.  After the Survey is complete, G7 members intend to compile a voluntary code of conduct that will provide guidance for AI developers.  The Principles and voluntary code of conduct will complement the legally binding rules that EU co-legislators are currently finalizing under the EU AI Act (for further details on the AI Act, see our blog post here).

The Principles build on the existing OECD AI principles published in May 2019 (see our blog post here) in response to recent developments in advanced AI systems.  They would apply to all participants in the AI value chain, including those responsible for the design, development, deployment, and use of AI systems.

The following is a summary of the Principles:

The Principles

The G7 has developed eleven draft guiding principles, which are non-exhaustive and subject to stakeholder consultation:

  • Safety Measures:  Take appropriate measures, including prior to and throughout the deployment and placement on the market of AI systems, to identify and mitigate risks across the AI lifecycle.  Such measures should include testing and mitigation techniques, as well as traceability in relation to datasets, processes, and decisions made during system development;
  • Vulnerabilities:  Identify and mitigate vulnerabilities relating to AI systems, including by facilitating third-party and user discovery and reporting of issues after deployment;
  • Transparency Reports:  Publicly report meaningful information detailing an AI system’s capabilities, limitations and domains of appropriate and inappropriate use;
  • Information Sharing:  Share information on security and safety risks among organizations developing advanced AI systems, including with industry, governments, civil society, and academia;
  • Risk Management Policies:  Develop, implement, and disclose AI governance and risk management policies, grounded in a risk-based approach; this includes disclosing, where appropriate, privacy policies and mitigation measures, including for personal data, user prompts and advanced AI system outputs;
  • Security Controls:  Invest in and implement robust security controls, which may include securing model weights and algorithms, securing servers, operational measures for information security, and cyber and physical access controls;
  • Content Authentication And Provenance:  Develop and deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content;
  • Research:  Prioritize research to mitigate societal, safety and security risks, and prioritize investment in effective mitigation measures, including research that advances AI safety and security and addresses key risks;
  • Technical Standards:  Advance development of international technical standards and best practices, including for watermarking; and
  • Safeguards:  Implement appropriate data input controls and audits, including by committing to apply appropriate safeguards throughout the AI lifecycle, particularly before and throughout training, to the use of personal data, data protected by intellectual property rights, and other data that could give rise to potentially harmful model capabilities.

The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation, or other tech regulatory matters, we are happy to assist with any queries.

Marianna Drake

Marianna Drake counsels leading multinational companies on some of their most complex regulatory, policy and compliance-related issues, including data privacy and AI regulation. She focuses her practice on compliance with UK, EU and global privacy frameworks, and new policy proposals and regulations relating to AI and data. She also advises clients on matters relating to children’s privacy, online safety and consumer protection and product safety laws.

Her practice includes defending organizations in cross-border, contentious investigations and regulatory enforcement in the UK and EU Member States. Marianna also routinely partners with clients on the design of new products and services, drafting and negotiating privacy terms, developing privacy notices and consent forms, and helping clients design governance programs for the development and deployment of AI technologies.

Marianna’s pro bono work includes providing data protection advice to UK-based human rights charities, and supporting a non-profit organization in conducting legal research for strategic litigation.

Marty Hansen

Martin Hansen has over two decades of experience representing some of the world’s leading innovative companies in the internet, IT, e-commerce, and life sciences sectors on a broad range of regulatory, intellectual property, and competition issues. Martin has extensive experience in advising clients on matters arising under EU and U.S. law, UK law, the World Trade Organization agreements, and other trade agreements.

Will Capstick

Will Capstick is a Trainee who attended BPP Law School.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice embraces regulatory compliance and investigations alongside legislative advocacy. In this context, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services. Lisa also routinely advises clients in and outside of the technology sector on trade related matters, including EU trade controls rules.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”