
Lisa Peets

Lisa Peets is co-chair of the firm's Technology and Communications Regulation Practice Group and a member of the firm's global Management Committee. Lisa divides her time between London and Brussels, and her practice encompasses regulatory compliance and investigations alongside legislative advocacy. For more than two decades, she has worked closely with many of the world's best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, artificial intelligence, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services.

Lisa also supports Covington’s disputes team in litigation involving technology providers.

According to Chambers UK (2024 edition), "Lisa provides an excellent service and familiarity with client needs."

On October 30, 2023, days ahead of government leaders convening in the UK for an international AI Safety Summit, the White House issued an Executive Order (“EO”) outlining an expansive strategy to support the development and deployment of safe and secure AI technologies (for further details on the EO, see our blog here). As readers will be aware, the European Commission released its proposed Regulation Laying Down Harmonized Rules on Artificial Intelligence (the EU “AI Act”) in 2021 (see our blog here). EU lawmakers are currently negotiating changes to the Commission text, with hopes of finalizing the text by the end of this year, although many of its obligations would only begin to apply to regulated entities in 2026 or later.

The EO and the AI Act stand as two important developments shaping the future of global AI governance and regulation. This blog post discusses key similarities and differences between the two.

Continue Reading From Washington to Brussels: A Comparative Look at the Biden Administration’s Executive Order and the EU’s AI Act

On 13 October 2023, members of the G7 released a set of draft guiding principles (“Principles”) for organisations developing advanced AI systems, including generative AI and foundation models.

In parallel, the European Commission launched a stakeholder survey (“Survey”) on the Principles, inviting any interested parties to comment by 20 October 2023.  After the Survey is complete, G7 members intend to compile a voluntary code of conduct that will provide guidance for AI developers.  The Principles and voluntary code of conduct will complement the legally binding rules that EU co-legislators are currently finalizing under the EU AI Act (for further details on the AI Act, see our blog post here).

The Principles build on the existing OECD AI principles published in May 2019 (see our blog post here) in response to recent developments in advanced AI systems.  They would apply to all participants in the AI value chain, including those responsible for the design, development, deployment, and use of AI systems.

Continue Reading G7 Countries Publish Draft Guiding Principles for Advanced AI Development

On September 19, 2023, the UK’s Online Safety Bill (“OSB”) passed the final stages of Parliamentary debate, and will shortly become law. The OSB, which requires online service providers to moderate their services for illegal and harmful content, has been intensely debated since it was first announced in 2020, particularly around the types of online harms within scope and how tech companies should respond to them. The final version is lengthy and complex, and will likely be the subject of continued debate over compliance, enforcement, and whether it succeeds in making the internet safer, while also protecting freedom of expression and privacy.

Continue Reading UK Online Safety Bill Passes Parliament

On 31 August 2023, the UK’s House of Commons Science, Innovation and Technology Committee (“Committee”) published an interim report (“Report”) evaluating the UK Government’s AI governance proposals and examining different approaches to the regulation of AI systems. As readers of this blog will be aware, in March 2023, the UK Government published a White Paper setting out its “pro-innovation approach to AI regulation”, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, see our blog post here).

The Report recommends that the UK Government introduce a “tightly-focused AI Bill” in the next parliamentary session to “position the UK as an AI governance leader”.

Continue Reading UK Parliament Publishes Interim Report on the UK’s AI Governance Proposals

On July 28, 2023, more than five years after the Commission’s original proposal, the EU e-evidence Regulation and Directive were published in the Official Journal of the European Union, signalling the end of the legislative process for this file.

In summary, the Regulation establishes a regime whereby law enforcement authorities (“LEAs”) in one EU Member State will be able to issue legally binding demands for certain data from certain categories of service providers (namely providers of electronic communications services, domain name and IP registration services, and information society services that enable users to communicate or store data) that are established or have a legal representative in a different EU Member State, or to require such service providers to preserve such data.

Continue Reading The EU e-evidence package is published in the Official Journal

In a new strategy published on July 11, the European Commission has identified Web 4.0 and Virtual Worlds—often also referred to as the metaverse—as having the potential to transform the ways in which EU citizens live, work and interact.  The EU’s strategy consists of ten action points addressing four themes drawn from the Digital Decade policy programme and the Commission’s Connectivity package: (1) People and Skills; (2) Business; (3) Government (i.e., public services and projects); and (4) Governance.

The European Commission’s strategy indicates that it is unlikely to propose new regulation in the short to medium term: indeed, European Competition Commissioner Margrethe Vestager has recently warned against jumping to regulation of Virtual Worlds as the “first sort of safety pad.” Instead, the Commission views its framework of current and upcoming digital technology-related legislation (including the GDPR, the Digital Services Act, the Digital Markets Act and the proposed Markets in Crypto-Assets Regulation) as applying to Web 4.0 and Virtual Worlds in a “robust” and “future-oriented” manner.

Continue Reading European Commission Publishes New Strategy on Virtual Worlds

On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection-related concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, as well as transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

Continue Reading UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI

Late yesterday, the EU institutions reached political agreement on the European Data Act (see the European Commission’s press release here and the Council’s press release here).  The proposal for a Data Act was first tabled by the European Commission in February 2022 as a key piece of the European Strategy for Data (see our previous blog post here).  The Data Act will sit alongside the EU’s General Data Protection Regulation (“GDPR”), Data Governance Act, Digital Services Act, and Digital Markets Act.

Continue Reading Political Agreement Reached on the European Data Act

On 31 May 2023, at the close of the fourth meeting of the US-EU Trade & Tech Council (“TTC”), Margrethe Vestager – the European Union’s Executive Vice President, responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency and other requirements for companies developing AI systems. The AI Code of Conduct, once finalized, would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to sign up voluntarily.

Continue Reading EU and US Lawmakers Agree to Draft AI Code of Conduct

On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations between the European Parliament, the Council, and the European Commission. The Council adopted its own amendments in late 2022 (see our blog post here for further details). European lawmakers hope to adopt the final AI Act before the end of 2023, ahead of the European Parliament elections in 2024.

In perhaps the most significant change from the Commission and Council drafts, under MEPs’ proposals, providers of foundation models – a term defined as an AI model that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks” (Article 3(1c)) – would be subject to a series of obligations. For example, providers would be under a duty to “demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development” (Article 28b(2)(a)), as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model (Article 28b(2)(e)).

Continue Reading EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI