UK Online Safety Bill Passes Parliament
On September 19, 2023, the UK’s Online Safety Bill (“OSB”) passed the final stages of Parliamentary debate and will shortly become law. The OSB, which requires online service providers to moderate their services for illegal and harmful content, has been intensely debated since it was first announced in 2020, particularly around the types of online harms within scope and how tech companies should respond to them. The final version is lengthy and complex, and will likely be the subject of continued debate over compliance, enforcement, and whether it succeeds in making the internet safer while also protecting freedom of expression and privacy.
UK Parliament Publishes Interim Report on the UK’s AI Governance Proposals
On 31 August 2023, the UK House of Commons Science, Innovation and Technology Committee (“Committee”) published an interim report (“Report”) evaluating the UK Government’s AI governance proposals and examining different approaches to the regulation of AI systems. As readers of this blog will be aware, in March 2023, the UK Government published a White Paper setting out its “pro-innovation approach to AI regulation”, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, see our blog post here).
The Report recommends that the UK Government introduce a “tightly-focused AI Bill” in the next parliamentary session to “position the UK as an AI governance leader”.
The EU e-evidence package is published in the Official Journal
On July 28, 2023, more than five years after the Commission’s original proposal, the EU e-evidence Regulation and Directive were published in the Official Journal of the European Union, signalling the end of the legislative process for this file.
In summary, the Regulation establishes a regime under which law enforcement authorities (“LEAs”) in one EU Member State will be able to issue legally binding demands for certain data from certain categories of service providers (namely, providers of electronic communications services, domain name and IP registration services, and information society services that enable users to communicate or store data) that are established, or have a legal representative, in a different EU Member State, or to require such service providers to preserve such data.
European Commission Publishes New Strategy on Virtual Worlds
In a new strategy published on July 11, the European Commission has identified Web 4.0 and Virtual Worlds—often also referred to as the metaverse—as having the potential to transform the ways in which EU citizens live, work and interact. The EU’s strategy consists of ten action points addressing four themes drawn from the Digital Decade policy programme and the Commission’s Connectivity package: (1) People and Skills; (2) Business; (3) Government (i.e., public services and projects); and (4) Governance.
The European Commission’s strategy indicates that it is unlikely to propose new regulation in the short to medium term: indeed, European Competition Commissioner Margrethe Vestager has recently warned against jumping to regulation of Virtual Worlds as the “first sort of safety pad.” Instead, the Commission views its framework of current and upcoming digital technology-related legislation (including the GDPR, the Digital Services Act, the Digital Markets Act and the proposed Markets in Crypto-Assets Regulation) as applicable to Web 4.0 and Virtual Worlds in a “robust” and “future-oriented” manner.
UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI
On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, as well as transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.
In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.
Political Agreement Reached on the European Data Act
Late yesterday, the EU institutions reached political agreement on the European Data Act (see the European Commission’s press release here and the Council’s press release here). The proposal for a Data Act was first tabled by the European Commission in February 2022 as a key piece of the European Strategy for Data (see our previous blogpost here). The Data Act will sit alongside the EU’s General Data Protection Regulation (“GDPR”), Data Governance Act, Digital Services Act, and the Digital Markets Act.
EU and US Lawmakers Agree to Draft AI Code of Conduct
On 31 May 2023, at the close of the fourth meeting of the US-EU Trade & Tech Council (“TTC”), Margrethe Vestager – the European Union’s Executive Vice President, responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency and other requirements for companies developing AI systems. The AI Code of Conduct, once finalized, would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to voluntarily sign up.
EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI
On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations between the European Commission, the Parliament, and the Council, which adopted its own amendments in late 2022 (see our blog post here for further details). European lawmakers hope to adopt the final AI Act before the end of 2023, ahead of the European Parliament elections in 2024.
In perhaps the most significant change from the Commission and Council drafts, under MEPs’ proposals, providers of foundation models – a term defined as an AI model that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks” (Article 3(1c)) – would be subject to a series of obligations. For example, providers would be under a duty to “demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development” (Article 28b(2)(a)), as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model (Article 28b(2)(e)).
UK’s Competition and Markets Authority Launches Review into AI Foundation Models
On 4 May 2023, the UK Competition and Markets Authority (“CMA”) announced that it is launching a review into AI foundation models and their potential implications for the UK competition and consumer protection regime. The CMA’s review is part of the UK’s wider approach to AI regulation, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, including its recent AI White Paper, see our blog post here). The UK Information Commissioner’s Office (“ICO”) has also recently published guidance for businesses on best practices for data protection-compliant AI (see our post here for more details).
UK ICO Updates Guidance on Artificial Intelligence and Data Protection
On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here), which it has since complemented with supplementary recommendations on Explaining Decisions Made with AI and, in 2022, an AI and Data Protection risk toolkit. The updated Guidance forms part of the UK’s wider efforts to adopt a “pro-innovation” approach to AI regulation, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).
The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.