United Kingdom

On 15 January 2024, the UK’s Information Commissioner’s Office (“ICO”) announced the launch of a consultation series (“Consultation”) on how elements of data protection law apply to the development and use of generative AI (“GenAI”). For the purposes of the Consultation, GenAI refers to “AI models that can create new content e.g., text, computer code, audio, music, images, and videos”.

As part of the Consultation, the ICO will publish a series of chapters over the coming months outlining its thinking on how the UK GDPR and Part 2 of the Data Protection Act 2018 apply to the development and use of GenAI. The first chapter, published in tandem with the Consultation’s announcement, covers the lawful basis, under UK data protection law, for web scraping of personal data to train GenAI models. Interested stakeholders are invited to provide feedback to the ICO by 1 March 2024.

Recent proposals to amend the UK’s national security investment screening regime mean that investors may in future be required to make mandatory, suspensory, pre-closing filings to the UK Government when seeking to invest in a broader range of companies developing generative artificial intelligence (AI). The UK Government launched a Call for Evidence in November 2023 seeking input from stakeholders on a number of potential amendments to the operation of the National Security and Investment Act (NSIA) regime, including whether generative AI, which the Government states is not currently directly in scope of the AI filing trigger, should expressly fall within the mandatory filing regime. The Call for Evidence closes on 15 January 2024.

This blog post sets out how the NSIA regime operates, how investments in companies developing AI are currently caught by the NSIA, and the Government’s proposals to refine the scope of AI activities captured by the regime, including the potential express inclusion of generative AI.

On 26 October 2023, the UK’s Online Safety Bill received Royal Assent, becoming the Online Safety Act (“OSA”).  The OSA imposes various obligations on tech companies to prevent the uploading of, and rapidly remove, illegal user content (such as terrorist content, revenge pornography, and child sexual exploitation material) from their services, and to take steps to reduce the risk that users will encounter such material (please see our previous blog post on the Online Safety Bill).

On September 19, 2023, the UK’s Online Safety Bill (“OSB”) passed the final stages of Parliamentary debate and will shortly become law. The OSB, which requires online service providers to moderate their services for illegal and harmful content, has been intensely debated since it was first announced in 2020, particularly around the types of online harms within scope and how tech companies should respond to them. The final version is lengthy and complex, and will likely be the subject of continued debate over compliance, enforcement, and whether it succeeds in making the internet safer while also protecting freedom of expression and privacy.

On 31 August 2023, the UK House of Commons Science, Innovation and Technology Committee (“Committee”) published an interim report (“Report”) evaluating the UK Government’s AI governance proposals and examining different approaches to the regulation of AI systems. As readers of this blog will be aware, in March 2023, the UK Government published a White Paper setting out its “pro-innovation approach to AI regulation”, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, see our blog post here).

The Report recommends that the UK Government introduce a “tightly-focused AI Bill” in the next parliamentary session to “position the UK as an AI governance leader”.

On July 18, 2023, the Association for UK Interactive Entertainment (“UKIE”), the trade body for the UK video games industry, published new industry principles and guidance on paid loot boxes (the “Principles”) for application in the UK.

The Principles were recommended by the Technical Working Group on Loot Boxes (“TWG”), a panel of games companies, platforms, government departments and regulatory bodies convened by the UK Government to mitigate the risk of harm to children from loot boxes in video games.  Each member of the TWG has committed to comply with the Principles going forward.

The UK Government has announced plans to introduce new rules on online advertising for online platforms, intermediaries, and publishers.  The aim is to prevent illegal advertising and to introduce additional protections against harmful online ads for under-18s.  Full details are set out in its recently published response (“Response”) to the Department for Culture, Media & Sport’s 2022 Online Advertising Programme Consultation (“Consultation”). 

The new rules would sit alongside the proposed UK Online Safety Bill (“OSB”), which addresses rules on user-generated content (see our previous blog here).  Since the EU’s Digital Services Act (which starts to apply from February 2024; see our previous blog here) will not apply in the UK following Brexit, the OSB, together with any new rules following this Response, will form the UK’s approach to regulating these matters, distinct from that of the EU.

In a new post on the Inside Class Actions blog, we summarize the UK Supreme Court’s recent judgment on litigation funding agreements, which could have a significant impact on collective proceedings and other funded cases in the UK. To read the post, please click here.

On July 7, 2023, the UK House of Lords’ Communications and Digital Committee (the “Committee”) announced an inquiry into Large Language Models (“LLMs”), a type of generative AI used for a wide range of purposes, including producing text, code, and translations.  According to the Committee, the inquiry was launched to understand “what needs to happen over the next 1–3 years to ensure the UK can respond to the opportunities and risks posed by large language models”.

This inquiry is the first UK Parliament initiative to evaluate the UK Government’s “pro-innovation” approach to AI regulation, which empowers regulators to oversee AI within their respective sectors (as discussed in our blog here).  UK regulators have already begun implementing this approach.  For example, the Information Commissioner’s Office has recently issued guidance on AI and data protection, as well as on generative AI tools that process personal data (see our blogs here and here for more details).

On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada, and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, as well as transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.