United Kingdom

On 26 October 2023, the UK’s Online Safety Bill received Royal Assent, becoming the Online Safety Act (“OSA”). The OSA imposes various obligations on tech companies to prevent the uploading of, and rapidly remove, illegal user content—such as terrorist content, revenge pornography, and child sexual exploitation material—from their services, and also to take steps to reduce the risk that users will encounter such material (please see our previous blog post on the Online Safety Bill).

Continue Reading UK Online Safety Bill Receives Royal Assent

On September 19, 2023, the UK’s Online Safety Bill (“OSB”) passed the final stages of Parliamentary debate, and will shortly become law. The OSB, which requires online service providers to moderate their services for illegal and harmful content, has been intensely debated since it was first announced in 2020, particularly around the types of online harms within scope and how tech companies should respond to them. The final version is lengthy and complex, and will likely be the subject of continued debate over compliance, enforcement, and whether it succeeds in making the internet safer, while also protecting freedom of expression and privacy.

Continue Reading UK Online Safety Bill Passes Parliament

On 31 August 2023, the UK’s House of Commons Science, Innovation and Technology Committee (“Committee”) published an interim report (“Report”) evaluating the UK Government’s AI governance proposals and examining different approaches to the regulation of AI systems. As readers of this blog will be aware, in March 2023, the UK Government published a White Paper setting out its “pro-innovation approach to AI regulation”, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, see our blog post here).

The Report recommends that the UK Government introduce a “tightly-focused AI Bill” in the next parliamentary session to “position the UK as an AI governance leader”.

Continue Reading UK Parliament Publishes Interim Report on the UK’s AI Governance Proposals

On July 18, 2023, the Association for UK Interactive Entertainment (“UKIE”), the trade body for the UK video games industry, published new industry principles and guidance surrounding paid loot boxes (the “Principles”) for application in the UK.

The Principles were recommended by the Technical Working Group on Loot Boxes (“TWG”), a panel of games companies, platforms, government departments, and regulatory bodies convened by the UK Government to mitigate the risk of harm to children from loot boxes in video games. Each member of the TWG has committed to comply with the Principles moving forward.

Continue Reading UKIE Publishes Industry Principles on Paid Loot Boxes

The UK Government has announced plans to introduce new rules on online advertising for online platforms, intermediaries, and publishers. The aim is to prevent illegal advertising and to introduce additional protections against harmful online ads for under-18s. Full details are set out in its recently published response (“Response”) to the Department for Digital, Culture, Media & Sport’s 2022 Online Advertising Programme Consultation (“Consultation”).

The new rules would sit alongside the proposed UK Online Safety Bill (“OSB”), which addresses rules on user-generated content (see our previous blog here). Since the EU’s Digital Services Act (which starts to apply from February 2024, see our previous blog here) will not apply in the UK following Brexit, the OSB, together with any new rules following this Response, will form the UK’s approach to regulating these matters, as distinct from the EU’s.

Continue Reading Further Regulation of Illegal Advertising: UK Government Publishes Response to its Online Advertising Programme Consultation

In a new post on the Inside Class Actions blog, we summarize the UK Supreme Court’s recent judgment on litigation funding agreements, which could have a significant impact on collective proceedings and other funded cases in the UK. To read the post, please click here.

Continue Reading UK Supreme Court Hands Down Judgment on Litigation Funding Agreements

On July 7, 2023, the UK House of Lords’ Communications and Digital Committee (the “Committee”) announced an inquiry into Large Language Models (“LLMs”), a type of generative AI used for a wide range of purposes, including producing text, code, and translations. According to the Committee, it has launched the inquiry to understand “what needs to happen over the next 1–3 years to ensure the UK can respond to the opportunities and risks posed by large language models”.

This inquiry is the first UK Parliament initiative to evaluate the UK Government’s “pro-innovation” approach to AI regulation, which empowers regulators to oversee AI within their respective sectors (as discussed in our blog here). UK regulators have already begun implementing this approach. For example, the Information Commissioner’s Office has recently issued guidance on AI and data protection, and on generative AI tools that process personal data (see our blogs here and here for more details).

Continue Reading UK House of Lords Announces Inquiry into Large Language Models

On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada, and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, transparency, explainability, and security. The regulators also call on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

Continue Reading UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI

On 29 March 2023, the UK’s Department for Culture, Media and Sport (“DCMS”) published the draft Media Bill (the “Bill”), which will deliver on a number of legislative reforms set out in the Government’s White Paper entitled “Up Next: the Government’s vision for the broadcasting sector”, published in April 2022.

The Bill forms part of the UK Government’s wider efforts to ensure the regulation of TV and radio evolves in line with changing technology.

The proposed legislative package, which is distilled into six parts, includes significant developments in the regulation of video-on-demand (“VoD”) service providers.

Continue Reading Evolving Regulatory Landscape for VoD Providers: UK Government Publishes Draft Media Bill

On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement”—covered in our blog post here). This announcement follows the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and to produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).

In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation, see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.

Continue Reading UK Government Adopts a “Pro-Innovation” Approach to AI Regulation