UK Government

Opt-out collective actions (i.e., US-style class actions) can currently be brought in the UK only as competition law claims. Periodic proposals to legislate to expand this regime to consumer law claims have so far faltered, but the issue is now back on the Parliamentary agenda. Several members of the House of Lords have indicated their support for expanding the regime to allow consumers and small businesses to bring opt-out collective actions for breaches of consumer law, and potentially on other bases.

If implemented, this expansion would be very significant and would open the door to many new types of class actions in the UK. Tech companies are already prime targets as defendants to competition-related opt-out class actions; expanding the regime to cover breaches of consumer law as well would increase their exposure further.

As there is now limited time for legislation to be passed to effect such changes before the UK Parliament is dissolved ahead of the upcoming general election, this may be an issue for the next Parliament. It will therefore be important to assess what the UK’s main parties say on this – and any manifesto commitments – in the run-up to the election.

On 31 August 2023, the UK House of Commons Science, Innovation and Technology Committee (“Committee”) published an interim report (“Report”) evaluating the UK Government’s AI governance proposals and examining different approaches to the regulation of AI systems. As readers of this blog will be aware, in March 2023, the UK Government published a White Paper setting out its “pro-innovation approach to AI regulation”, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, see our blog post here).

The Report recommends that the UK Government introduce a “tightly-focused AI Bill” in the next parliamentary session to “position the UK as an AI governance leader”.

On 4 May 2023, the UK Competition and Markets Authority (“CMA”) announced it is launching a review into AI foundation models and their potential implications for the UK competition and consumer protection regime. The CMA’s review is part of the UK’s wider approach to AI regulation, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s strategy, including its recent AI White Paper, see our blog post here). The UK Information Commissioner’s Office (“ICO”) has also recently published guidance for businesses on best practices for data protection-compliant AI (see our post here for more details).

On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement” – covered in our blog post here). This announcement comes following the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).

In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.

On 18 January 2021, the UK Parliamentary Office of Science and Technology (“POST”) published its AI and Healthcare Research Briefing about the use of artificial intelligence (“AI”) in the UK healthcare system (the “Briefing”). The Briefing considers the potential impacts of AI on the cost and quality of healthcare, and the challenges posed by the wider adoption of AI, including safety, privacy and health inequalities.

The Briefing summarises the different possible applications of AI in healthcare settings, each of which raises unique considerations for healthcare providers. It notes that AI, developed through machine learning algorithms, is not yet widely used within the NHS, but some AI products are at various stages of trial and evaluation. The areas of healthcare identified by the Briefing as having the potential for AI to be incorporated include (among others): interpretation of medical imaging, planning patients’ treatment, and patient-facing applications such as voice assistants, smartphone apps and wearable devices.

In April 2019, the UK Government published its Online Harms White Paper and launched a Consultation. In February 2020, the Government published its initial response to that Consultation. In its 15 December 2020 full response to the Online Harms White Paper Consultation, the Government outlined its vision for tackling harmful content online through a new regulatory framework, to be set out in a new Online Safety Bill (“OSB”).

This development comes at a time of heightened scrutiny of, and regulatory changes to, digital services and markets. Earlier this month, the UK Competition and Markets Authority published recommendations to the UK Government on the design and implementation of a new regulatory regime for digital markets (see our update here).

The UK Government is keen to ensure that policy initiatives in this sector are coordinated with similar legislation, including those in the US and the EU. The European Commission also published its proposal for a Digital Services Act on 15 December, proposing a somewhat similar system for regulating illegal online content that puts greater responsibilities on technology companies.

Key points of the UK Government’s plans for the OSB are set out below.

On February 10, 2020, the UK Government’s Committee on Standards in Public Life (the “Committee”) published its Report on Artificial Intelligence and Public Standards (the “Report”). The Report examines potential opportunities and hurdles in the deployment of AI in the public sector, including how such deployment may implicate the “Seven Principles of Public Life” applicable to holders of public office, also known as the “Nolan Principles” (available here). It also sets out practical recommendations for the use of AI in public services, which will be of interest to companies supplying AI technologies to the public sector (including the UK National Health Service (“NHS”)) or offering public services directly to UK citizens on behalf of the UK Government. The Report elaborates on the UK Government’s June 2019 Guide to using AI in the public sector (see our previous blog here).