While the EU Directive on Unfair Terms in Consumer Contracts prohibits certain clauses in standard (i.e., unilaterally imposed) contracts between businesses and consumers, some recently enacted EU laws restrict the use of certain clauses in standard contracts between businesses (“B2B”). The Data Act is the latest example of such a law: it prohibits certain “unfair contractual terms” (“Unfair Clauses”) in standard B2B contracts relating to access to and use of data. As such, it has a potentially very wide scope. Businesses entering into such contracts should therefore ensure that they do not include any clause that could be considered “unfair,” because such a clause would not be binding on the other party to the contract. This blog post focuses specifically on the Data Act’s provision on Unfair Clauses. For more information on the Data Act, see our previous blog post.
Continue Reading EU Data Act Regulates Business-to-Business Contracts Relating to Access and Use of Data
Opt-out collective actions (i.e., US-style class actions) can currently be brought in the UK only as competition law claims. Periodic proposals to legislate to expand this regime to consumer law claims have so far faltered. However, the issue is now back on the Parliamentary agenda. Several members of the House of Lords have indicated their support for expanding the regime to allow consumers and small businesses to bring opt-out collective actions for breaches of consumer law, and potentially on other bases.
If implemented, this expansion would be very significant and would allow for many new types of class actions in the UK. Tech companies are already prime targets as defendants in competition-related opt-out class actions. Expanding the regime to cover breaches of consumer law, in addition to competition law, would further increase their exposure.
As there is now limited time for legislation effecting such changes to be passed before the UK Parliament is dissolved ahead of the upcoming general election, this may be an issue for the next Parliament. It will therefore be important to assess what the UK’s main parties say on this topic, including any manifesto commitments, in the run-up to the election.
Continue Reading UK Opt-Out Class Actions for Non-Competition Claims back on Parliamentary Agenda
Updated August 8, 2023. Originally posted May 1, 2023.
Last week, comment deadlines were announced for a Federal Communications Commission (“FCC”) Order and Notice of Proposed Rulemaking (“NPRM”) that could have significant compliance implications for all holders of international Section 214 authority (i.e., authorization to provide telecommunications services from points in the U.S. to points abroad). The rule changes on which the FCC seeks comment are far-reaching and, if adopted as written, could result in significant future compliance burdens, both for entities holding international Section 214 authority and for parties holding ownership interests in those entities. Comments on these rule changes are due Thursday, August 31, with reply comments due October 2.
Continue Reading Comments Due August 31 on FCC’s Proposal to Step Up Review of Foreign Ownership in Telecom Carriers and Establish Cybersecurity Requirements
On July 7, 2023, the UK House of Lords’ Communications and Digital Committee (the “Committee”) announced an inquiry into Large Language Models (“LLMs”), a type of generative AI used for a wide range of purposes, including producing text, code and translations. According to the Committee, the inquiry was launched to understand “what needs to happen over the next 1–3 years to ensure the UK can respond to the opportunities and risks posed by large language models.”
This inquiry is the first UK Parliament initiative to evaluate the UK Government’s “pro-innovation” approach to AI regulation, which empowers regulators to oversee AI within their respective sectors (as discussed in our blog here). UK regulators have already begun implementing this approach. For example, the Information Commissioner’s Office has recently issued guidance on AI and data protection and on generative AI tools that process personal data (see our blogs here and here for more details).
Continue Reading UK House of Lords Announces Inquiry into Large Language Models
On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection-related concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, transparency, explainability, and security. The group of regulators also calls on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.
In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.
Continue Reading UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI
Late yesterday, the EU institutions reached political agreement on the European Data Act (see the European Commission’s press release here and the Council’s press release here). The proposal for a Data Act was first tabled by the European Commission in February 2022 as a key piece of the European Strategy for Data (see our previous blog post here). The Data Act will sit alongside the EU’s General Data Protection Regulation (“GDPR”), Data Governance Act, Digital Services Act, and Digital Markets Act.
Continue Reading Political Agreement Reached on the European Data Act
On 29 March 2023, the UK Information Commissioner’s Office (“ICO”) published updated Guidance on AI and data protection (the “Guidance”) following “requests from UK industry to clarify requirements for fairness in AI”. AI has been a strategic priority for the ICO for several years. In 2020, the ICO published its first set of guidance on AI (as discussed in our blog post here), which it complemented in 2022 with supplementary recommendations on Explaining Decisions Made with AI and an AI and Data Protection risk toolkit. The updated Guidance forms part of the UK’s wider effort to adopt a “pro-innovation” approach to AI regulation, which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors (for further information on the UK Government’s approach to AI regulation, see our blog post here).
The updated Guidance covers the ICO’s view of best practice for data protection-compliant AI, as well as how the ICO interprets data protection law in the context of AI systems that process personal data. The Guidance has been restructured in line with the UK GDPR’s data protection principles, and features new content, including guidance on fairness, transparency, lawfulness and accountability when using AI systems.
Continue Reading UK ICO Updates Guidance on Artificial Intelligence and Data Protection
On 29 March 2023, the UK Government published a White Paper entitled “A pro-innovation approach to AI regulation” (“White Paper”). The White Paper elaborates on the approach to AI set out by the Government in its 2022 AI Governance and Regulation Policy Statement (“Policy Statement” – covered in our blog post here). This announcement comes following the Government’s commitments, in the Spring Budget 2023, to build an expert taskforce to develop the UK’s capabilities in AI foundation models and produce guidance on the relationship between intellectual property law and generative AI (for more details of these initiatives, see here).
In its White Paper, the UK Government confirms that, unlike the EU, it does not plan to adopt new legislation to regulate AI, nor will it create a new regulator for AI (for further details on the EU’s proposed AI regulation see our blog posts here and here). Instead, the UK would require existing regulators, including the UK Information Commissioner’s Office (“ICO”), to take responsibility for the establishment, promotion, and oversight of responsible AI in their respective sectors. Regulators’ activities would be reinforced by the establishment of new support and oversight functions within central Government. This approach is already beginning to play out in certain regulated areas in the UK. For example, in October 2022, the Bank of England and Financial Conduct Authority (“FCA”) jointly released a Discussion Paper on Artificial Intelligence and Machine Learning considering how AI in financial services should be regulated and, in March 2023, the ICO updated its Guidance on AI and Data Protection.
Continue Reading UK Government Adopts a “Pro-Innovation” Approach to AI Regulation
On April 11, 2023, the Cyberspace Administration of China (“CAC”) released draft Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服务管理办法（征求意见稿）》) (“draft Measures”) (official Chinese version available here) for public consultation. The deadline for submitting comments is May 10, 2023.
Continue Reading China Proposes Draft Measures to Regulate Generative AI
This quarterly update summarizes key legislative and regulatory developments in the first quarter of 2023 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.
Continue Reading U.S. AI, IoT, CAV, and Privacy & Cybersecurity Legislative & Regulatory Update – First Quarter 2023