
On January 24, 2024, the European Commission (“Commission”) announced that, following the political agreement reached in December 2023 on the EU AI Act (“AI Act”) (see our previous blog here), the Commission intends to proceed with a package of measures (“AI Innovation Strategy”) to support AI startups and small and medium-sized enterprises (“SMEs”) in the EU.

Alongside these measures, the Commission also announced the creation of the European AI Office (“AI Office”), which is due to begin formal operations on February 21, 2024.

This blog post provides a high-level summary of these two announcements, in addition to some takeaways to bear in mind as we draw closer to the adoption of the AI Act.

Continue Reading European Commission Announces New Package of AI Measures

From February 17, 2024, the Digital Services Act (“DSA”) will apply to providers of intermediary services (e.g., cloud services, file-sharing services, search engines, social networks and online marketplaces). These entities will be required to comply with a number of obligations, including implementing notice-and-action mechanisms, complying with detailed rules on terms and conditions, and publishing transparency reports on content moderation practices. For more information on the DSA, see our previous blog posts here and here.

As part of its powers conferred under the DSA, the European Commission is empowered to adopt delegated and implementing acts on certain aspects of the implementation and enforcement of the DSA. In 2023, the Commission adopted one delegated act on supervisory fees to be paid by very large online platforms and very large online search engines (“VLOPs” and “VLOSEs” respectively), and one implementing act on procedural matters relating to the Commission’s enforcement powers. The Commission has proposed several other delegated and implementing acts, which we set out below. The consultation period for these draft acts has now passed, and we anticipate that they will be adopted in the coming months.

Continue Reading Draft Delegated and Implementing Acts Pursuant to the Digital Services Act

On December 9, 2023, the European Parliament, the Council of the European Union and the European Commission reached a political agreement on the EU Artificial Intelligence Act (“AI Act”) (see here for the Parliament’s press statement, here for the Council’s statement, and here for the Commission’s statement). Following three days of intense negotiations, during the fifth “trilogue” discussions amongst the EU institutions, negotiators reached an agreement on key topics, including: (i) the scope of the AI Act; (ii) AI systems classified as “high-risk” under the Act; and (iii) law enforcement exemptions.

As described in our previous blog posts on the AI Act (see here, here, and here), the Act will establish a comprehensive and horizontal law governing the development, import, deployment and use of AI systems in the EU. In this blog post, we provide a high-level summary of the main points EU legislators appear to have agreed upon, based on the press releases linked above and a further Q&A published by the Commission. However, the text of the political agreement is not yet publicly available. Further, although a political agreement has been reached, a number of details remain to be finalized in follow-up technical working meetings over the coming weeks.

Continue Reading EU Artificial Intelligence Act: Nearing the Finish Line

In a new strategy published on July 11, the European Commission has identified Web 4.0 and Virtual Worlds—often also referred to as the metaverse—as having the potential to transform the ways in which EU citizens live, work and interact.  The EU’s strategy consists of ten action points addressing four themes drawn from the Digital Decade policy programme and the Commission’s Connectivity package: (1) People and Skills; (2) Business; (3) Government (i.e., public services and projects); and (4) Governance.

The European Commission’s strategy indicates that it is unlikely to propose new regulation in the short to medium term: indeed, European Competition Commissioner Margrethe Vestager has recently warned against jumping to regulation of Virtual Worlds as the “first sort of safety pad.” Instead, the Commission considers its framework of current and upcoming digital technology-related legislation (including the GDPR, the Digital Services Act, the Digital Markets Act and the proposed Markets in Crypto-Assets Regulation) to be applicable to Web 4.0 and Virtual Worlds in a “robust” and “future-oriented” manner.

Continue Reading European Commission Publishes New Strategy on Virtual Worlds

On 31 May 2023, at the close of the fourth meeting of the EU-US Trade and Technology Council (“TTC”), Margrethe Vestager – the European Union’s Executive Vice-President responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency and other requirements for companies developing AI systems. The AI Code of Conduct, once finalized, would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to sign up voluntarily.

Continue Reading EU and US Lawmakers Agree to Draft AI Code of Conduct

On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations among the European Parliament, the Council and the European Commission, the Council having adopted its own amendments in late 2022 (see our blog post here for further details). European lawmakers hope to adopt the final AI Act before the end of 2023, ahead of the European Parliament elections in 2024.

In perhaps the most significant change from the Commission and Council draft, under MEPs’ proposals, providers of foundation models – a term defined as an AI model that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks” (Article 3(1c)) – would be subject to a series of obligations. For example, providers would be under a duty to “demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development” (Article 28b(2)(a)), as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model (Article 28b(2)(e)).

Continue Reading EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI

On February 11, 2021, the European Commission launched a public consultation on its initiative to fight child sexual abuse online (the “Initiative”), which aims to impose obligations on online service providers to detect child sexual abuse online and to report it to public authorities. The consultation is part of the data collection activities announced in the Initiative’s inception impact assessment issued in December 2020. The consultation runs until April 15, 2021, and the Commission intends to propose the necessary legislation by the end of the second quarter of 2021.
Continue Reading European Commission Launches Consultation on Initiative to Fight Child Sexual Abuse

On 25 November 2020, the European Commission published a proposal for a Regulation on European Data Governance (“Data Governance Act”).  The proposed Act aims to facilitate data sharing across the EU and between sectors, and is one of the deliverables included in the European Strategy for Data, adopted in February 2020.  (See our previous blog here for a summary of the Commission’s European Strategy for Data.)  The press release accompanying the proposed Act states that more specific proposals on European data spaces are expected to follow in 2021, and will be complemented by a Data Act to foster business-to-business and business-to-government data sharing.

The proposed Data Governance Act sets out rules relating to the following:

  • Conditions for reuse of public sector data that is subject to existing protections, such as commercial confidentiality, intellectual property, or data protection;
  • Obligations on “providers of data sharing services,” defined as entities that provide various types of data intermediary services;
  • Introduction of the concept of “data altruism” and the possibility for organisations to register as a “Data Altruism Organisation recognised in the Union”; and
  • Establishment of a “European Data Innovation Board,” a new formal expert group chaired by the Commission.

Continue Reading AI Update: The European Commission publishes a proposal for a Regulation on European Data Governance (the Data Governance Act)

On January 23, 2020, the European Parliament’s Internal Market and Consumer Protection Committee approved a resolution on artificial intelligence (“AI”) and automated decision-making (“ADM”). The resolution references several major pieces of work carried out by the European Commission on AI and provides a list of existing EU instruments that are relevant to AI and ADM — which together present a potential roadmap of areas of reform.

The resolution was approved in committee by 39 votes in favor, none against and four abstentions. It will next be voted on by the full Parliament in an upcoming plenary session. If adopted, it will be transmitted to the EU Council and the Commission for consideration. The Commission’s Executive Vice-President Margrethe Vestager is expected to present plans for a European approach to AI in a Commission meeting on February 19, 2020.

Continue Reading AI Update: European Parliament Committee Approves Resolution on AI for Consumers

On April 8, 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”).  This follows a stakeholder consultation on its draft guidelines published in December 2018 (the “draft guidance”) (see our previous blog post for more information on the draft guidance).  The guidance retains many of the same core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates further on some of the more nuanced aspects, such as on interaction with existing legislation and reconciling the tension between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a focus in particular on the assessment list set out in Chapter III.  The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.
Continue Reading AI Update: EU High-Level Working Group Publishes Ethics Guidelines for Trustworthy AI