The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.

We start this series with a look at how the European Union is approaching the governance of AI.

Future of AI Policy in Europe

As the summer doldrums recede into memory, and lawmakers return to work, it is an apt time to reflect on the future of AI policy in the European Union. The EU sees itself in the lead globally on regulating artificial intelligence, with a draft EU AI Act nearing adoption and a draft EU AI Liability Directive in the works. These initial steps will help shape the wider AI governance structure currently emerging across the world.

I. Policy Vision & Approach

The EU’s AI legislative initiatives are part of an overall policy vision of “technological sovereignty,” which it implements through regulations such as the Digital Markets Act and the Digital Services Act. The EU model is likely to be influential in many important markets across the world, given the so-called “Brussels effect” whereby EU regulations often become global rules. The EU is a large market that is often a first-mover when it comes to regulation, and it can be more efficient for international firms to adopt a single compliance standard.

Yet, when the EU AI Act was first proposed two years ago, some viewed it as putting the cart before the horse: focusing on control rather than capability, or, in a twentieth-century analogy, seeking to excel at stop signs rather than at producing cars. Notwithstanding the perception among some in Europe that it is in a race with the United States on tech and AI, the real competition is between the U.S. and China, with Europe lagging significantly behind in the development of cutting-edge AI and related technology.

Nathaniel Fick, the U.S. ambassador-at-large for cyberspace and digital policy, recently made the same argument, suggesting that EU AI regulations could hamper the technology’s development. Likewise, France’s Digital Minister Jean-Noël Barrot criticized the European Parliament’s draft text of the EU AI Act as “too stringent” and potentially stifling of European innovation. President Macron has also sought to focus on the need to build underlying AI technologies, pledging over €7 billion to fund AI research and development. More recently, over 150 European CEOs and tech experts likewise voiced concern about the EU AI Act’s potential overreach and urged the EU to become “part of the technological avant-garde.”

Although the EU AI Act is nearly finalized, it is only the first step in a wider regulatory infrastructure emerging in Europe—and globally—that will need to keep competing policy objectives in mind: balancing control with capability, and risk with innovation. Whether Europe becomes the tip of the spear on AI, or a global laggard, will depend at least to some degree on the policy and regulatory choices it makes, which we turn to next.

II. Major Policy & Regulatory Initiatives

The EU is currently in the final stages of landmark legislation on artificial intelligence—the EU AI Act and the related EU AI Liability Directive—which it seeks to complete before next year’s elections for the European Parliament and the selection of a new European Commission.

A. EU AI Act

Proposed by the European Commission in April 2021, the draft EU AI Act is an ambitious piece of legislation that seeks to regulate “high-risk” AI systems, impose transparency obligations on providers of certain non-high-risk AI systems, and prohibit certain AI practices (such as social scoring that leads to detrimental treatment, and the use of subliminal techniques to distort behavior). Notably, it could impose substantial administrative costs—for compliance, oversight, and verification—on high-risk AI systems, which may amount to as much as 10 percent of the underlying value of the system.

The AI Act also proposes so-called “regulatory sandboxes.” These are controlled environments intended to encourage developers to test new technologies for a limited period of time, with a view to complying with the regulation. Spain, which holds the rotating Presidency of the Council of the EU until the end of December, is hosting one such regulatory sandbox to enable companies and regulators to test procedures and compliance mechanisms to ensure that products meet the standards of the proposed regulation.

The EU AI Act is nearing adoption: the Council of the EU adopted its “general approach” in December 2022, and the European Parliament adopted its compromise text in June 2023. The Parliament’s text was based on a draft approved the previous month by its Internal Market and Civil Liberties Committees, which incorporated over 3,000 amendments.

Negotiations on the final text (called “trilogues”) have begun among the three EU institutions—the Council of the EU, the European Parliament, and the European Commission—and should conclude over the next couple of months. There are several matters at issue in the final negotiations, including whether to ban all facial recognition used in public places; the regulation of large language models; and whether to treat certain generative AI models as high risk.

The Spanish government has committed to finalizing an agreement on the legislative text during its Council Presidency this year. However, if the Act is not adopted before the June 2024 elections for the European Parliament and the selection of a new Commission to take office in late 2024, the legislation is likely to be delayed by six months to a year. In that case, the new Parliament and Commission may have different priorities for this legislation. Once the AI Act is adopted, its obligations will become applicable across the EU two to three years later, depending on which institution’s text prevails in the negotiations.

B. EU AI Liability Directive and Product Liability Directive

In September 2022, the European Commission proposed a new directive on adapting non-contractual fault-based civil liability rules to AI. The proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI (as defined under the AI Act), as well as rules on the burden of proof and corresponding rebuttable presumptions.

If adopted as proposed, the draft AI Liability Directive will apply to damages that occur two years or more after the Directive enters into force. Five years after its entry into force, the Commission will consider the need for rules on no-fault liability for AI claims. Alongside the AI Liability Directive, the European Commission proposed updates to the Product Liability Directive to harmonize rules for no-fault liability claims by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by a defective AI system.

Stakeholders and academics are questioning, among other things, the adequacy and effectiveness of the proposed liability regime, its coherence with the EU AI Act currently under negotiation, its potentially detrimental impact on innovation, and the interplay between EU and national rules. Once the EU AI Act is finalized, focus will turn to completing these two legislative files.

III. Other Policy Initiatives

Beyond the EU AI Act and associated initiatives, the EU has also been active in shaping the direction of AI policy through engagement with industry and international partners.

A. AI Code of Conduct / Pact

Amid the flurry of media attention over the past few months on the pace of AI developments, particularly generative AI and large language models, the European Commissioners in overall charge of digital policy—Executive Vice President Margrethe Vestager and Commissioner Thierry Breton—each signaled an intention to pursue a voluntary code of conduct with private industry. The precise terms of such a pact or pacts have yet to be made public. Meanwhile, there has been latent competition between Vestager and Breton for primacy over EU digital policy. Ultimately, it appears that Vestager’s approach will have global scope, building on her discussions within the G7 (discussed further below), whereas Breton’s will focus on accelerating the de facto applicability of the EU AI Act within Europe, even before the legislation formally takes effect two or three years after adoption.

On September 5, Vestager took an unpaid leave of absence from the Commission to run for the presidency of the European Investment Bank, with the selection taking place sometime in the fall and the winner assuming office in January 2024. Vice-President Věra Jourová—the architect of the EU-U.S. Data Protection Umbrella Agreement and the Privacy Shield—has taken on Vestager’s digital portfolio in the interim. Depending on who replaces Vestager as Danish Commissioner if she is appointed to the EIB role and resigns from the European Commission, Jourová may continue to hold some of those responsibilities until the end of this Commission’s mandate next autumn. As Vice-President for Values and Transparency, Jourová has already been engaged in the AI policy debate, recently calling for AI-generated content to be watermarked and identifiable.

B. U.S.-EU Trade and Technology Council

Over the past two years, the EU and the U.S. have held ongoing regulatory dialogue on AI within the U.S.-EU Trade and Technology Council (TTC). In December 2022, the TTC’s working group on tech standards issued a new joint roadmap for trustworthy AI and risk management. The Roadmap aims to (i) advance shared terminologies and taxonomies by way of a common repository, (ii) share approaches to AI risk management and trustworthy AI in order to advance collaborative approaches related to AI in international standards bodies, (iii) establish a shared hub of metrics and methodologies for measuring AI trustworthiness, risk management methods, and related tools, and (iv) develop knowledge-sharing mechanisms to monitor and measure existing and emerging AI risks.

Both sides agree on a risk-based approach to AI and the need to develop trustworthy AI, but differ significantly on the necessary regulatory frameworks, allocation of responsibility for risk assessment, and balance between obligatory and voluntary measures. Relatedly, on June 21, a bipartisan group of Congressmen wrote a letter to President Biden expressing concern with the EU’s digital policies and their impact on U.S. firms.

At the last TTC meeting in Sweden on May 30-31, the two sides committed to continue to focus on seizing the opportunities and mitigating the risks of AI, particularly in light of rapid developments in generative AI. They launched three dedicated expert groups that focus on: (i) AI terminology and taxonomy, (ii) cooperation on AI standards and tools for trustworthy AI and risk management, and (iii) monitoring and measuring existing and emerging AI risks. The closing statement of the May meeting confirms that the EU and U.S. will “continue to consult and be informed by industry, civil society, and academia.”

C. G7 Hiroshima AI Process—and Beyond

The EU is also taking the lead in shaping AI policy through the G7. At their last summit in Hiroshima, G7 leaders pledged to “advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values.” It appears that the EU and U.S. are spearheading this effort, and plan to present a joint proposal on an AI voluntary code of conduct to the G7 leaders for their endorsement. Italy will hold the next presidency of the G7 and will host the G7 summit in Puglia in June 2024.

The UK is also seeking to take a leading role in this multilateral push to develop common standards and approaches to mitigating risks associated with AI. In November 2023, UK Prime Minister Sunak will host an AI Safety Summit, which will be attended by both AI researchers and policymakers. Indeed, European Commission President von der Leyen, U.S. Vice-President Harris, French President Macron, and Canadian Prime Minister Trudeau are all expected to attend.

The U.N. Secretary-General, António Guterres, announced in July that he would also convene a high-level meeting to examine options for the global governance of AI. Guterres intends for this group to build on the recommendations in the July 2023 New Agenda for Peace policy brief that member states develop common norms and national strategies on the development, design, and deployment of AI, and a global framework for the use of AI and similar data-driven technologies in counterterrorism. In her recent State of the European Union speech in September, European Commission President von der Leyen endorsed Guterres’ approach. She called for a process similar to the UN’s Intergovernmental Panel on Climate Change, bringing “scientists, tech companies and independent experts all around the table,” building on the G7 Hiroshima Process. Von der Leyen also proposed that these experts “develop a fast and globally coordinated response” to AI’s “risks and … benefits for humanity”.

* * *

Policymakers in Europe have made significant efforts to keep pace with these technological developments, and have already gained extensive technical and regulatory expertise. Yet, as the landscape keeps evolving, thought leadership—and engagement from industry, civil society, and academia—will be essential to identifying both the opportunities and risks of new technological frontiers on AI and developing corresponding policy and regulatory frameworks.

IV. Thought Leadership

Our regulatory and public policy teams closely track and contribute to the discussion around AI policy in Europe through articles on our public-facing blogs.

Carl Bildt

Carl Bildt, Former Prime Minister of Sweden, draws on his extensive political experience to advise clients as a non-lawyer member of the firm’s global Public Policy and Government Affairs practice. Carl returned to government office as Sweden’s Minister for Foreign Affairs from 2006 to 2014.

As Prime Minister of Sweden from 1991 to 1994, Carl led the government that negotiated and signed Sweden’s accession to the European Union, reformed and liberalized the Swedish economy, and modernized its welfare system. After leaving office, he played a key role as a mediator in the Balkan conflict for the European Union and the United Nations. As Foreign Minister, he was an important proponent of the EU’s “Eastern Partnership” and of EU engagement in the Middle East.

His public policy profile and experience are extensive. He has served on the boards of various organizations, including the Centre for European Reform, the International Institute for Strategic Studies, and the European Policy Centre; on the Council on Foreign Relations in New York and the European Council on Foreign Relations; and as the first non-U.S. member of the Board of Trustees of the RAND Corporation.

Carl also has a well-established profile in technology circles. He is Chair of the Global Commission on Internet Governance, a former adviser to ICANN, and a high-profile proponent of a global digital marketplace. Carl recently co-authored a study with the Atlantic Council entitled “Building a Transatlantic Digital Marketplace.”

Cecilia Malmström

Cecilia Malmström is a senior advisor in the firm’s Brussels office. She has devoted the better part of her career to global affairs and international relations and has extensive experience with multilateral leadership and cooperation. Cecilia, a non-lawyer, served as European commissioner for trade from 2014 to 2019 and as European commissioner for home affairs from 2010 to 2014. She was first elected as a member of the European Parliament in 1999, serving until 2006, and was minister for EU affairs in the Swedish government from 2006 to 2010.

As European commissioner for trade, Cecilia represented the European Union in the World Trade Organization (WTO) and other international trade bodies. She was responsible for negotiating bilateral trade agreements with key countries, including agreements with Canada, Japan, Mexico, Singapore, Vietnam, and the four founding Mercosur countries.

Cecilia holds a Ph.D. in political science from the University of Gothenburg.

Bart Szewczyk

Having served in senior advisory positions in the U.S. government, Bart Szewczyk advises on European and global public policy, particularly on technology, economic sanctions and asset seizure, trade and foreign investment, business and human rights, and environmental, social, and governance issues; he also conducts international arbitration. He teaches grand strategy as an Adjunct Professor at Sciences Po in Paris and is a Nonresident Senior Fellow at the German Marshall Fund.

Bart recently worked as Advisor on Global Affairs at the European Commission’s think-tank, where he covered a wide range of foreign policy issues, including international order, defense, geoeconomics, transatlantic relations, Russia and Eastern Europe, Middle East and North Africa, and China and Asia. Previously, between 2014 and 2017, he served as Member of Secretary John Kerry’s Policy Planning Staff at the U.S. Department of State, where he covered Europe, Eurasia, and global economic affairs. From 2016 to 2017, he also concurrently served as Senior Policy Advisor to the U.S. Ambassador to the United Nations, Samantha Power, where he worked on refugee policy. He joined the U.S. government from teaching at Columbia Law School, as one of two academics selected nationwide for the Council on Foreign Relations International Affairs Fellowship. He has also consulted for the World Bank and Rasmussen Global.

Prior to government, Bart was an Associate Research Scholar and Lecturer-in-Law at Columbia Law School, where he worked on international law and U.S. foreign relations law. Before academia, he taught international law and international organizations at George Washington University Law School, and served as a visiting fellow at the EU Institute for Security Studies. He also clerked at the International Court of Justice for Judges Peter Tomka and Christopher Greenwood and at the U.S. Court of Appeals for the Third Circuit for the late Judge Leonard Garth.

Bart holds a Ph.D. from Cambridge University where he studied as a Gates Scholar, a J.D. from Yale Law School, an M.P.A. from Princeton University, and a B.S. in economics (summa cum laude) from The Wharton School at the University of Pennsylvania. He has published in Foreign Affairs, Foreign Policy, Harvard International Law Journal, Columbia Journal of European Law, American Journal of International Law, George Washington Law Review, Survival, and elsewhere. He is the author of three books: Europe’s Grand Strategy: Navigating a New World Order (Palgrave Macmillan 2021); with David McKean, Partners of First Resort: America, Europe, and the Future of the West (Brookings Institution Press 2021); and European Sovereignty, Legitimacy, and Power (Routledge 2021).