Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they first took effect. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, the Data Act, and the Digital Services Act.

Sam's practice includes advising leading companies in the technology, life sciences, and gaming sectors on regulatory, compliance, and policy issues relating to privacy and data protection, digital services, and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on children’s privacy and on policy initiatives relating to online safety.

On 17 December 2020, the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study (the “Study”) on Artificial Intelligence (AI) legal standards. The Study examines the feasibility and potential elements of a legal framework for the development and deployment of AI, based on the Council of Europe’s human rights standards. Its main conclusion is that current regulations do not suffice to create the legal certainty, trust, and level playing field needed to guide the development of AI. Accordingly, it proposes the development of a new legal framework for AI consisting of both binding and non-binding Council of Europe instruments.

The Study recognizes the major opportunities of AI systems to promote societal development and human rights. Alongside these opportunities, it also identifies the risk that AI could endanger rights protected by the European Convention on Human Rights (ECHR), as well as democracy and the rule of law. Examples of the risks to human rights cited in the Study include AI systems that undermine the right to equality and non-discrimination by perpetuating biases and stereotypes (e.g., in employment), and AI-driven surveillance and tracking applications that jeopardize individuals’ right to freedom of assembly and expression.

Continue Reading AI Update: The Council of Europe Publishes Feasibility Study on Developing a Legal Instrument for Ethical AI

In April 2019, the UK Government published its Online Harms White Paper and launched a Consultation. In February 2020, the Government published its initial response to that Consultation. In its full response to the Online Harms White Paper Consultation, published on 15 December 2020, the Government outlined its vision for tackling harmful content online through a new regulatory framework, to be set out in a new Online Safety Bill (“OSB”).

This development comes at a time of heightened scrutiny of, and regulatory changes to, digital services and markets. Earlier this month, the UK Competition and Markets Authority published recommendations to the UK Government on the design and implementation of a new regulatory regime for digital markets (see our update here).

The UK Government is keen to ensure that policy initiatives in this sector are coordinated with similar legislative efforts, including those in the US and the EU. The European Commission also published its proposal for a Digital Services Act on 15 December, proposing a broadly similar system for regulating illegal online content that places greater responsibilities on technology companies.

Key points of the UK Government’s plans for the OSB are set out below.

Continue Reading UK Government Plans for an Online Safety Bill

On 25 November 2020, the European Commission published a proposal for a Regulation on European Data Governance (“Data Governance Act”).  The proposed Act aims to facilitate data sharing across the EU and between sectors, and is one of the deliverables included in the European Strategy for Data, adopted in February 2020.  (See our previous blog here for a summary of the Commission’s European Strategy for Data.)  The press release accompanying the proposed Act states that more specific proposals on European data spaces are expected to follow in 2021, and will be complemented by a Data Act to foster business-to-business and business-to-government data sharing.

The proposed Data Governance Act sets out rules relating to the following:

  • Conditions for reuse of public sector data that is subject to existing protections, such as commercial confidentiality, intellectual property, or data protection;
  • Obligations on “providers of data sharing services,” defined as entities that provide various types of data intermediary services;
  • Introduction of the concept of “data altruism” and the possibility for organisations to register as a “Data Altruism Organisation recognised in the Union”; and
  • Establishment of a “European Data Innovation Board,” a new formal expert group chaired by the Commission.

Continue Reading AI Update: The European Commission publishes a proposal for a Regulation on European Data Governance (the Data Governance Act)

On 11 November 2020, the European Data Protection Board (“EDPB”) issued two draft recommendations relating to the rules on how organizations may lawfully transfer personal data from the EU to countries outside the EU (“third countries”).  These draft recommendations, which are non-final and open for public consultation until 30 November 2020, follow the decision of the Court of Justice of the EU (“CJEU”) in Case C-311/18 (“Schrems II”).  (For a more in-depth summary of the CJEU decision, please see our blog post here and our audiocast here. The EDPB also published FAQs on the Schrems II decision on 24 July 2020, available here.)

The two recommendations adopted by the EDPB are:

  • Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data; and
  • Recommendations 02/2020 on the European Essential Guarantees for surveillance measures.

Continue Reading EDPB adopts recommendations on international data transfers following Schrems II decision

The National Institute of Standards and Technology (“NIST”) is seeking comments on the first draft of the Four Principles of Explainable Artificial Intelligence (NISTIR 8312), a white paper that seeks to define the principles that capture the fundamental properties of explainable AI systems.  NIST will be accepting comments until October 15, 2020.

In February 2019, the Executive Order on Maintaining American Leadership in Artificial Intelligence directed NIST to develop a plan that would, among other objectives, “ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.”  In response, NIST issued a plan in August 2019 for prioritizing federal agency engagement in the development of AI standards, identifying seven properties that characterize trustworthy AI—accuracy, explainability, resiliency, safety, reliability, objectivity, and security.

NIST’s white paper focuses on explainability and identifies four principles underlying explainable AI.

Continue Reading AI Standards Update: NIST Solicits Comments on the Four Principles of Explainable Artificial Intelligence and Certain Other Developments

On July 30, 2020, the UK Information Commissioner’s Office (“ICO”) published its final guidance on Artificial Intelligence (the “Guidance”).  The Guidance sets out a framework for auditing AI systems for compliance with data protection obligations under the GDPR and the UK Data Protection Act 2018.  The Guidance builds on the ICO’s earlier commitment to enable good data protection practice in AI, and on previous guidance and blogs issued on specific issues relating to AI (for example, on explaining decisions made with AI, trade-offs, and bias and discrimination, all covered in Covington blogs).
Continue Reading UK ICO publishes guidance on Artificial Intelligence

On June 24, 2020, the United Nations Economic Commission for Europe (“UNECE”) adopted two regulations that will have a significant impact on manufacturers of connected and autonomous vehicles (“CAVs”). These regulations impose obligations relating to cybersecurity and software updates for passenger cars, vans, trucks, and buses; the cybersecurity regulation also extends to light four-wheeled vehicles equipped with automated driving functionalities from level 3 (conditional automation) onward. The regulations will enter into force in January 2021.

The European Union, South Korea, and Japan are expected to take steps to adopt these UNECE regulations in their respective national laws in the next couple of years. Given the widespread use of UN Regulations in the automotive sector globally, we anticipate that other countries will also adopt these regulations. Once implemented, any manufacturer that sells vehicles in the implementing countries must comply with the regulatory requirements, including by ensuring that its supply chain does not prevent compliance. As a result, the effects of the regulations are likely to flow down to vehicle manufacturers even in countries that do not adopt them, such as the United States.
Continue Reading IoT Update: UN Takes the Driver’s Seat for International Regulations on Connected and Autonomous Vehicles Cybersecurity and Software Updates

On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there is not yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it.  The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate. As we discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.

Continue Reading AI Update: EU High-Level Working Group Publishes Self Assessment for Trustworthy AI

On February 10, 2020, the UK Government’s Committee on Standards in Public Life (the “Committee”) published its Report on Artificial Intelligence and Public Standards (the “Report”). The Report examines potential opportunities and hurdles in the deployment of AI in the public sector, including how such deployment may implicate the “Seven Principles of Public Life” applicable to holders of public office, also known as the “Nolan Principles” (available here). It also sets out practical recommendations for the use of AI in public services, which will be of interest to companies supplying AI technologies to the public sector (including the UK National Health Service (“NHS”)), or offering public services directly to UK citizens on behalf of the UK Government. The Report elaborates on the UK Government’s June 2019 Guide to using AI in the public sector (see our previous blog here).
Continue Reading UK Government’s Advisory Committee Publishes Report on Public Sector Use of AI