On April 8, 2019, the EU High-Level Expert Group on Artificial Intelligence (the “AI HLEG”) published its “Ethics Guidelines for Trustworthy AI” (the “guidance”).  This follows a stakeholder consultation on the draft guidelines published in December 2018 (the “draft guidance”); see our previous blog post for more information on the draft guidance.  The final guidance retains many of the core elements of the draft guidance, but provides a more streamlined conceptual framework and elaborates on some of the more nuanced aspects, such as the interaction with existing legislation and the reconciliation of tensions between competing ethical requirements.

According to the European Commission’s Communication accompanying the guidance, the Commission will launch a piloting phase starting in June 2019 to collect more detailed feedback from stakeholders on how the guidance can be implemented, with a focus in particular on the assessment list set out in Chapter III.  The Commission plans to evaluate the workability and feasibility of the guidance by the end of 2019, and the AI HLEG will review and update the guidance in early 2020 based on the evaluation of feedback received during the piloting phase.

The guidance is not binding, but stakeholders can voluntarily use it to operationalize their commitment to achieving “Trustworthy AI,” which is the AI HLEG’s term for the gold standard of an ethical approach to AI.  According to the AI HLEG, Trustworthy AI consists of the following three components:

  1. Lawful. It should comply with all applicable laws and regulations;
  2. Ethical. It should comply with ethical principles and values; and
  3. Robust. It should be robust from both a technical and social perspective.

Each component is considered “necessary but not sufficient for the achievement of Trustworthy AI,” and as such all three should “work in harmony and overlap.”  The introduction of “lawfulness” as a component of Trustworthy AI is one of the key changes in the final version of the guidance as compared to the draft.  The guidance recognizes that AI systems do not operate in a legal vacuum and are already subject to a number of existing laws, including (but not limited to) the General Data Protection Regulation (GDPR), the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination legislation, consumer law, and sector-specific laws (such as the Medical Devices Regulation in the healthcare sector).  The guidance confirms that organizations developing, deploying and using AI systems should comply with such existing laws, to the extent that they apply.  The guidance does not discuss these legal obligations in further detail, but instead focuses on the latter two components: that AI systems should be “ethical” and “robust.”

Chapter I of the guidance outlines the four ethical principles that should apply to AI systems: (1) respect for human autonomy; (2) prevention of harm; (3) fairness; and (4) explicability.  The guidance frames these as “ethical imperatives” that AI practitioners should always try to adhere to.  Yet the guidance recognizes that tensions may arise between these principles, for which there is no fixed solution.  For instance, the prevention of harm (such as measures against terrorism) may conflict with respect for human autonomy (such as individuals’ privacy).  The guidance therefore notes that while the four ethical principles offer some direction towards solutions, they remain abstract prescriptions, and AI practitioners should approach ethical dilemmas “via reasoned, evidence-based reflection rather than intuition or random discretion.”

Chapter II of the guidance sets out the following seven key requirements for achieving Trustworthy AI, which apply throughout the life-cycle of the development, deployment and use of AI systems:

  1. Human agency and oversight. Including fundamental rights, human agency and human oversight.
  2. Technical robustness and safety. Including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility.
  3. Privacy and data governance. Including respect for privacy, quality and integrity of data, and access to data.
  4. Transparency. Including traceability, explainability and communication.
  5. Diversity, non-discrimination and fairness. Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation.
  6. Societal and environmental wellbeing. Including sustainability and environmental friendliness, social impact, society and democracy.
  7. Accountability. Including auditability, minimization and reporting of negative impact, trade-offs and redress.

Chapter II also recommends both technical and non-technical measures to achieve Trustworthy AI.  Technical measures include architectures for Trustworthy AI, ethics and rule of law by design, explanation methods, testing and validating, and quality of service indicators.  Non-technical measures include regulation, codes of conduct, standardization, certification, accountability via governance frameworks, education and awareness to foster an ethical mindset, stakeholder participation and social dialogue, and diversity and inclusive design teams.

On regulation as a non-technical measure, the guidance again confirms that existing legislation already supports the trustworthiness of AI systems.  On its face, the guidance does not suggest that the AI HLEG supports specific further regulation of AI at this stage, but it notes that the AI HLEG will soon issue “AI Policy and Investment Recommendations,” which will address whether existing regulation may need to be revised, adapted or introduced in this space.

Chapter III of the guidance provides a Trustworthy AI assessment list (the “assessment list”), which acts as a checklist for stakeholders to ensure that AI systems and applications meet the ethical principles and Trustworthy AI requirements set out above.  A notable addition to this section is guidance on the roles of individuals within an organization in implementing the assessment list (including the Management and Board, Compliance/Legal/Corporate responsibility departments, Product and Service development teams, Quality Assurance, HR, Procurement, and developers and project managers in their day-to-day roles).  The guidance recommends engaging individuals at all levels of the organization, from the operational level up to management.

The guidance includes additional instructions for using the assessment list, recommending a proportionate approach and close attention both to areas of concern and to questions that cannot be (easily) answered.  For example, an organization may be unable to ensure diversity when developing and testing an AI system because its development team lacks diversity.  In that situation, the guidance recommends involving other stakeholders, either inside or outside the organization, to satisfy this requirement.
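
Organizations taking part in the piloting phase may find it helpful to track their responses to the assessment list in lightweight internal tooling.  The following is a minimal sketch in Python, under our own assumptions: only the seven requirement names are taken from the guidance, while the record structure, field names and helper function are illustrative, not anything the AI HLEG prescribes.

    from dataclasses import dataclass
    from typing import List, Optional

    # The seven Trustworthy AI requirements from Chapter II of the guidance.
    REQUIREMENTS = [
        "Human agency and oversight",
        "Technical robustness and safety",
        "Privacy and data governance",
        "Transparency",
        "Diversity, non-discrimination and fairness",
        "Societal and environmental wellbeing",
        "Accountability",
    ]

    @dataclass
    class AssessmentItem:
        requirement: str              # one of the seven requirements above
        question: str                 # a question drawn from the assessment list
        answer: Optional[str] = None  # None until the team records a response
        flagged: bool = False         # mark questions that cannot be (easily) answered

    def open_items(items: List[AssessmentItem]) -> List[AssessmentItem]:
        """Return unanswered or flagged items, which the guidance says
        deserve particular attention."""
        return [i for i in items if i.flagged or i.answer is None]

Tooling of this kind is only a bookkeeping aid: the substantive exercise remains the “reasoned, evidence-based reflection” that the guidance calls for.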

The guidance stresses that the assessment list will need to be adapted to the particular application of the AI system at issue.  It notes that “different situations raise different challenges”: for example, an AI system making music recommendations raises different ethical considerations from one that proposes critical medical treatments.  Greater importance is attached to AI systems that directly or indirectly affect individuals.  Accordingly, the guidance suggests that additional sectoral guidance may be necessary to deal with the different ethical challenges raised in different sectors.

The final section of Chapter III gives examples of opportunities and critical concerns raised by AI, as follows:

  • Examples of opportunities: Using AI for climate action and sustainable infrastructure, to improve health and well-being, to improve the quality of education, and to achieve digital transformation;
  • Examples of critical concerns: Using AI to identify and track individuals (using, for instance, facial recognition technology), covert AI systems, AI-enabled citizen scoring, and lethal autonomous weapons.

In the areas of “critical concern,” the guidance calls for a proportionate approach that takes into account the fundamental human rights of the individuals concerned.  When organizations deploy AI systems that raise these critical concerns, the systems will need to undergo a careful ethical (as well as legal) assessment.

Next Steps

As noted above, the guidance will now enter a “piloting phase” where interested stakeholders can provide feedback on implementing the guidance and the assessment list in real projects.  Based on this feedback, the AI HLEG will update the guidance in early 2020.

In the meantime, according to the Communication, the Commission will work towards a set of international AI ethics guidelines that brings the European approach to the global stage.  The Commission intends to cooperate with “like-minded partners” by finding convergence with other countries’ AI ethics guidelines and by building an international group for broader discussion.  It will also continue to “play an active role in international discussions and initiatives,” such as contributing to the G7 and G20 summits on this issue.

Finally, the Commission announced in its Communication the following plans, to be implemented by the third quarter of 2019:

  • To launch networks of AI research excellence centers;
  • To launch networks of digital innovation hubs (focusing on AI in manufacturing and big data);
  • To start discussions with Member States and stakeholders to “develop and implement a model for data sharing and making best use of common data spaces”;
  • To continue work on its draft report identifying the challenges with the use of AI in the product liability space; and
  • For the European High-Performance Computing Joint Undertaking to develop next-generation supercomputers, which the Commission considers “essential for processing data and training AI.”

These plans further build on the Commission’s broader European AI Strategy, aimed at boosting Europe’s competitiveness in the field of AI.
