Emerging Technologies

On 13 October 2023, members of the G7 released a set of draft guiding principles (“Principles”) for organisations developing advanced AI systems, including generative AI and foundation models.

In parallel, the European Commission launched a stakeholder survey (“Survey”) on the Principles, inviting any interested parties to comment by 20 October 2023.  After the Survey is complete, G7 members intend to compile a voluntary code of conduct that will provide guidance for AI developers.  The Principles and voluntary code of conduct will complement the legally binding rules that EU co-legislators are currently finalizing under the EU AI Act (for further details on the AI Act, see our blog post here).

In response to recent developments in advanced AI systems, the Principles build on the existing OECD AI Principles published in May 2019 (see our blog post here).  They would apply to all participants in the AI value chain, including those responsible for the design, development, deployment, and use of AI systems.
Continue Reading G7 Countries Publish Draft Guiding Principles for Advanced AI Development

On June 3, the New York State legislature passed its version of a right to repair bill—titled the “Digital Fair Repair Act”—that would allow consumers to repair their digital electronic equipment without involving the manufacturer.
Continue Reading Right to Repair: New York State Passes Right to Repair Law

President Donald Trump signed an executive order (EO) on December 3, providing guidance for federal agency adoption of artificial intelligence (AI) for government decision-making in a manner that protects privacy and civil rights.

Emphasizing that ongoing adoption and acceptance of AI will depend significantly on public trust, the EO charges the Office of Management and Budget with charting a roadmap for policy guidance by May 2021 on how agencies should use AI technologies in all areas excluding national security and defense.  The policy guidance should build upon and expand existing applicable policies addressing information technology design, development, and acquisition.
Continue Reading AI Update: New Executive Order on Promoting the Use of Artificial Intelligence in Federal Agencies Pushes Developing Public Trust for Future Expansion

FCC Chairman Pai announced today that the FCC will move forward with a rulemaking to clarify the meaning of Section 230 of the Communications Decency Act (CDA).  To date, Section 230 generally has been interpreted to mean that social media companies, ISPs, and other “online intermediaries” have not been subject to liability for their users’ actions.

On July 27, the Trump Administration—acting through the National Telecommunications and Information Administration—submitted a Petition for Rulemaking on Section 230, and Chairman Pai announced on August 3 that the FCC would seek public comment on the petition.  That petition asked the FCC to adopt rules to “clarify” the circumstances under which the liability shield of Section 230 applies.  Citing the FCC General Counsel’s reported position that the Commission has the legal authority to interpret Section 230, Chairman Pai today stated that a forthcoming agency rulemaking will strive to “clarify its meaning.”
Continue Reading FCC Announces Section 230 Rulemaking

On 19 February 2020, the European Commission presented its long-awaited strategies for data and AI.  These follow Commission President Ursula von der Leyen’s commitment upon taking office to put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within the new Commission’s first 100 days.  Although the papers published this week do not set out a comprehensive EU legal framework for AI, they do give a clear indication of the Commission’s key priorities and anticipated next steps.

The Commission strategies are set out in four separate papers—two on AI, and one each on Europe’s digital future and the data economy.  Read together, they make clear that the Commission seeks to position the EU as a digital leader, both in terms of trustworthy AI and the wider data economy.
Continue Reading AI Update: European Commission Presents Strategies for Data and AI (Part 1 of 4)

On February 4, 2020, the United Kingdom’s Centre for Data Ethics and Innovation (“DEI”) published its final report on “online targeting” (the “Report”), examining practices used to monitor a person’s online behaviour and subsequently customize their experience. In October 2018, the UK government appointed the DEI, an expert committee that advises the UK government on how to maximize the benefits of new technologies, to explore how data is used in shaping people’s online experiences. The Report sets out its findings and recommendations.
Continue Reading Centre for Data Ethics and Innovation publishes final report on “online targeting”

The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI.  The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence.  Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.

The draft guidance builds upon the ICO’s previous work in this area, including its AI Auditing Framework, June 2019 Project ExplAIN interim report, and September 2017 paper ‘Big data, artificial intelligence, machine learning and data protection’.  (Previous blog posts that track this issue are available here.)  Elements of the new draft guidance touch on points that go beyond narrow GDPR requirements, such as AI ethics (see, in particular, the recommendation to provide explanations of the fairness or societal impacts of AI systems).  Other sections of the guidance are quite technical; for example, the ICO provides its own analysis of the possible uses and interpretability of eleven specific types of AI algorithms.

Organizations that develop, test or deploy AI decision-making systems should review the draft guidance and consider responding to the consultation.  The consultation is open until January 24, 2020.  A final version is expected to be published later next year.
Continue Reading UK ICO and The Alan Turing Institute Issue Draft Guidance on Explaining Decisions Made by AI

On 19 September 2019, the European Parliamentary Research Service (“EPRS”)—the European Parliament’s in-house research service—released a briefing paper that summarizes the current status of the EU’s approach to developing a regulatory framework for ethical AI.  Although not a policymaking body, the EPRS can provide useful insights into the direction of EU policy on an issue.  The paper summarises recent calls in the EU for adopting legally binding instruments to regulate AI, in particular to set common rules on AI transparency, set common requirements for fundamental rights impact assessments, and provide an adequate legal framework for facial recognition technology.

The briefing paper follows publication of the European Commission’s high-level expert group’s Ethics Guidelines for Trustworthy Artificial Intelligence (the “Guidelines”), and the announcement by incoming Commission President Ursula von der Leyen that she will put forward legislative proposals for a “coordinated European approach to the human and ethical implications of AI” within her first 100 days in office.
Continue Reading European Parliamentary Research Service issues a briefing paper on implementing EU’s ethical guidelines on AI