Artificial Intelligence (AI)

On September 8, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, announced a new bipartisan framework for artificial intelligence (“AI”) legislation.  Senator Blumenthal said, “This bipartisan framework is a milestone – the first tough, comprehensive legislative blueprint for real, enforceable AI protections. It should put us on a path to addressing the promise and peril AI portends.” He also told CTInsider that he hopes to have a “detailed legislative proposal” ready for Congress by the end of this year.

Continue Reading Senators Release Bipartisan Framework for AI Legislation

On September 6, 2023, U.S. Senator Bill Cassidy, ranking member of the Senate Health, Education, Labor and Pensions (HELP) Committee, published a white paper addressing artificial intelligence (AI) and its potential benefits and risks in the workplace, as well as in the health care context, which we discuss here.

The white paper notes that employers

On August 25, 2023, China’s National Information Security Standardization Technical Committee (“TC260”) released the final version of the Practical Guidelines for Cybersecurity Standards – Method for Tagging Content in Generative Artificial Intelligence Services (《网络安全标准实践指南——生成式人工智能服务内容标识方法》) (“Tagging Standard”) (Chinese version available here), following a draft version circulated earlier this month.

Continue Reading Labeling of AI Generated Content: New Guidelines Released in China

On July 7, 2023, the UK House of Lords’ Communications and Digital Committee (the “Committee”) announced an inquiry into Large Language Models (“LLMs”), a type of generative AI used for a wide range of purposes, including producing text, code and translations.  According to the Committee, it has launched the inquiry to understand “what needs to happen over the next 1–3 years to ensure the UK can respond to the opportunities and risks posed by large language models.”

This inquiry is the first UK Parliament initiative to evaluate the UK Government’s “pro-innovation” approach to AI regulation, which empowers regulators to oversee AI within their respective sectors (as discussed in our blog here).  UK regulators have already begun implementing this approach.  For example, the Information Commissioner’s Office has recently issued guidance on AI and data protection, as well as on generative AI tools that process personal data (see our blogs here and here for more details).

Continue Reading UK House of Lords Announces Inquiry into Large Language Models

On July 13, 2023, the Cybersecurity Administration of China (“CAC”), in conjunction with six other agencies, jointly issued the Interim Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能管理暂行办法》) (“Generative AI Measures” or “Measures”) (official Chinese version here).  The Generative AI Measures are set to take effect on August

On 21 June 2023, at the close of a roundtable meeting of the G7 Data Protection and Privacy Authorities, regulators from the United States, France, Germany, Italy, the United Kingdom, Canada and Japan published a joint “Statement on Generative AI” (“Statement”) (available here). In the Statement, the regulators identify a range of data protection-related concerns they believe are raised by generative AI tools, including the legal authority for processing personal information, as well as transparency, explainability, and security. The group of regulators also calls on companies to “embed privacy in the design conception, operation, and management” of generative AI tools.

In advance of the G7 meeting, on 15 June 2023, the UK Information Commissioner’s Office (“ICO”) separately announced that it will be “checking” whether businesses have addressed privacy risks before deploying generative AI, and “taking action where there is risk of harm to people through poor use of their data”.

Continue Reading UK and G7 Privacy Authorities Warn of Privacy Risks Raised by Generative AI

The Federal Communications Commission and National Science Foundation announced this week that they will co-host a workshop on July 13, 2023, entitled “The Opportunities and Challenges of Artificial Intelligence for Communications Networks and Consumers.”

Per the press release, the workshop will cover a number of issues, including “AI’s transformative potential to optimize network traffic; improve

On 31 May 2023, at the close of the fourth meeting of the US-EU Trade & Tech Council (“TTC”), Margrethe Vestager – the European Union’s Executive Vice President, responsible for competition and digital strategy – announced that the EU and US are working together to develop a voluntary AI Code of Conduct in advance of formal regulation taking effect. The goal, according to Vestager, is to develop non-binding international standards on risk audits, transparency and other requirements for companies developing AI systems. The AI Code of Conduct, once finalized, would be put before G7 leaders as a joint transatlantic proposal, and companies would be encouraged to voluntarily sign up.

Continue Reading EU and US Lawmakers Agree to Draft AI Code of Conduct

On May 23, 2023, the White House announced that it took the following steps to further advance responsible Artificial Intelligence (“AI”) practices in the U.S.:

  • the Office of Science and Technology Policy (“OSTP”) released an updated strategic plan that focuses on federal investments in AI research and development (“R&D”);
  • OSTP issued a new request for information (“RFI”) on critical AI issues; and
  • the Department of Education issued a new report on risks and opportunities related to AI in education.
Continue Reading White House Announces New Efforts to Advance Responsible AI Practices

On 11 May 2023, members of the European Parliament’s internal market (IMCO) and civil liberties (LIBE) committees agreed their final text on the EU’s proposed AI Act. After MEPs formalize their position through a plenary vote (expected this summer), the AI Act will enter the last stage of the legislative process: “trilogue” negotiations among the European Commission, the Parliament, and the Council, which adopted its own amendments in late 2022 (see our blog post here for further details). European lawmakers hope to adopt the final AI Act before the end of 2023, ahead of the European Parliament elections in 2024.

In perhaps the most significant change from the Commission and Council draft, under MEPs’ proposals, providers of foundation models – a term defined as an AI model that is “trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks” (Article 3(1c)) – would be subject to a series of obligations. For example, providers would be under a duty to “demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development” (Article 28b(2)(a)), as well as to draw up “extensive technical documentation and intelligible instructions for use” to help those that build AI systems using the foundation model (Article 28b(2)(e)).

Continue Reading EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI