Policymakers and other stakeholders continue to promote the development and adoption of artificial intelligence (“AI”) worldwide. For example, the European Commission recently released a white paper describing a proposed framework for regulating AI. In the United States, lawmakers have considered AI legislation, and President Trump signed an Executive Order on AI that, among other things, promotes investment in AI and directs the National Institute of Standards and Technology to establish AI standards, including for AI trustworthiness. Over the past year, consistent with the Executive Order, the federal government has invested significantly in AI, as detailed in an annual report recently released by the Trump Administration.

Additionally, as we explained in a July 2019 article, some state and local governments also have established task forces to help expand adoption and use of AI (or automated decision making) within their jurisdictions and to evaluate requirements that might apply to such AI use. In particular, we previously described the efforts of government task forces in New York City and Vermont, and noted that other governments were also considering these issues.

The following outlines the recent progress of these task forces. These developments, as well as those mentioned above, should be of interest not only to organizations developing and deploying AI, but also to government contractors seeking to supply AI to government agencies, as they may set the stage for future regulations or procurement requirements pertaining to AI.

New York City’s Automated Decision Systems Task Force

In May 2018, New York City created the Automated Decision Systems Task Force (the “ADS Task Force”) pursuant to Local Law 49 of 2018 to produce a report with recommendations for automated decision making systems. The ADS Task Force was directed to examine several topics relating to trustworthiness, including:

  • Procedures for persons affected by city decisions involving the use of an automated decision system to request and receive an explanation of such decision;
  • Procedures for determining whether an automated decision system disproportionately impacts persons based upon their status as members of a protected category, and for addressing instances in which persons have been harmed by such system; and
  • Processes for making information publicly available that would allow the public to assess meaningfully how each automated decision system functions and is used by the city, including making technical information about such system publicly available.

The ADS Task Force released its report in November 2019. In the report, the ADS Task Force made high-level, principles-based recommendations aligned with its three core themes of automated decision system management, along with some guidance for operationalizing them. The ADS Task Force’s recommendations are as follows:

  • Management capacity. (1) Develop and centralize resources within the city government that can guide policy and assist agencies in the development, implementation, and use of automated decision systems; (2) adopt a phased approach to developing and institutionalizing agency and citywide automated decision system management practices; and (3) strengthen the capacity of city agencies to develop and use automated decision systems.
  • Public involvement. (1) Facilitate public education about automated decision systems; and (2) engage the public in ongoing work around automated decision systems.
  • Operations management. (1) Establish a framework for agency reporting and publishing of information related to automated decision systems; (2) incorporate information about automated decision systems into processes for public inquiry about or challenge to city agency decisions; and (3) create an internal city process for assessing specific automated decision systems for any risk of disproportionate impact to any individual or group on the basis of protected characteristics.

The final ADS Task Force report has faced some criticism, both from outside observers and from members of the Task Force itself. For example, Albert Cahn, Executive Director of the Surveillance Technology Oversight Project, observed that after a considerable investment of public resources, the final report was only 36 pages in length, with the first 16 pages devoted to introductions, task force member biographies, and the history of the task force itself, and a mere eight pages of policy recommendations that were viewed by many as vague. Some task force members themselves were critical of the process, with one member referring to the final report as “a waste” and a “sad precedent.”

Although the Task Force has dissolved, New York City’s focus on AI policy continues. On November 19, 2019, New York City Mayor Bill de Blasio issued an executive order creating an “algorithms management and policy officer” in the city government. This new office is tasked with serving as the “centralized resource to help guide the City and its agencies in the development, responsible use, and assessment of algorithmic and related technical tools and systems,” as well as with engaging the public on the use of these technologies. A week after this executive order, Council Member Peter Koo introduced legislation that, if adopted, would require annual reporting on every automated decision system used by city agencies, including what each automated decision system is intended to measure or reveal and a description of the decisions made based on such system. In January 2020, the New York City Committee on Technology held a hearing on this legislation, during which the co-chairs of the ADS Task Force discussed the process that led to the final report, while acknowledging the criticism that the report’s recommendations did not go far enough. Following the hearing, the Committee on Technology deferred the proposed legislation until the next legislative session.

Vermont’s Artificial Intelligence Task Force

In 2018, Vermont launched the Vermont Artificial Intelligence Task Force (the “AI Task Force”) charged with assessing the development and use of AI, including (1) benefits and risks; (2) whether and how to use AI in state government, including an analysis of any fiscal impact; and (3) whether state regulation of AI is needed.

The AI Task Force was directed to prepare a report by June 30, 2019 that would provide (1) a summary of the development and use of AI in Vermont; (2) a proposed definition for AI and a proposal for state regulation, if needed; (3) a proposal for the responsible and ethical development of AI in Vermont; and (4) a recommendation on whether there should be a permanent commission to study the AI field.

After some delays, the AI Task Force released its report in January 2020, with the following recommendations:

  • Establish a permanent commission on AI to propose policy initiatives and support its responsible development.
  • Adopt a code of ethics to set standards for responsible AI.
  • Create incentives for the further development of the AI industry in Vermont.
  • Support the responsible use of AI by agencies of state and local government.
  • Enhance education and workforce development programs targeted to AI, with the recommended involvement of Vermont’s higher education community for workforce training in the development and use of AI.
  • Expand education of the public on the power and opportunity of AI and the risks created by it so Vermont has an informed citizenry on these issues.

The report establishes the following baseline definition for artificial intelligence:

“Artificial intelligence (A.I.) systems are systems (usually software) capable of perceiving an environment through data acquisition and then processing and interpreting the derived information to take action(s) or imitate intelligent behavior given a specified goal. AI systems can also learn/adapt their behavior by analyzing how the environment is affected by prior actions.

“As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”

This definition is adapted from the European Union’s glossary definition. The European Commission is considering the definition of AI in the context of the white paper mentioned above.

The AI Task Force did not recommend promulgating any new AI regulations at this time. However, the other recommendations in the report suggest that AI will continue to garner attention in Vermont, both in terms of regulation and in terms of government use and procurement.

It is worth noting that Vermont currently prohibits the use of biometric identifiers in certain agency processes, including identifying applicants for non-commercial driver licenses. Additionally, in March 2020, Vermont enacted legislation that expanded the categories of personally identifiable information that may trigger notification obligations to individuals and regulators in the event of a breach to include biometric and genetic data.

Alabama’s Commission on Artificial Intelligence and Associated Technologies, Massachusetts Bill and California’s First Digital Innovation Officer

Established in May 2019, Alabama’s 25-member Commission on Artificial Intelligence and Associated Technologies was expected to deliver its report at the beginning of this month (May 1, 2020). No such report is available at this time, and it would not be surprising if it were delayed due to the current pandemic. The report is anticipated to address the use of AI in a wide array of fields within the state (including “governance, health care, education, environment, transportation, and industries of the future such as autonomous cars, industrial robots, algorithms for disease diagnosis, manufacturing, and other rapid technological innovations”).

Very recently, Massachusetts took under consideration a bill to establish a commission that would analyze the use of automated decision systems in the state. The proposed commission would conduct a statewide survey of all uses of automated decision systems by state governmental bodies and examine the policies state agencies use to procure such systems and to validate and test them once deployed. On May 11, 2020, the Committee on State Administration and Regulatory Oversight recommended that the legislation be passed and referred the bill to the House Committee on Ways and Means.

In addition, within the past week, California appointed its first director of the newly established California Office of Digital Innovation, which will focus on developing applications for members of the public to use when interacting with the state. The California Office of Enterprise Technology Solutions will continue to focus on implementing technology solutions across state government. The activities of these two state offices may be relevant to AI, in addition to the activities undertaken by the California state legislature.

For additional insights relating to artificial intelligence, visit Covington’s Artificial Intelligence Toolkit.