As the policy debate over government oversight of artificial intelligence evolves, public procurement regulations have emerged as a potential entry point for regulation. Earlier this year, the White House issued an Executive Order on AI directing the National Institute of Standards and Technology to develop a guide to federal engagement on AI technical standards. While the federal government's actions have understandably garnered significant attention, state and local governments are also undertaking preliminary efforts to shape the technical standards for AI procured and used by their agencies.

New York City’s Automated Decision Systems Task Force

NYC's municipal government began exploring AI trustworthiness in 2017, when the City Council voted unanimously to study the use of algorithms by city agencies. The Council recognized that automated decision systems may help agencies make decisions more efficiently and effectively, but may also reflect hidden biases and flawed data. In May 2018, NYC created the Automated Decision Systems Task Force, which comprises representatives from city agencies and offices as well as from the private sector, nonprofit, advocacy, and research communities. The Task Force is charged with producing a report with recommendations on an array of topics, including how information regarding automated decision systems will be made public and what procedures will be available to persons harmed by a city agency's automated decision systems.

The deadline for the Task Force's report is December 2019, but there are concerns that this deadline will not be met. During an April 2019 hearing before the City Council, the Task Force co-chair reported slow progress in defining the scope of the Task Force's mandate, and non-governmental representatives on the Task Force noted that a lack of cooperation and information sharing from city agencies had stymied meaningful progress on the report. Some representatives also raised concerns that the Task Force was falling far short of its transparency obligations.

Vermont’s Artificial Intelligence Task Force

Vermont launched a task force in 2018 with the mandate "to investigate the field of artificial intelligence in the State and make recommendations on the responsible growth of Vermont's emerging technology markets, the use of artificial intelligence in State government, and State regulation of the artificial intelligence field." The Vermont Task Force is charged with producing a report that summarizes the state's current use of AI and makes recommendations on (i) a definition of AI, (ii) state regulation of AI, (iii) a plan for the ethical and responsible development of AI, and (iv) whether the state should establish a permanent commission to study AI.

While the initial deadline for the report was June 30, 2019, the Vermont Task Force requested an extension to September 30, 2019, to allow for further public engagement and additional deliberation on its recommendations. It is unclear whether a formal extension was granted, but the Vermont Task Force has yet to publish a final report.

Other AI Initiatives Around the Country

While New York City and Vermont have taken the lead, more state and local governments are likely to follow suit. In May 2019, Alabama established the Commission on Artificial Intelligence and Associated Technologies to review "all aspects of the growth of artificial intelligence and associated technology in the state and the use of artificial intelligence in governance, healthcare, education, environment, transportation, and industries of the future…" Unlike the New York City and Vermont initiatives, which focus primarily on the use of AI in government systems, the Alabama Commission will evaluate AI issues more broadly, looking beyond government to industries such as "autonomous cars, industrial robots, [and] algorithms for disease diagnosis." The Commission is due to produce a report by May 2020.

Last August, the California legislature endorsed the Asilomar AI Principles. Like the Alabama Commission, the California legislature focused not specifically on government use of AI but on AI development more broadly.

These preliminary efforts by state and local governments evidence the substantial interest that public procurement of AI, and AI development generally, has generated at all levels of government. Broadly speaking, public procurement regimes at the federal, state, and local levels follow the same guiding principles, but it is not uncommon for state and local governments to depart from their federal counterparts on specific issues, as evidenced by several states (such as Vermont and Oregon) mandating net neutrality from their internet service providers. Consequently, the actions taken by state and local governments in the area of AI and automated decision systems bear monitoring.