
This Week in Government Technology – September 1st-8th, 2024

State of California Seeks AI Solutions for Public Sector Problems

California is turning to generative AI (GenAI) to tackle some of its most pressing challenges, including housing, homelessness, and budget analysis. To explore how large language models (LLMs) might assist, the state is hosting a GenAI showcase where vendors will demonstrate cutting-edge capabilities. The initiative follows Governor Gavin Newsom’s executive order and summit examining GenAI’s potential to enhance public services. Vendors will have the opportunity to present AI solutions to key state departments to address inefficiencies and improve service delivery for residents. The first showcase will focus on California’s housing crisis, inviting AI developers to present their technology to the state government for use case evaluation and possible acquisition.

Presidential Advisors Advocate for AI Testing Protocols in Policing

An advisory panel to the president has approved recommendations requiring federal law enforcement agencies to follow standardized protocols when field-testing AI tools. The National AI Advisory Committee’s Law Enforcement Subcommittee proposed a checklist for testing AI that emphasizes transparency and performance measurement. The recommendations aim to establish clear guidelines for AI testing, ensuring tools are safe and effective before broader adoption. The subcommittee also urged agencies to publicize testing results and advocated for additional funding to support state and local law enforcement’s AI testing efforts. These steps mark progress toward responsible AI integration in law enforcement.

Maryland’s Intentional Approach to AI Technology

Maryland officials are taking a cautious approach to AI due to concerns over data security and unpredictable software behavior. While acknowledging AI’s potential to streamline services, officials stress the importance of safeguarding sensitive information. Governor Wes Moore’s executive order recognizes AI’s benefits but calls for strict oversight, and state agencies are closely monitoring AI’s use to prevent unauthorized data sharing. Local leaders are also ensuring AI applications are carefully tested before broader implementation to protect public trust in government operations.

Civil Rights Groups Challenge DHS’s Use of AI in Immigration Enforcement

A coalition of over 140 immigrant and civil rights organizations has sent a letter to Secretary of Homeland Security Alejandro Mayorkas, raising concerns about the Department of Homeland Security’s (DHS) use of artificial intelligence. The letter calls for DHS to suspend certain AI systems, particularly those used by Customs and Border Protection and Immigration and Customs Enforcement, citing violations of federal policies on responsible AI use. As agencies face upcoming deadlines related to AI use case inventories, the coalition argues that AI tools deployed for immigration enforcement lack transparency and may perpetuate bias and discrimination.

A Comprehensive Guide to Data Policy for AI Development

The Data Foundation has released a guide aimed at helping policymakers navigate the challenges of data policy in the context of artificial intelligence. Because AI systems rely heavily on vast amounts of data, the guide emphasizes the importance of high-quality, responsibly governed data. It highlights key components of sound data practices, including data integrity, privacy protections, transparency, and technical infrastructure. The guide also addresses the evolving AI landscape, urging policymakers to develop comprehensive approaches that ensure AI data use aligns with public interests, ethical standards, and democratic values.