Jacobs: Oversight is needed for government use of artificial intelligence
In a special series on AI and procurement for The Regulatory Review, assistant professor Abigail Jacobs outlines how the lack of publicly available data on artificial intelligence systems can ultimately affect the public.
Governments are increasingly using artificial intelligence (AI) to deliver public services and to manage people and processes. However, the procurement of these AI technologies often leaves out public input on key features such as training data, model design, validation, and decision thresholds.
“AI systems can be opaque, making it difficult to fully understand the logic and processes underlying an output,” says Jacobs, who is an expert in inequality in sociotechnical systems. She says the hidden process of acquiring and using AI systems has implications for individual decisions and policy.
Jacobs argues for review processes, along with development and design tools, to help support fairness in AI systems. “Whether an agency is developing the AI system or procuring it, there are a range of methods for bringing the knowledge of outside experts and the general public into the deliberation about system design,” she says.
“Without new approaches, the introduction of AI systems will inappropriately deny and award benefits and services to the public, diminish confidence in governments’ ability to use technical tools appropriately, and ultimately undermine the legitimacy of agencies and the market for AI tools more broadly.”
Read “The Hidden Governance in AI” on theregreview.org.
Abigail Jacobs is an assistant professor at the University of Michigan School of Information. Her research focuses on structure, governance, and inequality in sociotechnical systems. Learn more about assistant professor Jacobs.