What Uber, Facebook and others did wrong with auditing algorithms
UMSI Professor Christian Sandvig has long studied and spoken against the unintended repercussions of algorithmic decision-making, whereby certain groups of people are discriminated against or overlooked. With a $50,000 National Science Foundation award, he and his fellow investigators, including co-PI UMSI Assistant Professor Casey Pierce, now have the resources to fund a workshop that investigates this phenomenon further.
Their alliterative project, Auditing Algorithms: Adding Accountability to Automated Authority, explores a new research design that has shown promise in diagnosing these unwanted consequences of algorithmic systems. The purpose of the workshop is to bring together researchers who would not normally interact with one another: experts in mathematics, computer science, psychology, and race and gender inequality, among many other fields.
The goal of the workshop is to address the economic and social obstacles to the development and advancement of information and communication technologies (ICT). Sandvig and his team know that more and more large companies – from Facebook to Google to Uber – are relying on this technology. Their intent is to determine where algorithms are failing certain groups of people and to prevent future injustices. Until these issues are corrected, ICT will unfortunately contribute to the proliferation of crime, social inequality, and racial segregation. Even if the discrimination that stems from these automated, social, computational platforms is unintentional, the harm it causes is real.