Merve Hickok: Artificial intelligence has the power to undermine human rights. Congress must establish safeguards.
Artificial intelligence is all over the news, creating confusion, laughable memes and fear. From Pope Francis wearing a puffer jacket, to a chatbot declaring its love for a New York Times columnist, things are getting messy.
But beyond the noise, many are wondering: When can we talk about AI safeguards and regulation? For Merve Hickok, an intermittent lecturer at the University of Michigan School of Information, the time is now.
“AI systems can undermine every single human right we have globally agreed on,” Hickok says. “Are we amplifying biases, or narrowing them? Are we deepening structural inequities and injustices? Are we taking away presumptions of innocence?”
For years, Hickok has been directing these questions at federal regulators, state governments and global entities. In March, she testified in front of Congress, arguing for stricter regulation.
“The U.S. does not have the guardrails in place, the laws that we need, the public education, or the expertise in the government to manage the consequences of these rapid technological changes,” she stated.
Hickok is currently the president and research director of the Center for AI and Digital Policy (CAIDP), where she and her team focus on three pillars: AI policy education, advisory and advocacy.
The Center’s latest advocacy work is an official complaint to the Federal Trade Commission, asking the FTC to halt further commercial deployment of GPT products, open an investigation into OpenAI and start a rulemaking process to regulate similar generative systems. In its 47-page complaint, submitted on March 30, 2023, CAIDP raised concerns about privacy, bias, public safety, children’s safety, cybersecurity, consumer protection, and deceptive and unfair trade practices.
“We come in as expert advisors on how to develop AI policies and regulations to protect fundamental rights, democratic values and rule of law,” she says. “Without safeguards, public and private actors can deploy AI systems which significantly undermine human rights, such as your rights to expression, rights to association, rights to self-identity and rights to privacy, to name a few.”
CAIDP is currently working with the Council of Europe to draft an international convention on AI, with policymakers to finalize the EU AI Act, and with organizations such as the OECD, UNESCO and federal entities to implement major policy frameworks. On the education front, CAIDP provides semester-long AI policy clinic training to researchers, policymakers and civil society advocates in more than 60 countries.
The risks of AI technologies are tangible. These algorithms are increasingly being used to make high-stakes decisions regarding labor and employment, criminal justice, healthcare, housing, access to credit and education.
The goal? Challenging algorithmic decision-making and outcomes, preserving human dignity and the right to “not be turned into a data point and quantified.”
“For example, let’s talk about mortgage applications or home valuations,” she posits. “If you are a Black or brown homeowner, your house may be valued less because the data could include the systemic historical injustices of housing or redlining, or you might be rejected credit or end up with higher interest rates because the algorithm deems you not creditworthy.
“Take another example: car insurance. If you are a woman, it might reject you or demand higher premium rates because it associates women with higher risk.”
For everything from college recruitment processes to housing to services for people with disabilities, people are subject to these decisions — knowingly or unknowingly — every day. Still, the technology is moving faster than the regulations, leaving consumers and historically marginalized populations vulnerable.
“We need to ask if this technology is beneficial to society,” she says. “What we do when we talk to policymakers is get them to understand the hype, what these technologies can do, and more importantly, what they cannot do.”
For everyday citizens, Hickok argues it’s critical to reclaim our agency and not fall into the trap of believing “the ship has sailed” on regulation.
“We can make changes,” she says. “These companies need the public's trust. We need to be clear about what kind of future we want and demand it from corporations and policymakers.
“One action we can take is to continue demanding policies from elected representatives. As citizens and consumers, we can submit complaints to the Consumer Financial Protection Bureau, the Federal Trade Commission, the Federal Communications Commission, the Department of Housing and Urban Development and the Equal Employment Opportunity Commission. They are there to protect your civil rights.”