University of Michigan School of Information
Hickok: Tech companies ‘nowhere near’ developing safe and trustworthy AI
Wednesday, 07/24/2024
One year ago, seven leading AI companies agreed to eight voluntary commitments for developing safe and trustworthy AI.
Though the commitments are a good first step, they remain voluntary and unenforceable, leaving open risks to AI safety, accountability and transparency.
University of Michigan School of Information lecturer Merve Hickok, an expert on AI, democracy and social justice, says the U.S. needs to impose regulation to protect the rights of its citizens.
“One year on, we see some good practices towards their own products, but [they’re] nowhere near where we need them to be in terms of good governance or protection of rights at large,” Hickok says.
RELATED
Read “AI companies promised to self-regulate one year ago. What’s changed?” on MIT Technology Review.
Learn more about UMSI lecturer Merve Hickok by visiting her faculty profile.
— Noor Hindi, UMSI public relations specialist