Toyama: AI amplifies the good and the bad in society
As artificial intelligence continues to progress rapidly, multiple industries are experiencing a shake-up. The Writers Guild of America went on strike to protest unfair labor practices and the use of AI technologies to replace human labor and creativity. The education sector is struggling to cope with ChatGPT as students use the tool for homework and college entrance essays.
Are we doomed? Perhaps, but not inevitably.
University of Michigan School of Information professor Kentaro Toyama discusses the “Law of Amplification” in an article for Divided We Fall, a nonprofit news organization that provides bipartisan dialogue on pressing issues.
The Law of Amplification, Toyama says, holds that technology’s impact “is to amplify underlying human forces.”
“Where human forces are positive and capable, technology improves outcomes, but where human forces are negative, indifferent, or dysfunctional, even the best technology doesn’t lead to good results,” Toyama says. “Artificial intelligence is no exception. As we’re already seeing, ChatGPT helps honest writers brainstorm and helps bad students cheat. Deepfake technology can boost special effects in movies and it can generate misleading political content. Automated face recognition helps responsible police departments catch crime suspects and it misleads sloppy officials into arresting innocent people.”
What to do next, Toyama says, depends on how we manage AI through laws, fair contracts and continued discussion of its negative impacts.
“What can be done in response?” Toyama asks. “A corollary to the Law of Amplification is not to seek solutions to problematic technology in technology itself. The problem is less the amplifying technology, as the underlying human forces. Those forces must be changed through law, culture, and social norms.”
Read “AI in Arts and Entertainment: A Double-Edged Sword” on Divided We Fall.
Learn more about Toyama’s research and publications by visiting his UMSI faculty profile.