University of Michigan School of Information

Unveiling the future: UMSI experts share insights on AI in 2024

[Image: A giant "AI" in blue, surrounded by the phrases "fairness and equity," "privacy and security," "regulatory landscape," "harm mitigation," and "safety and security" in yellow, orange and blue.]

Monday, 01/29/2024

2023 was a big year for artificial intelligence. It underwent remarkable advancement, reshaped the way we interact with technology and transformed various industries: education, healthcare, manufacturing, policing and more. 

In October, the Biden administration issued a comprehensive executive order, establishing new standards for AI safety and security. While the order encourages legislation, it cannot implement or enforce regulations in the rapidly evolving field of AI.  

Meanwhile, the European Union (EU) is attempting to keep pace. In December, the EU reached a political agreement on the EU AI Act, which will regulate artificial intelligence and impose transparency requirements on governments and private companies. How effective these laws are won't be known until they come into effect, six months after they are voted on. 

In 2024, everyday citizens can expect AI to be increasingly used in schools, policing, immigration practices and jobs. It will continue expanding and changing how we interact with technology and systems. However, ethical considerations and the responsible use of AI will be paramount to ensuring fairness and equity, researchers warn. 

Here, experts at the University of Michigan School of Information lend their thoughts on the future of AI, harm mitigation and where they'd like to see the landscape changing. 


The Executive Order: A Catalyst for Change?

The Biden administration's executive order established new standards for AI safety and security, acknowledging the need for regulation. However, its effectiveness hinges on subsequent legislation, and questions linger about who will shape the regulatory landscape.

UMSI assistant professor Matthew Bui is an expert on data justice, activism, race and technology. Bui questions the efficacy of placing the onus of accountability on companies and warns against the potential concentration of power in the hands of a few.

“Tech companies in the U.S. have a lot of power in decision-making,” he says. “The executive order opens up discussions about responsible innovation and potential solutions, but the power remains in the hands of companies. I think technology is overrepresented in our discussions versus civil society, which often has a limited seat at the table.” 

In 2024, Bui says he’d like to see the government continue taking on more of a role in ensuring safety. 

“I think the U.S. government has a role in advocating for communities, especially minoritized communities of color. They should center the communities most impacted by digital harms and do all they can to reduce these harms.” 

Merve Hickok, an expert in AI ethics, policy and governance, agrees. A lecturer at UMSI, Hickok stresses the importance of elevating the needs of people and communities rather than focusing on helping companies churn out technologies that open opportunities for misuse. 

“The ideal scenario is we prioritize the interests of people and communities over commercial interest and power. Instead of AI systems and surveillance being imposed on people, society should be able to imagine and determine what would be the most beneficial uses of these technologies,” she says. “We need to make sure we center civil rights first, have risk management and auditing mechanisms in place. Before you put your product on the market, what guardrails do you need to hit to prevent discrimination and ensure safety? Companies are shaping the priorities and they’re also trying to delay the accountability process.” 


Moving Beyond the Executive Order

UMSI assistant professor Abigail Jacobs says that while the executive order issued by the Biden Administration is big, it doesn’t set laws and can only direct agencies.

“What’s concerning right now is a lot of people are treading in existential nightmare scenarios. This is disempowering and strategic,” she says. “The executive order is comprehensive in that it focuses on different ways AI can have impacts on society, including worker rights, tenant rights and the criminal justice system. This is a start, but it will have to take those impacts seriously.” 

Researchers are making three requests to mitigate the harms of AI and create a more equitable landscape in the future: more data access for researchers; improved relationships among researchers, lawmakers and companies; and a broader analysis of the decisions behind algorithmic use. 

UMSI associate professor Libby Hemphill has been pushing for more data access for researchers. 

Under Elon Musk, X, formerly Twitter, closed the books on researchers. This effectively eliminated access to a plethora of data researchers needed to analyze the algorithms used by social media platforms and study the impacts of these networks on society. 

In 2024, Hemphill says she’s looking forward to more transparency and a seat at the table. 

“We need safe harbors for researchers to do algorithmic audits,” she says. “For AI data and social media data, I’m interested in ways that faculty can be useful to policymakers as they try to figure out what data protections and data access regulations look like. The people who control access to this data right now like internet service providers, social media platforms, big contract networks, it’s not in their financial interest for the rest of us to have any control over that data. And they have enough money to lobby effectively for the status quo.” 

UMSI assistant professor Ben Green is an expert on algorithmic fairness, human-algorithm interaction, and AI regulation. Green says it is essential that society take a step back and consider where AI is needed and, more importantly, where it would create more harm than good. He argues that “policymakers focus on the technical details of the AI systems, but don’t consider whether those systems are worth building.”

“We have to look at the broader decisions and think less of technical details and safety, and more at why these algorithms are being used in certain sectors in the first place,” he says. “Many algorithms that are out there are harmful, such as those used in policing and child welfare.” 

Are we doomed? UMSI experts say “not yet.” But, there is much work to do in the coming year, and more questions than answers to consider. Though the future of AI remains uncertain, Hemphill says it can be shaped responsibly with careful consideration, ethical governance and collaborative efforts. 

“We’re not doomed,” Hemphill says. “We’re just in a hot spot right now. The road will be bumpy because AI is big and hard and scary. And decision-making is big and hard and scary. But that doesn't mean we can’t do it. We have faced big technical challenges as a society before, and we have managed to find ways to make sense of them.”

RELATED

UMSI assistant professor Matthew Bui is an expert on data justice, activism, race and technology. His research has been instrumental in understanding how entrepreneurs of color navigate digital platforms such as Yelp, Instagram and Facebook for their businesses, as well as how community organizations engage with data to draw attention to issues of racial justice. 

UMSI assistant professor Ben Green is an expert on algorithmic fairness, human-algorithm interaction, and AI regulation. Through his research, Green works to support design and governance practices that prevent algorithmic harms and advance social justice. His book, The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future, was published in 2019. He is currently working on his second book. 

UMSI lecturer Merve Hickok is the president and research director at the Center for AI and Digital Policy and the founder of AIethicist.org. She has been advocating for better AI practices and regulation for more than a decade. She is also an affiliate at the University of Michigan, Gerald R. Ford School of Public Policy and the Michigan Institute for Data Science (MIDAS). 

UMSI assistant professor Abigail Jacobs is an expert on how assumptions in AI systems are an important site of governance in AI. She has been working closely with the National Institute of Standards and Technology (NIST), the Institute for Trustworthy AI in Law and Society (TRAILS), and the Northwestern Center for Advancing Safety of Machine Intelligence (CASMI) to lead workshops on the impacts of AI and on measuring and mitigating sociotechnical harms. She is jointly appointed in Complex Systems and is an affiliate with the Center for Ethics, Society, and Computing at U-M and the Michigan Institute for Data Science (MIDAS). 

UMSI associate professor Libby Hemphill is an expert on social media, civic engagement and political communication. 
— Noor Hindi, UMSI public relations specialist