
Andalibi: Should AI technologies be used to detect emotions?

"Quoted by Protocol, Assistant professor Nazanin Andalibi, 'Why emotion AI sets off warning bells, but sentiment analysis does not.'" Headshot of Nazanin Andalibi.

Thursday, 05/05/2022

Artificial intelligence (AI) systems that analyze facial data can help researchers and businesses understand how people feel. These systems use cues such as facial expressions, voice patterns and body language to gauge emotions, but researchers point to problems and biases: people do not all express emotion in the same way, so sentiment can be misread.

Despite these shortcomings, companies like Zoom are exploring emotion AI technologies. The company recently announced new features geared toward analyzing customer sentiment during sales and business meetings.

University of Michigan School of Information assistant professor Nazanin Andalibi, an expert in AI and facial recognition, was recently quoted by Protocol in an article about controversies swirling around facial and emotion recognition software.

Much of the discourse has centered on the misuse of facial recognition and AI, which Andalibi agrees is “problematic” and “terrible.” But those aren’t the only red flags for her.

“One reason I am concerned about just focusing on problems with the face or voice is that this may support stakeholders — like those purchasing and deploying these technologies, [such as] regulators, technologists or other actors — to move away from the collection of facial or voice data and simply shift to other sensitive data types without truly addressing their fundamental harmful implications, even if and when there are no bias, validity or accuracy concerns.”


RELATED:

Read “Why emotion AI sets off warning bells, but sentiment analysis does not” on protocol.com.

Learn more about assistant professor Nazanin Andalibi