University of Michigan School of Information
Andalibi: Emotion recognition can pose harms to users, workers

Thursday, 02/06/2025
By Noor Hindi

Emotion recognition technology is increasingly integrated into everyday software and systems, from video call platforms analyzing people’s moods, to tools assessing student distraction in schools or monitoring drivers’ anger or impairment inside cars.
Emotion recognition uses artificial intelligence to analyze facial expressions, vocal tones, physiological data, or behavioral cues to infer an individual’s emotional state. As the technology grows more prevalent, University of Michigan School of Information assistant professor Nazanin Andalibi, an expert on emotion AI, warns that these systems raise serious privacy concerns because of how they collect, interpret and use emotion-related data.
“There’s a privacy harm, and in addition to this, if the platform knows, or thinks it knows, that you have depression, or you suffer from anxiety, or you’re in a vulnerable emotional state, what might that mean within this surveillance, capitalistic system that we exist in?” Andalibi asks. “And one possible outcome could be receiving advertisements for products or services that may be unhelpful.”
RELATED
Listen to “Emotion Recognition and What Nazanin Andalibi’s Research Tells Us about its Impacts” on World Privacy Forum through Spotify or Apple Podcasts.
Recently named one of 100 Brilliant Women in AI Ethics, Andalibi researches how artificial intelligence technologies impact vulnerable people. Learn more about her work by visiting her UMSI faculty profile.