Dutch DPA: AI Emotion Recognition Raises Risks and Ethical Concerns
Using AI to recognize emotions is "dubious and risky," the Dutch Data Protection Authority (AP) said Tuesday.
Though organizations are increasingly using AI to recognize human emotions, emotion recognition rests on shaky assumptions about what emotions are and whether they can be measured, the AP said in a report. Moreover, the EU AI Act bans the use of AI for emotion recognition in workplaces and educational settings.
Nonetheless, companies use AI to analyze people's emotional state during customer service conversations. Smartwatches measure whether the wearer is stressed, and chatbots recognize emotions and respond with empathy. While businesses believe AI will improve their efficiency, products or customer service, AP's research showed it's unclear how AI systems recognize emotions and whether the outcomes are reliable.
For instance, many AI systems that claim to recognize emotions are built on controversial assumptions, the AP said. One consequence is that biometric characteristics, such as voice and facial expressions, are translated indiscriminately into emotions, the AP said.
In addition, emotions are experienced differently, and it's inaccurate to assume that biometrics can measure them, the AP said, pointing to, among other factors, cultural and individual variations.
Various voice recognition applications will soon be covered by specific AI regulations and are already subject to privacy laws such as the General Data Protection Regulation, the authority noted.
A major ethical question, AP said, is whether the technology is desirable at all and, if so, for what purposes.