Voice Recognition Raises Privacy Concerns for Governments and Businesses, Experts Say
The growing use of voice recognition technologies in the public and private sectors is prompting data protection and regulatory concerns, according to the U.K. Biometrics & Forensics Ethics Group (BFEG), an advisory body, and a privacy consultant.
A BFEG briefing note published Wednesday examined technical, ethical and legal aspects of the public sector's use of biometric voice recognition technology. Its findings focused on possible use cases, such as law enforcement surveillance and government user-services teams authenticating logins to a secure portal.
One key takeaway was that use of voice recognition technology is likely to increase, driven partly by the rapid development of AI and machine-learning techniques, the group said. Meanwhile, "government regulation and oversight is currently sparse," and work is needed to ensure that use of voice recognition technology adheres to strict ethical principles.
Ethical concerns include data collection without informed consent, inherent bias in training datasets and an inability to effectively detect spoofing and deepfakes, the BFEG said.
To maintain compliance with the EU and U.K. General Data Protection Regulations and the EU Law Enforcement Directive, voice recognition systems must obtain clear and informed consent from users, with the option to withdraw that consent at any time, the paper said.
All data must be gathered and stored securely, and a privacy notice must be published explaining how the personal data is processed. Humans must have oversight of the entire system.
Among other recommendations, the paper urged the government to ensure that voice recognition models be "demonstrably free" of implicit biases against any section of the population, and that audio recordings that could be used by voice recognition technology be treated as containing biometric data and handled accordingly.
Use cases must be underpinned by an effective approach for spotting and excluding voice samples that aren't genuine, such as deepfakes, to mitigate the risk of digital identity theft, privacy violations and loss of public confidence.
The BFEG also called for an independent regulator to govern voice recognition technology use in all settings.
The issues the briefing note raised "are just as relevant -- if not more so -- to the private sector," said Peter Borner, the Data Privacy Group's chief trust officer, in an email Thursday.
"From what I'm seeing with my clients, voice recognition is quietly becoming mainstream," he said.
Banks and insurers use it to cut call-center fraud, and big retailers are exploring voice authentication for loyalty programs, Borner noted. The global market is mushrooming: Europe's voice recognition market is forecast to hit around $13.5 billion by 2030, he said.
Ethical risks in the private sector can be harder to manage than in the public sector, Borner said.
Consent and transparency are weak spots, he added. Most people don't realize their voice is being turned into a biometric template, and privacy notices rarely explain how that data is secured or how long it will be kept.
Another concern is that companies often buy off-the-shelf solutions with little transparency about the training data or error rates, he said. That's a problem because accuracy and bias vary widely, particularly across accents and gender.
In addition, "with deepfake tools now good enough to clone a voice from a few seconds of audio, companies need robust anti-spoofing measures and clear incident response plans -- otherwise we're heading for a wave of voice-based identity theft," Borner said.
The EU AI Act will be the big driver of change, Borner said. Voice systems used for ID will be treated as high-risk AI, meaning providers will have to prove accuracy, manage bias and keep humans in the loop.
Companies that get ahead of the curve will build trust; those that don't risk reputational fallout, Borner added.