Business Sector Faces Uncertainty in Complying With First Phase of EU AI Act
Companies deploying AI systems in ways the EU AI Act bans have until Feb. 2 to stop using them, but exactly how to do that remains unclear, privacy experts told us. The European Commission consulted with stakeholders in December on the practical aspects of compliance and plans to issue guidance ahead of the deadline, an EC spokesperson emailed.
The act forbids: (1) Exploiting vulnerabilities of a natural person or specific group of people based on age, disability or specific social or economic situations in order to distort their behavior in a way that causes them or others harm. (2) Classifying people or groups over a certain time period based on their social behavior or inferred or predicted personal or personality characteristics. (3) Assessing people to predict their risk of committing a crime based solely on profiling.
(4) Using a biometric categorization system to deduce someone's race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, although this doesn't apply to labeling or filtering unlawfully acquired biometric data in the law enforcement sphere. (5) Creating facial recognition databases via untargeted scraping from the internet of facial images from CCTV footage. (6) Deploying "real-time" remote biometric identification systems in publicly accessible space for law enforcement purposes unless it's strictly necessary for one of several stated objectives.
Another prohibition -- against inferring people's emotions in the workplace and educational institutions, except where AI use is intended for medical or safety reasons -- is causing the biggest headaches for business, data protection lawyer Phil Lee of Digiphile said in an email.
It's unclear whether this prohibition applies only to emotion-recognition systems that use biometric data obtained, for example, by scanning people's faces, Lee said. Many companies instead use language-based tools, such as large language models, to detect emotional states among staff, for example by analyzing the language employees use in responses to staff surveys. These language-based models don't rely on biometrics, but it's unclear whether the act as written bans them or whether companies can legally use them.
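For illustration, a text-only sentiment tool of the kind Lee describes might look like the minimal sketch below. The library and model choice are assumptions for the example, not a reference to any vendor or product named in this article; the point is that the input is free text rather than facial scans or other biometric data.

```python
# pip install transformers torch
from transformers import pipeline

# General-purpose text sentiment classifier; the default model is illustrative only.
classifier = pipeline("sentiment-analysis")

survey_responses = [
    "I feel overwhelmed by the new reporting requirements.",
    "The team has been very supportive this quarter.",
]

# Input is free text only -- no facial images, voice recordings or other biometric data.
for response, result in zip(survey_responses, classifier(survey_responses)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {response}")
```

Whether tools like this fall under the workplace emotion-inference ban is exactly the open question Lee raises.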
Another ambiguity arises from the ban on deploying subliminal, manipulative or deceptive techniques to cause people to make decisions they would not otherwise have made, Lee said. Some would argue this covers certain targeted advertising practices, though the act exempts legitimate commercial advertising practices.
Are some companies using banned AI systems? "Short and straightforward answer is yes," emailed Ashley Casovan, managing director of the International Association of Privacy Professionals (IAPP) AI Governance Center.
Some, such as Clearview AI and Palantir, have engaged in prohibited data scraping for use in predictive policing, which is itself also banned, Casovan said. Many applications also use emotion or sentiment recognition to assist with human resources recruiting and retention.
"This is where it starts to get more complicated, as these are often features within an existing product that will no longer be allowed vs the whole product being required to no longer exist," Casovan said. Where it gets even more complicated -- and will likely require additional use cases to come forward so appropriate precedent can be set -- is on the unacceptable categories such as social scoring. "This is a broad scope, which leaves significant room for interpretation."
The exploitation of people's vulnerabilities is another category that will likely be tested in cases where groups or individuals have been harmed by an AI system as part of a process they're engaged in (e.g., use of assistive robotics in the workplace or health care decision-making systems), Casovan said.
Organizations may not be aware that they're subject to the "fuzzier categories" and "clear oversight and enforcement may take some time to catch up," Casovan said. The biggest concern IAPP is hearing from the business community is the lack of clarity about the act's scope, she said. "I have not heard of a company that has refused to comply; however, we hear periodically that there are companies that are threatening not to operate in Europe" or offer certain products there.
Companies facing the banned-systems deadline must first determine whether they're subject to the act, Casovan said. AI developers and deployers should have a team responsible for understanding what the measure requires of their company. They should inventory the AI they're using and for what purposes, classify the risk of each system, and thoroughly document why they rated a system as unacceptable or high risk.
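One way to organize that inventory-and-classification step is a simple internal register of AI systems, their purposes and the documented rationale for each risk rating. The sketch below is a hypothetical illustration of such a record, not a template prescribed by the act or by IAPP; the tier names and fields are assumptions made for the example.

```python
from dataclasses import dataclass, field
from enum import Enum

# Risk tiers loosely mirroring the act's categories; the names are illustrative.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices covered by the Feb. 2 deadline
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str                    # what the system is used for
    uses_biometric_data: bool
    risk_tier: RiskTier
    rationale: str                  # documented reasoning behind the rating

@dataclass
class AIInventory:
    systems: list[AISystemRecord] = field(default_factory=list)

    def add(self, record: AISystemRecord) -> None:
        self.systems.append(record)

    def flagged_for_review(self) -> list[AISystemRecord]:
        # Systems rated unacceptable must stop; high-risk systems face later obligations.
        return [s for s in self.systems
                if s.risk_tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)]

# Example entry: an emotion-recognition feature inside an HR tool.
inventory = AIInventory()
inventory.add(AISystemRecord(
    name="staff-survey-sentiment",
    purpose="Detect emotional tone in employee survey responses",
    uses_biometric_data=False,
    risk_tier=RiskTier.HIGH,
    rationale="Language-based only; treated cautiously pending EC guidance on "
              "whether the workplace emotion-inference ban covers non-biometric tools.",
))
print([s.name for s in inventory.flagged_for_review()])
```

The value of a register like this is less the code than the documented rationale, which is the evidence a company would point to if its classification decisions are later questioned.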
The upcoming EC guidelines "will be a living document that can be updated based on practical experience from the application of the AI Act and further evidence," the EC spokesperson said.