French Regulator Adopts 'Pragmatic Approach' on Interplay Between GDPR and AI Act, Attorneys Say
French data protection regulator CNIL's guidelines on AI models "illustrate a more pragmatic approach than other regulatory positions addressing multiple issues raised by AI," Hogan Lovells privacy attorneys argued in a Feb. 18 post. For example, CNIL's position differs from that of the Garante, Italy's data protection authority (DPA). Meanwhile, DPAs in other EU countries are waiting to see whether the European Data Protection Board (EDPB) can craft a consensus, Etienne Drouard, a co-author of the post, said in an interview Thursday.
CNIL's guidelines recommend that when personal data is used to train an AI model and could be memorized by it, individuals must be informed in advance.
In addition, CNIL recommended that while the EU General Data Protection Regulation (GDPR) gives people the rights to access, rectify and delete their personal data and to object to its processing, AI developers should incorporate privacy protections at the design stage and "pay special attention to personal data within training sets" (see 2502070025).
The French guidelines balance the need for detailed information with practical considerations, particularly concerning indirect data collection and web scraping, the post said. They acknowledge that the exercise of data subjects' rights "must be balanced against operational realities." That flexible stance contrasts with the general approach at the EU level, the article said.
Privacy regulators must prioritize their GDPR concerns about building AI models or regulating AI usage, Drouard said. They can focus on the initial collection of personal data for training, which involves the "original sin" of gathering data without user consent or, at minimum, without prior notice and a right to object. Or they can concentrate on finding solutions for preserving data rights at the output level, once the AI model has been built, he said.
The CNIL has taken the position that legitimate interests (LI) could be the legal basis for collecting personal data for AI model training, an approach the Garante rejected in favor of user consent, Drouard said. The EDPB's Dec. 18 opinion on processing personal data for AI models likewise favors LI and abandons any general requirement of prior consent, he added (see 2412180004).
The EDPB opinion made clear that questions about individuals' rights, pseudonymization, anonymization, limited retention and other issues must be resolved case by case, through further EDPB opinions, Drouard said. That's why the CNIL published its recommendations just before the Feb. 10-11 AI Action Summit in Paris. The recommendations are also a way of showing that the privacy watchdog should be appointed France's AI regulator, he said. More broadly, he added, they offer other DPAs a proposal to inform further EDPB guidance.
The problem with having the CNIL as AI regulator is that the GDPR would take first priority and the AI Act (AIA) second, Drouard said. But the GDPR protects individual privacy rights, while the AIA is concerned with societal and cognitive bias against individuals and groups. Any corrective measures under the AIA would benefit society, not an individual's situation. There's a huge difference, he said, between protecting individuals' privacy (GDPR) and safeguarding societies from discrimination, exclusion, or democratic or geopolitical risks resulting from AI development.
The CNIL is attempting to combine the purposes of the two pieces of legislation, Drouard said. Its position isn't as extreme as the Garante's, he added. CNIL's approach is "better than expected": It's a necessary evolution after 15 years of focusing on consent for processing personal data. AI renders consent useless as a basis for training models; LI, not consent, is the solution for developing AI models, he said.
With the CNIL and the Garante having taken different positions on processing personal data to train AI models, observers are waiting to see what Germany, the Netherlands and Ireland will do, Drouard said. More EDPB guidelines will be issued over the next three years, he said. No other DPA has addressed the issue yet because each is waiting to see how a broader consensus can be found within the EDPB, he added.