Machine Unlearning a 'Partial Remedy' for AI's Privacy Complexities, Says Paper

Machine unlearning is no “panacea” for the complexities that generative AI raises for protecting an individual’s right to be forgotten, said a paper published Wednesday in the Columbia Science and Technology Law Review. Calling for “a nuanced regulatory approach,” the article finds that the unlearning technique “can meaningfully bolster privacy governance when it is treated as a partial remedy layered alongside data minimization, purpose limitation, differential privacy, and rigorous oversight.”

“Models ingest and process vast amounts of personal and sensitive data, challenging assurances of compliance with legal frameworks like the [GDPR] and the California Consumer Privacy Act (CCPA) with increasing intensity,” said the paper by Jevan Hutson, privacy counsel for Grindr and a professor at the University of Washington School of Law, and two others. “Machine unlearning is an emerging tool in practitioners’ attempts to address these challenges: the act of selectively removing or suppressing specific data, such as personal data that a data subject requests be deleted, from AI models as a means of complying with legal obligations or policy goals,” the paper said.
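The paper does not endorse a particular technique, but the basic distinction it describes, between deleting a stored record and removing that record’s influence on a trained model, can be sketched in code. The Python snippet below is an illustrative assumption rather than anything drawn from the paper: it shows “exact” unlearning, in which the model is retrained from scratch on the remaining data after a record is deleted. Practical approximate-unlearning methods aim to reach a similar state without the cost of full retraining.

```python
# Illustrative sketch only (not from the paper): contrast deleting a
# stored record with removing its influence from a trained model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical training set standing in for personal data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# A data subject requests deletion of record 42.
to_forget = 42

# Deleting the stored record itself is straightforward...
X_remaining = np.delete(X, to_forget, axis=0)
y_remaining = np.delete(y, to_forget, axis=0)

# ...but the trained model still reflects that record. "Exact"
# unlearning retrains on the remaining data only.
unlearned = LogisticRegression(max_iter=1000).fit(X_remaining, y_remaining)

# The parameter difference shows the deleted record left a trace in
# the original model that retraining removes.
print(np.abs(model.coef_ - unlearned.coef_).max())
```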

The GDPR’s “‘right to be forgotten’ and emerging U.S. data-deletion rights … reflect the idea that individuals should not be indefinitely defined by digital traces beyond their control,” the paper said. “Yet the tools available to regulators -- delete the record, shred the disk, empty the recycle bin -- presume data sits in tidy rows, ready to be vacuumed away. Modern AI is messier: once personal data is baked into billions of parameters, deletion feels less like hitting the backspace key and more like trying to remove one drop of paint from an entire mural.”

While certain machine unlearning methods “offer strong theoretical guarantees, practical limitations remain significant,” it said. “This is particularly true in terms of scalability, computational costs, and robustness against adversarial attacks.”

“Conceptual tensions also emerge,” it said. “Privacy laws were traditionally crafted with discrete databases in mind; compliant data deletion meant straightforward removal. However, generative AI models do not simply store data; they generalize from it … Unlearning may leave latent traces: residual patterns, correlations, or representational artifacts that continue to shape a model’s outputs or enable reidentification of the affected individual.”