European Regulators Weigh Federated Learning for Data Protection in Training AI
The Spanish Data Protection Agency and the European Data Protection Supervisor on Tuesday announced a report analyzing how federated learning, a decentralized training technique, can improve privacy when data is collected and used to train AI models.
Privacy Daily provides accurate coverage of newsworthy developments in data protection legislation, regulation, litigation, and enforcement for privacy professionals responsible for ensuring effective organizational data privacy compliance.
Federated learning uses decentralized data to train AI models. Only results from local data are shared, without sending original data to a central server. This helps mitigate privacy risks, particularly in use cases where AI is applied in the health sector.
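The pattern the report describes can be sketched in a few lines. This is a minimal illustration, not any regulator-endorsed implementation: it assumes a toy "model" (estimating a mean) and hypothetical clients, so that only each client's local result, never its raw data, reaches the aggregation step.

```python
def local_update(client_data):
    # Each client computes a result (here, the mean) from its own data.
    # The raw data stays on the client.
    return sum(client_data) / len(client_data)

def federated_average(client_datasets):
    # The server sees only the local results, weighted by each
    # client's dataset size, and combines them into a global result.
    total = sum(len(d) for d in client_datasets)
    return sum(local_update(d) * len(d) for d in client_datasets) / total

# Hypothetical example: three hospitals each hold patient values locally.
clients = [[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]
print(federated_average(clients))  # weighted global mean: 3.5
```

Real federated learning systems exchange model parameter updates rather than simple averages, but the privacy property is the same: original records never leave the local environment.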
In addition, federated learning aligns with data-protection principles such as data minimization and purpose limitation by ensuring that information remains under the data controller's control and is not exposed to third parties, the watchdogs' report said.
Federated learning is not a panacea, however, and the report highlighted challenges. These include the need for comprehensive security throughout the federated learning ecosystem and ensuring data quality to avoid bias. The report also stressed the importance of making data protection a priority at the design stage of any AI project.