Privacy Daily is a service of Warren Communications News.
'Novel' AI Privacy Approach

Canadian DPA Mulls Age Assurance, AI Privacy Strategies

Priorities for Canada's Office of the Privacy Commissioner include addressing the privacy impacts of fast-moving technological developments and ensuring that children's privacy is protected, the OPC said in a Friday report setting out the results of a consultation on age assurance.

The OPC also funded Vancouver Island University research, posted Friday on Cornell University's arXiv website, about ensuring that privacy protections evolve alongside emerging AI technologies and youth-centered digital interactions.

The OPC's June-September 2024 consultation on age assurance technologies and their privacy implications identified six themes, the office said.

One was the need to differentiate between forms and uses of age assurance. Respondents pointed out that age assurance isn't monolithic or single-purpose and that it doesn't always take the form of an access gate. The OPC agreed that privacy considerations might differ depending on the type of assurance used and the purpose of its use, and said it would make such differentiations where appropriate.

A second theme was that the impacts associated with the use or misuse of age assurance shouldn't be underestimated, the OPC said. It acknowledged that while the harms age assurance seeks to mitigate are significant, including behavior such as online grooming and extortion, the harms caused by lack of access are also important. The OPC said it would "maintain our priority focus" in this area.

The third theme was that age assurance isn't the goal but a way to achieve the objective of keeping young people safer online, a position with which the regulator agreed. Respondents argued that assurance doesn't have to be perfect to work, and that it's just one tool among many to help keep children safe.

Theme four was that the OPC should consider who should be responsible for age assurance. Options include implementing age assurance at the device, individual website or online service, or app store level. There were differences of opinion on what role stakeholders -- parents, websites/online services and service providers -- should play. The OPC said it would further explore the potential responsibilities of various players.

The fifth theme was that "age estimation deserves special caution -- or could be preferable to age verification." The OPC said it continues to believe that age estimation can be implemented in a privacy-protective and accurate way, but that there are sensitivities connected to its use. It's planning guidance on whether age estimation should be limited to certain situations and what privacy safeguards should be built into such systems.

The final theme was that age assurance should be subject to a risk-based assessment. The OPC said that while its initial position was that age-assurance systems should be restricted to situations that pose a high risk to the best interests of youngsters, it would now take a more nuanced view. The regulator said it would draft guidance on when age assurance should be used and on how to design age assurance techniques.

Protecting children in the digital world was also a focus of the Vancouver Island University study on privacy ethics alignment in AI.

The increasing integration of AI in digital systems has "reshaped privacy dynamics, particularly for young digital citizens navigating data-driven environments," the authors wrote. The study examined evolving privacy concerns among three groups: (1) digital citizens (ages 16-19), (2) parents and educators, and (3) AI professionals. It assessed differences in data ownership, trust, transparency, parental mediation, education and risk-benefit perceptions.

Researchers analyzed input from 482 participants via surveys, interviews and focus groups. They found that young users stress autonomy and digital freedom; parents/educators want regulatory oversight; and AI professionals prioritize the balance between ethical system design and technological efficiency.

The data revealed gaps in AI literacy and transparency, highlighting the need for comprehensive, stakeholder-driven frameworks that accommodate the needs of diverse users, the authors said.

The study proposed a "novel" privacy-ethics alignment framework for AI, in which privacy decision-making would be a dynamic negotiation among stakeholders. Such a model would provide a roadmap for aligning privacy expectations with practical governance strategies, the authors said.