Parents should avoid giving AI toys to children younger than five and use “extreme caution” when purchasing them for kids ages six to 12, Common Sense Media said in guidance published Thursday. AI smart toys with voice-based interactions pose risks to children’s privacy, safety and development, the nonprofit's researchers found after conducting a risk assessment.
Machine unlearning is no “panacea” for the complexities generative AI raises for protecting an individual’s right to be forgotten, said a paper published Wednesday in the Columbia Science and Technology Law Review. Calling for “a nuanced regulatory approach,” the article finds that the unlearning technique “can meaningfully bolster privacy governance when it is treated as a partial remedy layered alongside data-minimization, purpose-limitation, differential privacy, and rigorous oversight.”
Though New York’s Responsible AI Safety and Education (Raise) Act is largely identical to California’s AI safety and transparency law, SB-53, there are key distinctions between them, said Justine Gluck, a policy analyst at the Future of Privacy Forum (FPF), in a blog post.
Privacy professionals don’t need a technical background to expand into the field of AI governance, Fitch Ratings data privacy director Azelya Tanriverdi said in an IAPP op-ed Wednesday.
A consumer privacy advocate raised concerns over a new OpenAI health and wellness tool the company unveiled Monday.
Punishing harmful uses of AI and protecting children should top Congress’ list of priorities as it crafts AI policy, attorneys from Andreessen Horowitz said in a post Wednesday.
Mattel is right to delay the release of its first toy developed with OpenAI, child advocates said in a statement Tuesday, raising privacy and safety concerns about the toy.
States are passing a wide variety of laws to regulate AI, with some, like Colorado, taking a comprehensive approach and others, like California, targeting specific issues such as discrimination and employment, Vedder Price attorney Michael Kurzer observed Thursday on a panel at the Risk Digital Global virtual conference. Kurzer also said he sees “strong overlap between regulation of privacy and the issues that we're focused on now with AI.”
China is crafting guardrails for AI development and applications and has spoken with the U.S. about AI safety issues, Lan Xue, a visiting nonresident fellow at the Brookings Institution, said Thursday at a streamed Forum Global International AI Summit in Brussels.
States' AI regulatory landscape related to privacy is “very fragmented,” and companies are struggling to navigate it, Simonne Brousseau, a privacy and AI lawyer at Faegre Drinker, said Wednesday at a vCon Foundation conference on AI and telecom issues. Privacy matters such as data breaches are already governed by a patchwork of somewhat different requirements across the country, Brousseau said, and AI increasingly faces a similar patchwork as state legislatures propose waves of AI bills.