States are passing a wide variety of laws to regulate AI, with some, like Colorado, taking a comprehensive approach and others, like California, targeting specific issues such as discrimination and employment, Vedder Price attorney Michael Kurzer observed Thursday on a panel at the Risk Digital Global virtual conference. Kurzer also said he sees “strong overlap between regulation of privacy and the issues that we're focused on now with AI.”
China is crafting guardrails for AI development and applications and has discussed AI safety issues with the U.S., Lan Xue, a Brookings Institution visiting nonresident fellow, said Thursday at a streamed Forum Global International AI Summit in Brussels.
States' AI regulatory landscape related to privacy is “very fragmented,” and companies are struggling to navigate it, said Simonne Brousseau, a privacy and AI lawyer at Faegre Drinker, at a vCon Foundation conference Wednesday on AI and telecom issues. Brousseau said privacy issues such as data breaches are governed by a patchwork of requirements across the country, each saying somewhat different things. She said AI increasingly faces a similar patchwork, with legions of AI bills proposed in states.
A service for making AI-generated apps said it’s embracing privacy by design through the integration of an AI-powered code scanner.
Regulators enforcing AI laws could be drawn to investigations of companies based on what type of data they collect, regardless of the organization’s size, said Metaverse Law founder Lily Li on a webinar Monday.
Former officials at the FTC and the Consumer Financial Protection Bureau co-authored a guide with the Electronic Privacy Information Center (EPIC) on using existing privacy and consumer protection laws to address AI chatbot harms to minors, EPIC said Monday.
Eleven percent of teens aged 15 to 18 consider lack of privacy and data protection the biggest concern with generative AI, according to a study the Family Online Safety Institute (FOSI) published Monday.
Nearly 75% of health care employees engage in shadow AI use, which accounts for more than 80% of data policy violations, a panelist said Wednesday during a Health Care Compliance Association event. Accordingly, providers should address the issue with staff directly and immediately, and implement measures that reduce the privacy risk of employees' AI tools.
Facial recognition technology (FRT) deployment in stadiums can facilitate safety and security, though its use raises privacy and cybersecurity concerns, said Orrick lawyers in a Monday blog post.
Government intervention can either advance or harm human rights, so governments and companies developing and deploying AI must provide transparency and accountability, experts said Tuesday during a panel at a Center for Democracy & Technology (CDT) event.