OpenAI Official Sees Growing Legal Challenges Ahead for AI
Legal challenges around AI are growing with the technology, said an OpenAI official during a Wednesday panel at the IAPP Global Policy Summit in Washington. Meanwhile, an official from Anthropic said the company is emphasizing safety and transparency with Claude, its AI assistant.
“AI is widely deployed, incredibly capable and growing more powerful by the day,” said Rafaela Nicolazzi, data, privacy and consumer protection lead for OpenAI, creator of ChatGPT. “If the legal challenges feel big now, they will only grow” as AI becomes more complex. “One of the core tensions that we see is the need to reconcile the data minimization principles with the reality of living [in a] data-driven world.”
Also, Nicolazzi said she’s “never seen any technology that brings so many disciplines together,” including on copyright and safety. Therefore, it’s important to move “beyond the siloed policymaking discussions” and “think about how regulators and policymakers could come together and think about those solutions holistically,” she said. For example, the U.K., Ireland, the Netherlands and Germany are moving in that direction, she said. “That's even more important in a very complex regulatory ecosystem, where regulatory overlap sometimes can create friction.”
Some well-intentioned technical requirements are difficult to implement, Nicolazzi added. For example, “this idea to forget specific data in the model, that's really a huge technical issue for any provider in the moment.”
“If we get it right or get it wrong,” AI is “going to have tremendous social and societal impacts,” said Ashley Zlatinov, Anthropic's head of product public policy. “Building responsibly and safely is … paramount.”
Zlatinov said Anthropic conducts extensive testing and values transparency. The model is trained “to respect privacy rights.” In addition, Anthropic has a “privacy protecting tool” called Clio “that really looks at aggregate usage of Claude across the board globally … to detect misuse at scale.”