Privacy Daily is a service of Warren Communications News.
‘Context and Nuance’

Industry Asks OSTP to Defend AI-Related Data Collection

The Trump administration should protect industry from regulations that restrict data-collection activity needed to deliver innovative AI services, tech associations said in comments due Saturday.


The National Science Foundation requested comments on President Donald Trump’s AI Action Plan on behalf of the White House Office of Science and Technology Policy. The administration initiated the action plan after Trump rescinded the Biden administration’s AI executive order.

The Computer & Communications Industry Association (CCIA), NetChoice, the Software & Information Industry Association (SIIA) and the Business Software Alliance urged the Trump administration to protect industry's large language models against regulations that could prevent AI systems from operating at full capacity.

AI models often rely heavily on “diverse datasets” that include personal information, said CCIA: “This data is essential for understanding context and nuances in language usage by different types of entities, ranging from individuals to small businesses to large corporations, enabling tasks such as sentiment analysis.” Eliminating or masking personal data from training models “can impair a model’s ability to understand language effectively and deliver services the public expects.”

Mozilla urged the administration to ensure large tech companies can't unfairly dominate this era of AI innovation, warning that such market concentration could threaten individual privacy rights. Creating clear guidelines for consumers and businesses through federal privacy legislation will be critical to enabling AI industry growth and diversity, said Mozilla.

The Center for Democracy & Technology said the action plan should require agencies to set “robust standards” for monitoring open model capabilities and identify “potential public safety and national security risks” instead of “prematurely imposing export restrictions that would undercut American competitiveness and cede AI leadership.” Unleashing AI system potential “requires enabling responsible uses through robust guardrails to protect individuals’ safety, privacy, and civil liberties,” said CDT.

NetChoice said it has drafted federal privacy legislation endorsing “mandatory breach notifications when sensitive personal information is compromised, rights for consumers to access and request deletion of their personal information, and the ability to opt out of third-party data sales.” The association said varying state privacy regulations are causing compliance confusion due to conflicting definitions for personal data, consent requirements and enforcement mechanisms.

SIIA said there have been concerted efforts at the federal and state levels to limit the use of publicly available information with the intention of restricting data broker sales to third parties. While well-intended, the group said, these restrictions in practice can have a negative impact on innovation as well as AI’s role in law enforcement activity. SIIA recommended the administration work with Congress to pass a federal privacy law that preempts state laws on AI safety and security.

Katy Milner, an attorney at Hogan Lovells, said in an interview Monday that she expects states to continue using privacy mandates to regulate how AI systems interact with data. California’s rulemaking on automated decisionmaking technology, for example, is rooted in the state’s privacy authority, she said. It’s a straightforward approach, and AI innovators believe existing regulations can, in many cases, be applied to AI technology, she said.