Meta is building a capability called Private Processing for its messaging app WhatsApp, giving users the option of having AI process messages in a secure cloud environment that no one, including Meta and WhatsApp, can access.
While wearable smart-tech devices can benefit the workplace, companies that want to deploy this technology must weigh the privacy and security risks to both workers and workplaces, said lawyers during a Practising Law Institute webinar Wednesday.
Legal challenges around AI are growing with the technology, said an OpenAI official during a Wednesday panel at the IAPP Global Policy Summit in Washington. Meanwhile, an official from Anthropic said the company is emphasizing safety and transparency with Claude, its AI assistant.
OpenAI CEO Sam Altman said Thursday he understands the “very strong reactions” and privacy concerns surrounding his biometric identity company Tools for Humanity (TFH).
Consumer advocates oppose a Connecticut AI bill backed by Gov. Ned Lamont (D), the Electronic Privacy Information Center said Tuesday. EPIC said it raised red flags about SB-1249 in a letter to state legislators on March 28, along with Access Humboldt, Consumer Federation of America and TechEquity.
Ensuring humans understand AI systems and tools, and are actively involved in implementing them in businesses, can help counter privacy and ethical concerns, said tech experts on a Microsoft webinar Thursday.
About half of U.S. workers worry about the future impact of AI use at work, Pew Research Center said in a Tuesday report. Nearly one-third believe AI will lead to fewer job opportunities in the future, found Pew, which surveyed nearly 5,300 employed adults in October. However, most workers (63%) said they don’t use AI much today.
While AI practices continue to raise privacy concerns, privacy laws may create a pathway for AI regulation, said Clark Hill privacy attorney Myriah Jaworski in a Tuesday webinar about the rise of AI liability.
Due to “misperceptions” about its multistate AI policymaker working group, Future of Privacy Forum “will be withdrawing from our work supporting the Working Group,” FPF CEO Jules Polonetsky said in a blog post Tuesday. FPF had convened the bipartisan group of 200 state legislators from more than 45 states to work on AI bills.
AI assessments should be conducted holistically rather than split into separate categories for risk, privacy, cybersecurity or other issues, said chief privacy officers at an International Association of Privacy Professionals webinar Tuesday.