AI Systems Are 'Human Systems,' Says Expert
Having humans understand and stay actively involved in implementing AI systems and tools in businesses can help counter privacy and ethical concerns, tech experts said on a Microsoft webinar Thursday.
“These are human systems,” said Steven Kelts, ethics advisor to the Responsible AI Institute. “We have created artificial intelligence. We are using artificial intelligence.”
Dave Carroll, chief technology officer for Microsoft's courts and justice team, said a “truly strong partnership with AI service providers allows users of the systems to feel much more confident that they understand what is going into that system, what is coming out, and then how they're going to ultimately use the information they gain from it.”
Kelts said this involvement is even more important concerning ethical guidelines and guardrails around the use of AI. “Each organization implementing an AI solution within a team process has to think for themselves about what it is that their organization is committed to,” he said. “There's no plug-and-play set of ethical frameworks. I can't drop a book on you which is going to tell you what [the] ethical use of AI within your organization is. But you can tell me, and you can work with people who are implementing these solutions” to “turn that into a set of rules and guidelines and standards.”
Kelts added that building public trust in AI systems must happen slowly and focus on transparency. Letting the public comment “on the technologies is a way to actually get new information about them,” he said. In exchange for openness around AI systems, the public will share knowledge of their local communities and their safety needs, he said.
“It's cultures interacting with each other: the culture of computing, the culture [of] public safety and the broader social culture interacting with each other,” said Kelts. “That's how we build public trust.”
Carroll agreed. “Ultimately, I think what it comes back to is culture, culture, culture,” he said. This means “building a strong culture of ethics, of compliance, of privacy. Starting with do no harm. We have missions that we want to try to accomplish, but if we start with, ‘do not harm the citizens of the country, make sure that civil rights are being upheld,' you can still effectively use these systems to do what needs to be done in a way that doesn't harm people unintentionally.”