AI is top of mind for leaders. Execs discuss the risks

When it comes to AI, leaders have a love-hate relationship: the technology is both promising and scary.

At Fortune's Most Powerful Women Summit in Laguna Niguel, Calif., executives from PwC, Google DeepMind, Slack, and IBM addressed the risks and potential rewards of AI for leaders and companies.

Slack CEO Lidiane Jones said she is optimistic about how AI will improve worker productivity by automating mundane tasks, opening up more space for creativity, and easing burnout. She said the AI revolution is really a “productivity revolution.”

Kate Woolley, general manager of the IBM Ecosystem, seemed to echo Jones, adding that AI provides an opportunity to accelerate innovation.

“I believe in the transformational power that AI can have in the future, and the world that it can enable for my daughters and their generation,” Google DeepMind’s chief operating officer, Lila Ibrahim, said. 

Mitra Best, partner and technology impact officer at PwC, stressed that we’re at an inflection point, one that most hadn’t realized would come this year. Still, AI has been around for a long time, and companies have been using it for years. It’s generative AI that’s a “game changer” because of how it redefines work and permeates every aspect of our lives, she said.

“It’s so powerful…it could change the future in a good way or in a bad way,” Best said. 

Given that, it’s important that AI be used to enhance the human experience, not replace it, Best added. It’s not yet clear how this technological transformation, led by AI, will play out, or how companies will mitigate the risks involved.

“Bias, misinformation, your risk of widening the digital divide and the erosion of trust,” Best said. “Those are the risks that we want to avoid.”

Protection from AI risks

Ibrahim later said there are three major risks associated with AI: misinformation and bias, misuse by bad actors, and long-term risks. Companies need to work together, she said, as they do through the Frontier Model Forum, where leading AI firms develop best practices and meet with governmental entities.

It’s crucial that they bring civil society into the collaboration, so that companies don’t “further propagate some of the biases,” Ibrahim said.

Best’s team developed a tool called Bias Analyzer, which takes the output of decision-making algorithms and models, benchmarks it against a library of expected results, and flags potential areas of bias, she explained.
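
Best didn’t describe how the tool works under the hood; the sketch below is a hypothetical Python illustration of the general approach she outlines: take a model’s decisions, benchmark each group’s positive-outcome rate against an expected rate, and flag large deviations as potential areas of bias. The function name, threshold, and sample data are assumptions for illustration, not details of PwC’s tool.

```python
from collections import defaultdict

def flag_bias(outcomes, benchmark_rate, tolerance=0.10):
    """Flag groups whose positive-outcome rate deviates from the
    benchmark by more than `tolerance` (an assumed threshold)."""
    totals = defaultdict(int)        # decisions seen per group
    positives = defaultdict(int)     # positive decisions per group
    for group, outcome in outcomes:  # outcome: 1 = approved, 0 = denied
        totals[group] += 1
        positives[group] += outcome

    flags = {}
    for group in totals:
        rate = positives[group] / totals[group]
        if abs(rate - benchmark_rate) > tolerance:
            flags[group] = rate      # potential area of bias
    return flags

# Illustrative model output: (group, decision) pairs.
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

print(flag_bias(decisions, benchmark_rate=0.50))
# -> {'A': 0.75, 'B': 0.25}: both groups deviate from the 50% benchmark
```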

And although Woolley said we have to be careful when regulating AI, she stressed that companies should be held accountable for the technology they’re creating.

“Because I think when companies are accountable for what they’re creating, they’re very likely to create the right thing,” she said.

Toward the end of the panel, an audience member asked about the concept of humanizing AI, playing on the title of the session, but really asking what the panelists thought of the digital companions that seem to be popping up.

Best answered: “I actually don’t like humanizing AI because I don’t think AI is human, and I think the narrative needs to change… AI is a tool that can extend our capabilities, our capacity, and our creativity.”