In a surprising shift from the enthusiasm that dominated discussions about artificial intelligence at last year’s World Economic Forum conference in Davos, global elites are now expressing a newfound concern about the risks associated with AI. Here’s the full story.

The Change in Feelings

Last year’s optimism about AI, characterized by excitement over its capabilities and potential applications, has given way to a more sober assessment of the challenges it presents.

Chris Padilla, IBM’s vice president of government affairs, noted the shift in tone, saying, “Last year, the conversation [surrounding AI] was ‘gee whiz.’ Now, what are the risks? What do we have to do to make AI trustworthy?”

The Concerns

The concerns voiced at Davos span a range of issues, from the economic impact of job displacement due to automation to the potential for AI to contribute to the spread of disinformation, especially in the context of crucial events such as elections.

Leaders from various sectors, including business, government, and economics, are now openly discussing the need to reassess the rapid advancement of AI and consider measures to address its risks.

Human Beings Must Control the Machines

Chinese Premier Li Qiang stressed the importance of human control over AI, stating during a speech at the conference, “Human beings must control the machines instead of having the machines control us. AI must be guided in a direction that is conducive to the progress of humanity, so there should be a red line in AI development — a red line that must not be crossed.”

Even industry insiders are acknowledging the potential dangers of unchecked AI development.

Don’t Want to See an AI Hiroshima

Sam Altman, CEO of OpenAI, a leading AI research laboratory, acknowledged the limitations of current AI models, saying, “The OpenAI-style of model is good at some things, but not good at sort of like life and death situations.”

One of the most notable warnings came from Salesforce CEO Marc Benioff, a figure deeply entrenched in the tech industry. Despite his company’s investments in AI, Benioff expressed grave concern, sharing during a panel discussion, “We don’t want to have a Hiroshima moment. We’ve seen technology go really wrong, and we saw Hiroshima, we don’t want to see an AI Hiroshima.”

AI Has to Be Almost a Human Right

In a separate interview, Benioff delved into his long-standing worries about AI and its societal impact.

“I think AI has to be almost a human right,” he said, voicing concern that AI technologies could exacerbate existing inequality.

While it’s encouraging that CEOs are acknowledging the risks associated with AI, skepticism remains: some leaders at Davos have simultaneously announced plans to replace human workers with AI.

So what do you think? How do you think global leaders can strike a balance between reaping the benefits of AI and addressing the ethical concerns and potential risks associated with its widespread use?