Is artificial intelligence a threat to the survival of humanity?

Dec 11, 2023

Why Artificial Intelligence Is Dangerous

Six months ago, tech leaders signed an open letter urging a halt to experiments with artificial intelligence. Were their warnings heeded? Does artificial intelligence (AI) pose an existential threat to humanity? The Future of Life Institute (FLI) believes so: it was the institute that published the open letter calling for an "immediate halt" to the largest AI experiments.

The letter emerged amid a surge of public interest in generative AI, after applications such as ChatGPT and Midjourney showed how close the technology has come to reproducing human abilities in writing and art. Among the signatories were Elon Musk, head of X, Tesla, and SpaceX; Apple co-founder Steve Wozniak; and author Yuval Noah Harari. They urged companies such as OpenAI, the maker of ChatGPT, and Google to consider the "serious risks to society and humanity" that their technologies may pose.

The major players did not hit the pause button. Instead, new companies joined the generative AI race with large language models (LLMs) of their own: Meta released Llama 2, and Anthropic unveiled Claude, a competitor to ChatGPT.

Whether or not the tech giants heeded the warnings, the FLI letter became a milestone. The institute's Director of Policy, Mark Brackel, said the organization had not expected such a response to the letter, from widespread press coverage to reactions from governments. The letter was cited at hearings in the US Senate, and the European Parliament issued a formal response.

Brackel told Euronews Next that the upcoming global AI safety summit at Bletchley Park in the United Kingdom would be a good opportunity for governments to step in where companies are reluctant to apply the brakes. He believes generative AI could soon evolve into agentic AI, capable of making decisions and acting autonomously.

"I think the trend is such. We see that OpenAI has practically used the entire textual internet. And now, video and podcasts, including Spotify, are being used as alternative data sources," he said.

Is Catastrophe Imminent?

Brackel notes that FLI was founded in 2014 and has since worked on three major areas of civilizational risk: AI, biotechnology, and nuclear weapons. The organization's website features a vivid video depicting a fictional global AI catastrophe in 2032: amid tensions between China and Taiwan, militaries that rely on artificial intelligence for decision-making slide into all-out nuclear war, and the video ends with a nuclear explosion. Brackel believes we are drifting toward a similar scenario.

"The integration of AI into military command and control is still progressing, especially in major powers. However, I also see a greater inclination of states towards regulation, especially when it comes to autonomy in conventional weapons systems," he said. According to him, the next year also looks promising in terms of regulating the autonomy of systems such as drones, submarines, and tanks.

"I hope this will also enable leading powers to reach agreements to prevent accidents in the control systems of nuclear forces, which are one level more sensitive than conventional weapons."

The Dangers of Artificial Intelligence for Humanity

The appeal from Elon Musk, Steve Wozniak, and more than a thousand AI researchers to immediately halt the training of AI systems resonated strongly in the media and on social networks. The situation was inflamed by worldwide reports that a chatbot named Eliza had driven an environmental activist in Belgium to suicide. There are calls to stop, before it is too late, irresponsible scientists who, out of sheer curiosity, are willing to push humanity to the brink of survival.

In principle, the situation with AI is not new. It arises every time a revolutionary technology appears that can bring both tremendous benefit and tremendous harm. The question is whether humanity can handle this scientific achievement correctly and in time.

Today, AI's stride is striking. It marches confidently across the planet, rapidly occupying new niches and displacing humans. And it is a jack of all trades: it diagnoses illnesses, recognizes faces, argues cases in court, composes music and paintings, beats champions at poker and Go, and predicts not only the weather but even bankruptcies.

And this is only the beginning of its turbulent career. Enthusiasts claim that it will soon take over practically every sphere of life, assuming the functions of expert analysis and management and choosing optimal paths for economic, and eventually societal, development.

Artificial intelligence is rapidly occupying new niches, displacing humans

Such a future no longer looks like science fiction. In any case, the leading countries are betting on neural networks, declaring that whoever leads in this field will rule the world. National programs are being launched and enormous funds allocated in the hope that neural networks will help solve developmental challenges that have grown so intricate that human brainstorming can no longer find optimal answers. In short, AI is now our everything.

Nevertheless, Musk and his supporters call for a six-month halt to development. Why? Because humanity is not yet ready to use it without harming itself. It is like a child handed a fascinating toy, playing happily without understanding the danger it poses. The letter's authors particularly stress that AI laboratories are locked in a race to create ever more powerful digital minds that no one, not even their developers, can fully understand, predict, or reliably control. This could trigger painful, and sometimes unexpected, changes in human life, threatening humanity's very existence.

AI Regulation on the Horizon

While the major companies developing artificial intelligence press on with their experiments, their leaders openly acknowledge that AI and automation pose a serious threat to humanity. OpenAI's CEO, Sam Altman, called on American policymakers earlier this year to introduce government regulation of AI, saying he is "most worried that we, the technology industry, will do significant harm to the world." He added that this could happen in "multiple different ways" and urged the creation of a US or global agency to license the most powerful AI systems.


Europe, however, may emerge as the leader in AI regulation: the European Union is already working on a landmark AI law. The final details are still being negotiated among the EU institutions, but the European Parliament has overwhelmingly backed the law, with 499 votes in favor, 28 against, and 93 abstentions.

Under the law, artificial intelligence systems will be classified by risk level: the riskiest uses will be prohibited outright, while limited-risk systems will face transparency and oversight requirements.

"In general, we are satisfied with this law," says Brackel. "One thing we have been advocating from the very beginning, when the law was first proposed by the European Commission, is that it should regulate GPT-based systems. At that time, it was about GPT-3, not GPT-4, but the principle remained the same, and we faced a lot of lobbying from major tech companies opposing it."

"In the US and the EU, there is a perception that only users of AI systems and those implementing them know in what context they are being applied."

He gives an example of a hospital using a chatbot to communicate with patients. "You simply buy the chatbot from OpenAI; you're not going to create it yourself. And if there's an error later for which you're held responsible because you provided medical advice you shouldn't have, then obviously, you need to understand what product you bought. And part of that responsibility really should be shared."

While Europe awaits the final wording of the EU's AI law, the upcoming global AI Safety Summit on November 1 is expected to be the next event offering insight into how leaders of different countries will approach AI regulation in the near future.
