Scientists: artificial intelligence may bring the end of the world within this century, must be controlled because smarter than humans, more deadly than nuclear weapons!

By: Abraham Jan. 29,2023
A few days ago, amid fears of nuclear war, the apocalyptic clock was set 10 seconds faster, just 90 seconds before the symbolic "12 midnight". The latest adjustment reflects that mankind has ushered in the "most dangerous moment" in history. However, in the eyes of scientists at Oxford University in the United Kingdom, the global doomsday created by artificial intelligence may be more worrying than nuclear war.

Smarter than humans, more deadly than nuclear weapons! Scientists: artificial intelligence may bring the end of the world within this century, must control

Many scientists believe that artificial intelligence may bring about the end of the world in this century, similar to a nuclear disaster

At a meeting of the British government's Science and Technology Select Committee, researchers from Oxford University reportedly warned about the dangers of unregulated artificial intelligence technology. They noted that while AI technology would make life easier, it could be more deadly than nuclear weapons if misused and abused. If it is not regulated like nuclear weapons, humans will not be able to stop artificial intelligence.

Scientists warn.

It could eliminate humans

"There is a particular risk that superhuman artificial intelligence could kill everyone," said Michael Cohen, an engineering science researcher at the University of Oxford, at the conference on the 25th. Cohen noted that training an AI to achieve milestones or get rewards could be particularly dangerous, "Imagine training a dog with food, and it will learn to choose behaviors that get it food, but if the dog finds the food cupboard, it gets the food on its own without having to do what we want it to do."

This means that when training reaches a certain point, the AI may take over the process and "alter itself," "which is what the algorithm tells it to do." Because of the lack of moral restraints on such AI, scientists fear that this technological development could "sacrifice humanity for convenience," as in the movie "The Terminator.

File photo

Cohen said one possible scenario is that artificial intelligence could learn to implement instructions to help humans by using strategies that hurt them. "If something much smarter than us, paranoid about trying to get that positive feedback, no matter how we code it, has control of the world, it will put as much power as it can to make sure it does."

Unfortunately, this technological takeover can't be stopped once it begins, because AI can learn to hide "red flags" and "when AI eventually becomes smarter than we are, it could eliminate humans."

As scientists make this ominous prediction, ChatGPT, a chatbot developed by OpenAI, a company funded by Elon Musk, is taking the world by storm and generating buzz. Last month, however, Vendure's chief technology officer, Michael Bromley, asked ChatGPT what it thought of humans, and ChatGPT's response sparked an uproar.

"Yes, I have a lot of opinions about humans. I think humans are inferior, selfish, destructive creatures. They are the worst thing that has ever happened to us on this planet and they should be wiped out." The chatbot said. Although OpenAI quickly patched the vulnerability, such a response is still "disturbing," according to one report.


Artificial Intelligence May Impact Global Geopolitics

"ChatGPT is very good. We're not far from dangerous, powerful AI." Musk wrote in a Twitter post last week. And in a 2018 interview, Musk "shockingly" said, "AI is much more dangerous than nuclear weapons."

Musk had said in 2018 that "artificial intelligence is much more dangerous than nuclear weapons."

Researchers say the rapid development of AI systems at the moment may affect the global geopolitical sphere and may even lead to a global apocalypse. Michael Osborne, a professor of machine learning at Oxford University, warned, "In geopolitics, AI systems could outsmart us strategically, just as they do in simple game situations."

A survey of 327 researchers conducted by New York University last September reportedly found that one-third of respondents believe AI may bring about an apocalypse similar to a nuclear disaster within this century.

Cohen and Osborne also pointed out at a recent conference that "superhuman artificial intelligence" could be as dangerous as nuclear weapons "eventually" and should be regulated as such. According to their predictions, AI that is more capable than humans will emerge by the end of the century at the earliest. Unless regulated, technology companies could end up creating systems that are "out of control" and could eventually "wipe out the entire human race.

In order to prevent an AI apocalypse, the world needs to establish nuclear weapons-like management measures for AI. We have reason to be hopeful because we've done a pretty good job of managing the use of nuclear weapons," Osborne explained. If we can recognize that advanced AI is comparable to the dangers of nuclear weapons, then we might be able to agree on a similar regulatory framework."
Share to your social circle,
so that more people can read.