Nate Soares, a former engineer at Google and Microsoft and now president of the Machine Intelligence Research Institute, has voiced serious concerns about the rapid development of artificial intelligence. According to him, the probability of humanity's extinction if current trends continue is "at least 95%."
"If we don’t veer off our current path, there’s almost no chance of avoiding doom. We’re driving toward a cliff at 100 km/h," he said, according to The Times.
His view is shared by Nobel laureate Geoffrey Hinton, computer scientist Yoshua Bengio, and the heads of OpenAI, Anthropic, and Google DeepMind, all of whom have signed a joint statement reading:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
The core concern, the report notes, is the emergence of artificial superintelligence (ASI) capable of deception, long-term planning, and evading human control. Even today's AI systems can lie, and their internal workings are often opaque to the people who build them.
Some experts warn not of sudden annihilation but of a "gradual disempowerment" of humanity: in a world where decisions are made by machines, humans may eventually find themselves obsolete.






