The development of artificial intelligence could pose an existential threat to humanity on a par with the risks of pandemics and nuclear war. This concern was voiced in a statement by a group of prominent leaders in the AI field, The New York Times reports.

A statement signed by 350 artificial intelligence experts was published on the website of the Center for AI Safety. The signatories include OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Turing Award winners Geoffrey Hinton and Yoshua Bengio, who are considered godfathers of modern AI.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement reads.

The Center for AI Safety works to reduce societal-scale risks from artificial intelligence. It lists eight categories of "catastrophic" and "existential" risks associated with the development of this technology.

Earlier, OpenAI, the company behind the ChatGPT neural network, called for regulation of "superintelligent" AI to prevent the accidental destruction of humanity. The company stressed the need to develop limits and safeguards for powerful AI, admitting that it does not yet have a mechanism to control such systems.