OpenAI, the company behind the ChatGPT neural network, is calling for the regulation of “superintelligent” AI to prevent the accidental destruction of humanity. The company stresses that the limits and controls on powerful AI need further development, and admits it does not yet have a mechanism to control such systems.

This was reported by The Guardian.

The call for an international regulator to inspect systems, require audits, and “test for compliance with safety standards” was published in a note on the company’s website. Its co-founders, Greg Brockman and Ilya Sutskever, along with CEO Sam Altman, call for limits on the degree to which superintelligence may be deployed and on the level of security required of it, in order to reduce the “existential risk” such systems pose.

They call for the creation, in the near term, of “some level of coordination” among companies engaged in advanced AI research. This would allow ever more capable models to be integrated into society gradually, while keeping safety a priority. They also propose either a government-led project or a collective agreement among developers to monitor the growth of AI capabilities.

The article notes that the US-based Center for AI Safety (CAIS), which works to reduce societal-scale risks from AI, lists eight categories of “catastrophic” and “existential” risk associated with the development of this technology.

As a reminder, a law on artificial intelligence is set to be considered by members of the European Parliament. It would introduce the first-ever transparency and risk-management rules for AI technologies in Europe.
