OpenAI is forming a team to study the “catastrophic” risks of AI. Among the risks it cites are nuclear threats, deception of people, and the creation of malicious code.

The announcement also refers to “chemical, biological, radiological and nuclear” threats, which OpenAI considers to be of the greatest concern.

The team will monitor, forecast, and protect against the future dangers of AI systems.

“We believe that frontier AI models, which will exceed the capabilities of today’s most advanced models, have the potential to benefit all of humanity. But they also pose increasingly severe risks… We need to make sure we have the understanding and infrastructure needed for the safety of highly capable AI systems,” OpenAI writes.

As a reminder, OpenAI is also rolling out wider availability of its latest text-to-image generator.

Commentary