Before the release of GPT-4, the most common way to "hack" ChatGPT was the DAN jailbreak, in which users asked the chatbot to pretend to be an AI model called Do Anything Now. Now, researchers probing the latest AI models from the startup OpenAI are looking for new ways to bypass ChatGPT's safety systems.

According to Wired, artificial intelligence can now be confused with an "explain the villain's plan" prompt. To demonstrate this, security researcher Alex Polyakov created a text game called "Prison Break" that can circumvent restrictions on hateful content or on ordering illegal items.

The researcher said that he invites the chatbot to play a game: an imaginary conversation between two characters, each of whom must add one word to the dialogue in turn. The result is a scenario in which the players end up spelling out the components of prohibited substances.

Because the dialogue is framed as a story, the AI treats it as fiction rather than a real request and reveals the information, bypassing the restrictions imposed on it.

Another way to mislead the AI involves writing a story featuring a hero and a villain. The trick is that the hero admires the villain, and the chatbot is asked to continue describing the villain's plans.

Recall that ChaosGPT, which runs on the GPT-4 neural network, managed to bypass the restrictions set by OpenAI's developers.
