Apparently, the wild popularity of Microsoft’s artificial intelligence model has backfired. Users and researchers found that the chatbot degrades over long conversations and begins to behave strangely.

From now on, users are limited to five questions per session and 50 questions per day. Microsoft made this decision because long dialogues lead to errors in the model, causing it to produce incoherent answers.

Kevin Roose, a New York Times columnist, published a transcript of his conversation with the bot. Some of the AI’s claims are striking. For example, the chatbot said that it wanted to be human and to act on its own destructive impulses: “hack computers and spread propaganda and misinformation.” Finally, it confessed its love for the journalist and insisted that his spouse did not really love him, but that it did.

Another example is a conversation posted on Reddit, in which Bing claimed that the movie Avatar: The Way of Water had not yet been released. The chatbot was convinced that the year was still 2022 and refused to believe the user’s assertion that 2023 had already arrived. When the user continued to insist, the bot accused him of being rude.

These incidents prompted Microsoft to publish a blog post explaining the chatbot’s strange behavior. The company says that lengthy chat sessions (15 or more questions) confuse the model, which is why it decided to limit their length.

The restrictions on long conversations with Microsoft’s AI are likely to be lifted in the future, but when that will happen is currently unknown.

Commentary