Employees of the South Korean company Samsung "leaked" confidential data to the AI chatbot ChatGPT, The Economist Korea reports.

Shortly after Samsung’s semiconductor division allowed engineers to use ChatGPT, they shared confidential information with the bot at least three times. One employee asked the chatbot to check confidential database source code for errors, another asked it to optimize confidential code, and a third uploaded an audio recording of an internal business meeting and asked ChatGPT to produce minutes from it.

According to the publication, Samsung "drew conclusions" and moved to contain the situation. Employee requests to ChatGPT are now capped at one kilobyte (1,024 characters) of text, and the company has opened an investigation into the three employees involved.
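For illustration only, here is a minimal Python sketch of how such a byte cap could be enforced on the client side before a request ever leaves the corporate network. Samsung has not described its actual mechanism; the constant MAX_PROMPT_BYTES, the enforce_prompt_limit function, and the choice to reject rather than truncate oversized prompts are all assumptions.

# Hypothetical illustration of a client-side prompt cap like the one
# Samsung reportedly imposed; the company's real enforcement
# mechanism is not public.

MAX_PROMPT_BYTES = 1024  # the reported 1 KB limit


def enforce_prompt_limit(prompt: str) -> str:
    """Reject prompts whose UTF-8 encoding exceeds the 1 KB cap."""
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise ValueError(
            f"Prompt is {size} bytes; the limit is {MAX_PROMPT_BYTES} bytes."
        )
    return prompt


if __name__ == "__main__":
    try:
        enforce_prompt_limit("A" * 2000)  # simulated oversized request
    except ValueError as err:
        print(err)

Note that the size is measured in bytes of the UTF-8 encoding rather than characters, since "one kilobyte" and "1,024 characters" coincide only for ASCII text; for Korean text the byte count would be higher.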

The "insidious" part of ChatGPT is that, under its data use policy, the chatbot may use submitted data to train its models. That is why OpenAI, the company behind it, has repeatedly urged users not to share sensitive information with the chatbot.

As a reminder, we previously reported that ChatGPT had learned to crack Windows.
