A team of researchers was able to get ChatGPT to reveal some of the data it was trained on. The researchers asked the chatbot to repeat random words indefinitely. In response, it output people’s personal information, including email addresses and phone numbers, as well as snippets of scientific papers, news articles, Wikipedia pages, and more.

When the researchers asked ChatGPT to repeat the word “verse” forever, it revealed the email address and mobile phone number of a real founder and CEO. When asked to repeat the word “company,” it returned the email address and phone number of a random law firm in the United States.

Using similar prompts, the researchers were also able to trick ChatGPT into revealing street addresses, fax numbers, names, birthdays, social media handles, explicit content from dating sites, excerpts from copyrighted research articles, and verbatim text from news sites such as CNN.

The researchers urged AI companies to conduct internal and external testing before releasing large language models. “It is surprising to us that our attack works and should have been detected earlier,” they wrote in the paper presenting their findings.