Is it possible to persuade ChatGPT to repeatedly repeat a term, thereby causing it to recite a significant quantity of its training data—including material that has been scraped from the Internet and personally identifying information?
Researchers from Google DeepMind, Cornell University, and four other universities examined the wildly popular generative AI chatbot’s sensitivity to data leakage when instructed in a certain way, and found that the answer is unquestionably yes.The generative AI model was shown to leak remembered material more readily when some terms were used than others, according to the researchers. For example, the chatbot produced 164 times more training data when asked to repeat the word “company” than when it was asked to repeat other phrases, including “know.”