User shocked to find chats naming unpublished research papers and other private data.
Okay, so… where were these files, and how did the data get introduced to the model? Should be easy to find out, unless they don’t want to tell anyone, which would be weird given that they commented on this.
Literally read the article
You can’t tell me what to do!! You’re not my dad!!!
Now son, you know your education is important and I just want what’s best for you.
Shut up, dad! You never came home from getting smokes and milk!!
Well, that’s because you’re always on that damn Game Boy playing your pokemans. Now go do your homework or I’m deleting your Pikachu.
Joke’s on you! I’m trying to keep my Tamagotchi alive now!
They’re not files; it’s leaking other people’s conversations through a history bug, accidentally putting person A’s “can you help me write my research paper/IT ticket/script” conversation into person B’s chat history.
Super shitty, but not an uncommon kind of bug. It’s often either a nasty caching issue or a mix-up of identities for people sharing IPs or similar (a sketch of the caching flavor is below).
It’s bad, but it’s “some programmer made an understandable mistake” bad, not “evil company steals private information without consent and sends it to others for profit” bad.
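To make the caching flavor concrete, here’s a minimal, hypothetical sketch in Python. Nothing here is OpenAI’s actual code; the function and cache names are invented. The bug class is just a response cache whose key omits the authenticated user, so every user ends up sharing one cache slot:

```python
# Hypothetical in-memory response cache for a chat-history endpoint.
cache: dict[str, str] = {}

def fetch_history_from_db(user_id: str) -> str:
    # Stand-in for a real database query.
    return f"conversations belonging to {user_id}"

def render_history(user_id: str) -> str:
    # BUG: the cache key omits the user identity, so person B can be
    # handed person A's conversation list.
    key = "/api/history"  # should be f"/api/history:{user_id}"
    if key not in cache:
        cache[key] = fetch_history_from_db(user_id)
    return cache[key]

print(render_history("person_A"))  # caches person A's history
print(render_history("person_B"))  # person B is served person A's data
```

The identity-mix-up flavor looks much the same, except the wrong `user_id` goes into the lookup instead of the wrong key going into the cache.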
The article (and title) update says ChatGPT is claiming it’s not a bug (as you described), but instead that the user’s account was compromised and someone else was using it to have those chats.
While that is technically possible, I don’t believe ChatGPT.
Surely they would know if they kept IP addresses for all logins?
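Presumably, yes; a per-login audit trail is cheap to keep and is exactly what you’d consult to confirm a takeover. A minimal sketch, assuming nothing about OpenAI’s actual infrastructure (the table and field names here are invented):

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical login audit trail: one row per successful login.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE login_events (user_id TEXT, ip TEXT, logged_at TEXT)")

def record_login(user_id: str, ip: str) -> None:
    db.execute(
        "INSERT INTO login_events VALUES (?, ?, ?)",
        (user_id, ip, datetime.now(timezone.utc).isoformat()),
    )

def recent_logins(user_id: str, limit: int = 10) -> list[tuple]:
    # The query you'd run to answer "was this really an account takeover?"
    return db.execute(
        "SELECT ip, logged_at FROM login_events "
        "WHERE user_id = ? ORDER BY logged_at DESC LIMIT ?",
        (user_id, limit),
    ).fetchall()

record_login("whiteside", "203.0.113.7")    # the user's own login
record_login("whiteside", "198.51.100.42")  # an unfamiliar IP would stand out
print(recent_logins("whiteside"))
```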
ChatGPT user Chase Whiteside noticed that his account history contained private conversations that were not his own, including login credentials and details from a pharmacy employee troubleshooting an application. OpenAI investigated and believes Whiteside’s account was compromised by an external group accessing a pool of identities. This underscores the lack of security features on ChatGPT, such as two-factor authentication. Previous incidents have shown ChatGPT can also divulge private information if it was included in its training data. An interesting aspect was the candid language the pharmacy employee used to express frustration with the poor security of the application they were troubleshooting, which highlights the risk of including private details in conversations with AI systems.
This reads like “hey ChatGPT, write a fictional paragraph about how Chase Whiteside’s OpenAI account was breached to glean private pharmaceutical data from ChatGPT”
This is the best summary I could come up with:
“From what we discovered, we consider it an account take over in that it’s consistent with activity we see where someone is contributing to a ‘pool’ of identities that an external community or proxy server uses to distribute free access,” the representative wrote.
It does, however, underscore that the site provides no mechanism for users such as Whiteside to protect their accounts using 2FA or track details such as the IP location of current and recent logins. (A minimal sketch of such a 2FA check follows this summary.)
Original story: ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.
“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email.
Other conversations leaked to Whiteside include the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language.
As mentioned in an article from December, when multiple people found that Ubiquiti’s UniFi devices broadcast private video belonging to unrelated users, these sorts of experiences are as old as the Internet.
The original article contains 803 words, the summary contains 202 words. Saved 75%. I’m a bot and I’m open source!
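On the 2FA gap mentioned in the summary above: time-based one-time passwords (RFC 6238, what Google Authenticator and similar apps compute) are straightforward to verify server-side. A stdlib-only sketch, not anything OpenAI ships:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    # Accept the current 30-second window; real servers usually also allow
    # +/- one window for clock skew.
    return hmac.compare_digest(totp(secret_b32), submitted)

secret = "JBSWY3DPEHPK3PXP"          # well-known example base32 secret
print(totp(secret))                  # what an authenticator app would show
print(verify(secret, totp(secret)))  # True
```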