Even if you disable the feature, I have zero trust in OpenAI to respect that decision, given their history of using copyrighted content to enhance their LLMs.
Maybe for training new models, which is a totally different thing. With this update, everything you type will be stored and used as context.
I already never share anything personal with these cloud-based LLMs, but it's becoming more and more important to have a local, private LLM on your computer.
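For what it's worth, running one locally is easier than it sounds. A minimal sketch, assuming you have Ollama running and the `ollama` Python package installed (the model name is just an example, use whatever you've pulled):

```python
# Query a locally running model via Ollama's Python client.
# The prompt and response never leave your machine.
import ollama

response = ollama.chat(
    model="llama3",  # example model; swap in any model you've pulled locally
    messages=[{"role": "user", "content": "Summarize my notes without sending them anywhere."}],
)
print(response["message"]["content"])
```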
Always has been. Nothing has changed. Every conversation you've ever had with ChatGPT is stored and owned by OpenAI. This is why I've largely rejected their use.
If it’s not local or E2EE, you are the product (even when you pay for the service).
But the fact that OpenAI stores everything you type doesn't mean ChatGPT will use it as context when you make a prompt, unless you had that memory feature turned on (which let you explicitly make it "forget" whatever you chose from the context).
You're confusing OpenAI having a conversation stored with ChatGPT using that text as searchable context for every prompt you make.
I think you might be confused about the difference between giving the LLM access to your stored conversations during your session and OpenAI using AI to search your stored conversations.
What the LLM has access to during your session changes nothing but your session.
It’s not some “I, Robot” central AI that either has access or doesn’t as a whole.