Even if you disable the feature, I have zero trust in OpenAI to respect that decision, given their history of using copyrighted content to enhance their LLMs.
leaving this here:
Submitted 11 months ago by _LordMcNuggets_@feddit.org to technology@lemmy.world
https://lifehacker.com/tech/chatgpt-memory-remembers-everything-youve-said
I’m not going to defend OpenAI in general, but that difference is meaningless outside of how the LLM interacts with you.
If data privacy is your focus, it doesn’t matter that the LLM has access to it during your session to modify how it reacts to you. They don’t need the LLM at all to use that history.
This isn’t an “I’m out” type of change for privacy. If it is, you missed your stop when they started keeping a history.
Yeah, like they have the history already…
This only works if you have an account and sign in. Don't do that, have your browser clear cookies and site data at quit, and the problem is solved.
If you're not on a VPN, they might still log your IP and connect your chats in the backend, though.
Duck.AI on tor browser
Where is this being stored? What is the capacity? How many accounts would be needed to overflow storage?
What worries me, is all the info from those conversations actually becoming public. I haven’t fed it personal info, but I bet a lot of people do. Not only stuff you might tell it, but information fed from people you know. Friends, family, acquaintances, even enemies could say some really personal or downright false things about you to it and it could one day add that to public ChatGPT. Sounds like some sort of Black Mirror episode, but I think it could happen. Wouldn’t be surprised if intelligence agencies already have access to this data. Maybe one day cyber criminals or even potential employers will have all this data too.
In related news:
Blocking outputs isn’t enough; dad wants OpenAI to delete the false information.
We're long overdue for some guillotine action.
So you’re just now finding out the rules are totally different for those with money and power.
Neat 😀
I think this is great. One of the main reasons I’ve been paying for the subscription is the limited memory of the free version. Now, the more I use it, the more it remembers about me and references things I’ve mentioned in past conversations. Sure, there are potential privacy concerns, but the same goes for commenting on Lemmy - I don’t tell ChatGPT anything I wouldn’t be comfortable sharing here.
You’re getting down-voted, but, yes, this change only really affects user experience.
I don’t know why anyone would think that what the LLM can access for context during your session is a limiting factor for what OpenAI has access to.
If this change freaks you out, the time for you to be freaked out about history was the moment they started storing it.
wait, weren’t they always doing this?
There’s a difference between OpenAI storing conversations and the LLM being able to search all your previous conversations in every clean session you start.
I think the only difference was ever going to be when they felt like running the extra processing to do the work.
That is the difference, but it's a pretty minimal one. OpenAI hardly needs to give the LLM access to your conversations during your session in order to access those conversations itself.
In fact, I don’t see any direct benefit to OpenAI with this change.
I already expected something like that after this news broke: gizmodo.com/investigators-say-south-korean-presid…
I worked in cybersecurity and my global org was handing over details about who used ChatGPT during certain timeframes at the request of the feds (United States) two years ago on at least one occasion.
Yeah, I don't think they encrypt it anyway, so I guess even if they denied a government's request, someone might still find a way to get to data like this.
I assumed they would log everything and create a profile on you from day one. I signed up with a fresh email account.
The headline: ChatGPT Will Soon Remember Everything You’ve Ever Told It
Seems like, if they weren't completely evil, the obvious way to execute something like this would be to give people the option to keep all the personal data locally. This probably amounts to a few hundred KB of data that the complex server-side LLM could just temporarily pull as needed. To my mind this seems most useful for an LLM home assistant, but the idea of OpenAI keeping a database of learned trends, preferences, and behaviors is pretty repulsive.
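A minimal sketch of what that client-held memory could look like: the user keeps a small "memory" file on their own machine and sends it as ephemeral context with each request, so the provider never has to persist a profile server-side. All file names and function names here are invented for illustration; this is not OpenAI's actual API.

```python
import json
from pathlib import Path

# Hypothetical local memory file, owned and editable by the user.
MEMORY_FILE = Path("memory.json")

def save_memory(memory: dict) -> None:
    """Persist the user's memory locally as JSON."""
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def load_memory() -> dict:
    """Load local memory, or return an empty profile if none exists."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {}

def build_prompt(user_message: str, memory: dict) -> str:
    # Prepend locally stored preferences as context; the server could
    # discard this after the session instead of persisting it.
    context = "\n".join(f"- {k}: {v}" for k, v in memory.items())
    return f"Known user context:\n{context}\n\nUser: {user_message}"

save_memory({"preferred_language": "Python", "units": "metric"})
prompt = build_prompt("How hot is it in Lisbon today?", load_memory())
```

The point of the design is that deleting `memory.json` actually deletes the memory, rather than relying on the provider to honor a toggle.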
I run DeepSeek locally on an M1; good enough for 80% of what I need. OpenAI is just another GAFAM wannabe, can't trust it.
It's not a bad feature; I find it interesting. The thing is, I doubt normal users would take care not to put sensitive information into it that could profile them. Clearly there should be more campaigns in schools, and even for citizens generally, about being conscious of the information they share. That would absolutely change a lot of the shit we are living through nowadays.
This will never ever be used in a surveillance capacity by an administration that’s turning the country into a fascist hyper capitalist oligarchical hellscape. Definitely not. No way. It can’t happen here.
It reminds me of the kids in 1984 who turn their father in for being an enemy of the state
This will be useful to the user, but it won’t change privacy. Humans at OpenAI still have full access to your history, and this will only expand AI capabilities to tap into previous conversations. However, rogue and unlawful administrations will still seek to access that data regardless.
Imagine if someone writes a malicious extension; with this, they would also have access to your entire chat history.
They literally tell you when you sign up that they can and will look at what you tell ChatGPT. This changes absolutely nothing about that.
Maybe for training new models, which is a totally different thing. With this update, everything you type will be stored and used as context.
I already never share anything personal with these cloud-based LLMs, but it's getting more and more important to have a local, private LLM on your computer.
Always has been. Nothing has changed. Every conversation you've ever had with ChatGPT is owned by OpenAI. This is why I've largely rejected their use.
If it’s not local or E2EE, you are the product.
"ai systems that get to know you over your life"
That's not as attractive as Sam Altman thinks it is.
If we knew it was altruistic, and only working for our benefit, it might be.
But as it is, it is not working for you, you are not its master.
Big corps and governments are.
I would be even more worried if it were altruistic and for our benefit, because we fuck that up all the time, even before malicious actors are able to weasel their way into power and turn it into something horrible.
Snitched on by chat because of a fun drug moment.
vane@lemmy.world 11 months ago
It's interesting to watch from the perspective of a person who used to be able to find knowledge only in books. I'm slowly starting to feel like a Neanderthal. This global (D)ARPA net experiment on humans looks more and more interesting.
cecilkorik@lemmy.ca 11 months ago
AI is just a search engine you can talk to that summarizes everything it finds into a small nugget for you to consume, and in the process sometimes lies to you and makes up answers. I have no idea how people think it is an effective research tool. None of the “knowledge” it is sharing is actually created by it, it’s just automated plagiarism. We still need humans writing books (and websites) or the AI won’t know what to talk about.
Dasus@lemmy.world 11 months ago
Books haven't been the go-to for several decades. When's the last time you went to search for something in a library before Googling it? Or in general, for that matter. Because we used to have to do that, you know. When I was a kid and I wanted to know something, I had to cycle to the library.
Now I can ask my phone about it, then ask it for the source, then check the source and I can use a search engine to find an actual book on the source on the subject.
It’s a tool.
It’s a poor craftsman who blames his tools. If you’re trying to use a hammer as a screwdriver, ofc it’s gonna suck.
vane@lemmy.world 11 months ago
It's all because it's cheaper to talk to an LLM that outputs the most probable phrases based on statistics than to talk to people these days. It's an accessibility thing. You get the feeling that you're speaking with a person; that's the whole trick. It says a lot about who we are as people.
The amount of effort needed to ask questions in real life without being judged is far greater than asking an LLM.
Adding more to the topic of AI as a whole: you need to realize that we now have a completely new kind of computer software that is non-deterministic. It's a completely new thing, and comparing it to traditional software is just pointless and confusing.
I'm not saying I would hand my life over to an LLM, but the fact that we developed software that always generates readable output is a huge step in software development.