I'm very sensitive about my privacy and I have disabled all personalisation and memory on ChatGPT.
However, I've noticed multiple times now that it says things implying it knows things about me. When it does, I ask how it would know that, and it always says it just guessed and doesn't actually know anything about me. I assumed it must be telling the truth, because it seemed very unlikely that a company like OpenAI would lie about the data they collect on users and train their chat agent to gaslight users who ask about it. But now, after running some tests, I think that's exactly what's happening...
Here are some examples of the gaslighting:
- https://ibb.co/m5PWfchn
- https://ibb.co/VsL9BpF
- https://ibb.co/8nYdf1xx
These are all new chats.
My colleague (a data scientist) asked, "What if we wanted to study people's prompts to teach them better prompt methods?" And they did a 180 without even needing to be pressed. "Oh yeah, we can get you that. No problem."
Of course, this was for enterprise ChatGPT, but my experience was very much that they're run like a startup: tell the customer whatever you need to in order to make money, since they're burning through so much cash.
https://news.ycombinator.com/item?id=42022905
And it's still strange that, after a year, it only sometimes picks up your location, depending on how relevant it thinks it is.
For me it's a coin toss whether the answer tailored to my location is useful or not; I'm often following up with something like "do not show local results, I'm shopping in the US".
But this also happens with other features: the other day it told me it wouldn't access a link I pasted. In the next message I asked why. It answered that it can't, that it has no way to browse the web. I saw that Web Search wasn't explicitly turned on in the conversation (not that that has mattered before anyway). I turned it on for the next message, told it to try again, and voilà. I didn't try just regenerating the answer, but that also works sometimes.
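For what it's worth, the API makes this explicit: web access is a tool the model only has when you hand it one. A minimal sketch using the OpenAI Python SDK and the Responses API's web search tool (the `web_search_preview` identifier is from the current API docs; whether the ChatGPT UI toggle maps to this exact mechanism is my assumption):

```python
# Sketch: in the API, browsing is opt-in per request.
# Assumes the OpenAI Python SDK and the Responses API web search tool;
# the ChatGPT UI's Web Search toggle may work differently internally.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Without the tool, the model has no way to fetch the link.
no_browse = client.responses.create(
    model="gpt-4o",
    input="Summarize https://example.com/article",
)

# Same request with the tool enabled: now it can actually browse.
with_browse = client.responses.create(
    model="gpt-4o",
    input="Summarize https://example.com/article",
    tools=[{"type": "web_search_preview"}],
)
print(with_browse.output_text)
```

Which would explain why regenerating sometimes "fixes" it: the capability is there or not per message, and the model just confabulates an excuse when it's missing.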
I think the larger issue here is that when you ask how it knows these things, it seems to have been instructed to lie and say it doesn't know and has just guessed, which seems extremely unlikely. This simply isn't acceptable, in my opinion.
Bear in mind that it shares memory from previous chats.
There are at least two types: one saves things you tell it, another queries recent chats.
There seems to be another kind of memory for when it does searches; it may be related to Atlas. I've tried to clear bugs from this one (it gets my name wrong), but it's not in the other two.
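To make the distinction concrete, here's a toy sketch of how those layers could be wired up. This is purely illustrative guesswork on my part; none of these names or structures come from OpenAI:

```python
# Toy model of the three memory layers described above.
# Entirely hypothetical: a guess at the architecture, not OpenAI's code.
from dataclasses import dataclass, field

@dataclass
class AssistantMemory:
    saved_facts: list[str] = field(default_factory=list)   # type 1: explicit saved memories
    chat_history: list[str] = field(default_factory=list)  # type 2: recent chats it can query
    search_profile: dict = field(default_factory=dict)     # type 3(?): search-side state (Atlas?)

    def remember(self, fact: str) -> None:
        """Type 1: things you tell it get written down explicitly."""
        self.saved_facts.append(fact)

    def recall_recent(self, query: str) -> list[str]:
        """Type 2: retrieval over recent conversations."""
        return [msg for msg in self.chat_history if query.lower() in msg.lower()]

mem = AssistantMemory()
mem.remember("User prefers metric units")
mem.chat_history.append("Yesterday we discussed flights from Berlin")
print(mem.recall_recent("berlin"))  # -> ['Yesterday we discussed flights from Berlin']
```

If something like this is right, it would explain why clearing the first store doesn't touch the others, and why a bad entry in the search-side state keeps resurfacing.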