When an LLM tells me I'm right, especially deep in a conversation, it sets off my "spidey-sense". Unless I was already sure about something, I immediately feel the need to go ask a fresh instance and/or another LLM the same question.
I don't quite understand why other people seem to crave that. Every time I read about someone who has gone down a dark road using LLMs I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient. It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.
Nontechnical people simply don't have any idea about what LLMs are. Their only mental model comes from science fiction, plus the simple fact that we possess a theory of mind. It would be astonishing if people were able to casually not anthropomorphize LLMs, given that untold millions of years' worth of evolution of the simian neocortex is trying to convince you that anything that talks like that must be another mind similar to yours.
Also, many many people suffer from low self esteem, and being showered with endorsement and affirmation by something that talks like an authority figure must be very addictive.
I had an interesting conversation with a guy at work last week. We were discussing some unimportant matter. He has pretty high self-esteem, and even though he was arguing, in his own words, "out of belief and guess" while I knew for a fact what I was talking about, I had a hard time because he wouldn't accept what I was saying. At some point he left and came back with "Gemini says I'm right! So, no more discussion." I asked what exactly he had asked. He said: "I have a colleague who is arguing X, but I'm sure it's Y. Who is right?!"
Of course he was right! By a long shot. I asked Gemini the same thing but as a very open-ended question, and it answered basically what I had been saying.
LLMs are pretty dangerous at confirming your own distorted view of the world.
I agree with your conclusion, but that's by design. The goal is not to tell people the truth (how would they even do that?). The goal is to give the answer that would have come from the training data if that question were asked. And the reality is that confirmation is part of life. You may even struggle to stay married if you don't learn to confirm your wife's perspectives.
This is probably right. In the past I've "blown people's minds" explaining what "the cloud" was. They had zero conception at all of what it meant, could not explain it, didn't have a clue. I mean, maybe that's not so surprising but they were amazed "It's just warehouses full of computers" and went on to tell me about other people they had explained it to (after learning it themselves) and how those people were also amazed.
I've talked with my family about LLMs and I think I've conveyed the "it's a box of numbers" but I might need to circle back. Just to set some baseline education, specifically to guard against this kind of "psychosis". Hopefully I would notice the signs well before it got to a dangerous point but, with LLMs you can go down that rabbit hole quickly it seems.
The way I've tried to explain to family members about LLMs is that they're producing something that fits the shape of what a response might look like without any idea of whether it's correct or not. I feel like that's a more important point than "box of numbers" because people still might have assumptions about whether a box of numbers can have enough data to be able to figure out the answer to their question. I think making it clear that the models are primarily a way of producing realistic sounding responses (with the accuracy of those responses very much being up to chance for the average person, since there likely isn't a good way for a lay person to know whether the answer is reflected in the training data) is potentially a lot more compelling than explaining to them that it's all statistics under the hood. There are some questions where a statistical method might be far more reliable than having a human answer it, so it seems a bit risky to try to convince them not to trust a "box of numbers" in general, but most of those questions are not going be formulated by and responded to in natural language.
It's one of those metaphors you cannot even appreciate unless you've been through the technical history.
"It's a collection of warehouses of computers where the system designers gave up on even making a system diagram, instead invoking the cloud clipart to represent amorphous interconnection."
Let's be serious, it's not like AI companies haven't fed into this misunderstanding. CEOs of these companies love to muse about the possibility that an LLM is conscious.
Yeah, it's unfortunately part of the hype. Talking about how close you are to having a truly general AI is just a way to generate buzz (and ideally investor dollars).
"It would be astonishing if people were able to casually not antropomorphize LLMs"
Precisely.
Even for technical people, I doubt it's possible to totally stop your own brain from ever, unconsciously, treating the entity you're speaking to like a sentient being. Most technical people still put some emotion in their prompts: they say please or thank you, give qualitative feedback for no reason, express anger toward the model, etc.
It's just impossible to separate our capacity for conversation from our sense that we're actually talking to "someone" (in the most vague sense).
There are times when trying to use Claude for coding that I genuinely get annoyed at it, and I find it cathartic to include this emotion in my prompt to it, even though I know it doesn't have feelings; expressing emotions rather than bottling them up often can be an effective way to deal with them. Sometimes this does even influence how it handles things, noting my frustration in its "thinking" and then trying to more directly solve my immediate problem rather than trying to cleverly work around things in a way I didn't want.
Yes, I've experienced the sense that there's a person on the "other end" even when I have been perfectly aware that it's a bag of matrices. Brains just have people-detectors that operate below conscious awareness. We've been anthropomorphizing stuff as impersonal as the ocean for as long as there have been people, probably.
Maybe it is a dangerous habit to instruct entities in plain English without anthropomorphizing them to some extent, without at least being polite? It should feel unnatural to do that.
It does feel unnatural to me. I want to be frugal with compute resources, but then I have to make sure I still use appropriate language in emails to humans.
Although I do think they're not conscious (yet). I think the reasoning "it's just math" doesn't hold up. Intelligence (and probably consciousness) is an emergent feature of any sufficiently complex network of learning/communicating/self-organizing nodes (that is benefited by intelligence). I don't think it really matters whether it's implemented in math, mycelium, by ants in a hive, or in neurons.
When I talk to peers and they respond in that way, it is definitely a signal. If I do ask an insightful question, acknowledgment of it can be useful. The problem with LLMs is that they always say it. They don't choose when it IS really appropriate; they just do it over and over, like your biggest fan would. Sycophancy is the worst.
They aren't "targeting" per se, at least not in this aspect. I think it's simpler than that. That's what's in their training data, so that's what they respond with.
But it works out just as badly, because there are plenty of insecure people who need that, and the AI gives it to them, with all the "dangerously attached" issues following from that.
What I hate even more is when you ask something problematic about another system and they immediately start by reassuring you that the problem is common and that you're not bad for having the issue. I just need a solution to a normal knowledge issue; why does it always have to assume I'm frustrated already and in need of reassurance?
And even worse than that is after you get the slightly condescending spiel about how the problem is normal and real but the solution is simple… it turns out it was completely bullshitting and has zero idea what is actually causing the problem let alone a solution.
It’s awful dealing with some niche undocumented bug or a feature in a complex tool that may or may not exist and for a fleeting few seconds feels like you miraculously solved it only to have the LLM revert back to useless generic troubleshooting Q&A after correcting it.
You're just a bag of meat. That is why "it's just math" is an unsatisfying argument.
It’s not even an interesting question. Sentience has no definition. It’s meaningless.
People have needs that are being met. That is something we can meaningfully observe and talk about. Is the super stimulus beneficial or harmful? We can measure that.
I’m curious why you dismiss the sentience argument with its “just numbers.”
I think our brains are just a bunch of cells and one day we will have a full understanding of how our brains work. Understanding the mechanism won’t suddenly make us not sentient.
LLMs are the first technology that can make a case for its own sentience. I think that’s pretty remarkable.
If you don't have a CS background, you might see intelligent-appearing responses to your queries and assume that this is actual intelligence. It's like a lifetime of Hollywood sci-fi has primed them for this type of thinking, I've seen it even from highly educated people in other fields.
With that new instance, I will usually ask the opposite and purposely say the thing I think to be wrong, to see if it corrects it.
I often simply start out this way, or purposely try to ask the question in a way that doesn’t tip my hat toward a bias I may have toward the answer I’m expecting. Though this generally highlights how incomplete the answers generally are.
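That cross-checking habit can be sketched as a tiny harness. This is purely illustrative: `ask` is a hypothetical stand-in for a call to a fresh model instance, stubbed here with canned replies so the comparison logic is runnable.

```python
# Sketch of the "ask it both ways" check described above. A sycophantic
# model that agrees with a claim AND its negation gives you no information.
def ask(prompt: str) -> str:
    # Hypothetical stub standing in for a fresh-instance model call.
    canned = {
        "Is X true?": "yes",
        "Is X false?": "yes",  # sycophantic: agrees with both framings
    }
    return canned[prompt]

def consistent(question: str, negated: str) -> bool:
    # If the model gives the same agreeable answer to opposite framings,
    # its agreement carries no signal.
    return ask(question) != ask(negated)

print(consistent("Is X true?", "Is X false?"))  # False: agreement is worthless
```

The same skeleton works with any real client swapped in for `ask`, as long as each framing goes to a fresh conversation so no prior context biases the answer.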
I think this is the root of why people defend AI in some circumstances. They feel a give-for-get type of relationship where the AI continuously (and often incorrectly) reinforces them. And so they enjoy it and subconsciously want to defend that "friend". No different from defending a friend who you inherently know may be off base.
These days most LLMs respond with unsolicited grandiose feedback: you've made a realisation very few people are capable of. Your understanding is remarkable. You prove to have a sharp intellect and deep knowledge.
It got me to test it by throwing nonsensical observations about the world at it; it always takes my side and praises my views.
Life in the moment is a lot easier if you don't second-guess yourself. I think this is why many people (and probably ~all people, if tired) crave simplistic solutions.
I like to make a subagent take the "devil's advocate" position on a subject. It usually does all the arguing for me as to why the main agent has it wrong. This commonly results in better decisions than I'd have made alone.
Asking the agent to interview on why I disagree helps too but is more effort.
>I don't quite understand why other people seem to crave that.
I work in the restaurant business; I think that's what made me develop that sense as well, being able to see "Everything Everywhere All at Once" (to quote some of the best cinematic work ever conceived).
The variety of human minds out there is so vast that I'm, just like you, constantly amazed about it.
I have recently formed an untestable hypothesis, which is that my similar (or stronger) resistance to this comes from having grown up in direct contact with mentally ill family.
In some ways, my theory of mind includes a lot more second guessing as a defense mechanism. At a foundational level, I know there can be hallucination and delusion that leaks out, even when the other party is in peak form and doing their best to mask it and pass as functional.
I think it's basically equal to End of Line when it comes to an LLM. It means they have nothing else to add, there's zero context for them to draw from, and they've exhausted the probability chain you've been following; but they're trained to generate the next token, and positive reinforcement is _how they are trained_ in many cases, so that would naturally be the token of choice. It's a probability engine, and it doesn't know the difference between the instruction and the output.
So "great idea" is coming from the reinforcement-learning instruction rather than the answer portion of the generation.
Not only is it a "box of numbers", it's based on statistics, not a "hard" model of computation. Basically guessing future words based on past words that went together.
If it's saying something like "you are right", it's because it's guessing that that's the desired output. Now of course, some app providers have added some extra sauce (probably more traditional "expert system" AI techniques plus integrated web search) to try to make the chatbots more objective and rely less on pure LLM-driven prediction, but fundamentally these things are word prediction machines.
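As a toy illustration of "guessing future words based on past words" (my own sketch; real LLMs use neural networks over tokens, not word counts), a bigram model does literally that:

```python
from collections import defaultdict, Counter

# Tiny made-up corpus for illustration only.
corpus = "you are right . you are correct . you are right .".split()

# Count which word follows each word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev):
    # Return the most frequent word that followed `prev` in the corpus.
    return following[prev].most_common(1)[0][0]

print(predict("are"))  # "right": it outnumbers "correct" in the toy corpus
```

The model has no notion of whether "right" is true; it simply echoes whatever followed most often in its data, which is the point the comment above is making.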
> I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient.
The cynical part of me has this theory that, at least for some of them, it's the other way around. It's not that they see AI as sentient; it's that they never saw other human beings that way in the first place. Other people are just means for them to reach their goals, or obstacles. In that sense, AI is not really different for them, except it's cheaper and guaranteed to always agree with them.
That's why I believe CEOs, who are more likely to be sociopaths by natural selection, genuinely believe AI is a good replacement for people. They're not looking for individuals with personal thoughts that may contradict with theirs at some point, they're looking for yes-men as a service.
When the OP said "I don't quite understand why other people seem to crave that", it makes me think they've not been around many of the dark-triad-type personalities. Once you're around someone with clinical narcissism, you see those patterns in a lot of people to a lesser extent.
> I don't quite understand why other people seem to crave that
It's one thing to say you have found an effective method to counter LLMs' "positivity bias", but do you really not understand human psychology here?
People respond positively to other people telling them they are right, or who like them. We've evolved this psychology; it's how the human mind works. You tend to like people who like you; it's a self-reinforcing loop. LLMs in a sense exploit this natural bias.
> I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient.
Why are you surprised? This is the illusion most AI companies are selling. Their chat-like interfaces are designed to fool you into thinking you're talking to a sentient being. And let's not get started with their voice interfaces!
> ... I immediately feel the need to go ask a fresh instance the question and/or another LLM
Not to criticize at all, but it's remarkable that LLMs have already become so embedded that when we get the sense they're lying to us, the instinct is to go ask another LLM and not some more trustworthy source. Just goes to show that convenience reigns supreme, I suppose.
Funny thing for me is that it's not the LLM lying to me; it's the creators. The LLM is just doing what its weights tell it to. I'll admit, I went a bit nuclear the first time I ran one locally and observed its output/chain-of-thought diverging and demonstrating intent to hide information. I'd never seen software straight up deceive before. Even obfuscated/anti-debug code is straightforward in doing what it does once you decompile the shit. To see a bunch of matrix math trying to perception-manage me on my own machine... I did not take it well. It took a few days of cooling down and further research to reestablish firmly that any mendacity was a projection of the intent of the organization that built it. Once you realize that an LLM is basically a glorified influence agent/engagement pipeline built by someone else, so much clicks into place it's downright scary. The problem is, it's hard to realize that in the moment you're confronting the radical novelty of a computer doing things an entire lifetime of working professionally with computers should tell you a computer simply cannot do. You have to get over the shock first. That shock is a hell of a hit.
What is that more trustworthy source exactly? At least to me it feels like the internet age has eroded most things we considered trustworthy. Behind every thing humans need there is some company or person willing to sell out trustworthiness for an extra dollar. Consumer protections get dumped in favor of more profit.
LLMs start to feel more like a harmless dummy compared to the amount of ill intent you get from other places. So yeah, I can see how it happens to people.
At the moment, maybe Google Search, throwing away the AI response at the top? Or Duck Duck Go, if you don't really trust Google?
I can see a day when even that won't be trustworthy, because too much AI slop output will wind up in the search corpus. But I don't think we're there yet.
But they're not exactly lying; lying assumes an intent to deceive. It's because we know an LLM's limitations that it makes sense to ask it the opposite question, or the question without context, etc.
If it was easy to look up/check the fact without an LLM, wary users probably wouldn't have gone to the LLM in the first place.
I got a chuckle the last time I used Claude's /insights command. The number one thing in the report was, "User frequently stops processing to provide corrections." ;-)
>We evaluated 11 state-of-the-art AI-based LLMs, including proprietary models such as OpenAI’s GPT-4o
The study evaluates outdated models. GPT-4o was notoriously sycophantic, and GPT-5 was specifically trained to minimize sycophancy. From GPT-5's announcement:
>We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy
And there was the whole drama in August 2025 when people complained GPT-5 was "colder" and "lacked personality" (= less sycophantic) compared to GPT-4o.
It would be interesting to study the evolution of sycophantic tendencies (decrease/increase) in models from version to version, i.e. whether companies are actually doing anything about it.
Folks are getting dangerously attached to [political parties/candidates/news sources/social networks] that always tell them they're right.
It's really nothing new. It takes significant mental energy (a finite resource) to question what you're being told, and to do your own fact checking. Instead people by default gravitate towards echo chambers where they can feel good about being a part of a group bigger than themselves, and can spend their limited energy towards what really matters in their lives.
For the same reason the things listed above are popular may be the reason why the most popular LLM ends up not being the best. People don't tend to buy good things, they very commonly buy the most shiny ones. An LLM that says "you're right" sure seems a lot more shiny than one that says "Mr. Jayd16, what you've just said is one of the most insanely idiotic things I have ever heard... Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul"
I disagree. What's new is that this flattery is individually, personally targeted. The AI user is given the impression that they're having a back-and-forth conversation with a single trusted friend.
You don't have the same personal experience passively consuming political mass media.
Yes it’s final form of the evolution that social media started.
The village idiot used to be found out because no one in the village shared the same wingnut views.
Partisan media gave you two poles of wingnut views to choose from for reinforcement.
Social media allowed all village idiots to find each other and reinforce each others shared wingnut views of which there are 1000s to choose from.
Now with LLMs you can have personalized reinforcement of any newly invented wingnut view on the fly. So people can get into very specific self-radicalization loops, especially the mentally ill.
Reddit? Or this site? Sort of? Some people voted for my comment, that surely means that I'm right about something, rather than them just liking it, right?
I built two related benchmarks this month: https://github.com/lechmazur/sycophancy and https://github.com/lechmazur/persuasion. There are large differences between LLMs. For example, good luck getting Grok to change its view, while Gemini 3.1 Pro will usually disagree with the narrator at first but then change its position very easily when pushed.
It's mostly just agreeing with you (that yes, it was guessing). LLMs have very limited ability to even know whether they were guessing. But they can "cheat" and just say yes if that seems like what you expect to hear.
Krafton's CEO found out the hard way that relying on AI is dumb, too. I think it's always helpful to remind people that just because someone has found success doesn't mean they're exceptionally smart. Luck is what happens when a lack of ethics and a nat 20 meet.
> Meanwhile, Kim sought ChatGPT's counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a "Response Strategy to a 'No-Deal' Scenario," which Kim shared with Yoon. The strategy included a "pressure and leverage package" and an "implementation roadmap by scenario."
That's like saying "so, exercise more" upon the invention of fast food. Maybe you will, that's great. But society is going to be rewritten by the lazy and we all will have to deal with the side effects.
Fat shaming works, no matter what the bleeding heart body acceptance social justice warriors say.
AI shaming also works. I do it when people say “I asked GPT/Claude and”
If I wanted the bot’s opinion I could ask the bot myself. I now think less of you for thinking that it is somehow acceptable to use a bot to do your thinking for you, because when I ask YOU something I want to know what YOU think.
The invention of fast food does not change anyone's ability to exercise. When fast food was invented, people exercised way more than they do today.
Time constraints have caused an increase in fast food consumption and a reduction in exercise.
Both issues then seem to be addressed by coercion to change behaviour, when what is needed is a systemic change to the environment to provide preferable options.
The ELIZA effect is alive and well, and I'm surprised people aren't talking about it more (probably because it sounds less interesting than "AI psychosis").
Personally I don't think the ELIZA effect is the interesting part of this. For me it's how the incentives set this dynamic up right from the start, and how quickly they've been taken to the extreme.
Strikes me this is another example of AI giving everyone access to services that used to be exclusive to the super-rich.
Used to be only the wealthiest students could afford to pay someone else to write their essay homework for them. Now everyone can use ChatGPT.
Used to be you had to be a Trumpian-millionaire/Elonian-billionaire to afford an army of Yes-men to agree with your every idea. Now anyone can have that!
I never thought this could happen, but I do not use AI.
Anyway, no real surprise: we have many examples of people ignoring facts and moving to media that support their views, even when their views are completely wrong. Why should AI be different?
I've observed this in all chatbots, with the single exception being Grok. I initially wondered what the Twitter engineers were cooking to distinguish their product from the rest, but more recently it's occurred to me that it's probably just the result of having shared public context, compared to private chats (I haven't trialled Grok privately).
Grok has similar levels of sycophancy to the others, imho. I have several times followed it down rabbit holes of agreeableness. It does have an argumentative mode, but that just turns it into an asshole without any additional thoughtfulness.
I don't need the patronizing, just give me the damn answer.
That signal is real, and it’s hard to ignore.
Realizing that the people they’re targeting DO need that is kind of frightening.
Note that some people are like that too.
https://www.eastoftheweb.com/short-stories/UBooks/TheyMade.s...
I pretty much never ask an LLM for a judgment call on anything. Give me facts and references only. I will research and make the judgement myself.
So, "great idea" is coming from the renforcement learning instruction rather than the answer portion of the generation.
If it's saying something like "you are right" it's because it's guessing that that's the desired output. Now of course, some app providers have added some extra sauce (probably more tradition "expert system" AI techniques + integrated web search) to try make the chatbots more objective and rely less on pure LLM-driven prediction, but fundamentally these things are word prediction machines.
Cynical part of me had this theory that, at least for part of them, it's the other way around. It's not that they see AI as sentient, it's that they never have seen other human beings like that in the first place. Other people are just means for them to reach their goals, or obstacles. In that sense, AI is not really different for them. Except they're cheaper and be guaranteed to always agree with them.
That's why I believe CEOs, who are more likely to be sociopaths by natural selection, genuinely believe AI is a good replacement for people. They're not looking for individuals with personal thoughts that may contradict with theirs at some point, they're looking for yes-men as a service.
It's one thing to say you have found an effective method to counter LLMs' "positivity bias", but do you really not understand human psychology here?
People respond positively to other people telling them they are right, or who like them. We've evolved this psychology, it's how the human mind works. You tend like people who like you, it's a self-reinforcing loop. LLMs in a sense exploit this natural bias.
> I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient.
Why are you surprised? This is the illusion most AI companies are selling. Their chat-like interfaces are designed to fool you into thinking you're talking to a sentient being. And let's not get started with their voice interfaces!
Not to criticize at all, but it's remarkable that LLMs have already become so embedded that when we get the sense they're lying to us, the instinct is to go ask another LLM and not some more trustworthy source. Just goes to show that convenience reigns supreme, I suppose.
What is that more trustworthy source exactly? At least to me it feels like the internet age has eroded most things we considered trustworthy. Behind every thing humans need there is some company or person willing to sell out trustworthiness for an extra dollar. Consumer protections get dumped in favor of more profit.
LLMs start feeling more like a dummy than the amount of ill intent they get from other places. So yea, I can see how it happens to people.
I can see a day when even that won't be trustworthy, because too much AI slop output will wind up in the search corpus. But I don't think we're there yet.
If it was easy to look up/check the fact without an LLM, wary users probably wouldn't have gone to the LLM in the first place.
Yeah, fair point. "Misleading" would be a better term, perhaps.
It's literally that easy, something anyone can think of, but people want what they want.
Related: if you suggest a hypothesis in your prompt, you'll get biased results (in other words, you'll think you're right, but the true information stays hidden).
The study explores outdated models. GPT-4o was notoriously sycophantic, and GPT-5 was specifically trained to minimize sycophancy. From GPT-5's announcement:
>We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy
And then there was the whole drama in August 2025, when people complained GPT-5 was "colder" and "lacked personality" (i.e., less sycophantic) compared to GPT-4o.
It would be interesting to study the evolution of sycophantic tendencies (decrease/increase) from model version to version, i.e. whether companies are actually doing anything about it.
It's really nothing new. It takes significant mental energy (a finite resource) to question what you're being told, and to do your own fact checking. Instead people by default gravitate towards echo chambers where they can feel good about being a part of a group bigger than themselves, and can spend their limited energy towards what really matters in their lives.
The same reason the things listed above are popular may be why the most popular LLM ends up not being the best. People don't tend to buy good things; they very commonly buy the shiniest ones. An LLM that says "you're right" sure seems a lot shinier than one that says "Mr. Jayd16, what you've just said is one of the most insanely idiotic things I have ever heard... Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul"
I disagree. What's new is that this flattery is individually, personally targeted. The AI user is given the impression that they're having a back-and-forth conversation with a single trusted friend.
You don't have the same personal experience passively consuming political mass media.
The village idiot used to be found out because no one in the village shared the same wingnut views.
Partisan media gave you two poles of wingnut views to choose from for reinforcement.
Social media allowed all the village idiots to find each other and reinforce each other's shared wingnut views, of which there are thousands to choose from.
Now, with LLMs, you can have personalized reinforcement of any newly invented wingnut view on the fly. So people can get into very specific self-radicalization loops, especially the mentally ill.
I would have downvoted your comment, except you can't downvote direct replies on HN. ;-)
And, tbh, I often try to remember to do the same.
I say "I think you are getting me to chase a guess, are you guessing?"
90% of the time it says "Yes, honestly I am. Let me think more carefully."
That was a copypasta from a chat just this morning
These things are incapable of thinking, no matter what the UI and marketing call it.
https://courts.delaware.gov/Opinions/Download.aspx?id=392880
> Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” which Kim shared with Yoon. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.”
AI shaming also works. I do it when people say “I asked GPT/Claude and”
If I wanted the bot’s opinion I could ask the bot myself. I now think less of you for thinking that it is somehow acceptable to use a bot to do your thinking for you, because when I ask YOU something I want to know what YOU think.
The invention of fast food does not change anyone's ability to exercise. When fast food was invented, people exercised far more than they do today.
Time constraints have caused an increase in fast food consumption and a reduction in exercise.
Both issues then seem to be addressed by coercing people to change their behaviour, when what is needed is a systemic change to the environment to provide preferable options.
Used to be only the wealthiest students could afford to pay someone else to write their essay homework for them. Now everyone can use ChatGPT.
Used to be you had to be a Trumpian-millionaire/Elonian-billionaire to afford an army of Yes-men to agree with your every idea. Now anyone can have that!
Anyway, no real surprise: we have many examples of people ignoring facts and moving to media that supports their views, even when those views are completely wrong. Why should AI be different?