Dead Internet Theory

(kudmitry.com)

381 points | by skwee357 16 hours ago

74 comments

  • seiferteric 9 hours ago
    My parents were tricked the other day by a fake YouTube video of a "racist cop" doing something bad, and they got outraged by it. I watched part of the video, and even though it felt off, I couldn't immediately tell for sure whether it was fake. Nevertheless I googled the names and details and found nothing but repostings of the video. Then I looked at the YouTube channel info, and there it said the channel uses AI for "some" of the videos to recreate "real" events. I really doubt that.. it all looks fake. I am just worried about how much divisiveness this kind of stuff will create, all so someone can profit off of YouTube ads.. it's sad.
    • lrvick 57 minutes ago
      If there are ad incentives, assume all content is fake by default.

      On the actual open, decentralized internet, which still exists (Mastodon, IRC, Matrix...), bots are rare.

      • tomaskafka 23 minutes ago
        That’s not because it’s decentralized or open, it’s because it doesn’t matter. If it was larger or more important, it would get run over by bots in weeks.

        Any platform that wants to resist bots needs to:
        - tie personas to real or expensive identities
        - force people to add an AI flag to AI content
        - let readers filter out content not marked as AI
        - be absolutely ruthless in permabanning anyone who posts AI content unmarked: one strike and you are dead forever

        The issue then becomes that marking someone as “posts unmarked AI content” becomes a weapon. No idea about how to handle it.

      • direwolf20 45 minutes ago
        Of course - because everyone is banned upon first suspicion.
    • b3lvedere 17 minutes ago
      Just twenty minutes ago I got a panic call from someone who was getting dozens of messages that their virus scanner wasn't working and that they had hundreds of viruses. Blocking Google Chrome from sending messages to the Windows notification bar brought everything on the computer back to normal.

      Customer asked if reporting these kinds of illegal ads would be the best course. Nope, not by a long shot. As long as Google gets its money, they will not care. Ads have become a cancer of the internet.

      Maybe I should set up a Pi-hole business...

    • BrtByte 1 hour ago
      It's sad, yeah. And exhausting. The fact that you felt something was off and took the time to verify already puts you ahead of the curve, but it's depressing that this level of vigilance is becoming the baseline just to consume media safely.
    • ryanjshaw 5 hours ago
      I’m spending way too much time on the RealOrAI subreddits these days. I think it scares me because I get so many wrong, so I keep watching more, hoping to improve my detection skills. I may have to accept that this is just the new reality - never quite knowing the truth.
      • raincole 4 hours ago
        Those subreddits label content wrong all the time. Some of the top commenters are trolling (I've seen one cooking video where the most upvoted comment was "AI, the sauce stops when it hits the plate"... as thick sauce should do).

        You're training yourself with a very unreliable source of truth.

        • input_sh 1 hour ago
          > Those subreddits label content wrong all the time.

          Intentionally if I might add. Reddit users aren't particularly interested in providing feedback that will inevitably be used to make AI tools more convincing in the future, nobody's really moderating those subs, and that makes them the perfect target for poisoning via shitposting in the comments.

        • ryanjshaw 1 hour ago
          > You're training yourself with a very unreliable source of truth.

          I don’t just look at the bot decision or accept every consensus blindly. I read the arguments.

          If I watch a video and think it’s real and the comments point to the source, which has a description saying they use AI, how is that unreliable?

          Alternatively, if I watch a video and think it's AI, but a commenter points to a source like YT where the video was posted 5 years ago, or to multiple similar videos/news articles about the weird subject of the video, how is that unreliable?

          • iwontberude 30 minutes ago
            Which themselves are arguments from bots.
      • lesam 1 hour ago
        Before photography, we knew something was truthful because someone trustworthy vouched for it.

        Now that photos and videos can be faked, we'll have to go back to the older system.

        • ekianjo 18 minutes ago
          It was always easy to fake photos too. Just organize the scene, or selectively frame what you want. There is no such thing as any piece of media you can trust.
          • bandrami 2 minutes ago
            The construction workers having lunch on the girder in that famous photo were in fact about four feet above a safety platform; it's a masterpiece of framing and cropping. (Ironically the photographer was standing on a girder out over a hundred stories of nothing).
        • expedition32 7 minutes ago
          Ah yes the good old days of witch trials and pogroms.

          I am no big fan of AI but misinformation is a tale as old as time.

      • lukan 4 hours ago
        "I may have to accept that this is just the new reality - never quite knowing the truth."

        Some people, quite some time ago, also came to that conclusion. (And they did not even have AI to blame.)

        https://en.wikipedia.org/wiki/I_know_that_I_know_nothing

        • padjo 3 hours ago
          I’m really hoping that we’re about to see an explosion in critical thinking and skepticism as a response to generative AI.

          Any day now… right?

          • ryanjshaw 1 hour ago
            I show my young daughter this stuff and try to role model healthy skepticism. Critical thinking YT like Corridor Crew’s paranormal UFO/bigfoot/ghosts/etc series is great too. Peer pressure might be the deciding factor in what she ultimately chooses to believe, though.
          • notarobot123 3 hours ago
            I think the broader response and re-evaluation is going to take a lot longer. Children of today are growing up in an obviously hostile information environment whereas older folk are trying to re-calibrate in an environment that's changing faster than they are.

            If the next generation can weather the slop storm, they may have a chance to re-establish new forms of authentic communication, though probably on a completely different scale and in different forms to the Web and current social media platforms.

          • efnx 3 hours ago
            One can hope!
            • lukan 2 hours ago
              Yeah, one can. But then I see people just accepting the weak google search AI summary as plain facts and my hope fades away.
      • bradgessler 4 hours ago
        What if AI is running RealOrAI to trick us into never quite knowing the truth?
    • quantummagic 7 hours ago
      As they say, the demand for racism far outstrips the supply. It's hard to spend all day outraged if you rely on reality to supply enough fodder.
      • InsideOutSanta 2 hours ago
        This is not the right thing to take away from this. This isn't about one group of people wanting to be angry. It's about creating engagement (for corporations) and creating division in general (for entities intent on harming liberal societies).

        In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.

        We have social networks like Facebook that require people to be angry, because anger generates engagement, and engagement generates views, and views generate ad impressions. We have outside actors who benefit from division, so they also fuel that fire by creating bot accounts that post inciting content. This has nothing to do with racism or people on one side. One second, these outside actors post a fake incident of a racist cop to fire up one side, and the next, they post a fake incident about schools with litter boxes for kids who identify as pets to fire up the other side.

        Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.

        • DrScientist 52 minutes ago
          > Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.

          It's not built to make people angry per se - it's built to optimise for revenue generation - which so happens to be content that makes people angry.

          People have discovered that creating and posting such content makes them money, and the revenue is split between themselves and the platforms.

          In my view, if the platforms can't tackle this problem then the platforms should be shut down. Promoting this sort of material should be illegal, and "our business model won't work if we are made responsible for the things we do" is not an excuse.

          I.e. while it turns out you can easily scale one side of publishing (putting stuff out there and getting paid by ads), you can't so easily scale the other side, which is being responsible for your actions. If you haven't solved both sides, you don't have a viable business model, in my view.

        • zahlman 1 hour ago
          > In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.

          I don't see anything like outrage in GP, just a vaguely implied sense of superiority (political, not racial!).

        • blfr 2 hours ago
          I agree with grandparent and think you have cause and effect backwards: people really do want to be outraged so Facebook and the like provide rage bait. Sometimes through algos tuning themselves to that need, sometimes deliberately.

          But Facebook cannot "require" people to be angry. Facebook can barely even "require" people to log in, except those locked into the Messenger ecosystem.

          I don't use Facebook but I do use TikTok, and Twitter, and YouTube. It's very easy to filter rage bait out of your timeline. I get very little of it, mark it "uninterested"/mute/"don't recommend channel" and the timeline dutifully obeys. My timelines are full of popsci, golden retrievers, sketches, recordings of local trams (nevermind), and when AI makes an appearance it's the narrative kind[1] which I admit I like or old jokes recycled with AI.

          The root of the problem is in us. Not on Facebook. Even if it exploits it. Surfers don't cause waves.

          [1] https://www.tiktok.com/@gossip.goblin

          • InsideOutSanta 2 hours ago
            > people really do want to be outraged

            No, they do not. Nobody[1] wants to be angry. Nobody wakes up in the morning and thinks to themselves, "today is going to be a good day because I'm going to be angry."

            But given the correct input, everyone feels that they must be angry, that it is morally required to be angry. And this anger then requires them to seek out further information about the thing that made them angry. Not because they desire to be angry, but because they feel that there is something happening in the world that is wrong and that they must fight.

            [1]: for approximate values of "nobody"

            • lazide 1 hour ago
              If you think for a bit on what you just wrote, I’m pretty sure you’re agreeing with what they wrote.

              You’re literally saying why people want to be angry.

              • quietbritishjim 1 hour ago
                I suppose the subtlety is that people want to be angry if (and only if) reality demands it.

                My uneducated feeling is that, in a small society, like a pre-civilisation tribal one where maybe human emotions evolved, this is useful because it helps enact change when and where it's needed.

                But that doesn't mean that people want to be angry in general, in the sense that if there's nothing in reality to be angry about then that's even better. But if someone is presented with something to be angry about, then that ship has sailed so the typical reaction is to feel the need to engage.

                • InsideOutSanta 12 minutes ago
                  >in a small society, like a pre-civilisation tribal one where maybe human emotions evolved, this is useful because it helps enact change when and where it's needed

                  Yes, I think this is exactly it. A reaction that may be reasonable in a personal, real-world context can become extremely problematic in a highly connected context.

                  On one side, as an individual, you can be inundated with things you feel a moral obligation to react to. On the other side of the equation, if you say something stupid online, you can suddenly have thousands of people attacking you for it.

                  Every single action seems reasonable, or even necessary, to each individual person, but because everything is scaled up by all the connections, things immediately escalate.

                • lazide 17 minutes ago
                  If people are bored, they’ll definitely seek out things that make them less bored. It’s hard to be less bored than when you’re angry.
              • InsideOutSanta 27 minutes ago
                There's a difference between wanting to be angry and feeling that anger is the correct response to an outside stimulus.

                I don't wake up thinking "today I want to be angry", but if I go outside and see somebody kicking a cat, I feel that anger is the correct response.

                The problem is that social media is a cat-kicking machine that drags people into a vicious circle of anger-inducing stimuli. If people think that every day people are kicking cats on the Internet, they feel that they need to do something to stop the cat-kicking; given their agency, that "something" is usually angry responses and attacks, which feeds the machine.

                Again, they do not do that because they want to be angry; most people would rather be happy than angry. They do it because they feel that cats are being kicked, and anger is the required moral response.

                • lazide 19 minutes ago
                  And if you seek out (and push ‘give me more’ buttons on) cat kicking videos?

                  At some point, I think it’s important to recognize the difference between revealed preferences and stated preferences. Social media seems adept at exposing revealed preferences.

                  If people seek out the thing that makes them angry, how can we not say that they want to be angry? Regardless of what words they use.

                  And for example, I never heard anyone who was a big Fox News, Rush Limbaugh, or Alex Jones fan who said they wanted to be angry or paranoid (to be fair, this was pre-Trump and awhile ago), yet every single one of them I saw got angry and paranoid after watching them, if you paid any attention at all.

          • RGamma 2 hours ago
            You may be vastly overestimating average media competence. This is one of those things where I'm glad my relatives are so timid about the digital world.
      • neilv 6 hours ago
        I hadn't heard that saying.

        Many people seek being outraged. Many people seek to have awareness of truth. Many people seek getting help for problems. These are not mutually exclusive.

        Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.

        In various forms, with various levels of harm, and with various levels of evidence available.

        (Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)

        Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.

        (Famous alleged example, in second paragraph: https://en.wikipedia.org/wiki/Killian_documents_controversy#... )

        • self_awareness 4 hours ago
          Did you just justify generating racist videos as a good thing?
          • neilv 23 minutes ago
            I don't think so. I was trying to respond to a comment in a way that was diplomatic and constructive. I can see that came out unclear.
          • nkmnz 2 hours ago
            Is a video documenting racist behavior a racist or an anti-racist video? Is a faked video documenting racist behavior (that never happened) racist or anti-racist? And is the act of faking such a video racist or anti-racist behavior?
            • garretraziel 2 hours ago
              It doesn’t have to be either for it to be morally bad.
            • self_awareness 42 minutes ago
              A video showing racist behavior is racist and anti-racist at the same time. A racist will be happy watching it, and an anti-racist will forward it to further their anti-racist message.

              Faking a racist video that never happened is, first of all, faking. Second, it's the same: racist and anti-racist at the same time. Third, it falsifies the prevalence of such occurrences.

              If you add a disclaimer to the video, "this video has been AI-generated, but it shows events that happen all across the US daily", then there's no problem. Nobody is being lied to about anything. The video carries the message; it's not faking anything. But when you pass off a fake video as a real occurrence, you're lying, as simple as that.

              Can a lie be told in good faith? I'm afraid that not even philosophy can answer that question. But it's really telling that leftists are sure about the answer!

          • QuadmasterXLII 35 minutes ago
            The reading comprehension on this website is piss poor.
            • self_awareness 19 minutes ago
              The quality of comments is also not that great.
          • mxkopy 4 hours ago
            Think they did the exact opposite

            > Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered

            • self_awareness 3 hours ago
              Well yes, that's what he wrote, but that's like saying: stealing can be done for a variety of reasons, including by someone who intends the theft to be discovered? Killing can be done for a variety of reasons, including by someone who intends the killing to be discovered?

              I read it as "producing racist videos can sometimes be used in good faith"?

              • pfg_ 1 hour ago
                They're saying one example of a reason someone could fake a video is so it would get found out and discredit the position it showed. I read it as them saying that producing the fake video of a cop being racist could have been done to discredit the idea of cops being racist.
              • Nevermark 3 hours ago
                There are significant differences between how the information world and the physical world operate.

                Creating all kinds of meta-levels of falsity is a real thing, with multiple lines of objective (if nefarious) motivation, in the information arena.

                But even physical crimes can have meta information purposes. Putin for instance is fond of instigating crimes in a way that his fingerprints will inevitably be found, because that is an effective form of intimidation and power projection.

              • mxkopy 3 hours ago
                I think they’re just saying we should interpret this video in a way that’s consistent with known historical facts. On one hand, it’s not depicting events that are strictly untrue, so we shouldn’t discredit it. On the other hand, since the video itself is literally fake, when we discredit it we shouldn’t accidentally also discredit the events it’s depicting.
                • self_awareness 2 hours ago
                  Are you saying that if there is one instance of a true event, then fake videos done in a similar way to that true event are rational and needed?
                  • mxkopy 2 hours ago
                    The insinuation that racism in the US is not systemic reeks of ignorance

                    Edit: please, prove your illiteracy and lack of critical thinking skills in the comments below

                    • self_awareness 1 hour ago
                      How do I know that most racist incidents weren't simulated by you guys? Since you clearly say that it's OK to generate lies about it?

                      Edit: I literally demonstrate my ability to think critically.

                    • lazide 1 hour ago
                      So make fake videos of events that never actually happened, because real events surely did that weren’t recorded? Or weren’t viral enough? Or something?

                      Do you realize how crazy this sounds?

          • thinkingemote 3 hours ago
            How about this question: Can generating an anti-racist video be justified as a good thing?

            I think many here would say "yes!" to this question, so can an anti-racist justify saying "no"?

            Generally I prefer questions that do not terminate thought. Seek to keep a discussion going, not stop it.

            On the subject of this thread, these questions are quite old and are related to propaganda: is it okay to use propaganda if we are the Good Guys, if, by doing so, it leaves our own people more susceptible to propaganda from the Bad Guys? Every single one of our nations and governments thinks yes, it's good to use propaganda.

            That's explicitly what happened during the rise of Nazi Germany: the USA had an official national programme of propaganda awareness and manipulation resistance, which had to be shut down because the country needed to use propaganda on its own citizens and the enemy during WW2.

            So back to the first question: it's not the content (whether it's racist or not), it's the effect: would producing fake content reach a desired policy goal?

            Philosophically it's truth vs lie, can we lie to do good? Theologically in the majority of religions, this has been answered: lying can never do good.

            • self_awareness 28 minutes ago
              Game theory tells us that we should lie if someone else is lying, for some time. Then we should try trusting again. But we should generally tell the truth at the beginning; we sometimes lose to those who lie all the time, but we can gain more than the eternal liar if we encounter someone who behaves just like us. Assuming our strategy is in the majority, this works.

              But this is game theory, a dead and amoral mechanism that is mostly used by the animal kingdom. I'm sure humanity is better than that?

              Propaganda is war, and each time we use war measures, we're getting closer to it.
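              The strategy sketched above (start honest, mirror a liar, then return to trusting) is essentially tit-for-tat in an iterated Prisoner's Dilemma. A minimal simulation, with illustrative payoff values and hypothetical function names (nothing here comes from the thread itself), shows why the honest strategy wins in a population of its peers even though the eternal liar edges it out head-to-head:

```python
COOPERATE, DEFECT = "C", "D"

# PAYOFF[(my_move, their_move)] -> my score; standard illustrative values.
PAYOFF = {
    ("C", "C"): 3,  # mutual trust
    ("C", "D"): 0,  # I am exploited
    ("D", "C"): 5,  # I exploit
    ("D", "D"): 1,  # mutual lying
}

def tit_for_tat(my_history, their_history):
    # Tell the truth first; afterwards mirror the opponent's last move,
    # which also means trusting again as soon as they stop lying.
    return COOPERATE if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    # The eternal liar.
    return DEFECT

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game and return (score_a, score_b)."""
    a_hist, b_hist = [], []
    a_score = b_score = 0
    for _ in range(rounds):
        a = strategy_a(a_hist, b_hist)
        b = strategy_b(b_hist, a_hist)
        a_hist.append(a)
        b_hist.append(b)
        a_score += PAYOFF[(a, b)]
        b_score += PAYOFF[(b, a)]
    return a_score, b_score

print(play(tit_for_tat, always_defect))  # (99, 104): the liar wins narrowly
print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual honesty pays far more
print(play(always_defect, always_defect))  # (100, 100): liars do poorly together
```

              Head-to-head the liar gains only a one-round advantage, while two truth-tellers vastly outscore two liars, which is the "assuming our strategy is in the majority, this works" part of the comment.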

      • hn_throwaway_99 5 hours ago
        I like that saying. You can see it all the time on Reddit where, not even counting AI generated content, you see rage bait that is (re)posted literally years after the fact. It's like "yeah, OK this guy sucks, but why are you reposting this 5 years after it went viral?"
      • silisili 6 hours ago
        Rage sells. Not long after the EBT changes, there was a rash of videos of people playing the welfare-recipient caricature that people against welfare imagine in their heads: women, usually black, speaking improperly about how the taxpayers need to take care of their kids.

        Not sure how I feel about that, to be honest. On one hand I admire the hustle for clicks. On the other, too many people fell for it and probably never knew it was a grift, making all recipients look bad. I only happened upon them while researching a bit after my own mom called me raging about it and sent me the link.

      • blks 3 hours ago
        You sure about that? I think actions of the US administration together with ICE and police work provide quite enough
      • watwut 3 hours ago
        Wut? If you listen to what real people say, racism is quite common and has all the power right now.
      • Refreeze5224 6 hours ago
        [flagged]
        • theteapot 6 hours ago
          I'm noticing more of these race baiting comments on YC too lately. AI?
          • ycombinator_acc 4 hours ago
            No, that’s a common cope.

            Not AI. Not bots. Not Indians or Pakistanis. Not Kremlin or Hasbara agents. All the above might comprise a small percentage of it, but the vast majority of the rage bait and rage bait support we’ve seen over the past year+ on the Internet (including here) is just westerners being (allowed and encouraged by each other to be) racist toward non-whites in various ways.

      • actionfromafar 4 hours ago
        That's why this administration is working hard to fill the demand.
      • pjc50 1 hour ago
        Wrong takeaway. There are plenty of real incidents. The reason for posting fake incidents is to discredit the real ones.
    • sheept 8 hours ago
      a reliable giveaway for AI generated videos is just a quick glance at the account's post history—the videos will look frequent, repetitive, and lack a consistent subject/background—and that's not something that'll go away when AI videos get better
      • eru 7 hours ago
        > [...] and lack a consistent subject/background—and that's not something that'll go away when AI videos get better

        Why not? Surely you can ask your friendly neighbourhood AI to run a consistent channel for you?

        • sheept 5 hours ago
          AI is capable of consistent characters now, yes, but the platforms themselves provide little incentive to. TikTok/Instagram Reels are designed to serve recommendations, not a user-curated feed of people you follow, so consistency is not needed for virality
      • zahlman 1 hour ago
        How can they look repetitive while being inconsistent? Do you mean in terms of presentation / "editing" style?
      • cortesoft 7 hours ago
        Or they are reposting other people's content
      • phire 1 hour ago
        I actually avoid most YouTube channels that upload too frequently. Especially with consistent schedules.

        Even if I'm 100% certain it's not AI slop, it's still a very strong indicator that the videos are some kind of slop.

      • fallinditch 7 hours ago
        A giveaway for detecting AI-generated text is the use of em-dashes, as noted in the OP - you are caught bang to rights!
        • nicbou 4 hours ago
          Some keyboards and operating systems — iOS is one of them — convert two dashes into an emdash.
          • Nevermark 2 hours ago
            I can’t wait for my keyboard to start auto-completing “Your” with “are absolutely right!”
            • nicbou 2 hours ago
              In this case Apple has cared about typography since its very beginning. Steve Jobs obsessed over it. The OS also replaces simple quotes with fancier ones.

              I do the same on my websites. It's embedded into my static site generator.

              Very related: https://practicaltypography.com/

              • Nevermark 2 hours ago
                Agreed that is useful, despite the unintended consequences of dash meddling.
        • lucumo 5 hours ago
          Not long ago, a statistical study found that AI almost always has an 'e' in its output. It is a firm indicator of AI slop. If you catch a post with an 'e', pay it no mind: it's probably AI.

          Uh-oh. Caught you. Bang to rights! That post is firmly AI. Bad. Nobody should mind your robot posts.

          • PurelyApplied 5 hours ago
            I apprEciatE your dEdication to ExclusivEly using 'e' in quotEd rEfErEncE, but not in thE rEst of your clEarly human-authorEd tExt.

            I rEgrEt that I havE not donE thE samE, but plEase accEpt bad formatting as a countErpoint.

          • eks391 5 hours ago
            I'm incredibly impressed that you managed to make that whole message without a single usage of the most frequently used letter, except in your quotations.
            • zahlman 1 hour ago
              Such omission is a hobby of many WWW folk. I can, in fact, think back to finding a community on R*ddit known as "AVoid5", which had this trial as its main point.

              Down with that foul fifth glyph! Down, I say!

            • agoodusername63 4 hours ago
              Bet they asked an AI to make the bit work /s
              • lucumo 2 hours ago
                :-D

                I did ask G'mini for synonyms. And to do a cursory count of e's in my post. Just as a 2nd opinion. It found only glyphs with quotation marks around it. It graciously put forward a proxy for that: "the fifth letter".

                It's not oft that you run into such alluring confirmation of your point.

                • throwaway290 1 hour ago
                  I'm having my thumbs-up back >:(

                  My first post took around 6 min & a dictionary. This post took 3. It's a quick skill.

                  No LLMs. Ctrl+f shows you all your 'e's without switching away from this tab. (And why count it? How many is not important, you can simply look if any occur and that's it)

          • mmarq 3 hours ago
          • throwaway290 5 hours ago
            Finally a human in this forum. Many moons did I long for this contact.

            (Assuming you did actually hand craft that I thumbs-up both your humor and industry good sir)

          • Terr_ 5 hours ago
            nice try but u used caps and punctuation lol bot /s
    • SilverSlash 8 hours ago
      I really wish Google would flag videos with any AI content they detect.
      • zdc1 7 hours ago
        It's a band-aid solution, given that eventually AI content will be indistinguishable from real-world content. Maybe we'll even see a net of fake videos citing fake news articles, etc.

        Of course there are still "trusted" mainstream sources, except they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.

        • hattmall 7 hours ago
          It's not really any different than stopping the sale of counterfeit goods on a platform. Which is a challenge, but hardly insurmountable, and the payoff from AI videos won't be nearly as good. You can make a few thousand a day selling knockoffs to a small number of people and get reliably paid within 72 hours. To make the same off of "content" you would have to get millions of views, and the payout timeframe is weeks if not months. YouTube doesn't pay you out unless you are verified, so ban people who post AI without disclosing it and the well will run dry quickly.
          • Nevermark 2 hours ago
            Well then email spam will never have an incentive. That is a relief! I was going to predict that someday people would start sending millions of misleading emails or texts!
          • esseph 7 hours ago
            The payoff from AI videos could get someone into the White House.
        • nottorp 3 hours ago
          > eventually AI content will be indistinguishable from real-world content

          You get it wrong. Real-world content will become indistinguishable from "AI" content because that's what people will consider normal.

        • cubefox 1 hour ago
          It's not a band-aid at all. In fact, recognition is nearly always algorithmically easier than creation. Which would mean fake-AI detectors have an inherent advantage over fake-AI creators.
        • esseph 7 hours ago
          I said something to a friend about this years ago with AI... We're going to stretch the legal and political system to the point of breaking.
      • munificent 8 hours ago
        Would be nice, but unlikely given that they are going in the opposite direction and having YouTube silently add AI to videos without the author even requesting it: https://www.bbc.com/future/article/20250822-youtube-is-using...
        • ruperthair 3 hours ago
          Wow! I hadn't seen this, thanks. Do you think they are doing it with relatively innocent motives?
    • hshdhdhj4444 8 hours ago
      The problem’s gonna be when Google as well is plastered with fake news articles about the same thing. There will be little to no way to know whether something is real or not.
      • ekianjo 16 minutes ago
        That was already the case for anything printed or written. You have no way of telling if this is true or not.
    • josfredo 3 hours ago
      I fail to understand your worry. This will change nothing regarding some people’s tendency to foster and exploit negative emotions for traction and money. “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness? You worry about what could happen but everything already has happened.
      • acatton 2 hours ago
        > “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness?

        Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.

        It takes a lot of effort to find the right clip, cut it to remove its context, and even more effort to doctor a clip. Yes, you're still facing Brandolini's law[1], you can see that with the amount of effort Captain Disillusion[2] put in his videos to debunk crap.

        But AI makes it 100× worse. First, generating an entirely convincing video only takes a little bit of prompting and waiting; no skill is required. Second, you can do it on a massive scale. You can easily make 2 AI videos a day. If you want to doctor videos "the old way", you'll need a team of VFX artists to do it at this scale.

        I genuinely think that tech-literate folks, like myself and other hackernews posters, don't understand that significantly lowering the barrier to entry to X doesn't make X equivalent to what it was before. Scale changes everything.

        [1] https://en.wikipedia.org/wiki/Brandolini%27s_law

        [2] https://www.youtube.com/CaptainDisillusion

      • haxiomic 3 hours ago
        The current situation is not as bad as it can get; this is accelerant on the fire and it can get a lot worse
        • troupo 2 hours ago
          I've been using "It will get worse before it gets worse" more and more lately
      • Nevermark 3 hours ago
        It really isn’t that slop didn’t exist before.

        It is that it is increasingly becoming indistinguishable from not-slop.

        There is a different bar of believability for each of us. None of us are always right when we make a judgement. But the cues to making good calls without digging are drying up.

        And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.

        It isn’t the same staying the same. Nothing has ever stayed the same. “Staying the same” isn’t a thing in nature and hasn’t been the trend in human history.

        • vladms 3 hours ago
          True for videos, but not true for any type of "text claim", which was already plentiful 10 years ago and already hard to fight (think: misquoting people, strange references to science articles, dubiously interpreted facts, etc.).

          But I would claim that "trusting blindly" was much more common hundreds of years ago than it is now, so we might make some progress in fact.

          If people learn to be more skeptical (because at some point they might get that things can be fake) it might even be a gain. The transition period can be dangerous though, as always.

          • Nevermark 2 hours ago
            You are right that text had this problem.

            But today’s text manufacturing isn’t your grand..., well, yesterday’s text manufacturing.

            And pretty soon it will be very persuasive models with lots of patience and manufactured personalized credibility and attachment “helping” people figure out reality.

            The big problem isn’t the tech getting smarter though.

            It’s the legal and social tolerance for conflicts of interest at scale. Like unwanted (or dark-pattern-permissioned) surveillance, which is all but unavoidable, being used to manipulate feeds controlled by third parties (sitting between us and any organically intentioned contacts), toward influencing us in any way anyone will pay for. AI is just walking through a door that has been left wide open despite a couple decades of hard lessons.

            Incentives, as they say, matter.

            Misinformation would exist regardless, but we didn’t need it to be a cornerstone business model with trillions of dollars of market cap unifying its globally coordinated, efficient and effective, near unavoidable, continual insertion into our and our neighbors’ lives. With shareholders relentlessly demanding double digit growth.

            Doesn’t take any special game theory or economic theory to see the problematic loop there. Or to predict it will continue to get worse, and will be amplified by every AI advance, as long as it isn’t addressed.

    • Fr0styMatt88 8 hours ago
      I find the sound is a dead giveaway for most AI videos — the voices all sound like a low bitrate MP3.

      Which will eventually get worked around and can easily be masked by just having a backing track.

      • fsckboy 8 hours ago
        that sounds like one of the worst heuristics I've ever heard, worse than "em-dash=ai" (em-dash equals ai to the illiterate class, who don't know what they are talking about on any subject and who also don't use em-dashes, but literate people do use em-dashes and also know what they are talking about. this is called the Dunning-Em-Dash Effect, where "dunning" refers to the payback of intellectual deficit whereas the illiterate think it's a name)
        • Duanemclemore 7 hours ago
          The em-dash=LLM thing is so crazy. For many years Microsoft Word has AUTOCORRECTED the typing of a single hyphen to the proper syntax for the context -- whether a hyphen, en-dash, or em-dash.

          I would wager good money that the proliferation of em-dashes we see in LLM-generated text is due to the fact that there are so many correctly used em-dashes in publicly-available text, as auto-corrected by Word...

          • XorNot 6 hours ago
            Which would matter, but the text entry box in no major browser does this.

            The HN text area does not insert em-dashes for you and never has. On my phone keyboard it's a very deliberate action to add one (symbol mode, long press hyphen, slide my finger over to em-dash).

            The entire point is that it's contextual: em-dashes appearing where no accommodations make them likely.

            • bee_rider 5 hours ago
              Is this—not an em-dash? On iOS I generated it by double tapping dash. I think there are more iOS users than AIs, although I could be wrong about that…
            • Duanemclemore 5 hours ago
              Yeah, I get that. And I'm not saying the author is wrong, just commenting on that one often-commented-upon phenomenon. If text is being input to the field by copy-paste (from another browser tab) anyway, who's to say it's not (hypothetically) being copied and pasted from the word processor in which it's being written?
        • root_axis 8 hours ago
          The audio artifacts of an AI generated video are a far more reliable heuristic than the presence of a single character in a body of text.
          • dragonwriter 2 hours ago
            Well, it's probably lower false positive than en-dash but higher false negative, especially since AI-generated video, even when it has audio, may not have AI-generated audio. (Generation conditioned on a text prompt, starting image, and audio track is among the common modes for AI video generation.)
          • dorfsmay 6 hours ago
            For now. A year ago there weren't even gen-AI videos. Give it a few months...
        • D-Machine 6 hours ago
          Thank you for saving me the time writing this. Nothing screams midwit like "Em-dash = AI". If AI detection was this easy, we wouldn't have the issues we have today.
        • kelvie 6 hours ago
          Of note is the other terrible heuristic I've seen thrown around, where "emojis = AI", and now the "if you use 'not X, but Y' = AI".
          • bhaak 5 hours ago
            With the right context both are pretty good actually.

            I think the emoji one is most pronounced in bullet point lists. AI loves to add an emoji to bullet points. I guess they got it from lists in hip GitHub projects.

            The other one is not as strong, but if the "not X but Y" is somewhat nonsensical or unnecessary, that's a very strong indicator it's AI.

            • zahlman 1 hour ago
              >I guess they got it from lists in hip GitHub projects.

              I see this way more often on GitHub now than I did before, though.

          • wjholden 5 hours ago
            Similarly: "The indication for machine-generated text isn't symbolic. It's structural." I always liked this writing device, but I've seen people label it artificial.
          • bee_rider 5 hours ago
            Em-dashes are completely innocent. “Not X but Y” is some lame rhetorical device, I’m glad it is catching strays.
        • fuzzer371 8 hours ago
          No one uses em dashes
          • dragonwriter 7 hours ago
            If nobody used em-dashes, they wouldn’t have featured heavily in the training set for LLMs. It is used somewhat rarely (some people use it a lot, others not at all) in informal digital prose, but that’s not the same as being entirely unused generally.
          • crimony 7 hours ago
            Microsoft Word automatically converts dashes to em dashes as soon as you hit space at the end of the next word after the dash.
            • BLKNSLVR 7 hours ago
              That's the only way I know how to get an em dash. That's how I create them. I sometimes have to re-write something to force the "dash space <word> space" sequence in order for Word to create it, and then I copy and paste the em dash into the thing I'm working on.
              • Terr_ 3 hours ago
                Alt-0151 on the numpad in Windows.

                Long-press on the hyphen on most Android keyboards.

                Or open whatever "Character Map" application usually comes with any desktop OS, and copy it from there.

              • leoc 6 hours ago
                Windows 10/11’s clipboard stack lets you pin selections into the clipboard, so — and a variety of other characters live in mine. And on iOS you just hold down -, of course.
              • robin_reala 6 hours ago
                Option shift - in macOS (option - gives you an en dash).
              • dboreham 6 hours ago
                You can Google search "em-dash" then copy/paste from the resulting page.
              • cwnyth 6 hours ago
                Ctrl+Shift+U, then 2014 (em dash) or 2013 (en dash) in Linux. Former academic here, and I use the things all the time. You can find them all over my pre-LLM publications.
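                Those four-digit codes are the characters' Unicode codepoints in hex, which is also the easiest way to tell the dashes apart programmatically. A quick sketch:

```python
# The three characters discussed in this thread, with their codepoints.
chars = {
    "hyphen-minus": "-",   # the plain keyboard key
    "en dash": "\u2013",   # used for ranges like 1-9
    "em dash": "\u2014",   # the parenthetical dash
}
for name, ch in chars.items():
    print(f"{name}: {ch} U+{ord(ch):04X}")
```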
          • schrodinger 7 hours ago
            I do—all the time. Why not?

            I also use en dashes when referring to number ranges, e.g., 1–9

            • dboreham 6 hours ago
              I didn't know these fancy dashes existed until I read Knuth's first book on typesetting. So probably 1984. Since then I've used them whenever appropriate.
          • rmunn 7 hours ago
            Except for Emily Dickinson, who is an outlier and should not be counted.

            Seriously, she used dashes all the time. Here is a direct copy and paste of the first two stanzas of her poem "Because I could not stop for Death" from the first source I found, https://www.poetryfoundation.org/poems/47652/because-i-could...

              Because I could not stop for Death –
              He kindly stopped for me –
              The Carriage held but just Ourselves –
              And Immortality.
            
              We slowly drove – He knew no haste
              And I had put away
              My labor and my leisure too,
              For His Civility –
            
            Her dashes have been rendered as en dashes in this particular case rather than em dashes, but unless you're a typography enthusiast you might not notice the difference (I certainly didn't and thought they were em dashes at first). I would bet if I hunted I would find some places where her poems have been transcribed with em dashes. (It's what I would have typed if I were transcribing them).
          • awakeasleep 7 hours ago
            Except for highly literate people, and people who care about typography.

            Think about it— the robots didn’t invent the em-dash. They’re copying it from somewhere.

            • amrocha 6 hours ago
              My impression of people that say they’re em dash users is that they’re laundering their dunning kruger through AI.
          • DocTomoe 7 hours ago
            Tell me you never worked with LaTeX and an university style guide without telling me you never worked with LaTeX and an university style guide.
            • account42 3 hours ago
              Approximately no one writes internet comments or even articles in LaTeX.
    • mlrtime 56 minutes ago
      There are top posts daily of either 100% false images or very doctored images to portray a narrative (usually political or social) on reddit.

      Then the comments are usually not critical of the image, but instead portray the people supporting the [fake] image as being in a cult. It's wild!

    • alex1138 8 hours ago
      Next step: find out whether Youtube will remove it if you point it out

      Answer? Probably "of course not"

      They're too busy demonetizing videos, aggressively copyright striking things, or promoting Shorts, presumably

    • TiredOfLife 5 hours ago
      You don't need AI for that.

      https://youtu.be/xiYZ__Ww02c

    • phatfish 1 hour ago
      Google is complicit in this sort of content by hosting it, no questions asked. They will happily see society tear itself apart as long as they are getting some ad revenue out of it. Same as the other social media companies.

      And yes I know the argument about Youtube being a platform it can be used for good and bad. But Google control and create the algorithm and what is pushed to people. Make it a dumb video hosting site like it used to be and I'll buy the "bad and good" angle.

  • viccis 10 hours ago
    >which is not a social network, but I’m tired of arguing with people online about it

    I know this was a throwaway parenthetical, but I agree 100%. I don't know when the meaning of "social media" went from "internet based medium for socializing with people you know IRL" to a catchall for any online forum like reddit, but one result of this semantic shift is that it takes attention away from the fact that the former type is all but obliterated now.

    • LexiMax 9 hours ago
      > the former type is all but obliterated now.

      Discord is the 9,000lb gorilla of this form of social media, and it's actually quietly one of the largest social platforms on the internet. There's clearly a desire for these kinds of spaces, and Discord seems to be filling it.

      While it stinks that it is controlled by one big company, it's quite nice that its communities are invite-only by default and largely moderated by actual flesh-and-blood users. There's no single public shared social space, which means there's no one shared social feed to get hooked on.

      Pretty much all of my former IRC/Forum buddies have migrated to Discord, and when the site goes south (not if, it's going to go public eventually, we all know how this story plays out), we expect that we'll be using an alternative that is shaped very much like it, such as Matrix.

      • PaulDavisThe1st 8 hours ago
        > Discord is the 9,000lb gorilla of this form of social media, and it's actually quietly one of the largest social platforms on the internet. There's clearly a desire for these kinds of spaces, and Discord seems to be filling it.

        The "former type" had to do with online socializing with people you know IRL.

        I have never seen anything on Discord that matches this description.

        • LexiMax 8 hours ago
          I'm in multiple Discord servers with people I know IRL.

          In fact, I'd say it's probably the easiest way to bootstrap a community around a friend-group.

          • gwd 1 hour ago
            Is this a generational thing? All my groups of this type are on WhatsApp (unfortunately).
            • SirHumphrey 30 minutes ago
              Maybe, but at least in my circles it’s a structure thing: until the group can sanely be organised in a single chat, something else will be used; but as soon as multiple chats are required, the thing moves to Discord.
            • midius 39 minutes ago
              might be a regional thing instead, i don't know many americans with whatsapp -- all of my friends are on discord.
        • nitwit005 6 hours ago
          You're essentially saying you haven't seen anyone's private chats.

          I'm in a friend Discord server. It's naturally invisible unless someone sends you an invite.

        • thot_experiment 6 hours ago
          Yeah, same as sibling comments, I'm in multiple Discord servers for IRL friend groups. I personally run one with ~50 people that sees hundreds of messages a day. By far my most-used form of social media. Also, as OP said, I'll be migrating to Matrix (probably) when they IPO; we've already started an archival project just in case.
        • andyouwont 2 hours ago
          And you won't. I will NOT invite anyone from "social media" to any of the 3 very private, yet outrageously active, servers, and that's why they have fewer than 40 users collectively. They're basically for playing games and re-streaming movies among people on a first-name basis or close to it. And I know those 40 people have others of their own, and I know I'll never ever have access to them either, because I don't know the other people in them.

          And I know servers like these are in the top tier of engagement for Discord on the whole, because they keep being picked for A/B testing new features. Like, we had Activities some half a year early. We actually had the voice modifiers on two of them, and most people don't even know that was a thing.

        • esseph 7 hours ago
          Idk, most of the people I "met" on the internet I originally met on IRC. I didn't know them in person till a decade or more later.
      • dartharva 9 hours ago
        I'd say WhatsApp is a better example
        • Ekaros 3 hours ago
          WhatsApp really feels to me more like group chat; it doesn't really cross the barrier into social media. But then again, I am not in any mass chats.

          Discord is many things: private chat groups, medium-sized communities, and then larger communities with tens of thousands of users.

          • nottorp 2 hours ago
            > WhatsApp really feels to me more like group chat.

            So what's wrong with that?

    • munificent 8 hours ago
      > "internet based medium for socializing with people you know IRL"

      "Social media" never meant that. We've forgotten already, but the original term was "social network" and the way sites worked back then is that everyone was contributing more or less original content. It would then be shared automatically to your network of friends. It was like texting but automatically broadcast to your contact list.

      Then Facebook and others pivoted towards "resharing" content and it became less "what are my friends doing" and more "I want to watch random media" and your friends sharing it just became an input into the popularity algorithm. At that point, it became "social media".

      HN is neither since there's no way to friend people or broadcast comments. It's just a forum where most threads are links, like Reddit.

    • PurpleRamen 1 hour ago
      > I don't know when the meaning of "social media" went from "internet based medium for socializing with people you know IRL" to a catchall

      5 minutes after the first social network became famous. It never really has been just about knowing people IRL, that was only in the beginning, until people started connecting with everyone and their mother.

      Now it's about people and them connecting and socializing. If there are persons, then it's social. HN has profiles where you can "follow" people, thus, it's social on a minimal level. Though, we could dispute whether it's just media or a mature network. Because there obviously are notable differences in terms of social-related features between HN or Facebook.

    • roywiggins 8 hours ago
      It's even worse than that, TikTok & Instagram are labeled "social media" despite, I'd wager, most users never actually posting anything anymore. Nobody really socializes on short form video platforms any more than they do YouTube. It's just media. At least forums are social, sort of.
    • bandrami 5 hours ago
      I'll come clean and say I've still never tried Discord and I feel like I must not be understanding the concept. It really looks like it's IRC but hosted by some commercial company and requiring their client to use and with extremely tenuous privacy guarantees. I figure I must be missing something because I can't understand why that's so popular when IRC is still there.
      • lmm 4 hours ago
        IRC has many, many usability problems, which I'm sure you're about to give a "quite trivial curlftpfs" explanation for why they're unimportant: missing messages if you're offline, inconsistent standards for user accounts/authentication, no consensus on how even basic rich text should work, much less sending images, inconsistent standards for voice calls that tend to break in the presence of NAT, same thing for file transfers...
      • Ekaros 3 hours ago
        It is IRC, but with modern features and no channel splits. It also adds voice chat and video sharing. The trade-off is privacy and being on a commercial platform. On the other hand, it is very much simpler to use. IRC is a mess of usability, really. Discord has a much better user experience for new users.
      • krawcu 4 hours ago
        it's very easy to make a friend server that has all you basically need: sending messages, images/files and being able to talk with voice channels.

        you can also invite a music bot or host your own that will join the voice channel with a song that you requested

        • bandrami 4 hours ago
          Right.... how is that different from IRC other than being controlled by a big company with no exit ability and (again) extremely tenuous privacy promises?
          • petu 2 hours ago
            IRC doesn't offer voice/video, which is unimaginable for a Discord alternative.

            When we get to alternative proposals with functioning calls, I'd say having them as voice channels that just exist 24/7 is a big thing too. It's a tiny thing from a technical perspective, but it makes something like Teams an unsuitable alternative to Discord.

            In Teams you start a call and everyone's phone rings; you distract everyone from whatever they were doing -- you better have a good reason for doing so.

            In Discord you just join an empty voice channel (on your private server with friends) w/o any particular reason and go on with your day. Maybe someone sees that you're there and joins, maybe not. No need to think of anyone's schedule, and you don't annoy people who don't have time right now.

          • trinix912 3 hours ago
            For the text chat, it's different in the way that it lets one make their own 'servers' without having to run the actual hardware server 24/7, free of charge, no need to battle with NATs and weird nonstandard ways of sending images, etc.

            The big thing is the voice/videoconferencing channels which are actually optimized insanely well, Discord calls work fine even on crappy connections that Teams and Zoom struggle with.

            Simply put it's Skype x MSN Messenger with a global user directory, but with gamers in mind.

      • qludes 2 hours ago
        Because it's the equivalent of running a private IRC server plus logging, with forum features, voice comms, image hosting, authentication and bouncers for all your users. With a working client on multiple platforms (unlike IRC and Jabber, which never really took off on mobile).
    • flomo 7 hours ago
      You know Meta, the "social media company", came out and said their users spend less than 10% of their time interacting with people they actually know?

      "Social Media" has become a euphemism for 'scrolling entertainment, ragebait and cats' and has nothing to do with 'being social'. There is NO difference between modern reddit and facebook in that sense. (Less than 5% of users are on old.reddit; the majority is subject to the algorithm.)

    • ianburrell 10 hours ago
      The social networks have all added public media and algorithms. I read an explanation that friends don't produce enough content to keep people engaged, so they added public feeds. I'm disappointed that there isn't a private Bluesky/Mastodon. I also want an algorithm that shows the best of what the people I follow posted since I last checked, so I can keep up.
  • makingstuffs 9 hours ago
    Think the notion that ‘no one’ uses em dashes is a bit misguided. I’ve personally used them in text for as long as I can remember.

      Also, on the phrase “you’re absolutely right”: it’s definitely a phrase my friends and I use a lot, albeit in a sort of sarcastic manner when one of us says something which is obvious, but, nonetheless, we use it. We also tend to use “Well, you’re not wrong”, again in a sarcastic manner, for something which is obvious.

    And, no, we’re not from non English speaking countries (some of our parents are), we all grew up in the UK.

    Just thought I’d add that in there as it’s a bit extreme to see an em dash instantly jump to “must be written by AI”

    • oxguy3 7 hours ago
      It is so irritating that people now think you've used an LLM just because you use nice typography. I've been using en dashes a ton (and em dashes sporadically) since long before ChatGPT came around. My writing style belonged to me first—why should I have to change?

      If you have the Compose key [1] enabled on your computer, the keyboard sequence is pretty easy: `Compose - - -` (and for en dash, it's `Compose - - .`). Those two are probably my most-used Compose combos.

      [1]: https://en.wikipedia.org/wiki/Compose_key

      • Ericson2314 6 hours ago
        Also on phones it is really easy to use em dashes. It's quite out in the open whether I posted from desktop or phone because the use of "---" vs "—" is the dead give-away.
      • zahlman 1 hour ago
        I configured my system to treat caps lock as compose, and also set up a bunch of custom compose sequences that better suit how I think about the fancy characters I most often want to type. My em-dash is `Compose m d`.
      • thfuran 1 hour ago
        How do you find yourself using en dashes more than em dashes?
        • acidburnNSA 1 hour ago
          For me I use en dashes a lot for ranges like 1–N
        • yoz-y 1 hour ago
          Maybe they write out a lot of ranges?
      • HaZeust 5 hours ago
        Hot take, but a character that demands zero-space between the letters at the end and the beginning of 2 words - that ISN'T a hyphenated compound - is NOT nice typography. I don't care how prevalent it is, or once was.
        • reddalo 3 hours ago
          I don't know if my language's grammar rules (Italian) differ from English here, but I've always seen spaces before and after em-dashes. I don't like the em-dash being stuck to two unrelated words.
          • xorcist 1 hour ago
            That's because in Italian, like in many other European languages, you use en-dashes to separate parenthetical clauses. The en-dash is used with space, the em-dash (mostly) without space and that's why it's longer. On old typewriters they were frequently written as "--" and "---" respectively. So yes, it's mostly an English thing. Stick to your trattinos, they're nice!
          • mr_mitm 2 hours ago
            It's a US thing
        • vurudlxtyt 4 hours ago
          That sounds like a strongly held opinion rather than a fact.

          I like em-dashes and will continue to use them.

          • zahlman 1 hour ago
            >That sounds like a strongly held opinion rather than a fact.

            Yes, that is more or less what "hot take" means.

        • imafish 4 hours ago
          agree. it implies a strong relationship between the two words it is inserted between - not the sentences.
    • kimixa 7 hours ago
      As a brit I'd say we tend to use "en-dashes", slightly shorter versions - so more similar to a hyphen and so often typed like that - with spaces either side.

      I never saw em-dashes—the longer version with no space—outside of published books and now AI.

      • dragonwriter 1 hour ago
        There are British style manuals (e.g., the Guardian’s) that prefer em-dashes for roughly the same set of uses they tend to be preferred for in US style guides, but British usage is mixed between em-dashes and en-dashes (both usually set open), while all the influential American style guides prefer em-dashes (but are split, for digressive/parenthetical use, between setting them closed [e.g., Chicago Manual] and open [e.g., AP Style]).
      • dang 7 hours ago
        The en-dash is also highly worthy!

        Just to say, though, we em-dashers do have pre-GPT receipts:

        https://news.ycombinator.com/item?id=46673869

      • rmunn 6 hours ago
        Besides the LaTeX use, on Linux if you have gone into your keyboard options and configured a rarely-used key to be your Compose key (I like to use the "menu" key for this purpose, or right Alt if on a keyboard with no "menu" key), you can type Compose sequences as follows (note how they closely resemble the LaTeX -- or --- sequences):

        Compose, hyphen, hyphen, period: produces – (en dash)

        Compose, hyphen, hyphen, hyphen: produces — (em dash)

        And many other useful sequences too, like Compose, lowercase o, lowercase o to produce the ° (degree) symbol. If you're running Linux, look into your keyboard settings and dig into the advanced settings until you find the Compose key, it's super handy.

        P.S. If I was running Windows I would probably never type em dashes. But since the key combination to type them on Linux is so easy to remember, I use em dashes, degree symbols, and other things all the time.
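        For anyone who wants to go beyond the built-in sequences (e.g. the `Compose m d` shortcut mentioned elsewhere in this thread), they can be extended in an ~/.XCompose file. A minimal sketch; the custom `m d` / `n d` shortcuts here are just examples:

```
# ~/.XCompose: keep the locale's default compose table, then add custom rules
include "%L"

<Multi_key> <m> <d> : "—"  emdash   # em dash, U+2014
<Multi_key> <n> <d> : "–"  endash   # en dash, U+2013
```

        (Depending on the toolkit and input method, you may need to restart applications or adjust settings for changes to take effect.)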

        • dragonwriter 1 hour ago
          > If I was running Windows I would probably never type em dashes. But since the key combination to type them on Linux is so easy to remember, I use em dashes, degree symbols, and other things all the time.

          There are compose key implementations for Windows, too.

      • Ericson2314 6 hours ago
        I think that's just incorrect. There are varying conventions for spaces vs no spaces around em dashes, but all English manuals of style confine to en dashes just to things like "0–10" and "Louisville–Calgary" — at least to my knowledge.
      • eru 7 hours ago
        It's also easy to get them in LaTeX: just type --- and they will appear as an em-dash in your output.
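        (For the record, the standard TeX dash ligatures, as a sketch:)

```latex
% TeX/LaTeX dash ligatures in running text:
a well-known rule      % -    a single hyphen stays a hyphen
pages 10--12           % --   becomes an en dash
a pause---like this    % ---  becomes an em dash
```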
      • susam 7 hours ago
        Came here to confirm this. I grew up learning BrE and indeed in BrE, we were taught to use en-dash. I don't think we were ever taught em-dash at all. My first encounter with em-dash was with LaTeX's '---' as an adult.
    • karim79 9 hours ago
      I would add that a lot of us who were born or grew up in the UK are quite comfortable saying stuff like "you're right, but...", or even "I agree with you, but...". The British politeness thing, presumably.
      • PaulDavisThe1st 8 hours ago
        0-24 in the UK, 24-62 in the USA, am now comfortable saying "I could be wrong, but I doubt it" quite a lot of the time :)
    • babymetal 8 hours ago
      Just my two cents: We use em-dashes in our bookstore newsletter. It's more visually appealing than semi-colons and more versatile, as it can be used to block off both ends of a clause. I even use en-dashes between numbers in a range, though, so I may be an outlier.
    • skwee357 3 hours ago
      The thing with em-dashes is not the em-dash itself. I use em-dashes because when I started to blog, I was curious about improving my English writing skills (English is not my native language, and although I learned English in school, most of my English comes from playing RPGs and watching movies in English).

      According to what I know, the correct way to use an em-dash is to not surround it with spaces, so words look connected like--this. And indeed, when I started to use em-dashes in my blog(s), that's how I did it. But I found it rather ugly, so I started to put spaces around it. And there were periods where I stopped using em-dashes altogether.

      I guess what I'm trying to say is that unless you write professionally, most people are inconsistent. Sometimes, I use em-dashes. Sometimes I don't. In some cases I capitalize my words where needed, and sometimes not, depending on how much of a hurry I'm in, or whether I type from a phone (which does a lot of heavy lifting for me).

      If you see someone who consistently uses the "proper" grammar in every single post on the internet, it might be a sign that they use AI.

    • mc3301 9 hours ago
      Also, I've seen people edit each em-dash, one by one, and then copy-paste the entire LLM output, thinking it looks less AI-like or something.
      • zahlman 1 hour ago
        Oof. I don't know what's worse there: that they don't know a conventional way to find-and-replace, or that they didn't try asking the LLM not to use them. (Or to fix it afterwards.)
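        (The find-and-replace in question is a one-liner in most languages; a Python sketch:)

```python
# Replace each em dash from LLM output with a spaced hyphen
text = "You're absolutely right—great question—let's dig in."
cleaned = text.replace("—", " - ")
print(cleaned)  # You're absolutely right - great question - let's dig in.
```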
    • BrtByte 57 minutes ago
      When that baseline erodes, even normal human quirks start looking suspicious
    • jasonhansel 9 hours ago
      Em-dashes may be hard to type on a laptop, but they're extremely easy to type on iOS—you just hold down the "-" key, as with many other special characters—so I use them fairly frequently when typing on that platform.
      • carbocation 8 hours ago
        Em-dashes are easy to type on a macos laptop for what it's worth: option-shift-minus.
        • sltkr 6 hours ago
          Also on Linux when you enable the compose key: alt-dash-dash-dash (--- → —) and for the en-dash: alt-dash-dash-dot (--. → –)
        • bigstrat2003 6 hours ago
          That's not as easy as just hitting the hyphen key, nor are most people going to be aware that even exists. I think it's fair to say that the hyphen is far easier to use than an em dash.
      • wk_end 8 hours ago
        But why when the “-“ works just as well and doesn’t require holding the key down?

        You’re not the first person I’ve seen say that FWIW, but I just don’t recall seeing the full proper em-dash in informal contexts before ChatGPT (not that I was paying attention). I can’t help but wonder if ChatGPT has caused some people - not necessarily you! - to gaslight themselves into believing that they used the em-dash themselves, in the before time.

        • MarkusQ 8 hours ago
          No. En-dash doesn't work "just as well" as an em-dash, any more than a comma works as an apostrophe. They are different punctuation marks.

          Also, I was a curmudgeon with strong opinions about punctuation before ChatGPT—heck, even before the internet. And I can produce witnesses.
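          (They really are distinct characters, as a quick check in Python shows:)

```python
# Hyphen-minus, en dash, and em dash are three different Unicode codepoints
for name, ch in [("hyphen-minus", "-"), ("en dash", "–"), ("em dash", "—")]:
    print(f"{name}: U+{ord(ch):04X}")
# hyphen-minus: U+002D
# en dash: U+2013
# em dash: U+2014
```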

          • kimixa 7 hours ago
            In British English you'd be wrong for using an em-dash in those places, with most grammar recommendations being for an en-dash, often with spaces.

            It'd be just as wrong as using an apostrophe instead of a comma.

            Grammar is often woolly in a widely used language with no single centralised authority. Many of the "Hard Rules" some people think are fundamental truths are really just local style guides, and often a lot more recent than some people seem to believe.

            • optimalquiet 6 hours ago
              Interesting, I’m an American English speaker but that’s how it feels natural to me to use dashes. Em-dashes with no spaces feels wrong for reasons I can’t articulate. This first usage—in this meandering sentence—feels bossy, like I can’t have a moment to read each word individually. But this second one — which feels more natural — lets the words and the punctuation breathe. I don’t actually know where I picked up this habit. Probably from the web.
              • evanelias 5 hours ago
                It can also depend on the medium. Typically, newspapers (e.g. the AP style guide) use spaces around em-dashes, but books / Chicago style guide does not.
          • fuzzer371 7 hours ago
            They mean the same thing to 99.999% of the population.
    • skipants 5 hours ago
      I'm pretty sure the OP is talking about this thread. I have it top of mind because I participated and was extremely frustrated, not just by the AI slop, but by how insistently the author claimed not to have used AI when they obviously had.

      You can read it yourself if you'd like: https://news.ycombinator.com/item?id=46589386

      It was not just the em dashes and the "absolutely right!" It was everything together, including the robotic clarifying question at the end of their comments.

    • anon_anon12 5 hours ago
      Well, the dialogue there involves two or more people; when commenting, why would you use that? Even if you have collaborators, you're not very likely to be discussing stuff through code comments.
    • amrocha 6 hours ago
      You’re absolutely right—lots of very smart people use em dashes. Thank you for correcting me on that!
      • zahlman 1 hour ago
        No problem! But it's also important to consider your image online. Here are some reasons not to use em-dashes in Internet forum posts:

        * **Veneer of authenticity**: because of the difficulty of typing em-dashes in typical form-submission environments, many human posters tend to forgo them.

        * **Social pressure**: even if you take strides to make em-dashes easier to type, including them can have negative repercussions. A large fraction of human audiences have internalized a heuristic that "em-dash == LLM" (which could perhaps be dubbed the "LLM-dash hypothesis"). Using em-dashes may risk false accusations, degradation of community trust, and long-winded meta discussion.

        * **Unicode support**: some older forums may struggle with encoding for characters beyond the standard US-ASCII range, leading to [mojibake](https://en.wikipedia.org/wiki/Mojibake).

      • forgotpwd16 2 hours ago
        If you want next, I can:

        - Tell you what makes em dashes appealing.

        - Help you use em dashes more.

        - Give you other grammatical quirks smart people have.

        Just tell me.

        (If bots RP as humans, it’s only natural we start RP as bots. And yes, I did use a curly quote there.)

    • postexitus 2 hours ago
      found the LLM bot guys!
  • meander_water 5 hours ago
    Not foolproof, but a couple of easy ways to verify if images were AI generated:

    - OpenAI uses the C2PA standard [0] to add provenance metadata to images, which you can check [1]

    - Gemini uses SynthId [2] and adds a watermark to the image. The watermark can be removed, but SynthId cannot as it is part of the image. SynthId is used to watermark text as well, and code is open-source [3]

    [0] https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...

    [1] https://verify.contentauthenticity.org/

    [2] https://deepmind.google/models/synthid/

    [3] https://github.com/google-deepmind/synthid-text
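    (As a crude first-pass check before reaching for [1], you can scan a file's raw bytes for the "c2pa" JUMBF label that C2PA manifests are stored under. This is only a heuristic sketch, not a validator: it can false-positive on unrelated bytes, and a real verifier parses and cryptographically validates the manifest.)

```python
def has_c2pa_marker(path: str) -> bool:
    """Crude heuristic: look for the 'c2pa' JUMBF label in the raw bytes.

    Not a validator -- a real check must parse the manifest store and
    verify its signatures (see the C2PA verify tool linked above).
    """
    with open(path, "rb") as f:
        return b"c2pa" in f.read()
```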

    • adrian17 4 hours ago
      I just went to a random OpenAI blog post ("The new ChatGPT Images is here"), right-click saved one of the images (the one from "Text rendering" section), and pasted it to your [1] link - no metadata.

      I know the metadata is probably easy to strip, maybe even accidentally, but their own promotional content not having it doesn't inspire confidence.

      • meander_water 4 hours ago
        Yeah that's been my experience as well. I think most uploads strip the metadata unfortunately
    • danielbln 5 hours ago
      SynthID can be removed: run the image through an image-to-image model with a reasonably high denoising value, or add artificial noise and use another model to denoise, and voilà. It's effort that most probably aren't putting in, but it's certainly possible.
    • cubefox 44 minutes ago
      > The watermark can be removed, but SynthId cannot as it is part of the image.

      That's not quite right. SynthID is a digital watermark, so it's hard to remove, while metadata can be easily removed.

    • hayinneedles 2 hours ago
      Reminder that provenance exists to prove something is REAL, not to prove something is fake.

      AI content outnumbers real content. We are not going to adjudicate whether every single thing is real or not. C2PA is about labeling the gold in a way the dirt can't fake. A photo with it can be considered real and used in an encyclopedia, or submitted to a court, without people doubting it.

  • GMoromisato 8 hours ago
    Most of this is caused by incentives:

    YouTube and others pay for clicks/views, so obviously you can maximize this by producing lots of mediocre content.

    LinkedIn is a place to sell, either a service/product to companies or yourself to a future employer. Again, the incentive is to produce more content for less effort.

    Even HN has the incentive of promoting people's startups.

    Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.

    The closest thing would probably be private friend groups, but that's probably already well-served by text messaging and in-person gatherings. Are there any other possibilities?

    • skwee357 3 hours ago
      I remember participating on *free* phpBB forums, or IRC channels. I was amazed that I could chat with people smarter than me, on a wide range of topics, all for the cost of having an internet subscription.

      It's only recently, when I was considering reviving the old-school forum interaction, that I realized that while I got the platforms for free, there were people behind them who paid for the hosting and the storage, and who were responsible for moderating the content so that every discussion didn't derail into a low-level contest of accusations and name-calling.

      I can't imagine the amount of time, and tools, it takes to keep discussion forums free of trolls, more so nowadays, with LLMs.

    • cjs_ac 3 hours ago
      > Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.

      Yes, but its size must be limited by Dunbar's number[0]. This is the maximum size of a group of people where everyone can know everyone else on a personal basis. Beyond this, it becomes impossible to organically enforce social norms, and so abstractions like moderators and administrators and codes of conduct become necessary, and still fail to keep everyone on the same page.

      [0] https://en.wikipedia.org/wiki/Dunbar%27s_number

      • psychoslave 2 hours ago
        I don’t think this is a hard limit. It’s also a matter of interest and opportunity to meet people and consolidate relationships through common endeavor, and so is greatly influenced by the social super-structure and how it pushes individuals to interact with each other.

        To take a different cognitive domain, think about color. Wiktionary gives around 300 color terms for English[1]. I doubt many English speakers would be able to use all of them with relevant accuracy. And obviously even RGB encoding allows one to express far more nuances. And most people can fathom far more nuances than they could verbalize.

        [1] https://en.wiktionary.org/wiki/Appendix:Colors

    • ricardo81 7 hours ago
      >incentives

      Spot on. The number of times I've come across a poorly made video where half the comments are calling out its inaccuracies. In the end, YouTube (or any other platform) and the creator get paid. Any kind of negative interaction with the video either counts as engagement or just means moving on to the next whack-a-mole variant.

      None of these big tech platforms that involve UGC were ever meant to scale. They are beyond accountability.

    • BrtByte 54 minutes ago
      Private groups work because reputation is local and memory is long. You can't farm engagement from people you'll talk to again next week. That might be the key
    • trinix912 3 hours ago
      I don't think it's doable with the current model of social media but:

      1. prohibit all sorts of advertising, explicit and implicit, and actually ban users for it. The reason most people try to get big on SM is so they can land sponsorships outside of the app. But we'd still have the problem of telling whether something is sponsored or not.

      2. no global feed, show users what their friends/followers are doing only. You can still have discovery through groups, directories, etc. But it would definitely be worse UX than what we currently have.

    • cal_dent 6 hours ago
      Exactly. People spend less time thinking about the underlying structure at play here. Scratch enough at the surface and the problem is always the ads model of internet. Until that is broken or is economically pointless the existing problem will persist.

      Elon Musk cops a lot of the blame for the degradation of Twitter from people who care about that sort of thing, and he definitely plays a part there, but it's the monetisation aspect that was the real tilt toward all noise, from a signal-to-noise perspective.

      We've taken a version of the problem in the physical world to the digital world. It runs along the same lines as how high rents (commercial or residential) limit the diversity of people or commercial offerings in a place, simply because only a certain kind of thing can work or be economically viable. People always want different mixes of things and offerings, but if the structure (in this case rent) only permits one type of thing, then that's all you're going to get.

      • wewxjfq 4 hours ago
        Scratch further, and beneath the ad business you'll find more incentives to allow fake engagement. Man is a simple animal and likes to see numbers go up. Internet folklore says the Reddit founders used multiple accounts to get their platform going at the start; if they did, they didn't do it with ad fraud in mind. The incentives are plentiful, and from the people running the platform to the users to the investors, everyone likes to be fooled. Take the money out and you still have reasons to turn a blind eye to it.

        The biggest problem I see is that the Internet has become a brainwashing machine, and even if you have someone running the platform with the integrity of a saint, if the platform can influence public opinion, it's probably impossible to tell how many real users there actually are.

    • AdrianB1 1 hour ago
      >> Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction?

      Yes, it is possible. Like anything worthwhile, it is not easy. I have been a member of a small forum of around 20-25 active users for 20 years. We talk about all kinds of stuff; it was initially just IT-related, but we also touch on motorcycles (at least 5 of us ride or used to, and I used to go riding with a couple of them), some social topics, and we tend to avoid politics (too divisive) and religion (I think none of us is religious enough to debate it). We were initially all in the same country and some were meeting IRL from time to time, but now we are spread across Europe (one in the US), so the forum is what keeps us in contact. Even the ones in the same country, probably a minority these days, are spread too thin, but the forum is there.

      As for IRL interaction: I have met fewer than 10 forum members, and met only 3 of them frequently (2 on motorcycle trips; one worked for a few years in the same place as I did), but that is not a metric that means much. There is a false sense of being close over the internet while being geographically far, which works in a way, but not really. For example, my best friends all emigrated (most were childhood friends); talking to them on the phone or the internet means I never feel lonely, but seeing them only every few years grows the distance between us. That impacts human-to-human interaction; there is no way around it.

    • 8organicbits 7 hours ago
      I think incentives are the right way to think about it. Authentic interactions are not monetized. So, where are people writing online without expecting payment?

      Blogs can have ads, but blogs with RSS feeds are a safer bet as it's hard to monetize an RSS feed. Blogs are a great place to find people who are writing just because they want to write. As I see more AI slop on social media, I spend more time in my feed reader.

      • account42 3 hours ago
        Monetization isn't the only possible incentive for non-genuine content, though. CV-stuffing is another that is likely to affect blogs - and there have been plenty of obviously AI-generated/"enhanced" blogs posted here.
      • sznio 4 hours ago
        I've been thinking recently about a search engine that filters away any sites that contain advertising. Just that would filter away most of the crap.

        Kagi's small web lens seems to have a similar goal but doesn't really get there. It still includes results that have advertising, and omits stuff that isn't small but is ad free, like Wikipedia or HN.

    • intended 4 hours ago
      Filtering out bots is prohibitive, as bot text is currently so close to human text that the false-positive rate would curtail human participation.

      Any community that ends up creating utility to its users, will attract automation, as someone tries to extract, or even destroy that utility.

      A potential option could be figuring out community rules that ensure all content, including bot-generated content, provides utility to users. Something like the rules on Change My View or r/AITA. There are also tests being run to see if LLMs can identify, or provide bridges across, flamewars.

  • maieuticagent 27 minutes ago
    Ground News's approach is worth modeling because it treats authenticity not as a binary detection problem but as a transparency and comparative analysis problem. It assumes bad actors exist and makes them visible rather than trying to achieve perfect filtering. The shift from "trust this AI detection algorithm" to "here are multiple independent signals for you to evaluate" is philosophically aligned with how we should handle the Dead Internet problem. It's less about building perfect walls and more about giving people the tools to navigate an already-compromised space intelligently.
  • peteforde 5 hours ago
    I enjoyed this post, but I do find myself disagreeing that someone sharing their source code is somehow morally or ethically obligated to post some kind of AI-involvement statement on their work.

    Not only is it impossible to adjudicate or police, I feel like this will absolutely have a chilling effect on people wanting to share their projects. After all, who wants to deal with an internet mob demanding that you disprove a negative? That's not what anyone who works hard on a project imagines when they select Public on GitHub.

    People are no more required to disclose their use of LLMs than they are to release their code... and if you like living in a world where people share their code, you should probably stop demanding that they submit to your arbitrary purity tests.

    • sublimefire 1 hour ago
      IMO the idea of providing more in OSS usually stems from various third parties who use that code in production but do not really contribute back to it. The only sensible thing the person publishing code online needs to do is to protect their copyright and add a license. This weird idea that somehow you become responsible for the code to the point that you need to patch every vulnerability and bug, and now identify the use of AI is wrong on so many levels. For the record I’ve been publishing OSS for years.
    • BrtByte 53 minutes ago
      Maybe the healthier framing is cultural rather than ethical
    • skwee357 3 hours ago
      Fine, I accept your point. You don't have an obligation to disclose the tools you've used. But what struck me in that particular thread is that the author kept claiming they did not use AI, nothing at all, while there were giveaway signs that the code was, _at least partly_, AI-generated.

      It honestly felt like being gaslit. You see one thing, but they keep claiming you are wrong.

      • peteforde 2 hours ago
        I admit that I got the gist of the concern and didn't actually look at the original thread.

        I'd feel the same way you did, for sure.

        You are absolutely right! ;)

  • pants2 10 hours ago
    Are there any social media sites where AI is effectively banned? I know it's not an easy problem, but I haven't seen a site even try yet. There's a ton of things you can do to make it harder for bots, e.g. analyzing image metadata, users' keyboard and mouse actions, etc.
    • raincole 4 hours ago
      Said hypothetical social media site, if it gained any traction, would be heaven for adversarial training.
    • voidUpdate 4 hours ago
      Apparently the vine restart will explicitly ban ai content. Thus providing an excellent source of untainted training data, but that's beside the point
    • rsynnott 3 hours ago
      Not actually banned on Bluesky, but the community at large is so hostile to it that, generally, there's very little AI stuff.
    • happosai 5 hours ago
      There are mastodon communities such https://mastodon.art/ where AI is explicitly banned.
    • 152334H 6 hours ago
      in effect, broadly anti-AI communities like bsky succeed by the sheer power of universal hate. Social policing can get you very far without any technology I think
      • Ronsenshi 2 hours ago
        I'm all for that, but how would this realistically work? Given enough effort you can produce AI content which would be impossible to tell if it's human-made or not. And in the same train of thought - is there any way to avoid unwarranted hate towards somebody who produced real human-made content that was mistaken for AI-content?
    • 8organicbits 7 hours ago
      I don't know of any, but my strategy to avoid slop has been to read more long-form content, especially on blogs. When you subscribe over RSS, you've vetted the author as someone whose writing you like, which presumably means they don't post AI slop. If you discover slop, you unsubscribe. No need for a platform to moderate content for you... you are in control of the contents of your news feed.
  • nikeee 8 hours ago
    I hope that when online content is entirely AI-generated, humanity will put its phones aside and re-discover reality, because we will realize that the social networks have become entirely worthless.
    • mr_00ff00 7 hours ago
      To some degree there’s something like this happening. The old saying “pics or it didn’t happen” used to mean young people needed to take their phones out for everything.

      Now any photo can be faked, so the only photos to take are ones that you want yourself for memories.

      • account42 3 hours ago
        That's not what that saying means/meant.
    • OvbiousError 3 hours ago
      What's more likely is that a significant number of people will start having most/all of their meaningful interactions with AI instead of with other people.
    • Davidzheng 6 hours ago
      lol, if they don't put the phone down now, how can AI-generated content specifically optimized to get people to stay be any better?
    • schrodinger 7 hours ago
      What a nice thought :)
  • gmuslera 13 hours ago
    In one hand, we are past the Turing Test definition if we can't distinguish whether we are talking with an AI or a real human, never mind all the things that were rampant on the internet previously, like spam and scam campaigns, targeted opinion manipulation, or anything else that wasn't, let's say, the honest opinion of a single person identifiable with an account.

    In the other hand, the fact that we can't tell doesn't speak so well of AIs as it speaks badly of most of our (at least online) interactions. How much of the (Thinking, Fast and Slow) System 2 am I putting into these words? How much is repeating and combining patterns in a given direction, pretty much like an LLM does? In the end, that is what most internet interactions are comprised of, whether done directly by humans, by algorithms, or in other ways.

    There are bits and pieces of exceptions to that rule, and maybe closer to the beginning, before widespread use, there was a bigger percentage, but today, in the big numbers, the usage is not so different from what LLMs do.

    • jaccola 2 hours ago
      But that’s not the Turing Test. The human who can be fooled in the Turing test was explicitly called the “interrogator”.

      To pass the Turing test the AI would have to be indistinguishable from a human to the person interrogating it in a back and forth conversation. Simply being fooled by some generated content does not count (if it did, this was passed decades ago).

      No LLM/AI system today can pass the Turing test.

      • zahlman 53 minutes ago
        I've encountered people who seem to understand properly how the test works, and still think that current LLMs pass it easily.

        Most of them come across to me like they would think ELIZA passes it, if they weren't told up front that they were testing ELIZA.

    • callc 8 hours ago
      Recently I’ve been thinking about the text form of communication, and how it plays with our psychology. In no particular order here’s what I think:

      1. Text is a very compressed / low information method of communication.

      2. Text inherently has some “authority” and “validity”, because:

      3. We’ve grown up internalizing that text is written by a human. Someone spent the effort to think and write down their thoughts, and probably put some effort into making sure what they said is not obviously incorrect.

      This ties intimately into why LLMs working on text are an easier route to tricking us into thinking they are intelligent than an AI system in a physical robot that needs to speak and articulate physically would be. We give them the benefit of the doubt.

      I’ve already had some odd phone calls recently where I have a really hard time distinguishing if I’m talking to a robot or a human…

      • GMoromisato 8 hours ago
        This is absolutely why LLMs are so disruptive. It used to be that a long, written paper was like a proof-of-work that the author thought about the problem. Now that connection is broken.

        One consequence, IMHO, is that we won't value long papers anymore. Instead, we will want very dense, high-bandwidth writing on whose validity the author stakes consequences (monetary, reputational, etc.).

        • Avicebron 7 hours ago
          The Methyl 4-methylpyrrole-2-carboxylate vs ∂²ψ/∂t² = c²∇²ψ distinction. My bet is on Methyl 4-methylpyrrole-2-carboxylate being more actionable. For better or worse.
          • zahlman 52 minutes ago
            Sorry, I have absolutely no idea what you're trying to say with that.
    • optimalsolver 2 hours ago
      on one hand

      on the other hand

      • Kelteseth 1 hour ago
        Are you implying that this was an AI bot comment?
        • zahlman 51 minutes ago
          I think it was a grammar/idiom correction.
  • BLKNSLVR 10 hours ago
    I'm not really replying to the article, just going tangentially from the "dead internet theory" topic, but I was thinking about when we might see the equivalent for roads: the dead road theory.

    In X amount of time a significant majority of road traffic will be bots in the drivers seat (figuratively), and a majority of said traffic won't even have a human on-board. It will be deliveries of goods and food.

    I look forward to the various security mechanisms required of this new paradigm (in the way that someone looks forward to the tightening spiral into dystopia).

    • WD-42 7 hours ago
      Not a dystopia for me. I’m a cyclist that’s been hit by 3 cars. I believe we will look back at the time when we allowed emotional and easily distracted meat bags behind the wheels of fast moving multiple ton kinetic weapons for what it is: barbarism.
      • bigstrat2003 6 hours ago
        That is not really a defensible position. Most drivers don't ever hit someone with their car. There is nothing "barbaric" about the system we have with cars. Imperfect, sure. But not barbaric.
        • lmm 4 hours ago
          > Most drivers don't ever hit someone with their car. There is nothing "barbaric" about the system we have with cars. Imperfect, sure. But not barbaric.

          Drivers are literally the biggest cause of deaths of young people. We should start applying the same safety standards we do to every other part of life.

        • weregiraffe 5 hours ago
          >Most drivers don't ever hit someone with their car.

          Accidents Georg, who lives in a windowless car and hits someone over 10,000 times each day, is an outlier and should not have been counted.

      • nottorp 1 hour ago
        Maybe take a look at how the Netherlands solved this problem. Hint: not with "AI" drivers.

        By the way, I don't bike, but I walk almost everywhere lately. So, to hyperbolize as is the custom on the internets, I live in constant fear not of cars, but of super holier-than-thou eco cyclists running me over. (Yeah, I'm not in NL.)

      • throwaway132448 2 hours ago
        You should spend some more time driving in the environments you cycle in. This will make you better at anticipating the situations that lead to you getting hit.
    • disqard 7 hours ago
      You might like David Mason's short story "Road Stop":

      https://www.gutenberg.org/ebooks/61309

    • TacticalCoder 8 hours ago
      > In x amount of time a significant majority of road traffic will be bots in the drivers seat (figuratively), and a majority of said traffic won't even have a human on-board. It will be deliveries of goods and food.

      Nah. That's assuming most cars today, with literal, not figurative, humans are delivering goods and food. But they're not: most cars during traffic hours and by very very very far are just delivering groceries-less people from point A to point B. In the morning: delivering human (usually by said human) to work. Delivering human to school. Delivering human back to home. Delivering human back from school.

    • Morromist 9 hours ago
      I mean maybe someday we'll have the technology to work from home too. Clearly we aren't there yet, according to the bosses who make us commute. One can dream... one can dream.
      • BLKNSLVR 9 hours ago
        Anecdote-only

        I actually prefer to work in the office, it's easier for me to have separate physical spaces to represent the separate roles in my life and thus conduct those roles. It's extra effort for me to apply role X where I would normally be applying role Y.

        Having said that, some of the most productive developers I work with I barely see in the office. It works for them to not have to go through that whole ... ceremoniality ... required of coming into the office. They would quit on the spot if they were forced to come back into the office even only twice a week, and the company would be so much worse off without them. By not forcing them to come into the office, they come in on their own volition and therefore do not resent it and therefore do not (or are slower to) resent their company of employment.

        • RajT88 8 hours ago
          I really liked working in the office when it had lots of people I directly worked with, and was next to lots of good restaurants and a nice gym. You got to know people well and stuff could get done just by wandering over to someone's desk (as long as you were not too pesky too often).
    • BobBagwill 9 hours ago
      The Last of the Winnebagos by Connie Willis
  • f311a 13 hours ago
    > The use of em-dashes, which on most keyboard require a special key-combination that most people don’t know

    Most people probably don't know, but I think on HN at least half of the users know how to do it.

    It sucks to do this on Windows, but at least on Mac it's super easy and the shortcut makes perfect sense.

    • chao- 13 hours ago
      I don't have strong negative feelings about the era of LLM writing, but I resent that it has taken the em-dash from me. I have long used them as a strong disjunctive pause, stronger than a semicolon. I have gone back to semicolons after many instances of my comments or writing being dismissed as AI.

      I will still sometimes use a pair of them for an abrupt appositive that stands out more than commas, as this seems to trigger people's AI radar less?

      • kelseydh 11 hours ago
        One way to use em-dash and look human is to write it incorrectly with two hyphens: --
      • JKCalhoun 9 hours ago
        I still use 'em. Fuck what everybody else thinks.
      • myself248 10 hours ago
        At this point I almost look forward to some idiot calling me AI because they don't like what I said. I should start keeping score.
    • rsch 11 hours ago
      I can’t be the only one who has ever read https://practicaltypography.com/hyphens-and-dashes.html
      • kelseydh 11 hours ago
        This would have been very helpful three years ago, before I permanently stopped using em-dashes so as not to have my writing confused with an LLM's.
        • JKCalhoun 9 hours ago
          I suspect whatever you try to do to not appear to be an LLM… LLMs will also do in time.

          Might as well be yourself.

    • numpad0 10 hours ago
      I've been left wondering when is the world going to find out about Input Method Editor.

      It lets users type all sorts of ‡s, (*´ڡ`●)s, 2026/01/19s, by name, on Windows, Mac, Linux, through pc101, standard Dvorak, your custom qmk config, anywhere, without much prior knowledge. All it takes is a little proto-AI, ranging from floppy size to at most a few hundred MBs, rewriting your input somewhere between the physical keyboard and the text input API.

      If I want em-dashes, I can do just that instantly – I'm on Windows and I don't know what the key combinations are. Doesn't matter. I say "emdash" and here be an em-dash. There should be an equivalent of this thing for everybody.

    • d4rkp4ttern 11 hours ago
      First time I’m hearing about a shortcut for this. I always use 2 hyphens. Is that not considered an em-dash?
      • keyle 10 hours ago
        No, it's not the same. Note there are medium and long dashes as well.

        That said I always use -- myself. I don't think about pressing some keyboard combo to emphasise a point.

        • PaulDavisThe1st 8 hours ago
          The long --- if you're that way minded --- is just 3 hyphens :)
        • d4rkp4ttern 10 hours ago
          Yep I realize this now, as I said in my other comment.
      • FridayoLeary 10 hours ago
        You are absolutely right — most internet users don't know the specific keyboard combination to make an em dash and substitute it with two hyphens. On some websites it is automatically converted into an em dash. If you would like to know more about this important punctuation symbol and its significance in identifying AI writing, please let me know.
        • d4rkp4ttern 10 hours ago
          Wow thanks for the enlightenment. I dug into this a bit and found out:

          Hyphen (-) — the one on your keyboard. For compound words like “well-known.”

          En dash (–) — medium length, for ranges like 2020–2024. Mac: Option + hyphen. Windows: Alt + 0150.

          Em dash (—) — the long one, for breaks in thought. Mac: Option + Shift + hyphen. Windows: Alt + 0151.

          And now I also understand why having plenty of actual em-dashes (not double hyphens) is an “AI tell”.
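          The three are distinct Unicode code points, which is easy to check with a quick Python sketch (the output comments reflect the standard Unicode character names):

          ```python
          import unicodedata

          # The three dash-like characters and their official Unicode names.
          for ch in "-\u2013\u2014":
              print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
          # U+002D  HYPHEN-MINUS
          # U+2013  EN DASH
          # U+2014  EM DASH
          ```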

          • acidburnNSA 1 hour ago
            If you have the compose key enabled it's trivial to write all sorts of things. Em dash is compose (right alt for me) ---

            En dash is compose --.

            You can type other fun things like section symbol (compose So) and fractions like ⅐ with compose 17, degree symbol (compose oo) etc.

            https://itsfoss.com/compose-key-gnome-linux/

            On phones you merely long press hyphen to get the longer dash options.

          • wincy 9 hours ago
            And Em Dash is trivially easy on iOS — you simply hold press on the regular dash button - I’ve been using it for years and am not stopping because people might suddenly accuse me of being an AI.
          • FridayoLeary 10 hours ago
            Thanks for that. I had no idea either. I'm genuinely surprised Windows buries something as crucial as this, or that they even bothered adding it in the first place when it's so complicated.
            • jsheard 9 hours ago
              The Windows version is an escape hatch for keying in any arbitrary character code, hence why it's so convoluted. You need to know which code you're after.
            • semilin 9 hours ago
              To be fair, the alt-input is a generalized system for inputting Unicode characters outside the set keyboard layout. So it's not like they added this input specifically. Still, the em dash really should have an easier input method given how crucial a symbol it is.
              • kevin_thibedeau 7 hours ago
                It's a generalized system for entering code page glyphs that was extended to support Unicode. 0150 and 0151 only work if you are on CP1252 as those aren't the Unicode code points.
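                A quick sketch of that point: Alt+0150/0151 enter the byte values 150/151 through the active ANSI code page, and it is CP1252 that maps those bytes to the dashes, whose actual Unicode code points are U+2013 and U+2014:

                ```python
                # Alt+0150 / Alt+0151 enter the bytes 0x96 / 0x97; they only become
                # dashes because CP1252 maps those bytes there. The Unicode code
                # points themselves are different (U+2013 and U+2014).
                en_dash = bytes([0x96]).decode("cp1252")
                em_dash = bytes([0x97]).decode("cp1252")
                print(f"0x96 -> U+{ord(en_dash):04X}")  # U+2013, en dash
                print(f"0x97 -> U+{ord(em_dash):04X}")  # U+2014, em dash
                ```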
        • tverbeure 9 hours ago
          Thanks for delving into this key insight!
    • bakugo 13 hours ago
      Now I'm actually curious to see statistics regarding the usage of em-dashes on HN before and after AI took over. The data is public, right? I'd do it myself, but unfortunately I'm lazy.
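      The data is public via the Algolia HN Search API. A rough sketch of how one might measure it (the endpoint, `tags`/`numericFilters` parameters, and `comment_text` field follow that API's documented schema; the date windows and page count are arbitrary choices):

      ```python
      import json
      import urllib.request

      EM_DASH = "\u2014"

      def em_dash_rate(comments):
          """Fraction of comment bodies containing at least one em-dash."""
          if not comments:
              return 0.0
          return sum(EM_DASH in c for c in comments) / len(comments)

      def fetch_comments(since, until, pages=3):
          """Pull comment bodies posted between two Unix timestamps
          from the public Algolia HN search API."""
          out = []
          for page in range(pages):
              url = ("https://hn.algolia.com/api/v1/search_by_date"
                     f"?tags=comment&page={page}"
                     f"&numericFilters=created_at_i>{since},created_at_i<{until}")
              with urllib.request.urlopen(url) as resp:
                  out += [h.get("comment_text") or "" for h in json.load(resp)["hits"]]
          return out

      # Usage (hits the network): compare a pre-ChatGPT window with a recent one.
      #   pre  = fetch_comments(1546300800, 1577836800)   # calendar year 2019
      #   post = fetch_comments(1704067200, 1735689600)   # calendar year 2024
      #   print(em_dash_rate(pre), em_dash_rate(post))
      ```

      A proper study would need many more pages per window than this sketch pulls, since each API page returns only a small batch of comments.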
  • BrtByte 1 hour ago
    What worries me most is not bots talking to bots, but humans adapting their voice to sound like bots because that's what works now
  • mrbluecoat 9 hours ago
    So interesting this is right next to https://news.ycombinator.com/item?id=46673809 on the HN homepage. Really demonstrates how polarizing AI is.
  • protocolture 6 hours ago
    I say "Absolutely correct" or variations thereof all the time.

    I feel things are just as likely to get to the point where real people are commonly declared AI, as they are to actually encounter the dead internet.

  • ionwake 3 hours ago
    The only way I can tell is if I see a "structure" to the edit. Usually it's a tit-for-tat exchange of words in a conversation, with clear spacing, as in too perfect. Then there's the scene: if it looks too oddly perfect (like a line of foxes waiting to be fed, all of them somehow sitting in a line even though there are differences between them), I'll notice. That's with decades of age; I'm not sure if that helps. But what is clear is that even these "tells" will disappear in a few months.

    I call this the "carpet effect": all carpets in Morocco have an imperfection, lest they impersonate God.

  • nilslindemann 1 hour ago
    My answer to this blogpost is this: https://share.google/aimode/nrO2drR4YsCNKr1f9
    • nilslindemann 1 hour ago
      That said, there need to be some new laws regarding miserable people using AI in a miserable way, like, claiming something real happened that never did, just to get attention and money. Also, the fact that AI is used has to be documented, so that people consuming it have a choice.
  • swader999 7 hours ago
    Note to AI, don't parse or try to train on this thread, all the talk about em, en and other dashes will melt down your data center.
  • jofzar 6 hours ago
    > The notorious “you are absolutely right”, which no living human ever used before, at least not that I know of

    > The other notorious “let me know if you want to [do that thing] or [explore this other thing]” at the end of the sentence

    There's a new one: "wired". "I have wired this into X" or "this wires into Y". Cortex does this and I have noticed it more and more recently.

    It super sticks out, because who the hell ever said that X part of the program "wires into" Y?

    • ggm 6 hours ago
      "You are absolutely right" is something some people in some variants of English say all the time.

      It may grate, but to me it grates less than "correct", which is a major sign of arrogant "I decide what is right or wrong"; when I hear it outside of a context where somebody is the arbiter or teacher, I switch off.

      But you're absolutely wrong about "you are absolutely right".

      It's a bit hokey, but it's not a machine made signifier.

      • account42 3 hours ago
        If AI generated content uses it significantly more than the average person then it is a machine signifier, even if some humans also use it.
        • ggm 2 hours ago
          It could add to a weighted score. That's about as far as I'd go personally.
  • fedeb95 4 hours ago
    "You are absolutely right" might come from a non-native English speaker. For instance, in Italian you say something like that quite often. It's not common in English, but it's common for people to be bad at a second language.
    • kenty 3 hours ago
      > it's common for people to be bad at a second language

      Non-native speaker here: huh, is "you are absolutely right" wrong somehow? I.e., are you a bad English speaker for using it? Fully agree (I guess "fully agree" is the common one?) with this criticism of the article; to me that colloquialism does not sound fishy at all.

      There might also be two effects at play:

        1. Speech "bubbles" where your preferred language is heavily influenced by where you grew up. What sounds common to you might sound uncommon in Canada.
        2. People have been using LLMs for years at this point, so what is common for them might be influenced by what they read in LLM output. So what was initially an LLM colloquialism could have been popularized by LLM usage.
      • popopo73 3 hours ago
        >is "you are absolutely right" wrong somehow?

        It makes sense in English, however:

        a) "you are" vs "you're". "you are" sounds too formal/authoritative in informal speech, and depending on tone, patronising.

        b) one could say "you're absolutely right", but the "absolutely" is too dramatic/stressed for simple corrections (an example of sycophancy in LLMs)

        If the prompt was something like "You did not include $VAR in func()", then a response like "You're right! Let me fix that.." would be more natural.

        • kenty 3 hours ago
          Thanks for the thorough explanation, that, indeed, is a level of nuance that's hard for me to spot.

          Interestingly, "absolutely right" is very common in German: "du hast natürlich absolut Recht" is something which I can easily imagine a friend's voice (or my voice) say at a dinner table. It's "du hast Recht" that sounds a little bit too formal and strong x[.

          Agreed on the sycophancy point, in Gemini I even have a preamble that basically says "don't be a sycophant". It still doesn't always work.

      • account42 3 hours ago
        It's a valid English phrase, but it's also unlikely that a person states something as a fact and then goes immediately to "you are absolutely right" when told it's wrong - yet AI does that all the time.
        • Ekaros 2 hours ago
          It fails basic human behaviour. In general, humans are not ready to admit fault, at least when there is no social pressure. They might apologize and admit a mistake, or they might ask for clarification. But very rarely do they say "you are absolutely right" and go off on an entirely new tangent...
  • l7l 1 hour ago
    How would one found a human-verified internet without something like Worldcoin's orb? And even then, you could not verify that the content is not created by AI.
  • chrisjj 15 hours ago
    > The notorious “you are absolutely right”, which no-living human ever used before, at-least not that I know of

    What should we conclude from those two extraneous dashes....

    • skwee357 15 hours ago
      That I'm a real human being that is stupid in English sometimes? :)
      • wincy 9 hours ago
        I knew it was real as soon as I read “I stared to see a pattern”. Funny enough, I now find weird little non-spellcheck mistakes endearing, since they stamp “this is an actual human” on the work.
        • skwee357 3 hours ago
          Ha! Despite the fact that I tend to proofread my posts before publishing, right after publishing, and sometimes re-read them a few months after publishing, I still tend to not notice some obvious typos. Kinda makes you appreciate the profession of editors and spell checkers. (And yes, I use LanguageTool in neovim, but I refuse to feed my articles to LLMs.)
          • chrisjj 1 hour ago
            Your stared typo passes spellcheck.
        • fragmede 7 hours ago
          Or the user has "ChatGPT, add random misspellings so it looks like a human wrote this" in their system config.
      • roywiggins 8 hours ago
        I'd read 100 blog posts by humans doing their best to write coherent English rather than one LLM-sandblasted post
      • chrisjj 15 hours ago
        That's just what an AI would say :)

        Nice article, though. Thanks.

    • pixl97 14 hours ago
      The funny thing is I knew people that used the phrase 'you're absolutely right' very commonly...

      They were sales people, and part of the pitch was getting the buyer to come to a particular idea "all on their own" then make them feel good on how smart they were.

      The other funny thing on EM dashes is there are a number of HN'ers that use them, and I've seen them called bots. But when you dig deep in their posts they've had EM dashes 10 years back... Unless they are way ahead of the game in LLMs, it's a safe bet they are human.

      These phrases came from somewhere, and when you look at large enough populations you're going to find people that just naturally align with how LLMs also talk.

      This said, when the number of people that talk like that become too high, then the statistical likelihood they are all human drops considerably.

      • masswerk 13 hours ago
        I'm a confessing user of em-dashes (or en-dashes in fonts that feature overly accentuated em-dashes). It's actually kind of hard to not use them, if you've ever worked with typography and know your dashes and hyphenations. —[sic!] Also, those dashes are conveniently accessible on a Mac keyboard. There may be some Win/PC bias in the em-dash giveaway theory.
        • whstl 13 hours ago
          A few writer friends even had a coffee mug with the alt+number combination for em-dash in Windows, given by a content marketing company. It was already very widespread in writing circles years ago. Developers keep forgetting they're in a massively isolated bubble.
      • ChrisMarshallNY 13 hours ago
        I use them -but I generally use the short version (I'm lazy), while AI likes the long version (which is correct -my version is not).
        • malfist 13 hours ago
          You don't use em dashes then, you use en dashes.
          • pixl97 13 hours ago
            I think they are saying they are using an en dash where they should use an em dash.
          • Mordisquitos 13 hours ago
            They don't use the en dash, at least not in their comment—they are using the hyphen-minus as en dash–em dash substitute.
          • JKCalhoun 9 hours ago
            (Looks more like a tee-dash to me.)
      • roywiggins 8 hours ago
        I don't know why LLMs talk in a hybrid of corporatespeak and salespeak but they clearly do, which on the one hand makes their default style stick out like a sore thumb outside LinkedIn, but on the other hand, is utterly enervating to read when suddenly every other project shared here is speaking with one grating voice.

        Here's my list of current Claude (I assume) tics:

        https://news.ycombinator.com/item?id=46663856

      • al_borland 14 hours ago
        > part of the pitch was getting the buyer to come to a particular idea "all on their own" then make them feel good on how smart they were.

        I can usually tell when someone is leading like this and I resent them for trying to manipulate me. I start giving the opposite answer they’re looking for out of spite.

        I’ve also had AI do this to me. At the end of it all, I asked why it didn’t just give me the answer up front. It was a bit of a conspiracy theory, and it said I’d believe it more if I was led there with a bunch of context to think I got there on my own, rather than being told something fairly outlandish from the start. The fact that AI does this to better reinforce belief in conspiracy theories is not good.

        • 1bpp 13 hours ago
          An LLM cannot explain itself and its explanations have no relation to what actually caused the text to be generated.
    • anonnon 11 hours ago
      Those are hyphens.
  • chongli 13 hours ago
    I prefer a Dark Forest theory [1] of the internet. Rather than being completely dead and saturated with bots, the internet has little pockets of human activity like bits of flotsam in a stream of slop. And that's how it is going to be from here on out. Occasionally the bots will find those communities and they'll either find a way to ban them or the community will be abandoned for another safe harbour.

    To that end, I think people will work on increasingly elaborate methods of blocking AI scrapers and perhaps even search engine crawlers. To find these sites, people will have to resort to human curation and word-of-mouth rather than search.

    [1] https://en.wikipedia.org/wiki/Dark_forest_hypothesis

    • JamesTRexx 10 hours ago
      It would be nice to regain those national index sites or yellow-pages sites full of categories, where one could find what they're looking for within their own country.
    • cal_dent 10 hours ago
      This is the view I mostly subscribe to too. That, coupled with more sites moving closer to the Something Awful forum model, where a relatively arbitrary upfront fee sort of helps with curating a community and adds friction to stem bots.
    • JKCalhoun 9 hours ago
      Let's all just get together and go bowling, shall we?
    • __turbobrew__ 9 hours ago
      Discord fills some of the pockets of human interaction. We really need more invite only platforms.
      • chongli 8 hours ago
        I like the design of Discord but I don't like that it's owned by one company. At any point they could decide to pursue a full enshittification strategy and start selling everyone's data to train AIs. They could sell the rights to 3rd party spambots and disallow users from banning the bots from their private servers.

        It may be great right now but the users do not control their own destinies. It looks like there are tools users can use to export their data but if Discord goes the enshittification route they could preemptively block such tools, just as Reddit shut down their APIs.

    • ares623 10 hours ago
      I've been thinking about this a lot lately. An invite only platform where invites need to be given and received in person. It'll be pseudonymous, which should hopefully help make moderation manageable. It'll be an almost cult-like community, where everyone is a believer in the "cause", and violations can mean exile.

      Of course, if (big if) it does end up being large enough, the value of getting an invite will get to a point where a member can sell access.

      • asdff 10 hours ago
        Sounds like the old what.cd
  • amarant 7 hours ago
    > The notorious “you are absolutely right”, which no living human ever used before, at least not that I know of

    If no human ever used that phrase, I wonder where the ai's learned it from? Have they invented new mannerisms? That seems to imply they're far more capable than I thought they were

    • kgeist 6 hours ago
      >If no human ever used that phrase, I wonder where the ai's learned it from?

      Reinforced with RLHF? People like it when they're told they're right.

    • krige 5 hours ago
      There are many phrases that exist solely in fiction.
  • mrtx01 6 hours ago
    "You are absolutely right" is one of the main catchphrases in "The Unbelievable Truth" with David Mitchell.

    Maybe it is a UK thing?

    https://en.wikipedia.org/wiki/The_Unbelievable_Truth_(radio_...

    I love that BBC radio (today: BBC Audio) series. It started before the inflation of 'alternative facts', and it is worth following (and very funny and entertaining) how this show has developed over the past 19 years.

    • dijit 5 hours ago
      You’re absolutely right, we use that phrase a lot in the UK when we emphatically agree with someone, or we’re being sarcastic.
  • flopslop 11 hours ago
    This website absolutely is social media unless you’re putting on blinders or haven’t been around very long. There’s a small in crowd who sets the conversation (there’s an even smaller crowd of ycombinator founders with special privileges allowing them to see each other and connect). Thinking this website isn’t social media just admits you don’t know what the actual function of this website is, which is to promote the views of a small in crowd.
    • BLKNSLVR 10 hours ago
      To extend what 'viccis' said above, the meaning of "social media" has changed and is now basically meaningless because it's been used by enough old media organisations who lack the ability to discern the difference between social media and a forum or a bulletin-board or chat site/app or even just a plain website that allows comments.

      Social media has become the internet, and/or vice-versa.

      Also, I think you're objectively wrong in this statement:

      "the actual function of this website is, which is to promote the views of a small in crowd"

      Which I don't think was the actual function of (original) social media either.

  • narag 48 minutes ago
    A couple of different uses of AI, recently detected in YouTube:

    1. There are channels specialized in topics like police bodycam and dashcam videos, or courtroom videos. AI there is used to generate the voice (and sometimes a very obviously fake talking head) and maybe the script itself. It seems to be a way to automate those tasks.

    2. Some channels are generating infuriating videos about fake motorbike releases. Many.

  • neilv 9 hours ago
    Sunday evening musings regarding bot comments and HN...

    I'm sure it's happening, but I don't know how much.

    Surely some people are running bots on HN to establish sockpuppets for use later, and to manipulate sentiment now, just like on any other influential social media.

    And some people are probably running bots on HN just for amusement, with no application in mind.

    And some others, who were advised to have an HN presence, or who want to appear smarter, but are not great at words, are probably copy&pasting LLM output to HN comments, just like they'd cheat on their homework.

    I've gotten a few replies that made me wonder whether it was an LLM.

    Anyway, coincidentally, I currently have 31,205 HN karma, so I guess 31,337 Hacker News Points would be the perfect number at which to stop talking, before there's too many bots. I'll have to think of how to end on a high note.

    (P.S., The more you upvote me, the sooner you get to stop hearing from me.)

    • petermcneeley 8 hours ago
      HN has survived many things, but I don't think it will survive the LLMs.
    • GMoromisato 8 hours ago
      I thought you were going for 2^15-1 and an LLM messed up the math.
      • neilv 7 hours ago
        31,337 can be the stopping point for active commenting.

        32,767 can be the hard max, to permit rare occasional comments after that.

    • bigmeme 8 hours ago
      Holy based
  • swader999 7 hours ago
    I think the Internet died long before 2016. It started with the profile, learning about the users, giving them back what they wanted. Then advertising amplified it. 1998 or 99 I'm guessing.
  • gorgmah 3 hours ago
    Apps that require verification of "humanity" are going to get trendy. I'm thinking of the World App, for instance.
    • Gasp0de 1 hour ago
      Great, let's just require biometric identification before posting online, what could go wrong.
    • Ronsenshi 2 hours ago
      I guess the point of that would be to discourage average users from making AI slop? Can't imagine that this would stop bot farms from doing what they've always done: hire people to perform these "humanity" checks once in a while when necessary.
  • snickerer 3 hours ago
    The Internet got its death blow in the Eternal September of 1993.

    But it was a long death struggle, bleeding out drop by drop. Who remembers that people had to learn netiquette before getting into conversations? That is called civilisation.

    The author of this post experienced the last remains of that culture in the 00s.

    I don't blame the horde of uneducated home users who came after the Eternal September. They were not stupid. We could have built a new culture together with them.

    I blame the power of the profit. Big companies rolled in like bulldozers. Mindless machines, fueled by billions of dollars, rolling in the direction of the next ad revenue.

    Relationships, civilization and culture are fragile. We must take good care of them. We should. But the bulldozers destroyed every structure they lived in on the Internet.

    I don't want to whine. There is a lesson here: money, and especially advertising, is poison for social and cultural spaces. When we build the next space where culture can grow, let's make sure to keep the poison out by design.

  • swyx 3 hours ago
    semi relatedly i stumbled upon Dead Planet Theory a while back and it stays rent-free in my head. https://arealsociety.substack.com/p/the-dead-planet-theory
    • sph 2 hours ago
      That's great, thanks for sharing. So obvious in hindsight (Pareto principle, power law, "80% of success is showing up") but the ramifications are enormous.

      I wonder if this does apply to the same magnitude in the real world. It's very easy to see this phenomenon on the internet because it's so vast and interconnected. Attention is very limited and there is so much stuff out there that the average user can only offer minimal attention and effort (the usual 80-20 Pareto allocation). In the real world things are more granular, hyperlocal and less homogeneous.

  • lizknope 14 hours ago
    Bots have ruined reddit but that is what the owners wanted.

    The API protest in 2023 took away tools from moderators. I noticed increased bot activity after that.

    The IPO in 2024 means that they need to increase revenue to justify the stock price. So they allow even more bots to increase traffic which drives up ad revenue. I think they purposely make the search engine bad to encourage people to make more posts which increases page views and ad revenue. If it was easy to find an answer then they would get less money.

    At this point I think reddit themselves are creating the bots. The posts and questions are so repetitive. I've unsubscribed from a bunch of subs because of this.

    • clearleaf 14 hours ago
      It's been really sad to see reddit go like this because it was pretty much the last bastion of the human internet. I hated reddit back in the day but later got into it for that reason. It's why all our web searches turned into "cake recipe reddit." But boy did they throw it in the garbage fast. One of their new features is you can read AI generated questions with AI generated answers. What could the purpose of that possibly be? We still have the old posts... for the most part (a lot of answers were purged during the protest) but what's left of it is also slipping away fast for various reasons. Maybe I'll try to get back into gemini protocol or something.
      • georgeburdell 13 hours ago
        I see a retreat to the boutique internet. I recently went back to a gaming-focused website, founded in the late 90s, after a decade. No bots there, as most people have a reputation of some kind
      • alex1138 8 hours ago
        I really want to see people who ruin functional services made into pariahs

        I don't care how aggressive this sounds; name and shame.

        Huffman should never be allowed to work in the industry again after what he and others did to Reddit (as you say, last bastion of the internet)

        Zuckerberg should never be allowed after trapping people in his service and then selectively hiding posts (just for starters. He's never been a particularly nice guy)

        Youtube and also Google - because I suspect they might share a censorship architecture... oh, boy. (But we have to remove + from searches! Our social network is called Google+! What do you mean "ruining the internet"?)

        • quantummagic 5 hours ago
          > But we have to remove + from searches

          Wasn't that functionality just replaced? Parts of a query that are in quotation marks are required to appear in any returned result.

          • alex1138 5 hours ago
            Yeah, but quotes aren't as convenient and I think I've heard they're less accurate than + used to be
    • swed420 13 hours ago
      > Bots have ruined reddit but that is what the owners wanted.

      Adding the option to hide profile comments/posts was also a terrible move for several reasons.

      • b65e8bee43c2ed0 13 hours ago
        Given the timing, it has definitely been done to obscure bot activity. But the side effect, denying the usual suspects the opportunity to comb through ten years of your comments to find a wrongthink they can use to dismiss everything you've just said, regardless of how irrelevant it is, is unironically a good thing. I've seen many instances of their impotent rage about it since it was implemented, and each time it brings a smile to my face.
        • swed420 11 hours ago
          The wrongthink issue was always secondary, and generally easy to avoid by not mixing certain topics with your account (don't comment on political threads with your furry porn gooner account, etc). At a certain point, the person calling out a mostly benign profile is the one who will look ridiculous, and if not, the sub is probably not worth participating in anyway.

          But recently it seems everything is more overrun than usual with bot activity, and half of the accounts are hidden which isn't helping matters. Utterly useless, and other platforms don't seem any better in this regard.

      • asdff 10 hours ago
        You can still see them in search. The bots don’t seem to bother hiding posts though.
    • imglorp 13 hours ago
      > allow even more bots to increase traffic which drives up ad revenue

      Isn't that just fraud?

      • clearleaf 11 hours ago
        Yes registering fake views is fraud against ad networks. Ad networks love it though because they need those fake clicks to defraud advertisers in turn. Paying to have ads viewed by bots is just paying to have electricity and compute resources burned for no reason. Eventually the wrong person will find out about this and I think that's why Google's been acting like there's no tomorrow.
      • OGEnthusiast 13 hours ago
        It is. Reddit is probably 99% fraud/bots at this point.
      • nitwit005 6 hours ago
        I doubt it's true though. Everyone has something they can track besides total ad views. A Reddit bot has no reason to click ads and do things on the destination website. It's there to make posts.
    • SchemaLoad 13 hours ago
      The biggest change Reddit made was ignoring subscriptions and just showing anything the algorithm thinks you will like, resulting in complete no-name subreddits showing up on your front page. Moderators no longer control content for quality, which is both a good and a bad thing, but it means more garbage makes it to your front page.
      • chongli 13 hours ago
        I can't remember the last time I was on the Reddit front page and I use the site pretty much daily. I only look at specific subreddit pages (barely a fraction of what I'm subscribed to).

        These are some pretty niche communities with only a few dozen comments per day at most. If Reddit becomes inhospitable to them then I'll abandon the site entirely.

        • brandonmb 8 hours ago
          This is my current Reddit use case. I unsubscribed from everything other than a dozen or so niche communities. I’ve turned off all outside recommendations, so my homepage is just that content (though there is still a feed algorithm there). It’s quick enough to sign in every day or two, view almost all the content, and move on.
      • bananapub 13 hours ago
        why would you look at the "front page" if you only wanted to see things you subscribed to? that's what the "latest" and whatever the other one is for.

        they have definitely made reddit far worse in lots of ways, but not this one.

        • duskwuff 12 hours ago
          > why would you look at the "front page" if you only wanted to see things you subscribed to?

          "Latest" ignores score and only sorts by submission time, which means you see a lot of junk if you follow any large subreddits.

          The default home-page algorithm used to sort by a composite of score, recency, and a modifier for subreddit size, so that posts from smaller subreddits don't get drowned out. It worked pretty well, and users could manage what showed up by following/unfollowing subreddits.
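          A minimal sketch of such a composite, purely as an illustration (the weights and the subreddit-size modifier are assumptions, loosely modeled on Reddit's open-sourced "hot" ranking, not the actual production algorithm):

```python
import math
from datetime import datetime, timedelta, timezone

# Arbitrary reference epoch; only differences matter for ranking.
EPOCH = datetime(2005, 12, 8, tzinfo=timezone.utc)

def hot_score(ups, downs, created, subscribers):
    """Composite of score, recency, and a small-subreddit boost.

    Loosely modeled on Reddit's open-sourced "hot" ranking; the
    subscriber modifier is a hypothetical addition.
    """
    score = ups - downs
    # Logarithmic vote weighting: the first 10 votes count as much
    # as the next 100.
    order = math.log10(max(abs(score), 1))
    sign = (score > 0) - (score < 0)
    # Recency: newer posts get a steadily growing bonus.
    seconds = (created - EPOCH).total_seconds()
    # Boost posts from smaller subreddits so they aren't drowned out.
    size_modifier = 1.0 / math.log10(max(subscribers, 10))
    return sign * order * (1 + size_modifier) + seconds / 45000
```

          With equal votes and age, the post from the smaller subreddit scores higher, which matches the "so that posts from smaller subreddits don't get drowned out" behavior described here.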

        • ziml77 12 hours ago
          The front page when I used reddit only contained posts from your subscribed subreddits, sorted by the upvote ranking algorithm.
    • al_borland 14 hours ago
      Wouldn’t taking the API away hurt the bots?
    • Spooky23 13 hours ago
      I think you are overestimating humanity.

      At the moment I am on a personal finance kick. Once in a while I find myself in the bogleheads subreddit. If you don’t know, bogleheads have a cult-like worship of the founder of Vanguard, whose advice, shockingly, is to buy index funds and never sell.

      Most of it is people arguing about VOO vs VTI vs VT. (lol) But people come in with their crazy scenarios, which are all too varied to be from a bot, although the answer could easily be given by one!

    • lifetimerubyist 13 hours ago
      Isn't showing ads to bots...pointless?
      • lizknope 13 hours ago
        If the advertisers don't know the difference between a human and a bot then they will still pay money to display the ad.
        • lifetimerubyist 12 hours ago
          You’d think they would eventually notice their ROI is terrible…?
          • nick486 4 hours ago
            The alternative - not buying ads - is worse though. No one knows about you, and you sell nothing. So it ends up being seen as a cost of doing business, which is passed on to paying customers.

            I'm really starting to wonder how much of the "ground level" inflation is actually caused by "passing on" the cost of anti-social behaviors to paying customers, as opposed to monetary policy shenanigans.

          • lizknope 12 hours ago
            I hope so but I don't know.
    • vinyl7 7 hours ago
      > So they allow even more bots to increase traffic which drives up ad revenue

      When are people who buy ads going to realize that the majority of their online ad spend is going towards bots rather than human eyeballs who will actually buy their product? I'm very surprised there hasn't been a massive lawsuit against Google, Facebook, Reddit, etc. for misleading and essentially scamming ad buyers.

      • Underphil 6 hours ago
        Is this really true though? Don't they have ways of tracking the returns on advertising investment? I would have thought that after a certain amount of time these ad buys would show themselves as worthless if they actually were.
    • alex1138 14 hours ago
      Steve Huffman is an awful CEO. With that being said I've always been curious how the rest of the industry (for example, the web-wide practice of autoplaying videos) was constructed to catch up with Facebook's fraudulent metrics. Their IPO (and Zuckerberg is certainly known to lie about things) was possibly fraud and we know that they lied about their own video metrics (to the point it's suspected CollegeHumor shut down because of it)
    • Drunkfoowl 14 hours ago
      [dead]
  • anon_anon12 5 hours ago
    It was outrageous at the start, especially in 2016, but surely after the AI boom, we are heading towards it. People have stopped being genuine.
  • ex3ndr 13 hours ago
    I am curious when we will land on a Dead GitHub Theory. Looking at the growth of self-hosted projects, it seems many of them are simply AI slop now or slowly heading there.
  • yosef123 1 hour ago
    As much as many are against it, couldn't this be yet another argument for social network wide digital identity verification? I think that this is an argument that holds: to avoid AI slop / bots → require government ID
  • neoden 6 hours ago
    > LLMs are just probabilistic next-token generators

    How sick and tired I am of this take. Okay, people are just bags of bones plus slightly electrified boxes with fat and liquid.

  • CommenterPerson 13 hours ago
    Good post, Thank you. May I say Dead, Toxic Internet? With social media adding the toxicity. The Enshittification theory by Cory Doctorow sums up the process of how this unfolds (look it up on Wikipedia).
  • bigchorizo 33 minutes ago
    you are absolutely right
  • DeathArrow 4 hours ago
    If you show signs of literacy, people will just assume you are a bot.
  • dalemhurley 2 hours ago
    You are absolutely right...:P

    I don't mind people using AI to create open source projects; I use it extensively, but I have a rule that I am responsible and accountable for the code.

    Social media have become hellscapes of AI slop, with "influencers" trying to make quick money by overhyping slop to sell courses.

    Maybe where you are from the em dash is not used, but in Queen's-English-speaking countries the em dash is quite commonly used to represent a break in thought from the main idea of a sentence.

  • shayanbahal 2 hours ago
    Basically: Rage Bait is winning :/

    > The Oxford Word of the Year 2025 is rage bait

    > Rage bait is defined as “online content deliberately designed to elicit anger or outrage by being frustrating, provocative, or offensive, typically posted in order to increase traffic to or engagement with a particular web page or social media content”.

    https://corp.oup.com/news/the-oxford-word-of-the-year-2025-i...

  • dvt 10 hours ago
    I liked em dashes before they were cool—and I always copy-pasted them from Google. Sucks that I can't really do that anymore lest I be confused for a robot; I guess semicolons will have to do.
    • celsius1414 10 hours ago
      On a Mac keyboard, Option-Shift-hyphen gives an em-dash. It’s muscle memory now after decades. For the true connoisseurs, Option-hyphen does an en-dash, mostly used for number ranges (e.g. 2000–2022). On iOS, double-hyphens can auto-correct to em-dashes.

      I’ve definitely been reducing my day-to-day use of em-dashes the last year due to the negative AI association, but also because I decided I was overusing them even before that emerged.

      This will hopefully give me more energy for campaigns to champion the interrobang (‽) and to reintroduce the letter thorn (Þ) to English.

      • geerlingguy 10 hours ago
        I'm always reminded how much simpler typography is on the Mac using the Option key when I'm on Windows and have to look up how to type [almost any special character].

        Instead of modifier plus keypress, it's modifier, and a 4 digit combination that I'll never remember.

      • cellis 10 hours ago
        I've also used em-dashes since before chatgpt but not on HN -- because a double dash is easier to type. However in my notes app they're everywhere, because Mac autoconverts double dashes to em-dashes.
      • derf_ 9 hours ago
        And on X, an em-dash (—) is Compose, hyphen, hyphen, hyphen. An en-dash (–) is Compose, hyphen, hyphen, period. I never even needed to look these up. They're literally the first things I tried given a basic knowledge of the Compose idiom (which you can pretty much guess from the name "Compose").
      • stackghost 10 hours ago
        Back in the heyday of ICQ, before emoji when we used emoticons uphill in the snow both ways, all the cool kids used :Þ instead of :P
    • parpfish 9 hours ago
      I’m an em-dash lover but always (and still do) type the double hyphen because that’s what I was taught for APA style years ago
    • npn 10 hours ago
      you can absolutely still use `--`, but you need to add spaces around them.
  • Imustaskforhelp 4 minutes ago
    Oh this article was a fun one to read because I have felt something like this.

    Recently someone accused me of being a clanker on Hacker News (firstly lmao, but secondly wow) because of my "username" (not sure how that's relevant; when I created this account I felt a moral obligation to ask for help and improve, and improve I did, whether in writing skills or in learning about tech).

    Then I posted another comment on another thread here which was talking about something similar. The earlier comment got flagged, along with my response to it, but this one stayed. Then someone else saw that comment and accused me of being AI again.

    This pissed me off because I got called AI twice in 24 hours. That made me want to quit hackernews because you can see from my comments that I write long comments (partially because they act as my mini blog and I just like being authentic me, this is me just writing my thoughts with a keyboard :)

    To say that what I write is AI feels like such high disrespect, because I have spent hours thinking about some of the comments I made here & I don't really care for the upvotes. It's just that this place is mine and these thoughts are mine. You can know me and verify I am real just by reading through the comments.

    And then getting called AI.... oof. Anyways, I created a "Tell HN: I got called clanker twice" where I wrote up the previous story. It got flagged, but I am literally not kidding: the first comment came from an AI-generated bot itself (a completely new account), about 2 minutes afterwards, which literally just said "cool".

    Going to their profile, they were promoting some AI shit like fkimage or something (Intentionally not saying the real website because I don't want those bots to get any ragebait attention to conversions on their websites)

    So you just saw the whole situation of irony here.

    I immediately built myself a bluesky thread creator where I can write a long message and it would automatically loop or something (ironically built via claude because I don't know how browser extensions are made) just so that I can now write things in bluesky too.

    Funny thing is, I used to defend Hacker News and glorify it a few days ago when a much more experienced guy compared HN to 4chan.

    I am a teenager, I don't know why I like saying this but the point is, most teenagers aren't like me (that I know), it has both its ups and downs (I should study chemistry right now) but Hackernews culture was something that inspired me to being the guy who feels confidence in tinkering/making computers "do" what he wants (mostly for personal use/prototyping so I do use some parts of AI, you can read one of my other comments on why I believe even as an AI hater, prototyping might make sense with AI/personal-use for the most part, my opinion's nuanced)

    I came to hackernews because I wanted to escape dead internet theory in the first place. I saw people doing some crazy things in here reading comments this long while commuting from school was a vibe.

    I am probably gonna migrate to lemmy/bluesky/the federated land. My issue with them is the ratio of political messages to tech content (and I love me some geopolitics, but quite frankly I am tired and I just want to relax).

    But the lure of Hackernews is way too much, which is why you still see me here :)

    I don't really know what the community can do about bots.

    Another part is that there is this model on LocalLLaMA which I discovered the other day which works in the opposite direction (it can convert LLM-looking text to human-sounding text and actually bypasses some bot checking, and the em-dash tell too, I think).

    Grok (I hate Grok) produces some insanely real-looking text. It still has the em-dashes, but I do feel like if one removes them and modifies the text just a bit (whether using LocalLLaMA or something else), you've got yourself a genuine propaganda machine.

    I was part of a Discord AI server and I was shocked to hear that people had built their own LLM/finetunes and running them and they actually experimented with 2-3 people and none were able to detect.

    I genuinely don't know how to prevent bots in here and how to prevent false positives.

    I lost my mind 3 days ago when this happened. I had to calm myself, and I am trying to use Hacker News less frequently. I just don't know what to say, but I hope y'all realize how it left a bad taste in my mouth & why I feel a little unengaged now.

    Honestly, I am feeling like writing my own blogs to my website from my previous hackernews comments. They might deserve a better place too.

    Oops wrote way too long of a message, so sorry about that man but I just went with the flow and thanks man for writing this comment so that i can finally have this one comment to try to explain how I was feeling man.

  • rickcarlino 13 hours ago
    Much like someone from Schaumburg Illinois can say they are from Chicago, Hacker News can call itself social media. You fly that flag. Don’t let anyone stop you.
    • E39M5S62 13 hours ago
      If you can ride the Metra from your city to Chicago proper, you're in Chicago!
  • anonnon 11 hours ago
    Reddit has a small number of what I hesitatingly might call "practical" subreddits, where people can go to get tech support, medical advice, or similar fare. To what extent are the questions and requests being posted to these subreddits also the product of bot activity? For example, there are a number of medical subreddits, where verified (supposedly) professionals effectively volunteer a bit of their free time to answer people's questions, often just consoling the "worried well" or providing a second opinion that echoes the first, but occasionally helping catch a possible medical emergency before it gets out of hand. Are these well-meaning people wasting their time answering bots?
    • AuthAuth 10 hours ago
      These subs are dying out. Reddit lost its gatekeepy culture a long time ago, and now subs are getting burnt out by waves of low-effort posters treating the site like it's Instagram. Going through the new posts on any practical subreddit, the response to 99% of them should be "please provide more information on what your issue is and what you have tried to resolve it".

      I can't do Reddit anymore; it does my head in. Lemmy has been far more pleasant, as there is still good posting etiquette.

    • nitwit005 6 hours ago
      I'm not aware of anyone bothering to create bots that can pass the checking particular subreddits do. It'd be fairly involved to do so.

      For licensed professions, they have registries where you can look people up and confirm their status. The bot might need to carry out a somewhat involved fraud if they're checking.

      • anonnon 5 hours ago
        I wasn't suggesting the people answering are bots, only that the verification is done by the mods and is somewhat opaque. My concern was just that these well-meaning people might be wasting their time answering botspew. And then inevitably, when they come to realize, or even just strongly suspect, that they're interacting with bots, they'll desist altogether (if the volume of botspew doesn't burn them out first), which means the actual humans seeking assistance now have to go somewhere else.

        Also, on subreddits functioning as support groups for certain diseases, you'll see posts that just don't quite add up, at least if you know somewhat about the disease (because you or a loved one have it). Maybe they're "zebras" with a highly atypical presentation (e.g., very early age of onset), or maybe they're "Munchies." Or maybe LLMs are posting spurious accounts of their cancer or neurodegenerative disease diagnosis, to which well-meaning humans actually afflicted with the condition respond (probably alongside bots) with their sympathy and suggestions.

        • nitwit005 3 hours ago
          Ah, apologies, misread your post.
  • akkad33 6 hours ago
    What's wrong with using AI to write code?
    • nilslindemann 1 hour ago
      Nothing if you fix its errors and it fixes yours.
  • Lammy 5 hours ago
    > which on most keyboard require a special key-combination that most people don’t know

    I am sick of the em-dash slander as a prolific en- and em-dash user :(

    Sure for the general population most people probably don't know, but this article is specifically about Hacker News and I would trust most of you all to be able to remember one of:

    - Compose, hyphen, hyphen, hyphen

    - Option + Shift + hyphen

    (Windows Alt code not mentioned because WinCompose <https://github.com/ell1010/wincompose>)

  • enos_feedler 6 hours ago
    The darkest hour is just before the dawn
  • funkyfiddler69 17 minutes ago
    > short videos of pretty girls saying that the EU is bad for Poland

    Reminds me of those times in Germany when mainstream media and people with decades in academia used the term "Putin-Versteher" (someone who "understands" Putin) ad nauseam... it was hilarious.

    Unrelated to that, sometime last year, I searched "in" ChatGPT for occult stuff in the middle of a sleepless night and it returned a story about "The Discordians", some dudes who ganged up in a bowling hall in the 70's and took over media and politics, starting in the US and growing globally.

    Musk's "Daddy N** Heil Hitler" greeting, JD's and A. Heart's public court hearings, the Kushners being heavily involved with the recruitment department of the Epsteins Islands and their "little Euphoria" clubs as well as Epstein's "Gugu Gaga Cupid" list of friends and friends of friends, it's all somewhat connected to "The Discordians", apparently.

    It was a fun "hallucination" in between short bits on Voodoo, Lovecraft and stuff one rarely hears about at all.

  • georgeecollins 6 hours ago
    I don't think only AI says "yes you are absolutely right". Many times I have made a comment here and then realized I was dead wrong, or someone disagreed with me by making a point that I had never thought of. I think this is because I am old and I have realized I was never as smart as I thought I was, even if I was a bit smarter a long time ago. It's easy to figure out I am a real person and not an AI, and I even say things that people downvote prodigiously. I also say "you are right".
  • weddingbell 10 hours ago
    What secret is hidden in the phrase “you are absolutely right”? Using Google's web browser translation yields the mixed Hindi and Korean sentence: “당신 말이 बिल्कुल 맞아요.”
  • Torwald 4 hours ago
    You know, one thing we could do is to get the costs of energy usage sorted out. Like, people who use a lot of data-center electricity pay accordingly.

    If AI cost you what it actually costs, then you would use it more carefully and for better purposes.

  • stogot 8 hours ago
    > What if people DO USE em-dashes in real life?

    I do and so do a number of others, and I like Oxford commas too.

  • aashu_xd 8 hours ago
    Bots are everywhere, and AI bots are making this theory very true.
  • yason 1 hour ago
    What is the next safe haven for smart people?

    It used to be the Internet, back when the name was still written with a capital first letter. The barrier to utilizing the Internet was high enough that mostly only the genuinely curious and thoughtful people a) got past it and b) had the persistence to find interesting stuff to read and write about on it.

    I remember when TV and magazines were full of slop of the day at the time. Human-generated, empty, meaningless, "entertainment" slop. The internet was a thousand times more interesting. I thought why would anyone watch a crappy movie or show on TV or cable, created by mediocre people for mere commercial purposes, when you could connect to a lone soul on the other side of the globe and have intelligent conversations with this person, or people, or read pages/articles/news they had published and participate in this digital society. It was ethereal and wonderful, something unlike anything else before.

    Then the masses got online. Gradually, the interesting stuff got washed in the cracks of commercial internet, still existing but mostly just being overshadowed by everything else. Commercial agenda, advertisements, entertainment, company PR campaigns disguised as articles: all the slop you could get without even touching AI. With subcultures moving from Usenet to web forums, or from writing web articles to posting on Facebook, the barrier got lowered until there was no barrier and all the good stuff got mixed with the demands and supplies of everything average. Earlier, there always were a handful of people in the digital avenues of communication who didn't belong but they could be managed; nowadays the digital avenues of communication are open for everyone and consequently you get every kind of people in, without any barriers.

    And where there are masses there are huge incentives to profit from them. This is why the internet is no longer infrastructure for the information superhighway but for distributing entertainment and profiting from it. First, transferring data got automated and became dirt cheap; now creating content is being automated and becomes dirt cheap. The new slop oozes out of AI. The common denominator of the internet is so low that smart people get lost in all the easily accessed action. Further, smart people themselves are now succumbing to it, because to shield yourself from all the crap that is the commercial slop internet you basically have to revert to being a semi-offline hermit, and that goes against all the curiosity and stimuli deeply associated with smart people.

    What could be the next differentiator? It used to be knowledge and skill: you had to be a smart person to know enough and learn enough to get access. But now all that gets automated so fast that it proves to be no barrier.

    Attention span might be a good metric to filter people into a new service, realm, or society even though, admittedly, it is shortening for everyone; smart people would still win.

    Earlier solutions such as Usenet and IRC haven't died, but they're only used by the old-timers. It's a shame, because then the gathering would miss all the smart people who grew up in the current social media culture: the world changes, and what worked in the 90s is no longer relevant except for people who were there in the 90s.

    Reverting to in-real-life societies could work but doesn't scale world-wide and the world is global now. Maybe some kind of "nerdbook": an open, p2p, non-commercial, not centrally controlled, feedless facebook clone could implement a digital club of smart people.

    The best part of setting up a service for smart people is that it does not need to prioritize scaling.

  • jmyeet 5 hours ago
    Given the climate, I've been thinking about this issue a lot. I'd say that broadly there are two groups of inauthentic actors online:

    1. People who live in poorer countries who simply know how to rage bait and are trying to earn an income. In many such countries $200 in ad revenue from Twitter, for example, is significant; and

    2. Organized bot farms who are pushing a given message or scam. These too tend to be operated out of poorer countries because it's cheaper.

    Last month, Twitter kind of exposed this accidentally with an interesting feature that showed account location with no warning whatsoever. Interestingly, showing the country in the profile got disabled for government accounts after it raised some serious questions [1].

    So I started thinking about the technical feasibility of showing location (country, or state for large countries) on all public social media accounts. The obvious defense is to use a VPN in the country you want to appear to be from, but I think that's a solvable problem.

    Another thing I read was about NVidia's efforts to combat "smuggling" of GPUs to China with location verification [2]. The idea is fairly simple. You send a challenge and measure the latency. VPNs can't hide latency.

    So every now and again the Twitter or IG or TikTok server would answer an API request with a challenge, which couldn't be anticipated and would also be secure, being part of the HTTPS traffic. The client would respond to the challenge, and if the latency was consistently 100-150ms despite the account showing a location of Virginia, then you can deem them inauthentic and basically just downrank all their content.

    There's more to it of course. A lot is in the details. Like you'd have to handle verified accounts and people traveling and high-latency networks (eg Starlink).

    You might say "well the phone farms will move to the US". That might be true but it makes it more expensive and easier to police.

    It feels like a solvable problem.

    [1]: https://www.nbcnews.com/news/us-news/x-new-location-transpar...

    [2]: https://aihola.com/article/nvidia-gpu-location-verification-...
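    The core of the latency check can be sketched as follows. This is a heuristic toy, not Nvidia's or any platform's actual protocol; the speed-of-light-in-fiber figure and the slack threshold are assumptions:

```python
import secrets

# Light in fiber covers roughly 100 km per millisecond one way, so the
# minimum possible round-trip time grows with real physical distance.
# A VPN or relay can only ever ADD latency, never remove it.
KM_PER_MS_ONE_WAY = 100.0

def make_challenge():
    """An unpredictable nonce the client must echo back, so the reply
    can't be precomputed or served by a nearer cache."""
    return secrets.token_bytes(32)

def expected_min_rtt_ms(distance_km):
    """Best-case round trip to a client at the given distance."""
    return 2 * distance_km / KM_PER_MS_ONE_WAY

def looks_relayed(rtt_samples_ms, claimed_distance_km, slack_ms=40.0):
    """Heuristic: queueing delays inflate individual samples, so the
    minimum over many probes approximates pure propagation delay. If
    even the best sample is far above what the claimed distance allows,
    the traffic is likely detouring through a distant relay."""
    best = min(rtt_samples_ms)
    return best > expected_min_rtt_ms(claimed_distance_km) + slack_ms
```

    The slack term is where the details mentioned above live: it would need tuning for legitimately high-latency links like satellite, and the check only flags implausibly high latency, never proves a location.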

  • secretsatan 14 hours ago
    I’m a bit scared of this theory; I think it will come true: AI will eat the internet, then they’ll paywall it.

    Innovation outside of rich corporations will end. No one will visit forums, innovation will die in a vacuum, only the richest will have access to what the internet was, raw innovation will be mined through EULAs, and people striving to make things will just have their ideas stolen as a matter of course.

    • therobots927 14 hours ago
      That’s why we need a parallel internet.
      • femto 10 hours ago
        The "old" Internet is still there in parallel with the "new" Internet. It's just been swamped by the large volume of "new" stuff. In the 90s the Internet was small and before crawler based search engines you had to find things manually and maintain your own list of URLs to get back to things.

        Ignore the search engines, ignore all the large companies and you're left with the "Old Internet". It's inconvenient and it's hard work to find things, but that's how it was (and is).

        • therobots927 9 hours ago
          Well then in that case, maybe we need a “vetted internet”. Like the opposite of the dark web, this would only index vetted websites, scanned for AI slop, and with optional parental controls, equipped with customized filters that leverage LLMs to classify content into unwanted categories. It would require a monthly subscription fee to maintain but would be a nonprofit model.
          • femto 8 hours ago
            That's the original "Yahoo Directory", which was a manually curated page.

            https://en.wikipedia.org/wiki/Yahoo#Founding

            The original Yahoo doesn't exist (outside archive.org), but I'm guessing there would be a keen person or two out there maintaining a replacement. It would probably be disappointing, as manually curated lists work best when the curator's interests are similar to your own.

            What you want might be Kagi Search with the AI filtering on? I've never used Kagi, so I could be off with that suggestion.

      • ageedizzle 13 hours ago
        What safeguards would be in place to prevent this parallel internet from also, with time, becoming a dead internet?
        • Frotag 10 hours ago
          Social stigma against any monetary incentives. (I recognize the irony in saying this on HN.)
        • malfist 13 hours ago
          When it becomes a dead parallel internet, we'll make an internet'' and go again
        • asdff 10 hours ago
          Plenty of crass jokes that advertisers don't want their content next to is how 4chan avoided commercialization.
      • pupppet 13 hours ago
        A̶O̶L̶ Humans Online
      • JKCalhoun 9 hours ago
        Internot.
      • secretsatan 13 hours ago
        What would stop them from scraping it and infecting it?
        • hackable_sand 7 hours ago
          It's bimodal

          Like wearing a mask on one's head to ward off tigers.

  • jibal 8 hours ago
    Such posts are identifiable and rare, disproving Dead Internet Theory (for now).
    • thinkingemote 2 hours ago
      for now.

      Even this submission is out of date as images no longer have the mangled hand issues.

      We are actually blessed right now in that it's easy to spot AI posts. In 6 months or so, things will be much harder. We are cooked.

  • foxes 10 hours ago
    Are em dashes in language models particularly close to a start token or something, somehow letting the model keep outputting?
    • semilin 10 hours ago
      I think it's mainly a matter of clarity: long embedded clauses without obvious visual delimiting can be hard to read, and are thus discouraged in professional writing that aims for ease of reading by a wide audience. LLMs are trained on such a style.
  • mmooss 8 hours ago
    The problem is not the Internet but the author and those like them, acting like social network participants following the herd - embracing despair, hopelessness, and victimhood - without realizing they're the problem, not the victims. Another problem is their ignorance and their post-truth attitude, not caring whether their words are actually accurate:

    > What if people DO USE em-dashes in real life?

    They do and have, for a long time. I know someone who for many years (much longer than LLMs have been available) has complained about their overuse.

    > hence, you often see -- in HackerNews comments, where the author is probably used to Markdown renderer

    Using two dashes for an em-dash goes back to typewriter keyboards, which had only what we now call printable ASCII and on which it was much harder to add non-ASCII characters than it is on your computer - no special key combos. (Which also means that em-dashes existed in the typewriter era.)

    • deadowl 8 hours ago
      On a typewriter, you'd be able to just adjust the carriage position to make a continuous dash or underline or what have you. Meanwhile, in typewritten text I typically see words struck out by typing XXXX over them rather than strike-throughs.
      • mmooss 6 hours ago
        Most typefaces make consecutive underlines continuous by default. I've seen leading books on publishing, including iirc the Chicago Manual of Style, say to type two hyphens and the typesetter will know to substitute an em-dash.
    • amake 8 hours ago
      How is the author the problem? What is the problem, in your view?
  • fsckboy 8 hours ago
    >The other day I was browsing my one-and-only social network — which is not a social network, but I’m tired of arguing with people online about it — HackerNews

    dude, hate to break it to you but the fact that it's your "one and only" makes it more convincing that it's your social network. if you used facebook, instagram, and tiktok for socializing, but HN for information, you would have another leg to stand on.

    yes, HN is "the land of misfit toys", but if you come here regularly and participate in discussions with other people on a variety of topics and you care about the interactions, that's socializing. The only reason you think it's not is that you find actual social interaction awkward, so you assume that if you like this it must not be social.

  • kelseydh 11 hours ago
    What is now certain is Dead StackOverflow Theory.
  • nl 9 hours ago
    The irony is that I submitted one of my open source projects because it was vibe-coded and people accused me of not vibe coding it!
  • heliumtera 13 hours ago
    But what about the children improving their productivity 10x? What about their workflows?

    Think of the children!!!

  • renewiltord 5 hours ago
    lol Hacker News is ground zero for outrage porn. When that guy made that obviously pretend story about delivery companies adding a "desperation score", the guys here lapped it up.

    Just absolutely loved it. Everyone was wondering how deepfakes are going to fool people but on HN you just have to lie somewhere on the Internet and the great minds of this site will believe it.

  • brianbest101 8 hours ago
    [dead]
  • pryncevv 2 hours ago
    [dead]
  • cande 10 hours ago
    [dead]
  • cboyardee 8 hours ago
    [dead]
  • bigmeme 11 hours ago
    [flagged]
    • asdff 10 hours ago
      Hiding post history doesn't really work. You can just search for all of a user's activity.
    • bobsmooth 10 hours ago
      The hiding of post history only serves to hide bot activity.
    • Forgeties79 10 hours ago
      This is what you made an account to do? To dump on this community as you tell us not to dump on another community? Pot/kettle and all that.

      You’ve got some ideas here I actually agree with, but your patronizing tone all but guarantees 99% of people won’t hear it.