Sam Altman may control our future – can he be trusted?

(newyorker.com)

1406 points | by adrianhon 23 hours ago

135 comments

  • ronanfarrow 20 hours ago
    Ronan Farrow here. Andrew Marantz and I spent 18 months on this investigation. Happy to answer questions about the reporting.
    • cs702 20 hours ago
      Thank you for coming on HN and offering to answer questions.[a]

      This is a fantastic piece, very timely, evidently well-researched, and also well-written. Judging by the little that I know, it's accurate. Thank you for doing the work and sharing it with the world.

      OpenAI may be in a more tenuous competitive position than many people realize. Recent anecdotal evidence suggests the company has lost its lead in the AI race to Anthropic.[b]

      Many people here, on HN, who develop software prefer Claude, because they think it's a better product.[c]

      Is your understanding of OpenAI's current competitive position similar?

      ---

      [a] You may want to provide proof online that you are who you say you are: https://en.wikipedia.org/wiki/On_the_Internet%2C_nobody_know...

      [b] https://www.latimes.com/business/story/2026-04-01/openais-sh...

      [c] For example, there are 2x more stories mentioning Claude than ChatGPT on HN over the past year. Compare https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru... to https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

      • ronanfarrow 16 hours ago
        Thank you for this, very much appreciate the thoughtful response.

        The piece captures some of the anxieties within OpenAI right now about their competitive position. This obviously ebbs and flows but of late there has been much focus on Anthropic's relative position. We of course mention the allegations of "circular deals" and concerns about partners taking on debt.

        • cs702 14 hours ago
          Thank you. Yes, I saw that. The company's always been surrounded by endless talk about insane hype, speculative bubbles, and financial engineering. I wasn't asking so much about that.

          I was asking more about your informed view on how OpenAI's technology, products, and roadmap are perceived, particularly by customers and partners, in comparison to those of competitors.

          If you have an opinion about that, everyone here would love to hear about it.

          • globalnode 1 hour ago
            At this point even Google's AI search results are better than GPT. Obviously this is not for full programs, but if you know what you're doing and just want a snippet, that's all you need.
          • Ericson2314 9 hours ago
            Ronan Farrow's expertise is investigations into elite amorality, not evaluating technical products. Why are you asking this question?
            • cs702 8 hours ago
              I didn't ask him to evaluate them. I asked him how customers and partners perceive them.

              He's had so many conversations that he likely has a sense of how perceptions of the company and its offerings have changed.

              I'm curious.

            • bloppe 5 hours ago
              Much of the article and general palace intrigue is predicated on the idea that OpenAI has a singularly revolutionary product. If it later turns out to be a commodity, or OpenAI is simply outcompeted nonetheless, then the idea that Sam Altman's personal shortcomings are something to stress about would seem quaint. Just another hubristic tech billionaire acting in bad faith doesn't really command attention the same way as someone "controlling your future".
          • irishcoffee 9 hours ago
            My guess is that the answer to your question, fantastic question, is that nobody knows. I remember having the same thoughts when Covid was first “arriving” if you will: we wanted people in the know to throw us a nugget of information, and they just didn’t know.

            As it turns out, and what I’m kind of going with for this LLM shit, is that it’ll play out exactly how you think it will. The companies are all too big to fail, with billionaire backers who would rather commit fraud than lose money.

            • philipallstar 1 hour ago
              How would fraud help here? Don't they just need scale of lots of customers paying a little bit? How do you fraud your way into that?
        • keepamovin 46 minutes ago
          If you were in charge of deciding what should be done with Sam Altman, what would you choose?
      • unsupp0rted 12 hours ago
        Many of us prefer OpenAI's Codex, because we think it's a better product.

        No comment on the CEO: I just find the product superior in everything but UI/UX and conversation. It's better at quality code.

        • mliker 12 hours ago
          Who is “us”? It does seem that some scientists prefer Codex for its math capabilities but when it comes to general frontend and backend construction, Claude Code is just as good and possibly made better with its extensive Skills library.

          Both Codex and Claude Code fail when it comes to extremely sophisticated programming for distributed systems.

          • keldaris 8 hours ago
            As a scientist (computational physicist, so plenty of math, but also plenty of code, from Python PoCs to explicit SIMD and GPU code, mostly various subsets of C/C++), I can confirm - Codex is qualitatively better for my use cases than Claude. I keep retesting them (not on benchmarks; I simply use both in parallel for my work and see what happens) after every version update, and ever since 5.2, Codex seems further and further ahead. The token limits are also far more generous (and it matters - I found it fairly easy to hit the 5h limit on max-tier Claude), but mostly it's about quality: the probability that the model will give me something useful I can iterate on, as opposed to discard immediately, is much higher with Codex.

            For the few times I've used both models side by side on more typical tasks (not so much web stuff, which I don't do much of, but more conventional Python scripts, CLI utilities in C, some OpenGL), they seem much more evenly matched. I haven't found a case where Claude would be markedly superior since Codex 5.2 came out, but I'm sure there are plenty. In my view, benchmarks are completely irrelevant at this point, just use models side by side on representative bits of your real work and stick with what works best for you. My software engineer friends often react with disbelief when I say I much prefer Codex, but in my experience it is not a close comparison.

            • physicsguy 1 hour ago
              I've tried both on similar problems and haven't found such a clear-cut difference. I still find that neither is able to fully implement a complex algorithm I worked on in the past correctly with the same inputs. I'm not sharing exactly the benchmark I'm using, but think about something for improving performance of N^2 operations that are common in physics and you can probably guess the train of thought.
            • ricksunny 7 hours ago
              >As a scientist (computational physicist,

              Is there one that you prefer for, i dunno, physics?

          • zeroxfe 11 hours ago
            I'm in that camp -- I have the max-tier subscription to pretty much all the services, and for now Codex seems to win. Primarily because 1) long horizon development tasks are much more reliable with codex, and 2) OpenAI is far more generous with the token limits.

            Gemini seems to be the worst of the three, and some open-weight models are not too bad (like Kimi k2.5). Cursor is still pretty good, and copilot just really really sucks.

          • the__alchemist 8 hours ago
            Claude Code, Codex, and Cursor are old news. If you're having problems, it's because you're not using the latest hotness: Cludge. Everyone is using it now - don't get left behind.
            • outside1234 6 hours ago
              Cludge has been left behind by Clanker, that’s the new hotness. 45B valuation!
          • unsupp0rted 12 hours ago
            Us = me and say /r/codex or wherever Codex users are. I've tried both, liked both, but in my projects one clearly produces better results, more maintainable code and does a better job of debugging and refactoring.
            • sampullman 11 hours ago
              That's interesting, I actively use both and usually find it to be a toss up which one performs better at a given task. I generally find Claude to be better with complex tool calls and Codex to be better at reviewing code, but otherwise don't see a significant difference.
              • SOLAR_FIELDS 10 hours ago
                If you want to find an advocate for Codex that can give a pretty good answer as to why they think it's better, go ask Eric Provencher. He develops https://repoprompt.com/. He spends a lot of time thinking in this space and prefers Codex over Claude, though I haven't checked recently to see if he still has that opinion. He's pretty reachable on Discord if you poke around a bit.
                • hirako2000 1 hour ago
                  Quite irrelevant what factions think. This or that model may be superior for these and those use cases today, and things will flip next week.

                  Also, RLHF means that models produce output according to certain human preferences, so it depends on what set of humans provided the feedback, and what mood they were in when they did.

              • aswanson 11 hours ago
                Any difference in performance on mobile development?
                • sampullman 9 hours ago
                  For that I'm not so sure. I tried both in early 2025 and was disappointed in their ability to deal with a TCA-based app (iOS) and Jetpack Compose stuff on Android, but I assume Opus 4.6 and GPT 5.4 are much better.
            • rocketpastsix 10 hours ago
              Yeah, I'm not in this "us" you speak of.
              • Finbel 3 hours ago
                Of course you're not one of "us" if you're one of "them".
          • lhl 3 hours ago
            As some other people mentioned, using both/multiple is the way to go if it's within your means.

            I've been working on a wide range of projects and I find that the latest GPT-5.2+ models seem to be generally better coders than Opus 4.6; however, the latter tends to be better at big-picture thinking, structuring, and communicating, so I tend to iterate through Opus 4.6 max -> GPT-5.2 xhigh -> GPT-5.3-Codex xhigh -> GPT-5.4 xhigh. I've found GPT-5.3-Codex is the most detail-oriented, but not necessarily the best coder. One interesting thing: for my high-stakes project, I have one coder lane but have all the models do independent review, and they tend to catch different subsets of implementation bugs. I also notice huge behavioral changes based on changing AGENTS.md.

            In terms of the apps, while Claude Code was ahead for a long while, I'd say Codex has largely caught up in terms of ergonomics, and in some things, like the way it lets you inline or append steering, I like it better now (and in some places it's far, far ahead - the compaction is night and day better in Codex).

            (These observations are based on about 10-20B/mo combined cached tokens, human-in-the-loop, so heavy usage and most code I no longer eyeball, but not dark factory/slop cannon levels. I haven't found (or built) a multi-agent control plane I really like yet.)

            • baq 1 hour ago
              This is the way. Eg. IME Gemini is really damn good at sql.
          • baq 1 hour ago
            I’m one of those ‘us’. Claude’s outputs require significant review and iteration effort (to put it bluntly, they get destroyed by GPT and Gemini). I’m basically using Sonnet to do code search and write-ups, since it is a better (more human-like) writer than GPT, and faster and more reliable than Gemini, but that’s about it.
          • zem 10 hours ago
            I've found claude startlingly good at debugging race conditions and other multithreading issues though.
            • josephg 10 hours ago
              My rule of thumb is that it's good for anything "broad", and weaker for anything "deep". Broad tasks are tasks which require working knowledge of lots of random stuff. It's bad at deep work - like implementing a complex, novel algorithm.

              LLMs aren't able to achieve 100% correctness on every line of code. But luckily, 100% correctness is not required for debugging. So it's better at that sort of thing. It's also (comparatively) good at reading lots and lots of code. Better than I am - I get bogged down in details and tire quickly.

              An example of broad work is something like: "Compile this C# code to webassembly, then run it from this go program. Write a set of benchmarks of the result, and compare it to the C# code running natively, and to this python implementation. Make a chart of the data and add it to this latex code." Each of the steps is simple if you have expertise in the languages and tools, but a lot of work otherwise. For me to do that, I'd need to figure out C# webassembly compilation and go wasm libraries. I'd need to find a good charting library. And so on.

              I think it's decent at debugging because debugging requires reading a lot of code. And there are lots of weird tools and approaches you can use to debug something. And it's not mission critical that every approach works. Debugging plays to the strengths of LLMs.

          • 7thpower 10 hours ago
            Not a scientist and use codex for anything complex.

            I enjoy using CC more and use it for non coding tasks primarily, but for anything complex (honestly most of what I do is not that complex), I feel like I am trading future toil for a dopamine hit.

          • DeathArrow 2 hours ago
            Many paying customers say that Anthropic degraded the capability of Opus and Claude Code in recent months and that outcomes are worse. There are even discussions on HN about this.

            Last one is from yesterday: https://news.ycombinator.com/item?id=47660925

        • bko 9 hours ago
          I also find Codex much more generous in terms of what you get with a Pro ($20/mo) subscription. I use it pretty much non-stop and I have yet to hit a limit. Weekly reset is much better as well.
        • KaiserPro 1 hour ago
          GPT/Claude/Gemini are pretty interchangeable at this point.
          • baq 1 hour ago
            Absolutely not the case. They're complementary.
        • thaoanh404 57 minutes ago
          I find myself being more productive with Codex/Copilot on coding tasks, but Claude does seem to be better at planning.
        • DeathArrow 2 hours ago
          I prefer GLM 5.1 and MiniMax 2.7. With a better harness like Forge Code, I have better results for way less money than by using GPT and Opus.
        • shevy-java 2 hours ago
          Does this work for people? To me having a "better product" would be completely irrelevant if the use cases are evil.
        • aaa_aaa 5 hours ago
          Shill talk
        • enraged_camel 10 hours ago
          [flagged]
      • brightbeige 20 hours ago
        He’s replying on this twitter thread - perhaps someone with an account can ask there and link his comment here?

        https://xcancel.com/RonanFarrow/status/2041127882429206532#m

        • jamiequint 15 hours ago
          Here is the actual link, not a link to some weird third-party site that can't be trusted.

          https://x.com/RonanFarrow/status/2041127882429206532

          • rounce 10 hours ago
            FYI xcancel is just a mirror that allows reading replies without needing an account.
          • SwellJoe 13 hours ago
            Whereas X can be trusted?
            • jamiequint 9 hours ago
              Yes? It's the data source, not a third-party. How is this even a question?
              • minimaxir 7 hours ago
                There's pedantic, and then there's needlessly pedantic.

                xcancel is a valid workaround for X links on Hacker News and is sufficient for original attribution.

              • SwellJoe 7 hours ago
                X restricts what you can view without logging in. Many folks don't want to log in to X, for obvious reasons. Posting an xcancel link is kinda like folks posting various `archive` URLs to bypass paywalls, work around overloaded servers, etc. That's an extremely common practice here that usually goes without comment.
      • ed 11 hours ago
        It's worth noting Codex has 2x more stories than Claude https://hn.algolia.com/?query=codex
        • cloverich 6 hours ago
          But by page 5, those stories have around 50-60 karma, while Claude's page 5 stories are still 500+.

          (I found your comment surprising based on my daily HN reading recollection - I mostly read the top N daily and feel I only occasionally see Codex stories.)

      • ATMLOTTOBEER 8 hours ago
        Yeah we moved to Claude a few months ago, mostly because the devs kept using it anyway. Altman stuff is interesting but at the end of the day you just go with whatever tool works
      • georgemcbay 17 hours ago
        > You may want to provide proof online that you are who you say you are

        Unfortunately it probably doesn't even matter here on HN, considering how predictably this story is getting brigaded down.

        But yeah, it was a fantastic piece.

    • taurath 17 hours ago
      The statements around the sexual abuse allegations seemed the most puzzling to me - his sister's allegations, and claims of underage partners based on his tendency to hook up with younger partners. It does seem like this piece gives him a pretty clean bill of health in that matter - I guess: would you be able to talk about how you investigated?

      Did you do any extra investigations into Annie's allegations? It feels to me like the unstated conclusion is that recovered memory can't be trusted, which is a popular understanding but a very wrong one, put out by the now-defunct and discredited False Memory Syndrome Foundation. It was founded by the parents of the psychologist who coined DARVO, directly in reaction to her accusing them of abuse.

      Dissociation is real (I have a dissociative disorder, and abuse I “recovered” but did not remember for much of my adolescence and early adulthood has been corroborated by third parties) and many CSA survivors have severe memory problems that often don’t come to a head until adulthood. I know you didn’t dismiss her claim, but the way the public tends to think about recovered memories is shaped primarily by that awful organization.

      • ronanfarrow 16 hours ago
        All fair points on trauma and memory.

        As noted in the piece, we spent months talking to Altman's partners and what we found and didn't is as described.

        • taurath 14 hours ago
          Thanks for the response! Cheers - just fully reread the piece and appreciate your reporting.
        • girvo 12 hours ago
          It's super neat to see you here on HN taking questions, kudos :)
      • gowld 8 hours ago
        That's not a fair assessment. "False memory syndrome" and "repressed/recovered memory" are both outside scientific mainstream consensus.
        • taurath 2 hours ago
          Correct, because there truly isn’t a great way to answer with certainty - there was evidence in the 80s of suggestive techniques being used by poorly trained psychologists, and there are many people who remember and then find corroboration.

          There’s a lot more who remember and may not have corroboration beyond themselves and their close friends or healthcare provider. Part of CSA is that usually there is very little a kid can do about evidence, as the power discrepancy is far too great. Often with rich abusers, the exact same process occurs. Perps pick victims who are vulnerable or controllable, and constantly seek power and domination. Nothing to do with the boardrooms or batch of CEO billionaires running the economy right now, certainly.

      • fontain 7 hours ago
        I am very sympathetic to the situation you describe. I certainly think it is possible that Annie is describing something that happened. I think the author did a fair job of representing the allegations, finding the right balance between disclosing that they were unable to corroborate the allegations without dismissing them.

        That said, "recovering" memories as a therapy does not pass any sort of sniff test and it doesn't take a concerted effort to discredit the concept. Human memory is very malleable. Patients with mental health issues (which could predate abuse, or could be caused by abuse) are often in search of answers and that makes them very vulnerable.

        Could a memory be buried deep in our subconscious, forgotten, only to return to the surface later? Sure, we all forget things and then remember them when triggered by something, whether that's a smell or sound or something else entirely. But can we engineer that process, with any degree of reliability? How can we even begin to reliably reverse engineer the triggers?

        I think it is also important to keep in mind that Annie is rich, and the health care available to rich people can be very predatory. There are endless examples of nonsense therapies for all types of health, from ear seeds to treatments for "chronic Lyme".

        Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them. If Annie's memories were triggered in adulthood, sure, that's really no different than remembering something... but "recovered"? That is something else entirely.

        Correct me where I'm wrong, I'd like to learn your perspective, maybe there's a missing piece.

        • taurath 2 hours ago
          > "recovering" memories as a therapy

          Recovered memory therapy was a discredited hypnotherapy that leaned heavily on suggestion and was often associated with fairly coercive interrogations during the 80s CSA panic - https://en.wikipedia.org/wiki/Day-care_sex-abuse_hysteria

          > Memories that return organically due to a trigger are a world apart from "recovered" memories, we shouldn't conflate them.

          Agree, though I think the mechanism can be a bit more towards the idea of a “recovery” of traumatic memory, even if the term as understood carries false connotations.

          The concept you’re missing is dissociation, and dissociative disorders. In the 40s it was called just “hysteria”, and for many cases up to the late 90s an extreme form was called multiple personality disorder, now DID (dissociative identity disorder). https://en.wikipedia.org/wiki/Dissociative_disorder

          Not everyone who goes through traumatic events will respond via dissociation of identity, and indeed not all people are equally capable of developing a dissociative disorder: two people may go through very similar events (say, survive a war as siblings or even twins) and one might dissociate the traumatic experience while the other might not. Dissociation doesn’t work quite like you might imagine from a term like “multiple personalities” - that happens in some extreme cases, but think of identity dissociation as an adaptive response to events or situations that are paradoxical (especially to a child’s mind), extreme, or traumatic, and that can’t be escaped or handled by other coping mechanisms.

          Dissociation is on a sort of spectrum, where at one side you have common experiences like zoning out when on a common commute, and on another you have separated self-parts/alter egos to handle wildly different situations.

          It’s a mechanism I frankly wasn’t aware of, and I’m not sure I would have been able to fully believe or empathize with it, but for me, getting a diagnosis of a dissociative disorder changed my life, and made a thousand things about me that I could never figure out make sense. The “model”, as it was put to me at the time, responded to experiment, and recognizing that I was dealing with pretty constant, heavy dissociation and different self-states with memory deficiencies helped me work through a ton of problems that had been really intractable for me. I’m finally, after decades of ineffective therapy, able to really understand how I work.

          Idk how to talk about it without sounding like I’m trying to sell the idea. But yeah, it was a mind-blowing thing to me. Over the last 20 years especially, a ton of truly respectable research has been done, and the increase in efficacy of treatments for dissociation, and trauma generally, is one of the unsung advancements for humanity in the last decade. I think the number is that around 3-6% of people meet the clinical criteria for a dissociative disorder - OSDD, DID, DPDR, or dissociative amnesia. 5x more people than have schizophrenia, 5x more than have red hair.

          My favorite public clinical resource I point to people is the CTAD Clinic YouTube - https://youtube.com/@thectadclinic?si=5AyR5H8K8Cf2sn3C

          Pretty easy to understand explainers from a clinician in the UK.

          For a more clinical and study approach this one is the currently best put together research IMO: https://www.taylorfrancis.com/books/edit/10.4324/97810030573...

          The TLDR is dissociation is an important mechanism that most people don’t know about but has had a wave of research and study and is much more common than one might expect. The sad part is how often dissociative disorders correlate w abuse.

          • fontain 1 hour ago
            Thank you very much for the details.

            I’m reading more now and I think the missing piece for me is the distinction between “repressed” memories and “recovered” memories.

            I understood repressed memories to be an accepted idea, distinct from “recovered” memories. I am reading that the people mentioned in your original comment rejected the idea of repressed memory altogether, and believed that everything traumatic must be remembered.

            So, to me, reading that someone “recovered” memory reads like they went through a specific type of therapy intended to “find” these repressed memories. Whereas to you, “recovered” memories could be repressed memories that came back to the surface organically — whether at random, triggered, or through a therapy intended to deal with dissociating. Is that right?

      • hello_humans 11 hours ago
        [flagged]
    • jzymbaluk 10 hours ago
      Hi Ronan, thanks for the article and for answering questions.

      My question is, how do you know when an enormous project like this, conducted over an 18-month time span is "done"? I assume you get a lot of leeway from editors and publishers on this matter. How do you make the decision to finally pull the trigger on publishing?

    • cm2012 9 hours ago
      I just spent a while reading the article. I really appreciate you writing it. In my case, it made me like Sam Altman a lot more. But I was only able to conclude this because of all the evidence you took the time to put together. It paints the picture of someone trying to do something very difficult in a rapidly changing environment and under a lot of pressure, but still making the important choices and not shirking them.
      • ronanfarrow 8 hours ago
        Interesting to hear! While this hasn’t been a commonplace reaction, I think if I do my job right it should allow people to read the facts as they will, exactly like this. It’s strenuously designed to be fair and, where appropriate, even generous.
    • philip1209 8 hours ago
      We talk about Sam Altman a lot. At this point he has a Hollywood movie in post-production, a book ("The Optimist"), and a seemingly endless stream of profiles. It feels intellectually lazy to keep researching the same guy when the industry is moving beyond him.

      All evidence today suggests Anthropic is passing OpenAI in relative and absolute growth. So where's the critical reporting? The DOD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.

      • solenoid0937 8 hours ago
        > whether the company that branded itself as the ethical AI lab actually is one

        FWIW I have two(!!) close friends working for Anthropic, one for nearly two years and one for about 4 months.

        Both of them tell me that this is not just marketing, that the company actually is ethical and safety conscious everywhere, and that this was the most surprising part about joining Anthropic for them. They insist the culture is actually genuine which is practically unicorn rarity in corporate America.

        We have worked for FAANG so I know where they're coming from; this got me to drop my cynicism for once and I plan on interviewing with them soon. Hopefully I can answer this question for myself.

        • root_axis 7 hours ago
          Yeah, every engineer in the bay area has a way of framing the business they work for as a benign force for good... Until they find themselves working somewhere else, then suddenly they have a lot to say about the unacceptable things going on there.

          From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other bay area tech startup - more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self serving.

          • fwipsy 3 hours ago
            My model is that Anthropic was founded by OpenAI engineers who self-selected for safety-consciousness. However, it's still subject to the same problem: power corrupts. I think they are better than OpenAI but they are definitely sliding.
          • JumpCrisscross 6 hours ago
            > every engineer in the bay area has a way of framing the business they work for as a benign force for good

            This isn't remotely true in my experience. The senior folks I know at Meta, for example, pretty much concede they're ersatz drug dealers.

          • rapnie 31 minutes ago
            Indeed. The bad behavior is emergent, where most individual intentions are good. Good story, bad outcome.
          • solenoid0937 6 hours ago
            TBH I have worked at multiple FAANG and I don't know anyone other than maybe new grads that actually drank the koolaid.

            Certainly most of us know we are just in it for the money, and the soul-grinding profit machine will continue to grind souls for profit regardless of what we want.

            So that's why it is surprising to me when my (fairly senior) grizzled ex-FAANG friends, that share the same view, start waxing poetic about Anthropic being different and genuine. I think "maybe it is" and decide to interview. IDK, I guess some part of me wants to believe that nice things can exist.

        • Bolwin 5 hours ago
          I find it bizarre that even the public image of Anthropic is seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms with their tech being used for war and slaughter, with only two very, very thin lines: mass surveillance of American citizens and fully automated weaponry with their current models.

          It only showed they were marginally more ethical than OpenAI and XAI which isn't saying much.

          • fwipsy 3 hours ago
            Anthropic has two principles they're willing to stand behind, even when it costs them. That's not a lot, but OpenAI only has one principle: look out for number one.
        • __alexs 1 hour ago
          If you know even the basics of ethics, then such claims are clearly nonsense. There is no stable, context-independent ethical behaviour. This is a great example of the dangers of motivated reasoning.
        • DirkH 2 hours ago
          I have multiple friends at Anthropic. I can second this. One thing I notice about Anthropic culture is that it is unusually kind.

          So much so that I worry they won't be Machiavellian enough to survive. Hope I am wrong.

        • foolswisdom 7 hours ago
          I think cynicism is deserved just from observing Dario's remarks.
        • hypersoar 7 hours ago
          [flagged]
      • giwook 8 hours ago
        There may be a reason why Altman is talked about a lot. This article in particular surfaces real information and new perspectives we've not heard in this level of detail before on some pretty significant topics that will be impacting you, me, and pretty much everyone we know not only today but well into the future.

        You have a point in that Anthropic deserves some coverage too and that there are interesting perspectives that we've not heard of on that front either.

        But just because that's true doesn't mean this article isn't very much relevant and needed.

        Because it is.

        • freely0085 8 hours ago
          The New Yorker has given Anthropic plenty of coverage in issues earlier this year.
      • ronanfarrow 8 hours ago
        For what it’s worth, the story, while focused on OpenAI, is not uncritical of Anthropic. It explores whether there is a wider race to the bottom in terms of safety, and erosion of even some of Anthropic’s commitments.
      • k1m 8 hours ago
        After the US launched its attack on Iran, the ethical AI lab's CEO wrote: "Anthropic has much more in common with the Department of War than we have differences." - https://www.anthropic.com/news/where-stand-department-war
        • mptest 7 hours ago
          "how easy it is, for those of us who play no part in public affairs, to sneer at the compromises required of those who do" - robert harris

          Not making any value judgements, but I can see how one might value their interpretability research higher than what the CEO says at a time when the corrupt, criminal executive branch is muscling in on everything from what's written on currency to journalistic sources. I generally blame fascists before I blame those unable or unwilling to resist them. Though obviously, ideally, we'd all lock arms and, together through friendship, crush authoritarians and fascists.

          • morpheuskafka 2 hours ago
            They are a private company. They have zero obligation to sell anything to any part of the government or military. The only reason they are involved in "public affairs" is because they want to profit from the government. Moreover, long before this DoW controversy, they had plenty of nationalist and anti-China rhetoric in their press releases, more so than the other AI firms.
          • whattheheckheck 6 hours ago
            Seriously blame anyone other than the fucking abuser. These people
      • Nevermark 8 hours ago
        We should stop talking about potential problems or perpetrators, when we have talked about them “enough”?

        That would be irrational.

        We should give air time to other problems?

        I think everyone agrees with that.

        You have managed to distill a surprisingly pure vintage of false dichotomy, from a near Platonic varietal of whataboutism.

      • basisword 8 hours ago
        OP says they’ve been working on this for 18 months. Most of what you’ve said wasn’t the case until much more recently.
      • _HMCB_ 8 hours ago
        [flagged]
      • easterncalculus 8 hours ago
        [flagged]
      • xvector 8 hours ago
        Normies don't know what an "Anthropic" is. They use ChatGPT. Particularly sharp normies might know that ChatGPT is made by OpenAI, and the sharpest might know that Sam Altman is the CEO.

        Now, they may have heard the word "Anthropic" due to recent media coverage. But they don't know what it is and don't remember what it makes. The fact that all businesses use "Anthropic" is about as relevant to them as knowing the overseas shipping company for all the shit they buy off Amazon.

        So articles about OAI will always produce more revenue for the media, because it's related to what normies actually use day to day.

    • sebmellen 10 hours ago
      Ronan Farrow on Hacker News. Now I’ve seen everything.
      • ronanfarrow 8 hours ago
        I’ve really appreciated how substantive and polite the discourse here is, overall!
        • dang 7 hours ago
          I'm a mod here and wanted to let you know 2 things: (1) I've marked your account with a beta feature that displays a colored line to the left of new comments (since you last viewed the page). It might help you keep track of this rather large thread.*

          (2) I'm sorry the post was downranked off the frontpage for a while this afternoon. A software penalty kicks in when the discussion seems overheated ("flamewar detector") but I turned this off as soon as I became aware of it. We make a point of moderating HN less when a story is YC-related (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...) but as this goes against standard internet axioms, people usually assume the opposite.

          (* And yes, any reader who wants this is welcome to email hn@ycombinator.com to ask - I haven't turned it on for everyone because I'm worried it would slow the site down. Also, it's a bit buggy and not only have I not had time to fix it, I've forgotten what the bugs are.)

        • tootie 6 hours ago
          Not a question but just wanted to make sure you saw this:

          https://theonion.com/anyone-else-have-those-weird-dreams-whe...

    • logicallee 5 minutes ago
      Your report is muckraking; it doesn't include anything positive. I was considering subscribing to the New Yorker but won't do so now.

      For anyone else interested, you can see ChatGPT, Claude, Grok and Gemini summarize their article here:

      https://www.youtube.com/live/xQj0Ftl7j88

      There's nothing positive in it. The report isn't worth reading, and anyone who reads it will know less about Sam Altman than they did before they read it.

    • fblp 11 hours ago
      Hi Ronan appreciate you being here. what would help you and others continue to do journalism like this? (including commenting on HN?)
      • ronanfarrow 8 hours ago
        This is a vast and tricky question. The business model has basically fallen out from under journalism, and especially this kind of labor-intensive investigative reporting. The media landscape is increasingly dominated by moneyed individuals and companies essentially buying up the discourse.

        I would really suggest subscribing to and finding ways to amplify independent outlets and journalists, and encouraging others to do so.

        • fblp 8 hours ago
          Got it! Any recommendations on who to subscribe to? Any personal links for you?

          In developer communities often you can support individual developers or groups through a monthly subscription / donation on their github page or similar.

          • mplanchard 7 hours ago
            Well, this piece was in The New Yorker, which is reasonably priced and regularly includes excellent investigative journalism. I get the physical copies, which can be too much to keep up with if you try to read everything, but it’s easy enough if you skim and just read the things that stick out as being of particular interest.
            • ilamont 2 hours ago
              The New Yorker also comes with Apple News+ subscriptions (part of an Apple One plan that many people get for extra iCloud storage) which further includes a number of top-tier and local news orgs such as the Wall Street Journal, LA Times, SF Chronicle, Times of London, etc.

              The Sam Altman piece can be read here: https://apple.news/APTX4OkywRWeJXIL7b8a7zQ

          • t0lo 2 hours ago
            Drop Site News, 404 Media, Boston Review, The Intercept, and Atavist are all very worth supporting.
        • ricksunny 6 hours ago
          Treating quality investigative reporting like the scarce resource that it is: as one of its most well-known practitioners, can you shed any light on why Reuters would delegate resources to investigative reporters to unmask Banksy (in a world where all things Epstein represent an unending source of investigative opportunities in the public interest)?
    • aragonite 9 hours ago
      I had a question about reporting conventions. In the paragraph where Altman is said to have told Murati that his allies were "going all out" to damage her reputation, the claim is attributed to "someone with knowledge of the conversation" but the attribution is tucked inconspicuously into the middle of the sentence (rather than say leading upfront ("According to someone with knowledge of the conversation, Altman...")) and Altman's non-recollection appears only parenthetically.

      As a reader, am I supposed to infer anything about evidentiary weight from these stylistic choices? When a single anonymous source's testimony is presented in a "declarative" narrative style like here (with the attribution in a less prominent position), should we read that as reflecting high confidence on your end (perhaps from additional corroboration not fully spelled out)? And does the fact that Altman’s non-recollection appears in parentheses carry any epistemic signal (e.g. that you assign it less evidentiary weight)? Or is that mostly a matter of (say) prose rhythm?

    • antirealist 1 hour ago
      Hi Ronan. TCatK is a phenomenal book, not only in exposing the wrongdoing of powerful people, but also in presenting the meta-issue of how hard it was to get the word out, and you handled it all with nuance. You're about as close as I have to a personal hero.

      Long time HN lurker, made an account just to say that :)

    • Akuehne 35 minutes ago
      Great article.

      Thank you for fielding questions. And please don't stop, your work is great.

    • euio757 7 hours ago
      Nice biography from Loopt to OpenAI. Why no mention of the Worldcoin cryptocurrency https://x.com/sama/status/1451203161029427208 in this piece? Was there nothing interesting to report in that area?
      • shinryuu 3 hours ago
        It was mentioned, but not by name.
    • tbagman 8 hours ago
      Wonderful work and writing, Ronan -- I'm appreciative of your careful balance between objective fact-finding and synthesis.

      For me, a big worry about AI is in its potential to further ease distorting or fabricating truth, while simultaneously reducing people's "load-bearing" intellectual skills in assessing what is true or trustworthy or good. You must be in the middle of this storm, given your profession and the investigations like this that you pursue.

      Do you see a path through this?

    • f154hfds 6 hours ago
      > in 2014, [Graham] had recruited Altman to be his successor as president.

      > [Graham's] judgment was based not on Altman’s track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable.

      One thing I don't understand is why Paul Graham offered YC to Altman if he knew how slippery he was.

      • sonofhans 5 hours ago
        Perhaps your question answers itself.
    • egonschiele 9 hours ago
      Just wanted to say what an incredible person you are! Catch and Kill and the related reporting was awesome too!
      • ronanfarrow 8 hours ago
        This is so appreciated, thank you! These stories can honestly take a lot out of me so thoughtful reactions mean a lot.
    • cmiles8 20 hours ago
      Great reporting.

      Altman describes his shifting views as genuine good faith evolution of thinking. Do you believe he has a clear North Star behind all this that’s not centered on himself?

      • ronanfarrow 16 hours ago
        The piece is an interrogation of this very question, at great length and with some nuance. I think what it does most usefully is scrutinize an array of different answers to the question.

        My own impression after many hours of conversation is that he is identifying something of a true north star when he frames this around "winning." There are people in the story who talk about him emphasizing a desire for power (as opposed to, say, wealth). I think he probably also believes, to some extent, the story he tells that equates winning, and his gaining power, with a superabundant utopian future for all.

        However, I think critics correctly highlight a tension between his statements about centering humanity writ large and his tilt into relentless accelerationism.

      • i7l 20 hours ago
        (Other people's) money.
    • mplanchard 7 hours ago
      Hi Ronan, absolutely wild to see you here in the belly of the beast.

      I have not read the article yet, because I get the physical magazine and look forward to reading it analog. I therefore only have an inconsequential question.

      I love the New Yorker’s house style and editorial “voice,” and I have always been curious about the editing process. I enjoyed the recent exhibit at the NYPL, which had some marked up drafts with editor feedback and author comments.

      Did you find that your editors made significant changes to the voice of the piece, and/or do you find any aspects of their editing process particularly notable or unusual?

      Can’t wait to read this one, and hope the HN crowd treats you well.

    • Uhhrrr 5 hours ago
      The last couple sentences tie things up really nicely.
    • gib444 4 hours ago
      As someone on a budget, how can I pay for good journalism when it so spread out across various (expensive) outlets?
      • input_sh 2 hours ago
        Paying for 1 is doing more than paying for 0.

        It's not your responsibility to fund for every single one, just find the one you like the most and subscribe to that one.

    • tsunamifury 7 hours ago
      I know why the cantilevered pool statement is there and why you mentioned it.

      I’m sure you don’t know half of the totally fucked up things Sam did to get “revenge” for the slight of a leaking pool.

    • felixgallo 10 hours ago
      This is brilliant work, guys. Did you get any pressure to soften or spike the story?
      • ronanfarrow 8 hours ago
        I won’t get into behind-the-scenes specifics here but I think you can imagine how pressurized this topic was and the amount of heat that tends to generate. I’m used to getting a lot of blowback and it’s never fun. I just hope the work is meticulous and fair enough, and that enough people see the benefits of that, that I get to continue to do it.
        • Balgair 5 hours ago
          Hey, just want to say thanks for the piece and for all the hard work and effort you did to get this out there. I've published a bit here and there, and the actual writing is only ~50% of the work load (for me at least). So thanks for going through all the effort and pain to get it out, really appreciate all the work you do for me and the rest of Joe Public.
    • _alternator_ 10 hours ago
      Do you think the recent conflict between Anthropic and the Department of War, and the apparent bootlicking by OpenAI, have fundamentally altered the public perception of OAI? Are they the baddies now in general public opinion?
    • jharohit 7 hours ago
      what model was used to create the visual at the top of the article?
    • bck102 7 hours ago
      Have you considered doing a piece on Aaron Swartz? Timnit Gebru? Michael O. Church?
    • xnx 18 hours ago
      In depth reporting is great. This is a really tricky topic to cover over the course of 18 months. A year and a half ago OpenAI was ascendant, now it's -at best- stalling and, more likely, trending toward irrelevant.
    • Stevvo 11 hours ago
      Love the visual. Fantastic.
    • artursapek 6 hours ago
      hey I loved that Ricky Gervais joke about you at the globes
      • e40 2 hours ago
        For those that don’t know or remember:

        “Tonight isn’t just about the people in front of the camera. In this room are some of the most important TV and film executives in the world. People from every background. But they all have one thing in common: They’re all terrified of Ronan Farrow.”

    • Lerc 6 hours ago
      From time to time I have been accused of being an apologist for Sam Altman, but I have always tried to assess information based upon what it says instead of whether it matches an existing narrative. You list a number of distortions in your article which show the problem. If you are a good person, bad stories about you may be fake. If you are a bad person, bad stories about you may still be fake.

      My prima facie view on Altman has been that he presents as sincere. In interviews I have never seen him make a statement that I considered to be a deliberate untruth. I also recognise that the claims people make about him go in all directions, and that I am not in a position to evaluate most of those claims. About the only truly agreed-upon aspect has been how persuasive he is.

      I can definitely see the possibility of people feeling like they have been lied to if they experienced a degree of persuasion that they are unaccustomed to. If you agree to something that you feel you otherwise wouldn't have, I can see people concluding that they have been lied to rather than accepting that they had been intellectually beaten.

      In all such cases where an issue is contentious, you should ask yourself, what information would significantly change your views. If nothing could change your view, then it's a matter beyond reason.

      I think you will agree that there is no smoking gun in this article, and that it is just a laying out of the allegations. Evaluating allegations becomes tricky because I think it becomes a character judgement of those making the claims.

      I have not heard a single person in all of this criticise Ilya Sutskever's character. If he were to make a statement to say that this article is an accurate representation of what he has experienced, it would go a long way.

      I think Paul Graham should make a statement. The things he has publicly claimed are at odds with what the article says he has privately claimed. I have no opinion on whether one or the other is true, or whether they can be reconciled, but there seem to be contradictions that need to be addressed.

      While I do not have sources to hand (so I will not assert this as true, just claim it is my memory), I recall Sam Altman himself saying that he did not think he should have control over our future, that the board was supposed to protect against that, but that since the 'blip' it was evident another mechanism is required. I also recall hearing an interview where Helen Toner suggested that they effectively ambushed Altman because, had he had time to respond to the allegations, he could have provided a reasonable explanation. It did not reflect well on her.

      I am a little put off by some of the language used in the article. Things like "Altman conveyed to Mira Murati" followed by "Altman does not recall the exchange". Why use a term such as 'conveyed', which might imply no exchange to recall? If a third party explained what they thought Altman thought, Mira Murati could reasonably feel the information had been conveyed while, at the same time, Altman has no experience of it to recall. Nevertheless it results in an impression of Altman being evasive. If the text contained "Altman told Mira Murati" then no such ambiguity would exist.

      "Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board" Is this still talking about Brockman and Sutskever? I just can't see this as anything other than a claim he took advice from people he trusted. I assume those board members who were alarmed were not the ones he was trusting, because presumably the others didn't need to find out. The people he disagreed with still had votes so any claim of a 'shadow board' with power is nonsense, and if it is a condemnable offence, is the same not true of the alignment of board members who removed him.

      Josh Kushner apparently made a veiled threat to Murati. The claim "Altman claims he was unaware of the call" casts him as evasive by stacking denial upon denial, but without any other indication disclosed in the article, it would have been more surprising if he did know of the call. I also didn't know of the call, because I am not either of those two people.

      The claim of sexual abuse says via Karen Hao "Annie suggested that memories of abuse were recovered during flashbacks in adulthood." To leave it at that without some discussion about the scientific opinion on previously unremembered events being recalled during a flashback seems to be journalistically irresponsible.

      • nickpp 3 minutes ago
        Paul Grahams's latest public statement on the issue:

        https://x.com/paulg/status/2041363640499200353

      • laserlight 1 hour ago
        I have experience in dealing with Sam Altman-like behavior. I hope to explain how their tactics unfold.

        > I can see people concluding that they have been lied to rather than accept that they had been intellectually beaten.

        There are two angles to this: from an individual perspective and from a collective one.

        One's interaction with such a manipulator isn't a single shot. There is no single event at which they are “beaten”. First, one gets persuaded --- you might argue that there's nothing wrong with skillful persuasion. At some point they realize that reality is not in line with their expectations. They bring the point up to the manipulator and ask for a change, this time in more concrete terms. The manipulator agrees to the change, negotiates compromises, and the relationship continues. After some time the manipulated party realizes that things are still not going in the direction they desire. This time they ask for even more concrete terms, without accepting any compromises. The manipulator accepts, yet continues to act against the terms. The manipulated party is now angry and directly confronts the manipulator. The manipulator apologizes, says that none of it was intentional, and asks for another chance. However, at that point the manipulator has run out of “politically correct” “persuasion tactics”, and tells blatant lies to make the other party behave.

        From a collective perspective, even those “politically correct” “persuasion tactics” are discovered to be lies, because the things the manipulator told different parties are in direct opposition to each other, i.e., they cannot all be true.

        > Helen Toner suggested that they effectively ambushed Altman because if he had time to respond to allegations he could have provided a reasonable explanation. It did not reflect well on her.

        I understand how her behavior may raise a flag for the unsuspecting, but it was exactly the right one. Manipulators prey on the benefit of the doubt. If Toner had brought Altman's behavior to the attention of others first, no doubt Altman would have manipulated them successfully.

        It's unfortunate that many people are unaware of these tactics and assume the best of intentions, when such assumptions fuel the manipulation that they would better avoid.

      • clapthewind 6 hours ago
        You make very good points. Signed up to point this out to others.
    • rhlannx 11 hours ago
      I have the feeling that if you write an article in that style, the subject of the story becomes the hero even if you insert a couple of negatives. In the same manner that Michael Corleone becomes the hero of The Godfather.

      I'm not pleased with the headline and the general framing that AI works. The plagiarism and IP theft aspects are entirely omitted. The widespread disillusion with AI is omitted.

      On the positive side, the Kushner and Abu Dhabi involvements (and the threats from Kushner) deserve a wider audience.

      My personal opinion is that "who should control AI" is the wrong question. In the current state, it is an IP laundering device and I wonder why publications fall silent on this. For example, the NYT has abandoned their crown witness Suchir Balaji who literally perished for his convictions (murder or not).

      • ronanfarrow 8 hours ago
        For what it’s worth, I don’t think the piece at all avoids key areas of disillusionment with the technology. Quite the contrary.
    • FloorEgg 12 hours ago
      Hi Ronan,

      I would love to read your piece and pay you and new Yorker for it, but I am not interested in paying a subscription. If I could press a button and pay a reasonable one time license such as $3 or $5 for just this article, or better yet a few cents per paragraph as they load in, I wouldn't hesitate.

      However I'm not going to pay for yet another subscription to access one article I'm interested in.

      I'm sure you can't do anything about this, but I just wanted you to know.

      You deserve to be compensated for great journalism. In this case, unfortunately, I won't read it and you won't earn income from me.

      • cloud_line 11 hours ago
        You could buy a physical copy (and this isn't meant to sound sarcastic).
      • jzymbaluk 10 hours ago
        You can walk down to a bookstore or anywhere that sells magazines and buy a physical copy
      • IrishTechie 12 hours ago
        I’ve often thought about a model like this and would love to see a few news outlets run it as a pilot and see how it stacks up.
        • mikeyouse 10 hours ago
          Many have tried it (as well as the oft-recommended micropayments idea) and it never justifies the added expense and overhead of the customization. Closest is probably the NYTimes’ gift article feature.
          • Dylan16807 10 hours ago
            I really doubt the implementation difficulty is the actual reason. It's not hard to have an extra table of specific article permissions.
      • caycep 12 hours ago
        You could hit up a public library...
        • eichin 11 hours ago
          Looking online it looks like the newsstand price of an issue is around $10 (which I'd assume is heavily ad subsidized, if anyone is still buying print ads?) which is an interesting data point for a pricing model. (Of course, I looked online because I have no idea where I'd find a newsstand around here - the nearest newsstand that show up on google maps has reviews that say "It's just snacks and scratch tickets." and "three newspapers and no magazines" - I may have to stop by just to see what three newspapers they have :-)
      • mattbee 12 hours ago
        Or just switch your browser to Reader Mode and it's free.
      • CookieTonsure 9 hours ago
        [dead]
    • sieabahlpark 10 hours ago
      [dead]
    • wileydragonfly 7 hours ago
      [flagged]
    • stavros 8 hours ago
      There's a very minor typo in the article:

      > “Investors are, like, I need to know you’re gonna stick with this when times get hard,”

      Should be:

      > “Investors are like, I need to know you’re gonna stick with this when times get hard,”

      • JumpCrisscross 6 hours ago
        I'm not seeing a typo. Just a stylistic difference.
        • SwellJoe 5 hours ago
          Pretty sure the correction is wrong, not merely a stylistic choice.
    • loloquwowndueo 20 hours ago
      [flagged]
      • LoganDark 20 hours ago
        Many browsers let you disable autoplay globally.
        • loloquwowndueo 20 hours ago
          Sure, there are a couple of buttons I can press to stop the video. Why do I have to? Find me one person who likes auto playing videos. The page was created with a deliberate annoying choice that I have to go out of my way to override.
          • binarymax 19 hours ago
            Why do you think the author of this piece, to whom you originally replied, has any control over this?
          • LoganDark 20 hours ago
            I'm not talking about pausing the video after it starts playing. I'm talking about a global setting to prevent videos from playing before you manually unpause them. Safari has such a setting, for instance.
            • loloquwowndueo 18 hours ago
              Exactly what “I have to go out of my way to override” covers, from my comment.
    • mannyv 8 hours ago
      [flagged]
    • tstrimple 7 hours ago
      Hard hitting journalism here. Is the person who lied for years to promote himself trustworthy? More news at 11!
    • Uptrenda 9 hours ago
      Damn, just wanted to say reporters are scary... The amount of detail here is huge. You think of hackers as the ones good at doxing... Nah, it's reporters.
    • giwook 10 hours ago
      Any plans to tackle any of the other folks who might be mentioned in the same sentence as Altman, like Dario Amodei?
      • mathisfun123 10 hours ago
        [flagged]
        • yakkomajuri 10 hours ago
          I think the comment was out of legitimate interest rather than weighing one against the other
        • giwook 8 hours ago
          Huh? It's a genuine question. The article is great and the writer did a fantastic job.

          Please try to give people the benefit of the doubt though I know it's hard in today's society.

    • wyldfire 9 hours ago
      Dang, can you substantiate that this is actually Mr. Farrow like he claims?

      Or Mr Farrow can you post some evidence somewhere we can see?

  • rupi 3 hours ago
    Ronan Farrow, the writer of this article, made a comment in this thread that is buried among all the comments: "As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page."

    I saw that before I read the article and it made me read the article in a very different way than I normally do. As I was reading, I found myself thinking, "Why is it worded that way? What else is the writer trying to say, or not say?"

    It made reading this a lot more interactive than I normally associate with passive reading. Great job, Ronan!

  • andrewrn 6 hours ago
    “By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

    You can subtly see residue of this frustration in Dalton and Michael’s videos when Sam Altman comes up. It’s only thinly veiled that Sam was a snake while at YC.

    • mi_lk 2 hours ago
      video link?
  • arionhardison 11 hours ago
    Hi @ronanfarrow — I have only had one interaction with Sam Altman in person, and I was advised to keep it to myself. I know this crowd may not care, but Altman is absolutely terrified of Black people — not in any contextual sense, but in a visceral, instinctive way. For someone who, as you put it, "controls our future," this should matter.

    FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.

    • baq 24 minutes ago
      The longer I live, the more secrets coming out I see, the less surprised I am with every next one.
    • edbaskerville 11 hours ago
      Can you give more details?

      It wouldn't particularly surprise me if Sam Altman were racist, but I'm curious what the specific incident you observed was.

      • arionhardison 10 hours ago
        Yes, but first I want to be very clear on some things.

        1. I could have hidden my identity behind a throwaway. I did not feel that would be appropriate when making this claim.

        2. I am not looking for anything, literally at all. Any follow ups for blogs; anything that would benefit I will not answer.

        3. This is NOT a new account, I am very easy to find; I am 6'1 140lbs

        I was working for a company called NationBuilder and I had the opportunity to go on a work trip. Outside of a talk he had just given, I was waiting for my ride and I looked over like... damn, that's the speaker. I wanted to say hi; he damn near flagged down the police. I apologized and just decided to move on.

        Note: It was in Reno, and no, I don't want to go into details; the others are not hard to find because I happened upon them via blog posts, so I'm sure if someone with the acumen of RF wants to know, he will find them.

        I have heard similar stories from several people in the years since. I AM NOT CALLING THIS PERSON RACIST. I am saying: he is observably scared of black people, and that is not someone I want making decisions about how the world moves forward.

        • pesus 5 hours ago
          Thank you for sharing this. I 100% believe it, and it lines up with my experience with other people who came from similar backgrounds as Sam Altman - i.e. white, rich, privileged, and attending elite universities.

          I will disagree with one part - I do believe it is racism. Most will never admit it publicly, but if they think you're one of them, it often comes out rather quickly, especially when alcohol is involved.

          • bakugo 1 hour ago
            I don't think you're in a position to comment on what is and isn't racism, considering you just made a sweeping negative generalization based on race without recognizing it for what it is.

            Also, I find it interesting how your list of "backgrounds that define bad people" conveniently omits a specific trait that many tech CEOs of questionable morals share, likely because it doesn't align with your agenda.

          • portender 2 hours ago
            It's sad to me that "racism" is such a divisive word to many, and is met with defensiveness rather than introspection and communication. Trying to not be racist takes work, and communication, and is a process, not a state.

            I appreciate OP's sharing as well. Also, racism isn't peddled only by rich white elite university attendees, it reaches into all the corners.

          • LAC-Tech 1 hour ago
            Sam Altman - i.e. white, rich, privileged, and attending elite universities.

            Sam Altman is jewish, not white.

        • Xmd5a 13 minutes ago
          > he is observably scared of black people

          More like I'm black, he got scared when I approached him in the street, thus he must be racist. You're under the spell of your own signifier that you see everywhere like a proud interpretive paranoid.

        • mememememememo 3 hours ago
          An extraordinary claim needs a bit more evidence than one data point where, in his defense, maybe he is just scared of anyone he doesn't know trying to talk to him on the street.
        • arionhardison 10 hours ago
          Note: To all the downvotes; I did this publicly and not anon for a reason, if you will do the same I am more than willing to provide evidence for all of these claims as long as its done publicly and in the open.
          • arionhardison 9 hours ago
            PG said something along the lines of: "There should be no truth that is increasingly unpopular to speak."

            If you don't believe what I shared is true, address that directly. But seeing my post sitting at 1 point and [flagged] after 2 hours is not OK. Just as DJT can't flag away his issues, you shouldn't be able to do so on HN.

            One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings. I really hope that what happened to my post is not the beginning or a continuance of the end for that ethos.

            • latexr 11 minutes ago
              > One of the things I've loved most about HN is that it was real — grounded in observability, empirical evidence, not bias or feelings.

              That has never been the case, because HN is frequented by humans and humans are biased. Someone who claims to be unaffected by feelings is someone you cannot trust, as it means they are blind to their own shortcomings. Being robotic about the world is no way to live—that’s how you get people who are so concerned with nitpicks and “ackshually” that they completely lose sight of what’s important. They become easy to manipulate because they are more concerned with the letter of the law than its spirit or true justice.

              Objectivity and empiricism are positive traits but should be employed selectively. Emotions aren’t a weakness, they are what drives us to change and improve. Understanding your own emotions equips you better to understand the world. But they too can be used to manipulate you. To truly grow, you have to employ your emotional and rational sides together. Focusing on just the rational will get you far but not all the way.

              HN is primarily about curiosity—it’s in the guidelines four times—and you can’t have that without emotion.

            • tastyface 2 hours ago
              I tried to respond to your comment with some personal observations on racist currents in this community, but my comment immediately got flagged. So yeah! This site ain't what it used to be. Best for the good folks to seek community elsewhere, I reckon. I miss the old days as well, but I don't think they're coming back.
              • hnbad 1 hour ago
                If this site ever was anti-racist, that must have been a long time ago. I threw away my old account many years ago only to come back with this one (because it's difficult to completely ignore HN if you work in tech) and the reason I threw that one away was in part the overwhelming reactionary bias in this community.

                The "progressives" were at best silent "don't rock the boat" types more inclined to insist on civility than to challenge reactionary sentiments, while the reactionaries ranged from dog-whistling to outspoken, across the entire range of white supremacism, sexism, homophobia, transphobia, antisemitism, zionism and so on. The only comments that would ever get flagged or downvoted were those that were explicit enough to be seen as "impolite" because they happened to spell out calls for genocide or violence rather than merely gesturing at it with the thinnest veneer of plausible deniability.

                • tastyface 1 hour ago
                  Well, I do remember it being more about the underdogs and a cheeky "fuck the system" attitude without much malice. Maybe I just wasn't tuned into this stuff back then. Now, though, both users and tech leaders can unironically parrot Stormfront rhetoric from 10 years ago (using vaguely cordial language) and no one even bats an eye. The kind of stuff that would have made you unemployable just a few years ago.

                  When I think of HN in the before times, I think of people like Aaron Swartz. Would he have enjoyed his technical discussions peppered with comments on how the West is being "invaded" and "outbred" by third-world hordes? Based on what I know about him -- and please correct me if I'm wrong -- I'm guessing he would have noped out of that kind of community in a flash. Yet nowadays I see this kind of talk here all the time, percolating all the way up to industry leaders like Musk and DHH.

            • sharmi 4 hours ago
              Just came to say, I appreciate your emotionally intelligent and balanced take on your experience, where it would have been very easy to react and let emotions take over (understandably).
            • tastyface 6 hours ago
              [flagged]
          • kombookcha 4 hours ago
            Thank you for sharing this.
        • ahf8Aithaex7Nai 3 hours ago
          Thank you for sharing this experience with us. Don't worry about the downvotes. That's just how it is here sometimes. I don't think it reflects the views of most readers.
        • valianteffort 1 hour ago
          [flagged]
    • elschneider 4 hours ago
      I really hope @ronanfarrow addresses this. Thanks for sharing
  • jablongo 5 hours ago
    For me, the attempted productization of Sora was conclusive proof that 1) OAI was overcapitalized and desperate for revenue 2) safety didn't matter to them much 3) improving the world didn't matter much either.

    At one point you mentioned an interaction with OpenAI staff where you were looking to interview AI Safety researchers. You were rebuffed b/c "existential safety isn't a thing". Does this mean that you could find no evidence of an AI Safety team at OAI after Jan Leike left? If you look at job postings, it does seem like they have significant safety staff...

    • hirako2000 1 hour ago
      Interestingly we are still experiencing the technological momentum inspired and created by what OpenAI used to be. AI for humanity.

      Given the initiative started circa 2017, much of the good remains. It's a hijacking of creative geniuses who got together, which is now turning into cow-milking tech.

  • thrwaway55 6 hours ago
    We need only ask the dead. Aaron Swartz knew what Altman is. The answer to the topic is no.
    • mastazi 4 hours ago
      I'm interested in knowing more about this topic, do you have any resources about the relationship between Swartz and Altman?
      • stingraycharles 4 hours ago
        It’s not difficult to find these, Aaron always said that Sam was not to be trusted.
        • palmotea 2 hours ago
          Apparently Aaron Swartz and Sam Altman were classmates at the original 2005 Y Combinator class. This article has a picture of them literally standing next to each other: https://www.hindustantimes.com/trending/throwback-photo-of-f...

          The OP says this:

          > The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.”

          • t0lo 1 hour ago
            Does the Hindustan Times report facts? 90% of Indian outlets are basically unfactual.
            • hnbad 1 hour ago
              The cited snippet is in TFA. Did you read it? Did you read the Hindustan Times article either?

              Because that one doesn't actually include any relevant statement; it just contains the picture GP was pointing out, and the entire point of referencing that picture was to emphasize that they had had contact, which is already implied by them being in the same YC batch, which I don't think you are challenging.

              Please don't post comments like this one. "90% of Indian outlets are basically unfactual" is a hyperbolic claim: regardless of the truth content of "Indian outlets," the specific number is bogus unless you have factual evidence to back it up, which I doubt, because "basically unfactual" is not well-defined. But even worse, it's completely irrelevant to the discussion at hand, because nothing in GP's comment hinged on the Hindustan Times's accuracy unless you're saying its description of that photo, as one depicting both of them as members of the same YC cohort, is "unfactual," or you're accusing them of having manipulated the image itself. But even then it would be irrelevant, because you seem to take issue with the description of Altman as a sociopath (i.e. the quote), not the fact that they were batch mates, and this quote is explicitly cited as being from TFA this comment thread is about, not the Hindustan Times piece. Comments like that just waste time, cause unrelated hostile arguments and could have been avoided by simply reading either of the articles involved.

              • t0lo 53 minutes ago
                I found a great piece from the halal times that backs up my claim

                https://www.halaltimes.com/indian-media-has-become-a-factory...

                It's fully up to you if you want to generalise before you read based on the publication's name. I won't judge. If we read the Times of India in full every time to give it the benefit of the doubt and counter our biases, the world would be a far less productive place. If a country's media has a reputation for low fact checking, it's usually deserved.

      • input_sh 2 hours ago
        It's mentioned in the submitted article (about half way through), you should read it.
  • stavros 9 hours ago
    I found it very interesting that Altman et al were worried that AI will become supremely intelligent and China will make a supervirus or some AI drones or whatnot, but not a single person was worried about destroying all jobs because we wouldn't need humans any more.

    Or maybe they were not so much "worried" but "hopeful" that they'd amass literally all the wealth in the world.

    • druskacik 53 minutes ago
      Altman is an advocate of Universal Basic Income, as far as I'm aware. That doesn't sound like he's not worried about massive job losses.

      https://www.cbsnews.com/news/sam-altman-universal-basic-inco...

      https://finance.yahoo.com/news/sam-altman-wants-universal-ex...

    • red369 7 hours ago
      I also find that interesting.

      And not intending to defend the motives of anyone involved, but I'm hoping we can not worry about literally all jobs being destroyed, and AI companies amassing all the wealth in the world.

      Don't we need at least some humans working and earning to buy these AI services? Am I not being imaginative enough? Is it possible for the whole economy to consist just of AI selling services to each other?

      I realise that even if AI destroys most jobs, or even just a lot of jobs, and amasses most wealth, or a lot of wealth, it would still be a terrible thing for humans. The word "all" could have just been hyperbole, and it is still a valid point. I just want to know people's thoughts on whether entire replacement is possible.

      • eloisius 1 hour ago
        Why keep human consumers to buy your services when you could just amass all the wealth you desire, and have autonomous systems that can ensure your unassailable physical security? You would sit atop the most stratified dominance hierarchy ever achieved, and it would reduce other humans to mere pets or breeding stock. I don’t think normal humans would desire that kind of power, and I don’t believe LLMs will take us there, but I wouldn’t put it past the perverted billionaire maniac.
      • gpt5 1 hour ago
        Do you need ants buying services from humans for the world economy to function?

        If AI will indeed become superintelligent, we won't matter.

    • RealityVoid 5 hours ago
      I think, fundamentally, the concern is misplaced. The fact that you need to work for wealth is a convention of our constraints. A change in constraints would lead to other means of distribution. It's easy to see why someone who believes more productivity is good would not see making jobs obsolete as a real problem. They would see us adapting to the new conditions in a relatively short while.
      • blargey 3 hours ago
        > The fact you need to work for wealth is a convention of our constraints

        The current constraint is "you need to produce to have things".

        If one company's AI takes all the jobs, and thus does all the producing-to-have-things, the constraint transforms into "you need that company's permission to have things".

        Hence the top-level question.

      • foobiekr 3 hours ago
        The new conditions almost surely being like the old conditions: slavery, sexual exploitation, etc.
      • chii 4 hours ago
        Those who are concerned are implying that any new distribution mechanism is not going to favour them.

        And under the capitalist system, if nothing changes, the "new" distribution system is indeed not going to favour them - at best there would be some sort of UBI, and at worst you would be left to starve in the streets.

        However, i cannot see how one can transition to a new system, and yet have the existing powers in the current system agree and not be disadvantaged.

  • kmfrk 18 hours ago
    Gobsmacking details about Altman's time as Y Combinator president, in case anyone's wondering.

    Fantastic reporting.

    • ronanfarrow 16 hours ago
      As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page.
      • kmfrk 15 hours ago
        One of the decidedly eerier parts of this story as you keep reading are all the gaps between what people are saying about Altman, and what they clearly want to say about Altman but can't.
        • devmor 11 hours ago
          Throughout my life, what colleagues/friends are unwilling to remark plainly on has been the most telling factor of someone’s character to me.
          • dugidugout 10 hours ago
            This can be true I suppose, but equally I have a few friends who practically play characters as if they've resigned themselves to a role in a sitcom. For instance: one of my friends is late to just about everything and treats everyone as if we are on-call. We plainly note this repeatedly, the friend is, I hope, equally frustrated and embarrassed by it, and in spite of this nothing changes. This is obviously a critical element to their broader character.

            Perhaps you mean to distinguish social groups without much intimacy? To which I'm sure we could provide some convincing cases, but this seems like a silly heuristic generally.

            • rincebrain 10 hours ago
              I have been in or next to a number of social circles with such missing stairs, where for various reasons people in the groups have decided to not directly acknowledge certain Facts that are known about some members, because it would involve them confronting their hypocrisy.

              Someone cheating regularly on their partner, flagrant substance use problems, controlling people who ostracize anyone who doesn't agree with their sometimes insane perspectives...

              People will go along with quite a lot to avoid friction, especially as they get older and picking up new social circles becomes higher cost.

              It's possibly the most telling thing, when you see what people say is a hard line versus how they actually respond to it.

            • satvikpendem 6 hours ago
              Maybe they have ADHD because the symptoms fit, if they really do acknowledge the problem yet cannot fix it.
      • xnx 7 hours ago
        > where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long

        For anyone unfamiliar with this process, the New Yorker documentary is well worth the watch: https://www.netflix.com/title/81770824

      • Teever 9 hours ago
        You mention many proxies of Musk who post negative content about Altman.

        In your investigation were you able to determine if Altman has similar proxies?

        How common would you say that this is? Do these kinds of people generally have teams of people who sling mud for them?

        Can you speculate on how that manifests on a site like Hackernews?

      • trvz 1 hour ago
        Calling your own article all those things is a major turn-off.
  • locust101 31 minutes ago
    It’s hard to know what the new information here is. Altman’s history has been reported on exhaustively.

    A few people have left OpenAI over the years - safety abandonment, the non-profit status change, deception, etc. - but there is too much money involved. Here lies the actual rub. A lot of people involved and named in the article are reprehensible: Kushners, Saudis, Emiratis, the PayPal mafia, VC folks with god complexes. But as long as they have the money, we have to dance to their tune.

    We really really need a way for our society to be more equitable and hold these people responsible.

    • willis936 16 minutes ago
      >PayPal mafia, vc folks with god complexes

      HR would like you to tell the difference between the two photos.

  • neonate 12 hours ago
    • nafey 6 hours ago
      I hope ronan farrow doesn't mind his article being shared like this
      • stevenwoo 6 hours ago
        It’s also available via public libraries in the USA via Libby, if your local library system pays for a subscription, so it’s a way to support the magazine indirectly, since your local taxes pay for your library. The downside for a weekly is you have to read it that week; no archive access.
        • rhubarbtree 1 hour ago
          Which edition? I looked at April 6th and can’t see the article.
      • sph 3 hours ago
        Truth > revenue
      • MagicMoonlight 3 hours ago
        I’m not going to pay for another newspaper subscription just to read one article
      • vasco 3 hours ago
        The information is more important than the wants of the writer, always.
    • calebm 5 hours ago
      This is pretty hilarious - when I asked ChatGPT to "summarize this article: https://archive.ph/hOYMn", it said it's about Jesus ("The article traces the development of early Christian Latin hymns, especially focusing on how themes about the Virgin Mary and Christ evolved from the 4th to later centuries..." (https://chatgpt.com/share/69d48476-9bf4-8327-8c19-709865a547...)
      • sph 2 hours ago
        Sharing what an LLM has to say about a thing is like sharing what you dreamt of last night — no one really cares.
      • flux3125 25 minutes ago
        Interesting. If you look at the sources it cited, there are a few links about "Sacred Songs and Solos" (likely from related/side content on the page), my guess is it didn't read the main article and instead anchored on those and hallucinated
  • bkummel 1 hour ago
    Without having read the article, reacting on the headline: no single person should be allowed to control our future. Democracy is a thing in large parts of the world, and we should try very hard to keep that functioning and even improve it.
    • PUSH_AX 1 hour ago
      People are voting with their wallets
      • xboxnolifes 1 hour ago
        Thats not democracy.
        • jstummbillig 12 minutes ago
          It's also not not democracy. It has little to do with a form of government.
  • krackers 10 hours ago
    [1] is also good to read as a follow-up, and compare the personalities

    https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...

    • rwmj 2 hours ago
      I read this a few days ago, excellent article and an absolutely insane story.
    • mplanchard 7 hours ago
      This was a great article, and absolutely savage in some of its characterizations.
      • slopinthebag 4 hours ago
        The fact that this reads as deranged fantasy and yet I can believe is 100% real is insane lol
  • swingboy 11 hours ago
    It's really interesting reading about how these folks view LLMs. Yeah, they're transformative, but I don't know that we're going to be eating ramen in a Neo-Tokyo street bar anytime soon. So much "A.G.I" mentioned in the article.
    • m4rtink 7 hours ago
      I find it interesting how a lot of cyberpunk does not really include AI or does not present it in transformative way. There is a lot of mind uploading, implants, corpo fun and overall technology permeating all aspects of life, but often AI itself does not actually play a big role.
      • Terr_ 6 hours ago
        Counterexamples that come to mind are Neuromancer (AI driving the plot) and Blade Runner (AI antagonists.)

        A compromise thesis might be that in cyberpunk media, AI is never powerful enough or motivated to fundamentally reform the worldwide crapsack economic system. They don't abolish corporations, although they might take them over.

        Of course, if there was a story about an AI taking over the world into a post-scarcity society, it probably wouldn't be filed under "cyberpunk" either...

        • hnbad 12 minutes ago
          Rampant capitalism is kinda genre-defining for Cyberpunk so Cyberpunk without corporations wouldn't really be Cyberpunk. _The Matrix_ only qualifies as Cyberpunk because within the matrix the machines effectively control the capitalist power structures to exert their influence.

          Abundance/scarcity isn't really about availability, it's more about access. You can have a cyberpunk story in a "post-scarcity" setting in the sense of availability (due to sci-fi tech) but you can't have it without unequal access to those resources.

      • keiferski 1 hour ago
        AIs are in plenty of cyberpunk stories, but your comment did make me think that they are often rather stereotypically “alien entity characters” and not a kind of corporate technology / weapon that is controlled by a specific organization.

        Which is a shame, as it seems to me that the overwhelming risk of AI is from the latter scenario, and not as a rogue individual entity.

      • ehnto 6 hours ago
        It is a pretty core part of Cyberpunk the "franchise" though, both tabletop and more recent video game.

        I think as well if you look closer, many cyberpunk worlds imply AI through robots, computers with personality etc.

      • mcat_god 1 hour ago
        I assume it just becomes one of those things as ubiquitous as Wi-Fi
      • helloplanets 3 hours ago
        AI is one of the core parts of cyberpunk, through androids / humanoid robots. Blade Runner is completely built on the protagonist having to interact with rogue artificial intelligence.
      • satvikpendem 6 hours ago
        I find that more realistic, then, because it appears that's the trajectory we are on with regards to AI: as a tool, not a panacea.
      • gilgoomesh 7 hours ago
        I think you can look at Star Trek as a fairly grounded example of where current LLMs could go: the ship's computer is not autonomous in any way but it does accept fairly vague instructions and you can apparently vibe-code the holodeck.
      • rwmj 2 hours ago
        Hyperion has a pretty well-developed view of AGI.
      • Trasmatta 7 hours ago
        Deus Ex is an outlier, AI is a core part of that plot
        • staticman2 6 hours ago
          The first Cyberpunk book, Neuromancer, has a plot which revolves around an A.I. recruiting human agents to further its plans...
    • 0x3f 11 hours ago
      It's because they're really good at the kind of busywork the average white collar job requires. Most people are out there writing documents and making presentations. Only when you use them for actual complexity does the shortfall become clear.
    • satvikpendem 6 hours ago
      Well I'd hope they're transformative, they're using transformers after all. We just need to pay attention to them, that's all they need.
      • kfarr 5 hours ago
        Do they need all our attention?
    • red369 8 hours ago
      I'm going to write a silly comment here: For a moment I thought you wrote "... LLMs. Yeah, they're transformative, but I don't know that they're going to be eating ramen in a Neo-Tokyo street bar anytime soon."

      I liked that mental image a lot! (I try to maintain being uncertain whether Deckard was a replicant)

  • vlovich123 4 hours ago
    > Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.

    Ronan interesting writing as always. I’m curious if the role of the media as a pawn of the rich and powerful to sway perception and build narratives concerns you, especially given your personal experiences with this and the reporting you’ve done. Are there reforms you think reporters and/or news organizations should adopt to make sure access doesn’t become direct or indirect manipulation and how do you fight against that in your own reporting?

  • snakeboy 1 hour ago
    I usually use free archived versions to read mainstream journalism pieces. Seeing this convinced me to subscribe. I've always loved The New Yorker, and am happy to support serious longform journalism (and I know that Ronan is one of the best).

    However, it's a shame that the only way to subscribe to the print version is to pay $260 upfront for the yearly subscription. Meanwhile the digital version is $1/week ($52 upfront) for one year, or even just $10 for one month.

  • ainch 10 hours ago
    Great piece. And a good excuse to read up on the use of the diaeresis in English (e.g. coördination, reëlection) to distinguish repeated vowels - I hadn't seen the New Yorker's usage before.
    • mplanchard 7 hours ago
      They also prefer some less common spellings. For instance, just noticed “vender” instead of “vendor” in an article this morning.
    • goodoldneon 9 hours ago
      It isn’t for all repeated vowels; only for when the two vowels don’t make a single sound. So “chicken coop” wouldn’t have a diaeresis.
      • stavros 9 hours ago
        It would if the chickens formed a business structure that was owned and democratically controlled by its member-owners.
      • OJFord 9 hours ago
        Unless it was a chicken coöp... One of few cases it actually resolves an ambiguity!
  • nerdyadventurer 4 hours ago
    Why would anyone trust him at all? Their tech is used to bomb children; all of these rich folks are immoral, caring only about their selfish gain.
    • jedberg 3 hours ago
      > their tech is used to bomb children

      If you're talking about the school in Iran, that wasn't OpenAI. That was a Palantir system that pre-dates OAI by a few years, and it was due to a bad entry in a spreadsheet that showed the building as military housing. Which it was, a few years ago.

      180 people lost their lives because of bad data in a spreadsheet, but not AI.

      • hnbad 2 minutes ago
        180 children lost their lives because of decisions by people in the US military (and ultimately the US government / the POTUS).

        Let's not fall into the trap of adopting narratives created to waive accountability. The spreadsheet didn't launch a missile, the spreadsheet didn't authorize the strike and the spreadsheet didn't select the target.

        Not to mention that "outdated spreadsheet" is also a hilariously anachronistic excuse for a war crime if you consider what kind of satellite technology the US has publicly acknowledged to have access to, let alone what kind of technology it is likely to have access to.

        The difference between intentional premeditated murder and reckless endangerment resulting in a killing is not guilt and innocence but merely the severity and nature of a crime. Both demonstrate a callous disregard for the sanctity of human life, one just specifically seeks to extinguish it, the other merely accepts death and suffering as an acceptable outcome.

      • steinvakt2 1 hour ago
        Many years ago. Not "a few years ago". Also, you could make the case that 180 people lost their lives because of an evil war, of which the USA and Israel are the aggressors. And we definitely don't talk enough about that part.
      • Hikikomori 2 hours ago
        Palantir was using Anthropic, and its use is being replaced by OpenAI.
        • jedberg 19 minutes ago
          Yes, but not for the system that decided to bomb a school. That was a Palantir in-house system.
  • morleytj 10 hours ago
    Wow, this is an incredibly detailed piece. Really in depth reporting and the kind of detailed investigation we need more of on important topics like this.

    > "Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence."

    This is a very small detail, but an instinctive grimace crosses my face at the thought of these sort of Marvel references and I'm not entirely sure why.

    • ytoawwhra92 10 hours ago
      They're mass media cynically produced to extract maximum profit from lowest common denominator audiences, so the idea that people working in such influential positions find them appealing enough to reference suggests they are members of that lowest common denominator audience.

      The people shaping the future have no taste.

      • eutropia 9 hours ago
        There's a time and a place for everything, and rejecting popular media as "lowest common denominator" is the most uninspired form of cultural elitism.

        Is it cynical to want your <art project> to make a profit? Or for it to make enough profit to subsidize other projects?

        Is it cynical to make something accessible so more people who watch it are able to enjoy it?

        I agree that it's embarrassing and feels crass when movies both try to be broadly appealing and simultaneously fail to be entertaining or well executed ... but many of the marvel movies clearly surpass that bar.

        No one wants to make a bad movie that does poorly with critics and paying customers - but it does happen because making a movie is expensive and complicated and requires a lot of skilled people working together towards the same goal.

        Regarding taste: do you think a michelin star chef swears off cheap food like hotdogs or fish and chips? Doubtful - because those foods have their place and the chef is able to enjoy them for what they are rather than use them as an excuse to display a superiority complex.

        • ytoawwhra92 9 hours ago
          > There's a time and a place for everything

          Yeah, I'm saying professional communication isn't the place for Marvel references, and that those who choose to include references to those movies in their professional communications are revealing something about their media tastes.

          If I'm at a Michelin star restaurant I don't want to be served a ballpark hotdog.

          • klausa 4 hours ago
            > If I'm at a Michelin star restaurant I don't want to be served a ballpark hotdog.

            This is a very funny quip.

            A famous anecdote about a 3* restaurant in NYC: the servers overheard a group of diners mention that they had run out of time to try a "real NYC hot-dog", so the staff ran out to grab one from the corner cart and plated it up nicely; it was the highlight of everyone's experience.

          • ianbutler 8 hours ago
            That they relate to the common person and aren't overly snobby?
            • ytoawwhra92 7 hours ago
              Exactly. They share the cultural sensibilities of the average person on the street, and yet they're making decisions that will shape the world for future generations. I think that's bad. I want those decisions being made by people who have a more extensive cultural education. Snobs, if you want to call them that.
              • ianbutler 6 hours ago
                Interestingly, the smartest people I know have the widest range of media consumption and understanding. To assume that because someone uses a marvel reference they might not have a deeper cultural education is rather...limited thinking.
                • calf 1 hour ago
                  Ferran Adria drew culinary inspiration from a bag of potato chips

                  As someone with a privileged elite educational background, I can guarantee that intellectuals love the highbrow and the lowbrow, the authentic and the kitsch; rather, it is a sign that someone is not acculturated if they hold the stereotypical impression of the intelligentsia, which makes the OC's comment ironic: they are telling on themselves.

              • satvikpendem 6 hours ago
                Of course they're average people, why do you think tech or AI company employees are somehow above or beyond the average person? I'm not sure why you'd willingly say you'd want snobs controlling the world, that is somehow even worse and reeks of aristocracy which is why you see replies rejecting your thoughts, it is simply not a western ideal or one to strive towards.
                • lmm 3 hours ago
                  > why do you think tech or AI company employees are somehow above or beyond the average person?

                  They're supposed to be elite. They went to the best schools, many of them have PhDs, they are getting paid insane amounts of money.

                  • satvikpendem 3 hours ago
                    Lol. I can tell you right now they're not elite.
              • abustamam 7 hours ago
                I'm confused as to what your point is. Employees refer to the incident as "the blip." I got no impression that there was a formal memo that went out to the company or the media at large that officially refers to the incident as the blip, merely that employees refer to it as a blip (likely to each other, not too dissimilar to a meme).

                And while I don't think someone's media tastes ought to preclude them from making important decisions, I also disagree with your point at large. I don't think the world should be shaped by snobs. The world is already being shaped by snobs in other senses of the word, and I don't see any indication that it's any better than the alternative.

        • wolvesechoes 1 hour ago
          There is also an elitism of low expectations. Common people should be helped to rise above the mud produced by the culture industry. Meeting them and staying with them in this mud is the actual elitism.
        • mvdtnz 6 hours ago
          Marvel movies absolutely target the lowest common denominator of film watchers. To deny that is delusional.
      • Noumenon72 9 hours ago
        When things reach a certain level of popularity they constitute "mental real estate". Your audience has heard of Groundhog Day, so there is an opening for a movie with that title to make money -- your film will start out already having name recognition and some understanding of what the movie is about.

        Thus it is a writer's job not to make references they find appealing to reveal their good taste, but to know what references their audience will find appealing and use them to help communicate concepts. If this bothers you it's because they're insulting you by saying you might be part of the audience that watches Marvel, and you had hoped reading the New Yorker would signal that you aren't.

        • ytoawwhra92 9 hours ago
          The writers of this piece didn't make the reference.
          • halter73 7 hours ago
            No, but they chose to include it. Presumably there were a lot of less apt references they chose not to include.
      • red369 7 hours ago
        I agree that these movies are really being cranked out. I hadn't even realised quite the extent of this until I went to look. But I think some of these movies are good enough that it shouldn't be disturbing that people in influential positions find them appealing:

        I know a lot of people are critical of the Rotten Tomatoes score, but I find that when a high enough percentage of reviews are positive, it is likely I will enjoy the movie. Some of the Marvel movies have a very high proportion of positive reviews (admittedly, those reviews could be just positive, not very positive). And for most in this list with a very high score, I think it's deserved.

        https://en.wikipedia.org/wiki/List_of_Marvel_Cinematic_Unive...

        Arguably, one indication of the limitations of the Rotten Tomatoes score is the number of these Marvel movies with high scores :)

        Btw, I'm not trying to convince you that if you watch the movies you'll like them. Just that they may not all be as bad as you think.

      • sph 17 minutes ago
        I disagree with this characterisation. I loathe mass-media blockbusters, but a journalist has to be in touch with public culture in their goal to spread the truth and inform people, not just high-brow elites, but everybody. This is why their work is usually more influential, interesting and engaging than if it had been written by an academic.
      • abustamam 5 hours ago
        I'm an MCU fan. And while I do agree quality has gone down, I think it's hard to ignore the fact that the MCU did something really novel. They made a franchise that spanned 20+ movies and tied it up in a way that was almost universally loved by nerds and normies alike.

        Are there a lot of plot holes and retcons? Yeah. And some bad writing. And the movies that came after have been pretty meh with some exceptions.

        But for someone to say that referring to one of the highest grossing films and franchises of all time, means their decisions should be questioned, is quite the stretch.

  • keepamovin 48 minutes ago
    YC invests in people, not ideas. They have vetted him. They are always right about people. It's probably nothing.
  • just_once 19 hours ago
    Amazing that this article and an actual comment from Ronan Farrow is this far down the list while...Scientists Figured Out How Eels Reproduce (2022) has 6 times the points.
    • dang 12 hours ago
      This thread set off a software penalty called the flamewar detector.* I turned that off as soon as I saw it.

      (* This was predictable from the title, because the question in it was inevitably going to trigger an avalanche of crap replies. Normally we'd change the title to something less baity, and indeed the article is so substantive that it deserves a considerably better one. But I'm not going to change it in this case, since the story has connections to YC - about that see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....)

  • wk_end 11 hours ago
    This anecdote is so absurd it sounds like satire. This is the guy with the $23M mansion?

    > Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied.

    • satvikpendem 3 hours ago
      Well they did indeed have a coup so looks like Altman was right.
    • simoncion 11 hours ago
      He's a liar and untrustworthy. Based on their public statements, that's a big part of why the board fired him.

      Of course, (despite the fact that Altman previously publicly stated that it was very important that the board can fire him) he got himself unfired very quickly.

  • bambax 46 minutes ago
    > Altman does not recall the exchange.

    Altman SAYS he does not recall the exchange. Not the same thing.

  • latentframe 3 hours ago
    It’s less about trusting one person but more about the structure indeed AI is concentrating capital and compute and talent into a few hands so we’ve seen this before with railroads, oil, semiconductors. It brings innovation and also pricing power and political influence.
  • throw4847285 15 hours ago
    A new Ronan Farrow piece is a rare gift (and Marantz is no slouch). Can't wait to read this in the physical magazine when it arrives!
    • geokon 6 hours ago
      I hadn't heard of him before. The wiki article is worth a look

      https://en.wikipedia.org/wiki/Ronan_Farrow

      It's got to be one of the most unusual biographies of a living person that I've ever come across. Nearly every sentence is a head-turner. If you made it up no one would believe you

      • geokon 1 hour ago
        ..that all said, this has no bearing on the article. It's written in a very neutral objective tone and the author doesn't inject himself in to the narrative - just found it interesting. Seems like an interesting guy
  • ambicapter 11 hours ago
    I didn't have the mental energy to read the whole thing but man the final paragraph is some really good writing. Way to tie it all in together.
    • krackers 9 hours ago
      The entire thing is a joy to read; you should really set aside some time to cleanse your palate in this age of LLM prose. I mean, just look at this juxtaposition

      >Altman continued touting OpenAI’s commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals.

      (plus it finally resolves the mystery of "what Ilya saw" that day)

      Also since it wasn't stated clearly

      >“the breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India

      That was Sydney if I understand correctly.

  • HardwareLust 18 hours ago
    Of course he cannot be trusted. Anyone whose motivation is based on greed is by nature untrustworthy.
    • throwway120385 11 hours ago
      Even if your motivation is some utopian vision of the future, you should not be trusted. Utopia is a thought experiment in a philosophy of living taken too far, not something to be reached for earnestly.
      • dns_snek 1 hour ago
        Why is it that criticism of people's insatiable greed for wealth and power often gets dismissed with this thought-terminating cliche about utopias?

        Desire to live in a society that's less greedy, that rewards compassion and punishes sociopathy is completely valid. We should be pursuing that earnestly because survival of our species depends on it. The people in charge are so drunk on wealth and power that they would rather drive our entire species off a cliff than sacrifice even 10% of their effectively bottomless wealth.

        But instead of criticizing our current philosophy that's actively being taken too far and threatens to destroy us, you criticize people who express their frustration with this state of affairs.

    • davebren 6 hours ago
      Not just the greed. The whole "AI is so dangerous that we must be the ones to build it, to save humanity" thing, and then gaslighting yourself and everyone around you into believing that your language model is AGI. This is some weird detached-from-reality cult behavior.
      • kortex 5 hours ago
        Complete hearsay, but I struck up a convo with someone who had spent a few hours drinking around a campfire with him and a few others at Burning Man, prior to GPT-3's popularity. Apparently he was utterly convinced of his pivotal role in shepherding in a new era with AI, to the point where it got really messianic and culty. He didn't recall much else other than being really weirded out by the dude.
        • davebren 5 hours ago
          The AI CEOs and most of their employees are in the same place as that guy. They're just in a more professional context and are careful not to let their delusions of grandeur look too insane.

          I remember watching the fitness function improve while my neural net learned to recognize characters for a project I did in school, and there was something about it that felt powerful. I guess we've always had that with the machines we imbue that have any sort of decision making "intelligence", but mix that with taking psychedelics and you have an interesting cocktail.

    • hellojimbo 10 hours ago
      lol thats like 99% of planet earth, including the animals
  • steve_adams_86 10 hours ago
    > Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I really want?” Among his answers is “Financially what will take me to $1B.”

    I can't imagine having such uninspired thoughts and actually writing them down while in a role of such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.

    • ks2048 8 hours ago
      It's not surprising. I made this comment on HN before, but if you follow him on Twitter, it's pretty remarkable - the CTO of one of the most important technology companies in the world and he has never (that I've seen) posted something with some technical insight, or just anything interesting about technology. It's just boring truisms, cliches, empty statements, etc.
    • chromacity 9 hours ago
      Eh. It doesn't start or stop with people like Altman, Zuckerberg, or Nadella. I think it's a symptom of a broader problem in tech. Half the people on this site made a decision to work at companies that do shady things, and they did that to maximize personal wealth.

      The difference isn't that the average techie doesn't dream of making a billion by any means necessary; it's that most of us don't think we have a shot, so we stick to enabling lesser evils to retire with mere millions in the bank.

      • skybrian 8 hours ago
        I don't think it's all that hard to avoid working on anything shady. It's not as easy to avoid being associated with anything shady due to widespread cynicism and a tendency to treat tech companies with thousands of projects as a monolith.
      • bluefirebrand 8 hours ago
        > The difference isn't that the average techie doesn't dream of making a billion by any means necessary

        I hope that's not true. If it is, we live in a bleak world indeed.

        I can confidently say I've never once dreamed of having billions. I've never wanted billions. Not even in a fanciful manner. What would I do with that money? Buy mansions and megayachts? That's loser stuff

        Most of what I want out of life cannot be bought. The pieces that come with a price tag, like a comfortable home, do not require billions

        I think only sociopaths want billions because they don't understand spending your life seeking things that actually matter, like family and human connection

      • ggregoire 8 hours ago
        > The difference isn't that the average techie doesn't dream of making a billion by any means necessary

        That's actually the difference, most people don't want a billion

        • azan_ 2 hours ago
          Yeah, sure…
    • kevinqi 9 hours ago
      it is disappointing, but is it shocking that people most driven by gaining money/power are the ones the most successful at achieving it?
      • steve_adams_86 9 hours ago
        What sticks out to me most is that humanity consistently fails to weed these creatures out and regulate society. It's a bug in our social software; we seem to like these broken people rather than recognize that they're a liability.
        • sumedh 2 hours ago
          Most people don't care as long as it does not affect them directly.
        • hackable_sand 8 hours ago
          Trust is not a bug

          You need to accept that every generation some people are going to try and fuck things up.

          Then you get to decide to stop or help them

        • basket_horse 8 hours ago
          This isn’t a bug. It’s the driving force of our capitalist society. We are not trying to weed them out; we are trying to encourage them. It’s pretty simple: when they get rich, so do all their investors.
    • dolebirchwood 10 hours ago
      Sociopaths don't have much going for them in life other than winning status games.
      • buzzerbetrayed 10 hours ago
        Sociopath is the next word that people seem to want to entirely destroy the meaning of
        • dolebirchwood 10 hours ago
          [flagged]
          • JumpCrisscross 9 hours ago
            > Struck a nerve?

            No need to be petty. They have a point. We did this with the words racist and fascist. Overinclusion diluted the term and gave cover for the actual baddies to come in. I'm not sure debating who is and isn't a sociopath is as useful as, say, the degree to which Sam is a liar (versus visible).

            • Ucalegon 5 hours ago
              Racism and fascism have been used correctly; it's just that people do not like to have their beliefs associated with negative things, and so, rather than perform self-reflection, they decide the problem must exist elsewhere. I am sure you can come up with outliers that support what you are saying, but across the vast majority of applications, both words are used correctly relative to their definitions.
            • greenchair 8 hours ago
              Speaking of overinclusion, 'wild' is my nominee for 2026 as I'm seeing it all over the place.
              • JumpCrisscross 6 hours ago
                > 'wild' is my nominee for 2026

                I don't know how to define the delineation I'm about to propose. But there is a difference between overinclusivity trashing a morally-loaded, potentially even technical, term, and slang evolving.

            • rexpop 8 hours ago
              I'm sorry, we did what with the word "racist"?
              • JumpCrisscross 7 hours ago
                > we did what with the word "racist"?

                “Overinclusion diluted the term and gave cover for the actual baddies to come in.” The next sentence.

            • nixosbestos 5 hours ago
              I would be curious to hear you expand on that. Walk me through it, maybe a small paragraph explaining what overinclusion happened with the word fascist and which baddies you're vaguely referring to, and connect those dots?
      • kakacik 10 hours ago
        While true - and we can see them literally everywhere there is some money and/or power (even minuscule places like classic banks easily have 1/3 of the staff with clear sociopathic traits, I have to deal with them daily... or the whole of politics) - that's just human nature, or part of it.

        It's up to the rest of society to keep them in check, since classic morals are highly optional and considered a nuisance blocking those games. And here the rest of us fail pretty miserably, despite having on paper the perfect tool - the majority vote.

      • lokar 10 hours ago
        Or, some fraction of otherwise good/normal people who “win” are turned into sociopaths by the power and sycophancy.
    • xorgun 9 hours ago
      [dead]
  • bootload 8 hours ago
    “By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

    This statement rings true.

    JL, as PG has often mentioned, is his instrument for testing the "people" and integrity side of YC / startups. It's not lost on me that Altman and Thiel, both associated with YC, were useful only in the short term, which highlights how regular "character" evaluations are required at higher levels of responsibility.

    • jacquesm 7 hours ago
      I don't think they were useful at all. If anything, they dragged down YC's until-then stellar reputation.
      • argee 6 hours ago
        At least two of YC's early (mid-aughts) "huge" successes come down to PG unilaterally (or with some help from JL) making some kind of "weird" call. AirBnB and Reddit come to mind. Even Stripe can be traced to him since he basically created the Auctomatic team (Patrick Collison's previous YC entry).

        In other words, PG had the "knack" for sometimes encouraging the right weird thing. I'm not sure it's been the same since he handed off the reins, like any other formerly-founder-led company. Nowadays it really gives off the vibe of bean-counting and hype-chasing.

        I don't think it's gotten quite as bad as this [0] article suggests, though.

        [0] https://stanfordreview.org/is-yc-for-cowards/

      • bootload 6 hours ago
        “Today’s news comes at an interesting time. Last week, Business Insider’s Jonathan Marino reported that YC is close to raising several billion dollars for a new fund, with the goal of possibly expanding its scope to later stage funding. It said it’s still in preliminary discussions for this new strategy, but if true, Thiel could definitely play a big role there.”

        My recollection was Thiel was injecting cash, a money deal. [0] There was another less advertised play. An established path for the Thiel “Boy Wonder Fellows”. [1]

        “In addition to founding PayPal and Palantir and being the first investor in Facebook, Peter has been involved with many of the most important technology companies of the last 15 years, both personally and through Founders Fund, and the founders of those companies will generally tell you he has been their best source of strategic advice. He already works with a number of YC companies, and we’re very happy he’ll be working with more.”

        Guess who was involved in the Thiel / YC deal? [2] You are not the only one seeing this as a reputation hit for YC. [3] Even I, disconnected across the other side of the world could see this as an issue.

        [0] https://www.inc.com/business-insider/peter-thiel-is-joining-...

        [1] https://boingboing.net/2016/08/25/peter-thiel-y-combinator-f...

        [2] https://www.ycombinator.com/blog/welcome-peter/

        [3] https://qz.com/810778/y-combinator-has-no-problem-with-partn...

        • jacquesm 5 hours ago
          Having Thiel on the board of YC would probably turn off a lot of potentially successful founders. Or maybe it's a way to select for those with a lack of ethics. Having Musk and Thiel visibly associated is probably good from a monetary perspective, but it sends all kinds of bad signals.
  • mvkel 59 minutes ago
    > Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different.

    Isn't this really what everything is about? A pure research non-profit transitioned to a revenue generating enterprise because it had to, and a lot of people don't like that. Does that make it evil?

    It's romantic to think that the magic of science and research can stand on its own, but even Ilya has admitted more recently that SSI needs to ship something consumer facing.

    Anthropic, the lab that put all of its social capital in the safetyism basket, is having the exact same realization, with Claude Code being a mess of technically reckless vibe coded slop that nevertheless is the cash cow for the company.

    Maybe it's time for everyone to realize that for an innovation this big to bear fruit, it either needs to be state funded or privately funded, the latter requiring revenue and a plausible vision of generating ROI.

  • pharos92 10 hours ago
    We focus these critiques far too much on the face rather than the underlying mechanics. Just like in politics, we critique the personality/politician yet the underlying system architecture evades it.

    Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.

    • chii 4 hours ago
      > underlying threat posed by AI to society, the economy and human freedom persists

      I would deny that AI poses any such threat. There are actors who would use the tool in ways that threaten as you described, but that is a threat from said actor, not AI - unless you're claiming that an AGI would be capable of such independent actions.

      AI is similar in transformative power to how the internet was a transformative power - might even be greater, if it is more commonly available for use through out the world. Whether that transformative power is doing good or bad really depends on the people doing it, not on the tech. I would bet that the future is going to be better because of AI, than to imagine a worse future and act to stunt the tech.

      • wolvesechoes 1 hour ago
        > I would deny that AI poses any such threat. There are actors who would use the tool in ways that threaten as you described, but that is a threat from said actor, not AI

        Of course, it is popular to deny it. People constantly tell themselves "it is people, not tech". They make a valid, yet banal and inconsequential statement. This distinction has no bearing on reality.

        • chii 27 minutes ago
          So you're saying that if people hadn't invented weapons, there would be no violence?

          The claim that AI is itself dangerous has no merit.

    • j2kun 9 hours ago
      Or perhaps, the underlying threat is personified by Altman, in that our country has repeated and widespread institutional failures to hold the wealthy accountable for wrongdoing.

      The threat of AI is, after all, driven by the people who use it.

    • xgulfie 10 hours ago
      It's because we only really know one economic system but we've known many people
  • 6Az4Mj4D 9 hours ago
    I am in my 40s and am going to be made redundant this June. In the future, only people who can afford tools like Claude and OpenAI, and, most importantly, can create more value with them than others can, will be able to survive. Otherwise, the game is more or less over, and I question what's next for my own future while I learn to use Claude out of FOMO. I cannot trust Sam or the others to have any interest in keeping this tech affordable for common people like me.
  • ernsheong 1 hour ago
    I bet Satya Nadella is regretting defending Altman now.
  • ycui1986 9 hours ago
    He won't. If anything, OpenAI is falling behind recently, and the trend won't change easily. It is like Netscape back in the day.
  • RagnarD 26 minutes ago
    No.
  • slg 11 hours ago
    One thing that stands out when reading profiles like this is the number of positive and negative descriptions of the subject that agree. For example, there seems to be little dispute that Altman will happily say something that he knows/believes isn't true, there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.
    • palata 11 hours ago
      > there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.

      Or if the person lying is in a position of power?

  • innocenttop 15 hours ago
    Why is the story so downranked? Do folks at Hacker News have something to do with it?
    • dang 12 hours ago
      It set off the flamewar detector, a.k.a. the overheated-discussion detector. I've turned that off now - this is obviously a serious article.
    • randycupertino 15 hours ago
      HN generally downvotes and/or flags anything that paints ycombinator in a bad light. As Altman was president of yc from 2014 to 2019 that could be why this is getting downvoted.

      Articles critical of Airbnb, one of yc's biggest wins, also get flagged and taken down.

      • dang 12 hours ago
        I'm not sure whether you meant this about moderator interventions or not, but our actual practice is the opposite:

        https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

        As those comments explain, this has been the #1 rule of HN moderation from the beginning. See also https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

        • lovich 11 hours ago
I don’t think the poster you responded to was claiming that moderators directly did this. The flagging system is open to bias from the community at large, and certain types of articles (e.g. anything critical of the current admin) get a bunch of real users organically flagging them.
          • dang 6 hours ago
            Yes, it's hard to tell sometimes but I've at least learned not to automatically take these personally. Well, partly learned.

            I don't think anyone familiar with this community would assume positive bias towards Sam, Airbnb, or even YC anymore - it's quite the contrary, from my perspective, but of course everyone notices different things and has their own view. Ditto for political slants.

            • lovich 6 hours ago
I don't assume positive bias, but I do assume that most negative things that get people irked are removed as a result of the mechanics of the flagging system.

Like, I don't really expect puff pieces for ycombinator or the like to get artificially pushed to the top, but I do expect enough people who feel culturally or financially invested in ycombinator to flag negative things into oblivion, especially as it's completely reasonable that the population of users here has a much higher percentage of those folks than any random population sample.

  • b8 6 hours ago
    Sam failed upwards.
  • dmitrygr 10 hours ago
    The number of "Altman doesn’t remember this" or "Altman denies this" is hilarious
    • jcgrillo 9 hours ago
      Life would be so much easier if I was that forgetful
  • avaer 5 hours ago
    Who would you trust more: Sam Altman, or a council of 1000 representative AI models?
  • netcan 1 hour ago
My tendency is to believe that the individuals don't matter as much when it comes to the biggest risks. I'm not sure if this is a bias or a theory... but I lean toward some sort of "medium is the message" determinism.

    >"He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram."

    Before "don't be evil" was a cliche, I think it was a real guiding principle at Google and they built a world class business that way.

    Facebook's rival ad platform didn't have search queries to target ads at. Aggressive utilization of user data was the only way they could build an Adwords-scale business. As they pushed this norm, Google followed.

    Doomscroll addiction gets a lot of attention because engineers and journalists have children and parents. There are other risks though. Political stability, for example.

By the early 2010s, smartphones were reaching places that had almost no modern media previously, often powered by FB-exclusive data plans. The Arab Spring happened, then ISIS. FB-centric propaganda seemingly played a major role in a major conflict/atrocity in Burma. Coups in Africa were powered by social-media-based propaganda. Worrying political implications in the West. Unhinged uncle syndrome. Etc. Social media risks/implications were more than just "inconvenience."

    At no point did we really see tech companies go into mitigation mode. Even CYA was relatively limited. There was no moment of truth. It was business as usual.

    So... I think OpenAI's initial charter was naive. Science fiction almost. It was never going to withstand commercial reality, politics, competition and suchlike. I think these are greater than the individuals involved.

    That doesn't mean we should ignore, excuse or otherwise tolerate lack of integrity. But, I don't think it is a way of reducing risk.

    Whether the risk is skynet, economic turmoil, politics, psych epidemics or whatever... I don't think the personal integrity of executives is a major factor.

  • trakkstar 2 hours ago
Girls and boys, this is a prime example of a rhetorical question.
  • saeranv 8 hours ago
    Greg Brockman honestly sounds like a psychopath:

    > In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.?

  • ergocoder 11 hours ago
    I wonder if Sam might abandon the ship soon. Other co-founders already did.

The main reason is that he gets all the downsides without the upsides. I know $5B is a lot, but for a $700B company it isn't. If OpenAI were a regular for-profit, he would have been worth >$100B already.

    This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.

    • 0x3f 10 hours ago
      But nobody is going to just gift him the same valuation on the next company. It's not like his execution is OpenAI's moat right now. So where would he be going that's a better deal for him?
      • ergocoder 10 hours ago
        Founding his own company would be one alternative. Full control. No stigma on the non-profit part. Probably get the same paper money as he got now at OpenAI.
      • davebren 6 hours ago
What is the value he adds anyway, being a delusional cult leader whom most people around him characterize as a sociopath? Is it just his ability to lie and create fear-hype?

        It's not like he had anything to do with the technical achievements, except convincing the engineers that they were doing something valuable, but the cat is out of the bag on that.

    • raincole 10 hours ago
      And OpenAI's influence is hugely exaggerated compared to, say, Google.
      • ergocoder 10 hours ago
        Yes, and it seems people hate him more than Google co-founders, for example.

        All the downsides without much upside...

        • georgemcbay 9 hours ago
          > Yes, and it seems people hate him more than Google co-founders, for example.

          Sergey Brin is trying to change that lately, but Altman still has a sizable head start.

    • palata 10 hours ago
      IMHO, nobody is remotely worth $1B, period.

      The fact that some (usually toxic) individuals get there shows that the system is flawed.

      The fact that those individuals feel like they can do anything other than shut up, stay low and silently enjoy the fact that they got waaaay too much money shows that the system is very flawed.

      We shouldn't follow billionaires, we should redistribute their money.

      • simonh 9 hours ago
        If someone founds a company, grows it and owns $1bn of its stock, they don’t have $1bn in cash to distribute. They have a degree of control over the economic activity of that company. Should that control be taken away from them? Who should it be given to?

        I can see an argument when it comes to cashing out, but I’m not clear how that should work without creating really weird incentives. Some sort of special tax?

        • palata 49 minutes ago
          > Some sort of special tax?

          Well yeah. After some amount, you get 100% taxes. So that instead of having billionaires who compete against each other on how rich they are or on the first one to go contaminate the surface of Mars or simply on power, maybe we would end up with people trying to compete on something actually constructive :-). Who knows, maybe even philanthropy!

      • r14c 8 hours ago
        Big agree, at a certain point a company is big enough that their impact has to be managed democratically. I don't have an issue with effective leaders, the problem is that we reward a certain kind of success with transferable credits that don't necessarily align with people's actual talents or skills.

I want skilled institutional investors who have a track record of making smart bets. I don't want a random person who happened to get lucky in business dictating investment policy for substantial parts of the economy. I want accountability for abuses and mismanagement.

        I know China gets a bad rep, but their bird cage market economy seems a lot more stable and predictable than this wild west pyramid scheme stuff we do in the US. Maybe there are advantages for some people in our model, but I really dislike the part where we consistently reward amoral grifters.

        • palata 40 minutes ago
          > Big agree, at a certain point a company is big enough that their impact has to be managed democratically.

100%. First, a company should not be that big. The whole point of antitrust was to avoid that. The US failed at that, for various reasons, and now ends up with huge tech monopolies. And it's difficult to go back because they are so big now.

          BTW I would recommend Cory Doctorow's book about those tech monopolies: "Enshittification: why everything suddenly got worse and what to do about it". He explains extremely well the antitrust policies and the problems that arise when you let your companies get too big. It's full of actual examples of tech we all know. He even has an audiobook, narrated by himself!

      • rafterydj 9 hours ago
Well, redistributing their money is (in some cases disingenuously) exactly how they are able to pitch investors. "Sure, value my company at $10B and my shares make me $2B, but we're alllllll gonna make money when we hit AGI!!!" That kind of thing.
        • palata 38 minutes ago
          Sure, I understand why the people around them who benefit from it also want to do that.

          My point is that it all only benefits a few people. Those people used to call themselves "kings", appointed by god. Now they are tech oligarchs. If the people realised that it was bad to have kings, eventually maybe they will realise that it is bad to have oligarchs?

  • pupppet 19 hours ago
    Ask Condé Nast if he can be trusted..

    https://www.reddit.com/r/AskReddit/s/VWJVBNzc2u

  • CyborgUndefined 2 hours ago
    ugh, i don't understand why only altman scares you? what about google, china, and other players?

    for me, the answer >>> we need to create our own systems. decentralized agent networks and etc.

    if you don't want to depend on one person or one company controlling your AI, build your own infrastructure.

    the concentration of power in one/two persons is the problem.

  • einrealist 10 hours ago
    I don't trust anyone who claims that LLMs today are superhumanly intelligent. All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning', all while subsidising the real costs to capture the market. So much SciFi BS and extrapolation about a technology that is useful if adopted with care.

    This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.

    • shruggedatlas 9 hours ago
Your brain is performing "compute-intensive brute-force attacks on the problem/solution space" as you read this very sentence. You have been training on patterns of English syntax, structure, and semantics since you were a child, and that training is supporting you now with inference (or interpretation). And, for compute efficiency, you probably have evolution to thank.
      • JohnMakin 8 hours ago
people like to say this like they're apples to apples, but this comparison isn't remotely how the brain actually works - and even if it were, the brain does it automatically, without direction, and at an infinitesimal fraction of the power required.

And we're just talking about cognition - it completely ignores automatic processes such as maintaining and regulating the body and its hormones, coordinating and maintaining muscles, and visual/spatial processing taking in massive amounts of data at a very fine scale and informing the body what to do with it - I could go on.

        One of the more annoying things about this conversation is you don’t even need to make this argument to make the point you’re trying to make, but people love doing it anyway. It needlessly reduces how amazing the human brain is to a bunch of catchy sci fi sounding idioms.

        It can be simultaneously true that transformer based language models can be very smart and that the human brain is also very smart. It genuinely confuses me why people need to make it an either/or.

        • igggh 6 hours ago
          Great post
      • stonyrubbish 8 hours ago
Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does. AI is more like a parrot which is trained to give a correct-looking response to any question. The parrot doesn't think, doesn't know what it's doing, etc.; it just does it because it gets a treat every time a "good" answer is prompted. This is why it can't do things like tell whether the parentheses are balanced here ((((())))))  (you can test this); it doesn't have any kind of genuine cognition.
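For what it's worth, the claim is trivially checkable by a machine; a minimal Python sketch (a plain depth counter, nothing LLM-specific) confirms that the example string above is indeed unbalanced:

```python
def is_balanced(s: str) -> bool:
    """Return True if every ')' closes a matching '(' and none are left open."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a ')' arrived with no matching '('
                return False
    return depth == 0

print(is_balanced("((((())))))"))  # the example above: 5 opens, 6 closes -> False
print(is_balanced("((((()))))"))   # the balanced variant: 5 opens, 5 closes -> True
```

The interesting question is not whether this is easy to compute (it is) but whether a model that only predicts tokens reliably performs this kind of exact counting.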
        • fgfarben 2 hours ago
          I love reading posts like this. When you were a child, learning math or grammar, do you not remember bouncing off the walls of incorrect answers, eventually landing on a trajectory down the corridor of the right answer? Or were you always instantly zero-shotting everything?

          In my experience, this is exactly how language models solve hard new problems, and largely how I solve them too. Propose a new idea, see if it works, iterate if not, keep going until it works.

          Of course you can see how to solve a problem that you've seen before, like a visual puzzle about balanced parentheses. We're hyper specialized to visually identify asymmetries. LMs don't have eyes. Your mockery proves nothing.

          • calf 37 minutes ago
            The mistake in these types of arguments is that natural, classical-artificial, and/or neural-net-artificial learning methods all employ some kind of counterexample/counterfactual reasoning, but their underlying methods could well be fundamentally different. Thus these arguments are invalid, until computer science advances enough to explain what the differences and similarities actually are.
        • saxonww 7 hours ago
          > Human cognition is nothing like AI "cognition."

          I've wondered about this. Do we really know enough about what the human brain is doing to make a statement like this? I feel like if we did, we would be able to model it faithfully and OpenAI, etc. would not be doing what they're doing with LLMs.

          What if human cognition turns out to be the biological equivalent of a really well-tuned prediction machine, and LLMs are just a more rudimentary and less-efficient version of this?

          • davebren 6 hours ago
Yes, we do. Humans share the statistical-association ability that LLMs possess, but we also have conscious meaning and understanding. This is a difference in kind, and it means we can generalize beyond the statistical pattern associations we've extracted from data, so we don't require trillions of examples to develop knowledge.

Theoretically a human could sit alone in a dark room, knowing nothing of mathematics, and come up with numbers, arithmetic, algebra, etc...

            They don't need to read every math textbook, paper, and online discussion in existence.

            • AstroBen 6 hours ago
              Our DNA does contain our pre-training, though. It's not true that we're an entirely blank slate.
              • davebren 5 hours ago
Pre-training is not a good term if you are trying to compare it to LLM pre-training. Closer would be the model's architecture and learning algorithms, which have been designed through decades of PhD research, and my point on that is that the differences are still much greater than the similarities.
            • saxonww 5 hours ago
              The point I'm trying to make is that I don't think we know, so we can't say either way.

              In your example, would the human have ever had contact with other humans, or would it be placed in the room as a baby with no further input?

        • chpatrick 8 hours ago
          This is such a boring cliche by now. "thinking" and "knowing what it's doing" are totally vague statements that we barely understand about the human mind but in every comment section about AI people definitively state that LLMs don't do them, whatever they are.
          • davebren 6 hours ago
            This is the epitome of learned helplessness, that you need a neuroscience paper to tell you what thinking and knowledge is when you experience it directly all the time, and can't tell that an LLM doesn't have it. Something is extremely evil about these ideologies that are teaching people that they are NPCs.
          • stonyrubbish 8 hours ago
            They aren't so vague that you would argue the parrot is thinking.
        • sph 2 hours ago
          > Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does.

This might sound callous, but I wonder if people saying this themselves have very limited brains, more akin to stochastic parrots than the average homo sapiens.

          We are very different, and there are some high-profile people that don't even have an internal monologue or self-introspection abilities (one of the other symptoms is having an egg-shaped head)

        • CamperBob2 4 hours ago
          AI is more like a parrot which is trained to give a correct-looking response to any question.

          A parrot that writes better code and English prose than I do?

          I would like to buy your parrot.

      • wil421 8 hours ago
        If you think this way then why not talk to LLMs exclusively. Don’t let the oxytocin cloud your ability to problem solve.
      • slopinthebag 9 hours ago
        I get you're trying to do the whole "humans and LLMs are the same" bit, but it's just plainly false. Please stop.
    • stavros 9 hours ago
      > All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning'

      If they discover the cure to cancer, I don't care how they did it. "I don't trust anyone who claims they're superhumanly intelligent" doesn't follow from "all they do is <how they work>".

      • bjacobel 7 hours ago
        Has generative AI made material progress on curing cancer? Has it produced any breakthroughs, at all?
        • igggh 6 hours ago
          In b4

- it's the worst it'll ever be - big leaps happened the past few months bro

          Etc.

Personally I think LLMs can be very powerful in a narrow band. But the more substance a thing involves, the more a human needs to be involved.

      • stonyrubbish 8 hours ago
        > "I don't trust anyone who claims they're intelligent" doesn't follow from "all they do is <how they work>".

        It kind of does if how they work is nothing like genuine intelligence. You can (rightly) think AI is incredible and amazing and going to bring us amazing new medical technologies, without wrongly thinking its super amazing pattern recognition is the same thing as genuine intelligence. It should be worrying if people begin to believe the stochastic parrot is actually wise.

        • einrealist 7 hours ago
I can slow down the compute by a factor of a thousand. It would not change the result. But it changes the economics. We only call it intelligent because we can do the backpropagation and the inference (and training) fast enough, and with enough memory, for it to appear this way.
        • stavros 8 hours ago
          If LLMs can come up with superhumanly intelligent solutions, then they're superhumanly intelligent, period. Whether they do this by magic or by stochastic whatever doesn't make any difference at all.
          • davebren 7 hours ago
            Like..a calculator?
            • CamperBob2 4 hours ago
              Take a calculator to the International Math Olympiad and let's see how you do.
      • bigyabai 9 hours ago
        That's moonshot logic that reinforces the parent's point. You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment.
        • JumpCrisscross 9 hours ago
          > You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment

          That's not a cure. Like yes, I'd care if the AI says it cures cancer while nuking Chicago. But that isn't what OP said.

        • Noumenon72 9 hours ago
          "The cure for cancer" as a phrase doesn't include those solutions. If the headline was "Pope discovers the cure for cancer" and those were his solutions you would say "No he didn't." OP was referring to AI discovering the cure for cancer that cancer research is working towards.
    • crazylogger 8 hours ago
      If all they do is "just" brute-force problem solving, then they are already bound to take over R&D & other knowledge work and exponentially accelerate progress, i.e. the SciFi "singularity" BS ends up happening all the same. Whether we classify them as true reasoning is just semantics.
    • semiinfinitely 9 hours ago
      calculator is superhumanly intelligent
    • Rover222 8 hours ago
      Yeah and everything is just atoms. If you reduce anything enough it’s not real.
  • 383toast 10 hours ago
    if you have to ask if someone can be trusted, they usually can't
  • eximius 2 hours ago
    Fuck no! Of course he can't be trusted. We know that. Nobody questions that. We know that about most of the "elites" running the show.

    We're just in this shitty pit of despair where people are desperate. It's difficult to campaign for good when you're struggling and capital can jerk people around.

    People pursue good for the sake of good at cost to themselves when times are very good or times are very, very bad.

    Right now times are only merely very bad.

  • sph 1 hour ago
    Excellent article, truly well-researched. As someone close to a pathological liar [1], the idea that one could be at the forefront of the creation of an artificial superintelligence confirms all the existential risks of such a piece of technology and how naïve, if not ignorant, the average starry-eyed tech worker and investor is about this whole endeavour. It's easy to believe there is a lot of idealism and wish for a better world, but underneath the greedy drive for money and power is excellently summarized in Greg Brockman's own thoughts: “So what do I really want? [...] Financially what will take me to $1B.”

    Literally, the only hope for humanity is that large language models prove to be a dead-end in ASI research.

    ---

    1: “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” — I guess now I know of two people with these traits.

  • the_arun 8 hours ago
    The main animated picture reminded me of evil king Ravan from Ramayan with 10 heads. Not sure it is intentionally done that way.
  • BrenBarn 5 hours ago
    Of course not. No one can be trusted to control our future.
  • Rover222 1 hour ago
    I don’t know, but any time I see an interview of Altman and I look at those eyes, I get creeped out.
  • brap 3 hours ago
    He’s a grown ass man tweeting in all lowercase, that’s all I needed to know.

    I could more or less infer the rest from that.

  • tines 8 hours ago
    Two "insure" typos?
    • mplanchard 7 hours ago
      The New Yorker prefers insure to ensure. They have a unique house style. I commented on another thread about alternative spellings like vender instead of vendor, too.
    • Wyverald 7 hours ago
In American English, "insure" can also mean "to make sure," as in "ensure," in addition to meaning "to take out insurance for."
    • o0-0o 7 hours ago
      Dictation likely and not caught by editing.
  • game_the0ry 13 hours ago
For those curious about how sama got to where he is and stayed on top for so long, I recommend the book The Sociopath Next Door by Martha Stout.

I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could come to any conclusion other than that the guy is deeply weird and off-putting.

    Some concepts from the book:

    > Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.

    > Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.

    > The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is a pattern of a liar.

    > Trust your instincts over a person's social role (e.g., doctor, leader, parent)

    Check and check.

    OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.

    • unsupp0rted 12 hours ago
      I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.

      We only say a lot of CEOs are sociopaths because they're in that third category we haven't named, where they're very good at manipulating people, but also can feel conscience, guilt, remorse, etc, perhaps just muted or easier to justify against.

      E.g. if you think you're doing something for the betterment of mankind, it doesn't really matter if you lie to some board members some year during the multi-decade pursuit.

      • xg15 10 hours ago
        That's not a third category, that's just a sociopath as seen by themself.
        • unsupp0rted 9 hours ago
          I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

          Whereas the people in the category I’m describing might feel those things, but prioritize those feelings far below the benefits of achieving what they set out to achieve.

          • game_the0ry 8 hours ago
            > I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

            Yes that is the core trait I highlighted in the 1st bullet.

      • game_the0ry 8 hours ago
        > I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.

        There is -- I call it "corpo sociopath." The corpo sociopath really comes out in the workplace, less so in personal life.

    • jcgrillo 8 hours ago
      I was with you right up until the final paragraph, but this made me do a double take:

      > OpenAI is too important to trust sama with.

      ...wat? They made a chat bot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in the realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chat bot which can do some neat tricks. Remains to be seen whether they'll improve it enough to stay solvent.

      The whole "super serious what-ifs" game is just marketing.

      • davebren 6 hours ago
        Yeah the whole fearmongering is clearly just marketing at this point. Your LLM isn't going to suddenly gain sentience and destroy humanity if it has 10x more parameters or trains on 10x more reddit threads.

        I'm not even sure we're any closer to AGI than we were before LLMs. It's getting more funding and research, but none of the research seems very innovative. And now it's probably much more difficult to get funding for anything that's not a transformer model.

        • arcfour 3 hours ago
          > I'm not even sure we're any closer to AGI than we were before LLMs.

          I mean this is very obviously untrue. It'd be like saying we aren't any closer to space flight after watching a demonstration of the Wright Flyer. Before 2022-2023 AI could barely write coherent paragraphs; now it can one-shot an entire letter or program or blog post (even if it's full of LLM tropes).

          Just because something is overhyped doesn't mean you have to be dismissive of it.

    • gib444 4 hours ago
      It's fairly obvious sociopathy is a prerequisite for top CEO jobs. Some just hide it better than others or have better PR people
  • shevy-java 2 hours ago
    I don't trust him. He already made statements that convinced me I don't want to touch anything he controls. In a way it is similar to Meta and co. For some reason the US corporations behave very suspiciously once past a certain threshold size. With Win11 from Microsoft I always wonder whether there is a not so hidden subagenda in place.
  • almostdeadguy 16 hours ago
    Seems this got buried from the front page very quickly
    • dang 12 hours ago
      It set off the flamewar detector. I've turned that off now.

      I only saw this thread by chance and almost didn't look, because the title made the piece sound like a flamebait blog post. Fortunately I saw newyorker.com beside the title and looked more closely.

    • ronanfarrow 16 hours ago
      There is dwindling space for sincere independent accountability reporting on big tech like this to a) be created, since it's incredibly resource-intensive and so many resources flow from Silicon Valley, and b) actually reach people, since more platforms are now owned or otherwise influenced by interested parties.

      Thank you for looking. Please do spread this kind of reporting in your communities, and subscribe to investigative outlets when you can.

      • walterbell 10 hours ago
        > OpenAI has closed many of its safety-focussed teams

        A paper with "ideas to keep people first" was (coincidentally?) published today:

      • Worker perspectives
      • AI-first entrepreneurs
      • Right to AI
      • Accelerate grid expansion
      • Accelerate scientific discovery and scale the benefits
      • Modernize the tax base
      • Public Wealth Fund
      • Efficiency dividends
      • Adaptive safety nets that work for everyone
      • Portable benefits
      • Pathways into human-centered work
        
        https://openai.com/index/industrial-policy-for-the-intellige...
      • almostdeadguy 13 hours ago
        This was an excellent piece with many new pieces of information in it. Thanks to you and your coauthor for getting it released.
      • big_toast 15 hours ago
        You can see the vote history here[1]. It's always hard to know exactly why something gets buried. I was a little sad to see the story down-ranked when I saw that you were here in the comments.

        But the discussion is generally pretty low quality with these sort of posts. People react without having read the story, or with whatever was on their mind already, or are insubstantive, or simply low effort. I don't think you'll lose k-factor not having a bigger post here.

        Sometimes if you talk to the mods, they'll let you know their perspective. I generally find they're correct that people are much better at contributing/disseminating new knowledge to the world on more technical topics here.

        [1]: https://news.social-protocols.org/stats?id=47659135

        • dang 11 hours ago
          Yes, I was surprised that it was downranked when I saw that too. Then I realized it had set off the flamewar detector and it was a simple matter to turn it off. I'm glad we got to this in time, because sometimes we don't, and this was an important case not to miss.
        • throw4847285 13 hours ago
          But isn't that circular? If the ranking algorithm used by the mods tends to devalue articles like this because they don't trust the user base to comment intelligently, doesn't that alter the culture of this site to make that more true?
          • dang 11 hours ago
            I'm not sure what big_toast meant, but we do trust the user base to comment intelligently (which sometimes works and sometimes not), and we don't devalue articles like this.

            We do tend to devalue titles like this, or more likely change them to something more substantive (preferably using a representative phrase from the article body), but I'm worried that if I did that here we would get howls of protest, since YC is part of the story.

            • throw4847285 9 hours ago
              I'm sure you're sick of comments about moderation, but I will say, this makes me more sympathetic to the position you're in.

              It's an interesting dilemma. Many very respected publications use provocative titles because of the attention economy. And I'm sure you have good data that provocative titles lead to drive-by comments and flame wars.

              But I don't think big_toast was entirely wrong that there is a side effect of sometimes burying articles that are by their nature provocative. And how do you distinguish a flame war over a title from a flame war over content? That's not a leading question. I don't know.

              • dang 7 hours ago
                For us the litmus test isn't the title, it's whether the article itself can support a substantive discussion on HN. If yes, then we'll rewrite the provocative title to something else, as I mentioned. Ironically this often gives the author more of a voice because (1) the headline was often written by somebody else, and (2) we're pretty diligent about searching in the article itself for a representative phrase that can serve as a good title.

                If, on the other hand, the title is provocative and the article does not seem like it can support a substantive discussion on HN, we downweight the submission. There are other reasons why we might do that too—for example, if HN had a recent thread about the same topic.

                How do we tell whether an article can support a substantive discussion on HN? We guess. Moderation is guesswork. We have a lot of experience so our guesses are pretty good, but we still get it wrong sometimes.

                In the current case, the title is baity while the article clearly passes the 'substantive' test, so the standard thing would have been to edit the title. I didn't do that because, when the story intersects with YC or a YC-funded startup, we make a point of moderating less than we normally do.

                I know I'm repeating myself but it's pretty random which readers see which comments, and redundancy defends against message loss!

  • cedws 3 hours ago
    Sounds like a snake pit. None of them can be trusted. If we have to rely on companies to self-appoint a benevolent ‘AI dictator’, we’re fucked.

    The only high profile person in AI I’d consider perhaps worthy of trust is Demis Hassabis.

  • panzi 9 hours ago
    No. Next question.
  • lenerdenator 12 hours ago
    If you are asking if a single human can be trusted with such a responsibility, the answer is, by default, no.
  • pdonis 10 hours ago
    Does the article ever actually answer the title question?
    • mohamedkoubaa 10 hours ago
      The answer is no, he can't be trusted
      • pdonis 9 hours ago
        Oh, I agree that's the correct answer. I just don't see the article actually ending up with that answer. I see it waffling. Basically, the article ends up saying that, well, we told you about all this dodgy stuff, but what he's doing is working.
        • Wyverald 7 hours ago
          God forbid an article presents all the evidence from all parties and asks you to reach a conclusion by yourself...

          Sorry for the snark. But I genuinely think the way they did this was perfect.

          • pdonis 6 hours ago
            > I genuinely think the way they did this was perfect.

            Evidently we disagree. I responded about that to another commenter downthread.

        • mohamedkoubaa 8 hours ago
          Even trusting him to increase shareholder value is questionable
    • kubik369 9 hours ago
      I think you are misunderstanding the point of journalism. It can be debated whether the title should be such a question. Nevertheless, the article should just present information, ideally in a balanced way, without the author's bias, so that you can decide for yourself. You can see the attempts at balance in the article, where an allegation or statement is made about Altman, followed by parentheses saying that Altman recalls the exchange differently or does not remember.
      • pdonis 8 hours ago
        > the article should just present information, ideally in a balanced way, without author's bias, so that you can decide for yourself.

        I get that this is the claimed ideal of journalism, at least for straight reporting. The problem is that it's impossible.

        There isn't time or space to present all the information; the journalist has to filter. And filtering is never unbiased. Even the attempt to be "balanced" is a bias--see next item.

        "Balanced" always seems to mean "give equal time and space to each side". But what if the two sides really are unbalanced? What if there's a huge pile of information pointing one way, and a few items that might point the other way if you believe them--and then the journalist insists on only showing you a few items from the first pile, so that the presentation is "balanced"? You never actually get a real picture of the facts.

        There's a story that I first encountered in one of Douglas Hofstadter's books, about two kids fighting over a piece of cake: Kid A wants all of it for himself, Kid B wants to split it equally. An adult comes along and says, "Why don't you compromise? Kid A gets three-quarters and Kid B gets one-quarter." To me, the author of this article comes off like that adult.

        In any case, all that assumes that this article is supposed to be just straight reporting, no opinion. For which, see the next item.

        > It can be debated whether the title should be such a question.

        Yes, it certainly can. If this article is just supposed to be straight reporting--no editorializing--then that title is definitely out of place. That title is an editorial--and the article either needs to own that and state the conclusion it's trying to argue for, or it shouldn't have had that title in the first place.

        • kubik369 1 hour ago
          > "Balanced" always seems to mean "give equal time and space to each side".

          I agree with you that this seems to be the idea people have when "balanced" is mentioned. I don't think it is correct. You can easily have a balanced article that has lots of evidence pointing one way or the other. I think this article is like that: a boatload of pointers towards Altman being a sly person, with reporters asking him about those exchanges and him basically shrugging each time.

          The journalists' credibility is doing quite a bit of lifting here, as we have to trust that they put in the effort. One such example is the molestation accusations, which the reporters say they looked into heavily without finding any corroborating evidence.

          > You never actually get a real picture of the facts.

          Yes, it is a fundamental impossibility in lots of cases. That's why we trust the reporters that they did as good a job as they could to present all pertinent information.

          > That title is an editorial ...

          I do not perceive it to be editorialised. It states an arguably real possibility that Altman may, or does, have lots of real power. I am guessing that you believe the "can he be trusted" is an editorialisation that points towards him being untrustworthy. If that is the case, I think that would be your own bias, knowing that he is probably not trustworthy. I see it just as an objective question.

          Imagine a different situation: there are local elections in your small town. There is a new mayoral candidate, and during the next term there will be some money given to residents for renovations and such, but not enough for everyone. You don't know this candidate. A local reporter, whom you trust, writes an article titled "New mayor candidate favoured in polls - will he be fair with the renovation money?". It is a piece trying to shed light on who this candidate is as a person, what his life was like before he moved to your town, etc., so that voters like you can decide whether to give him your vote. It is not editorialised, as it does not point either way.

  • tw04 8 hours ago
    I don't even need to read the article to know that he unequivocally can't be trusted. Every action he's taken to this point has shown he will say literally anything to get what he wants.
  • brandonpollack2 9 hours ago
    I haven't read it yet. The answer is no.
  • Arubis 9 hours ago
    This is unfair to the original article, which is well-researched and worth a read. But the answer to this question is _always_ no. Nobody should have as much power as the oligarch class currently does, even if that power is inscrutable.
  • hirako2000 1 hour ago
    tautology
  • KellyCriterion 12 hours ago
    Nah, it will be Dario instead of Sam, I'd say? :-))
  • cm2012 10 hours ago
    I don't see anything bad about Altman in this article that can't be explained by the chaos of growing a billion-dollar company in a few years.
  • jesterson 18 hours ago
    Watch Altman's reaction in Tucker Carlson interview to the question about (alleged) murder of OpenAI researcher Suchir Balaji.

    The overall response and particularly the body language speaks a lot.

  • jader201 11 hours ago
    Am I the only one who feels like Claude is clearly winning at code generation, and Gemini at general LLM use?

    I just don’t feel like OpenAI has a legitimate shot at winning any of the AI battles.

    Therefore, I feel like “Sam Altman may control our future” is quite a stretch.

    • guelo 11 hours ago
      Well I just canceled my Claude Pro subscription because of the mysterious limits that I don't experience with codex, even after paying for "extra usage". If Anthropic can't figure out their capacity problems they are in trouble.
      • chrisjj 10 hours ago
        I doubt Anthropic see this as their capacity problem. They like "extra usage", and for users who don't, well, it's their capacity problem.
    • dominotw 11 hours ago
      how is gemini winning in general llm? what is general llm?
      • SwellJoe 11 hours ago
        General LLM is what Apple is paying Google for.
        • tartoran 8 hours ago
          I noticed that Apple's speech-to-text has gotten pretty good lately. Is that because they’re paying Google? Not sure I use other AI features from Apple, as I have Siri turned off.
          • laserlight 5 hours ago
            > Is that because they’re paying Google?

            No, the Google deal hasn't shipped yet.

    • gambiting 10 hours ago
      >>and Gemini in general LLM?

      You might be. Or at least I feel like Gemini is actually dumber than a house of bricks - I have multiple examples, just from last week, where following its advice would have led to damage to equipment and could have hurt someone. That's just from trying to work on an electronics project and asking Gemini for advice based on pictures and schematics - it just confidently states stuff that is 100000% bullshit, and I'm so glad that I have at least a basic understanding of how this stuff works, or I would have easily hurt myself.

      It's somewhat decent at putting together meal plans for me every week, but it just doesn't follow instructions and keeps repeating itself. It hardly feels worth any money right now - like it's some kind of giant joke that all these companies are playing on us, spending billions on these talking boxes that don't seem that intelligent.

      I also use claude at work, and for C++ programming it behaves like someone who read a C++ book once and knows all the keywords, but has never actually written anything in C++ - the code it produces is barely usable, and only in very very small portions.

      Edit: I just remembered another one that made me incredibly angry. I've been reading Neuromancer on and off, and I got back into it, but to remind myself of the plot I asked Gemini to summarise it only up to chapter 14, and I specifically included the instruction that it should double-check it's not spoiling anything from the rest of the book. Lo and behold, it just printed out a summary of the ending and how the characters' actions up to chapter 14 relate to it. And that was in the "Pro" setting too. Absolute travesty. If a real-life person did that I'd stop being friends with them, but somehow I'm paying money for this. Maybe I'm the clown here.

      • staticman2 5 hours ago
        I'm curious: did you give Gemini the entire text of Neuromancer or did you expect it to use search results for chapters 1 to 14?

        I would have just fed it the text of chapters 1 to 14 from a non drm copy.

        • gambiting 2 hours ago
          I just asked like I said, give me plot summary until chapter 14, don't spoil the rest of the book. And of course when I told it what it just did it was like oh I'm sorry, here's a summary without the spoilers for the ending. So clearly it could do it without additional context.
          • calf 34 minutes ago
            I wouldn't expect any LLM to be able to respect such a request. Do they even have direct access to published works to use as reference material?

            Also, last time I played 20 questions with ChatGPT, it needed 97 turns and tons of my active hinting to get the answer.

            • gambiting 30 minutes ago
              >>Do they even have direct access to published works to use as reference material?

              I mean, clearly, given that it did answer my question eventually. Also, wasn't it a whole thing that these models got trained on entire book libraries (without necessarily paying for that)?

              >>I wouldn't expect any LLM to be able to respect such a request

              Why though? They seem to know everything about everything, so why not this specifically? You can ask one for the plot of pretty much any book/film/game made in the last 100 years and it will tell you. Maybe asking about specific chapters was too much, but Neuromancer exists in free copies all over the internet and has been discussed to death - if it were a book that came out last year then OK, fair enough, but LLMs had 40 years of discussions about Neuromancer to train on.

              But besides, regardless of everything else - if I say "don't spoil the rest of the book" and your response includes "in the last chapter character X dies", then you've just failed at basic comprehension. Whether an LLM has any knowledge of the book or not, and whether that is even true or not, that should be an unacceptable outcome.

  • AbuAssar 2 hours ago
    no
  • jerrygoyal 1 hour ago
    could someone please give a tldr? this was way too long
  • lizhang 6 hours ago
    i think im shadowbanned :(
    • dang 5 hours ago
      Fixed now.
  • primer42 11 hours ago
    "Any headline that ends in a question mark can be answered by the word no."

    https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...

  • slibhb 7 hours ago
    It is disconcerting how Altman has used "AI safety" as a marketing tool. The more people imagine the universe turned into paperclips, the more they invest. Obviously Altman doesn't care about safety (I don't either; I'm not an AI-doomer). But he truly does come across as someone incapable of telling the truth. Are you even a liar if honesty is not in the set of possible outcomes?

    Still, there's something oddly reassuring here: if you believe "AI safety" is essentially a buzzword (as I do), then this whole affair comes down to people squabbling over money and power. There really is nothing new under the sun.

    • sobellian 5 hours ago
      "This thing might destroy humanity - we need to build it ASAP" does not really make sense. But it enthrall[s/ed] many smart researchers who would normally demand specific, testable claims and logical responses to those claims.

      We have drastically escalated what claims are necessary to motivate startup employees. It used to be that you could merely dangle an interesting problem in front of a researcher. Then you could earn millions, then billions. TAMs in the trillions. AGI will destroy humanity unless you, personally, step in. Elon is talking about Kardashev III civilizations. The universe cannot bear the hype being loaded upon it.

    • brap 3 hours ago
      I agree with you completely, but the way I see it Anthropic are x100 worse when it comes to amplifying this doomer bs for marketing. It’s their whole shtick.
  • simoncion 11 hours ago
    Can Sam "The board can fire me, I think that's important." Altman be trusted?

    If for no other reason, given what happened when the board fired him... no. I'd say not.

  • jrflowers 7 hours ago
    I hope somebody just publishes The Ilya Memos. Sounds like a fun read
  • mayhemducks 8 hours ago
    I would really appreciate it if someone in the know could explain to me how a Markov chain with some backpropagation can surpass human cognition. Because right now I call BS.
  • o0-0o 8 hours ago
    Hey, Ronan. Did the IPO come up at all in the research or interviews for this article? A yes or no will suffice, and color it if you want. ~_^
  • zoklet-enjoyer 9 hours ago
    I believe Annie Altman.
    • zoklet-enjoyer 7 hours ago
      Annie Altman is more credible than a serial scammer
    • s5300 9 hours ago
      [dead]
  • thewileyone 7 hours ago
    [flagged]
  • lnenad 20 hours ago
    This whole situation goes to show that yesterday's conspiracy theorists are today's realists. What's happening to the USA's leadership, to the country itself, and to its top companies is really scary for the rest of us. If this trend continues, we're all definitely gonna end up in a kleptocracy.
  • firemelt 1 hour ago
    obviously not
  • imagetic 5 hours ago
    No.
  • davidmurdoch 7 hours ago
    "Good luck, have fun, don't die."
  • wileydragonfly 7 hours ago
    No
  • therobots927 20 hours ago
    Excellent work. I’ll have to wait until we get the print version delivered to finish, as I’m not signed into the New Yorker on my phone.

    I’ve always been a huge fan of Ronan Farrow’s journalism and willingness to speak truth to power. I think he’s pulling at exactly the right thread here, and it’s very important to counteract Altman’s reputation laundering given that we run a very real risk of him weaseling his way into the taxpayer’s wallet under the current administration.

  • y1n0 8 hours ago
    Betteridge's law of headlines: no
  • ProAm 9 hours ago
    Nope, never trust this man. His history proves why you cannot. Pure greed.
  • GlibMonkeyDeath 17 hours ago
    Disclaimer: I have no association with any AI company and have never met Altman or any of the other top AI scientists.

    The real question is: can anyone be trusted if the fever dreams of super-intelligence come true? Go ahead and replace Sam Altman with someone else - will it make a difference? Any other CEO is going to be under the same overwhelming pressure to make a profit somehow. I think the OpenAI story is messier because it was founded for supposedly altruistic reasons, and then changed.

    Methinks many of Altman's detractors protesteth too much. He's doing his job as it is defined (make OpenAI profitable). Nothing of substance in this article seemed to make him exceptionally "sociopathic" compared to any other tech CEO. It goes with the territory.

    What depressed me most is that trillions of dollars are being raised for building what will undoubtedly be used as a weapon. My guess is the ROI on that money is going to be extremely bad for the most part (AI will make some people insanely rich, but it is hard to see how the big investors will get a return.) Could you imagine if the world shared the same vision for energy infrastructure (so we could also stop fighting wars over control of fossil fuels and spewing CO2?) A man can dream...

    • tim333 14 hours ago
      People do vary even if none are perfect. Demis Hassabis has a pretty good reputation amongst the AI leaders. Altman seems unusually shifty.
    • laserlight 5 hours ago
      > He's doing his job as it is defined (make OpenAI profitable.)

      What? OpenAI was a non-profit until Sam made it for-profit.

  • Aboutplants 20 hours ago
    Watching Sam Altman slowly come to the realization that he is in fact not as smart as others in this space has been fascinating. He used to speak with enthusiasm and confidence and now he’s like a scared little boy who got in way too deep.

    The last person this happened to was Sam Bankman-Fried, as investors and regular folk finally realized he was full of complete shit and could only talk the game for so long before the truth emerged.

    • the_doctah 18 hours ago
      And they both peddle the same altruism smokescreen. Sociopath leader playbook.
    • therobots927 20 hours ago
      [flagged]
      • jjtheblunt 12 hours ago
        which of the two are you referring to as possibly angling for a pardon?
        • Findecanor 11 hours ago
          Bankman-Fried has already done it.
      • throwawayq3423 12 hours ago
        I have a feeling he might be angling for a pardon if he ends up bringing the whole global economy down.
  • andrewstuart 5 hours ago
    Meh. I’m no particular fan of Altman but there’s nothing in this article particularly surprising or terrible.

    The whole AI safety thing has always seemed extreme to me and has turned out to be a storm in a teacup. All those prominent people who used to tell us how AI will end humanity seem to have stopped talking about it.

    I get the sense that Altman is not a particularly likeable person, but Bill Gates and Steve Jobs both seem to have scored a 10/10 on the “is this guy a jerk” rating; it’s common for tech CEOs.

    So, the article and headline are dramatic but not much really there.

    I think all the AI safety obsessed people turn out to have been the ones off course.

  • guzfip 15 hours ago
    > Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”

    lol do you think these guys have ever been hit? Let alone in the face. They’d probably be less eager to mouth off if they had been.

  • Cheyana 21 hours ago
    Harvey Dent…
    • the_doctah 18 hours ago
      The brighter the picture, the darker the negative
  • nickphx 10 hours ago
    speak for yourself, he doesn't control my future.
    • vntok 9 hours ago
      Please don't leave us hanging; what makes you immune?
  • thm 22 hours ago
    Hubris.
  • jojobas 10 hours ago
    The guy is called out as a sociopath by a multitude of Silicon Valley CEOs, of all people - sure, we can trust him with our future.
  • smcg 8 hours ago
    Rule of Headlines says "no"
  • seba_dos1 20 hours ago
    Looks like Betteridge's law of headlines applies here too.
  • josefritzishere 19 hours ago
    Betteridge's law of headlines is an adage that states: "Any headline that ends in a question mark can be answered by the word 'no'."
  • ambicapter 11 hours ago
    > The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”

    These sociopaths are so good at giving away nothing. He managed to engender sympathy instead of saying "I'm not gonna talk about anything that happened then".

    Also, it's very weird how many of these people are so deeply linked that they’ll drop everything they’re doing just to get this guy back in power. Terrifying cabal.

  • sumeno 20 hours ago
    Betteridge strikes again
  • selimthegrim 6 hours ago
    Quite frankly, if he went and scrubbed (or had scrubbed) a Facebook thread I got in an argument with him on in 2018 (about the last time someone did an article about him) I can only imagine how obsessive he is about controlling his past and info about it.
  • drivingmenuts 20 hours ago
    Short answer: No. Long answer: Hell, no.
  • tylerchilds 10 hours ago
    [dead]
  • ihsw 10 hours ago
    [dead]
  • surcap526 15 hours ago
    [dead]
  • HarHarVeryFunny 12 hours ago
  • huflungdung 20 hours ago
    [dead]
  • giwook 10 hours ago
    tl;dr

    No, he cannot.

  • covercash 20 hours ago
    [flagged]
    • runevault 11 hours ago
      It is, at best, incredibly hard to accumulate that much wealth without doing shady things - Microsoft's monopolistic practices in the 90s, for example. The only person I can think of who ever cracked a billion without their money coming through dirty means was, funnily enough, J.K. Rowling, who has her own set of issues separate from the value she got out of Harry Potter.
      • balls187 11 hours ago
        John Lithgow had a take I agreed with: her opinions were heavily misconstrued, though she chose to double down at her own peril.
    • i7l 20 hours ago
      I feel the "always have been" meme might be a suitable insert here.
    • aleph_minus_one 19 hours ago
      > Why are all billionaires (especially tech) such villains?

      Not all billionaires are villains. But it has long been known in organizational psychology that dark triad [1] traits are very "helpful" if one wants to climb career ladders fast.

      [1] https://en.wikipedia.org/wiki/Dark_triad

    • seba_dos1 20 hours ago
      I'm not 100% sure if it's strictly necessary to be a villain in order to become and remain a billionaire, but it seems like it could be and even if it's not it surely helps.
    • burnt-resistor 19 hours ago
      Money often changes people's attitudes in a fashion similar to chronic substance abuse. Plus, there's an insular and detached bubble effect that grows around them.

      Also, there are the psychopathic and narcissistic tendencies of greedier people, and the false "virtue" that "greed is good", which is contrary to the values espoused by Adam Smith.

      We need standard income tax brackets of 90% after $20M/y and 99% after $100M/y.

  • romeroej 12 hours ago
    Can anybody tho?
    • morleytj 11 hours ago
      Yeah, some people can more than others.
  • neya 20 hours ago
    [flagged]
  • FpUser 11 hours ago
    >"Sam Altman may control our future"

    TL;DR, but just the heading is already ugly. No single person, no matter how nice they are, should be able to control our future. Power corrupts - what fucking trust? We are supposed to be a democratic society (well, looking at what is going on, that is becoming laughable).

  • asK1ajsh 11 hours ago
    The New Yorker is owned by Conde Nast, as is Reddit. Conde Nast has a deal with OpenAI:

    https://www.reuters.com/technology/openai-signs-deal-with-co...

    This is a damage control piece, and you see that the most stinging comments here get downvoted.

    • cake_robot 9 hours ago
      What might feel like "damage control" is more likely to be the outcome of the even-handedness you get with serious, rigorous reporting. Something the New Yorker is known for.
  • gchokov 20 hours ago
    He is cooked. Only a matter of time before the whole thing blows up. Once a scammer, always a scammer.
  • ahartmetz 20 hours ago
    Well, no, obviously not. Not one bit.
  • aduty 10 hours ago
    LOL, no.
  • nielsbot 11 hours ago
    No one person controls our future. Stop there.
    • _moof 11 hours ago
      Some people have far, far more power over our lives than others. More than they deserve, frankly.
    • mikkupikku 11 hours ago
      Yeah, but one person can fuck a lot of shit up.
  • killbot5000 11 hours ago
    No. Why is this a question?
  • LetsGetTechnicl 20 hours ago
    No
    • gonzo41 20 hours ago
      just like Zuck.
  • catigula 20 hours ago
    1. No.

    2. You cannot "control" superintelligent AI.

  • ekjhgkejhgk 20 hours ago
    No.
  • aksss 12 hours ago
    "could", "may", "might" - these words do so much heavy lifting in "journalism". Almost always it's an invitation to worry and be miserable.
  • drob518 7 hours ago
    [flagged]
  • bijowo1676 11 hours ago
    This article is just another typical New Yorker fluff piece that tries to look deep but misses the actual point.

    The biggest flaw is that it spends way too much time on high-school level drama and "he-said-she-said" gossip about Sam Altman’s personal life instead of focusing on the actual technical and corporate capture of OpenAI.

    The author treats the "nonprofit mission" like some holy quest that was "betrayed," when anyone with a brain in tech saw the Microsoft deal as the moment the original vision died. Instead of a hard-hitting look at how compute monopolies are actually forming (MSFT, AMZN, NVDA, and circular debt deals inflating the AI bubble that could crash the economy), we get 5,000 words of hand-wringing over whether Sam is a "nice guy" or a "liar."

    Who cares???????

    The board failed because they had no real leverage against billions of dollars, not because they didn't write enough Slack messages. It's a long-winded way of saying "Silicon Valley has internal politics," which isn't news to anyone here.

  • ninjahawk1 11 hours ago
    OpenAI is like #3 or #4 of the AI companies right now in terms of power, and last place in the court of public opinion.

    I’d be more concerned about Anthropic both being in the good graces of the public and having access to all of our computers indirectly with Claude Code.

    • 0x3f 11 hours ago
      OpenAI has ~30x the userbase of Anthropic.
      • aduffy 10 hours ago
        I'm not sure how much of that converts to revenue. If it's free plan users, that's just cost. You can say what you want about "creating a training data moat" but that doesn't seem like it's prevented the other labs from putting out excellent models.
        • 0x3f 10 hours ago
          Well we were talking about power and reputation and being well-known and all that. Being more ubiquitous is surely a big part of that. GP seems to think Anthropic is doing better because of the DoD thing. In my estimation, 90% of people do not care about that at all.
      • ninjahawk1 10 hours ago
        They’re all in the negative excluding subsidies, and hard-core coders are more valuable than high schoolers cheating on homework.
      • hellojimbo 10 hours ago
        Around the same revenue, due to Anthropic's strong enterprise strategy.
        • 0x3f 10 hours ago
          Perhaps, but I'd venture the ear of the regime is even more valuable.
    • estearum 11 hours ago
      makes sense if you think the point of journalism is just to take everyone down a notch instead of... um... informing the public of bad actors

      "the local drug-dealing pimp is so passe, we need to investigate the most upstanding members of the community just to be sure" is a frankly insane strategy

  • quantified 10 hours ago
    A bit of a feeling of "so what" here. Maybe he's less trustworthy than some. We have people of X trustworthiness running the government, crypto exchanges, a certain space exploration and satellite company, social media companies, and so on. We know their trustworthiness. Isn't the real issue how to cope?
    • boc 9 hours ago
      What's the point of living in an advanced society if you just sit around watching it decay around you? Our ancestors fought for our indifference today, and with attitudes like yours we'll watch our children fight for it again tomorrow.
      • quantified 7 hours ago
        What's your proposal? We know he's about as trustworthy as the others, and it sounds like you agree. What are you doing about them? Legally or illegally?

        Mostly we don't need 3,000 words on how untrustworthy he is. We could use 3,000 words on how to remove his influence.

    • Boxxed 10 hours ago
      Your point is that it's ok he's untrustworthy because lots of people in power are?
      • JumpCrisscross 9 hours ago
        > Your point is that it's ok he's untrustworthy because lots of people in power are?

        It's... weirdly a valid question. If Sam fibs as much as the next guy, we don't have a Sam problem. Focusing on him alone is, best case, a waste of resources. Worst case, it's distracting from real evil. If, on the other hand, as this reporting suggests, Sam is an outlier, then focusing on him does make sense.

      • quantified 7 hours ago
        Not sure where I said it's OK? Please point it out.

        We have to deal with it. Or are you suggesting we should purchase a controlling interest and vote him off the board?

      • TheOtherHobbes 9 hours ago
        No, it's that the entire ecosystem is rotten to the core, and it actively selects, rewards, and protects flawed personality types.

        And when you're dealing with a potential existential threat, this is an existential problem.

        • Rury 5 hours ago
          I don't disagree, but at some point, I think people need to understand we're dealing with laws of nature here. I mean just look at human history, this has been a problem since the dawn of civilization...

          I think if you truly understand social contract theory, how hierarchies form, and political theory, you'll realize that oligarchies tend to be nature's equilibrium point for settling social disputes. All forms of government, regardless of what they claim to be, naturally devolve toward them, because they represent the highest social entropy (i.e., equilibrium) state. That's not to say you can't move away from that point and toward another (supposedly ideal) form of government; you absolutely can, but it takes work. Perpetual work, which no set of "rules" can spare people from having to do in order to sustain it.

          The problem, however, is that most people get complacent. They eventually tire of that work, or are ignorant, and in doing so create a power vacuum that allows things to slide back toward that state.

          And so, people must decide for themselves which of several possible avenues to pursue:

          #1 - Try to convince others (the masses) to join and work together to take power from the few, back to them

          #2 - Find a way to join the ranks of the elite few (where, thanks to the prisoner's dilemma, unscrupulous means tend to perform better in the short term, even at the cost of the long term. And if the elite is already corrupt, well, cooperating with it works well)

          #3 - Settle for their lot in life

          Unfortunately, #1 is a difficult proposition, given that it requires winning agreement among many, while many others choose to remain in camp #3 (out of complacency or ignorance). And #2 is often easier done without moral integrity, especially at the expense of those in camp #3, whose behavior only helps enable these realities. This is why I think the "ecosystem," as you say, will always tend this way: toward a society controlled by an elite few who are rotten.

          Robert Michels realized this and dubbed it the Iron Law of Oligarchy, and he embraced his own version of #2 for himself. He came to this conclusion through his own observations and reasoning, though, rather than through historical political theory.

  • aryehof 2 hours ago
    I might expect such a subjective, gossipy exposé of a public official, but of a private individual at a commercial company outside the public sector?
  • rambambram 1 hour ago
    Any idea how stupid this title sounds!? It goes beyond exaggeration.