This leaves many in a position where they fear they will be next on the chopping block. Many assume physical tasks will take longer to automate, since building, verifying, and testing humanoid robots takes longer than deploying a virtual AI agent. However, many believe the writing is on the wall either way, and that those whose work involves their hands or bodies will only have a few more years than the formerly employed white-collar class.
Which skills, then, or combinations of skills, do you believe will be safest for staying employed and useful if AI continues improving at the rate it has for the past few years?
Right now, for example, native mobile jobs are mostly unaffected by AI. Gemini, despite all the data available from the community, doesn't do a decent job at it. If you ask it to build an app from scratch, the architecture will be off and it'll use an outdated tech stack from 2022. It will 'correct' perfectly good code to an older form, and if you ask it to hunt for bugs in cutting-edge tech, it might rip out the new tech and replace it with old. It often confuses methods that are shared across different languages, like .contains().
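To make the .contains() point concrete, here's a minimal hypothetical sketch (mine, not the commenter's): Java and Kotlin collections expose a .contains() method, while Python relies on the `in` operator, so a model pattern-matching across languages can emit code that looks plausible in one language but fails in another.

    # Hypothetical illustration (not from the comment above) of the
    # cross-language .contains() mix-up: Java and Kotlin lists have a
    # .contains() method, but Python lists do not.

    items = ["alpha", "beta", "gamma"]

    # A Java/Kotlin-flavored call a model might emit. In Python this
    # raises AttributeError: 'list' object has no attribute 'contains'
    #
    #     if items.contains("beta"):
    #         ...

    # Idiomatic Python uses the `in` operator instead:
    if "beta" in items:
        print("found it")

Each form is idiomatic somewhere, which is exactly why this kind of slip is easy to miss in review.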
But where very high-quality data is easily accessible, e.g. writing, digital art, voice acting, etc., the work becomes viable for AI to clone. There's little animation data and even less oil painting data - so something like oil painting will be far more resistant than digital art. It's top tier on Python and yet it struggles with Ren'Py.
Anthropic released experiment results from getting it to manage a vending machine: https://www.anthropic.com/research/project-vend-1
This is a fairly simple task for a human, and Claudius has plenty of reasoning ability and financial data. But it can't reason its way through running a vending machine because it doesn't have data on how to run vending machines.
Out of date after only 3 years?
Actually, there's very little debate. We get a lot of unsubstantiated hype from companies like OpenAI, Anthropic, Google, and Microsoft. So-called AI has barely made a dent in economic activity, and no company is making money from it. Tech journalism repeatedly fails to question the PR narrative (read Ed Zitron).
> Regardless of whether this will happen, or when, many people already have lost their jobs in part due to the emerging capabilities of AI models…
Consider the more likely explanation: many companies over-hired a few years ago and have since cut jobs. A focus on stock price in an uncertain economy leads to layoffs. It's easier to blame AI for layoffs than to admit C-suite management incompetence. Fear of the AI boogeyman gives employers the upper hand in hiring and salary negotiations, and keeps employees in line out of fear.
Would you really consider the Nobel laureates Geoffrey Hinton¹, Demis Hassabis² and Barack Obama³ not worth listening to on this matter? Demis is the only one with an ulterior motive to hype it up, but compared to normal tech CEOs he has quite a bit of proven impact (AlphaFold, AlphaZero) to make him worth listening to.
> AI has barely made a dent in economic activity
AI companies' revenues are growing rapidly, reaching into the tens of billions. The claim that AI is just a scapegoat for inevitable layoffs seems fanciful when there are many real-life cases of AI tools performing work equivalent to many person-hours in white-collar domains.
https://www.businessinsider.com/how-lawyer-used-ai-help-win-...
To claim it is impossible that AI could be even a partial cause of layoffs requires an unshakable belief that AI tools cannot even be labor-multiplying (that is, allowing one person to perform more work at the same level of quality than they otherwise could). To assume this has never happened by this point in 2025 requires a heavy amount of denial.
That being said, I could cite dozens of articles and numerous takes from leading experts, scientists, and legitimate sources without conflicts of interest, and I'm certain a fair portion of HN regulars would not be swayed an inch. Lively debate is the lifeblood of any domain that prides itself on intellectual rigor, but a lot of the dismissal of AI's actual utility, its impending impacts, and its implications feels like reflexive coping.
I would really love to hear an argument that convinces me that AGI is impossible, or far away, or that all the utility I get from Claude, o3, or Gemini is just a trick of scale and memorization, entirely orthogonal to anything akin to general human-like intelligence. However, I have not heard a good one. The replies I get are largely ad hominems toward tech CEOs, dismissive characterizations of the tech industry at large, and thought-terminating quips that hold no ontological weight.
1: https://www.wired.com/story/plaintext-geoffrey-hinton-godfat...
2: https://www.axios.com/2025/05/21/google-sergey-brin-demis-ha...
3: https://www.youtube.com/watch?v=72bHop6AIcc
4: https://www.cio.com/article/4012162/ai-begins-to-reshape-the...
For example, conflict resolution, therapy, and coaching depend on nuance, empathy, and trust.
Skilled trades like plumbing, electrical work, or HVAC repair; auto mechanics; elevator technicians.
Roles that combine physical presence with knowledge: emergency responders (firefighters, EMTs), disaster relief coordinators.
Plumbing.
Embalming and funeral direction.
Childrearing, especially of toddlers. Therapy for complex psychological conditions or cases with complications. Anything else that requires strong emotional and interpersonal judgement and the ability to think outside the box.
Politics/charisma. Influencers. Cult leaders. Anything else involving a cult of personality.
Stand-up comics/improv artists. Nobody’s going to pay to sit in a room with other people and listen to a computer tell jokes.
World class athletes.
Top tier salespeople.
TV news anchors, game show hosts, and the like.
Also note that a bunch of these (and other jobs) may vanish if the vast majority of the population is unemployed and only a few handfuls of billionaires can afford to pay anyone for services.
I’d also note that a lot of jobs will stay safe for much longer than we fear if AI continues to be unable to actually reason and can only handle patterns / extrapolations of patterns it’s already seen.
That’s why, if you look at the leveling guidelines for any well-known tech company, “codez real gud” only makes a difference between junior and mid-level developers. After that it’s about “scope”, “impact” and “dealing with ambiguity”.
Yes, I realize there are still some “hard problems” that command a premium for the people who can solve them via code - that’s the other 10%, and I’m being generous.