Ask HN: Has AI stolen the satisfaction from programming?

I've been trying to articulate why coding feels less pleasant now.

The problem: You can't win anymore.

The old way: You'd think about the problem. Draw some diagrams. Understand what you're actually trying to do. Then write the code. Understanding was mandatory. You solved it.

The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.

So I feel pressure to always, always, start by info-dumping the problem description to AI and gambling on a one-shot. Voice transcription for 10 minutes, hit send, hope I get something first try; if not, hope I can iterate until something works. And even when something does work, there's zero satisfaction, because I don't have the same depth of understanding of the solution. It's no longer my code, my idea. It's just some code I found online. `import solution from chatgpt`

If I think about the problem, I feel inefficient. "Why did you waste 2 hours on that? AI would've done it in 10 minutes."

If I use AI to help, the work doesn't feel like mine. When I show it to anyone, the implicit response is: "Yeah, I could've prompted for that too."

The steering and judgment I apply to AI outputs is invisible. Nobody sees which suggestions I rejected, how I refined the prompts, or what decisions I made. So all credit flows to the AI by default.

The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.

I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me that my reaction to these blog posts has changed so much. Three years ago I would be bookmarking a blog post to try it out for myself that weekend. Now those 200 lines of simple code feel only a one-sentence prompt away, and thus a waste of time.

Am I alone in this?

Does anyone else feel this pressure to skip understanding? Where thinking feels like you're not using the tool correctly? In the old days, I understood every problem I worked on. Now I feel pushed to just ship. I hate it.

52 points | by marxism 3 hours ago

54 comments

  • raw_anon_1111 12 minutes ago
    My satisfaction in programming since I started doing it professionally in 1996 was that companies with money let me exchange some of their money for my labor and I could take that money and then exchange it for goods and services to support my addiction to food and shelter.

    AI just like IDEs before it makes it easier for me to complete my labor and have money appear in my account.

    There are literally at least a dozen things I would rather do after getting off of work than spending more time at a computer.

  • conductr 2 hours ago
    If programming is woodworking, using AI is IKEA assembly, except they packed mostly the wrong parts in the box, so I have to go back and forth with customer service to get the right parts, and the hardware doesn’t always function as intended, leaving me to source my own.

    It’s a different, less enjoyable, type of work in my opinion.

    • D13Fd 1 hour ago
      Plus the item you build may not be exactly what you initially requested, and you'll have to decide whether it's something you are willing to accept.
    • butlike 2 hours ago
      > It’s a different, less enjoyable, type of work

      This is an elegant way of putting it. I like it

  • tpoacher 2 hours ago
    Not necessarily an identical thought to OP, but, anecdotally (n=1), my experience teaching the exact same course on Advanced Java Programming for the last 4 years has been that the students seem to be getting more and more cynical, and seem to think of programming as an art, or as a noteworthy endeavour in itself, less and less. Very few people have actually vocalised the "why do I even need to learn this if I can write a prompt" sentiment out loud, but it has been voiced, and even from those who don't say it there's a very definite 'vibe' that is all but screaming it.

    Whereas the vibe in the lecture theatre 4 years ago was far more nerdy and enthusiastic. It makes me feel very sorry for this new generation that they will never get to enjoy the same feeling of satisfaction from solving a hard problem with code you thought and wrote from scratch.

    Ironically, I've had to incorporate some AI stuff in my course as a result of needing to remain "current", which almost feels like it validates that cynical sentiment that this soulless way is the way to be doing things now.

    • nxor 1 hour ago
      Has the school changed?

      And can we assume that because AI has made it easy to solve some hard problems, other hard problems won't arise?

      Not that I don't agree

      And hasn't the internet generally added to this attitude?

      And if it makes you feel any better, as someone around that age, this environment seems to have also led some of us to go out of our way to not outsource all our thinking

    • andy99 1 hour ago
      I taught intro to programming ~15-20 years ago. Back then everyone just copied each other’s assignments. Plus ça change
    • abnercoimbre 2 hours ago
      The OP said coding now feels like:

      > import solution from chatgpt

      Which reminded me of all the students in classes (and online forums) mocking non-nerds who wanted easy answers to programming problems. It would seem the non-nerds are getting their way now.

  • themafia 2 hours ago
    > That's the labor-saving promise.

    Where are the labor saving _measurements_? You said it yourself:

    > You'd think about the problem. Draw some diagrams. Understand what you're actually trying to do.

    So why are we relying on "promises?"

    > If I use AI to help, the work doesn't feel like mine.

    And when you're experiencing an emergency and need to fix or patch it this comes back to haunt you.

    > So all credit flows to the AI by default.

    That's the point. Search for some of the code it "generates." You will almost certainly find large parts of it, verbatim, inside a GitHub repository or on an author's webpage. AI takes the credit so you don't get blamed for copyright theft.

    > Am I alone in this?

    I find the thing to be an overhyped scam at this point. So, no, not at all.

    • LordDragonfang 2 hours ago
      > You will almost certainly find large parts of it, verbatim, inside of a github repository or on an authors webpage. AI takes the credit so you don't get blamed for copyright theft.

      Only if you're doing something trivial or highly common, in which case it's boilerplate that shouldn't be copyrighted. We already had this argument when Oracle sued Google over Java. We already had the "just stochastic parrots" conversation too, and concluded it's a specious argument.

      • heavyset_go 2 hours ago
        > We already had this argument when Oracle sued Google over Java.

        "It's boilerplate therefore it isn't IP" isn't the argument that was made by Google, nor is it the argument that the case was decided upon.

        It was decided that Google's use of the API met the four determining factors used by courts to ascertain whether use of IP is fair use. The court found that even though it was Oracle's copyrighted IP, it was still fair use to use it in the way Google did.

        https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_...

      • themafia 2 hours ago
        > in which case it's boilerplate that shouldn't be copyrighted

        Let's say it's boilerplate code filled with comments that are designed to assist in understanding the API being written against. Are the comments somehow not covered because they were added to "boilerplate code?" Even if they're reproduced verbatim as well?

        > We already had the "just stochastic parrots" conversation too

        Oh, I was not part of those conversations, perhaps you can link me to them? The mere stated existence of them is somewhat underwhelming and entirely unconvincing. Particularly when it seems easy to ask an LLM to generate code and then to search for elements of that code on the Internet. With that methodology you wouldn't need to rely on conversations but on actual hard data. Do you happen to know if that is also available?

  • mattlondon 2 hours ago
    > The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing

    I'd disagree. For me, I direct the AI to implement my plan; it handles the trivialities of syntax, boilerplate, etc.

    I now work kinda at the "unit level" rather than the "syntax level" of old. AI never designs the code for me, more fills in the gaps.

    I find this quite satisfying still - I get stuff done but in half the time because it handles all the boring crap - the typing - while I still call the shots.

    • mpliax 1 hour ago
      Don't you have to go over whatever the chatbot spits out? Isn't that part more boring than writing the code yourself?
  • CuriouslyC 2 hours ago
    To be honest the only time I got satisfaction out of programming in the past was when I programmed a really hard algorithm or created a really beautiful design, and the AI doesn't replace me there, it just automates the menial part of the work.
  • saulpw 2 hours ago
    I agree completely, you are not alone! I've heard the argument "if you don't like AI just don't use it" but there is this nagging feeling just as you describe. Like the mere existence of AI as a coding tool has sucked all the dopamine out of my brain.
  • leakycap 2 hours ago
    > There's this constant background feeling that whatever I just did, someone else would've done it better and faster.

    You're having an imposter-syndrome-type response to AI's ability to outcode a human.

    We don't look at compilers and beat our fists because we can't write in assembly... why expect your human brain to code as easily or quickly as AI?

    The problem you are solving now becomes the higher-level problem. You should absolutely be driving the projects and outcomes, but using AI along the way for programming is part of the satisfaction of being able to do so much more as one person.

  • recursivedoubts 50 minutes ago
    Here is how I'm using it:

    Do all the stuff you mention the old way. If I have a specific, crappy API that I have to deal with, I'll ask AI to generate the specific functionality I want with it (no more than a method or two). When it comes to testing, I'll write a few tests (some simple, some complicated) and then ask AI to generate a set of tests based on those examples. I then run and audit the tests to make sure they are sensible. I always end my prompts with "use the simplest, minimal code possible"

    I am mostly keeping the joy of programming while still being more productive in areas I'm not great at (exhaustive testing, having patience with crappy APIs)

    Not world changing, but it has increased my productivity I think.

  • mhaberl 2 hours ago
    >The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing.

    That’s the promise, but not the reality :) Try this: pick a random startup idea from the internet, something that would normally take 3–6 months to build without AI. Now go all in with AI. Don’t worry about enjoyment; just try to get it done.

    You’ll notice pretty quickly that it doesn’t get you very far. Some things go faster, until you hit a wall (and you will hit it). Then you either have to redo parts or step back and actually understand what the AI built so far, so you can move forward where it can’t.

    >I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle.

    It was "stupid" then - better alternatives already existed, but you do it to learn.

    > Am I alone in this?

    Absolutely not. But understand it is just a tool, not a replacement. Use it and you will soon find the joy again; it is there.

  • baq 2 hours ago
    It’s the opposite for me. I can get so much more of what I want done built quicker, and if I’m not familiar with a framework, it isn’t an issue anymore unless we’re talking about the bleeding edge.
  • journal 2 hours ago
    It's draining. I think I've read more synthetic text in the last three years than all text I've ever encountered in life.
  • andy99 2 hours ago
    I still find it’s only useful for writing code you don’t need to understand. I would never vibe code something that I needed to know how it worked.

    It’s just going to take time for “best practice” to come around with this. It’s like outsourcing, for a while it seems like a good idea and it might be for very fixed tasks that you don’t really care about, but nobody does it now for important work because of the lack of control and understanding which is exactly where AI will end up. I think for coding tasks you can almost interchangeably use AI and outsourcing and preserve the meaning.

  • nerdsniper 1 hour ago
    AI makes me a lot more adventurous in terms of the projects I take on. I usually end up having to rewrite everything from scratch after POC proves that my vision is possible and actually works - including the whole old-school RTFM.

    It’s a huge help for diving into new frameworks, troubleshooting esoteric issues (even if it can’t solve it its a great rubber duck and usually highlights potential areas of concern for me to study), and just generally helping me get in the groove of actually DOING something instead of just thinking about it. And, once I do know what I’m doing and can guide it method by method and validate/correct what it outputs, it’s pretty good at typing faster than I can.

  • crtified 2 hours ago
    Not to suggest that analogies solve anything, but perhaps it adds large-scale context to mention that throughout history, various (and frequent!) waves of technological disruption have had a similar effect upon particular fields of work.

    I used to work in land surveying, entering that field around the turn of the millennium just as digitalisation was hitting the industry in a big way. A common feeling among existing journeymen was one of confusion. Fear and dislike of these threatening changes, which seemed to neutralise all the hard-won professional skills. Expertise with the old equipment. Understanding of how to do things closer to first-principles. Ability to draw plans by hand. To assemble the datasets in the complex and particular old ways. And of course, to mentor juniors in the same.

    Suddenly, some juniors coming in were young computer whizzes, speeding past their seniors in these new ways. But still only juniors, for all that; still green, no matter what the tech, with years and decades yet to earn their stripes, their professionalism in all its myriad aspects. And for the seniors, their human aptitudes (which got them there in the first place) didn't vanish. They absorbed the changes, stuck with their smart peers, and evolved to match the environment. Would they have rathered that everything in the world had stayed the same as before? Of course. But is that a valid choice, professionally speaking? Or in life itself? Not really.

  • RamtinJ95 1 hour ago
    This is a topic I feel very strongly about and have given it some considerable amount of thought. Most of those thoughts ended up in this blog post: https://handmadeoasis.com/ai-and-software-engineering-the-co...

    But the whole blog is a consequence of exactly this that you are describing.

  • littlecranky67 2 hours ago
    AI is taking the joy out of programming the same way cameras took the joy out of painting, or the record player took the joy out of playing a musical instrument. If your financial income relies on it, there will be issues and you will have to go along with the technical innovation. If you enjoy programming for programming's sake and only do it for fun, simply do not use AI. On your own free time, you are free to make that choice.
  • gooodvibes 1 hour ago
    > Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle.

    Why didn't the fact that Redis already existed make the whole thing feel pointless before? You could just go to github and copy the thing. I don't get why AI is any different in this regard.

  • agentultra 2 hours ago
    Can’t be disappointed if you don’t use it.

    I’ve never met so many people that hate programming so much.

    You get the same thing with artists. Some product manager executive thinks their ideas are what people value. Automating away the frustration of having to manage skilled workers is costly and annoying. Nobody cares how it was made. They only care about the end result. You’re not an artist if all you had to do was write a prompt.

    Every AI-bro rant is about how capital-inefficient humans are. About how fallible we are. About how replaceable we are.

    The whole aesthetic has a “good art vs. bad art” parallel to it, where people who think for themselves and write code in service of their work and curiosity are portrayed as inferior and unfit, while anyone using AI workflows is proper and good. If you are not developing software using this methodology then you are a relic, unfit, unstable, and undesirable.

    Of course it’s all predicated on being dependent on a big tech firm, paying subscription fees and tokens to take a gamble at the AI slot machine in hopes that you’ll get a program that works the way you want it to.

    Just don’t play the game. Keep writing useless programs by hand. Implement a hash table in C or assembly if you want. Write a parser for a data format you use. Make a Doom clone. Keep learning and having fun. Satisfaction comes from mastery and understanding.

    Understanding fundamental algorithms, data structures, and program composition never gets old. We still use algebra today. That stuff is hundreds of years old.

  • jackdoe 2 hours ago
    i absolutely feel the same

    wrote recently about it https://punkx.org/jackdoe/misery.html

    now at night i just play my walkman (fiio cp13) and work on my OS, i managed to record some good cassettes with non-AI-generated free music from youtube :) and it's pretty chill

    PS: use before:2022 to search

  • thom 2 hours ago
    No, this is the exact opposite of my experience. I have little interest in the AI's code, it can be very illustrative but I find it ugly, unmaintainable (for either humans or itself after enough iterations) and regularly wrong. But it's so much better than Google at teaching me new things, and helping with the boring bits like debugging stack traces and making random throwaway visualisations. I ask it dumb questions until I'm sure I understand things, in ways I would never burden a co-worker with, or that would be impossible when faced with a narrow blog post. And I'm left to just concentrate on my craft. It doesn't feel too slow to me because there's no point arriving at the destination if I didn't enjoy the journey and turn up with the AI having forgotten to pack any trousers.
  • jvanderbot 2 hours ago
    Who do you feel this pressure from? I realize I'm not answering your question, but is it possible that pressure is your own inner critic, not any real constraint?

    Ok, you don't like a particular way of working or a particular tool. In any other era, we would just stop using that tool or method. Who is saying you cannot? Is it a real constraint or a perceived one?

    Regardless, I understand the need to understand what you built. So you have a few options. You can study it (with the agent's help?), you can write your own tests / extensions for it to make sure you really get it, or you can write it yourself. I honestly think that most of those take about as long. It's only shorter when you don't want to understand it, so then we're back to the main question: Why not?

  • the__alchemist 2 hours ago
    Here is what has changed for me: I spend less time on tedious or solved work, and focus on the interesting parts. If there's some sort of algo that I think the AI could solve cleanly, but I want to refresh my skills, maybe I do that part myself.

    Note: I don't vibe-code or use agents. Just standard JetBrains IDEs, and a GPT-5-thinking window open for copy-paste.

  • diob 2 hours ago
    Interesting, I have yet to feel like AI automates everything.

    When I need something to work that hasn't been done before, I absolutely have to craft most of the solution myself, with some minor prompts for more boilerplate things.

    I see it as a tool similar to a library. It solves things that are already well known, so I can focus on the interesting new bits.

  • cadamsdotcom 1 hour ago
    Unfortunately you are asking your question inside an echo chamber.

    Most commenters comment because it makes them feel good inside. If a comment helps you... well, that's a rare side effect.

    To truly broaden your perspective - instead of just feeling good inside - you must do more than Ask HN.

  • lordofgibbons 2 hours ago
    I started (incorrectly) going down this same route of offloading the thinking and system-design process to the LLM. It was a disaster.

    You have to understand your problem and solution inside and out. This means thinking deeply about your solution, along with drawing the boxes and lines. Only then do you go to the LLM and have it implement your solution!

    I heavily use LLMs daily, but if you don't truly understand the problem and solution, you're going to have a bad time.

  • jgb1984 1 hour ago
    I don't use AI. Problem solved. The software I make will be better off in the long term.
  • netdur 2 hours ago
    To be honest, I still feel satisfied. But when it comes to making something useful: in the past I gave up programming because of the endless repetitive tasks. You want to build something cool, but first you need to make an auth system, and by the time you finish that, the cool idea is already dead because of how boring and repetitive it all is. AI coding made it fun again.
  • jjice 2 hours ago
    I had this mindset at first. Then I found that it's fantastic at doing all the grunt work I didn't care for. I'm quite happy now that I've mostly automated away the truly boring stuff, leaving me with more time for the interesting problems.
  • nxor 2 hours ago
    I think it has, but not necessarily from programming, rather from other pursuits. I think the hype has gone too far. That said, it hasn't stolen the satisfaction from language learning, so I still think there are things it's suited for.
  • layer8 2 hours ago
    Understanding remains imperative in software development. I would recommend finding employers/projects that value the thoroughness and diligence of what you call the “old way”. I don’t see those going away.
  • samuelknight 2 hours ago
    No. I can prototype in 20 mins things that would have taken me a day before.
  • michelsedgh 2 hours ago
    "Stolen," as if people hate AI and don't love just chilling and having it code for them. You understand that people are choosing to use AI and loving it, right? That's why it grew so much.
  • gooodvibes 3 hours ago
    > The entire premise of AI coding tools is that they automate the thinking, not just the typing. You're supposed to be able to describe a problem and get a solution without understanding the details.

    This isn't accurate.

    > So I feel pressure to always, always, start by info dumping the problem description to AI and gamble for a one-shot. Voice transcription for 10 minutes, hit send, hope I get something first try, if not hope I can iterate until something works.

    These things have planning modes - you can iterate on a plan all you want, make changes when ready, make changes one at a time etc. I don't know if the "pressure" is your own psychological block or you just haven't considered that you can use these tools differently.

    Whether it feels satisfying or not - that's a personal thing, some people will like it, some won't. But what you're describing is just not using your tools correctly.

    • marxism 2 hours ago
      I think you're misunderstanding my point. I'm not saying I don't know how to use planning modes or iterate on solutions.

      Yes, you still decompose problems. But what's the decomposition for? To create sub-problems small enough that the AI can solve them in one shot. That's literally what planning mode does - help you break things down into AI-solvable chunks.

      You might say "that's not real thinking, that's just implementation details." But look who came up with the plan in the first place: it's the AI! Plan mode is partial automation of the thinking there too (and improving every month).

      When Claude Code debugs something, it's automating a chain of reasoning: "This error message means execution reached this file. That implies this variable has this value. I can test this theory by sending this HTTP request. The logs show X, so my theory was wrong. Let me try Y instead."

      • gooodvibes 1 hour ago
        > But what's the decomposition for?

        To get it done correctly, that's always what it's been about.

        I don't feel that code I write without assistance is mine, or some kind of achievement to be proud of, or something that inflates my own sense of how smart I am. So when some of the process is replaced by AI, there isn't anything in me that can be hurt by that, none of this is mine and it never was.

      • malux85 2 hours ago
        > When I stop the production line to say "wait, let me understand what's happening here," the implicit response is: "Why are you holding up progress? It mostly works. Just ship it. It's not your code anyway."

        This is not a technical problem or an AI problem, it’s a cultural problem where you work

        We have the opposite - I expect all of our devs to understand and be responsible for AI generated code

  • gordonhart 2 hours ago
    I largely agree. Thinking through the business requirements, hammering out the design, testing against the requirements, and reviewing the code were never the fun parts, but they were usually <50% of the job as an IC. Now that LLMs do the fun part (actually writing the code), those parts are all that’s left.

    The job now feels quite different than the one I signed up for a decade+ ago. The only options I see are to accept that with a sigh or reject automation of the fun part and lose employability (worst case) or be nagged by anxiety that eventually that’ll happen.

  • block_dagger 2 hours ago
    I'm more thrilled building software now than I have been in my ~35 years programming. I think that means I am satisfied.
  • zkmon 2 hours ago
    No, it didn't. The way you get your wow moment has changed. You get impressed by your skills in prompting, agentic stuff and your ability to squeeze out the best work from AI, fix its bugs, get it to review and fix your bugs and make the whole collaboration a grand success. That's not easy because now you are expected to deliver 10x output. It's the same hard work, or maybe more hard work.
  • travisgriggs 2 hours ago
    Web Programming has stolen satisfaction from programming. At least for me.

    I've coded in win32, XWindows, GTK, UIKit, Logo, Smalltalk, QT, and others since '95. I had various (and sometimes serious) issues with all of these as I worked in them, but no other mechanism of helping humans interact with computation has been more frustrating and disappointing than the web. Pointing out how silly it all is (really, I have to use 3 separate languages with different computation models, plus countless frameworks, and that's just on the client side???) never makes me popular with people who have invested huge amounts of time and energy into mastering ethereal library idioms or modern "best practices" which will be different next month. And the documentation? Find someone who did a quick blog post on it, trying to get their name out there. Good luck.

    The fact that an AI is an efficient, but lossy, compression of the big pile, to help me churn through it faster, is actually kind of refreshing for me. Any confidence that I was doing the Right Thing in this domain always made me wonder how "imagined" it was. The fact that I have a stochastic parrot with sycophantic confidence to help me hallucinate through it all? That just takes it to 11.

    I thought when James Mickens wrote "To Wash It All Away" (https://scholar.harvard.edu/files/mickens/files/towashitalla...), maybe someday things would get better. 10 years later, the furniture has moved and changed color some, but it's still the same shitty experience.

  • eimrine 3 hours ago
    AI has stolen my satisfaction from philosophy. Gone are the times when I could be sure of my conclusions for ideological reasons. If I can't persuade the LLM, my theory is nothing. If I can persuade the LLM, I end up using the LLM's theses instead of my own.

    How can I choose my political views and preferences if I need to consult an LLM about them?

    • bigfishrunning 2 hours ago
      > If I can not persuade LLM my theory is nothing.

      It's important to remember, at times like these, that the LLM is not thinking. You can't persuade it of anything; you're looking at a convincing response based on patterns in language.

  • bediger4000 3 hours ago
    Understanding (of various fields) is the only reason to do programming. I only use "AI" (really, LLMs) for code review, for this very reason.

    LLM code is extremely "best practices" or even worse because of what it's trained on. If you're doing anything uncommon, you're going to get bad code.

  • chankstein38 2 hours ago
    Honestly I'm not sure I ever really got satisfaction from the coding process itself. The output is what I care about. If it's a new and interesting output then it's still your idea. The code not being written by you doesn't detract from that.

    Aside from regular arguments and slinging insults at chatgpt, I've been enjoying being able to be way more productive on my personal projects.

    I've been using agentic AI to explore ESP32 in Arduino IDE. I'm learning a ton and I'm confident I could write some simpler firmware at this point and I regularly make modifications to the code myself.

    But damn if it isn't amazing to have zero clue how to rewrite low level libraries for a little known sensor and within an hour have a working rewrite of the library that works perfectly with the sensor!

    I'll say though, this is all hobby stuff. If my day job was professional chatgpt wrangler I think I'd be pretty over it pretty quickly. Though I'm burnt out to hell. So maybe it's best.

  • more_corn 2 hours ago
    Don’t use AI in the fully automated, big picture dehumanizing way. (It will screw up if you do that and you won’t be able to catch and correct it)

    Use it in the precise, augmenting, accelerating way.

    Do your own design and architecture (it sucks at that anyway) and use AI to tab complete the work you already thought through and planned.

    This can preserve your ability to reason about the project and troubleshoot, improve your productivity while not turning your brain off.

  • qq99 1 hour ago
    I absolutely love it. I find it empowers me more than ever before, and my satisfaction is at all time highs. I'm even building projects now (videogames) that I probably wouldn't have started before.

    Here's where I'm at:

    - Your subjective taste will become more important than ever, be it graphic design, code architecture, visual art, music, and so on for each domain that AI becomes good at. People with better taste will produce better results. If you have bad taste, you can't steer _any_ tool (AI or otherwise) into producing good outputs. So refining your taste and expanding it will become more important. re: "Yeah, I could've prompted for that too.", I see a parallel to Stable Diffusion visual art. Sure, anyone _can_ make _anything_, but getting certain types of artistic outputs is still an exercise in skill and knowledge. Without the right skill expression, they won't have the same outputs.

    - Delegating the things where "I don't have time to think about that right now" feels really good. As an analog, e.g., importing lodash and using one of their functions instead of writing your own. With AI, it's like getting magical bespoke algorithms tailored exactly to your needs (but unlike lodash, I actually see the underlying src!). Treat it like a black box until it stops working for you. I think "use AI vs not" is similar to "use a library or not": you kinda still have to understand what you need to do before picking up the tool. You don't have to understand any tool perfectly to make effective use out of it.

    - AI is a tremendous help at getting you over blockers. Previous procrastination is eliminated when you can tell AI to just start building and making forward progress, or if you ask it for a high level overview on how something works to demystify something you previously perceived as insurmountable or tough.
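    The lodash analogy above can be made concrete. A "magical bespoke algorithm" is usually just a small, tailored helper like this hypothetical group_by (sketched in Python for illustration; lodash itself is a JavaScript library), whose source sits right in your repo, skimmable once and then treated as a black box until it misbehaves:

    ```python
    from collections import defaultdict

    def group_by(items, key):
        """Group items into a dict keyed by key(item) - the kind of small,
        tailored helper you'd once have pulled from a utility library."""
        groups = defaultdict(list)
        for item in items:
            groups[key(item)].append(item)
        return dict(groups)

    # Unlike a library import, the underlying src is right there to inspect.
    words = ["apple", "avocado", "banana", "cherry"]
    print(group_by(words, key=lambda w: w[0]))
    # → {'a': ['apple', 'avocado'], 'b': ['banana'], 'c': ['cherry']}
    ```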

    > Nothing feels satisfying anymore

    You still have to realize that were it not for you guiding the process, the thing in question would not exist. e.g., if you vibecode a videogame, you start to realize that there's no way (today) that a model is 1-shotting that. At least, it isn't 1-shotting it exactly to your vision. You and AI compile an artifact together that's greater than the sum of both of you. I find that satisfying and exciting. Eventually you will have to fix it (and so come to understand parts you neglected to earlier).

    It's incredibly satisfying when AI writes the tedious test cases for things I write personally (including all edge cases) and I just review and verify they are correct.
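    As a hypothetical illustration of that division of labor: the function below stands in for the part you'd write yourself, and the edge-case assertions are the tedious part you'd delegate and then just review for correctness.

    ```python
    def clamp(value, low, high):
        """Hand-written logic: restrict value to the [low, high] range."""
        return max(low, min(value, high))

    # The kind of edge-case tests that are tedious to type but quick to review.
    assert clamp(5, 0, 10) == 5     # in range
    assert clamp(-3, 0, 10) == 0    # below the floor
    assert clamp(42, 0, 10) == 10   # above the ceiling
    assert clamp(0, 0, 10) == 0     # exactly at the boundary
    assert clamp(7, 7, 7) == 7      # degenerate range
    print("all edge cases pass")
    ```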

    I still find I regret in the long term cases where I vibe-accept the code it produces without much critical thought, because when I need to finesse those, I can see how it sometimes produces a fractal of bad designs/implementations.

    In a real production app with stakes and consequences you still need to be reading and understanding everything it produces imo. If you don't, it's at your own peril.

    I do worry about my longterm memory though. I don't think that purely reading and thinking is enough to drill something into your brain in a way that allows you to accurately produce it again later. Probably would screw me over in a job interview without AI access.

  • ChrisArchitect 2 hours ago
    Lots of various laments on this tip around here the last while (with mixed responses)

    I do not want to be a programmer anymore

    https://news.ycombinator.com/item?id=45481490

    I Don't Want to Code with LLM's

    https://news.ycombinator.com/item?id=45332448

  • diamondfist25 2 hours ago
    I never liked coding.

    What I like is problem solving.

    Coding is 90% syntax, 10% thinking.

    AI is taking away the 90% garbage, so we can channel that 90% into problem solving.

  • AlienRobot 2 hours ago
    Adding to this: when using AI as an autocomplete, it feels like I'm being distracted every half second, because for each character I type I need to check whether the autocomplete matches what I was going to type, which is just very annoying.

    So I'm more productive, but at what cost...

  • lovich 2 hours ago
    It took the satisfaction out of it in the sense that I can no longer be laid to do it.

    For side projects, no; I use it at the level that feels like it enhances my workflow and manually write the other bits, since I don't have productivity software tracking whether I'm adopting AI hard enough.

    • bryanlarsen 2 hours ago
      I assume you mean "paid" instead of "laid". Textbook Freudian slip?

  • incomingpain 2 hours ago
    I have multiple open source projects. Maintaining them and coding fixes for mostly edge-case bugs, since I hadn't really added new features, became tedious, and I didn't want to maintain them anymore.

    AI coding fixed that. Pre-AI I loved using all of the features of an IDE with an intention of speeding up my coding. Now with AI, it's just that much faster.

    >The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.

    I've had so much satisfaction since AI coding. I've had greater satisfaction.

  • alganet 2 hours ago
    Think of it this way: if your problem can be solved by an LLM with the same quality, then it's not a problem worthy of a human to tackle. It probably never was in the first place; we just didn't know.

    The only exception here is learning (solving a solved problem so you can internalize it).

    There are tons of problems that LLMs can't tackle. I chose two of them: polyglot programs (which I'd already worked on before AI) and bootstrapping from source (AI can't even understand what the problem is). The progress I can make in those areas is not improved by using LLMs, and it feels good. I'm sure there are many more such problems out there.

    • marxism 2 hours ago
      I actually agree with everything you said, and I see I failed to communicate my idea: that's exactly why I'm so upset.

      You said "the only exception here is learning" - and that exception was my hobby. Programming simple things wasn't work for me. It was entertainment. It was what I did for fun on weekends.

      Reading a blog post about writing a toy database or a parser combinator library and then spending a Saturday afternoon implementing it myself: that was like going to an amusement park. It was a few hours of enjoyable, bounded exploration. I could follow my curiosity, learn something new, and have fun doing it.
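      For a sense of the scale involved: the core of a parser combinator library really is tiny. A minimal sketch (Python, names purely illustrative, not taken from any particular blog post) is just a couple of functions returning functions:

      ```python
      def char(c):
          """Parser matching a single expected character."""
          def parse(text):
              if text and text[0] == c:
                  return c, text[1:]   # (matched value, remaining input)
              return None              # no match
          return parse

      def many(p):
          """Apply parser p zero or more times, collecting the results."""
          def parse(text):
              results = []
              while (r := p(text)) is not None:
                  value, text = r
                  results.append(value)
              return results, text
          return parse

      # Parse a run of 'a's; bigger grammars compose from pieces like these.
      print(many(char("a"))("aaab"))   # → (['a', 'a', 'a'], 'b')
      ```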

      And you're right: if an LLM can solve it with the same quality, it's not a problem worthy of human effort. I agree with that logic. I've internalized it from years in the industry, from working with AI, from learning to make decisions about what to spend time on.

      But here's what's been lost: that logic has closed the amusement park. All those simple, fun learning projects now feel stupid. When I see those blog posts now, my gut reaction is "why would I waste time on that? That's one prompt away." The feeling that it's "not worthy" has completely drained the joy out of it.

      I can't turn off that instinct anymore. I know those 200 lines of code are trivial. I know AI can generate them. And so doing it myself feels like I'm deliberately choosing to be inefficient, like I'm LARPing at being a programmer instead of actually learning something valuable.

      The problem isn't that I disagree with you. The problem is that I agree with you so completely that I can no longer have fun. The only "worthy" problems left are the hard ones AI can't do. But those require months of serious investment, not a casual Saturday afternoon.

      • pakitan 1 hour ago
        > The feeling that it's "not worthy" has completely drained the joy out of it.

        It was never "worthy". With the proliferation of free, quality, open source software, what's now a prompt away has been a GitHub repo away for a long time. It's just that, before, you chose to ignore the existence of GitHub repos and enjoy your hobby. Now you're choosing not to ignore the AI.

      • eCa 2 hours ago
        > And so doing it myself feels like I'm deliberately choosing to be inefficient

        People have plenty of hobbies that are not the most "efficient" way to solve a problem. There are planes, but some people ride bikes across continents. Some walk.

        LLMs exist, you can choose to what level you use them. Maybe you need to detox for a weekend or two.

      • alganet 2 hours ago
        I genuinely do not understand this. You can totally still do that for learning purposes.

        The only thing you cannot do anymore is show off such projects. The portfolio of mini-tutorials is definitely a bygone concept. I actually like that part of how the culture has changed.

        Another interesting challenge is to set yourself up to outperform the LLM. Golf with it. The LLM can do a parser? Okay, I'll make a faster one instead. Fewer lines of code. There are tons of learning opportunities in that.

        > The only "worthy" problems left are the hard ones

        That's not true. There are also unexplored problems which the AI doesn't have enough training data to be useful.

  • wahnfrieden 2 hours ago
    Hell no. I'm a full-time indie dev now, so maybe I would think differently if I were trading my time for a paycheck instead of working for results with 100% equity. But now I get to tackle features and ideas I've had for years but could never justify taking the time to investigate and attempt, in part because agents are slow enough that I must parallelize them, which allows me to test ideas on the side while working on my primary objectives. I still review all code and provide close technical guidance.

    I already learned to appreciate working with code from "others" by working in teams and leading teams in a past life. So I don't feel as personally attached to code that comes from my own fingertips anymore, or the need for the value of my work to be expressed that way.

  • unnouinceput 2 hours ago
    No, it didn't. Or rather, it did for the run-of-the-mill coder-camp wannabe programmer, which is what you sound like. For me it's the opposite. That's because I don't do run-of-the-mill web pages; my work is very specific, and the so-called "AI" (which is actually just googling with extra spice on top; I don't think I'll see true AI in my lifetime) is too stupid to do it. So I have to break it down into several sessions, giving only partial details (divide and conquer), otherwise it will confabulate stupid code.

    Before this "AI" I had to do the mundane task of writing boilerplate. Now I don't. That's a win for me. The grand thinking and the whole picture of the projects is still mine, and I keep trying to hand that to "AI" from time to time, except each time it spits out BS. It also helps that as a freelancer my stuff gets used by my client directly in production (no manager above me, who has a group leader, who has a CEO, who answers to the client's IT department, which finally has the client as the end user). That's another good feeling. Corporations with layers upon layers are what suck the joy out of programming. Freelancing allowed me to avoid that.

    • marxism 1 hour ago
      I'm curious: could you give me an example of code that AI can't help with?

      I ask because I've worked across different domains: V8 bytecode optimizations, HPC at Sandia (differential equations on 50k nodes, adaptive mesh refinement heuristics), resource allocation and admission control for CI systems, and a custom UDP network stack for mobile apps (https://neumob.com/). In every case I can remember, the AI coding tools of today would have been useful.

      You say your work is "very specific" and AI is "too stupid" for it. This just makes me very curious what does that look like concretely? What programming task exists that can't be decomposed into smaller problems?

      My experience as an engineer is that I'm already just applying known solutions that researchers figured out. That's the job. Every problem I've encountered in my professional life was solvable: you decompose it, you research an algorithm (or an approximation), you implement it. Sometimes the textbook says the math is "graduate-level" but you just... read it and it's tractable. You linearize, you approximate, you use penalty or barrier methods. Not a theoretically optimal solution, but it gets the job done.

      I don't see a structural difference between "turning JSON into pretty HTML" and using OR-tools to schedule workers for a department store. Both are decomposable problems. Both are solvable. The latter just has more domain jargon.

      So I'm asking: what's the concrete example? What code would you write that's supposedly beyond this?

      I frequently see this kind of comment in AI threads: the claim that there are more sophisticated, AI-proof kinds of programming out there.

      Let me try to clarify another way. Are you claiming that, say, 50% of total economic activity is beyond AI, or is it some niche role that only contributes 3% of GDP? Because it's very different if this "difficult" work is everywhere or only in a few small locations.

  • heavyset_go 2 hours ago
    > The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.

    That's the "promise", but in practice it's exactly what you don't want to do.

    Models can't think. Logic, accuracy, truth, etc are not things models understand, nor do they understand anything. It's just a happy accident that sometimes their output makes sense to humans based on the statistical correlations derived during training.

    > The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.

    Am I the only one who is not totally impressed by the quality of code LLMs generate? I've used Claude, Copilot, Codex and local options, all with latest models, and I have not been impressed on the greenfield projects I work on.

    Yes, they're good for rote work, especially writing tests, but if you're doing something novel or off the beaten path, then just lol.

    > I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me that my reaction to these blog posts has changed so much. 3 years ago I would be bookmarking a blog post to try it out for myself that weekend. Now those 200 lines of simple code feels only one sentence prompt away and thus waste of time.

    If you don't understand these things yourself, how do you know the LLM is "correct" in what it outputs?

    I'd venture to say the feeling that models can do it better than you comes from exactly that problem: you don't know enough to have educated opinions and insights into the problem you're addressing with LLMs, and thus can't accurately judge the quality of their solutions. Not that there's anything wrong with not knowing something, and this is not meant to be a swipe at you, your skills or knowledge, nor is my intention to make assumptions about you. It's just that when I use LLMs for non-trivial tasks that I'm intimately familiar with, I am not impressed. The more that I know about a domain, the more nits I can pick with whatever LLMs spew out, but when I don't know the domain, it seems like "magic", until I do some further research and find problems.

    To address the bad feelings: I work with several AI companies, the ones that actually care about quality were very, very adamant about avoiding AI for development outside of doing augmented searches. They actively filtered out candidates that used AI for resumes and had AI slop code contributions, and do the same with their code base and development process. And it's not about worrying about their IP being siphoned off to LLM providers, but about the code quality in itself and the fact that there is deep value in the human beings working at a company understanding not only the code they write, but how the system works in the micro and macro levels. They're acutely aware of models' limitations and they don't want them touching their code capital.

    --

    I think these tools have value, I use them and reluctantly pay for them, but the idea that they're going to replace development with prompt writing is a pipe dream. You can only get so far with next-token generators.

  • madaxe_again 2 hours ago
    Nope. I have the same dividing line as I had when I was leading development teams:

    If this seems interesting to me, and I have time, I will do it.

    If it is uninteresting to me, or turns out to be uninteresting, or the schedule does not fit with mine, someone else can do it.

    Exactly the same deal with how I use AI in general, not just in coding.