However, recently I purchased the Max plan from Anthropic and have been vibing with Claude Code since then. And wow, the results are very good. With a good enough prompt and a planning step, it can generate full features in a project with 20k LOC, with very few modifications needed from me after review.
I've heard even more success stories from friends who gave Claude 3-4 different features to develop in parallel.
On top of that, everyone seems to produce side projects at an astronomical rate, both among my friends and here on HN, where fully complete projects that would take months to develop seem to appear after a few hours with Claude Code.
So, my question is: is programming as a profession cooked? Are most of us going to be replaced with a “supervisor” who runs coding agents all day?
Most of the projects I’ve worked on in my career have been tens to hundreds of millions of lines of code!
I really do see the same thing at our current level of AI. People are whipping up basic apps that work for small problems with small solutions. And it works. But without professionals intervening and correcting all the small problems along the way, it doesn't scale. Professional software engineers still need to exist to make sure the solutions being created are scalable.
Will we spend as much time typing out specific lines of code? Probably not. But will the jobs still be there? Absolutely. Perhaps even with more variety because we can focus more on the actual problems being solved. We will do more take-over work of apps that people got started but cannot finish. We'll refactor apps that got coded into corners, and spend more time talking directly to customers to understand what we are really trying to accomplish. It will be different work, but it will be there.
Reality is that FSD (full self-driving) was/is "a few decades away".
Same for programming. We can take our hands off the steering wheel for longer stretches of time, this is true, but if you have production apps with real users who spend real money, then falling asleep at the wheel is far too risky.
Programmers will become the guardians and sentinels of the codebase, and their programming knowledge and debugging skills will still be necessary when the AI corners itself into thorny situations, or is unable to properly test the product.
The profession is changing, no doubt about it. But its obsolescence is probably decades away.
You talk about programmers becoming guardians, but I see two issues with this: (1) you don't need ten guardians, you need 1-2 who know your codebase; and (2) a "guardian" is someone who was a junior and became a senior. If juniors are no longer needed, in X years there will be no guardians to replace the existing ones.
[1] Even if your domain is not traditionally considered heavily regulated (military, banking,...) there is a surprising amount of "soft law" and "hard law" in everything from privacy to accounting and much more.
As engineers, we can be the supervisor: doing code review, managing things at a higher level. Instead of choosing which libraries will do the work for us, we choose which LLM writes that code, we make sure the tests are all good, and we insist it fixes the failures it glosses over.
As coders… well, right now it's only "mostly" taking over that, because there are still cases where the AI has no idea what it's doing, where it can* get the syntax right but the result is still useless. One example I've been trying recently is having an LLM do music generation**, both with "give me a python script to make a midi file" and with Strudel (https://strudel.cc), and at this task it sucks much, much worse than GPT-2 did with dungeon text adventures.
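To make that concrete, the MIDI prompt is asking for something roughly like the sketch below. This is my own minimal illustration using the mido library, with a placeholder C-major arpeggio; none of it is from the LLM's output:

    # Minimal sketch of the kind of script being requested, using mido
    # (pip install mido). The notes are a placeholder C-major arpeggio;
    # the LLM's actual job is to pick musically interesting ones.
    from mido import Message, MidiFile, MidiTrack

    mid = MidiFile()
    track = MidiTrack()
    mid.tracks.append(track)

    # 'time' is the delta in ticks since the previous event;
    # 480 ticks is one beat at mido's default ticks_per_beat.
    for note in (60, 64, 67, 72):  # C4, E4, G4, C5
        track.append(Message('note_on', note=note, velocity=64, time=0))
        track.append(Message('note_off', note=note, velocity=64, time=480))

    mid.save('out.mid')

Getting something syntactically valid like this out of the model is easy; getting note choices that actually sound like music is where it falls over.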
I'm always on the lookout for the failure modes, because those failure modes are going to be my employment opportunities going forwards.
Right now, if you're a coder who knows something else besides just coding, I think you can do useful work with the intersection that a lot of other coders without that side-interest would fail at even with LLM assistance. On the other hand, if your only side-interests are other forms of code, e.g. you want to make a game engine and a DSL for that game engine, but you're not fussed about writing any games with either, then you probably won't do well.
* "can", not "will always" like we're used to in other domains.
** "why not use Suno?" I imagine you asking. Where Suno works well I like it, but it also has limits. Ask it for something outside its training domain… I've tried getting a 90 second sequence of animal noises with no instruments, it made something 140 seconds long consisting of only instruments and no animal noises.
Exactly! I don't have a lot of experience with coding via LLMs, but lately I've been dabbling with that outside of my job precisely to find these failure modes... and they actually exist :)
Programming as in “software engineering” - no. Because it isn’t about choosing the next most probable word or pixel. At all.
There are two separate things being discussed here. One is the practical and societal consequences, iteratively, over the next few decades. Fine, this is an important discussion. If this is what you're discussing, I have no objection: automation taking a significant portion of jobs, including software engineering, is a huge worry.
The other is this almost schadenfreude about intelligence. The argument goes something like: if AGI is a superset of all our intellectual, physical, and mental capabilities, what is the point of humans? Not from an economic perspective, but literally, from a "why do humans exist" perspective? It would be "rational" to defer all of your thinking to a hyperintelligent AGI. Obviously.
The latter sentiment I see a decent bit on Hacker News. You see it encoded in psychoanalytic comments like, "Humans have had the special privilege of being intelligent for so long that they can't fathom that something else is more intelligent than them."
For me, the only actionable conclusion I can see from a philosophy like this is to Lie Down and Rot. You are not allowed to use your thinking, because a rational superagent has simply thought about it more objectively and harder than you.
I don't know. That kind of thinking, whether I ran into it intuitively in my teens or later when learning about government and ethics (Rational Utopianism, etc.), has always ticked me off. Incidentally, I've disliked every single person who thought that way unequivocally.
Of course, if you phrase it like this, you'll get called irrational and quickly get compared to not so nice things. I don't care. Compare me all you want to unsavory figures, this kind of psychoanalytic gaslighting statement is never conducive to "good human living".
I don't care if the rebuttal analogy is "well, you're a toddler throwing a tantrum, while the AGI simply moves on". You can't let ideologies like that second one get to you.
There's always a market for things that people are too lazy to do on their own.