> Overall, LLMs aren’t yet at the point where they can replace all engineers. But I don’t doubt they will be soon enough.
All engineers? This doesn't match my hands-on experience at all.
If you give a chainsaw to everyone, it doesn't make everyone a lumberjack. And a chainsaw itself certainly isn't a lumberjack.
If you give Claude Code or the like to everyone, it doesn't make everyone a highly skilled software engineer. And Claude Code itself certainly isn't a highly skilled software engineer.
I've come around to this view. When I first began using these things for building software (the moment ChatGPT dropped), I was immediately skeptical of the claim that they are merely glorified autocomplete. They felt so different from that. These computers would do what I _meant_, not what I _said_. That was a first, and very unlike any autocomplete I'd ever seen.
Now, with experience using them to build software and a feel for how they are improving, I believe they are nothing more and nothing less than fantastically good autocomplete. So good that it was previously unimaginable outside of science fiction.
Why autocomplete and not a highly skilled software engineer? Because they have no taste. And, at best, they only sometimes pretend to know the big picture.
They do generate lots of code, of course. And you need something, or someone, you can trust to review all that code. And THAT thing needs to have good taste and a grasp of the big picture, so that it can reliably make good judgment calls.
LLMs can't. So using LLMs to write loads of code just shifts the bottleneck from writing code to reviewing code. Moreover, LLMs and their trajectory of improvement do not, to this experienced software engineer, feel like the kind of solution or the kind of improvement needed to get to an automated code-review system so reliable that a company would bet its life on it.
Are we going to need fewer software engineers per line of code written? Yes. Are lines of code going to go way up? Yes. Is the need for human code review going to go way up? Yes, until something other than mere LLM improvements arrives.
Even if you assume 100% of code is written by LLMs, not all engineers are going to be replaced by them.
tbh, it could be a breakthrough in model design, a smart optimization (like the recent DeepConf paper), a more brute-force approach (like the recent CodeMonkeys paper), or a completely new paradigm (at which point we won't even call it an LLM anymore). Either way, I believe it's hard to claim this will never happen.
It's pretty easy to understand why it will never happen: AI isn't alive. It's closer in kind to a sword or a gun than to even the simplest living thing. Nobody has any intention of changing this, because there's money to be had selling swords and guns and no money to be had selling living entities that seek self-preservation and a soul.