Hacker News
The Death of the Stubborn Developer (sourcegraph.com)
21 points by ingve on Dec 11, 2024 | hide | past | favorite | 26 comments


Amazing, a company with a vested interest in our AI future of software development is bullish on the use of AI in software development.

> Many of those graph nodes are leaf-node tasks, like “write an auth library” or “modernize these unit tests”. They tend to be fairly self-contained. We often give these leaf node tasks to junior developers because the scope is small.

You know, the companies that hand "write an auth library" to their junior developers will, I'm sure, do just fine by having an LLM do it for them. No worse than their normal output, absolutely.

I'm sure a future where this is a good idea is coming, but I've seen nothing to persuade me that this is where we are now. IMO they're glorified intellisense - useful, but if you trust them blindly they'll take you where you deserve to go.

Does anyone have any hard data (not anecdotes/opinions) to suggest I'm wrong on this?


> IMO they're glorified intellisense

To me, LLMs are actually a shortcut for what previously was:

1) query google

2) select most voted/accepted answer on StackOverflow

3) copy+paste example code in IDE

4) update some chars to match your need

5) run some tests to see if it suits your need

Saves plenty of time, but it's scary when someone running that shortcut is careless or lacks understanding.

(Edit: Fix Formatting)


You can safely ignore any blog post with 'The death of...' in its title.

As you say, companies telling us their products will dominate the future is nothing new. The next big job killer is always 5 years away, apparently.


> The next big job killer is always 5 years away, apparently

It really is, and really always has been. At least, when it comes to software.

What's always been hard is knowing which of the many predictions about the real job killer will be correct, so that you can train for it.

In 2009, I was expecting that by 2019 the world would have widespread availability of self-driving cars without steering wheels, not just some geofenced examples and a promise of more "real soon now, honest", which is still where we are in 2024.

In 2019, I wasn't expecting LLMs to be even remotely as good as they turned out to be by 2022, let alone 2024.


> CHOP isn't just the future, it's the present. And if you're not using it, you're starting to fall behind the ones who are.

Be that as it may, the reason I got into software development as a kid was because I wanted to understand how computers worked and tinker with them. I did not get into it because of increased efficiency, or for telling others how to do stuff that I could learn how to do myself. If that makes me unmarketable, then I guess I will move to something that is not so prone to automation. Because I don't want to be a glorified manager (of bots or of people). :)


Is the author suggesting that current AI tools are capable of:

> “write an auth library” or “modernize these unit tests” ?

If so, I'm genuinely interested to know what tools people are using that can do this.

Our experience is that AI will perform well if given a specific and well-bounded sub-task ("parse the JWT token and return a collection of claims" or "re-write this test using framework x" to extend the author's examples), but would still need a competent developer (junior or otherwise) to fit the pieces together and act as quality control.
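To make "specific and well-bounded" concrete, here is a minimal sketch (my own, not from the article) of the JWT sub-task: decoding a token's payload segment and returning its claims. Note that this deliberately skips signature verification, which a real auth library (e.g. PyJWT) must handle; it only illustrates the size of task current tools tend to get right.

```python
import base64
import json


def parse_jwt_claims(token: str) -> dict:
    """Return the claims from the payload segment of a JWT.

    Illustration only: does NOT verify the signature.
    """
    try:
        # A JWT is header.payload.signature, each base64url-encoded.
        payload_b64 = token.split(".")[1]
    except IndexError:
        raise ValueError("not a JWT: expected header.payload.signature")
    # Restore the padding that base64url encoding strips.
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

A task of this size is easy to specify and easy to review; it's the "fit the pieces together" part (key management, expiry, rotation) that still needs the competent developer.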


> In my blog post I called this phenomenon Chat-Oriented Programming, CHOP for short (or just chop). Chop isn’t just the future, it’s the present. And if you’re not using it, you’re starting to fall behind the ones who are.

> I’m going to stop qualifying this assertion as my opinion, because at this point it is entering into well-established fact territory.

Ok one of these guys, moving on...


Well, I wouldn't dismiss the article entirely, but I agree a proper product demo showing a real session of this CHat-Oriented Programming on a real project would go a long way toward removing the whiff of snake oil.

My experience has been meh: LLMs let me enter unknown domains and get started, but a very large part of the LLM-generated code gets dumped once the right abstractions have been clarified.


> Well I wouldn't dismiss the article fully

the article is trash, but the HN discussion is great, both trashing the low quality of the content and discussing why it’s trash


This is spot on. Fully agree with this point:

> Chat-oriented programming is hard, but many folks believe that autonomous agents are right around the corner, and that they will handle all that heavy lifting for you. You’ll just need to write short prompts and spot-check the results, with much less toil involved. [...] I think those people are smoking some serious crack. But many are well-funded, so they’re going to get a chance to give it the old college try, burning investor money like jet fuel along the way.

I think the time horizon presented by the author is about right. Like they mention, sure, we could see a magical breakthrough soon, but it's not likely. I'd argue that we won't see 100% autonomy on the code side in our lifetime (and I use an LLM every day) based on current evidence. The models are good, but they're several iterations away from even being close to autopilot.

In the event that we do get 100% autonomy, a whole lot more hell is going to break loose than what's happening with CHOP (as the author refers to it).

I think the biggest nightmare that lies ahead is the point in the metaphorical chart where new/junior programmers "learn" by getting some result that passes the sniff test and assume it's correct (i.e., they don't learn the gritty details you get from experience, just surface level knowledge).

Eventually, we'll hit a point where the majority of programmers learn this way, which means, first, that technological progress will come to a halt, and second, that when serious bugs inevitably creep in, there will be a relatively small, if not non-existent, pool of people who can fix them (and who will charge handsomely to do so).

Sadly, I think the only real solution is to just play it by ear. But the advice in the article is sound: if you're not learning/using this stuff now, you're falling behind. If you're a junior trying to level up, don't rely on your job or co-workers to teach you. Start building stuff (in whatever area you're trying to work in) in your spare time and consider it an investment in your future.


Ironically, the one thing I think LLMs can usefully replace is authors of nonsense like this.


I’m not convinced it hasn’t already. The writing style is bizarre, rambling, light on detail and heavy on irreverent digressions.


As a relatively new senior developer who has witnessed the emergence of Copilot while progressing from Junior to mid senior and above, I completely disagree with the premise.

Junior developers have huge gaps in their knowledge, which they fill naturally by working on different parts of projects and getting their hands dirty. What's actually happening now in my company is that juniors who have been primarily using LLMs to code have a much worse grasp of the codebase, the tech stack and software engineering in general than those who don't, even if they deliver faster. Also, in a complex codebase, the number of mistakes they produce doesn't go down with time, because they're not actually learning from their previous mistakes and applying the learnings.

As a senior developer who used to use Copilot heavily, I have to admit it's a crazy performance boost when you're adding critical small changes here and there, across different codebases with different stacks, when you know what you want to do. However, I recently decided to stop my Copilot subscription and go back to non-assisted coding, for multiple reasons:

1. Coding is fun.

2. At a senior level, mistakes are more problematic.

3. Writing the code yourself results in much higher quality code, for me, than reviewing the LLM's code.

In short, I disagree with the article


Agreed that LLMs can make the role of junior developers become one of filtering out bad LLM output, and tuning a prompt until it outputs the right thing.

I think the concern that we won't have senior developers later if there are no junior developers now, is pretty valid. The question is if LLMs will keep up pace to replace senior developers when the current ones retire. Since no one can predict the future, it's a pretty risky gamble at the scale of the entire industry. (OTOH if this new tool sparks more interest in the occupation, then maybe the issue solves itself by sheer numbers.)


When junior developers using LLMs become seniors, we will have a different kind of senior developer. That won't be the first time this has happened, actually. The shift to the Java generation, and now to the JS/Python generation, was very similar.


I find the comparison to a "3D printed car" to be excellent!

What you end up with is a... car-shaped object!

There is a reason why even in Soviet Germany the car[1] still had a metal frame... If only all of it could have been made out of used pants[2]!

[1]: https://en.wikipedia.org/wiki/Trabant

[2]: https://en.wikipedia.org/wiki/Duroplast


I still think @mpweiher's original response to the release of GitHub Copilot is spot on: https://blog.metaobject.com/2021/06/don-generate-glueextermi...

That is, rather than use an LLM to generate boilerplate code from an ambiguous natural-language prompt, use a high-level programming language to precisely, concisely, and directly express what you want.
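As a toy illustration of that point (mine, not from the linked post): the first function is the kind of loop boilerplate an assistant might generate, the second expresses the same intent directly in the language.

```python
def sum_even_squares_boilerplate(numbers):
    # Generated-boilerplate style: explicit accumulator and branching.
    result = 0
    for n in numbers:
        if n % 2 == 0:
            result += n * n
    return result


def sum_even_squares(numbers):
    # High-level style: the intent stated directly as an expression.
    return sum(n * n for n in numbers if n % 2 == 0)
```

The argument is that the second form is already about as short as the prompt you'd write, without the ambiguity of natural language.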


Honestly, the only developers I think are any good at their job are the stubborn ones. They are the ones who spend the extra time to learn something new. They also do things right, because they take the time to solve the problem, not just patch it up.


    You are getting left behind if you do not adopt
    chat-based programming as your primary modality.
Holy overblown marketing, Batman.


There's going to be a demand for Artisanal/Organic/Hallucination-free programmers in the future, much like there's a demand for COBOL programmers now, but bigger.


I think he's right, and I hate it.

I'm also depressed by his endgame, which he glosses over - LLMs prevent the gradual buildup of expertise through hard, repeated, work.

All our handwriting (and hell, even our grammar) is basically trash, because who can be arsed putting in the time to write beautifully with a pen when you can just fling out an email?

LLMs will cause an analogous decline in code and coders - every codebase a cesspit of messy mediocrity in outputs, authors, and professional culture.

Fuck the future, man.


> LLMs will cause an analogous decline in code

Which will lead to worse code being published at a way faster rate than good code used to be published. Which will be used to train future versions of the LLMs, which will amplify the decline in code.

> Fuck the future

As the article points out, this is not the future, it's the present.

Fuck the present.


> LLMs will cause an analogous decline in code and coders - every codebase a cesspit of messy mediocrity in outputs, authors, and professional culture.

and we will be paid (a lot) to make them barely work. Most car owners can't fix their cars, and the ones that destroy theirs are the ones that pay the biggest bills


> We’ve managed to narrow it down to a single principle: You are getting left behind if you do not adopt chat-based programming as your primary modality.

Cool, so I guess I'm going to start planning to leave the tech industry if CHOP becomes a thing, because that isn't programming, and even if it is, the person entering the prompts is just a placeholder.

This is this generation's "boomer" problem, but a million times worse. (Highly paid engineers with money in the bank bolstered an industry that simultaneously lets them work less while destroying upward mobility for the next generation after them.)


Really, really good point!


that’s one of the dumbest things i’ve ever read in HN

