I've been using AI tools to brainstorm approaches and sometimes generate code, but actually doing the typing myself. That way I'm less likely to forget the mechanics and programming language over time.
One approach is to ask it never to write the code for you, which forces it to explain; then, once you try the idea by coding it yourself, you get a better understanding of it. I use this approach with code I'm required to maintain. It still bites me sometimes, because the models still mix in a lot of incorrect information (usually stuff that was correct in the past but isn't anymore). For throwaway, easy-to-verify scripts I do ask it to generate code, but I ask it to avoid over-engineering and trying to catch every corner case, because in scripts I prefer to just let things error: a failure is better understood as a step that failed. I also avoid languages I find hard to read (like PowerShell) and prefer to generate things short enough to fit on the monitor, so I can read and understand everything (Python, bash, and batch are my go-to scripting languages).
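A minimal sketch of that fail-fast script style in Python (the function name and JSON shape are made up for illustration): no defensive try/except anywhere, so the traceback itself tells you which step failed.

```python
import json
from pathlib import Path

def load_name(path: str) -> str:
    # No error handling on purpose: read_text() raises FileNotFoundError,
    # json.loads() raises JSONDecodeError, and the lookup raises KeyError.
    # In a throwaway script, each exception pinpoints the step that failed.
    return json.loads(Path(path).read_text())["name"]
```

The design choice is exactly the one described above: instead of catching errors and printing vague messages, let the script die loudly at the failing step.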
Same. Most of what I do is ask for an implementation plan, with minimal code, or no code, or pseudocode, and then write the actual code myself. This is for open source work, where the entire point of my enjoyment is that I write the code myself. I honestly wouldn't bother being an open source maintainer if the entire thing was just prompting an LLM to write code, and then reviewing it. That doesn't sound fulfilling at all.
If this was an actual paid job, I do wonder how that would change my LLM use. The reason I'm a software developer at all is because I love the craft. The act of building, of using my brain to transform ideas into code... that's what I enjoy. If it was just prompting an LLM, would I still do that job? I don't know. I'd probably start looking into the idea of switching careers, at least.
I have asked more than a few dozen people about this, and the answer, after some probing, is that no other knowledge-based career exists that one can move to which is not exposed to AI. While many talk about moving to a labour-oriented career, no one in my immediate network or friend-of-friends network has actually done it. It's day-dreaming, in my opinion.
Same. I've also configured the system prompt to never give me a full solution or write code for me. So whenever I ask it a question, it produces a short ten-line example or even pseudocode. This is far easier for me to reason about.
I still reject more than 50% of AI suggestions, because they're mediocre (like moving code around for no reason) or just plain wrong.
Remembering and understanding aren't the same thing. Merely reciting facts doesn't automatically give you the ability to apply those facts to solve problems.
Me too, ... more or less. I'm mostly still typing, sometimes copy-and-pasting with typed changes, and rarely copy-and-pasting verbatim. With the caveat that in some cases, like prototypes, proofs of concept, and porting code between languages, many lines may be copy-and-pasted verbatim.
The way I see it: AI still makes mistakes, and I have to know how things work at some point anyway. So I'd rather spend my time actually understanding the fundamentals (in my case, CSS at the moment) than trying to keep up with the frequently changing AI tools and models.
Once the tools and models stabilize more (as well as the pricing model), there's less risk in me learning something that is no longer relevant.
Except when I choose to wait on learning how to use AI tools effectively, I get told I am going to be "left behind".
You make it seem like AI coding has already "totally changed" our jobs. This is exactly the FOMO the article talks about ("until it's too late"). It hasn't. I'm still using the same AI-free workflows, and so are most of my teammates.
Yes, but also I often argue that the Wii is better "VR" than VR, because you can play with your friends, which is probably the real "killer app" of gaming?
I agree. This is one area I'm hoping that AI tools can help with. Given a complex codebase that no one understands, the ability to have an agent review the code change is at least better than nothing at all.
I would normally agree, but I think the "code is a liability" quote assumes that humans are reading and modifying the code. If AI tools are also reading and modifying their own code, is that still true?
You have to be able to express the change you want in natural language. This is not always possible due to ambiguity.
Next to that, eventually you run into the same issue that we humans run into: you run out of context window.
But we as software engineers have learned to abstract away components, to reduce the cognitive load when writing code. E.g., when you write a file, you don't deal with syscalls anymore.
This is different with AI. It doesn't abstract things away, which means that requesting a change might lead the AI to make a LOT of changes to the same pattern, and that can cause behavior to change in ways you haven't anticipated, tested, or even seen yet.
And because it's so much code to review, it doesn't get the same scrutiny.
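The file-writing point above, sketched in Python (the path is illustrative): the high-level call hides the raw syscall sequence that the low-level version spells out, which is exactly the kind of cognitive-load reduction being described.

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "demo.txt")

# High-level: the file object abstracts away buffering and syscalls.
with open(path, "w") as f:
    f.write("hello\n")

# Low-level: roughly the syscalls the abstraction hides (open/write/close).
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello\n")
os.close(fd)
```

Both halves produce the same file; the point is that humans normally only read and maintain the two-line version.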
Switch to a rival service that doesn't have an outage. There are at least half a dozen competent hosted LLM vendors for coding now (Anthropic, OpenAI, Gemini, Mistral, Kimi, MiniMax, Qwen, ...)
There was nothing stopping everyone from using continuous delivery today, yet many companies still rely on long cycles, manual testing and handovers. The problem isn't the tooling, it's the people.
I don't understand this part either. At some point we're writing software for people to use, and there has to be someone who comes up with the requirements based on what people want. AI doesn't change this fact.
Is there an equivalent for the JS ecosystem? If not, having Dependabot update dependencies automatically after a cooldown still seems like a better alternative, since you're likely to never update dependencies at all if it isn't automatic.
RenovateBot supports a ton of languages, and in my experience works much better for the npm ecosystem than Dependabot. Especially true if you use an alternative package manager like yarn/pnpm.
Too bad Dependabot cooldowns are brain-dead. If you set a one-week cooldown and your dependency can't get its act together and cuts a release daily, Dependabot will start opening PRs for the first (oldest) release in the series after a week, even though there's nothing cool about that release cadence.
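For anyone who hasn't used it: the cooldown knob lives in `dependabot.yml`. A sketch of the week-long setup being criticized above, to the best of my recollection of the config format (ecosystem and directory are illustrative; double-check field names against GitHub's docs):

```yaml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 7
```

The cooldown applies per release, not per release series, which is exactly why a daily release cadence defeats it.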