
I really dislike this push away from augmentation and towards agents. I get that people want to be lazy and just have the LLM do all of their work, but using the AI as an augmentation means you are the driver and can prevent it from making mistakes, and you still have knowledge of the codebase. I think there is so much more we could be doing in the editor with AI, but instead every company just builds a chatbot. Sigh.

Claude Code proves you don't need quality code — you just need hundreds of billions of dollars to produce a best-in-class LLM and then use your legal team to force the extremely subsidised usage of it through your own agent harness. Or in other words, shitty software + massive moat = users.

Seriously, if Anthropic were like OpenAI and let you use their subscription plans with any agent harness, how many users would CC instantly start bleeding? They're #39 on Terminal-Bench and they get beaten by a harness that provides a single tool: tmux. You can literally get better results by giving Opus 4.6 only a tmux session and having it do everything with bash commands.

It seems premature to make sweeping claims about code quality, especially since the main reason to desire a well architected codebase is for development over the long haul.


AI comments are against the rules

They definitely used an LLM for it

> He's saying that carpenters who make nice things wouldn't use a material like that

If they do, then they aren't making truly great things. It's as simple as that really.

If you cut corners, even if you think nobody will notice, some people will, and your product will never be truly great. Steve understood that.


This but unironically.

I think it means OSS projects should start unilaterally banning submissions from people working for Anthropic.

Why? What does this have to do with the leak

Because it has a high likelihood of being written entirely by an LLM without any human thought or attention being put into it.

Being written by an LLM is a signal that the submission is low effort and therefore probably low quality, which shifts the onus onto the people reviewing and reading the submission instead of the original generator of it. Hence I would classify it as spam.

Many open source communities also have rules against LLM-generated contributions, for various moral, ethical, or legal reasons.


...Because it's a mode of using Claude Code that allows certain users to use the application in "stealth mode" to produce pull requests that seem human, but are actually AI generated, which often goes against the contribution rules of OSS projects?

At this point I would consider any employee of an AI provider to be tainted.


A little bit of imagination, and we end up with hardware-backed attestation required to contribute source code, that certifies the author isn't an AI!

None of the other agents claim that the commit was made by an AI, so why the sudden panic?

This is just cope to avoid feeling any shame for shipping slop to users.

Facebook.com is a monstrosity though, and their mobile apps are slow and often broken as well. And the younger generations are using other networks; Facebook is in trouble.

I'm pretty sure it's AI.

https://x.com/JustJake/status/2007730898192744751

I wouldn't be surprised if most of Railway's infra is running on Claude at this point.


The CEO says it's not: https://x.com/JustJake/status/2038799619640250864

A lot of people are confident enough in their ability to spot AI infra that they are willing to dismiss a firsthand source on this, and I admit I have no idea why. There isn't any upside to making this claim, and anyway, I assure you that people need no help at all from AI to make these kinds of mistakes.


Their reply doesn't make much sense; they're supposedly SOC 2 compliant. How are they compliant while letting a single engineer push out a change like that?

I'm sure Claude didn't literally ship the feature itself with no oversight, but I also find it hard to believe that their approach to adopting AI didn't factor in at all. Even just the mental overhead of moving faster and adopting AI code with less stringent review, leading to an increase in codebase complexity, could have caused it. Couple that with an AI hallucinating an answer to the engineer who shipped this change, and I'm not sure why people are so quick to discount this as a potential source of the issue. Surely none of us want our infra to become less secure and reliable, and so part of preventing that is being honest about the challenges of integrating AI into our development processes.


> I'm not sure why people are so quick to discount [AI] as a potential source of the issue.

Because (per the link above) the CEO said that (1) it was their fault, and (2) it had nothing to do with AI.

I understand that on this forum statements like this are inevitably greeted with some amount of skepticism, but right now I'm seeing no particular reason to disbelieve Jake, and the reason that "if they did use AI they'd deny it" should frankly not be considered good enough to fly around here. Like probably everyone in this comment section I'm open to evidence that they used AI to slop-incident themselves, but until we can reach that standard let's please calm down and focus on what we actually know to be true.


During this whole incident, Railway have made a wide range of misleading and outright false claims to cover themselves, so their saying it wasn't AI is pretty much meaningless.

So on the one hand you have a direct statement from the source that the cause of this incident is humans. On the other hand, while we all agree there is no specific evidence that AI caused the issue, the guy who made that statement, like, really loves AI.

In my life I have gone back and forth on the idea that 12 Angry Men is a kind of facile representation of how people think and what kinds of evidence really form the basis of a reasonable society. This comment section is doing a really good job of stretching my resolve to believe we are at least getting better.


Would you mind pointing out these claims? Happy to address them personally

Come on man, their CEO is a massive vibe coding proponent and his company spent $300,000 on Claude this month. But yeah, I'm sure Claude had nothing to do with any of it. I bet they don't use it to write any code.

https://xcancel.com/JustJake/status/2030063630709096483#m


Both things can be true: they’re doing a lot of vibe coding, and this was a human error that didn’t involve AI.

I have no skin in the game but that is a very charitable perspective.

It's fine they use AI, it's not fine they don't proofread things.
