
I think your "winner takes all, first mover wins" premise is wrong, even if it may be what Anthropic believes. Their mission has certainly shifted from "save the world from AI" to "push AI onto the world ASAP, because we've got an IPO coming up".

In reality, the coding market, which is really the biggest success story for frontier AI (because code is uniquely suited to LLMs and RL), is rapidly headed for commodification, if it hasn't already arrived there: each release from any of the US big 3 is heralded as the best yet, and Chinese models like DeepSeek, Kimi, Qwen, and GLM are maybe no more than 6 months behind.

As far as code quality and level of bugs go, Claude Code has certainly been hugely successful despite its flaws, for two reasons.

1) It's a revolutionary product, and people are willing to accept a high level of bugs because of that.

2) The product is an LLM, itself an inherently flawed and unreliable technology, but one that people have gotten used to. The fact that the agent/harness, as well as the LLM itself, is unreliable and regresses from release to release doesn't much change the vibe.

The quality of code produced by Claude Code, at least the way it has been used to write itself, would be a complete non-starter for any business where reliability is important. It's maybe best suited to things like consumer web apps, where the cost of a product failure or version regression is an annoyed customer rather than a lawsuit.


