Why do those experiences indicate the presence or non-presence of an afterlife?
This claim from Ayer -- how do we make the leap from these experiences existing to being evidence of a life after consciousness?
> On the face of it, these experiences, on the assumption that the last one was veridical, are rather strong evidence that death does not put an end to consciousness
For some it's impossible to witness death. We are sometimes forced to see someone else's lifeless husk, but witnessing our own becomes impossible, as our very own worldline/probability trees branch before we can witness what others see as their objective reality: someone else's death. And this won't make any sense to anyone who doesn't already see this occurring, so it's just like an ouroboros of a string of jokes, each punchline becoming the beginning of the next joke.
I came away with a very different conclusion, which is that the fact that such “bad” software can be so resoundingly successful for a business, yet be so odious to experienced human reviewers, means that it was the right engineering choice to go fast, rather than “do things right” by emphasizing code quality.
What good would it truly be if a 3K line function is split into 8 modules? It’ll be neater and more comprehensible to a human reader. More debuggable, definitely.
But given the business problem they have -- winner takes all of a massive market, first mover wins -- the right move is to throw the usual rulebook about quality software out the window and double down on the company's bet: that AI will make human code engineering less and less necessary, very quickly.
It turned out incredibly well despite the “bad” engineering — which in this case, I really count as good engineering.
It was "good engineering" only because this was a new kind of product and the customers were not aware yet of what they should get for the money they pay.
The bad quality of the Claude Code program has resulted in increased costs for the customers (very high memory consumption, slow execution, higher, and sometimes much higher, token counts than necessary), and even for Anthropic, but nobody was aware of this, because there was no previous experience to compare with.
This kind of sloppy vibe coding works only when there is no competition. When the competition comes with something much more efficient, e.g. pi-dev, the inefficient application will be eliminated.
Anthropic attempts to protect their badly written program by forbidding its customers to use other coding harnesses, but this will not be able to protect them from competition for long.
If you are the first on a new market without competitors, then indeed time-to-market matters more than anything else and the sloppiest vibe-coded application is the best if it can be delivered immediately.
However, one must plan to replace that with a better and more efficient application ASAP, because the advantage of being the first is only temporary.
I have already said that time-to-market is sometimes the most important thing, so it should be the priority, but the advantage of delivering the application immediately is only temporary, so you must quickly improve your first, possibly vibe-coded, implementation; otherwise better alternatives will be delivered by others.
Claude Code is an obvious example of this, because it has practically opened a new market, but because it has remained a mess, there are now better alternatives.
What is wrong is not generating instantly a proof-of-concept application that barely works and using it in the beginning, but continuing to build upon that even after you had enough time to rewrite it.
They would have to trade off building new features for refactoring. It seems they consider shipping more important, and that as long as the existing features mostly work, that’s good enough. As customers, we have to ask: do we agree? Do we want features over stability? I think the answer is yes, at least for me (and the market seems to agree). But it’s certainly a risk Anthropic is taking.
I will note that this strategy only really makes sense because Anthropic controls the compute. If open-source harnesses could also use Claude max plans, then they’d have to focus much more on stability and quality, or just build an open-source harness themselves, or probably better yet, get out of the harness-building business altogether. So they’re gambling on staying ahead of open-source models, which seems like it’s been a good bet so far, but we’ll see.
The advantage is definitely not temporary as can be seen by how many people use codex vs Claude code.
Claude Code works just fine and I have zero issues at the usability level. Could they have more features? Yes, but I see that they are shipping quickly. It is obvious to me that they struck the right balance between time to market and clean code.
These discussions are insane. No agentic coding product, including Claude Code, has existed for a full year yet, but people are stating with extreme confidence who has "won" the competition to be the leading or even only provider of this kind of product, and having heated arguments over whether the current state of the market is temporary. Imagine having this same argument about Lycos versus Yahoo! in 1995.
But it is working primarily because of the Max subscription model. If I could use my Max subscription to get $5000 worth of tokens for only $200 via OpenCode or Pi, I would drop Claude Code today. I think a lot of people (and enterprises) are of a similar opinion. Not saying Claude Code would have no users, but its dominance would be greatly diminished.
But you can’t, and the reason you care also has to do with the same production process.
It’s not like a separate company made the terminal app versus the model. If we think that the desktop app is bad, but the model is good then that’s still an endorsement of the software process.
If we think the model doesn’t matter at all, then that’s an even bigger endorsement. If the model has nothing worth talking about over the nearest competitors or an open-source alternative, then the remainder is marketing and polish.
I just don’t understand how people can look at a company that is capacity constrained in this market and think that they’re doing things poorly.
I think your "winner takes all, first mover wins" premise is wrong, even if it may be what Anthropic believe. Their mission has certainly shifted from "save the world from AI" to "push AI onto the world ASAP, because we've got an IPO coming up".
In reality the coding market, which is really the biggest success story for frontier AI (because code is uniquely suited to LLMs and RL), is rapidly headed for, if not already arrived at, commodification, with each release from any of the US big 3 heralded as the best yet, and Chinese models like DeepSeek, Kimi, Qwen, and GLM maybe no more than 6 months behind.
As far as code quality and level of bugs go, Claude Code has certainly been hugely successful despite them, for two reasons.
1) It's a revolutionary product, and people are willing to accept a high level of bugs because of that.
2) The product is an LLM, itself an inherently flawed and unreliable technology, but one that people have got used to. The fact that the agent/harness, as well as the LLM itself, is unreliable and regresses from release to release doesn't much change the vibe.
The quality of code produced by Claude Code, at least the way it has been used to write itself, would be a complete non-starter for any business where reliability is important. Maybe best suited for things like consumer web apps where the cost of product failure, or version regression, is just an annoyed customer rather than a lawsuit.
You can go just as fast if you write good code; you just have to burn more tokens to do it. The tokens you burn on strict structure and documentation you’ll save in debugging as the codebase grows. I’m at 5-30x my normal output depending on the day, with zero team, and writing better code than I ever have. But you need a robust system to manage the path, plus active supervision and management -- basically you’ll apply your senior-dev skills as if you were managing 50 frisky interns.
Unsure if this was AI-generated, but it doesn't pass close scrutiny:
"winner takes all of a massive market, first mover wins"
...this is the kind of AI spam that sounds convincing until you think about it.
It's not at all clear the foundation model or coding agent markets are winner takes all. Far more likely to be a handful of successful players based on the market so far.
First mover wins? OpenAI was first to market and looks in trouble.
There's something convincing about this kind of cliche that lets it slip past you until you start inspecting each claim.
Much simpler. Those in power, in every place and in every time, adopt self-serving beliefs that justify their place as the ones in power and flatter themselves. No different in any day or any time. Same quasi-messianic ideals as ever. Their beliefs don’t have to pay rent or correspond with reality.
I’m wary about the exuberance of AI displacing quality.
But some of the worst experiences I’ve had with coworkers were with those who made programming part of their identity. Every technical disagreement on a PR became a threat to identity or principles, and ceased being about making the right decision in that moment. Identity means: there’s us, and them, and they don’t get it.
‘Programmer’ is much better off as a description of one who does an activity. Not an identity.
In the last few decades, we went from a small handful of programming languages / libraries to a massive cambrian explosion from Github-fueled open source. Everyone's choice of Javascript framework became a dumb pissing contest and source of identity. A cudgel used at meetings to look down on some other way of doing things.
I hope AI liberates us from that dumb facade of pretend innovation. In some ways, us programmers got way too full of ourselves and filled our lives with pretend work, porting apps from one thing to the next with no actual change in end-user value.
I think AI is going to make this even worse, because now every person and their mom think that they can create a prompt carefully enough so as to create a new library with their own philosophy.
100%. A lot of these AI anxiety driven odes to the loss of craft have me wondering whether anyone cares about the value being provided to the user (or the business), which is the part that is actually your job.
Elegant, well-written and technically sound projects will continue to exist, but I’ve seen too many “well crafted” implementations of such technically vexing features as “fetching data and returning it” that were so overengineered that it should have been considered theft of company money.
> I’ve seen too many “well crafted” implementations of such technically vexing features as “fetching data and returning it” that were so overengineered that it should have been considered theft of company money.
This judgement has merit. However, over the years I have come to see that over-engineering tendency as the manifestation of an exploratory spirit in one's craft. This is how Unix got created at Bell Labs. To their managers, Ken Thompson and Dennis Ritchie worked on programs like the "ed" editor, so they did care about "value being provided to the user (or the business)". What was later officially named Unix was not pitched as an operating system, but framed mostly as just a needed way to organize the growing set of utilities, among other things (i.e. as a footnote). The over-engineered bits (and the experience gained from them) in a given project may become useful for something else. People tend to do this kind of stuff. But should they be blamed, considering the enticing promise of growth and the development of new technologies, practiced by employers themselves as part of the recruitment game?
A human problem will not be solved by the addition of AI. It's a force multiplier. If management is shit, it will be shittier. If you get arguments like this at work, now you will get more, because it's so easy to port or rewrite or even create your own framework. It's even easier to compete in the pissing match because you can just ask the AI why xyz is wrong.
I feel like it's important to align team/org goals once in a while -- there's nothing wrong with a refactor or a port, but what's the ROI? Yes, I agree this framework is better, but what do we gain, do we really need it, etc.
It can mean a category flag someone waves, an identifier we ask others to respect, a group we choose to belong to, a way of understanding what it is we like about ourselves, or something we quietly aspire to.
Or literally a part of the self, which is what the OP was getting at, I think. And there is plenty of that in the software world: "I'm a Rubyist", "I'm a Pythonista", "a Rustacean" and so on. There is plenty of identity ridiculousness. I've been a C programmer, but I've also been a BASIC programmer, an assembly language programmer, a PHP programmer, a FORTH programmer, and a whole list of others. To me that collapses to "I'm a programmer" (even if the sage advice on HN by the gurus is to never call yourself a programmer, I'm more than happy to do so). It defines what I do, not what or who I am, and it only defines a very small part of what I do. That's one reason why I can't stand the us-vs-them mentality that some programming languages seem to instill in their practitioners.
I think it’s because C++ programmers tend to define themselves by the domain they work in. Game developer, embedded systems engineer, firmware developer, etc.
I feel like it implies "it's harder to make a word out of 'C++' than it is for things that already naturally evolved as words people say like 'Ruby', 'Python', or 'Rust'".
I agree with your comment. While reading the article, I had sympathy for the author, but also unintentionally pictured them as a mix of all the "wizard" seniors I have worked with over the years. These are the type of people who, when pair programming, constantly point out what they perceive as problems with your development setup, IDE, keyboard-macro skills, lack of tiling layout, etc. Not to mention what they will suggest on your actual PRs.
At the end of the day, I like the mental model of programming, and I am somewhat uninterested in shaving every millimeter of friction off of every surface I touch on my computer. Does that make me a worse programmer? Maybe? I still delivered plenty of high quality code.
> constantly point out what they perceive as problems with...
Yeah, screw those people. I count myself as lucky that I've only worked with one person who was seriously CRITICAL of the way others worked, beyond just code quality. However, I always enjoyed a good discussion about the various differences in how people worked, as long as they could accept there's no "right" way. That's what the article brought up for me, and I wonder how much that happens these days.
One of my fondest memories was sitting around with a few other devs after work, and one had started learning Go pretty soon after its public release... and he would show us some new, cool thing he was playing around with. Of course those kind of organic things stopped with remote work, and I wonder how much THAT has played into the loss of identity?
> At the end of the day, I like the mental model of programming, and I am somewhat uninterested in shaving every millimeter of friction off of every surface I touch on my computer. Does that make me a worse programmer? Maybe? I still delivered plenty of high quality code.
In every pursuit there are secretly 3 different communities. Those who love doing the thing, those who love tinkering with the gear, those who love talking about it.
HackerNews and the internet in general are dominated by people who like to nerd out about the gear of programming (IDEs, text editors, languages, agent setups, …) and the people who like to talk about programming. The people doing most of the work are off on the side too busy doing the thing to be noticed.
I see this same thing now. In this case, it’s a more senior engineer and his manager taking credit for work a less senior engineer who’d left the team did.
There’s simply no advantage to crediting work to someone who’d left the team.
We love to blame those who are misfortunate. It’s called just world syndrome. It’s deeply uncomfortable to realize that this kind of thing is the norm, and justice is the exception. I’ve been extremely fortunate in my career, but not due to any special savviness of my own.
It’s not true that if someone else is getting credit for your work, that’s a you problem.
At my workplace now, there’s a senior staff engineer taking credit for work that was done by someone 3 levels below him. And the senior staff engineer still thinks he is not getting enough credit for his work. The senior staff engineer’s manager has been crediting him for the work the less senior engineer had done, since the less senior engineer is no longer on that team, in forums the less senior engineer has no access to.
The less senior engineer is plenty likeable. As is the senior staff engineer. But the less senior engineer had left that team, and the senior staff engineer and his manager are unscrupulous, and do what they’d like to their advantage.
As someone who isn't invested in this spat, this just looks petty for openai to put this on their website.
Just write a press release and let the tech press publish it. Don't host it yourself. The legalistic language belongs in a filing, not a user-facing blog.
That's not at all what they meant. They meant they ended up raising their own quality bar tremendously because the QA person represented a ~P5 user, not a P50 or P95 user, and had to design around misuse & sad path instead of happy path, and doing so is actually a good quality in a QA.
Frankly I think the 'latest' generation of models from a lot of providers, which switch between 'fast' and 'thinking' modes, are really just the 'latest' because they encourage users to use cheaper inference by default. In chatgpt I still trust o3 the most. It gives me fewer flat-out wrong or nonsensical responses.
I'm suspecting that once these models hit 'good enough' for ~90% of users and use cases, the providers started optimizing for cost instead of quality, but still benchmark and advertise for quality.