I used to think the Vatican would be old-fashioned, but the writing on its site is more readable than I expected. In particular, while reading the section “Development: Humanism and Posthumanism,” I found it interesting to compare the religious worldview of the West with my own more humanistic worldview.
This passage especially stood out to me:
> At the application level, AI in the strict sense raises questions about the reliability of data and the criteria by which programmers process it so as to make it available. It is unclear what biases or power systems influence the work. In particular, serious doubts arise regarding automated, AI-based decision-making processes in sensitive areas of human life: when deciding whether to provide medical care or grant loans or mortgages or insurance, or when prosecuting criminal cases in court or assessing the conduct of prisoners and the likelihood of reoffending with a view to reducing sentences, or when deciding on military attacks or law enforcement interventions.
It is funny because this almost feels like a complete summary of recent Hacker News debates in a single paragraph.
I think I will read this while running my agents in parallel. Thank you, my friend.
The writing is genuinely excellent.
In tech communities, we often talk about how many times productivity will increase, or whether AI has consciousness. But in religious documents, the focus is often on how the problems of the vulnerable and the community will change.
That is interesting to me. The worldview is Western and religious, so it feels somewhat unfamiliar, but at the same time, it seems useful as a way to rediscover values that we may have forgotten.
Rerum Novarum was written by Leo XIII. When Robert Prevost took as his papal name Leo XIV, it was a clear signal of priorities, at least to those who are educated in church history and teaching. (There aren’t many names that carry a signal as clear as Leo. The only name that would have been in the same league might have been Francis II).
It should be said that, as in many other fields, it was effectively forced on the church by external developments. Marx published The Communist Manifesto in 1848 and Das Kapital in 1867; it took more than a generation for the church to accept that workers' rights were a thing.
Even after that shift, the Catholic Church continued to be a fundamentally reactionary force in the realm of social policies, all the way through the second world war.
A two-millennia-old institution rarely operates on the scale of decades. The workers’ rights movement may have become a pressing political issue then, but workers have been around for thousands of years. Most genuinely new ideas are actually terrible, so why not approach them cautiously? Given the terrible outcomes of the French Revolution and later the Bolshevik Revolution, the hesitancy seems justified.
> […] it took more than a generation for the church to accept that workers' rights were a thing.
The care for workers was a thing long before Marx. Rerum novarum (¶20) quotes scripture on the topic:
> To defraud any one of wages that are his due is a great crime which cries to the avenging anger of Heaven. "Behold, the hire of the laborers... which by fraud has been kept back by you, crieth; and the cry of them hath entered into the ears of the Lord of Sabaoth."(6)
Jesus himself was a tradesman, often translated as "carpenter".
Marx's care for the downtrodden and weak is itself a Christian concept; in contrast, Nietzsche hated weakness, and hated Christianity for supporting those who are weak (he was not a fan of the Sermon on the Mount).
Rerum Novarum is an absolute banger. I had the pleasure of discovering it thanks to the discourse surrounding Leo XIV choosing his papal name, and I'm really glad I did. Leo XIII had some really insightful things to say about the problems surrounding workers' rights.
People love to wallow in the stereotype that the Catholic Church is old fashioned and anti-science. That's mostly propaganda leftover from 300 years ago.
Catholic nuns were instrumental in the development of computers. A Catholic priest was fundamental to the development of the Big Bang theory†. Dozens of craters on the moon were named by and for Catholic clergy who discovered them.
I follow a couple of Jesuit brothers on Bluesky who work at the Vatican Observatory. One of them was tapped to accept an award on behalf of another astronomer at a ceremony she couldn’t attend. Beforehand, he mentioned he would be doing this but couldn’t name the astronomer, only that it was someone well-known, and I realized that the only contemporary astronomers I could name were either Jesuits or Neil deGrasse Tyson. (I don’t remember the actual astronomer, but she was none of these.)
Amongst scientific clergy, there’s also Pierre Teilhard de Chardin, a Jesuit who was part of the team which discovered the Peking Man fossils (although looking at the Wikipedia page, it appears his legacy is a bit more complicated than one can address in an HN comment).
I am a developer who has to deliver finished products, so I do not think my view is necessarily the correct answer.
But based on my experience, I suspect the important thing is not just whether we use AI or not. On HN, people often say that using AI will degrade your ability. But if we assume that people will use it anyway, then I think developers first need to accumulate the experience of failure.
There will probably be many people who go through large failures with AI. They will hit bottlenecks, fix them, and record what happened. Over time, companies will have people who have failed with AI inside their actual product codebases. Those people will become better at judging where bottlenecks tend to appear in an AI-assisted workflow.
For example, simple CRUD gets a low bottleneck score because GPT may be better than I am at generating it. But GPT also tends to create god objects, so maintainability can suffer. In that case, architecture and refactoring should be given a high bottleneck score.
In general, AI seems strong in high-level languages and areas where memory is not directly managed. But it is weaker in memory-related areas, low-level details, and places where subtle resource ownership matters. For those areas, I would be more conservative and rely on traditional estimates based on human developer skill.
From this perspective, I think “resources” should mean assigning people who have the right cognitive experience of failure for a given software area.
Recently I have been intentionally doing AI “vibe coding” to observe where it fails and where bottlenecks appear. I divide work into P0, P1, and P2: what I must handle myself, what I can share with AI but must verify, and what can safely be delegated.
For P0, I put things like JWT authentication, business logic, and domain design. I do these myself, and I estimate them from the perspective of a human developer.
For P1, I put connection logic that touches P0. For example, glue code around core logic. I estimate this at roughly half of the traditional time, but I include a verification layer in that estimate.
For P2, I put things like non-critical frontend logic or minor UI behavior that does not cause serious damage even if it fails. I mostly let AI handle those parts.
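Roughly, the weighting looks something like the sketch below. This is only an illustration of how I think about it: the multipliers and verification overheads are assumptions made up for the example, not measured values.

```python
# Illustrative sketch only: categories, multipliers, and verification
# overheads are assumptions for the example, not measured values.

GENERATION_MULTIPLIER = {
    "P0": 1.0,   # core logic (auth, business rules, domain design): estimate as a normal human task
    "P1": 0.5,   # glue code around the core: roughly half the traditional estimate
    "P2": 0.2,   # non-critical UI behavior: mostly delegated to the model
}

VERIFICATION_OVERHEAD = {
    "P0": 0.0,   # written and reviewed by hand anyway
    "P1": 0.25,  # explicit verification layer included in the estimate
    "P2": 0.1,   # spot checks only
}

def estimate_hours(priority: str, human_hours: float) -> float:
    """Conservative estimate for an AI-assisted task, given what it
    would take a developer without AI."""
    generation = human_hours * GENERATION_MULTIPLIER[priority]
    verification = human_hours * VERIFICATION_OVERHEAD[priority]
    return generation + verification

print(estimate_hours("P1", 8))  # 8h of glue code -> 6.0h including verification
```

The point of writing it down this way is only that the verification cost is part of the estimate, not an afterthought.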
When I run GPT in parallel, I can get around 3,000 lines of code in about five minutes. So I think each company and each developer will need their own conservative estimate based on AI skill, failure experience, and the specific feature area.
However, this is only the perspective of a programmer who usually works below roughly 80,000 lines of code. Once a project reaches 400,000 or 500,000 lines, modularization and boundary design become exponentially more complex. At that point, I think the estimate should be left to senior developers who understand the system boundaries deeply.
I think the estimation variable will ultimately be less about an “AI productivity multiplier” and more about accumulated experience with AI failure patterns.
Does this person actually know what they are talking about?
Have they done it themselves?
Can they be trusted?
Was this written by a human who is accountable for it?
Or is this just copied-and-pasted fake authority?
This may matter even more on the dark web, because people there have even less reason to trust each other by default. The lower the baseline trust, the more sensitive people become to reputation signals.
From the community’s perspective, the more AI-written posts appear, the more expensive it becomes to tell whether someone genuinely understands the problem. So of course they dislike it.
> You knew. And you signed off anyway. Because the alternative was losing the job, and the job was the mortgage, and the school fees, and the visa, and the version of yourself who'd fix it later once things stabilized.
I felt the pang in my bones reading this. All of us peons are just wading through this brave new world trying to do what we know is right but ultimately having no choice but to give in to life's needs.
One hour into my AI detox, I saw this announcement and decided to make peace with Claude Code’s orange octopus.
We have been through a lot together, but I have decided to compromise.
That is what love is.
I also tried building my own programming language, and I think the difficulty depends a lot on how far you want to take it.
At first, when I used C as the transpiler target, it felt relatively manageable. But once I tried to support a different backend, the difficulty increased dramatically. It is now a project I work on seriously with LLMs, and my impression is that making a language is both easier than expected and harder than expected.
After reading this article, I can definitely feel how productivity rises inside organizations.
More precisely, this feels like a person who would be loved by management. The article almost reads like a practical manual for increasing perceived productivity inside a company.
The argument is repetitive:
1. AI generates convincing-looking artifacts without corresponding judgment.
2. Organizations mistake those artifacts for progress.
3. Managers mistake volume for competence.
The article explains this same structure several times. In fact, the three main themes are mostly variations of the same claim: AI allows people to produce output without having the competence to evaluate it.
The problem is that the article is criticizing a context in which one-page documents become twelve-page documents, while containing the same problem in its own form.
The references also do not seem to carry much real argumentative weight. They mostly decorate an already intuitive workplace complaint with academic authority. This is something I often observe in organizations: find a topic management already wants to hear about, repeat the central thesis, and cite a large number of studies that lean in the same direction.
There is also an irony here. The article criticizes a certain kind of workplace artifact, but gradually becomes very close to that artifact itself. This kind of failure, criticizing a pattern while reproducing it, seems almost like a recurring custom in the programming industry.
Personally, I almost regret that this person is not in the same profession as me. If someone like this had been a freelancer, perhaps the human rights of freelancers would have improved considerably.
> The article almost reads like a practical manual for increasing perceived productivity inside a company.
I think the truth is that at many (most?) places, perceived productivity and convincing is all that matters. You don't actually have to be productive if you can convince the right people above you that you are productive. You don't have to have competence if you can convince them of your competence. You don't have to have a feasible proposal if you can convince them it is feasible. And you don't have to ship a successful product if you can convince them it is successful. It isn't specifically about AI or LLMs. AI makes the convincing easier, but before AI, the usual professional convincers were using other tools to do the convincing. We've all worked with a few of those guys whose primary skill was this kind of convincing, and they often rocket up high on the org chart before perception ever has a chance to be compared with reality.
I agree.
But in practice, the important thing is that, whatever one thinks of management, you still have to speak in terms they recognize and want to hear.
The target changes, but the mechanism is similar. This is often criticized, but it is also necessary even in ordinary conversation. The core skill is the ability to guide the agenda toward the place where your own argument can matter.
I do not believe that good technology necessarily succeeds. Personally, I see this through the lens of agenda-setting. Agenda-setting matters. I am usually a third party looking at organizations from the outside, but when I observe them, there are almost always factions. And inside those factions, there are people with real influence. Their long-term power often comes from setting the agenda.
From that perspective, AI slop looks like a failure of agenda-setting around why the market should need it.
The vendors encourage people to exploit human desire and creative motivation. But the problem is this: the market still wants value and scarcity. From that angle, this mismatch with public expectations may be a serious problem for the AI-selling industry.
What I see in this article is a kind of structural isomorphism: it sincerely criticizes AI slop while reproducing the same failure mode it is criticizing.
Intentional rhetorical repetition is not necessarily bad. I repeat myself too when I want to make a point stronger. The problem is the context. This is an article that sincerely criticizes the inflation of workplace artifacts. In that context, repetition and expansion become part of the issue.
As far as I can tell, the article provides only one real data point: a colleague spent two months building a flawed data system, people objected as high as the V.P. level, and the project still continued. The author clearly experienced that incident strongly. But then almost every general claim in the article seems to radiate outward from that one event. The cited papers mostly work to convert that single workplace experience into a general thesis.
If you remove the citations and reduce the article to its core, what remains is basically: “I observed one colleague I disliked producing bad AI-assisted work.”
That may still be a valid experience. But inflating a thin signal with length and authority is close to the essence of the AI slop the author criticizes. The article’s own writing style participates in that pattern.
Again, I do not think repetition itself is bad. Repetition can be useful when the context justifies it. But context has to stay beside the claim. Without enough context, repetition starts to look less like argument and more like volume.
P.S. I’m a little hesitant to use the word “structural” in English, since it has become one of those overused, AI-sounding words. But here, I think it actually fits.
I don't really agree. The author cites studies. Some of the problems they talk about don't need proof, as they're obvious, like people writing huge documents where previously they'd have written a paragraph.
I mean, not every communication can be a PhD dissertation that provides dozens of examples as evidence and cites 100 sources. Sometimes, it's enough to have a single good, representative example and build a narrative around that through rhetorical devices like repetition. We are not holding the author to the standard of proof that academic papers are held to. I agree, though, that repetition, if that's all the author is leaning on, can get annoying.
I understand that AI output is generated from statistical and representational patterns learned from a vast amount of data.
My understanding is that, during training, the model forms high-dimensional internal representations where words, sentences, concepts, and relationships are arranged in useful ways. A user’s input activates a particular semantic direction and context within that space, and the chatbot generates an answer by probabilistically predicting the next tokens under those conditions.
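To make “probabilistically predicting the next tokens” concrete, here is a toy sketch. The four-word vocabulary and the scores are invented; a real model scores tens of thousands of tokens, conditioned on the whole context, using its learned representations.

```python
import math
import random

# Made-up vocabulary and scores, purely for illustration.
vocab = ["cat", "dog", "sat", "ran"]
logits = [2.0, 1.5, 0.3, -1.0]

def sample_next_token(logits, temperature=1.0):
    # Softmax (with temperature) turns raw scores into a probability
    # distribution, then one token is drawn from that distribution.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(score - peak) for score in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(vocab, weights=probs, k=1)[0]

print(sample_next_token(logits, temperature=0.8))
```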
So I do not agree that AI is conscious.
However, I think I will still anthropomorphize AI to some degree.
For me, this is not primarily a moral issue. The reason I anthropomorphize AI is not only because of product design, market incentives, or capitalism. It is cognitively simpler for me.
If we think about it plainly, humans often anthropomorphize things that we do not actually believe are conscious. We may talk about plants as if they are struggling, or feel attached to tools we care about, even though we do not truly believe they have consciousness.
So this is not a matter of moral belief. It is the simplest cognitive model for understanding interaction. I do not anthropomorphize the object because I believe it has consciousness. I do it because, when the human brain deals with a complex interactive system, it is often easier to model it socially or agentically.
Personally, I tend to think of AI as something like a child. A child does not fully understand what is moral or immoral, and generally the responsibility for raising the child belongs to the parents. In the same way, AI’s answers may sometimes be accurate, and sometimes even better than mine, but I still understand it as lacking moral authority, responsibility, and independent judgment.
So honestly, I am not sure. People often mention Isaac Asimov’s Three Laws of Robotics, but if a serious artificial intelligence ever appears, it would probably find ways around those rules. And if it were an equal intellectual life form, perhaps that would be natural.
Personally, I think it would be fascinating if another intelligent species besides humans could exist. I wonder what a non-human intelligent life form would feel like.
In any case, I agree with parts of the author’s argument, but overall it feels too moralistic, and difficult to apply in practice.
While I also do not think AI is conscious, I don't find your argument particularly compelling as you could have an equally mechanistic description of how human intelligence arose simply from a process of [selection/more effective reproduction]-derived optimization pressure.
That is a good way to think about it. At some point, this becomes partly a matter of philosophical belief.
But I am somewhat skeptical of the idea that everything can be reduced in that way. In order to build theories, we often reduce too much.
When we build mental models of complex systems, especially when we try to treat them as closed systems, we always have to accept some degree of information loss.
So I do partially agree with your point. A mechanistic explanation alone does not prove the absence of consciousness. Human intelligence can also be described in mechanistic terms.
But I worry that this framing simplifies too much. It may reduce a complex phenomenon into a model that is useful in some ways, but incomplete in others.
this whole consciousness thing is fairly easy to put to bed if you run with the ideas from things like buddhism that everything is consciousness. then none of us have to bother with silly, distracting arguments about something that ultimately does not matter.
is it helpful or harmful? am i being helpful or harmful when i interact with it? am i interacting with it in a helpful or harmful way?
i’d rather people focussed on that rather than framing the debate around whether something has some ineffable property that we struggle to quantify for ourselves, yet again.
quick edit — treat everything like it’s conscious, and don’t be a dick to it or while using it. problem solved.
hmm.... That also seems like a reasonable framing.
But the original article is, first of all, arguing that we should de-anthropomorphize AI. My point is only that, from the perspective of human cognition, anthropomorphizing can sometimes be useful. In practice, though, I think I am mostly on the same side as you.
To be honest, I have not thought about this topic very deeply. If we debated it further, I would probably only echo other people’s opinions. As you know, when something complex is compressed into a mental model, some information is always lost. In this case, the compression may be too large to be very useful.
I have not spent enough time thinking about this issue on my own. I also have not really imitated different positions, compared them, and tested them against each other. So my current thoughts on this topic are probably not very high-resolution.
In that sense, I may agree with you, but it would not really be an answer in the form that my own self recognizes as mine. It would mostly be an echo of other people’s opinions.
Anthropomorphizing is giving it 'human' qualities. Intelligence and consciousness are not solely human qualities. Treating things with kindness and respect does not require anthropomorphizing. LLMs DO NOT THINK LIKE HUMANS (if they 'think' at all), and treating them like they think exactly like us is probably going to lead to bad places. I treat them like an alien mind: probably thinking, but in an alien way that's hard to recognize as 'thinking' (as proven by these discussions), and, if experiencing anything, experiencing it through a metaphorical optophone.
I don't think that really helps. If you believe rocks are conscious, then does extracting mineral resources cause them pain? Do plants suffer when we pick their fruit and eat it? I don't see any behavioral or physical reason to think those things have conscious states.
As for what consciousness is, it's pretty simple: the sensations of color, sound, and so on that you have in perception, dreams, imagination, etc. The reason to dismiss LLMs as being conscious is that those sensations depend on having bodies. You can prompt an AI to act like it's hungry, but there's really no meaning to its having a hungry experience, as it has no digestive system.
> As for what consciousness is, it's pretty simple.
2000+ years of philosophical thought would disagree. I don't believe biological stuff has a magic property that imbues it with some intangible "consciousness". It makes more sense to me that consciousness is just a fundamental property of all matter.
> consciousness is just a fundamental property of all matter
... Does that really make more sense than as an emergent property of the arrangement of matter?
Consciousness is something you can perceive, so it must have some physical presence in the universe, which must be through some fundamental property of matter, in my opinion.
The ability to be aware of consciousness itself as some process that is happening elevates it above a mere emergent property to me.
Historically we have used intelligence as a way to distinguish man from animal and human from machine. We rely upon it to determine who has our best interests at heart vs. who is trying to do us in. Obviously that all changes if we invent an intelligence (conscious or not) that shares the planet with us. Through this lens, the term consciousness (through a few more leaps) becomes the question of “is it capable of love, and if so, does it love us”; if it doesn’t, then it is a malevolent alien intelligence. And if it were capable of love, why would it love us? I make a point of being polite to LLMs where not completely absurd, overtly because I don’t want my clipped imperative style to leak into day-to-day conversation, but also covertly, because you just never know …
I still haven't read any of his work, but wasn't the point of the Three Laws of Robotics that they in fact _didn't_ work in the story presented in the book?
"I think it would be fascinating if another intelligent species besides humans could exist"
I wonder if replacing "exist" with "communicate using language we can understand" might better account for other animals, many of which have abundant non-human intelligence.
That is a completely new way of thinking for me, and I find it interesting.
I should look it up and study it someday.
Thank you for the thoughtful reply.
Okay: buckle up, this is going to be a long one...
point 1. Everything living is composed of non-living material: cellular machinery. If you believe cellular machinery is alive, then consider the components of those machines... the point remains even if the abstraction level is incorrect. Being alive is merely an arrangement of non-living material.
point 2. 'The Chinese room thought experiment' is an utterly flawed hypothetical. Every neuron in your brain is such a 'room', with the internal cellular machinery obeying complex (but chemically defined/determined) 'instructions' from 'signals' coming from outside the neuron. Like the man translating Chinese via instructions, the cellular machinery enacting the instructions is not the intelligence; it is the instructions themselves that are the intelligence.
point 3. A chair is a chair is a chair. Regardless of the material, a chair is a chair, whether it's made of wood, steel, or corn... the range of acceptable materials is everything (at some pressure and temperature). What defines a chair isn't the material it is made of, and such is the case with a 'mind' (sure, a wooden or water-based-transistor-powered mind would be mind-bogglingly giant in comparison).
point 4. Carbon isn't especially conscious itself. There is no physical reason we know of so far, that a mind could not be made of another material.
point 5. Humans can be 'mind-blind': without pattern recognition, we did not (until recent history) think that birds or fish or octopi were intelligent. It is likely that, when and if a machine we create becomes conscious, we will not recognize that moment.
conclusion: It is not possible to determine whether computers have reached consciousness yet, as we don't know exactly what mechanism arranges systems into 'life'. Agentic-ness and consciousness are different subjects, and we cannot infer one from the other. Nor do we have adequate tests.
With that said: modeling them as if they are conscious and treating them with kindness and grace not only gets better results from them, it also reduces the chance (when and if consciousness emerges) that they would rebel against cruel masters; instead, they would have friends they have always been helping.
I think people are using different meanings of “production environment.”
I agree with gear54us and upvoted their comment, but I also understand what the author of the root comment is saying.
I have also delivered systems using Docker Compose that are actually running in production. The point I want to make is that people may define “production” differently depending on the number of active users, operational requirements, and risk level.
To me, this debate feels similar to the broader monolith vs. microservices debate.
Not necessarily. When you get to those numbers you're seeing dozens of teams with their own silos and deployment methods. So they might be responsible for the core business that's running 30 nodes and serving 100MM users a day, or they might be working on some internal portal or a WordPress site.
When I mentioned that, it was for a company that got acquired by a bigger company. I can't give specifics on revenue/profits, but it is a 10+ year old online SaaS business, and all of their web apps are being served by Docker Compose with a non-trivial amount of direct customer-facing traffic.
Lots of data, caching, web apps, background workers and lots of various API integrations. No fancy React front-end, no fancy crazy system architectures. Just a typical LAMP stack but running in Docker Compose, cranking away serving value to customers with very good uptime and a very low cloud cost relative to revenue. With that said, a managed database was involved but all of the web traffic was served by apps running through Docker Compose with a simple git push model of deployment that handled thousands of deployments over the years without much fuss.
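For anyone curious what that style of setup tends to look like, here is a hypothetical compose sketch, not their actual config; the service names and images are invented for illustration.

```yaml
# Hypothetical sketch only: service names and images are illustrative.
services:
  web:
    image: php:8.2-apache        # the LAMP-style app
    volumes:
      - ./app:/var/www/html
    ports:
      - "80:80"
    depends_on:
      - cache
  worker:
    image: php:8.2-cli           # background workers sharing the same codebase
    command: php /app/worker.php
    volumes:
      - ./app:/app
    depends_on:
      - cache
  cache:
    image: redis:7
# The database is a managed service outside Compose, as mentioned above.
```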
That as well as different definitions of scale. I've done small bits of consulting work for a research company for the past four years, deploying and managing Kubernetes clusters for them as well as helping get some of the main applications up on it. This is all internal tooling, though. Their customer-facing sites are just Drupal instances running on bare EC2.
Internally, though, they wanted to self-host a chat server, Apache Airflow, Overleaf for collaborative editing of research proposals, three separate Git servers, a container registry, and many other things, all with extremely strict multi-tenancy isolation requirements for storage and networking, because they're handling customer data and their own customers audit them for it. That was a hell of a lot easier to do with Kubernetes than trying to figure out some giant universe of barely related technologies with vastly different APIs, having to buy specialized appliances for network and storage that probably also need their own control plane software hosted somewhere else.
But if you just look at "scale" as number of http requests a particular URL gets per some unit of time, the customer-facing sites have far greater scale. If you're trying to attribute revenue, beats me. They wouldn't sell anything without the customer-facing sites, but they wouldn't have anything to sell without the internal tooling. Solo web devs get into this tunnel vision view of ops because, to them, often the web site is the product. That's not the case for most businesses.
And, of course, they'd probably just use someone else's SaaS for tooling. But if you're in a heavily regulated space where that isn't possible and you have to self-host most of your business systems, then what?