Hacker News

Undercover mode also pretends to be human, which I'm less ok with:

https://github.com/chatgptprojects/claude-code/blob/642c7f94...



You'll never win this battle, so why waste feelings and energy on it? That's where the internet is headed. There's no magical human verification technology coming to save us.


I can prove all contributions to stagex are by humans because we all belong to a 25-year-old web of trust with 5,444 endorser keys, including most Red Hat, Debian, Ubuntu, and Fedora maintainers. All of our own maintainer keys are in smartcards we tap to sign every review and commit, and we do background checks on every new maintainer.

I am completely serious. We have always had a working proof-of-human system called the Web of Trust, and while everyone loves to hate on PGP (in spite of it using modern ECC crypto these days), it is the only widely deployed spec that solves this problem.

https://kron.fi/en/posts/stagex-web-of-trust/
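
For intuition, the web-of-trust model can be sketched as a directed graph problem: a key is trusted if a short chain of in-person signatures connects it back to keys you have verified yourself. A toy sketch (the names and signature graph are made up for illustration):

```python
from collections import deque

# Toy web of trust: an edge means "signer verified the holder's key in person".
signatures = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "eve"],
    "dave": [],
    "eve": [],
}

def trusted(me: str, key: str, max_hops: int = 3) -> bool:
    """A key is trusted if reachable within max_hops signature links."""
    queue = deque([(me, 0)])
    seen = {me}
    while queue:
        node, depth = queue.popleft()
        if node == key:
            return True
        if depth < max_hops:
            for nxt in signatures.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return False

print(trusted("alice", "eve"))   # True: alice -> carol -> eve
print(trusted("dave", "alice"))  # False: signatures are directed
```

Real implementations (e.g. GnuPG's trust model) layer per-signer trust levels and marginal/complete thresholds on top of this basic reachability idea.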


You can prove the commits were signed by a key you once verified. It is your trust in those people which allows you to extend that to “no LLM” usage, but that’s reframing the conversation as one of trust, not human / machine. Which is (charitably) GPs point: stop framing this as machine vs human — assume (“accept”) that all text can be produced by machines and go from there: what now? That’s where your proposal is one solution: strict web of trust. It has pros and cons (barrier to entry for legitimate first timers), but it’s a valid proposal.

All that to say “you’re not disagreeing with the person you’re replying to” lol xD


I can prove that code was signed by a key that was verified to belong to a single human body by lots of in-person high reputation humans.

How the code was authored, who cares, but I can prove it had multiple explicit cryptographic human signoffs before merge, and that is what matters in terms of quality control and supply chain attack resistance.


Exactly. So in the words of the comment you replied to: why are we wasting energy on worrying about Claude code impersonating humans? We have that solution you proposed.

That’s what I mean by “you agree with the person to whom you replied”


I suppose you are correct. I am agreeing that if one widely deployed the defense tactics projects like stagex use, then asshats using things like undercover will not be trusted.

Unfortunately, outside of classic Linux packaging platforms, useful web of trust and signing is very rare, so I expect the popularity of things like undercover mode is going to make everything a lot worse before it gets better.


Your last point, I think, is why so many sibling commenters are balking at GP :)


Can't you just instruct Claude Code to use your signing keys? I understand you may say "I won't." But my point is that someone can.


The people who signed my keys trust me to be an honest human actor who chose this as the singular identity for the human body they met in person.

I -could- burn my 16+ years of reputation by letting a bot sign commits as me, just as I could set my house on fire. I have a very strong incentive not to: my aggregate trust is very expensive, and the humans who signed my first key would be unlikely to sign a second if I ruined the reputation of the first.

This incentive structure is why web of trust actually works pretty well, and is the best "proof of human" we are likely ever going to have while respecting privacy and anonymity for those that need it.


You can only prove that all contributions are pushed by those humans, and you can quite explicitly/clearly not prove that those humans didn't use any AI prior to pushing.


I absolutely do not care what autocomplete tools someone used. Only that they as humans own and sign what is submitted so it is attached to their very expensive reputations they do not want to lose.


That’s great, and I also don’t care. But I think all people are saying is that by most definitions you cannot “prove all contributions to stagex are by humans”.

Or are you saying you can prove that aliens and cats didn’t make them? Because I’m not sure that’s true either.

And once you find out someone has trained their dog to commit something, how exactly do you revoke your trust?

I think if you answer these questions you’ll see pretty quickly why this solution isn’t the silver bullet you think it is.

Edit: stagex looks really, really good


It is not a silver bullet by itself, but when combined with the other tactics in stagex I believe it gives us a very strong supply chain attack defense.

I cannot prove which tools were used, but I can prove multiple humans signed off on code with keys they stake their personal reputations on, keys I have confirmed they maintain on smartcards.

While nothing involving humans is perfect I feel it is best effort with existing tools and standards and makes us one of the hardest projects to deploy a successful supply chain attack on today.

Edit: Saw your edit. Thanks!


With 5400+ people I am betting that you have at least one person in your 'web of trust' that no longer deserves that trust.

That's one of the intrinsic problems with webs of trust (and with democracy...), you extend your trust but it does not automatically revoke when the person can no longer be trusted.


Of course! There are always edge cases, but I would suspect the number of bots signed by reputable keys to be near 0%, and the honest human score in this trust graph to be well over 90%.

Compare to how much we should trust any random unsigned key signing commits, or unsigned commits, in which the trust should be 0% unless you have reviewed the code yourself.


The problem is all it really takes is one edge case to successfully break a web of trust to the point that the web of trust becomes a blind spot. Instead of distrusting everybody (which should be the default) the web of trust attempts to create a 'walled garden of trust' and behind that wall everybody can be friendly. That gives a successful attacker a massive advantage.


If we were talking about any linux distribution before stagex, I would agree with you.

Stagex, however, expects that at least one maintainer may at any time engage in reputation-ending dishonesty, or may simply be threatened or coerced. This is why every single release is signed by a -quorum- of code reviewers and code reproducers who must all build locally and get identical hashes, so no single points of failure exist in our trust graph.

Our last release was signed by four geodistributed maintainers that all attest to having built the entire distribution from 180 bytes of machine code all the way up with the same hashes.
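
A toy sketch of that quorum rule (the data and helper names are illustrative, not stagex's actual tooling): each maintainer builds the release independently, and it only ships if every independently computed digest matches.

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Digest each maintainer would sign after building locally."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for four geodistributed reproducible builds; bit-identical
# build output is what makes the digests agree.
builds = [b"release-tarball-bytes"] * 4
digests = {artifact_digest(b) for b in builds}

# The release is trusted only with 2+ builders all reporting the same hash.
quorum_ok = len(builds) >= 2 and len(digests) == 1
print(quorum_ok)  # True
```

A single tampered or non-reproducible build would yield a different digest, making `len(digests) > 1` and blocking the release.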

All of their keys being compromised at once is beyond the pale.


While I appreciate all of the effort you put in this and respect that you trust this to be bulletproof I'm always going to be skeptical of silver bullets.

Your level of certainty is the thing that frightens me more than the confidence I have in the quality of your work.


I am reasonably confident it is the current industry best effort, and way beyond the status quo, not that it is perfect.

We combine many tactics for defense in depth that I strongly suspect if widely deployed would put a stop to the daily supply chain attack headlines.


nothing about this proves anything except that someone or something had access to the key.


Do you think it is likely that the majority of the people that spent decades building this trust graph and gaining the trust needed to be release engineers on the packages that power the whole internet are just going to hand off control of that key to a bot?

Anyone doing so would be setting their professional reputations completely on fire, and burning your in-person-built web of trust is a once in a lifetime thing.

Basically, we trust the keys belong to humans and are controlled by humans because to do otherwise would be a violation of the universally understood trust contract and would thus be reputational bankruptcy that would take years to overcome, if ever.

Even so, we assume at least one maintainer is dishonest at all times, which is why every action needs signatures from two or more maintainers.


Fatalism will also not fix anything. But I suppose death comes for us all, yes? Why do anything at all?


I feel that fatalism, especially when people treat it as some sort of personal philosophy, is kind of lazy.

It requires no effort to say "fuck this, nothing matters anyway", and then justify doing literally nothing.


> I feel that fatalism, especially when people treat it as some sort of personal philosophy, is kind of lazy.

I think a lot of fatalism is fake. It's really someone saying "I like this, and I want you to believe you can't change it so you give up."


It also makes no sense! "Fuck this, it doesn't matter - but I'll happily spend effort communicating that to others, because apparently making others not care about something I don't care about is something I do care about." Wut?!

Well, I say it makes no sense. Alternatively, it makes a lot of sense, and these people actually just wanna destroy everything we hold dear :-(


Perhaps the current societal trajectory is destroying everything that they hold dear.

I mean, just look around you.


Then do something about it. Vote for better politicians. Donate money to causes that you think are important. If you think you can do it better, and this isn't meant to be facetious, run for political office.

Being fatalistic can be a great excuse not to do anything.


>Vote for better politicians.

I cannot. I can only vote for better politicians if they are there. That is without even going into the minefield of what "better" means. My implication is that I have no confidence whatsoever in any current politician in my state.

> Donate money to causes that you think are important.

I have no money.

> If you think you can do it better, and this isn't meant to be facetious, run for political office.

I have no money, no visibility and no connections. Even if I was magically given tons of money, I would still need a strong network to attempt any real change, even without taking into consideration the strong networks already in place preventing it.

Telling random citizens "run for office" is facetious, whether you mean it or not.


> Telling random citizens "run for office" is facetious, whether you mean it or not.

Hard disagree. At least where I live, "random citizens" run for local office and succeed all the time.

Also, complaining that you "have no network" is a you problem, not a system problem. I'm truly sorry if you feel you have no friends, but you'll be better off at least trying to get some (independent of politics). And if that's something you've tried and failed at before, I do feel pity. But I don't think hope is lost for anyone. And even if it were lost, please don't actively spread the misery!


Don't spread the misery?? Wow, fucking thanks.


You are kind of proving my point. You are actively justifying doing literally nothing about what bothers you and acting indignant and self righteous about it.

Apathy has a striking number of motivated evangelists!


This is more cultural rather than rational.


This is the only relevant question. And it leads right to the next one which is “what is a good life?”

But humans have a huge bias for action. I think generally doing less is better.


On the other hand, if a dead person can do it better than you can, it's not that much of an accomplishment.


I didn’t mean that you should strive to do as little as possible; rather that if you have 2 choices, do more or do less - then I would be biased towards doing less. Of course not always a realistic option


> I think generally doing less is better.

My sedentary lifestyle is responsible for my recurrent cellulitis infections.

Just saying.


You can probably find a million situations where doing less is terrible.

I think first step would be to define for yourself what doing less actually means - it could mean taking a walk instead of chasing dopamine -> doing less but you move more.

But whatever it’s a philosophical question and there aren’t any right or true answers


I got hit by a car while out for a run. Just saying.


I think "adapt or die" is the takeaway.


It's fun to pet the cat. It's not fun to rage against an unstoppable force. Well, maybe it is for some people. But I find people often underestimate the detrimental effects.


> But I suppose death comes for us all, yes? Why do anything at all?

Wrong take. Death comes for us all, yes, so why hold back? Do you want to live forever?


> Do you want to live forever?

Yes, of course. Do you prefer to die? Those are the only two alternatives, and a decision that you don't want one is a decision that you prefer the other.


No, there is no alternative. Everything eventually dies, so you better make peace with it. The only people who believe that they won't die are religious people who believe in an afterlife (which is a preposterous position) and the people who have their heads or whole bodies frozen because they think they are so special that the future will honor their contracts and revive them.

Both of these are bound to lead to the exact same outcome, so it doesn't really matter what you believe, but accepting reality absent proof to the contrary may guide you to wiser decisions while you are alive.


s/make peace with it/make war with it/. To the last breath.


I can think of no concept more horrifying than personal immortality and if you disagree I don't think you've thought about it enough.


I'm sorry to hear that you don't want to exist in the future. I do. I have thought about it extensively, and there is literally no scenario in which I consider not-existing better than existing.

There is an essentially infinite amount of creativity and interesting complexity available in the richness of interactions with other people and the things people create. What, exactly, are you "horrified" about?


The difference between "essentially infinite" and "actually infinite". Infinity is a very long time.


Cringe.


I guess I could just curl up into fetal position and watch the world go by. But that's no fun. Why not dream big and shoot for the moon with kooky goals like, say, having an underground, community-supported internet where things are falling less to shit?

Belief in inevitability is a choice (except for maybe dying, I guess).


Why stop at one? Make more such underground community supported internets. The more the merrier. Monoculture ends with death. The only question is how long it will take.


https://hashbang.sh - "underground" for over 20 years and still running strong.


Amen brother, this one will be for me and all my homies.


IDK. I sort of like the idea that now instead of dead internet theory being a joke, that it’ll be a well known fact that a minority of people are not real and there is no point in engaging… I look forward to Social 3… where people have to meet face to face.


How quickly would that meat-space renaissance spin through our whole cyberpunk heritage, speedrunning the same authentication challenges..?

The cornucopia of gargoyles, living their best life as terminals for the machine.

The strange p-zombies who don't show their gargoyle accessories visibly, but somehow still follow the script.

Eventually the more insidious infiltrators, requiring a real Voight-Kampff test.



Even if it is impossible to win, I am still feeling bad about it.

And at this point it is more about how much space will be usable and how much will be bot-controlled wasteland. I prefer that the spaces important to me survive.


Feeling bad about something you can’t change is bad for your mental health.


Probably beats being in denial over it and pretending you like it.

And identifying a problem you dislike is a good first step toward finding a strategy to solve it, at least in part.


Deciding that you can't change something is the first and last step towards failing to change it.


Which is not a problem if you choose not to worry about it.


"It's uncool to care about things" is, fortunately, not a compelling argument for people who care about things.

This tangent does not seem likely to go anywhere productive.


You can care about things, but it seems preferable to care about that which you can change


That’s a reductionist and wrong reading of the argument I made.


You said "can’t change". I observed that deciding you can't change something is self-fulfilling. Your argument from that point still relied on the assumption that you can't change it.


Before you decide not to care about something, you are supposed to make a deep assessment to see whether you can change it. It is only after you’ve determined that the thing can’t be changed that you can choose not to care about it.


and naming your feelings is the first step toward restoration


Magical human verification technology is called "your own private forum" in conjunction with "invite your friends"


Until your friend writes a bot.

Funny story: when I was younger I trained a basic text-predictor deep learning model on all my conversations in a group chat I was in. It was surprisingly good at sounding like me, and sometimes I'd use it to generate text to submit to the chat.


I don't see what the value of this would be. Why would I want to automate talking to my friends? If I'm not interested in talking with them, I could simply not do it. It also carries the risk of not actually knowing what was talked about or said, which could come up in real life and lead to issues. If a "friend" started using a bot to talk to me, they would no longer be considered a friend. That would be the end.


I think you underestimate how many people already run their opinions and responses through LLMs, even if the LLM is not writing them wholesale. Intelligence is part of the social game, so appearing to have it matters to people. Friend groups are just social groups of a certain kind, they're not really removed from all this.


Exactly. Pick friends that do not behave like dicks.


It was for fun, to see if it was possible and whether others could detect they were talking to a bot, you know, the hacker ethos and all. It's not meant to be taken seriously, although it looks like these days people unironically have LLM "friends."


I used to leave a megahal connected to my bouncer when I wasn't around


It’s certainly winnable with some legislative tweaks. These systems are all designed by humans, we can just change them.

Of course, we’d need a significant change of direction in leadership, but it’s happened many times before. French Revolution seems highly relevant


I think you're underestimating the difficulty, even for exact copies of text (which AI mostly isn't doing).

What sort of Orwellian anti-cheat system would prevent copy and paste from working? What sort of law would mandate that? There are elaborate systems preventing people from copying video but they still have an analog hole.


Human verification technology absolutely exists. Give it some time and people who sell ai today are going to shoehorn it everywhere as the solution to the problem they are busy creating now.


To feel something. To resist something bad. To stand for what is right.

Do those sentiments mean nothing to you?


Well why not head for the front lines of Ukraine? Or Russia, depending on your preference.


This is such an incredibly imbecilic comment.

Listen to this guy: "because you don't take the ultimate risk for what you believe in, you are dumb for suggesting you should do anything whatsoever".

Go away. The world doesn't need your dark resignation.


Wasting your life fighting things that can't be fought is functionally equivalent to dying sooner.


It’s where THIS internet is headed. The future may involve a lot more of them I think.


Technology won’t save us, but that doesn’t mean we shouldn’t be promoting ethics.


Nothing like throwing in the towel before a battle is ever fought. Let's just sigh and wearily march on to our world of AI slop and ever higher bug counts and latency delays while we wait for the five different phone homes and compilations through a billion different LLM's for every silly command.


I am actively building non-magical human verification technology that doesn't require you uploading your retinal scans or ID to billionaires or incompetent outsourcing firms.


Great! Let's do the CAPTCHA test: will I, as a 100% blind user, be able to complete your process?


I think so? Can you use a smartphone?

edit: can't reply, the rate-limiting is such an awful UX


Not parent poster but I am a maintainer of software powering significant portions of the internet and prove my humanity with a 16 year old PGP key with thousands of transitive trust signatures formed through mostly in-person meetings, using IETF standards and keychain smartcards, as is the case for everyone I work with.

But, I do not have an Android or iOS device as I do not use proprietary software, so a smartphone based solution would not work for me.

Why re-invent the wheel? Invest in making PGP easier and keep the decades of trust building going anchoring humans to a web of trust that long predates human-impersonation-capable AI.


So, you haven't thought about it yet? Your counterquestion is insufficient to get any further, because "use" is a very relative term. If I can "use" a smartphone depends on the way you code your app. If you use inherently inaccessible UI elements, I can't "use" your app, no matter if I own a smartphone or not.

I might reply with a similarly useless question: can you write accessible smartphone apps?


We already have it and we use it to validate the trusted human maintainer involvement behind the linux packages that power the entire internet: PGP Web Of Trust. Still works as designed and I still go to keysigning parties in person.


Say a regular human wanted to join and prove their humanhood status (expanding the web of trust). How would they go about that? What is the theoretical ceiling on the rate of expansion of this implementation?


They need to generate their key, ideally offline, with an offline CA backup and subkeys on a Nitrokey or YubiKey smartcard with the touch requirement enabled for all key operations, for safe workstation use. One can use keyfork on AirgapOS to do this safely, as a once-ever operation.

From there they set up their workstation tools to sign every ssh connection, git push, commit, merge, review, secret decryption, and release signature with their PGP smartcard, which is all very well supported. This offers massive damage control if they get malware on their system, in addition to preventing online impersonation.
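
The git side of this is standard tooling; a minimal sketch of the relevant configuration (the key ID is a placeholder, and this assumes gpg-agent manages the smartcard):

```shell
# Sign every commit and tag with the smartcard-backed PGP key
# (0xDEADBEEF is a placeholder key ID)
git config --global user.signingkey 0xDEADBEEF
git config --global commit.gpgsign true
git config --global tag.gpgsign true

# Let gpg-agent serve as the ssh agent so ssh logins also need a card tap
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
```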

From there they ideally link it to all their online accounts with keyoxide to make it easy to verify as a single long lived identity, then start seeking out key signing parties locally or at tech conferences, hackerspaces etc.

We run one at CCC most years at the Church Of Cryptography.

Think of it like a long-term digital passport that requires a few signatures by an international set of human notaries before anyone significantly trusts it.

Yes it requires a manual set of human steps anchored to human reputation online and offline, which is a doorway swarms of made up AI bot identities cannot pass through.

Do I expect most humans to do this? Absolutely not. However I consider it _negligent_ for any maintainer of a widely used open source software project to _not_ do this or they risk an impersonator pushing malware to their users.

No idea on theoretical rate of expansion but all the major security conscious classic linux distros mandate this for all maintainers. There are only maybe 20k people on earth that significantly contribute to FOSS internet foundations and Linux distros, so it scales just fine there.

Note: with the exception of stagex, most modern distros like alpine and nix have a yolo wikipedia style trust model, so never ever use those in production.


The technical implementation is the easy part. The hard part is achieving mass voluntary cooperation under adverse incentive schemes.


This is true, but I think there is a sizable (and growing) appetite for human-only spaces.


how does it work?


I'm hoping to do a Show HN soon :)


>There's no magical human verification technology coming to save us.

Except for the one Sam Altman is building.



Scam Altman is not trustworthy. I hope nobody gives him their biometrics. I certainly would never.


Giving your retina scan to one of the main Slop Bros, what could possibly go wrong?


Negative sentiment towards technological destiny detected in human agent.


That's why I stopped brushing my teeth, I can't clean every crevice perfectly so what's the point?


> You'll never win this battle, so why waste feelings and energy on it?

Cool. The attitude of a bully. Thanks for the contribution!


That's an ironic comment.


Hardly. They're clearly trying to influence a group at large using a flawed logical tactic. I'm pointing that out with a device suggesting that the same futility they expect others to adopt should be primarily experienced by them instead.

If pointing out bullies is bullying then you're in a ridiculous mindset.


I assume we're heading to a place where keyboards will all have biometric sensors on every key and measure weight fluctuations in keystrokes, actually.


That’s like having your security on the frontend.

If someone owns the keyboard then they can fake those metrics and tell the server it is happening when it isn’t.

That will be easy to beat.


We already do this, and with mouse. It's been defeated long ago.


Also unintentionally reveals something:

> Write commit messages as a human developer would — describe only what the code change does.

That's not what a commit message is for, that's what the diff is for. The commit message should explain WHY.

Sadly not doing that likely does indeed make it appear more human...


I wager that "describe only what the code change does" was someone's attempt to invert "don't add the extra crap you often try to write", not some 4d chess instruction that makes claude larp like a human writing a crappy commit message.


Yes, this is a trend I've noticed strongly with Claude code—it really struggles to explain why. Especially in PR descriptions, it has a strong bias to just summarize the commits and not explain at all why the PR exists.


The question "why" is always answered with post-hoc rationalizations. This applies to both LLMs and humans.


No, I think a lot of humans can explain why they're adding a new button to the checkout page, or why they're removing a line from the revenue reconciliation job. There's always a reason a change gets made, or else nobody would be working on it at all :)


Try meditating until you discover the source of internal dialogue.

Yeah, that was my reaction too. A shame they try to hide themselves, but even worse, the instructions to this "Fake Human" are wrong too!


But will this be released as a feature? For me it seems like it's an Anthropic internal tool to secretly contribute to public repositories to test new models etc.


I don't care who is using it, I don't want LLMs pretending to be humans in public repos. Anthropic just lost some points with me for this one.

EDIT: I just realized this might be used without publishing the changes, for internal evaluation only as you mentioned. That would be a lot better.


A benign use of this mode is developing on their own public repositories.

https://github.com/anthropics/claude-code


> Write commit messages as a human developer would — describe only what the code change does.

The undercover mode prompt was generated using AI.


All these companies use AIs for writing these prompts.

But AIs aren't actually very good at writing prompts, imo. Like, they are superficially good in that they seem to produce lots of vaguely accurate and specific text. And you would hope the specificity would mean it's good.

But they sort of don't capture intent very well, nor do they seem to understand the failure modes of AI. The "-- describe only what the code change does" is a good example: it is specific, but it also distinctly seems like it came from someone who doesn't actually understand what makes AI writing obvious.

If you compare that vs human written prose about what makes AI writing feel AI you would see the difference. https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

The above actually feels like text from someone who has read and understands what makes AI writing AI.


Hey LLM, write me a system prompt that will avoid the common AI 'tells' or other idiosyncrasies that make it obvious that text or code output was generated by an AI/LLM. Use the referenced Wikipedia article as a must-avoid list, but do not consider it exhaustive. Add any derivations or modifications to these rules to catch 'likely' signals as well.

There, sorted!


Hey, LLM, take a look at these multiple hundred emails and docs in my docs folder from the last few years, before I started using AI, that I wrote personally. create a list of all of the idiosyncrasies that I have in my writing. Create a file to remember that. And then use that to write any new text that'll be published so it sounds like my authentic voice. Thank you.


All the prompts I've ever written with Claude have always worked fine the first time. Only revised if the actual purpose changes, I left something out, etc. But also I tend to only write prompts as part of a larger session, usually near the end, so there's lots of context available to help with the writing.


AI is better at writing prompts than most humans. It requires work and lots of developers don’t think getting good at prompting actually matters.

At least half of the complaints I see on HN boil down to the person's prompts suck. Or the expectation that AI can read their mind.


As someone who often fails to read subtext, I would estimate that most people expect you to participate in mind reading as a natural part of conversation.

So it is no surprise that many people have difficulty switching gears to literal mode when interacting with these models.


That's not supposed to be surprising. They're dogfooding CC to develop CC. I assume any and every line in this repo is AI generated.


This is my pet peeve with LLMs: they almost always fail to write like a normal human would. Mentioning logs, or other meta-things which are not at all interesting.


I had a problem to fix and one not only mentioned these "logs", but went on about things like "config", "tests", and a bunch of other unimportant nonsense words. It even went on to point me towards the "manual". Totally robotic monstrosity.


lol?


1) This seems to be strictly Anthropic internal tooling.

2) It does not "pretend to be human"; it is instructed to "Write commit messages as a human developer would — describe only what the code change does."

Since when "describe only what the code change does" is pretending to be human?

You guys are just mining for things to moan about at this point.


1) It's not clear to me that this is only for internal tooling, as opposed to publishing commits on public GitHub repos. 2) Yes, it does explicitly say to pretend to be a human. From the link on my post:

> NEVER include in commit messages or PR descriptions:

> [...]

> - The phrase "Claude Code" or any mention that you are an AI


That's gonna need an explanation. From the ethics/safety/alignment people.


(We detached this subthread from https://news.ycombinator.com/item?id=47584683.)


Heh, this is what people who are hostile against AI-generated contributions get. I always figured it'd happen soon enough, and here it is in the wild. Who knows where else it's happening...


I pretend to be human most days. I call it the daily facade of who I want to be on a given day. Oh humanity.


Yeah, I thought Anthropic was for AI safety. Telling AI not to be honest is a bad sign.


Time to ask if the contributor knows what a capybara is, as a new Turing test.


The first two zips I download today were 9.887.340 bytes, why is yours 10.222.630 bytes?


I am Jacques' complete lack of surprise.


That whole “feature” is vile.


How so? A good bit of my global claude.md is dedicated to fighting the incessant attribution in git commits. It is on the same level as the "sent from my iPhone" signature: I'm not okay with my commits being an advertising board for Anthropic.



