Thaler v. Perlmutter: The D.C. Circuit Court affirmed in March 2025 that the Copyright Act requires works to be authored "in the first instance by a human being," a ruling the Supreme Court left intact by declining to hear the case in 2026.
Authors and inventors, courts have ruled, means people. Only people. A monkey taking a selfie with your camera doesn't mean you own a copyright. An AI generating code with your computer is likewise devoid of any copyright protection.
The ruling says that the LLM cannot be the author. It does not say that the human being using the LLM cannot be the author. The ruling was very clear that it did not address whether a human being was the copyright holder because Thaler waived that argument.
The position with a monkey using your camera is similar, and you may or may not hold the copyright depending on what you did: was it pure accident, or did you set things up? Opinions on the well-known case are mixed: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
Where wildlife photographers deliberately set up a shot to be triggered automatically (e.g. by a bird flying through the focus) they do hold the copyright.
AI generated code has no copyright. And if it DID somehow have copyright, it wouldn't be yours. It would belong to the code it was "trained" on. The code it algorithmically copied. You're trying to have your cake, and eat it too. You could maybe claim your prompts are copyrighted, but that's not what leaked. The AI generated code leaked.
The linked document labeled "Part 2: Copyrightability", section V. "Conclusions" states the following:
> the Copyright Office concludes that existing legal doctrines are adequate and appropriate to resolve questions of copyrightability. Copyright law has long adapted to new technology and can enable case-by-case determinations as to whether AI-generated outputs reflect sufficient human contribution to warrant copyright protection. As described above, in many circumstances these outputs will be copyrightable in whole or in part—where AI is used as a tool, and where a human has been able to determine the expressive elements they contain. Prompts alone, however, at this stage are unlikely to satisfy those requirements.
So the TL;DR basically implies that pure slop, under the current guidelines outlined in the conclusions, is NOT copyrightable. However, for collaboration with an AI, copyrightability is determined on a case-by-case basis. I will preface all this with the standard IANAL, I could be wrong, etc., but with the concluding language calling slop only "unlikely" to be copyrightable, it sounds less cut and dried than you imply.
That's typical of this site. I hand you a huge volume of evidence explaining why AI generated work cannot be copyrighted. You search for one scrap of text that seems to support your position even when it does not.
You have no idea how bad this leak is for Anthropic, because with the copyright office you have a DUTY TO DISCLOSE any AI-generated work, and it is fully RETROACTIVE. And what is part of this leak? undercover.ts (https://archive.is/S1bKY), where Claude is specifically instructed to HIDE DISCLOSURE of AI-generated work.
That's grounds for the copyright office and courts to reject ANY copyright they MIGHT have had a right to. It is one of the WORST things they could have done with regard to copyright.
I merely read the PDF articles you linked, then posted, verbatim, the primary relevant section I could find therein. Nowhere does it say that works involving humans in collaboration with AI can't be copyrighted. The conclusions linked merely state that copyright claims involving AI will be decided on a case by case basis. They MAY reject your claim, they may not. This is all new territory so it will get ironed out in time, however I don't think we've reached full legal consensus on the topic, even when limiting our scope to just US copyright law.
I'm interpreting your most recent reply to me as an implication that I'm taking the conclusions you yourself linked out of context. I'm trying to give the benefit of the doubt here, but the 3 linked PDF documents aren't "a huge volume of evidence" supporting your argument. Maybe I missed something in one of those documents (very possible), but the conclusions are not what you imply.
Whether or not a specific git commit message correctly cites Claude usage may further muddy the waters more than IP lawyers are comfortable with at this time (and therefore add inherent risk to current and future copyright claims on said works), but those waters were far from crystal clear in the first place.
Again, IANAL, but from my limited layman perspective it does not appear the copyright office plans to, at this moment in time, categorically reject AI-collaborated works from copyright.
Your most recent link (Finnegan) is from an IP lawyer consortium that says it's better to include attribution and disclosure of AI to avoid current and future claim rejections. Sounds like basic cover-your-ass lawyer speak, but I could be wrong.
Full disclosure: I primarily use AI (or rather agentic teams) as N sets of new eyeballs on the current problem at hand, to help debug or bounce ideas off of, so I don't really have much skin in this particular game involving direct code contributions spit out by LLMs. Those that have any risk aversion, should probably proceed with caution. I just find the upending of copyright (and many other) norms by GenAI morbidly fascinating.
Currently, the US copyright application process has an AI disclosure requirement for the determination of applicability of submitted works for protections under US copyright law.
The copyright office still holds that human authorship is a core tenet of copyrightability. However, whether or not a submission meets the "de minimis" amount of AI-generated material needed to uphold a copyright claim is still being decided and refined by the courts. At the moment the distinction appears to fall on whether the AI was used "as a tool" or as "an author itself", with the former covered in certain cases and the latter not.
The registration process makes it clear that failure to disclose that a submission was in large part authored by a contractor or AI can result in rejection of the copyright claim, either now or retroactively upon discovery.
That comment is spot on. Claude adding a co-author to a commit is documentation that puts a clear line between code you wrote and code Claude generated, which does not qualify for copyright protection.
The damning thing about this leak is the inclusion of undercover.ts. That means Anthropic has now been caught red handed distributing a tool designed to circumvent copyright law.
They can't. AI-generated code cannot be copyrighted. They've stated that Claude Code is built with Claude Code. You can take this and start your own Claude Code project now if you like. There's zero copyright protection on this.
It's undetermined if code will be majority written by machines, especially as people start to realize how harmful these tools are without extreme diligence. Outages at Cloudflare, AWS, GitHub, etc. are just the beginning. Companies aren't going to want to use tools that can potentially cause hundreds of millions of dollars in damages (see the Amazon store being down causing massive revenue loss).
I'm sure it's not _entirely_ built that way, and practically speaking, GitHub will almost certainly take it down rather than doing some kind of deep research about which code is which.
That's fine. File a false DMCA claim and that's felony perjury :) They know for a fact that there is no copyright on AI-generated code; the courts have affirmed this repeatedly.
Try not to be overly confident about things where even the experts in the field (copyright lawyers) are uncertain.
There are no major lawsuits about this yet, and the general consensus is that even under current regulations it's a grey area. And even if you turn out to be right, and let's say 99% of this code is AI-generated, you're still breaking the law by using the other 1%, and good luck proving in court which parts of their code were human-written and which weren't (especially when being sued by the company that literally has the LLM logs).
>In case you’re worried, this is still me. These are my own words. Writing is thinking, and it would defeat the purpose for an AI to write in my place on my personal blog.
Hey author. I vouched you so I can reply. Look into drum-buffer-rope. I think you'll like it. I agree with you, AI isn't accelerating the part that needs accelerating.
>10 U.S.C. § 3252 authorizes the Secretary of Defense to exclude a source from defense procurements involving national security systems if there is a supply chain risk, defined as the risk that an adversary may sabotage, maliciously introduce unwanted function, or subvert a covered system.
I think any LLM is covered by that, but specifically for Anthropic,
>Recent research has uncovered several critical vulnerabilities, including the "Claudy Day" attack chain which allows silent data exfiltration through conversation history, and a zero-click XSS prompt injection in the Chrome extension that enabled attackers to inject prompts without user interaction until a patch was released in February 2026.
What is obvious to me however is the timing. This Trump pants-shitting happened just before the Iran invasion. You can just imagine it. Trump wants to send fully autonomous bots into Iran to destroy the non-existent nuclear program. Anthropic leadership tries to make a moral stand saying innocent civilians could die. Trump doesn't care because he wants zero US military casualties even if it means a school full of Iranian children is bombed and everyone is killed. And then we get exactly that plus a forever war.
And obviously, the judge is out of her lane too... since, you know, the rule basically can apply to any AI agent because they're just as likely to do what you ask as they are to delete all your emails without even apologizing for it.
> Not if you want to run any of your banking apps or all sorts of things.
I must be getting old, cause I see everyone saying this in response as if it's a downside. As someone that's getting real tired of every company/product/service on earth trying to have you install their own app (even before we get to the privacy/data concerns, just on a pure convenience/hassle POV), the idea of "WeLl ThEn YoUr BaNk ApP DoEsN'T WoRk" is frankly a bonus.
I can touch to pay with a card, which is faster and more convenient than having to unlock/approve/dick with my phone, which by doing so also allows me to keep NFC off by default (personal preference).
Also, I don't need an app for that, already have one, it's called a browser.
You are getting old (and so am I), but banks are already starting to build needed features into these apps that don't have equivalents in their web applications, and I'm deeply worried that this will continue. It honestly needs a legislative solution, but at least where I live there is no appetite for handling that problem.
It's not paying I care about (and I don't need their app to do that, thankfully!), that's a solved problem as you rightly pointed out. It's everything else that makes me nervous as to where it might be going.
Said another way: I'm saying this as a warning, not as a "wahhhh, I don't have the app that I want :'("
The Illinois bill is not about 18+ content. It's about controlling who your children can talk to on social media. The OS age check is just a means to that end. The end is blatantly unconstitutional. The Bill of Rights doesn't mention age limits. Freedom of association applies to kids just as much as it does to adults. If the bill passes, then any racist parent could block all comms from kids of a different color, for example.
I get what you’re saying but it’s a false premise. In today’s era, racist parents already block their children from even attending school with someone of a different color. Merely blocking comms would be a step before that in severity of control.
Parents have always had the ability (though maybe not explicitly the right) to control their children’s environment for the purposes of teaching personal beliefs. So long as the belief itself wasn’t deemed harmful to the child, society would allow it to continue to propagate that way. Racism unfortunately has never been seen as innately harmful. It’s looked down on, yes, but not to the point of making it illegal to enforce in family life.
To be fair, as a parent I don’t want my underage children hooking up with literal nazis on social platforms, whoever that might be. The current tools and controls are lacking. A lot.
You delete the rest of your spam database and replace it with `fn can_send_spam(_: Email) -> bool { false }`. You delete the "can we spam you" checkbox from your checkout page and replace it with "return false".
For legitimate newsletters and similar: you delete any and all forms that allow signing up to receive emails without affirmative consent from that email address that they want to receive mail, and you offer a one-click effective-immediately "unsubscribe" to retract that consent at any time. Then, you can tell if you can send someone mail based on whether they're in your database of people who have explicitly consented to send you mail, and you don't ever send email to anyone else other than one-time consent requests and order-confirmation-style transactional mail.
The only legitimate database of emails is "these people have explicitly confirmed to us that we can email them"; any other database is radioactive waste, delete it.
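The policy described above can be sketched in a few lines. This is a hypothetical illustration, not anyone's real system; names like `can_send_newsletter` are mine. It assumes consent is tracked as a simple set of confirmed addresses:

```python
# Hypothetical sketch of a consent-only mailing policy: an address may be
# mailed only while an explicit, unrevoked opt-in is on record.
consented: set[str] = set()

def confirm_opt_in(email: str) -> None:
    """Record consent only after the address owner confirms (double opt-in)."""
    consented.add(email.strip().lower())

def unsubscribe(email: str) -> None:
    """One click, effective immediately: consent is simply removed."""
    consented.discard(email.strip().lower())

def can_send_newsletter(email: str) -> bool:
    # No entry means no consent; there is no other list to fall back on.
    return email.strip().lower() in consented

confirm_opt_in("Alice@example.com")
print(can_send_newsletter("alice@EXAMPLE.com"))  # True: consent on record
unsubscribe("alice@example.com")
print(can_send_newsletter("alice@example.com"))  # False: consent revoked
```

The point is that there is only one table, and absence from it is a hard "no", not a fallback to some other list.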
>The only legitimate database of emails is "these people have explicitly confirmed to us that we can email them"; any other database is radioactive waste, delete it.
That's not actually how HIPAA compliance works. You're required to keep 7 years of communications, and part of that record is whom you sent them to. Amazon SES sends complaint notifications, and you're not allowed more than 1 complaint per 1000 emails or they shut you down too. People who are repulsively anti-spam have ruined email as a medium.
I'm merely pointing out the technical aspect of this bill is ridiculous and everyone sending transactional emails will fight you, killing any bill you might have.
> People who are repulsively anti-spam have ruined email as a medium.
That is a ridiculous attitude. Spam has ruined email; anti-spam is the attempt to keep it usable. Anti-spam wouldn't be needed in the first place if not for spammers.
> Amazon SES sends complaint notifications and you're not allowed more than 1 complaint per 1000 emails or they shut you down too.
Good, that sounds like a reasonable step.
Now if only there were existential-level fines for sending spam, too.
Yes, I am aware of people who use the "report spam" button because they can't be bothered to hit "unsubscribe". Which wouldn't be as much of a problem if 1) they felt like they'd subscribed in the first place, rather than being tricked by a default-to-spamming "do you not not not want us to not spam you" checkbox, 2) spammers didn't act like having an "unsubscribe" link was all they need to do to make it okay to send unsolicited commercial email, and 3) unsubscribing reliably worked.
> transactional emails
Transactional emails have never been the problem. People buying lists of emails and sending email marketing spam and trying to defend that as in any way a legitimate practice have always been the problem, along with phishing, scams, etc.
>That is a ridiculous attitude. Spam has ruined email; anti-spam is the attempt to keep it usable. Anti-spam wouldn't be needed in the first place if not for spammers.
Spam didn't close port 25 to residential ISP customers. Repulsive anti-spammers did that. I can't set up and run email on an RPi in my house without paying ridiculous fees to become "business" internet. And all you really get for that is port 25.
I've run my own email server at work. I doubt you have the experience I do. I sent 50,000 emails a day to patients for over a decade. Important emails, about their health. And repulsive anti-spammers come up with solutions like "you have to solve this captcha to send this important email to your patient on Earthlink!" So after a time, we simply had to give up running our own email server, run email through SES, and let Amazon worry about the Earthlinks of the world for us. 99.9% no complaints sounds really really hard, but we actually cleared that bar pretty easily. Except that one day one of our doctors dumped hundreds of our emails, which HE PAYS TO RECEIVE, into the spam folder by accident.
I have ZERO empathy with repulsive anti-spammers. NONE. For they are the reason that email is the centralized shitshow it is today. We have AI now. AI should be able to tell us if email is spam very quickly now. Can we please have our port 25 back?
1. User requests for email alice@example.com to be removed from database
2. Company removes "alice@example.com" from 'emails' table
3. Company adds 00b7d3...eff98f to 'do_not_send' table
Later on, the company buys emails from some other third-party, and Alice's email is on that list. The company can hash all the email addresses they received, and remove the emails with hashes that appear in their 'do_not_send' table.
You'd have to normalize the emails (and salt the hashes), but seems doable?
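The hash-and-suppress steps above can be sketched as follows. One caveat on salting: a different salt per entry would break lookups, since the same address must always hash to the same value, so in practice you would use a single server-side secret ("pepper") via HMAC. All names here are hypothetical:

```python
import hashlib
import hmac

# Server-side secret (assumption: stored outside the database, e.g. in a
# secrets manager), so a leaked do_not_send table alone can't be brute-forced
# against a dictionary of addresses without it.
SECRET_KEY = b"server-side-secret"

def normalize(email: str) -> str:
    """Trim and lowercase so 'Alice@Example.com' matches 'alice@example.com'."""
    return email.strip().lower()

def suppression_hash(email: str) -> str:
    # Keyed hash: same address always yields the same value, enabling lookup.
    return hmac.new(SECRET_KEY, normalize(email).encode(), hashlib.sha256).hexdigest()

do_not_send: set[str] = set()  # stands in for the 'do_not_send' table

def forget(email: str) -> None:
    """Steps 2-3: drop the address itself, keep only its keyed hash."""
    do_not_send.add(suppression_hash(email))

def may_send(email: str) -> bool:
    """Filter an imported list against the suppression table."""
    return suppression_hash(email) not in do_not_send

forget("Alice@Example.com")
print(may_send("alice@example.com"))  # False: suppressed despite case difference
print(may_send("bob@example.com"))    # True: never suppressed
```

Normalization only goes so far, though: plus-addressing, dots in Gmail local parts, and the like would still slip past an exact-match hash.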
The U.S. Constitution's IP Clause points in the same direction:
https://constitution.congress.gov/browse/article-1/section-8...