They could, but it's one of those things that really only works if everybody joins in. Because 3DS is rarely used right now, a portion of merchants don't even support it, so if you start enforcing it as a single bank, your customers will start complaining that their card doesn't work. The banking industry in the US is also more decentralized than in the EU, so getting everybody to join simultaneously is hard.
The window of opportunity for 3DS has also more or less passed; the industry is moving on to the next generation of tech (wallets/tokenization), which should be both easier to use and more secure.
Account Updater functionality isn't necessarily even involved there. In the end whether to accept a transaction is up to the issuer, and quite often they'll keep accepting recurring transactions on otherwise outdated card information.
Indeed, I suspect that's what went on here. I don't think there even exist 99 providers of what's customarily called a digital wallet (e.g. Apple/Google Pay), and there's definitely no single person that uses 99 of them.
It's bad service from GP's card company though: with network tokens, they should be able to see which specific token was abused and revoke just that one.
> merchants can’t select what level of security they want from the credit card processor
That really depends on the processor; many processors do allow merchants to specify their acceptance rules in quite deep detail.
There's a bit of a dichotomy in the processor market: on one side you have processors that aim to keep things simple for their customers and unburden them, while on the other you have those that expose all the complexity and offer intricate controls. The first kind won't let you specify security requirements, while the second will give you a hundred options (and of course there are processors positioned in between). The two sides generally target different customers.
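To make the contrast concrete, here's a minimal sketch of what the "intricate controls" side can look like. The endpoint and every field name below are hypothetical, not taken from any real processor's API:

```python
import requests

# Hypothetical processor API: the URL and all field names are illustrative
# only, not from any real processor's documentation.
resp = requests.post(
    "https://api.example-processor.test/v1/payments",
    json={
        "amount": 4999,              # amount in minor units (cents)
        "currency": "usd",
        "card_token": "tok_abc123",  # placeholder card token
        "acceptance_rules": {
            "require_3ds": "always",          # vs. "when_available" / "never"
            "decline_on_avs_mismatch": True,  # address verification policy
            "decline_on_cvv_mismatch": True,
        },
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```

The "simple" processors make all of these decisions for you; the "intricate" ones force you to make them yourself, per transaction if you want.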
Exactly, if citizens could convince US lawmakers to make it mandatory, it would be a huge net benefit to society as a whole.
I suspect that banks and merchants would lobby against it due to the work involved. After all, they've already marked up their services and goods to cover the cost of fraud/insurance. So right now they don't pay the cost of it; instead, all their customers do, through higher prices than they would otherwise have needed to pay.
> Exactly, if citizens could convince US lawmakers to make it mandatory, it would be a huge net benefit to society as a whole.
That's not obviously true. Adding security would likely reduce fraud, but would also make transactions more difficult and time consuming, and may also make recovering from fraud more difficult and time consuming.
Not having the module loaded doesn't mean you're not vulnerable: the kernel loads the module on demand when it's needed. I tried the exploit on such a system, and it worked.
However, not having the module loaded does mean that in normal operation you don't need the module, so the proposed mitigation of disabling the module is safe in the sense that it won't disrupt anything.
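For anyone who wants to verify the on-demand loading behaviour themselves, here's a minimal sketch (Linux-only, Python 3.6+, no root required); it assumes AF_ALG is enabled in your kernel:

```python
import socket

# Binding an AF_ALG socket of type "aead" makes the kernel try to autoload
# algif_aead if it isn't already loaded. Compare `lsmod | grep algif`
# before and after running this.
s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0)
s.bind(("aead", "gcm(aes)"))  # (operation type, algorithm name)
print("bind succeeded; algif_aead should now show up in lsmod")
s.close()
```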
I don't know exactly what can load this module, but the servers have been running for many weeks, and I suppose that if something loaded this module, it would stay loaded until the next reboot... no?
I ran rmmod on all the servers, and rmmod always returns `ERROR: Module algif_aead is not currently loaded`, which is why I think it's fine. Of course I'll keep an eye on https://security-tracker.debian.org/tracker/CVE-2026-31431 for updates.
The kernel will autoload modules when they are needed. The fact that the module hasn't been loaded is an indication that the bug may not have been exploited, but it does not mean that you are not vulnerable to it. You need to block the module from loading, or remove it entirely, to mitigate the issue (which is what the first line of the recommended mitigation states).
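As a quick check that the module really is blocked, here's a sketch that assumes you've added something like `install algif_aead /bin/false` to a file under /etc/modprobe.d/ (the filename and exact directive depend on your distro's conventions):

```python
import errno
import socket

# If autoloading is blocked, the bind() should fail instead of silently
# pulling algif_aead in. The exact errno can vary (commonly ENOENT).
try:
    s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0)
    s.bind(("aead", "gcm(aes)"))
except OSError as e:
    print(f"bind failed ({errno.errorcode.get(e.errno, e.errno)}): module appears blocked")
else:
    print("bind succeeded: algif_aead loaded, mitigation NOT in effect")
    s.close()
```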
Two things can be true simultaneously: the Linux kernel ecosystem should have done better at communicating this to their downstreams, and publicly sharing the exploit was irresponsible.
It is not the responsibility of the initial reporter to communicate with distributions, but the fact that those responsible failed to do that doesn't give everybody else a free pass.
No, this was already timed disclosure. This is very common and widely accepted. 90+30 is what Google Project Zero uses, for example. The security researcher has met their ethical requirements already. This is entirely on the kernel's security team for failure to communicate downstream. That is their responsibility.
The thing is, malicious actors are already monitoring most major projects, doing either source analysis or binary analysis to figure out whether changes were made to patch a vulnerability. So as soon as you actually patch, you really need to disclose, because all you're doing by not disclosing the vulnerability is handing the bad actors a free go. The black hats already know; you need to tell the white hats too, so they can patch.
I'm not advocating for delaying the disclosure at all; my point is, if you see that your initial disclosure to the kernel didn't go anywhere, the responsible thing is to put in a little extra effort to ensure the fix is picked up before you disclose.
"Didn't go anywhere"? The kernel devs patched it! They patched it weeks ago! The kernel security team needs to communicate security problems in their own releases, because that is where the distros are already looking.
Requiring the security researcher to do it is insane. Should a security researcher who identifies a vulnerability in electron.js need to identify every possible project using electron.js and communicate to each of them that the vulnerability exists? No. That's absurd.
> The kernel devs patched it! They patched it weeks ago
FTFA:
> I see that on the 11th of April 6.19.12 & 6.18.22 were released with the fix backported.
> Longterm 6.12, 6.6, 6.1, 5.15, 5.10 have not received the fix and I don't see anything in the upstream stable queues yet as I write.
I wouldn't go so far as to call this "the kernel devs patched it". Virtually none of the kernels that distros are actually using today have received a fix. This looks like an extremely lackluster response from the kernel security team.
Pretty much the only non-rolling distros shipping a fixed kernel are Fedora 44 and Ubuntu 26.04, both released in the last few weeks. Their previous releases both shipped with Linux 6.17, which is still vulnerable today!
None of this impacts disclosure norms. One important reason the clock starts ticking faster once any patch lands is that for serious attackers, the patch discloses the vulnerability. That's quadruply so in 2026, when many orgs are automatically pumping Linux patches through LLM pipelines to qualify them for exploitability.
But it's been at least 15 years since "reversing means patches are effectively disclosures legible mostly to attackers" became a norm in software security. And that was for closed-source software (most notably Windows). The norms are even laxer for open source.
I don't know if you are or you aren't, but that's the overall topic of the thread, and I'm just clarifying that the details you're adding don't change any of the norms of disclosure.
I'm on Fedora 43 and tried to hack myself with the Python script. It didn't work on kernel 6.19.12-200.fc43.x86_64, which has a build date of April 12, 2026.
> Should a security researcher that identifies a vulnerability in electron.js need to identify _every_ possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.
But this is a false comparison, right? The scopes of "Linux distributions" and "electron.js apps" are orders of magnitude different. Spot-checking one or two of the most popular distributions to see whether fixes had been adopted seems like an extra level of nice diligence before publicizing the details.
It doesn't seem "insane" so much as "not the most efficient path", as has already been argued well. But it also doesn't seem unreasonable to think that in a project of the Linux kernel's scope, with the potential impact of fairly effective(?) privilege escalation, some extra consideration is reasonable; certainly not "insane", at the very least?
They embargoed their vulnerability for 30 days after Linux landed a kernel patch. They did their part. You will always be able to come up with other things they could do for you, and they will always at first blush sound reasonable because of how big and important Linux is, but none of those things will be responsibilities of the vulnerability researcher. Their job is to bring information to light, not to manage downstreams.
About half the thread we're on reads as if the commenters believe Xint made this vulnerability. They did not: they alerted you to it. It was already there.
I realize you've been championing this idea in the thread, and I admire it because I also recognize the misdirected blame. Please understand I do not harbor "blame" for the researchers.
> Their job is to bring information to light, not to manage downstreams.
The researchers are also members of a community in which their actions may deal more harm than necessary. Evaluating what is "reasonable" and "responsible" requires nuance about the context of those actions.
I strongly disagree. I want the information. I don't want to wait longer to find out about critical vulnerabilities so that researchers can fully genuflect to whatever Linux distribution norms people on message boards have. Their "actions" were to disclose a vulnerability that already existed and was putting people at risk. It's an absolute good.
If it helps you out any, even though my logic was absolutely the same and just as categorical in 2012 as it is today: there are now multiple automated projects that run every merged Linux commit through frontier models to scope them out for exploitability (against the status quo ante of the patch), and then add them to libraries of automatically-exploitable bugs.
People here are just mad that they heard about the bug. Serious attackers had this the moment it hit the kernel. This whole debate is kind of farcical. It's about a "real time" response this week to a disaster that struck a month ago.
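To be concrete about what such a pipeline involves, here's a minimal sketch; `triage_with_llm` is a hypothetical stub for whatever model API one would actually wire up, and `linux` is assumed to be a local kernel checkout:

```python
import subprocess

def triage_with_llm(diff: str) -> str:
    """Hypothetical: send the diff to a frontier model and ask whether it
    looks like a silent security fix, returning the model's verdict."""
    raise NotImplementedError("wire up your LLM provider of choice here")

# Walk the most recent commits in a local kernel checkout and flag candidates.
commits = subprocess.run(
    ["git", "log", "--format=%H", "-n", "100"],
    capture_output=True, text=True, check=True, cwd="linux",
).stdout.split()

for commit in commits:
    diff = subprocess.run(
        ["git", "show", commit],
        capture_output=True, text=True, check=True, cwd="linux",
    ).stdout
    verdict = triage_with_llm(diff[:20000])  # truncate very long diffs
    if "security" in verdict.lower():
        print(commit, verdict)
```

The point isn't this particular script; it's that nothing here is hard, so assuming attackers haven't built it is wishful thinking.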
I do get that; this era of automation is too responsive not to go public to provoke action. I think I might just be wistful for an era in which the alternate path might have made a difference. Sorry to pile on.
they did it in the established, industry-standard way that probably every single security researcher you can think of follows (for good reason, I would add).
whoever did the marketing on "responsible disclosure" was a genius.
tptacek says it much better than me: '"Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules.'
In my world, responsibility is not just checking the box of following industry practice. Responsibility, as Wikipedia puts it on its social responsibility page, is working together with others for the benefit of the community. And yes, sometimes that's a bigger burden than would ideally be the case. It's an imperfect world, after all; and let's not forget that the disclosure as it happened also placed a larger-than-ideal burden on the people scrambling to patch.
And it's not as if I'm asking for a lot of effort. One mail to the security team of a popular distro, saying "hey, we have found this LPE that we'll release with an exploit next week; it's already patched upstream in this commit, but you don't seem to have picked it up", would likely have been enough.
The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile. Look at exactly what happened with BlueHammer this month. The security researcher went full disclosure because Microsoft didn't listen to their reports.
Disclosure is vital. It's essential. Because the truth is, if a security researcher has found it, it's extremely likely that it's already been found by black hats or state actors. Ignorance is not actually protection from exploitation.
The security researcher also has a responsibility to the general public that is still actively using vulnerable software in ignorance. They need to be protected from vendor and developer negligence as well as from exploits. And the only way to protect yourself from an exploit that hasn't yet been patched is to know that it is there.
The situation with e.g. BlueHammer is fundamentally different: there, the only party that could act on it (Microsoft) ignored them. In this case, the parties that could act on it weren't notified at all.
I'm also not proposing delaying the disclosure to the general public at all. They already waited 30 days with that; that's fine. Just look a bit further than a checklist of only contacting upstream, and send a mail to the distributions a week or two before disclosure if they haven't picked up the fix.
Downstream vulnerability disclosure is a negotiation between the downstreams and the upstreams. It is not the job of a vulnerability researcher to map this out perfectly (or at all).
Yes, and that's why the current system, where security researchers are expected to reach out to the distro mailing list, is flawed; instead, there should be a defined pipeline for the kernel security team to give downstreams a heads-up.
"Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. [...] "We used this model of disclosure for over a decade, and the results weren’t particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely! We were optimistic that vendors could do better, but we weren’t seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users.
[...]
While every vulnerability disclosure policy has certain pros and cons, Project Zero has concluded that a 90-day disclosure deadline policy is currently the best option available for user security. Based on our experiences with using this policy for multiple years across thousands of vulnerability reports, we can say that we’re very satisfied with the results.
[...]
For example, we observed a 40% faster response time from one software vendor when comparing bugs reported against the same target over a 7-year period, while another software vendor doubled the regularity of their security updates in response to our policy."
>Linux distros (specifically) act in this way
carving out special exceptions based on nebulous criteria is a bad idea. 90+30 is what has been settled on, and mostly works.
Because a situation where the development team fails to appreciate the severity of a security vulnerability, and where the established procedure requires the researcher rather than the kernel team to communicate with downstream users, is already a major failure of process. Security is not just patching the vulnerability, and it seems that the Linux kernel developers, or the Linux kernel security team, do not understand that.
This is the result of that failure.
If this were any other software, we'd be here with pitchforks and torches. The researcher gave the developers timed disclosure, and even waited until after the developers had patched the issue. And... it's still a problem.
There's no contradiction: the point is that Bob is able to produce valid output using LLMs, but only while he himself is being supervised, and that he doesn't develop the skills to supervise independently in the future.
No, this is impossible unless Bob is simply presenting the LLM's output at each weekly meeting and feeding the tutor's feedback straight back into it, for a total of 10 minutes of work per week; and the tutor would notice straight away, at least from the lack of progress.
No, the article specifies that Bob actually works with the LLM; he doesn't just delegate. He asks the agent to summarise, to explain, and to help with bug fixing. You could easily argue that Bob, having such an AI tutor available 24/7, can develop understanding much faster. He certainly won't waste his time on small details of Python syntax (though working with a "coding expert" will make his code much cleaner and more advanced).
This is the rub: Bob would not be promoted if he consistently provided unreliable LLM output. To get promoted, Bob needs to learn the skills that get reliable output out of an LLM. These may not be the same skills that Alice learns, but if the argument is that Schwartz's LLM output is valuable, why are we to assume Bob's path isn't towards Schwartz?