xeeeeeeeeeeenu 1 day ago [-]
For context, the author of the linked post, Sam James, is a Gentoo developer.
Anyway, this is a disaster. It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix. Who knows how many shared hosting providers were hacked with this.
It's also worrying that it seems there's no communication between the kernel security team and distribution maintainers. One would hope that the former would notify the latter, but apparently it's the responsibility of whoever finds the vulnerability.
john_strinlai 23 hours ago [-]
i have no problem with disclosing a vulnerability 30 days after it's patched in the thing you reported to. (in fact, for those unaware, this is the same policy that google's project zero uses: "90+30" https://projectzero.google/vulnerability-disclosure-policy.h...)
the real problem is:
>It's also worrying that it seems there's no communication between the kernel security team and distribution maintainers.
the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.
what should be happening, as you allude to, is a communication channel between the kernel security team and distribution maintainers. they are in a much better position to coordinate and communicate with the maintainers than random reporters are.
the minute the patch landed in the kernel, a notification should have gone out from the kernel team to a curated list of distro security folk that communicated the importance of the patch, and that the public disclosure would be in 30 days.
fresh_broccoli 20 hours ago [-]
>the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.
I agree it's a shame that the process isn't more streamlined and the kernel developers aren't forwarding the reports to the distros list.
tptacek 20 hours ago [-]
It is literally not the vulnerability researcher's problem to solve or address this.
spookie 7 hours ago [-]
Brother, it is a simple email to a mailing list.
They are professional security researchers, they must know this is the way it is done in the ecosystem.
Kicking the can around leads nowhere.
john_strinlai 5 hours ago [-]
>Brother, it is a simple email to a mailing list.
just as a note, it's not as simple as firing off an email to linux-distros and calling it a day.
qualys, one of the big firms (10,000+ customers across 130 countries. i.e. "professional researchers"), has even taken a stance against emailing linux-distros because of the restrictions and policies involved:
> Although contacting the linux-distros list has been clearly beneficial
> (they have thoroughly reviewed and tested the patches, and were able to
> prepare their kernel updates beforehand), we have reached the conclusion
> that it has become increasingly difficult to coordinate the disclosure
> of kernel vulnerabilities with both groups (the Linux kernel security
> team and the linux-distros list), because they have very different
> policies. From now on, we will coordinate the disclosure of kernel
> vulnerabilities with the Linux kernel security team only. We also
> apologize in advance for this.
tptacek 3 hours ago [-]
Of course you want them to have sent an email to a mailing list. You're on a message board, and weren't involved in their disclosure process. Why not ask for everything that sounds reasonable to you? There's no cost to it for you. Maybe you can set their OKRs while you're at it.
There are (some, loose) norms of vulnerability disclosure, and this isn't one of them.
akerl_ 6 hours ago [-]
Have you considered that maybe it’s not the way it’s done?
It’s certainly a thing some people do. But there is not a unified consensus on how to handle vulnerabilities. Different security researchers (or, in fact, the same researchers releasing different findings) can and do take many different courses of action.
SOLAR_FIELDS 18 hours ago [-]
Agree, but then where does the accountability lie? Presumably with the kernel maintainers themselves, correct? SOMEONE dropped the ball here. If we can't point the finger correctly, that seems like a problem in and of itself.
akerl_ 18 hours ago [-]
It looks like the expected thing happened.
The kernel devs patched the kernel. The kernel devs have a pretty well-known, straightforward stance on how they ship fixes for anything, because anything in the kernel can be a security problem.
Distro maintainers can see kernel changes. Some distros aggressively track new changes. Others backport what they feel are relevant. Others don’t do either.
Users pick what distro they use, and how they set up their infra.
Maybe if I were paying for RHEL licenses I’d be eyeballing the money I pay and RHEL’s response time.
But the ownership here lies with system operators, who pick their infrastructure, who design their security model, and who build their operational workflows. This vuln is a great example: people who looked at shared untrusted workloads on a single kernel and said “Hell no” had a much calmer day than teams who thought that was a good idea.
SOLAR_FIELDS 18 hours ago [-]
The fact that you needed a whole paragraph to arrive, contortionist-style, at something that still isn't really clear (you kind of pointed the finger at both end users and distro maintainers simultaneously), and that essentially boils down to "you as the end user need to follow kernel CVEs and can't trust distro maintainers to do it", does in fact indicate that there is a deeper issue at play here. You might say "well, there's no implicit chain of trust here". You might be right, but is that really the most effective way of doing things? Of course Linux is use-it-at-your-own-risk, but is there not a concept of "we as a collective community should get together and try not to drop the ball on some serious shit"?
In terms of something actionable, and maybe someone more well versed in how the distros work can tell me why this is a bad idea, but shouldn't there be a documented process and channel for critical CVEs to be bubbled out to distro maintainers who then have some sort of SLA for patching them and sending them downstream to end users? Perhaps incentives are not aligned to produce this outcome.
akerl_ 17 hours ago [-]
To be more blunt: if you’re paying for a product, the vendor owes you whatever things they committed to. If you’re a Redhat customer and your agreed SLA with Redhat for this kind of security fix was passed by, go be mad at Redhat. (I don’t think Redhat is bad here, they’re just the vendor most known for a commercial offering from the lists here. I would say the same thing about Ubuntu Pro)
Otherwise, it’s on the end user. Distro volunteers don’t owe you anything. Kernel devs don’t owe you anything.
I don’t care about what would be the most effective way of doing things. I care about what folks involved actually owe to each other, and distro volunteers don’t owe users any kind of active chasing of remediation due to the user’s threat model.
The problem with making some kind of streamlined process that solves what you didn't like about this vulnerability's remediation is that it ignores basically all the complexity. Like "what about distros that don't abide by embargoes" or "which distros count as ones that matter" or "what about all the vulns that aren't in Linux, they're in software that's packaged across many operating systems".
SOLAR_FIELDS 17 hours ago [-]
Right, you’re saying “system is working as designed”, and I’m agreeing, but I’m saying “the system as designed kind of sucks, how can we make it better”?
akerl_ 17 hours ago [-]
I disagree that it sucks. It leverages a ton of people putting in their time and resources, and relies on system operators being active participants.
This vulnerability is, for some threat models, a really big deal. A security group found the vulnerability. They disclosed it. It was patched.
Folks here have gotten all kinds of bent out of shape that the groups involved didn't do things in the way each internet commenter would have liked. But this is the system working.
i_think_so 10 hours ago [-]
> This vulnerability is, for some threat models, a really big deal.
This vulnerability is, for other threat models, a death sentence.
> A security group found the vulnerability. They disclosed it. It was patched.
It was patched only after some people who should have been notified well in advance happened to notice something was up. That is NOT HOW IT'S SUPPOSED TO WORK.
For as long as the unpatched window remains open, skids will mess around and break things. Organized crime teams will use it for some really nasty hacking/ransomware/exfil/extortion/whatever. I guarantee you, this vuln is powerful and widespread enough that intel orgs will use it to kill targets, if they haven't already been using it for years. And if they have, we can just bank on them pulling out all the stops to take advantage of the remaining time for wreaking havoc. Make a project out of it and see if you can guess some of the future headlines.
Certain folks might not care much because they are citizens of one or more of those orgs' nations, so those targets are welcome to die in their opinion. That's fine. You do you, I'll do me, we'll all just go on doing our thing. But it's all fun and games until the wrong target gets hit and now there's a pact between the Germans and the Austrians being invoked and a few dozen million Europeans die. Or a geopolitical hotspot flares up and overnight 20% of the global petroleum supply chain grinds to a halt. Use your imagination. This vuln is a digital magic wand that is trivially usable to cast Avada Kedavra and somebody neglected to tell 99.99% of the Good Guys about it.
How is this different from any other day? Because now we've got a world-changing vuln out in the wild with no distro mitigation on day 1, and who the hell knows how many unscrupulous actors poised to take advantage of it before the fun and games stops. There will be no adults in the room when the miscreants decide to deploy while they still can.
Is this vuln going to start the next world war? Probably not. I don't expect it to and I hope and pray it doesn't. But leaving a vuln like this undisclosed to the very people whose job it is to protect us all is playing with fire. Not matches; more like a 10-grams-less-than-critical mass of plutonium.
Sam is right to be pissed and he's doing a very good job of hiding it, because he knows that his users are at the mercy of TPTB in the Linux kernel world. Somebody's head needs to roll for this, and I don't mean some dude the CIA wants to hax0r because he's next on the list.
walletdrainer 8 hours ago [-]
> This vuln is a digital magic wand that is trivially usable to cast Avada Kedavra and somebody neglected to tell 99.99% of the Good Guys about it.
A Linux LPE is a nothingburger unless you’re relying on the Linux kernel to enforce internal security boundaries, which would simply be foolish.
i_think_so 6 hours ago [-]
The PoC exploit code in python (3.10+) fits comfortably in 1k bytes. An unminified version that works for even older versions of python is just a hair under a 1500-byte packet payload, modulo headers for your preferred method of delivery. I can only guess how small it could get if pared down to bare shellcode.
Now, y'all tell me, since I'm not a web guy. How hard is it going to be to tweak this lovely little pathogen into some kind of browser exploit? It just needs to be combined with a sandbox escape to work on current versions, right? Difficult but quite worth investing the time and effort to develop if that's your line of business. If that happens, every at-risk Tails user is going to have to stay offline for a while, unless they want to play the drone lottery.
Or how about chaining it with any of the as-yet unpatched bugs in gawd-only-knows how many web services out there that have poor input sanitization code? That bug now graduates from a DoS crash causer to a root grab. Good luck stopping it with your fancy AI Behavioral Analysis security tools. They better be fast. The sploit is going to do its work in two packets, maybe three. Fun times.
Lucky for us systems monkeys, it's not like anybody is spending billions of dollars to develop vuln finding AI tools right at this very second. So there shouldn't be many unpatched web services holes.
Oh, wait.
Of course, as the grey hats can already tell you, the really delicious part of this thing is how it's going to become the LPE tool of first resort for any APT that's already inside ur base killin ur doodz.
Nothingburger? This nothingburger is going to root a million OS instances before we know what hit us.
tptacek 6 hours ago [-]
You're freaking out about the exploit being written in Python and occupying only a small number of bytes. Are you the LLM that wrote Xint's terrible landing page? If so, I have questions.
i_think_so 6 hours ago [-]
Oh come on, you know what I'm saying. It's small when written in python, which means any skid can spew it into a server he's got a shell on and get root in 2 seconds. He doesn't need to hope there's already a compiler installed, nor does he need to download some big tool. Just:
cat | python3 && su
<puke>, Ctrl-D
And I'm sure it can be refined into something much more likable to the spooky types, if they haven't already done it.
akerl_ 5 hours ago [-]
Again, Linux LPE via either vulns or misconfigured permissions / binaries is common.
People who run servers that give out shell access to users or randos already needed to contend with this.
This is such a 1996 argument. It really was a big deal back then whether you had compilers on your multiuser SunOS boxes, because attackers would then use them to compile exploit.c.
The whole thread, really bringing me back to comp.security.unix. I'm not complaining! I miss comp.security.unix.
akerl_ 6 hours ago [-]
I think you’re reading a ton into this vulnerability that is not there.
i_think_so 6 hours ago [-]
I wish you were right. But I've been testing every system I can and so far I've yet to find one that isn't vulnerable.
25 seconds if I type it out by hand instead of copypasta. Sigh.
akerl_ 6 hours ago [-]
How many people do you let have local code execution on your systems? This is a local privilege escalation. They are relatively common. They are a big deal if you run a system that lets multiple untrusted users commingle code on a shared operating system.
Otherwise it’s not.
tptacek 16 hours ago [-]
Start a distro with your preferred upstream tracking policy.
SOLAR_FIELDS 14 hours ago [-]
Is that the only option here? It’s certainly being framed as such.
nchmy 12 hours ago [-]
Fwiw, I'm completely with you on this. The folks you're communicating with seem utterly miserable, and don't seem to be communicating in good faith.
Not sure what the solution could/should be, but surely there could be a better, easier mechanism for kernel to advise all distro maintainers who care, and for those distro maintainers to subscribe in some way. Whether any distro maintainers do so (let alone do something about the vuln notifications) would be entirely up to them. There could also be some easier way for end users to see what the distros' policies on this are, such that they can take that into account when selecting a distro.
akerl_ 11 hours ago [-]
It seems odd to call me utterly miserable and then suggest I’m not communicating in good faith.
We don’t have to agree, but the site rules are pretty clear that swipes like that aren’t ok.
That kind of distro maintainers and kernel devs communication path already exists: the linux-distros@ mailing list. But since anybody can read it, posting “hey everybody, this is a security patch” has basically the same effect as the security researcher posting, in terms of disclosing the vuln to bad actors.
Given that anybody can make a Linux distro, and Linux distros aren’t generally either capable or interested in background checking their teams or policing their individual security practice, it doesn’t seem possible to have a communication channel that distros can sign up for that lacks this problem.
nchmy 7 hours ago [-]
The person I was defending NEVER suggested that extra burden should be put on anyone. Just that there ought to be some system (even if imperfect) to make it easy for everyone (or, if not everyone, at least a select group - eg the main distros). But you and others kept saying that they were trying to put burden on various parties. That's the poor faith.
akerl_ 6 hours ago [-]
How do you get a system without somebody (or multiple somebodies) being responsible for it?
QuiEgo 6 hours ago [-]
Agree on this so hard. Why does everyone expect instant patches and SLA-like infrastructure from unpaid volunteers?
If you want that, buy a commercial distro of linux, or use Windows. That's a huge part of Microsoft's value proposition to enterprise - they pay people to stay on top of security patches for you. Same with RedHat and others.
Expecting anything of unpaid volunteers is unreasonable.
> THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
komali2 16 hours ago [-]
Just as a purely intellectual exercise, what changes about this if we leave aside ideas of "owe," "deserve," and "earn"?
There's not really an enforcement mechanism in FOSS like there is in capitalism world, it just comes down to what we want our part of the world to look like. So I think we'd think more clearly if we leave aside the ideas like "who owes who what." I think it's fun to imagine what sort of motivations and incentives there are if we put away the money ones.
Brian_K_White 8 hours ago [-]
"leave aside ideas of "owe," "deserve ," and "earn?""
Nonsensical string of words with no meaning.
If you want something that someone else isn't giving you, you have the option to try to do it yourself, or try to compel someone else to give you what you want somehow. Feel free to idk pay someone to track the kernel list and 4000 others and send you heads-ups? Try to pass a law to make people do what you want since you don't care about words like "owe"?
komali2 8 hours ago [-]
> If you want something that someone else isn't giving you, you have the option to try to do it yourself, or try to compel someone else to give you what you want somehow.
Yes, exactly, the opposite of paying, since when you pay someone something they owe you whatever you paid for.
If we leave aside owe, deserve, and earn, we can start discussing things like what we want our kernel ecosystem to look like, how we can make it safer, etc, without being burdened by these concepts.
It's a simple intellectual exercise, that's all. If you're having a strong reaction to it, imo that'd make it even more fun for you to participate.
Brian_K_White 8 hours ago [-]
But there was no intellectual exercise. Only a complaint with no proposal.
You want someone to do something for you for some other reason than that they owe you.
They already are doing something for you that they don't owe you. They are writing software that you benefit from. You just want them (or somebody) to do something else that they don't owe you.
They aren't, because they don't owe you and it's not something they want to do for fun, and so since the problem is they don't owe you, you wish to set aside words like "owe".
Well sure. Looks like you found the problem and the solution alright. Why didn't anyone else think of that?
komali2 7 hours ago [-]
I don't feel like I'm complaining, I feel like I'm asking how else someone would frame it without leaning on the concepts mentioned. What changes about the dynamic then?
Brian_K_White 7 hours ago [-]
But what does that mean? "owe" is just shorthand for the concept of obligation. For someone to do something, they need a reason to do it. It doesn't have to be a transaction but there does need to be some reason.
If no one is doing a task you want done because they aren't obligated to, then you seek some other reason besides obligation. Ok, what then?
Do you imagine say a dating website where people compete to look attractive by getting points by doing the best job at finding the most bugs and patches and reporting them to the most downstream consumers the fastest?
komali2 6 hours ago [-]
> For someone to do something, they need a reason to do it. It doesn't have to be a transaction but there does need to be some reason.
Exactly! That's what I'm interested in exploring.
> If no one is doing a task you want done because they aren't obligated to, then you seek some other reason besides obligation. Ok, what then?
That's what I love exploring. Action with no obligation. Have you any examples of that in your life? Nobody obligates me to do the long walks I enjoy where I stick a 360 camera on my head and then upload the footage to Mapillary and other open platforms, I just like to do it, and I want to find other things that I'm motivated to do without obligation, and I'm fascinated by things people do for "no reason." Understanding human motivation is really important to me for some reason.
As to "what then," yes what then? If I run a cashless commune, how do we make sure the toilets get cleaned? That's the whole question, and I love exploring it. If you'd like to experience it yourself, you could always try attending a regional Burn for a bit of a micro version of it, people doing things just for the sake of it.
I'm sorry, I don't quite understand what you mean by the dating app thing.
hnfong 6 hours ago [-]
> In terms of something actionable, and maybe someone more well versed in how the distros work can tell me why this is a bad idea, but shouldn't there be a documented process and channel for critical CVEs to be bubbled out to distro maintainers who then have some sort of SLA for patching them and sending them downstream to end users? Perhaps incentives are not aligned to produce this outcome.
Who decides who is a trustworthy distro maintainer? In the open source world everyone is equal, no favorites are chosen. If your point is that the distros backed by companies making at least $x million revenue a year should get priority disclosure... pretty sure somebody will take issue with this.
And it's not a hypothetical issue either. Given the high stakes, bad actors are highly incentivized to masquerade as some small-scale niche distro until they get their effectively free zero-day CVE.
PearlRiver 16 hours ago [-]
The real advantage of Microsoft is that there is someone you can sue!
Linux, like every open source project, is just a bunch of people who are YOLOing it. Not something you use for your Fortune 500 mission-critical infrastructure.
Only if you are paying them. If you don't have a service contract for RHEL, you have no grounds to sue.
thayne 12 hours ago [-]
> Others backport what they feel are relevant.
But from what I understand they were not given enough information to know if it was relevant or not. The commit message just said it reverted a change from another commit because there was "no benefit". From the patch itself, it is not at all evident that this is a fix for a critical security bug.
dbdr 11 hours ago [-]
> The commit message just said it reverted a change from another commit because there was "no benefit". From the patch itself, it is not at all evident that this is a fix for a critical security bug.
If the commit message says it fixes a security bug, then bad actors immediately know there's a possible exploit there. So maybe it's intentional? (not familiar with the policy for this)
jurgenburgen 10 hours ago [-]
Then we’re back to the initial problem. How can you fix and then communicate to downstream about security vulnerabilities without exposing those vulnerabilities in an open source project? If you want to reach all your possible users you have to disclose the vulnerability.
maybewhenthesun 12 hours ago [-]
The distros dropped the ball. imho.
One of the (main) tasks of a distro is watching the changes in your upstream packages for important ones.
This is slightly complicated by the fact that the Linux kernel considers all bugfixes security fixes, so it's quite a lot to read. But that's life. The kernel developers are not wrong, as it's nearly impossible to be sure a bug in the kernel is not (also) a security problem.
Dylan16807 7 hours ago [-]
The patch wasn't even listed as fixing a bug.
"There is no benefit in operating in-place in algif_aead since the
source and destination come from different mappings. Get rid of
all the complexity added for in-place operation and just copy the
AD directly."
gpm 14 hours ago [-]
The accountability fundamentally lies with the distro maintainers. They're the ones shipping a "product". Either they need to get agreements in place for advance notice, or correctly set expectations with their users that they won't get advanced notice.
They dropped the ball when they shipped supposedly secure systems where their method for getting alerted to security updates was "hope people reporting to upstream will also notice a mailing list that will alert them".
(Caveat: distros like Ubuntu advertise security updates, so this is on them. I'm not sure Gentoo does that; if they don't, well then no one dropped the ball, because no one represented that Gentoo got prompt security updates.)
cdud3 14 hours ago [-]
All it takes is to be part of the kernel security team. I am surprised that many strong commercial distributions just don't care enough to join it. Hopefully a valuable lesson was learned and fixes are applied.
nextlevelwizard 7 hours ago [-]
That is just being pedantic. Why did they absolutely need to release this into the wild now? Why couldn’t they have waited?
“30 days should be enough time” why? Why is 30 days a magic number? Especially in open source.
Yeah it isn’t the researchers problem to tell every distributor of the kernel about the fix or verify that everyone has the fix, but fuck maybe wait until at least someone has the fix and maybe don’t drop it on a Friday. That is just malicious
kasey_junk 7 hours ago [-]
They didn’t release anything into the wild. It existed. The irresponsible thing would be letting it keep existing without telling anyone.
OvervCW 7 hours ago [-]
You cannot deny that telling the entire world about this vulnerability before it is patched will cause a lot of abuse that would not have happened otherwise.
kasey_junk 6 hours ago [-]
I do deny that, mostly because we’ve entered the time of automated vulnerability detection and abuse. A human need not be in the loop at all anymore.
But, even if I agreed with you, how do you propose they tell the patchers this that doesn’t tell the whole world?
akerl_ 6 hours ago [-]
Why not?
Dylan16807 7 hours ago [-]
What number of days do you want? If nobody tells the distros it could be months or years, and while it would be nice for the researchers to monitor/notify distros it's really not their job. They might not have thought of it.
And they dropped it on a Wednesday.
fweimer 9 hours ago [-]
If you just want to get a bug fixed that annoys you, it's of course out of scope.
If researchers want to showcase their ability (either individually or as an organization) to identify and address security vulnerabilities in complex multi-stakeholder environments, I very much expect them to figure this out. After all, it doesn't make much sense if a company, after commissioning a security review, needs to hire a different firm to handle the vendor interactions, so that identified issues are resolved with minimal impact to the business.
tptacek 6 hours ago [-]
I think they want to showcase their ability to unearth zero-day vulnerabilities. The multi-stakeholder stuff not so much.
dwattttt 8 hours ago [-]
> a company, after commissioning a security review, needs to hire a different firm to handle the vendor interactions
These vendor interactions you're referring to are the company's customers, correct? Are you proposing the company hire another company to manage getting updates to their customers?
__bjoernd 10 hours ago [-]
If they get enough time to build a website with a fancy logo instead, one might however question where their priorities are.
Ukv 8 hours ago [-]
I'd imagine it's not that they lacked the time to email linux-distros, but that they were unaware they were supposed to do so.
Feels like the more sensible process would be for kernel maintainers to announce when a version contains a fix for a high-impact security vulnerability and for distro maintainers to pay attention to that. Could be done without revealing what the vulnerability actually is in most cases, trusting the kernel maintainer's judgement. There does seem to be a public linux-cve-announce mailing list.
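To make that concrete: a rough sketch of what "paying attention" could look like on the distro side. The feed URL and format below are assumptions on my part (lore.kernel.org-style Atom mirror of linux-cve-announce, hypothetical endpoint), but the watching half really is just stdlib-sized:

```python
# Sketch only: poll a CVE announcement feed and pull CVE IDs out of the
# entry titles. FEED_URL is an assumed lore.kernel.org-style Atom mirror
# of linux-cve-announce; the parsing itself needs only the stdlib.
import re
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://lore.kernel.org/linux-cve-announce/new.atom"  # assumed
ATOM = "{http://www.w3.org/2005/Atom}"

def extract_cve_ids(atom_xml: str) -> list[str]:
    """Return CVE identifiers found in the entry titles of an Atom feed."""
    root = ET.fromstring(atom_xml)
    ids = []
    for entry in root.iter(f"{ATOM}entry"):
        title = entry.findtext(f"{ATOM}title", "")
        ids.extend(re.findall(r"CVE-\d{4}-\d{4,7}", title))
    return ids

def poll_once() -> list[str]:
    """Fetch the feed and return any CVE IDs seen (makes a network call)."""
    with urllib.request.urlopen(FEED_URL) as resp:
        return extract_cve_ids(resp.read().decode("utf-8"))
```

A distro-side cron job could diff poll_once() against what's already packaged. The point is only that the subscribe-and-watch half is cheap; the hard part, as discussed above, is what the kernel side chooses to announce.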
akerl_ 1 hours ago [-]
> Could be done without revealing what the vulnerability actually is in most cases
No it can’t. The bad actors that should actually worry most people are actively combing through commits on mainstream codebases, using a combination of automation/AI and manual review to pluck vulns out by their remediations.
troad 19 hours ago [-]
Why is it the job of the kernel to notify the distros? Why isn't it the job of the distros to keep up on upstream security disclosures?
Expecting a FOSS project to go track down all of its (millions of?) users seems like a very unreasonable expectation, and is well outside of their scope of responsibility.
People have gotten so used to the Github flavour of free-labour, social-network-style FOSS that they've forgotten what all those LICENSE files actually say, which is to make it explicitly clear that the devs are not responsible to you for your issues, up to and including the software setting your house on fire. If you don't like it, you don't have to use it.
plg94 18 hours ago [-]
> Why isn't it the job of the distros to keep up on upstream security disclosures?
They can't, because (responsible) security disclosures are private, _not public_.
That's the whole point of the system: notify the developers in private ahead of time (usually 30, 60 or 90 days) so they can write, test and roll-out the fixes before you release the info to the whole world. This is to minimize the time between when bad actors gain access to the exploits vs. when users install the patch.
So "keeping up on security disclosures" cannot ever be a 'pull' process.
Usually the maintainers of the big distros are part of (private) security mailinglists and receive such info. Just not in this case it seems.
krzyk 13 hours ago [-]
It would be best if distros kept tabs on kernel changes and updated as soon as possible when they see a security issue fixed.
Sending emails to some big distros would still result in e.g. Gentoo not getting that info, because they are not a big distro.
maybewhenthesun 12 hours ago [-]
The problem is that the kernel devs (correctly imo) consider all bugfixes security fixes. So the distros need to decide for themselves which ones are important enough to warrant an update. Apparently this one had a quite unclear commit message, so its importance was missed.
Not ideal, but also: shit happens? It's always a balancing act choosing the lesser of multiple evils and most of the time it seems to work ok-ish, which is probably the best we can hope for ;-P
gpvos 11 hours ago [-]
The kernel maintainers don't flag "security fixes" as special, and they have a well-thought-out reason for that, see many other comments in this thread.
__bjoernd 10 hours ago [-]
That, and they flag pretty much any random patch with a CVE these days, making it harder for distro maintainers to keep up.
For this specific "bug" they took care to not mention any security angle in the commit message, making it extremely hard for an outsider to even realize this was a critical patch. I assume this was because they wanted to push the fix without breaking embargo.
bathtub365 18 hours ago [-]
Where do you suggest they should have kept up on this disclosure?
18 hours ago [-]
qotgalaxy 19 hours ago [-]
> Expecting a FOSS project to go track down all of its (millions of?) users seems like a very unreasonable expectation, and is well outside of their scope of responsibility.
The post you are responding to says that it would be nice if they copied literally one mailing list.
steve1977 12 hours ago [-]
> a notification should have gone out from the kernel team to a curated list of distro security folk
Who would curate that list though? You don't need permission from the kernel team to spin up a new distro. I can go and create fork of Debian or Arch or whatever today and the kernel team would never know (and neither should they).
This is completely in the responsibility of the distros. If you don't like this model, use something like FreeBSD.
mort96 12 hours ago [-]
Sounds like a job for the Linux Foundation maybe?
You don't need anyone's permission to make a distro, that's true, but if you notify Debian, Canonical, Fedora, Red Hat and Arch you're covering a very large fraction of users; way more than today's 0%. In cases like this, perfect is the enemy of the good.
DarkUranium 7 hours ago [-]
The Linux Foundation hasn't been about Linux (except marginally) in a long while, if ever.
The name is a misnomer.
dotancohen 11 hours ago [-]
A rogue actor may create a new distro, maybe for some niche use case such as accessibility or retro gaming. After acquiring enough false (and even some real) users that the Linux Foundation accepts them as a notifiable distro maintainer, this maintainer could then pwn machines before the exploit is made public.
mort96 11 hours ago [-]
I didn't say all distros should be notified, for that exact reason. I listed a handful of major distros.
steve1977 11 hours ago [-]
Who gets to decide who the lucky few are?
mort96 10 hours ago [-]
Sounds like a job for the Linux Foundation maybe?
lillecarl 10 hours ago [-]
Human beings
steve1977 10 hours ago [-]
Qualified by what?
Dylan16807 6 hours ago [-]
Are you implying it requires expertise to figure out the ten (plus or minus a factor of two) biggest distros? I think most people that understand the context of the question can figure out pretty similar lists.
danlitt 11 hours ago [-]
Rather than the current situation, where they can pwn machines after the exploit is made public?
dotancohen 9 hours ago [-]
Yes. After the exploit is made public, the window of opportunity closes quickly.
danlitt 5 hours ago [-]
Not if people don't get notified!
aragilar 11 hours ago [-]
Uh, there is a list, named "linux-distros", which is for this purpose (and I think it's for more than just Linux, e.g. I believe it was used for the xz vuln).
Given this was announced when backports weren't ready (and given the POC was at least opaque if not obfuscated), I'm getting the vibe fixing the vuln wasn't as high a priority as making a media splash.
j16sdiz 8 hours ago [-]
From TFA:
> Note that for Linux kernel vulnerabilities, unless the reporter chooses
> to bring it to the linux-distros ML, there is no heads-up to
> distributions.
so, no, the `linux-distros` list doesn't solve the problem.
jamespo 11 hours ago [-]
The impacted user count of your Debian fork with a custom-compiled kernel would probably not be more than 1, however.
staticassertion 22 hours ago [-]
> they are in a much better position to coordinate and communicate with the maintainers than random reporters are.
They openly refuse to do this and have been given authority by MITRE to work against any such process.
john_strinlai 22 hours ago [-]
right, which is why it is confusing that the animosity is aimed at the reporters rather than the kernel security team.
thayne 12 hours ago [-]
I think both parties share some blame here.
expedition32 19 hours ago [-]
Not really confusing. Linux is a sacred cow.
There would be a lot of people gloating if this happened to MS.
stackghost 18 hours ago [-]
Microsoft has a long and sordid history of cheerfully doing anything they can to fuck everyone over just to make a few more percentage points of profit.
Linux is a free kernel that literally revolutionized the computing landscape.
dwattttt 15 hours ago [-]
Yes, this is the sacred cow status being referred to.
maybewhenthesun 12 hours ago [-]
It might be a sacred cow, but at least deservedly so. There is imho a difference between accidental incompetence (debatable, even) and active malice. Microsoft has done a lot of the latter so gets bashed more, no surprise there.
Brian_K_White 8 hours ago [-]
"You keep using that word..." or term in this case.
There are only 2 words in this term, and neither one even slightly applies.
A sacred cow is called a sacred cow because there is no reason for it to be sacred.
Linux is perfectly subject to criticism, and so not at all sacred.
Linux has earned a stunning amount of respect and gratitude by actually providing stunning utility and quality. I.e., it's not just a random object like a cow that everyone decided to worship for no reason.
Spoken as a freebsd user who has plenty of critiques of the entire linux ecosystem.
dwattttt 7 hours ago [-]
> Linux has earned a stunning amount of respect and gratitude by actually providing stunning utility and quality. IE, it's not just a random object like a cow that everyone decided to worship for no reason.
I agree.
> A sacred cow is called a sacred cow because there is no reason for it to be sacred.
Here we diverge. Linux earns sacred cow status when people interpret legitimate criticism of it as an attack that must be debunked or dismissed. And there's plenty of that happening in this forum; you may not be treating it as a sacred cow, but plenty of people are.
And to expound on why it even matters, it does a disservice to Linux to treat it this way: if you can't engage with its flaws, you'll never help fix them, and instead attack people who try.
josefx 16 hours ago [-]
What process? Wasn't the default state of things to just let any random person off the street spam vulnerability reports without validation or quality control?
13 hours ago [-]
Denvercoder9 22 hours ago [-]
Two things can be true simultaneously: the Linux kernel ecosystem should have done better at communicating this to their downstreams, and publicly sharing the exploit was irresponsible.
It is not the responsibility of the initial reporter to communicate to distributions, but the fact that those responsible failed to do that, doesn't give everybody else a free pass.
da_chicken 21 hours ago [-]
No, this was already timed disclosure. This is very common and widely accepted. 90+30 is what Google Project Zero uses, for example. The security researcher has met their ethical requirements already. This is entirely on the kernel's security team for failure to communicate downstream. That is their responsibility.
The thing is, malicious actors are already monitoring most major projects and doing either source analysis or binary analysis to figure out if changes were made to patch a vulnerability. So, as soon as you actually patch, you really need to disclose, because all you're doing by not disclosing the vulnerability is handing the bad actors a free go. The black hats already know. You need to tell the white hats, too, so they can patch.
Denvercoder9 21 hours ago [-]
I'm not advocating for delaying the disclosure at all; my point is, if you see your initial disclosure to the kernel didn't go anywhere, to be responsible is to put in a little extra effort to ensure the fix is picked up before you disclose.
da_chicken 20 hours ago [-]
"Didn't go anywhere"? The kernel devs patched it! They patched it weeks ago! The kernel security team needs to communicate security problems in their own releases, because that is where the distros are already looking.
Requiring the security researcher to do it is insane. Should a security researcher that identifies a vulnerability in electron.js need to identify every possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.
tremon 19 hours ago [-]
The kernel devs patched it! They patched it weeks ago
FTFA:
> I see that on the 11th of April 6.19.12 & 6.18.22 were released with the fix backported.
> Longterm 6.12, 6.6, 6.1, 5.15, 5.10 have not received the fix and I don't see anything in the upstream stable queues yet as I write.
I wouldn't go so far as to call this "the kernel devs patched it". Virtually none of the kernels that distros are actually using today have received a fix. This looks like an extremely lackluster response from the kernel security team.
Pretty much the only non-rolling distros that are shipping a fixed kernel are Fedora 44 and Ubuntu 26.04, both released in the last few weeks. Their previous releases both shipped with Linux 6.17, which is still vulnerable today!
tptacek 16 hours ago [-]
None of this impacts disclosure norms. One important reason the clock starts ticking faster once any patch lands is that for serious attackers, the patch discloses the vulnerability. That's quadruply so in 2026, when many orgs are automatically pumping Linux patches through LLM pipelines to qualify them for exploitability.
But it's been at least 15 years since "reversing means patches are effectively disclosures legible mostly to attackers" became a norm in software security. And that was for closed-source software (most notably Windows). The norms are even laxer for open source.
tremon 6 hours ago [-]
I'm not sure where in my post I challenged existing disclosure norms?
tptacek 3 hours ago [-]
I don't know if you are or you aren't, but that's the overall topic of the thread, and I'm just clarifying that the details you're adding don't change any of the norms of disclosure.
lizknope 6 hours ago [-]
I'm on Fedora 43 and tried to hack myself with the python script. It didn't work on kernel 6.19.12-200.fc43.x86_64 which has a build date of April 12, 2026
opello 19 hours ago [-]
> Should a security researcher that identifies a vulnerability in electron.js need to identify _every_ possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.
But this is a false comparison, right? The scope of "Linux distributions" and "electron apps" are orders of magnitude different. If the reporter spot checked one or two of the most popular distributions to see if fixes had been adopted, that seems like an extra level of nice diligence before publicizing the details.
It doesn't seem "insane" as much as "not the most efficient path" as has already been well argued. But it also doesn't seem unreasonable to think in a project of the scope of the Linux kernel, with the potential impact of fairly effective(?) privilege escalation, some extra consideration is reasonable--certainly not "insane" at the very least?
tptacek 19 hours ago [-]
They embargoed their vulnerability for 30 days after Linux landed a kernel patch. They did their part. You will always be able to come up with other things they could do for you, and they will always at first blush sound reasonable because of how big and important Linux is, but none of those things will be responsibilities of the vulnerability researcher. Their job is to bring information to light, not to manage downstreams.
About half the thread we're on reads as if the commenters believe Xint made this vulnerability. They did not: they alerted you to it. It was already there.
opello 19 hours ago [-]
I realize you've been championing this idea in the thread, and I admire it because I also recognize the misdirected blame. Please understand I do not harbor "blame" for the researchers.
> Their job is to bring information to light, not to manage downstreams.
The researchers are also members of a community in which more harm than is necessary may be dealt by their actions. Nuance must exist in evaluating "reasonable" and "responsible" in the context of actions.
tptacek 19 hours ago [-]
I strongly disagree. I want the information. I don't want to wait longer to find out about critical vulnerabilities so that researchers can fully genuflect to whatever Linux distribution norms people on message boards have. Their "actions" were to disclose a vulnerability that already existed and was putting people at risk. It's an absolute good.
If it helps you out any, even though my logic was absolutely the same and just as categorical in 2012 as it is today: there are now multiple automated projects that run every merged Linux commit through frontier models to scope them (the status quo ante of the patch) out for exploitability, and then add them to libraries of automatically-exploitable bugs.
People here are just mad that they heard about the bug. Serious attackers had this the moment it hit the kernel. This whole debate is kind of farcical. It's about a "real time" response this week to a disaster that struck a month ago.
opello 19 hours ago [-]
I do get that, this era of automation is too responsive to not go public to provoke action. I think I might just be wistful of an era in which the alternate path might have made a difference. Sorry to pile on.
tptacek 19 hours ago [-]
You're not piling on and I'm glad to have the opportunity to expand on my point.
tptacek 20 hours ago [-]
In the airless void of a message board thread, of course they should. What does it cost a commenter to demand that?
john_strinlai 22 hours ago [-]
>publicly sharing the exploit was irresponsible
they did it in the established industry standard way that probably every single security researcher you can think of follows (for good reason, i would add).
whoever did the marketing on "responsible disclosure" was a genius.
tptacek says it much better than me: ""Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules."
Denvercoder9 21 hours ago [-]
In my world, responsibility is not just checking a box of following industry practice. Responsibility, as Wikipedia puts it on their social responsibility page, is working together with others for the benefit of the community. And yes, sometimes that's a bit larger burden than would ideally be the case. It's an imperfect world, after all -- and let's not forget the disclosure as it happened also placed a larger burden than ideal on people scrambling to patch.
And it's not as if I'm asking for a lot of effort. One mail to the security team of a popular distro "hey, we have found this LPE that we'll release with exploit next week, it's patched upstream already in this commit, but you don't seem to have picked it up" would likely have been enough.
da_chicken 21 hours ago [-]
No.
The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile. Look at exactly what happened with BlueHammer this month. The security researcher went full disclosure because Microsoft didn't listen to their reports.
Disclosure is vital. It's essential. Because the truth is, if a security researcher has found it, it's extremely likely that it's already been found by either black hats or by state actors. Ignorance is not actually protection from exploitation.
The security researcher also has a responsibility to the general public that is still actively using vulnerable software in ignorance. They need to be protected from vendor and developer negligence as well as from exploits. And the only way to protect yourself from an exploit that hasn't yet been patched is to know that it is there.
Denvercoder9 21 hours ago [-]
The situation with e.g. BlueHammer is fundamentally different: there, the only party that could act on it (Microsoft) ignored them. In this case, the parties that could act on it weren't notified at all.
I'm also not proposing delaying the disclosure to the general public at all. They already waited 30 days with that, that's fine. Just look a bit further than your checklist of only contacting upstream, and send a mail to the distributions if they haven't picked it up a week or two before.
tptacek 20 hours ago [-]
Downstream vulnerability disclosure is a negotiation between the downstreams and the upstreams. It is not the job of a vulnerability researcher to map this out perfectly (or at all).
sersi 10 hours ago [-]
Yes and that's why the current system where security researchers are expected to reach out to the distro mailing list is flawed and instead there should be a defined pipeline for the kernel security team to give a heads up.
throw0101a 20 hours ago [-]
> The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile.
[citation needed]
Is there any evidence that Linux distros (specifically) act in this way? Or a particular distro?
john_strinlai 20 hours ago [-]
>[citation needed]
there is ~3 decades of citations you can look at, spread out over every security mailing list, security conference, etc. that you can think of.
"Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. [...] "We used this model of disclosure for over a decade, and the results weren’t particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely! We were optimistic that vendors could do better, but we weren’t seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users.
[...]
While every vulnerability disclosure policy has certain pros and cons, Project Zero has concluded that a 90-day disclosure deadline policy is currently the best option available for user security. Based on our experiences with using this policy for multiple years across thousands of vulnerability reports, we can say that we’re very satisfied with the results.
[...]
For example, we observed a 40% faster response time from one software vendor when comparing bugs reported against the same target over a 7-year period, while another software vendor doubled the regularity of their security updates in response to our policy."
>Linux distros (specifically) act in this way
carving out special exceptions based on nebulous criteria is a bad idea. 90+30 is what has been settled on, and mostly works.
da_chicken 20 hours ago [-]
Really?
Because a situation where the development team fails to appreciate the severity of a security vulnerability, and where the established procedure requires the researcher rather than the kernel team to communicate with downstream users, is already a major failure of process. Security is not just patching the vulnerability, and it seems that the Linux kernel developers or the Linux kernel security team do not understand that.
This is the result of that failure.
If this were any other software, we'd be here with pitchforks and torches. The researcher gave the developers timed disclosure, and even waited until after the developers had patched the issue. And... it's still a problem.
x4132 21 hours ago [-]
so what? we should never disclose anything? this will only result in companies suppressing disclosure and leaving vulnerabilities unpatched.
paulnpace 7 hours ago [-]
> the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.
It's 2026. We're more than 30 years into the Linux ecosystem. I don't believe this bullshit for a moment.
Given how trivially users can implement mitigation, distributions could have done _something_ to protect their users prior to publication date. A handful of messages is all that was required, not "every single downstream" - that is a straw man.
The publication of a bug that trivially gains root on an incredible number of Linux installs that was discovered using an A.I. tool prior to any of the "downstreams" implementing a fix is intentional. I speculate the motivation is free promotion of the A.I. tool.
john_strinlai 5 hours ago [-]
>distributions could have done _something_ to protect their users prior to publication date.
yeah, distributions could be following the kernel updates more closely and they would have been patched prior to publication. mainline was patched 30 days before publication.
it is not the reporter's responsibility to babysit the linux distributions.
paulnpace 3 hours ago [-]
And here, with this comment, we see how the overall system functions: nobody actually cares what is going on with anything outside of themselves. It is a large group of individualized nihilists with total disregard to everyone, and you will provide lengthy justifications to maintain this system, as is.
john_strinlai 1 hours ago [-]
>nobody actually cares what is going on with anything outside of themselves.
"not caring" would be not disclosing the vulnerability at all, and instead selling it to the highest bidder on one of the private markets
which, given the ridiculous and undeserved lashings the researchers are receiving from people completely outside of the security ecosystem, i would not be surprised if they moved in that direction. they would certainly make more money.
tptacek 3 hours ago [-]
It is a large group of people with their own incentives, and you're surprised they aren't self-organizing (or accepting outside pressure) to align with your own incentives.
akerl_ 2 hours ago [-]
Ah yes, all those nihilists spending their spare time volunteering as developers and maintainers of open source projects.
ori_b 23 hours ago [-]
If the maintainers were unresponsive, sure -- but it seems slightly hard to buy that a responsible reporter trying to make a big splash and a good impression wouldn't first check "did this make it out to the distros?" before making sysadmins' days real shitty, even if technically they could point fingers at other parties. At which point, if they're paying any attention at all to what they reported, they may have realized that a mistake was made.
john_strinlai 23 hours ago [-]
it's an industry standard disclosure process. 90 days after reporting, or 30 days after the patch lands, the vuln is disclosed.
the linux kernel team is in a 10000% better position to communicate to and coordinate their downstreams. it seems completely backwards to me to suggest that the reporter should be responsible for figuring out every possible downstream and opening up separate reports to each of them.
the kernel team should have a process/channel to say "this is important! disclosure is in 30 days" that is received by distro security teams. because this is not the first or last time the kernel will have a local privilege escalation. hoping that every reporter, forever in the future, will take the onus on themselves is a recipe for disappointment.
bragr 22 hours ago [-]
The problem is that if you make too big of a deal about a particular patch, then someone just reverse engineers the vuln from the fix and your responsible disclosure period doesn't exist anymore.
Gentoo has to take some blame too for not keeping all the kernels they maintain patched in a timely way.
tremon 19 hours ago [-]
> Gentoo has to take some blame too for not keeping all the kernels they maintain patched in a timely way.
How do you figure that? From what I could tell from the earlier post, the fix has only been backported to 6.18 and later, and as TFA indicates the distros were not informed of the security implications of this fix. All distros shipping a major kernel version from more than a year ago -- and that includes all LTS kernels -- are vulnerable, regardless of how "timely" their patch schedules follow upstream.
bragr 3 hours ago [-]
>All distros shipping a major kernel version from more than a year ago -- and that includes all LTS kernels -- are vulnerable, regardless of how "timely" their patch schedules follow upstream
To be fair, I question the wisdom of managing kernels like that across all distros.
john_strinlai 22 hours ago [-]
you minimize this with the curated contact list.
the baddies are looking at every patch anyways.
ori_b 22 hours ago [-]
Yes, it's just incompetence from everyone involved, not malice. The company making the disclosure doesn't actually care, and the kernel processes are ineffective.
tptacek 22 hours ago [-]
No, it's incompetence from everyone involved except the company making the disclosure, which, despite the fact that the existing norms are not in fact binding (like people downthread seem to believe), they followed.
18 hours ago [-]
ori_b 22 hours ago [-]
Really? It seems very odd to not check in on the status of the fixes, even if it's technically possible to pass the blame to other people.
Even if the only purpose of looking at the status to make yourself look good in marketing materials, it's surprising that it didn't happen.
9question1 21 hours ago [-]
`it's technically possible to pass the blame to other people` presupposes that the blame belongs to the reporter unless effort is taken to "shift" it. This is just an inaccurate worldview as many people have pointed out clearly in this discussion. If there's a vulnerability in software the blame lies with people who wrote and maintain the software, not someone who finds and discloses a vulnerability. The person who should `check in on the status of the fixes` is the person who owns the thing being fixed, which is very much the kernel and distro maintainers and not the security researcher. It is you who are willfully shifting blame to an innocent party
Joker_vD 21 hours ago [-]
One of the reasons this unavoidable deadline was invented is that the alternative is that one company (or all of them) can simply decide to ignore the vuln report, and then the vulnerability will stay forever undisclosed and forever out there in the wild. And prisoner's dilemma suggests that most companies would choose "do nothing" in this scenario: they don't have to do anything, and if the vuln stays undisclosed, it probably won't be exploited anyhow. Win-win!
ori_b 21 hours ago [-]
I'm confused. Can you explain how this applies to the current situation, where no vuln reports were submitted to the groups responsible for distributing patches?
john_strinlai 21 hours ago [-]
>where no vuln reports were submitted to the groups responsible for distributing patches?
the vulnerability report was submitted to the kernel security team and appropriate kernel maintainers. those are the people responsible for patching the kernel, which they did 30 days ago.
pseudalopex 17 hours ago [-]
> those are the people responsible for patching the kernel, which they did 30 days ago.
They patched 2 of 7 supported kernels.
dwattttt 12 hours ago [-]
Guess the other supported kernels aren't supported enough
ori_b 21 hours ago [-]
I see, may the people who are responsible for the infrastructure you depend on be less concerned about shifting blame than you are.
john_strinlai 21 hours ago [-]
imagine you use a dependency in your code. like left-pad. and some vulnerability is found in left-pad.
is the reporter of that vulnerability responsible for finding and submitting a vulnerability report to every single piece of software that uses left-pad? all ~millions of them?
or do they submit the report to left-pad, get them to fix it at the source, and trust that the people relying on left-pad will update their software like they should when they see a security-relevant update is available?
Joker_vD 19 hours ago [-]
> the groups responsible for distributing patches?
Those groups don't exist, to my knowledge. And probably can't, realistically speaking.
13 hours ago [-]
zamalek 1 days ago [-]
The disclosure was more about marketing than security. From the disclosure page:
> Is your software AI-era safe?
> Copy Fail was surfaced by Xint Code about an hour of scan time against the Linux crypto/ subsystem. [...]
> [Try Xint Code]
More chaos makes their product seem even more attractive.
tptacek 22 hours ago [-]
I worked at the industry's first commercial vulnerability lab (Secure Networks) in the mid-90s, and many of my friends at the time founded X-Force. Commercial vulnerability research has always been about marketing: marketing pays for the vulnerability research. That doesn't make it any less prosocial.
ramon156 8 hours ago [-]
I created an account for xint code, wtf is this UX?
I get put into a read-only dashboard with ZERO info. is this live? is this static? how do I use it? the API button just leads me to a swagger doc.
esseph 1 days ago [-]
Your advertising for them on HN would help them too, I bet.
jasonmp85 1 days ago [-]
Does it? Now that I see their name again in this context they're blacklisted for life.
john_strinlai 23 hours ago [-]
hope you are also blacklisting google's project zero, and practically every other major player in the vulnerability reporting space, as all use roughly the same bog standard 90+30 policy.
this was a failure of the kernel security team, and their stance on communicating security issues with their downstreams.
eaf7e281 1 days ago [-]
Same. They do become famous, but not in a wholly positive way.
esseph 21 hours ago [-]
I used to think the context of the fame mattered. At least in the US, it does not.
Hell, Crowdstrike is still purchased.
bathtub365 18 hours ago [-]
What are they blacklisted from exactly? The benefit you get from them forcing vendors to make their software more secure?
selectively 1 days ago [-]
Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit. Just fyi. Be glad it was disclosed at all. Be glad a patch was available prior to release.
lambda 1 days ago [-]
If they want to be seen as responsible rather than opportunistic, then yeah, they should do a proper coordinated disclosure.
Sure, they have no legal obligation to disclose, but we all also have no legal obligation to buy their services. Blacklisting bad actors like this is the right move to discourage this kind of behavior.
john_strinlai 21 hours ago [-]
>they should do a proper coordinated disclosure.
they did a proper coordinated disclosure, following the industry standard 90+30 process. that is why the exploit dropped 30 days after the patch landed.
the kernel team should have communicated with their downstream about the importance of the patch. that is the kernel security team's responsibility -- and they are much better positioned to do that than crossing your fingers and hoping every reporter will contact every distro every single time there is a vulnerability.
there are very good reasons disclosure works this way, backed by a couple of decades of debate about it.
lstodd 10 hours ago [-]
how many times does it have to be said that it is impossible for the linux kernel to communicate with anything but a minuscule portion of its downstream, and _that_ has been done?
selectively 1 days ago [-]
Who cares about how you are seen when you are selling 0day for big bucks? The bad actor makes more money than the 'legitimate' one without breaking any law. Punishing someone who didn't alert distros despite a patch being available encourages the company to simply find flaws and sell them for profit - it pays more to begin with.
_yttw 1 days ago [-]
If they want to take advantage of disclosure for marketing, they're either going to need to accept the norms around responsible disclosure, or they're going to need to accept how shirking those norms will come off. That's life in society. Sometimes it's annoying and sometimes it doesn't feel rational, but these norms have been negotiated throughout the history of our industry and are the way they are for reasons good and bad.
I just don't see the point in complaining about how shirking the norms of your industry will make you look irresponsible. I don't really care that they could have decided to sell the vulnerability instead. It isn't material.
tptacek 1 day ago [-]
It is absolutely not true that viable commercial vulnerability labs need to "accept the norms around responsible disclosure". There are no such norms. "Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules. It was fantastically successful at that, and it's worth pushing back on at every opportunity.
Tavis Ormandy dropped Zenbleed right onto Twitter. He's doing fine. You can blacklist him if you want; I imagine he's not going to notice.
SCHiM 24 hours ago [-]
Microsoft's policy is: "if you contact us with a vulnerability, you automatically agree to the terms of our responsible disclosure policy", which includes waiting 30 days after a patch was created, and says nothing about how long that process takes.
There is actually no way to give them a friendly heads up, and then do your own thing. The only way not to be bound is by not sending them any notification at all...
leni536 23 hours ago [-]
I wonder if "if you contact us... you automatically agree" would hold up in court. That's just ridiculous.
You can email without agreeing to anything. But for a serious issue Microsoft would obviously try and track down who you are and what jurisdiction you are in.
> The Microsoft Bug Bounty Programs Terms and Conditions ("Terms") cover your participation in the Microsoft Bug Bounty Program (the "Program"). These Terms are between you and Microsoft Corporation ("Microsoft," "us" or "we"). By submitting any vulnerabilities to Microsoft or otherwise participating in the Program in any manner, you accept these Terms.
Who knows if it's enforceable.
leni536 8 hours ago [-]
This seems to be sloppy wording, with the intent of "we only offer the bounty under these terms". Maybe my interpretation is too charitable.
prmoustache 21 hours ago [-]
Since no contract is signed, this is just pure fantasy on your part.
_yttw 1 day ago [-]
You're right, they don't need to. They have an alternative, to accept what people say or think about them in response. That's what I said.
expedition32 19 hours ago [-]
So how do we feel about Linux distributors who have their heads up their asses and sat on their hands for 30 days?
selectively 1 day ago [-]
Those norms do not exist. Those are people asking companies to do stuff to benefit the person complaining for free, and many companies will not do that.
_yttw 1 day ago [-]
It seems to me you're unaware of them, but there are strong norms around disclosure. They've been discussed for decades. It is the expectation that vendors would be notified in a scenario like this.
selectively 1 day ago [-]
No, there are users who want those to be norms. Qualified researchers happily sell substantive vulns to people who pay (Governments/Cellebrite and companies like that) enough to quell any complaint.
_yttw 1 day ago [-]
Which is again, irrelevant to the question of how disclosure works and what expectations there are around it because that is not disclosure and is not what was being discussed.
dirasieb 1 day ago [-]
it’s called building and preserving a high trust society, you wouldn’t understand
DaSHacka 20 hours ago [-]
How does someone being incentivized to sell a vulnerability to a private organization over disclosing it publicly preserve a "high trust society"? Do you mean in the context of a "deceptively high-trust society"?
Those private actors aren't planning to sit around and hold onto these exploits they've hoarded forevermore; they're obviously paying for them so they can one day use them.
lrvick 23 hours ago [-]
Unfortunately this is correct. As a security researcher I have set millions in profit on fire by reporting vulns to projects that offer no bounties rather than selling to the highest bidder. I keep doing it because it is the right thing to do, but I would not blame someone who needs to feed their family for making a different choice.
We must get public funds to reward ethical disclosure of big-impact vulns like this.
selectively 21 hours ago [-]
Harder and harder to get good policy like what you describe when tech-adjacent people loudly argue for criminal penalties for anything other than coordinated disclosure :(
robocat 11 hours ago [-]
> criminal penalties
Those mostly cover citizens within a very limited set of jurisdictions.
Beyond that, there's only a chance at extradition.
jojomodding 24 hours ago [-]
> are free to sell 0day for profit.
This is not true in many jurisdictions.
lrvick 23 hours ago [-]
Anyone can sell a vuln in any jurisdiction and never be caught. Let's not pretend the law is actually worth a damn here.
We need an anonymous bounty system.
selectively 20 hours ago [-]
Are you claiming that if I sell 0day through a broker to the national Government of a given jurisdiction, that the national Government of that jurisdiction is going to criminally penalize me?
If so, that's a bit naive. In the actual world, that buyer wants to buy more stuff from me, not penalize me.
ux266478 23 hours ago [-]
mmmmmm, no it would seem like they are absolutely under a social obligation to not do that.
kelnos 1 day ago [-]
I'm pretty sure they have a legal obligation in most jurisdictions not to sell 0days for profit.
And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems. (I'm not saying "responsible disclosure" is the correct way to do that, but hoarding vulnerabilities and exploits and selling them to the highest bidder certainly isn't.)
This is how society needs to work.
tptacek 20 hours ago [-]
It is categorically false that there's a legal obligation not to sell vulnerabilities. There's an obligation not to knowingly sell them directly to ongoing criminal enterprises. That's it. Plenty of people make fuckloads of money selling vulnerabilities for exploitation rather than repair.
lrvick 23 hours ago [-]
Let me make you aware of zerodium. A broker anyone can sell vulns to, that sells to unspecified buyers you do not need to know about.
Quarrel 14 hours ago [-]
FWIW, zerodium shut down in 2025.
Or at least went dark ..
selectively 3 hours ago [-]
Just went dark.
selectively 21 hours ago [-]
(The buyers are the NSA, the IDF, Cellebrite, NSO and its successor corporation and that kind of thing. Depends on what you are offering)
You'll learn who the buyers are if you routinely have the really good stuff to sell! If you are offering iOS zero click on a semi-regular basis, the buyer is going to want to try to deal with you directly and preferably offer you a more regular form of employment, if you are interested. Some national governments may offer certain benefits to you, depending on your situation.
All depends on what you have to offer. If you were able to offer this https://arstechnica.com/security/2025/09/microsofts-entra-id... or something of that magnitude, a lot of problems in your life would just go away. The buyers would all be Five Eyes and the intelligence gain of having that kind of access even briefly is priceless.
In a more Western-centric context, imagine if you had a flaw like that, same 'no logs are generated' and 'every single customer account is accessible' but the impacted vendor was Alibaba Cloud. The researcher would get to name their price. That's the real world, that's the world we share. We shouldn't be blind to that.
mschuster91 24 hours ago [-]
> I'm pretty sure they have a legal obligation in most jurisdictions not to sell 0days for profit.
it wasn't sold for profit, it was openly disclosed.
> And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems.
All that "responsible disclosure" does is keep people from demanding better.
estimator7292 23 hours ago [-]
[flagged]
grayhatter 24 hours ago [-]
> Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit.
Uh... no? If you mean legally, some people might be, depending on jurisdiction. But also, ethically? Yes, researchers are ethically obligated to disclose responsibly.
> Just fyi.
...
> Be glad it was disclosed at all. Be glad a patch was available prior to release.
I am glad that a patch was available. Equally I can be glad that the linux community is strong enough to respond quickly, while also being angry that this person behaves unethically.
Likewise, when people in my industry behave poorly or unethically, I'm now the person ethically obligated to both point it out and condemn it. Not to become an apologist demanding I should be happy watching bad things happen, when much of the fallout could have been prevented with a bit less incompetence and ignorance.
bigbadfeline 22 hours ago [-]
> Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit. Just fyi. Be glad it was disclosed at all.
I'm so glad these so-called "researchers" aren't totally evil, I'm so grateful they're only half evil, give them a lollipop.
Whatever, the way they disclosed it isn't much different from no disclosure at all - the exploit would have been identified in the wild and fixed soon thereafter.
"Researchers"...
john_strinlai 22 hours ago [-]
the way they disclosed it is the industry standard. think of the biggest security research teams you know (e.g. google), and they follow the same process.
non-security people always seem to get up in arms about it, but there are very good reasons why the industry has landed on the process it has, which has been hashed out over a few decades.
selectively 21 hours ago [-]
There are two options:
1. Status quo. Researchers are free to disclose to a vendor, free to sell vulns to legitimate companies, free to do full disclosure if they want. This situation benefits security. Researchers are able to pay their bills while also doing meaningful research into OSS projects that are unable to fund the kind of security audit they need. Harm reduction, of sorts.
2. Everyone is a bad actor. No one is going to do this work for free/for a bounty. Horrible flaws will be found and shared with ransomware gangs and the like. 0day will sell for a percentage of the ransom winnings. Researchers will live like kings, everyone else will suffer.
Which do you prefer?
eschaton 1 day ago [-]
They should have a legal obligation to engage in coordinated/responsible disclosure, and it should be a crime to sell or disclose a 0day to anyone other than a state-designated security organization or the vendor/provider.
If it won’t be handled through criminal law then it’ll be handled through civil litigation: Anyone who was exploited as a result of this disclosure should sue the discloser for contributing to the damage they’ve suffered.
CSSer 1 day ago [-]
Yes, exactly. Name and shame.
true_religion 1 day ago [-]
Same. I did not know who they were, but now they have been named and shamed. Not every publicity is good.
Scharkenberg 12 hours ago [-]
It is the opposite for me. I did not know who they are and now I have a positive opinion of them.
bathtub365 18 hours ago [-]
To be clear, the vulnerability existed in Linux, not in Xint Code. It existed whether this group disclosed it or not. Knowledge of it and exploits for it may already have been bought and sold among various groups with various motives, including crime, terrorism, or cyberwarfare, who likely made good money off it if so.
In that world, the vulnerability has more value to those who seek to exploit it for their own motives, regardless of the consequences. They hope that no one else stumbles on it and fixes it, preventing them from continuing to use it to do bad things.
In the world where it is disclosed, there is more value in fixing the vulnerability as the maintainer’s reputation is at risk (and potentially monetary loss or legal liability if they are shown to be negligent).
zamalek 17 hours ago [-]
Yes, and that's why we have the responsible disclosure protocol. It wasn't correctly followed here.
tptacek 16 hours ago [-]
There is no such thing as "the responsible disclosure protocol". There's really no such thing as "responsible disclosure" at all, but "the responsible disclosure protocol" is a term I have literally never heard before. (I've been a vulnerability researcher since the mid-1990s, for what it's worth.)
> In computer security, coordinated vulnerability disclosure (CVD, sometimes known as responsible disclosure)
I guess you can learn something new after 36 years.
If you are referring to what you quoted, your pedantry and sharpshooting would result in an incomplete English sentence: "that's why we have the responsible disclosure" is missing a noun. Now that we are firmly in worthless pedantry:
Protocol (n):
1.a. a system of rules that explain the correct conduct and procedures to be followed in formal situations
1.b. a set of conventions governing the treatment and especially the formatting of data in an electronic communications system
If you don't like what I said or disagree, poke holes in factual inaccuracies. However, in the reality that I am pretty sure we all share, responsible disclosure is a well established protocol that is followed by many security researchers, and was imperfectly followed here.
tptacek 5 hours ago [-]
I don't think you're going to bluff your way through this.
zamalek 5 hours ago [-]
From elsewhere.[1]
> You: No, I wouldn't, because my own preferences are towards immediate disclosure.
And there it is. You could have said "I don't think responsible disclosure is a good idea" and moved on, but now we have whatever the fuck this is.
Bluffing sure as hell beats being incapable of being wrong. I'll take it.
These researchers found a vulnerability in the Linux kernel. They could have just written a blog post and put it online, or not told anybody, or sold it. But instead they decided to tell the Linux kernel devs, and give them time to act before publishing.
And your beef is that you’ve decided they needed to also inform individual downstream projects that use the Linux kernel? Why? Which ones?
I'm all for lighting a fire under the developer's ass, but we live in an imperfect world and the biggest problem that we have is end-users. We may have applied the mitigation on day 0, and updated as soon as the kernel landed in our distro - and if some of us didn't, then we've even got savvy users in that "don't update fast enough" group (which is fine, which is human, but is said imperfection).
Major distros should at least have gotten a few days of notice for something this catastrophic. It doesn't help that the kernel is fixed if "normies" aren't able to access it on day 0. For reference, the standard is 30 for the developer to fix and 90 for it to land on machines. Even 30+7 would have been a substantial improvement.
Ethical security research involves ethics, and maybe they aren't referenced in university/college any more - but here's what I was taught: https://www.acm.org/code-of-ethics .
> 1.1 Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.
> [...] Computing professionals should consider whether the results of their efforts will [...] and will be broadly accessible.
> 1.2 Avoid harm.
> (Honestly, all of it)
> 2.3 Know and respect existing rules pertaining to professional work.
> 3.1 Ensure that the public good is the central concern during all professional computing work.
> People—including users, customers, colleagues, and others affected directly or indirectly—should always be the central concern in computing.
Maybe other code of ethics for CS exist; I'd like to know which ethics these ethical researchers were following.
john_strinlai 1 hours ago [-]
>For reference, the standard is 30 for the developer to fix and 90 for it to land on machines
no, the standard is 90 days from notification or 30 days from the patch date, typically whichever is sooner.
e.g.
> If a vendor patches a security issue 47 days after Project Zero notified
> the vendor about the vulnerability, details would be made public on day 77.
> If a vendor patches a security issue 83 days after Project Zero notified
> the vendor about the vulnerability, details would be made public on day 113.
please also note that you are blindly quoting wikipedia articles at people who either currently work in security research or used to work in security research. while we are not infallible, you should perhaps consider that we at least have real-life experience dealing with vulnerability disclosure processes, and aren't just learning about them today from wikipedia. when a room full of experienced professionals is telling you that you are misunderstanding something, that is a sign to step back for a second and maybe reconsider your position.
zamalek 13 minutes ago [-]
That's still extremely different to this in one of the GP comments:
> There is no such thing as "the responsible disclosure protocol".
And yes, I admit I got dragged down to their level and beat myself with a dumb stick in the process.
tptacek 49 minutes ago [-]
Hey! I still do SOME work in this space. :)
john_strinlai 45 minutes ago [-]
haha, for the record, the "used to" was primarily referring to myself, who now teaches the next generation instead of practicing! you are probably much more active in the space than i am nowadays
tptacek 3 hours ago [-]
You're trying to extrapolate on this specific scenario from Wikipedia pages. Have you done any of this work? What have you done when you've reported a vulnerability to an upstream with dozens of downstreams? When your teammates have? You keep talking about "protocols" and "commonly followed practice" and "codes of ethics". Tell us more about the codes, protocols, and practices in your shop.
Nobody, for what it's worth, is arguing that major distros shouldn't have gotten some kind of notice. The problem is that the entity responsible for doing that isn't the vulnerability research lab. In fact, as a general procedural point, researchers can't go contact downstreams. They might be able to do so in the specific case of Linux, but you've tried to spin that possibility into a binding obligation derived from established practices, which: no. That's not a real thing.
zamalek 6 minutes ago [-]
> possibility into a binding obligation
I never said "binding obligation," that is the first time "binding" has appeared in this discussion and was introduced by you. Once again claiming things I have never said. Doing what you are free to do can still be a shitty thing to do.
I am a bluffing moron who knows nothing, you win.
akerl_ 3 hours ago [-]
It’s a commonly followed practice for some people. Notably it’s what was done here: they coordinated disclosure with the Linux kernel devs. And now folks are angry that they didn’t also coordinate with yet more downstream projects.
> For reference, the standard is 30 for the developer to fix and 90 for it to land on machines.
You are strongly implying that keeping the vulnerability secret would be in keeping with what you quoted. But that's the rub. Many of us think the opposite. Not disclosing this would have been the violation.
zamalek 11 minutes ago [-]
> You are strongly implying that keeping the vulnerability secret is following of what you quoted.
Please don't put words in my mouth when I have clearly stated the contrary. I used the word "disclosure," that is very different to keeping things secret.
Scharkenberg 12 hours ago [-]
Which part was not correctly followed?
Lammy 1 day ago [-]
> It was extremely irresponsible
As a user and admin I disagree. Makes one appreciate what a masterful bit of lexical-engineering “Responsible” Disclosure is, kinda like “Secure” (from me, not for me) Boot — “Responsible” Disclosure is 100% about reputation-management for the various corporation/foundation middleman entities sitting between me and my computer.
Those groups don't care that my individual computer is vulnerable but about nobody being able to say “RHEL is vulnerable” or “Ubuntu is vulnerable”. The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk than to be surprised by the fix and hope nothing bad happened in that meantime.
Immediate public disclosure is the only choice that isn't irresponsible as far as I'm concerned.
BeetleB 24 hours ago [-]
So if I found a vulnerability that lets hackers withdraw all the money in your account without a trail on where the money went, you'd be fine with them disclosing it to the public at the same time as the bank learns about it?
Even when there is no known use of the attack in the wild (other than the security researcher's)?
> The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk
By the time you hear about it, the money could be gone because 1000 hackers heard about it from the researcher before you did.
> than to be surprised by the fix and hope nothing bad happened in that meantime.
Hope is not a good strategy here.
Lammy 24 hours ago [-]
Yep, I'd be fine with that. My bank has insurance, and my money would be returned.
Dylan16807 23 hours ago [-]
Seeing your other (rightfully flagged) reply I want to tell you as a neutral party that yes this is missing the point of the analogy. You're basically saying "I would simply hit the brakes on the trolley". It's not that they're so hubristic they think it's impossible to legitimately disagree with their argument, it's that mentioning insurance is sidestepping their argument entirely. You're not addressing the general idea of getting hacked and suffering the consequences of the hack.
xorcist 22 hours ago [-]
Just socialize losses and all is well.
What could possibly go wrong?
yesbut 20 hours ago [-]
that is basically how all large companies behave anyway. socialize the losses (bailouts, layoffs, negative economic impacts in the communities they reside, etc.) and privatize the gains.
selcuka 18 hours ago [-]
> that is basically how all large companies behave anyway
And do you agree with that behaviour?
yesbut 15 hours ago [-]
nope
JamesStuff 23 hours ago [-]
The bank's cost of insurance goes up, the cost of running an account goes up; how do we correct for this? Offer worse accounts to customers...
Lammy 22 hours ago [-]
Why do you assume banks would keep on doing the same old thing but paying more because of it? The cost would make them learn not to design systems where something like this hypothetical scenario was possible.
stavros 18 hours ago [-]
And what is the insurance in the Linux case, for which the analogy was being made?
Loudergood 18 hours ago [-]
Linux was informed properly, and the vuln was not disclosed until 30 days after the kernel was patched.
The real debate here is what went wrong with getting that info downstream, and whose responsibility was that?
ryan_n 24 hours ago [-]
You're missing the point (not sure if you're just being dense on purpose...). If your bank would just return the money then it's not a good analogy. If someone gains root access to your machine, presumably they can do damage that can't be undone. In other words, to continue the bank analogy, they would take all your money and you would have no way of getting it back. Presumably, you would not be ok with this. And even if, for some weird reason, you were ok with that, 99.9% of all other people would not be ok with it.
stonogo 24 hours ago [-]
Respectfully, I don't think they're missing the point. Banking, as an institution, has its flaws, but deposit insurance isn't one of them. These vulnerabilities exist whether or not they follow specific disclosure rituals, and systems should be deployed with defense-in-depth so that one privilege-escalation flaw is a recoverable event. Inventing tortured counterfactual analogies doesn't change the basic thrust of the poster's point: the account is insured, so getting drained by an attacker is not a fatal problem. Of course people should still take steps to prevent that from happening, but that doesn't mean prevention is (or should be) the only cure.
ryan_n 23 hours ago [-]
My point specifically is that some damage isn't recoverable if there's a vulnerability that gives someone root access. This makes the bank analogy inadequate in the first place. I'm not trying to argue about whether deposit insurance is good or bad. Saying they would get the money back assumes the damage done to one's machine would be recoverable, which may not be the case.
Modified3019 23 hours ago [-]
My understanding is that FDIC deposit insurance only protects against bank failure, not fraudulent activity. Getting your account drained by an attacker may or may not be covered by a patchwork of other laws at various levels, and you could very well end up shit out of luck.
Lammy 24 hours ago [-]
[flagged]
estimator7292 23 hours ago [-]
"I, personally am not affected, and I don't care about anyone else so therefore there are no consequences"
tomxor 23 hours ago [-]
> Immediate public disclosure is the only choice that isn't irresponsible as far as I'm concerned.
No, it's really not.
High severity vulnerabilities are responsibly handled by quietly neutralising them with subtle patches that do not reveal the vulnerability, waiting for those patches to distribute. Then patching or removing the root cause of the vulnerability (at which point opportunists will start to notice), and finally publicly disclosing it when there are already good mitigations in place.
Example: spectre/meltdowm mitigations.
I've been asked to use this approach myself when reaching out to maintainers. Sometimes it's possible to directly fix the vulnerability as a "side effect" by making a legitimate adjacent change.
efortis 23 hours ago [-]
With immediate disclosure the provider can decide to shut down while it is fixed. Or to notify users and make it their decision. Or to be prepared with a diversified infra and switch over to a non-vulnerable path, e.g., BSDs are not affected by CopyFail.
eschaton 1 day ago [-]
“The choice that maximizes potential damage isn’t irresponsible, because it means I can mitigate my own systems immediately.”
That’s what you’re saying here.
tptacek 1 day ago [-]
They're literally just restating the argument for full disclosure security. This is one of the oldest debates in information security.
0x0 24 hours ago [-]
The disclosure doesn't appear very "full". Looks like this was slipped into mainline linux among dozens of other mostly-irrelevant "CVEs" with nobody highlighting the fact that it is in fact dirty-cow-on-steroids.
Or is everyone expected to upgrade and reboot every 48 hours for all eternity and just deal with potential regressions all the time?
I think this reflects poorly on the original reporters. If you have a weaponized 700-byte universal local root exploit script ready to go, perhaps you should coordinate with major distros for patches to be available before unleashing it on the world. No matter how "veteran" you are.
tptacek 24 hours ago [-]
Um, yes, everyone is expected to upgrade and reboot on a moment's notice. No policy or norm you come up with will change that.
(This bug does not technically require a reboot to mitigate).
judemelancon 22 hours ago [-]
I think I must misunderstand. Are you saying that you upgrade and reboot every production system that you administer to apply each commit to the kernel (branch it's using) essentially immediately?
That doesn't make sense to me for a few reasons, but I struggle to find a different reading that applies "upgrade and reboot on a moment's notice" to the "slipped into mainline linux" scenario. Kindly help me to do so.
tptacek 22 hours ago [-]
No: your posture with respect to having to cycle servers is a super complicated subject and you address it both with process and with architecture (for instance: you can be blasé about things like CopyFail if you don't allow multitenant shared-kernel in your design in the first place). But no matter what process and design you have, if you're hosting sensitive workloads, you always have to be in a position where you can metabolize having to cycle your servers.
It's a category error to talk about a disclosure event like this as something that would destabilize someone's fleet operations. The Linux kernel is fallible. So is the x64 architecture. You already have to be ready to lock things down and reboot (or mitigate) at a moment's notice.
Remember: whatever else grumpy sysadmins have to say about this, Xint are the good guys. Contrast them with the bad guys, who have vulnerabilities just as bad as CopyFail, but aren't disclosing them at all --- you only find out about them when it's discovered they're actively being exploited. There's no patch at all. There isn't even a characterization of how they work, so that you could quickly see what to seccomp. That's the actual threat environment serious Linux shops operate in.
LPEs are not rare.
judemelancon 21 hours ago [-]
Oh, I thought you meant "everyone" in a sense including actual human persons and the devices on their home network.
0x0 21 hours ago [-]
I find it curious to call someone dropping a weaponized root exploit before major distros or even LTS kernel git branches have patches ready "good guys". This could have been handled with much more grace.
tptacek 21 hours ago [-]
Again: I made the actual distinction between bad guys and good guys clear. Good guys don't become bad guys simply because kernel security is an inconvenience to you.
eschaton 11 hours ago [-]
There are more than just good guys and bad guys; in particular, there are also opportunists.
Opportunists are the ones who will sell a 0day to bad guys. Or who will drop a 0day publicly to promote their services. And they’ll fight tooth and nail against any actual legal obligation to engage in responsible and coordinated disclosure, because they make more money without that.
tptacek 6 hours ago [-]
Seems like a classification you just made up to navigate a message board debate: the category that equates commercial vulnerability research for security products and people who sell zero-day vulnerabilities to bad guys.
sersi 10 hours ago [-]
To be fair, once Xint gave the heads up and the kernel team committed a patch, what was Xint supposed to do? Keep asking the kernel security team to backport patches for the LTS kernels?
As soon as a patch is committed, the clock starts ticking, the exploit will be discovered by reverse engineering recent commits. The commit was made on April 1st, Xint disclosed it on the 29th. If the Kernel Security team had wanted to, they had 28 days to backport patches in the LTS branches...
So, I wouldn't put any blame on Xint there.
akerl_ 24 hours ago [-]
What the heck is up with people today.
Using quotes around something where you’re actually doing a strawman paraphrase of another commenter you disagree with is bad form.
stavros 18 hours ago [-]
It was clear that the original comment didn't say that, since we can see it right above. It was clear to me that the GP was using quotes as a way to use direct speech, not to imply that the GP literally said those words.
notsound 1 day ago [-]
Those groups care about whether millions of computers are vulnerable, likely including your computer. If "immediate public disclosure" was done in all cases every vuln would be exploited and patches would be much lower quality. Shortening the disclosure timeline might be a good idea, 90 days is starting to feel long.
Lammy 24 hours ago [-]
Millions of computers are still vulnerable. Not-knowing about it doesn't mean the vuln isn't there :p
AlessandroF6587 12 hours ago [-]
Being vulnerable is not the important part.
They have been vulnerable for years.
The problem is the probability of being exploited.
If everyone knows about the exploit details before a proper patch is available the number of exploited systems will skyrocket
danparsonson 17 hours ago [-]
But now millions more people know about how to exploit it who didn't before. I don't see why you're struggling with this.
Lammy 17 hours ago [-]
You can't bully me into agreeing with you. Why are you struggling with that?
danparsonson 3 hours ago [-]
That was my first comment in the thread. I'm not bullying you; if you don't want people to challenge your statements then you came to the wrong place ;-)
Lammy 2 hours ago [-]
You are engaging in bad faith when you act like I only have the belief I have because I don't understand yours yet. Don't comment if you can't respect somebody disagreeing with you.
pphysch 24 hours ago [-]
The Venn diagram of mainstream distros and individual Linux users is virtually a circle.
Ubuntu/RHEL is vulnerable and so are most Linux users by extension.
tptacek 1 days ago [-]
Without taking a position on the disclosure mechanics: any hosting provider hacked with this was already playing to lose. It is not OK to run competing untrusted tenant workloads under a single shared kernel. Kernel LPEs are not rare. This was a particularly simple and portable one, but the underlying raw capability is a CNE commodity.
jcalvinowens 23 hours ago [-]
> Kernel LPEs are not rare. This was a particularly simple and portable one, but the underlying raw capability is a CNE commodity.
I absolutely 100% agree with this and I'm glad to see somebody saying it. Any system that is one LPE away from being compromised is already insecure.
20 hours ago [-]
lifis 1 days ago [-]
The Linux kernel is not usable as a security boundary, so anyone who wants to do "shared hosting" and not be hacked needs to use something else, like gVisor or Firecracker VMs.
The only important system that uses it as a security boundary is Android, and there it is mitigated by the fact that APKs need user approval, plus strict SELinux and seccomp policies and the GrapheneOS hardening; in this case the mitigations succeeded (https://discuss.grapheneos.org/d/35110-grapheneos-is-protect...)
dawnerd 1 days ago [-]
A LOT of websites are tenants on WHM/cPanel hosts. Not to mention how many agencies use it for their clients' WordPress sites.
1 days ago [-]
hsbauauvhabzb 12 hours ago [-]
They built it wrong.
morpheuskafka 15 hours ago [-]
I thought that was the entire design goal of the Unix model, didn't it originate in the times when hundreds of users logged on to a shared mainframe? There are still public Unix servers like SDF out there. SELinux is just an extra layer so that if someone gets root (ex. due to an exploit in your setuid code or cron jobs etc) it's not game over.
anthk 6 hours ago [-]
SDF used NetBSD. In the 90s they switched for a while to RH on x86. Worst era ever, very insecure. Now they use NetBSD on x86-64.
Hyperbola GNU/Linux, for their part, will shift to OpenBSD; they got fed up with the corporate slopware (and what proprietary Linux became). They will still make Hyperbola BSD GPL-compatible, from the core to the userland tools.
In my case, I wish Emacs and GNU developers embraced plotutils and dropped Gnuplot (it's not GNU at all; worse, its license conflicts with the GPL) and made Texinfo independent of LaTeX for producing PDF and HTML files with equations. Groff with troff+pic+eqn already does that, no TeX Live needed.
So can mandoc under OpenBSD, no magic needed, everything in a few MBs.
TeX Live is huge (a full install is over 7 GB), and the so-called FSDG-free version is not 100% free at all.
With just that, GNU Emacs would be truly GNU-standalone, relying on GNU tools for plots in Emacs' Calc and for Texinfo books exported to PDF. A good plus for security.
Once you get that working, the rest would just follow. Also, GNU Hurd being developed with proprietary LLMs/SaaS is a disgrace against what GNU stands for too.
They can go back to the right path, but they need the will, for sure.
watermelon0 1 days ago [-]
I'm quite sure there are many application hosting providers which rely on container runtime such as runC (default runtime of containerd/Docker), and a shared kernel between users.
staticassertion 22 hours ago [-]
In a just world, those companies would be held legally accountable for negligent practices. The Linux kernel upstream has made it clear for decades that security is a dirty word.
LPEs on Linux are obscenely commonplace.
shimman 1 days ago [-]
Expecting people to do the right thing is a fundamental issue here. Why would you ever expect all vulnerabilities to be disclosed privately? There's very little actual incentive to do this.
I'm honestly unaware of what systems could be put in place to prevent this, but expecting people to always do the right thing is fantasy-level thinking. I mean, I bet the disclosers thought they were doing the right thing, hence why it's a bad thing to rely on.
edit: spelling/grammar.
dwedge 1 days ago [-]
When the exploit is an advertisement for an exploit detection company, not doing the right thing is a bad look
dgellow 1 days ago [-]
The worst thing would be to exploit it or sell it for profit. Compared to that, publicizing the exploit is closer to neutral-good in my books; it did trigger a really quick reaction from the different actors to patch their kernels and systems
ori_b 1 days ago [-]
Imagine how much quicker the distros would have reacted if they were given a heads up a month ago. But, sure, I guess kudos to this company for not being actively criminal, and merely bumblingly incompetent and overly eager to get their marketing pitch out the door.
x4132 20 hours ago [-]
to which distros? how do you ensure fairness? Do you report this to the maintainer of Red Star OS (north korea)?
The kernel security team was given the heads up a month ago. At that point it is their decision.
Why don't all these distro maintainers add their own back doors, and mine crypto off our machines without our knowledge? Surely, there is some legal fine print they can add that would let them do that. There is very little incentive for them to maintain these systems, given how thankless and underpaid the work is.
hsbauauvhabzb 12 hours ago [-]
Most distros are maintained by commercial companies.
baggy_trough 1 days ago [-]
Why wouldn't the linux security team notify the main linux distributions?
staticassertion 22 hours ago [-]
Greg and Linus do not believe in the entire concept of "vulnerabilities" in the Linux kernel, and do not believe in the methods that distros use, like cherry-picking. They are therefore typically against issuing CVEs, scoring CVEs, describing vulnerabilities at all (if you use the word "vulnerability", your patch will be rejected), etc.
It's fundamentally their position to not work the way that you describe.
Hendrikto 9 hours ago [-]
I would like to read more about this. Do you have a source?
I'd start with Greg's own words. You can probably find more on it from Spender/grsecurity's blog.
SiempreViernes 24 minutes ago [-]
The claims you make upthread are very hard to match with the text you link to. Did you paste the wrong URL?
baggy_trough 22 hours ago [-]
That doesn't really seem to map onto the situation since Greg himself released a 6.12 with the patch earlier today.
staticassertion 22 hours ago [-]
I don't know what you mean at all. I'm just repeating known kernel policy here. What does 6.12 have to do with anything?
baggy_trough 21 hours ago [-]
What is your interpretation of why Greg KH released a version of 6.12 with this fix in it today, other than to help distributions avoid this vulnerability?
staticassertion 8 hours ago [-]
Why would he ever... not release a new version? I don't get what you're trying to say - I'm stating Greg's explicit policy on the topic. If he did something outside of that policy, that wouldn't change anything.
baggy_trough 5 hours ago [-]
If he doesn't believe in the "concept of vulnerabilities" then it is remarkable that he released a 6.12 targeted on this one fix. Why would he do that otherwise?
staticassertion 3 hours ago [-]
Sorry, but he literally doesn't, and nothing you say is going to change that he has explicitly stated it. This isn't up for debate; go ask him yourself, or literally go to the first blog post on his site.
As for the latest patch, Greg is currently being forced to clean up a big fucking mess by external parties. And he's miserable about it.
bonzini 1 days ago [-]
Partly they already have enough on their plate. It's up to the reporter to pick how to handle the disclosure, and unless a specific maintainer chooses to handle it, the Linux security team clearly says they won't.
Partly they have a strong belief that all kernel bugs are vulnerabilities and all vulnerabilities are just bugs; sometimes taken to the extreme in both ways (on one hand this case where the vulnerability is almost ignored; on the other hand, I saw cases where a VM panic that could be triggered only by a misbehaving host—which could just choose to stop executing the VM—was given a CVE).
staticassertion 22 hours ago [-]
This couldn't be more backwards. This has literally nothing to do with bandwidth. The kernel is a CNA, they are explicitly the ones to do this.
The reason they don't is because Linus and Greg have repeatedly, publicly stated that they don't want to because they don't believe that vulnerabilities conceptually make sense for the linux kernel and they refuse to engage in the process.
bonzini 16 hours ago [-]
> they don't believe that vulnerabilities conceptually make sense
That's exactly what I wrote: "they have a strong belief that all kernel bugs are vulnerabilities and all vulnerabilities are just bugs; sometimes taken to the extreme in both ways".
But there is also a question of bandwidth. If a maintainer asks to bring a specific vulnerability to distros-list, the kernel security people will be reasonable. I did it last March.
baggy_trough 1 days ago [-]
Seems a little crazy. Somebody should evaluate blast radius and do appropriate distro notifications in a case like this (I presume the impact was part of the disclosure, so not much extra work).
seanhunter 24 hours ago [-]
You know the linux kernel is a free software project right? If you think “somebody should” do a thing but you aren’t prepared to do it yourself then you should maybe ask for a full refund.
baggy_trough 23 hours ago [-]
Thank you very much, seanhunter. You hit the nail on the head there.
bonzini 16 hours ago [-]
Not really, because they made Linux a CNA specifically to own the process and distort it the way they want it to be.
bluepuma77 24 hours ago [-]
Well, how do you define main Linux distros? Isn’t the next smaller one not receiving the info always complaining?
throw0101a 20 hours ago [-]
> Well, how do you define main Linux distros? Isn’t the next smaller one not receiving the info always complaining?
For a first approximation: Ubuntu, Debian, RHEL(-derived) to begin with, and SuSE which is in EU/server space (AIUI):
U/Deb/RHEL are 'upstream' of a lot of other projects, and fixes would trickle down to Rocky, Alma, etc. Perhaps VM OS in cloud (AWS, Azure) could be a usage gauge as well.
baggy_trough 23 hours ago [-]
Isn't there already a distro security list for this purpose?
staticassertion 22 hours ago [-]
Yes.
1 days ago [-]
shimman 22 hours ago [-]
Because one of them might have an incentive to not do so. In this case it's because they want to advertise their own company.
holowoodman 1 days ago [-]
I can accept (and welcome) disclosure before there are patches.
But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal.
And no, the proposed mitigations don't help with half of the distributions out there...
staticassertion 22 hours ago [-]
The patch was available. Upstream just doesn't communicate vulnerabilities because they have a personal dispute with distros about how to handle patching.
17 hours ago [-]
akerl_ 1 days ago [-]
> maybe even criminal
What’s your theory here? What crime?
holowoodman 23 hours ago [-]
Exploits are sold and used as weapons, sometimes even weapons of war. Which in many places is criminal, except under very restrictive circumstances.
Also, all kinds of aiding and abetting.
akerl_ 22 hours ago [-]
What does that have to do with this comment thread?
Copying from the comment I was replying to:
> But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal
michaelmrose 1 days ago [-]
If it's not a crime, I see no reason not to work with partner nations to build responsible disclosure into a legal framework everywhere, because it pretty obviously should be one.
akerl_ 1 days ago [-]
If you wanted to somehow make coordinated disclosure into a legal framework, that would be an interesting and complex project.
But it’s not the law anywhere I’m aware of today, and I’d not support it becoming a law.
debugnik 14 hours ago [-]
This is kind of a thing already in the EU. Under NIS 2, vulnerabilities should be notified to a CSIRT as well as upstream, and the CSIRT shall identify downstream vendors and negotiate a disclosure timeline. I don't know whether they're any good at it or not, though.
jodrellblank 1 days ago [-]
You know companies are allowed to pay people to find vulns, and pay people bug bounties?
Instead of that, you’d rather make the law compel free individuals to limit their speech, or to hand over their work to big companies privately, so big companies can save money?
That doesn’t sound like a nice future, if it’s even enforceable at all.
SoftTalker 1 days ago [-]
AIUI the exploit was fairly low-effort once you knew the vulnerability. So publishing one probably didn't change the landscape much.
wang_li 1 days ago [-]
There is an alternative mitigation you can use which blacklists the function calls when the affected code is not built as a kernel module.
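For what it's worth, the generic shape of that kind of mitigation looks like the sketch below. This is illustrative only: the module name is a placeholder, not the real one from the advisory, and it only applies at all when the affected code ships as a loadable module on your kernel.

```shell
# Generic module-blacklist pattern (placeholder name, NOT the real module).
# On a real system this file would live in /etc/modprobe.d/; writing it
# locally here so the sketch is safe to run anywhere.
conf=./99-copyfail-mitigation.conf

# Stop the module from being auto-loaded on demand:
echo "blacklist example_affected_module" > "$conf"

# 'install ... /bin/false' is stronger: it also defeats explicit modprobe:
echo "install example_affected_module /bin/false" >> "$conf"

cat "$conf"
```

Note that a blacklist line does nothing for an already-loaded module, and nothing at all when the code is compiled into the kernel, which is exactly the caveat being pointed at here.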
vaylian 14 hours ago [-]
> alternative mitigation you can use
That's beside the point. If people use the official mitigation on https://copy.fail/#mitigation they will not sufficiently protect themselves on mainstream distros like Ubuntu and Debian.
The page also states
> Most major distributions are shipping the fix now.
This text was probably prepared in advance, but this was simply not true at the time of publication.
semiquaver 1 days ago [-]
Patches were available for nearly a month.
ori_b 1 days ago [-]
Basic care would involve making sure the patches had made it into the wild before ending the embargo, and nagging the relevant parties if not.
Edit: As of this writing, most distros, including Red Hat, Fedora, and Debian Stable, do not have patches available in their package repos, though they're being actively worked on.
sgjohnson 1 days ago [-]
Not true, if there’s any evidence of the exploit being used in the wild, it’s much more responsible to release immediately.
Considering that the patches have been available for a while, someone surely reversed what they were for and was actually exploiting this in the wild.
In the age of AI, I'd argue that "responsible disclosure" is dead, arguably even in closed-source projects. Just ask Claude to diff the previous version against the patched one and see whether anything fixed in there could have security implications.
We’re not there yet, but very soon the only way to responsibly disclose a vulnerability will be immediately.
ori_b 1 days ago [-]
But they didn't release immediately -- they waited a month, but forgot to tell the distros, and forgot to check whether waiting a month had actually led to distros picking up the patches and shipping them.
sgjohnson 19 hours ago [-]
Which just reinforces my point. The patch was available, therefore, where the exploit lies was also available.
Linux kernel is one of the most audited open-source projects ever. I guarantee you that someone did reverse the patch.
> but forgot to tell the distros
Probably an oversight, but irrelevant. The bug was in the linux kernel. It's insane to suggest that they should have notified everyone shipping the linux kernel.
semiquaver 1 days ago [-]
“Made it into the wild?” Patches landed a month ago. Should they also wait until my linksys router from 2018 has a patch ready?
ori_b 1 days ago [-]
Patches are still in the process of landing in most major distros as of the time of this writing. Most users are not able to get an update through their distro's packaging mechanisms.
SoftTalker 1 days ago [-]
It's a local vulnerability at least. How many people do you let log in to your router?
With the way linux is used these days, I'd guess the number of systems with untrusted local users is pretty limited. Even with shared hosting, you generally have root in your VM or container anyway. Unless this enables an escape from that?
There's still the risk that people who run "curl | bash" without care could get bitten, but usually it's "curl | sudo bash" anyway...
sgbeal 1 days ago [-]
> Even with shared hosting, you generally have root in your VM or container
Lots of shared hosters don't use VMs or containers. It's some arbitrary number of people logging in to a shared system, each one with a home directory under /home/THE_USER_NAME. i've had several such hosters over the years (thankfully not right now, though).
sjpb 24 hours ago [-]
> With the way linux is used these days, I'd guess the number of systems with untrusted local users is pretty limited
Things like HPC clusters are multiuser & don't entirely trust their users. If they did we wouldn't need users/groups/permissions etc in the first place.
cozzyd 21 hours ago [-]
Yes. Not even just HPC clusters, shared login servers are pretty common in academia. I manage several in our lab. Sure, we mostly trust the users against malice more or less but not so much against incompetence. A malicious vscode plugin would run rampant in this space.
And then there are users running claude-cli and friends who may just find it convenient to use a local root exploit to remove obstacles.
dist-epoch 1 days ago [-]
With this exploit it's trivial to jump from one container to another neighbor container. I've tried it and succeeded.
So containers don't protect you, only a VM.
SoftTalker 1 days ago [-]
So anyone pulling a malicious dockerfile jeopardizes the host? That would be bad...
ori_b 22 hours ago [-]
...no shit? Why do you think people care about this issue?
ranger_danger 15 hours ago [-]
> I've tried it and succeeded.
How so?
michaelmrose 1 days ago [-]
Local root is part of the path to escaping
staticassertion 22 hours ago [-]
That's mostly on Greg, a bit on the author.
GrayShade 1 days ago [-]
Fedora is patched.
em-bee 1 days ago [-]
only for versions 6.19.12 & 6.18.22; older versions (which are used in distributions) are not ready yet.
1 days ago [-]
skywhopper 1 days ago [-]
I think it’s reasonable to expect folks in the security community who go to the trouble of creating a website detailing security vulnerabilities in specific listed software to pre-notify the security teams of that software. The CopyFail website calls out Ubuntu and Red Hat specifically, but apparently the author of the site did not inform them of the issue?
But even if you think making unethical decisions in personal self interest is something no one should be criticized for, surely the Linux kernel team ought to have some process for notifying the top distributions of an upcoming LPE, just out of practicality.
semiquaver 1 days ago [-]
In what sense do you believe that the reporter did not notify the security team of the relevant software? The vulnerability is in the kernel. Reporter responsibly disclosed using the kernel’s security report mechanism and waited until a patch was ready.
Distros are downstream of the kernel; that doesn't entitle them to expect to be contacted directly by every security reporter. That's not on them. Distros that are big enough should be plugged into the Linux security team for notifications.
Security researchers cannot be held responsible for broken lines of communication within the org charts of projects that they study. They’re providing a valuable public service already, how much more do you want?
ragall 1 days ago [-]
> that doesn’t entitle them to expect to be contacted directly by the reporter
Yes it does. That's how it's always been done and distros can ship a fix well before it ends up in a kernel release.
michaelmrose 1 days ago [-]
It is suggested that they, out of an abundance of caution, send 5 or 6 emails. If this is entirely too much to expect, we can always help them by mandating that they spend six figures annually meeting a much more robust set of requirements that would include notifying all possibly affected parties, down to the Hannah Montana Linux devs, if any still exist.
Any strategy that assumes the rest of the world is functional, or that makes you personally responsible for fixing all of it, is equally broken; but there is a reasonable middle ground, and sending a few more emails lies within it.
semiquaver 1 days ago [-]
> we can always help them by mandating that they spend 6 figures
Who’s we? Mandate with what authority?
AWS and GCP are downstream another level. Should the reporter also have worked with them? And their customers? And the customers of their customers?
IMO this whole discussion seems like people are annoyed by the security researchers doing god’s work and wish they didn’t exist or think that they should be fully subservient to the projects and companies they are helping for free. The bugs were there before the researchers revealed them!!
bossyTeacher 1 days ago [-]
> expecting people to always do the right thing is fantasy level thinking.
Most people in tech think like the techie in this comic strip.
The notification happened when the fix was shipped. That people would prefer to be spoon-fed only the serious security issues is understandable, but not realistic.
A large percentage of kernel fixes have the potential to be similarly bad. For some, the potential isn't even realized until after the fix has shipped.
Every stable release, GregKH says you must upgrade now, because there is something security-relevant in there. This happens at least once a week.
As for shared hosting providers, it is my sense that there is always at least one local privilege escalation available to miscreants, making shared hosting safe only where there is a certain amount of trust.
I remember bugs that were similarly bad from my university days 30+ years ago. Has anything substantially changed?
CodesInChaos 23 hours ago [-]
> Who knows how many shared hosting providers were hacked with this.
I'd consider a shared hoster which allows users to run their own (native) code and doesn't use VMs for tenant isolation extremely irresponsible in 2026.
saysjonathan 23 hours ago [-]
This is probably more common than you think. VMs are expensive, both in resources and cost (if you’re using something commercial). OS-level isolation (shared kernel, cgroups, namespaces) is used pervasively
CodesInChaos 4 hours ago [-]
Modern VMs, e.g. using Firecracker, shouldn't be that expensive. I think it's crazy that Kubernetes doesn't use a VM-per-pod model, especially since it was started by security-conscious Google.
akerl_ 1 days ago [-]
Who knows how many attackers had found this vulnerability and had already been using it prior to this research finding it?
BeetleB 24 hours ago [-]
Argument from uncertainty is not a good way to reason about this.
I could equally ask: "Who knows how many attackers learned about this vulnerability from this disclosure, and used it before the distributions fixed it?"
akerl_ 24 hours ago [-]
Yes, you could. That's the core of my point: there is no Right way to handle vulnerability disclosure. There are many competing factors, and most of them have major elements of uncertainty, because you can't know who knows what or how various projects or stakeholders will react.
So maybe folks should take a break from the kind of armchair quarterbacking that this was “incredibly irresponsible”, as was done upthread, or that the researchers should be blacklisted for life, as a parallel commenter stated.
Quarrelsome 1 days ago [-]
well now everyone does, so the irresponsible disclosure makes it significantly worse.
akerl_ 1 days ago [-]
It’s your opinion that it’s irresponsible and that it makes something worse.
Quarrelsome 1 days ago [-]
and its your opinion that it doesn't. Shall we continue stating the obvious? We are communicating using glyphs. This language is English. We are on Hacker News. This branch of the conversation is extremely unproductive.
akerl_ 1 days ago [-]
I asked a question and you replied with a statement. Your statement didn’t frame itself as an opinion but as fact.
The hilarious bit is that the idea that they needed to coordinate is clearly broken even in just this example. They did give prior notice to the Linux developers, who issued a patch. And they’re still getting raked over the coals in this comment page by armchair quarterbacks who have decided they needed to coordinate with specific distros. If they’d coordinated with those distros, somebody would have a pet distro that didn’t make the cut and they’d be pissed about that.
There are risks no matter how they do it, and there will be people who are pissed no matter how they do it. Security researchers don’t owe anybody a specific methodology.
Quarrelsome 1 days ago [-]
you seemed to suggest with your initial statement that any disclosure was acceptable, since people would have been using the exploit prior to the disclosure. I don't think that's a strong argument, given that the people who were using the exploit before disclosure have now been joined by people who learned of it as a consequence of the disclosure happening before all the distributions were ready.
So I feel like the argument reduces into "why is it a problem that now anyone could exploit it, if some people were exploiting it already". Which imho isn't a sensible argument, because the issue is clearly the number of people capable of using the exploit for nefarious purposes, which has increased.
akerl_ 1 days ago [-]
Idk why you felt the need to use quotes to wrap something I didn’t say, and that is a pretty uncharitable attempt at reframing my question. If you wanted a quote, here’s what I’d say:
“Because we can’t know if there was exploitation by existing parties who had discovered the vulnerability on their own, there are upsides to disclosing earlier so that affected users can take mitigating steps and review their systems for indicators of compromise. Additionally, the more projects the researchers pull into the loop for coordinated disclosure, the higher the likelihood that they further leak the vulnerability to more attackers.”
Quarrelsome 1 days ago [-]
Idk why you felt the need to use quotes to wrap something I didn't say. That said, it's a much more interesting argument than your original statement implied, and it is unfortunate we didn't start there.
However, the issue is that we cannot know whether the attack space has been broadened or lessened as a consequence of this disclosure, because of how eager it was. If it weren't so eager, we could be much more comfortable suggesting that the attack space has probably been reduced.
Given that the vulnerability had been living in the Linux code base undetected for so long, and given that the distributions are the principal attack vector here, I think it's fair to state that disclosing the exploit before the distributions were ready made the situation worse, and that the researcher should reflect on their actions.
akerl_ 1 days ago [-]
… I used quotes to wrap something that I was saying. I even called out that it was something I was saying, as a more accurate variant of what you’d claimed I meant.
Quarrelsome 1 days ago [-]
and I prefaced my quotes with the statement "So I feel like the argument reduces into". I mean, idk what punctuation I'm supposed to use there that doesn't offend you, but I figured we can all read words, and it was clear I wasn't saying you said that, but rather that, as I read it, the argument was reducible to that, and I took issue with that potential reduction.
The idea about the available exploit space and how the actors within it might, or might not move is a much more interesting avenue of conversation and I thank you for elaborating on your initial comment. <3
I do however feel that its hard to be confident about whether or not the attack space has been increased or reduced as a consequence of the eager disclosure. I feel we could make the case either way.
psifertex 15 hours ago [-]
You could try to make that case either way, but as has been pointed out by others all over this thread, the system we've landed on (90+30) is the industry standard after more than two and a half decades of experimentation.
Anything else is inevitably worse for the public good.
Having spent that entire time and then some on both offensive and defensive teams, I assure you longer delays after notification do NOT decrease the overall risk to the public.
There's a reason we've landed where we have as a security community.
pphysch 24 hours ago [-]
The public disclosure page has a big blue "Get the exploit" button.
It's an advertisement for an unpatched critical exploit and apparently some kind of infosec company.
krzyk 13 hours ago [-]
There are so many distributions that it is not possible to notify each one, unless there is some single distribution list for all.
And if you disclose to just a handful, why ignore the rest?
bitexploder 18 hours ago [-]
There is no such thing as irresponsible disclosure. Thanks though.
1 days ago [-]
bombcar 23 hours ago [-]
The title on this post was changed to imply that only the Gentoo developer was left out - which I could believe.
bombcar 19 hours ago [-]
And now it was changed back. I'm goin' insane.
Sophira 11 hours ago [-]
And now it's changed to be a generic "For Linux kernel vulnerabilities" rather than specifically about Copy-Fail.
franktankbank 5 hours ago [-]
> but apparently it's the responsibility of whoever finds the vulnerability
Aka a white hat professional which should be a prized function richly rewarded. Do you really want these things to be calcified into a government function?
PunchyHamster 22 hours ago [-]
At least thankfully workaround is one line in a file.
bethekidyouwant 20 hours ago [-]
“Shared hosting providers”
These haven’t been a thing since VMs … basically for this reason. There’s always a local privilege exploit.
sgbeal 11 hours ago [-]
> “Shared hosting providers” These haven’t been a thing since VMs
That is unfortunately not true. i left my last one only a few years ago and they're still going strong without me.
porridgeraisin 11 hours ago [-]
Fundamentally.
The disclosure is private. Meaning neither the commit messages nor any public info can leak too much information about the bug. It's usually kept rather discrete.
It is impractical for the kernel to broadcast to all its users privately.
Meaning that either a) distro maintainers should be privy to it, but where does this end?[1] or b) we have the current situation
[1] probably the top 5 distros security teams can just be copied into the private mail. Maybe the kernel security private list can forward the emails to them as well.
Problem is, every other type of communication between distros and kernel is implicit. In commit messages, patches and release notes. So it's an exceptional case.
BTW, with LLMs there's a new issue: it is now cheap to scan the kernel commit log (maybe even in -next) and ask a model to identify what could be the patch for a private disclosure, then immediately RE the patch and exploit it on deployed kernels.
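To make that concern concrete, here is a toy sketch of the triage step. The keyword pattern is a naive stand-in for an LLM classifier, and the commit log is invented; a real attacker would diff the code itself, not just read subjects.

```shell
# Toy stand-in for LLM-based patch triage: flag commit subjects whose wording
# hints at a quietly fixed memory-safety bug. The log below is fabricated;
# against a real tree you'd feed in something like `git log --oneline vOLD..vNEW`.
cat > ./fake-commit-log.txt <<'EOF'
mm: fix off-by-one bounds check in copy path
docs: update maintainer list
net: prevent use-after-free on socket close
EOF

grep -iE 'overflow|use-after-free|out-of-bounds|double-free|refcount|bounds check' \
    ./fake-commit-log.txt
```

A grep like this has false positives and misses anything worded neutrally, which is exactly the point: a model that reads the diffs themselves drops the cost of this hunt to near zero.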
IshKebab 23 hours ago [-]
> Who knows how many shared hosting providers were hacked with this.
None? Because nobody* does hosting using Linux users as a security boundary. It's not the 90s.
* Standard HN disclaimer for people that think that some retro shell box with 10 users disproves "nobody": nobody does not literally mean exactly 0 people in this context.
mschuster91 24 hours ago [-]
> Anyway, this is a disaster. It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix. Who knows how many shared hosting providers were hacked with this.
Maybe it is irresponsible how little attention we pay to software security. Maybe software developers of all kinds should spend an entire year not developing any features at all, but fixing the tech debt of 30 years instead.
Yes, that sounds revolutionary, but I do not see an alternative in an age where AI agents are all you need to find kernel bugs of this scale.
TacticalCoder 21 hours ago [-]
> It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix.
It's a total arsehole-y move to not share with open-source projects (like Debian), but for commercial vendors like Microsoft I don't give a crap.
Now let's not get carried away either: that's a privilege escalation, so it already requires access to a local account. We're not exactly in Jia Tan "I backdoor every SSH out there if your Linux distro is using systemd" territory either.
johnbarron 1 days ago [-]
>> Anyway, this is a disaster. It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix.
Maybe a decade of corporations with revenue in the billions, paying peanuts and coffee money for critical vulnerability disclosures, made it....
deng 1 days ago [-]
> It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix.
Yes, this was clearly a marketing stunt to promote Xint code.
I, for one, will never use Xint code and will advise everyone to never use it. To anyone working there: enjoy your 15 minutes, I hope this backfires right in your face.
psifertex 15 hours ago [-]
I doubt it will and I hope it doesn't.
External security research happens for one of only a few reasons typically:
1) hobbyists who are learning or just like to do it for fun
2) bug bounties (good luck with those in most open source)
3) marketing for security companies
4) non-public research going to CNO/CNE
If you want to kill 3, the output of 1 will not come close to 4 and the public is NOT better off with fewer public bugs.
999900000999 1 days ago [-]
Counterpoint. End users have a right to mitigate this issue on their systems.
It is a really really bad look for Linux, puts a bit of water on all hype around switching from Windows.
roxolotl 1 days ago [-]
It does? The disclosure even says the concern for single user systems is very low. If someone has access to your single user system, remote or otherwise, you’ve already lost on the sort of device people would be switching from windows to Linux on.
m3047 22 hours ago [-]
> The disclosure even says the concern for single user systems is very low.
For single user systems (not rigorously defined, I presume it's the intersection of our two definitions which we might be talking about) the nature of the exploit is local privilege escalation, of which there could be many possible, and many mitigations / countermeasures against. This could have suddenly appeared from the ether of "unknown unknowns" for some people.
Those people farther up the food chain still potentially have service accounts, maybe even user accounts for some purposes, perhaps "trusted" services which deliver them code which they deserialize and run once. (Have a pickle.)
severity * impact * likelihood
Not everyone looking to migrate from Windows 95 plans to run everything as root afterward.
Not everybody needs or wants to wait for their distro, or plans to patch their IC firmware when a config change will do.
999900000999 1 days ago [-]
Someone like an AI coding agent perhaps ? This is the type of thing Prompt injection was made for.
No OS is perfect. The awkward rollout for this bug fix is proof of that.
Filligree 24 hours ago [-]
Root access does not typically add anything interesting, for a desktop system. All the valuable stuff is already owned by the single user.
vhantz 1 days ago [-]
As opposed to all other operating systems with no CVEs ever?
weavejester 1 days ago [-]
Hype around switching from Windows servers?
ddtaylor 23 hours ago [-]
What happens if someone does the exploit in WSL?
cbarnes99 1 days ago [-]
You clearly have no idea how often windows has unpatched privesc exploits.
johnbarron 1 days ago [-]
>> puts a bit of water on all hype around switching from Windows.
Said no one ever...present post excluded :-))
ExoticPearTree 7 hours ago [-]
> It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix.
I disagree. Exploits should be published as soon as they are written, and disclosed vulnerabilities should come with as much detail as possible, because even if the researchers cannot write an exploit, someone else could.
- this has the advantage of forcing upgrades as soon as possible. No more “we need to see and schedule patching”
- publishing it as soon as possible makes everyone aware of the threat
- it is a learning experience for everyone
- “responsible disclosure” was invented by lazy companies that have zero interest in fixing a problem quickly
semiquaver 1 days ago [-]
> Note that for Linux kernel vulnerabilities, unless the reporter chooses
to bring it to the linux-distros ML, there is no heads-up to
distributions.
Why would they imply it is incumbent on the reporter to liaise with distributions? That seems to assume a high level of familiarity with the Linux project. Vulnerability reporters shouldn't be responsible for directly working with every downstream consumer of the Linux kernel; what's the limiting principle there? Should the reporter also be directly talking to all device manufacturers that use Linux on their machines?
IMO reporter did more than enough by responsibly disclosing it to linux and waiting for a patch to land.
Aren’t there people in the linux project itself with authority over and responsibility for security vulnerabilities? One would think they would be the ones notifying downstream distros…
aduwah 1 days ago [-]
Especially since the reporter is explicitly asked not to notify the distro teams first.
```As such, the kernel security team strongly recommends that as a reporter of a potential security issue you DO NOT contact the “linux-distros” mailing list UNTIL a fix is accepted by the affected code’s maintainers and you have read the distros wiki page above and you fully understand the requirements that contacting “linux-distros” will impose on you and the kernel community. ```
Do I expect that every distribution is already patched? I don't. However, each of us chooses the distribution to run. Security can be one of the criteria for the choice. I played it safe and I'm using Debian. Other people can make a different tradeoff, maybe based on their personal threat analysis.
There are people running end of life kernels and distributions in production, or with pinned old kernels especially on ARM SBCs. I know both. Those are other choices made at the user end of the process.
IMHO the disclosure and fix process was run in the proper way from the researcher to the end user.
nubinetwork 22 hours ago [-]
I don't get why the initial reporter should have to do that legwork. The kernel maintainers should be doing that.
nomel 18 hours ago [-]
Ffs, we're talking about open source projects here. Those mailing lists, mentioned there, ARE PUBLIC.
Make them private? Now you have a nice stream of zero days, long before fixes are available, making the bad actors who made it onto the list filthy rich.
nomel 11 minutes ago [-]
If you're suggesting a private, security, information channel is made for all the maintainers of the ~600 actively maintained distributions, including those that pop into existence just to get access to that list, then that would be a point you could make. But, that text above is reasonable, because the mailing lists they're referring to are public.
stonogo 24 hours ago [-]
The kernel team has been at odds with the CVE process and the oss-security community about this stuff for many, many years now. It's a big part of why the kernel team established a CNA and started flooding CVE notifications; they don't believe that security problems are different than non-security problems, and refuse to establish norms or policies based on the idea that they are.
throw0101a 20 hours ago [-]
> […] they don't believe that security problems are different than non-security problems, and refuse to establish norms or policies based on the idea that they are.
They believe there is no difference being able to get root and not being able to get root? It seems to me that to-be(-root) and not-to-be(-root) are quite different.
brenns10 17 hours ago [-]
No, they believe that almost all bugs in an operating system kernel are also likely to be security bugs. The ones which get domain names, PoC exploits, and CVE assignments are the ones which were found by security researchers. But the bugs that get found and fixed by kernel developers regularly, without fanfare, are also very likely to be exploitable. It's just that nobody took the time to cook up an exploit chain. To kernel maintainers, it's silly to assign CVEs to just some of the likely exploitable bugs just because a security firm found them. So they decided to take the reins and handle CVEs themselves, to ensure all potentially exploitable bugs are marked as such.
IshKebab 23 hours ago [-]
It's such a bizarre viewpoint. I wonder when Linus will see sense.
IMO it's pretty obviously not a view that they seriously hold, it's just one of those technical justifications people come up with to avoid admitting something they don't want to admit - in this case that Linux has a poor security track record.
rincebrain 18 hours ago [-]
I think it's an extension of the premise that you should just be taking the whole stable tree with all its patches constantly, whether they're labeled as security fixes or not, because you can never really know for sure some bugs weren't security bugs.
I don't agree with the premise, but I do think it's a sincerely held one.
bombcar 17 hours ago [-]
The kernel team begrudgingly admitted the existence of LTS releases; they really don't like long-lived kernels and people not tracking at or near the latest release.
IshKebab 11 hours ago [-]
I dunno, if you think about it for more than a few seconds you can see the obvious holes in it, like it's definitely true that some bugs are "may allow RCE", but you also can do a LOT better than not even trying. And even if you do say "we're not putting the effort in to backport security fixes" (which is fine), that doesn't entail "security bugs are just bugs".
These are smart people. If it wasn't about their own project I really think they'd have a different point of view. I wonder what they say about Microsoft's security bugs for example!
guiambros 20 hours ago [-]
Linus? You mean, the same Linus who thinks "security people are f*cking morons", and "security bugs are just bugs"?
Linus is the reason why kernel team doesn't talk to distros. For them bugs are bugs, security related or not.
Literally never. Why would he? He's surrounded by sycophants. And we have Greg for whenever Linus isn't involved anymore, and Greg is just as boneheaded.
ferngodfather 8 hours ago [-]
> and you fully understand the requirements that contacting “linux-distros” will impose on you
Imposing requirements on the reporter? No.
sega_sai 1 days ago [-]
The reporter took time to check and mention on their website specific distributions Ubuntu/RHEL/SUSE. One would have thought reporting to security teams of at least those would be responsible.
semiquaver 1 days ago [-]
“One” would have thought? Can you point to a written policy that says that’s how it should be?
happyopossum 1 days ago [-]
No, nor can I point to a written policy that states one should cover one’s mouth when they cough.
Everyone involved here failed to do the right thing, and hiding behind the lack of written words is weak sauce.
anikom15 1 days ago [-]
The tenets of decency don’t need to be written down.
tob_scott_a 1 days ago [-]
If you can't write it down, why would you expect it to be universal and enforceable? Different cultures exist and have different opinions on what "decency" means, after all.
A security researcher's ethical obligations are to protect users over vendors (barring any contractual agreement in place). From what has been discussed in this thread, they meet that bar.
Sure, they could have gone the extra mile to ensure the distros were in a good place to patch before they published the exploit. That's a kindness you can wish for, but don't disparage them for not going that extra mile. It's a bonus.
It's also possible that it simply didn't occur to them to do so this time. There's certainly lessons to be learned either way. I don't know that the right lessons will emerge from hostility.
Quarrelsome 1 days ago [-]
> If you can't write it down, why would you expect it to be universal and enforceable?
and this is the problem. It used to be the case that if you were smart enough to find an exploit you were also smart enough to realise what would happen if you irresponsibly disclosed it. I guess these tools have made that pattern no longer apply.
true_religion 1 days ago [-]
From my point of view, they told the kernel security team which is in charge of fixing this. If it’s important for them to tell other people, then it should’ve been written down and further reiterated when they made their report.
The skills to detect code exploits are not the same as the skills to navigate an informal org chart to the satisfaction of an amorphous audience of end users (i.e. us on HN).
That said… as they are a company that supposedly specializes in this field, and is trying to sell a product, I do believe they should do better. Right now, I don’t have much confidence in their product.
scragz 1 days ago [-]
different cultures have different views on disclosing vulnerabilities to distros before the public?
embedding-shape 1 days ago [-]
Yes :) The blackhatter would obviously sit on it until they can sell it or use it, the whitehatter collaborates with the kernel and distros to patch, and the greyhatter argues on HN about whether the latest *fail was responsible enough or not.
sunshowers 22 hours ago [-]
Yes? "Different cultures" doesn't just mean different countries; there are many cultures within infosec.
anikom15 1 days ago [-]
There is little difference in culture here. Nearly all open source work is done in English.
skywhopper 1 days ago [-]
The reporter made a website explicitly calling out Ubuntu, RedHat, Amazon, and SUSE but didn’t notify them, and you think that’s reasonable? That they might not have known those distributions are downstream from the kernel team?
Legend2440 24 hours ago [-]
If you notify the kernel and they ship a fix, it seems reasonable to expect that they will communicate the fix to the distros.
I see this as an organizational failure of the Linux ecosystem. There should be better communication between distro and kernel development.
dweinus 20 hours ago [-]
The reporter clearly knows the distro fixes have not been shipped, read their report. They chose to disclose anyway.
john_strinlai 19 hours ago [-]
>They chose to disclose anyway.
yes, because 30 days had passed from the time the patch landed in the kernel, as per industry standard.
approximately every security researcher, including the likes of google and other big names you may know, does a 90+30 disclosure, which is what happened here. they do this for good reason, which has been figured out over decades of experience in reporting thousands and thousands of vulnerabilities.
the only security researchers i know of that dont like 90+30 actually argue for shorter timelines (or immediate disclosures).
JeremyNT 6 hours ago [-]
What do you think went differently in this case versus other high profile vulnerabilities that had binaries already available for major distros? I feel like it often (usually?) works out that major distros have kernel packages incorporating the fixes already available.
Is this just down to luck, a quirk in the timing about when Linus merged the fix versus when the release gets cut?
sigmar 23 hours ago [-]
What is the heuristic for who should get the heads up? Should they notify amazon but not google simply because they named amazon linux in the report? Seems to me the answer to my first question gets messy fast.
sparker72678 1 days ago [-]
Sure, maybe it's not a _requirement_, but now we're all in more pain because the reporters are more interested in Fame than Safe Remediation.
tptacek 23 hours ago [-]
No, you're in more pain, but other defenders with different postures benefit from having faster and fuller disclosure.
ori_b 21 hours ago [-]
Mind explaining how sitting on it a month after the patch landed is 'faster'? To my mind, that's a month where attackers could analyze commit logs, but maintainers are not acting with urgency to ship fixes.
tptacek 21 hours ago [-]
No, I wouldn't, because my own preferences are towards immediate disclosure. Tavis Ormandy dropped Zenbleed out of the sky onto us. It wasn't comfortable, it was a scramble for us, but I don't blame Tavis for it; he made a principled call. Better that people know, than that information be concealed from them while designated elites perform a process.
ori_b 21 hours ago [-]
I'd also prefer immediate disclosure, but I don't get how waiting a month without telling anyone is good regardless of which side you land on.
john_strinlai 20 hours ago [-]
>I'd also prefer immediate disclosure
wait, what?
you are in another comment thread, of this very post, calling these reporters bumbling and incompetent for their disclosure. "merely bumblingly incompetent and overly eager to get their marketing pitch out the door" - that is your quote.
you also said "Basic care would involve making sure the patches had made it into the wild before ending the embargo", which is the literal opposite of immediate disclosure.
but now you are saying they should have just dropped it with no reporting at all? because that is what "immediate disclosure" means. pop up the exploit script on twitter and call it done.
ori_b 13 hours ago [-]
Yes, if you release the vulnerability as soon as possible, that's a good choice. If you have an embargo and make sure that fixes get out to users in a timely manner before ending the embargo, that's also a reasonable choice.
If you're going to wait a month after landing the patch (possibly notifying attackers) without notifying the people who can get the patch to users, it seems like something was mishandled.
throw0101a 20 hours ago [-]
> No, you're in more pain, but other defenders with different postures benefit from having faster and fuller disclosure.
Good for them. But just because some folks cannot afford 24/7 response teams and on-call personnel that doesn't make them or their systems any less important.
Lots of non-profits and academic institutions had to scramble because of the Linux kernel team's position of non-communication to distros.
tptacek 20 hours ago [-]
The conversation about how Linux handle these things is a good and worthy one to have and one "non-profits and academic institutions" need to have when they select distributions. I'm just here to push any of that scrutiny off the vulnerability reporters; Linux is lucky to have them, even if it's mishandling their reports. Vulnerability researchers don't owe these people anything.
froh 1 days ago [-]
it's trivial to find out how to report a security issue like this to Linux distros.
and it's beyond me how one could fail to think of doing this, and instead expose everyone and their neighbor to this exploit up front.
I'm certain this is even a felony in some jurisdictions, rightfully so.
dboreham 23 hours ago [-]
Agree it's not a good look for these folks, notwithstanding that disclosure is mostly theater.
whatevaa 23 hours ago [-]
Stop blaming the reporter. Start asking kernel to fix their process. Linux kernel is no longer a toy project, it has full time employees employed by various companies. They should have handled notifying distributions. Not some rando.
pamcake 14 hours ago [-]
Look, if they namedrop specific distros in their announcement (marketing) blog post as affected, I think a heads-up before publishing that is appropriate and expected.
I don't think they would have gotten as much flame if it weren't for how the RHEL 14 mention and such were phrased.
This is a security company with a professional(?) communications department banking on pointing fingers at distro maintainers. We are not talking about solo security researchers or academics here.
nirava 11 hours ago [-]
Exactly. Any security person absolutely KNOWS that the distros are still going to be vulnerable. They're exploiting this process loophole to knowingly cause chaos and gain notoriety.
At this point this is not really white-hat/ethical hacking anymore.
Ofc the kernel-distro security loophole is stupid and should be patched ASAP, but that doesn't absolve this company of wrongdoing.
systems_glitch 6 hours ago [-]
We all know that's what it is, I don't know why people aren't willing to just say it.
It has a domain, it has a logo, they were going for maximum impact because it's their business.
bcjdjsndon 8 hours ago [-]
Linus should take his trademark autistic rage, where he calls other people's code "dogshit", and turn it on his own work for once. He likes the glory of leading kernel development but not the responsibilities, like this one.
dweinus 20 hours ago [-]
No, I will. The distros and the kernel devs should be talking and moving on high sev patches, sure. But real people will have gotten hurt because the reporter didn't want to wait for that to happen. That's on them.
john_strinlai 19 hours ago [-]
you must be unfamiliar what used to happen before hard deadlines were set on disclosure. it was much worse for the users.
there is ~3 decades of more context if you search for it.
stingraycharles 12 hours ago [-]
tldr: if security issues don’t get disclosed (or the real threat of disclosure) they won’t get fixed / prioritized.
pkoiralap 20 hours ago [-]
It's one thing to report a vulnerability, another entirely to make a crazy exploit available for any Tom, Dick, and Harry to take and use. It was irresponsible of whoever came up with it to release it into the world without first giving major distros a heads-up.
rcxdude 6 hours ago [-]
A proof of concept is a very standard thing to include in a disclosure, almost table stakes nowadays because of the amount of bad reports. Once there's any disclosure there will be exploits developed and published anyway, it's not a meaningful difference.
bell-cot 20 hours ago [-]
Bashing the reporter is pointless feel-good. This is a massive vuln. It was 4 weeks after the kernel had a patch. They had no way to know whether other parties had also discovered the vuln. Lord knows how many millions of systems could already have been rooted. The reporter is not their minion.
If I call 911 to report a fire at an oil storage facility - and they ask me to alert the hospital, then phone the neighboring county's Sheriff Dept., and then...yeah. Either I'm way out in the sticks (and known to/trusted by the 911 operator), or else the 911 service is run by children.
robocat 10 hours ago [-]
Great metaphor.
I'd hate to be involved in any emergency services. Too many people have opinions on how things should have been done.
iTokio 13 hours ago [-]
The most interesting exchange, related to disclosure, is this one:
> Nope, sorry, we are NOT allowed to notify anyone about anything "ahead
of time" otherwise we will have to tell everyone about everything.
That's the only policy by which all the legal/governmental agencies
have agreed to allow us to operate in, so we are stuck with it.
greg k-h
whateverboat 11 hours ago [-]
As much as I like linux, this is stupid.
fguerraz 5 hours ago [-]
Distributions using outdated (sorry “stable”) kernels are stupid.
We are not 20 years ago, the world in which it made sense doesn’t exist anymore, but the industry is slow to move on. Just pick a long term release and update it regularly.
harshreality 5 hours ago [-]
Yes.
Distros (point release distros) should use LTS kernels and keep up to date with them. Their "we'll maintain our own kernel branches" model either leads to many missed bugfixes, or duplicates Greg K-H's workload internally, for no practical benefit.
If a distro is suspicious of particular patches in the -stable tree, they could maintain a blacklist of them. However, instead of doing that and accruing overhead of possible future merge conflicts, they should hash out their concerns on the -stable mailing list.
GranPC 1 days ago [-]
Just for what it's worth, I just pushed an eBPF-based workaround for people who are running kernels in which AF_ALG is linked directly into the kernel and not as a module: https://github.com/Dabbleam/CVE-2026-31431-mitigation
I am running this in production right now and it mitigates the attack, with no unexpected side-effects as far as I can see.
sersi 10 hours ago [-]
Interesting comment by Greg Kroah-Hartman when asked why the kernel team doesn't notify distros directly
> Nope, sorry, we are NOT allowed to notify anyone about anything "ahead
of time" otherwise we will have to tell everyone about everything.
That's the only policy by which all the legal/governmental agencies
have agreed to allow us to operate in, so we are stuck with it.
I'd be interested in knowing more about that policy... Seems that there should be exceptions for the major distros.
Of course, major distros who have contracts with SLAs could also pay for someone to be on the kernel security team and get a heads-up that way...
gregkh 9 hours ago [-]
The members of the kernel security team are not allowed to tell their employers anything that happens on the security list. They are there as individual members, not as employees.
And try to define "major distros" in a way that actually means anything viable.
If you just want to count users, then that would only be Android (everything else is a rounding error.) After Android, that would be Yocto, and then Debian. All distros after that are mere fractions of overall users compared to those 3 by number of running systems alone.
If you want to count it as "$ spent on Linux" then that cuts out Android and Yocto and Debian as those distros are free, and would focus purely on the tiny installed base of paid Linux systems, and cut everyone else out.
So what is a fair way to do this other than "we notify no one, and tell everyone to always update their systems to the latest stable releases that we support."
Especially as there is no way for us to determine your use case (i.e. if a specific bug is a vulnerability for you or not.)
sersi 4 hours ago [-]
Thanks for the reply (and thanks for the work you do)! Fair enough. And the issue is also that without some form of vetting you run the risk of disclosing the 0 day too early?
About that "That's the only policy by which all the legal/governmental agencies have agreed to allow us to operate in, so we are stuck with it.", you mean that if you disclose selectively, then you become liable for damages? or was it a more direct conversation with legal/governmental agencies?
And for a bug like this, what is the policy on backporting patches to LTS branches? It was corrected in mainline on April 1st but only backported after the public disclosure. Do you delay backporting to minimise attention on the security issue?
I guess that having a patch for that land on all the LTS branch would signal to any would be attacker that it's a significant security issue...
Sorry for all the questions but I'm genuinely interested.
If you want to talk about where actual exploitation happens, then Android is out (userland is crippled), and I guess Yocto as well (same issue). Not that they can't be attacked, but what is there is mostly static. As it's a privilege escalation attack, that leaves us with anything that runs code from unverified users (vulnerable server software, Linux shell services, untrusted software you think you've sandboxed with a user account, ...). That puts Debian, Ubuntu, RHEL, Fedora, Arch, ... installations as the juiciest targets.
tonyarkles 5 hours ago [-]
Oh... thank you for the reminder to try running the C version of this exploit on an Android phone over adb. The curiosity is now killing me.
Edit: for context, I work in embedded and the aarch64 version (PR #42 in the repo) has successfully popped every device I've tried it against except one where I have a custom kernel to work around a driver issue and (looking back at my git logs) accidentally forgot to enable the user-mode API for alg_aead specifically. Lucky mistake.
hnfong 6 hours ago [-]
Just a wild guess:
Given the potential impact a severe security issue in the kernel (like this one), it seems that the only process that is acceptable for government agencies of various countries (that deal with intelligence and national security) is to either keep secrets from everyone, or disclose them to everyone.
Otherwise, the entities on the priority disclosure list would basically have free access to zero day vulnerabilities. Then every country with a national intelligence agency would invent a distro and try to squeeze themselves onto that list, and things would become very political and ugly if the agents of any country can't get into that list...
KingMachiavelli 24 hours ago [-]
`nosuid` and probably `nodev` should IMO be the default filesystem mount options.
`/dev` is already a special devtmpfs and the initrd minimal /dev can just explicitly mount the initrd tmpfs rootfs with `dev` and `suid` if necessary.
Letting SUID binaries just "exist" anywhere is a stupendous security issue. If you mount some external storage medium, how are you to verify that none of the SUID binaries on that block device are malicious?
Additionally, this exploit appears to only work if the user executing the SUID binary can also read it. There's no reason for non-root users to have read permission on a SUID binary.
NixOS does this correctly: no SUID in the normal package installation directory `/nix/store`, and since no packages leak outside of it, `nosuid` can safely be used on all other mountpoints. The only exception is a single-purpose `/run/wrappers.$hash` directory that safely contains execute-only SUID wrappers.
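As a hedged illustration of the audit this argument implies (find setuid files, then strip world-read from them), here is a self-contained shell sketch using a throwaway directory instead of a real mount:

```shell
set -eu
tmp=$(mktemp -d)
touch "$tmp/demo"
chmod 4755 "$tmp/demo"             # setuid + world-readable/executable

# Audit: list setuid files under this tree
find "$tmp" -perm -4000 -type f    # prints $tmp/demo

# Lock down: drop world-read, as argued above
chmod o-r "$tmp/demo"
find "$tmp" -perm -4004 -type f    # setuid AND world-readable: now empty

rm -rf "$tmp"
```

On a real system you would run the same `find` over `/` with `-xdev`, and mount untrusted media with `nosuid,nodev` so the question never arises.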
muvlon 23 hours ago [-]
While I hate suid as much as the next person, it's really not the problem here.
The bug that is being exploited gives you basically arbitrary page cache poisoning. At that point it's already game over. Patching a suid program is maybe the easiest way to get a root shell from that but far from the only.
xorcist 22 hours ago [-]
The proof-of-concept exploit is just that: it is meant to demonstrate one attack vector only. There are many others. If your goal is to prevent only the PoC exploit, there are many easier ways to accomplish that, such as blacklisting, but that does not make you safer.
With this vulnerability you can manipulate the page cache. You could also manipulate ld.so to hook into arbitrary system calls, or set your uid to 0, or use any of another dozen or so ways to elevate your privileges.
Mount points have nothing to do with this, even if it is always a good idea to disallow suid in user-writable areas and prevent reading suid files, though that's for other reasons. NixOS does nothing to fix this and is just as vulnerable as everyone else.
akdev1l 23 hours ago [-]
Without read permissions you cannot execute the binary; that would not make any sense.
To execute the binary, it needs to be read from disk and loaded into memory.
In fact, if you have read permission but not execute permission on a specific binary, you can still execute it by calling the dynamic linker directly: `/bin/ld.so.1 /path/to/binary` (the linker will read and load the binary and then jump to the entry point without an exec() call).
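The read-but-not-executable half of this is easy to demonstrate: the kernel refuses execve() on a file with no execute bit set, even for root. The loader path for the second half is glibc- and arch-specific, so it is left as a comment (an assumption, not verified here):

```shell
set -u
cp /bin/true ./readable_not_exec
chmod 0644 ./readable_not_exec     # world-readable, no execute bits
if ./readable_not_exec 2>/dev/null; then
    echo "exec ok"
else
    echo "exec denied"             # execve() fails with EACCES
fi
# The ld.so trick would then be, e.g. (path varies by arch/libc):
#   /lib64/ld-linux-x86-64.so.2 ./readable_not_exec
rm -f ./readable_not_exec
```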
aaronmdjones 21 hours ago [-]
> Without read permissions you cannot execute the binary
This is not correct, as when the binary is setuid-someone-else, you are not the one executing it; they are.
Removing world-readability from all setuid-root binaries on the system would be sufficient to kill the PoC script provided for this vulnerability. It would not be sufficient to prevent exploitation though; there are many ways to abuse the ability to write to files you have read access to in order to gain root, for example by using the vulnerability to alter the cached copy of a file in /etc/sudoers.d/, or overwrite /etc/passwd, or /etc/crontab, ... the list goes on.
akdev1l 21 hours ago [-]
interesting, but in that case is there any point in keeping the x bit either? should suid binaries just be mode 4700?
aaronmdjones 21 hours ago [-]
If they don't have world-execute permission, an access(2) check for executability would return negative, leading to things like shells not tab-completing it. The kernel would also deny attempting to execute it, as it is not executable for your fsuid.
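A quick illustration of that access(2) behaviour, using the owner-side case (the file name here is arbitrary; the same syscall is what denies other users on a 4700 setuid binary):

```shell
# A file with no execute bits at all fails the access(X_OK) check,
# even for its owner, and even for root.
cp /bin/true /tmp/noexec_demo
chmod 0600 /tmp/noexec_demo

# `test -x` is effectively an access(2)/faccessat(2) executability check;
# shells use the same check when deciding what to tab-complete.
test -x /tmp/noexec_demo && echo "executable" || echo "not executable per access(2)"
```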
This workaround only applies to kernels that have the affected code compiled as a module. RHEL, Fedora, and Gentoo (we use a modified Fedora config) are all configured to build it in directly. Without a patch or a config change (as Sam from Gentoo was alluding to), those distributions remain vulnerable.
jcul 1 days ago [-]
There was some discussion on the GitHub issues about workarounds to disable it, even though it is baked in.
For compiled-in kernels you can also work around it without rebooting via apparmor, seccomp or SELinux at the least, there may be eBPF or other methods too.
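As one hedged example of the seccomp route: on systemd-based distros, RestrictAddressFamilies= can deny AF_ALG sockets to a given service without a reboot, which works even when the crypto userspace API is built in. The unit name below is made up, and you should verify that your systemd version accepts the "~" deny-list syntax and the AF_ALG name:

```shell
# Hypothetical unit "untrusted-workload.service"; drop-in denies the
# AF_ALG socket family for that service via systemd's seccomp filter.
d=/etc/systemd/system/untrusted-workload.service.d
mkdir -p "$d"
cat > "$d/no-afalg.conf" <<'EOF'
[Service]
RestrictAddressFamilies=~AF_ALG
EOF
systemctl daemon-reload
systemctl restart untrusted-workload.service
```

This is per-service, not system-wide, so it only helps where you can enumerate the services that run untrusted code.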
holowoodman 1 days ago [-]
The potential remedy doesn't work on RedHat and derivatives because the affected code is not a module there but statically compiled in.
lokar 4 hours ago [-]
I don’t see what the fuss is about. Sometimes the reporter will go above and beyond to try to coordinate a broad response, sometimes they won’t.
No one serious should consider the kernel (on its own) a good security boundary (and basic containers don’t count either).
sophacles 4 hours ago [-]
Nonsense. The kernel owns huge swaths of the IO path.
For example:
It is responsible for all ethernet and IP layer operations, and handles TCP, etc. (There are ways to move these things into userland, but in 99% of cases programs just open a socket and exchange buffers with the kernel; the kernel does the rest.)
You're telling me that the kernel should not be a security boundary against malformed packets? That it's no big deal if some malformed packet can crash a machine, cause remote code execution, or cause the system to perform poorly (all real things that have been possible in most tcp/ip stacks, including linux)?
Hell (ip|nf)tables firewalling is a security boundary, and it is implemented in the kernel. If you configure a rule and the kernel code handling that rule has a bug that allows bad traffic - isn't this a case of the kernel literally advertising itself as a security boundary and failing?
This is where some genius usually steps in and says "well that's why you use a hardware firewall hurr durr", but those are just boxes running a TCP/IP stack with help from some chips that can assist the operations. Problems there:
* The hardware or firmware or even the linux kernel running on such boxes may also have bugs, letting bad traffic through to the hosts/servers.
* There are categories of network stack bugs with packets that look like good traffic that can still be exploited on the host's kernel.
(Aka defense in depth requires the kernel to be the best security boundary it can be also).
lokar 3 hours ago [-]
That’s mostly true, I really had in mind local exploits
At the same time, you should use application layer proxies (e.g. HTTP) that have little or no privilege on your systems, run nothing else on them, keep them as restricted as possible, etc. Don't expose more general hosts to direct raw IP traffic from the internet.
foreman_ 10 hours ago [-]
The Linux kernel security team has chosen not to operate a coordinated downstream notification process. That’s a defensible choice at this scale, but it has a consequence: the kernel is a low-coordination upstream by design. Distros that ship it carry the cost. They track the commit stream, pay a vendor that does (RHEL, Ubuntu Pro), or accept the latency. Different points on a cost curve, not different levels of competence.
The researcher’s job is to surface information. The kernel team’s job, in this architecture, is to patch. The distros’ job is to track. The operator’s job is to pick a distro whose tracking matches the threat model.
When a 30-day disclosure catches you out, the question isn’t who failed. It’s which point on the cost curve your distro choice put you on, and whether that was the point you thought you were on.
A make-me-root bug has been sitting in the kernel since 2017, and the LTS branches still don't have a clean backport. That's a long window for anything running 6.12 or older.
1970-01-01 8 hours ago [-]
What's really sad about Copy Fail is that it doesn't seem to work on Android. This is a purely bad situation for Linux.
What's interesting is that their website is also down right now. These seem like specially timed DDoS attacks so maintainers cannot communicate the issue well.
walletdrainer 8 hours ago [-]
I suspect there are very few real setups affected by that proftpd bug.
Nobody is ddosing anything to cover it up.
SubiculumCode 6 hours ago [-]
Is this one related to CVE-2026-41940, which allows a bypass in cPanel?
It was not disclosed to stagex, and I expect the same for a lot of Linux distros. Thankfully we were already on kernel 7.0, so not impacted.
seniorThrowaway 1 days ago [-]
Ubuntu has patches out, tested before and after patching.
worthless-trash 13 hours ago [-]
I believe this is the side effect of having upstream manage the CVE process.
The distros don't get any involvement until release; welcome to the suck.
Avamander 5 hours ago [-]
This is the effect of "every vulnerability is a bug" and "we can't rate the severity of any vulnerabilities".
Which very clearly results in "bugfixes" (security patches) not making it everywhere in time, because it's simply ridiculous to ask each downstream consumer to rate the severity of everything on their own. It's easy to shit on CVEs, and some people even put out shit CVEs, but the same critics contribute absolutely nothing towards providing a better alternative.
It's quite certain that both the Linux project and the Linux CNA need to take some responsibility and put in some effort at communicating and at making triage easier.
harshreality 4 hours ago [-]
They can't. Linux has too high a profile. Any additional "in group" that had access to embargoed critical security information would have a much higher chance of being compromised.
The solution is not to tell more people that patch xxxxxx is a critical security bugfix that needs distros to roll new kernel versions immediately.
Major vendors (all the cloud providers) will have security teams that can have the bug mitigated in a few minutes once they're notified.
For everyone else...
Part of the solution is that distros need to stop believing that their distro kernel branches are any better than linux-stable, and use linux-stable and engage with the linux-stable list and patchsets if they're concerned about what's going into them.
Part of the solution is each distro needs a process for pushing critical updates (module blacklists, ebpf patches) to address things like this without forcing all distro users to reboot, which many won't do promptly anyway.
m00dy 10 hours ago [-]
Welcome to the AI-first world, where everything is about failing fast and getting repriced.
2OEH8eoCRo0 18 hours ago [-]
Seems silly. How many distros need to be notified? There are hundreds.
kro 12 hours ago [-]
That is true, but if at least the widely used ones would get notified before that would be beneficial. If they have a responsible security contact point.
- Debian
- Ubuntu
- Arch
- Amazon/Azure
- Fedora/RHEL
2OEH8eoCRo0 8 hours ago [-]
Then those that aren't notified will complain. I think it's on the distros to follow kernel developments since they are consumers of the kernel, not the other way around. Kernel devs can't possibly know all of the stakeholders that they need to notify.
Akuehne 6 hours ago [-]
[dead]
anthk 11 hours ago [-]
Hyperbola GNU was safe because they still use Python 3.8, for both political and stability reasons.
nromiun 10 hours ago [-]
Python 3.10 is only used for the exploit. You can easily rewrite it for 3.8 as well. The vulnerability itself does not require Python at all.
anthk 6 hours ago [-]
True. At least I can disable the module from the Syslinux boot entry (kernel command line) as a workaround.
JasonHEIN 24 hours ago [-]
huh, somehow seeing people not using ai for their work is a wow moment that i cherish a lot these days
lionkor 21 hours ago [-]
You're likely in an echo chamber! Barely anyone I know uses AI as more than a fallible tool.
ramon156 8 hours ago [-]
I blame X for this. I don't use it, but whenever I open the feed I get bombarded with bullshit that is packaged like a unicorn.
they disclosed 30 days after the patch was merged in the thing they reported to.
it's the same disclosure policy as google's project zero, and several other major players, so you should probably be trying to ping a lot more people
reporters should not be responsible for finding out and individually reporting to every downstream consumer. blame the kernel security team, who is in a much better position to coordinate notifications to individual distro security teams.
VladVladikoff 20 hours ago [-]
In the original thread they admitted multiple times that they rushed it out for marketing reasons.
john_strinlai 20 hours ago [-]
as an explanation for the misnumbered redhat version.
the disclosure itself followed a normal timeline, which you can view at the bottom of their blog post.
18 hours ago [-]
tptacek 22 hours ago [-]
The security research community would run you out on a rail if you tried to take a successful research product and attach mandatory disclosure norms to it.
VladVladikoff 20 hours ago [-]
Couldn't the product itself disclose to the vendors?
tptacek 20 hours ago [-]
No firm in the world would use a vulnerability research product that automatically disclosed to vendors.
18 hours ago [-]
Skywalker13 22 hours ago [-]
I have checked all the servers (bookworm, bullseye) that I manage, and none of them have the algif_aead module loaded.
So it seems not to be fatal on all unpatched systems.
Denvercoder9 22 hours ago [-]
Not having the module loaded doesn't mean you're not vulnerable, the kernel loads the module on-demand when it's needed. I tried the exploit on such a system, and it worked.
However, not having the module loaded does mean that in normal operation you don't need the module, so the proposed mitigation of disabling the module is safe in the sense that it won't disrupt anything.
Skywalker13 22 hours ago [-]
I don't know what exactly can load this module, but the servers have been running for many weeks, and I suppose that if something loads this module, it stays loaded until the next reboot, no?
I tried rmmod on all servers, and it always returns `ERROR: Module algif_aead is not currently loaded`, which is why I think it's fine. Of course I'm keeping an eye on https://security-tracker.debian.org/tracker/CVE-2026-31431 for updates.
rcxdude 6 hours ago [-]
the kernel will autoload modules when they are needed. The fact that the module hasn't been loaded is an indication that the bug may not have been exploited, but it does not mean that you are not vulnerable to it. You need to block the module from loading or remove it entirely to mitigate the issue (which is what the first line of the recommended mitigation states).
Denvercoder9 21 hours ago [-]
> I don't know what exactly can load this module
Well, for one thing, opening an AF_ALG socket, as the exploit does.
bombcar 17 hours ago [-]
rmmod just tells you it's not loaded; you'd have to delete the module to prevent it auto-loading.
TacticalCoder 21 hours ago [-]
> I have checked all the servers (bookworm, bullseye) that I manage, and none of them have the algif_aead module loaded.
But only Trixie (and testing/Sid) are patched (as I type this).
On Bookworm (and Bullseye), you want to add the module to the list of blocked modules. It's a one-line change.
Not "separately to every single downstream", there is the "linux-distros" mailing list for disclosures: https://oss-security.openwall.org/wiki/mailing-lists/distros
This random blogpost from 2022 serves as proof that disclosing kernel vulnerabilities to the distros list is a well-known practice: https://sam4k.com/a-dummys-guide-to-disclosing-linux-kernel-...
I agree it's a shame that the process isn't more streamlined and the kernel developers aren't forwarding the reports to the distros list.
They are professional security researchers, they must know this is the way it is done in the ecosystem.
Kicking the can around leads nowhere.
just as a note, it's not as simple as firing off an email to linux-distros and calling it a day.
qualys, one of the big firms (10,000+ customers across 130 countries, i.e. "professional researchers"), has even taken a stance against emailing linux-distros because of the restrictions and policies involved:
There are (some, loose) norms of vulnerability disclosure, and this isn't one of them.
It’s certainly a thing some people do. But there is not a unified consensus on how to handle vulnerabilities. Different security researchers (or, in fact, the same researchers releasing different findings) can and do take many different courses of action.
The kernel devs patched the kernel. The kernel devs have a pretty known, straightforward stance in how they ship fixes for anything, because anything in the kernel can be a security problem.
Distro maintainers can see kernel changes. Some distros aggressively track new changes. Others backport what they feel are relevant. Others don’t do either.
Users pick what distro they use, and how they set up their infra.
Maybe if I were paying for RHEL licenses I’d be eyeballing the money I pay and RHEL’s response time.
But the ownership here lies with system operators, who pick their infrastructure, who design their security model, and who build their operational workflows. This vuln is a great example: people who looked at shared untrusted workloads on a single kernel and said “Hell no” had a much calmer day than teams who thought that was a good idea.
In terms of something actionable (and maybe someone more versed in how the distros work can tell me why this is a bad idea): shouldn't there be a documented process and channel for critical CVEs to be bubbled out to distro maintainers, who then have some sort of SLA for patching them and sending them downstream to end users? Perhaps incentives are not aligned to produce this outcome.
Otherwise, it’s on the end user. Distro volunteers don’t owe you anything. Kernel devs don’t owe you anything.
I don’t care about what would be the most effective way of doing things. I care about what folks involved actually owe to each other, and distro volunteers don’t owe users any kind of active chasing of remediation due to the user’s threat model.
The problem with making some kind of streamlined process that solves what you didn’t like about this vulnerability’s remediation is that it ignores basically all the complexity, like “what about distros that don’t abide by embargoes”, or “which distros count as ones that matter”, or “what about all the vulns that aren’t in Linux but in software that’s packaged across many operating systems”.
This vulnerability is, for some threat models, a really big deal. A security group found the vulnerability. They disclosed it. It was patched.
Folks here have gotten all kinds of bent out of shape that the groups involved didn't do things in the way each internet commenter would have liked. But this is the system working.
This vulnerability is, for other threat models, a death sentence.
> A security group found the vulnerability. They disclosed it. It was patched.
It was patched only after some people who should have been notified well in advance happened to notice something was up. That is NOT HOW IT'S SUPPOSED TO WORK.
For as long as the unpatched window remains open, skids will mess around and break things. Organized crime teams will use it for some really nasty hacking/ransomware/exfil/extortion/whatever. I guarantee you, this vuln is powerful and widespread enough that intel orgs will use it to kill targets, if they haven't already been using it for years. And if they have, we can just bank on them pulling out all the stops to take advantage of the remaining time for wreaking havoc. Make a project out of it and see if you can guess some of the future headlines.
Certain folks might not care much because they are citizens of one or more of those orgs' nations, so those targets are welcome to die in their opinion. That's fine. You do you, I'll do me, we'll all just go on doing our thing. But it's all fun and games until the wrong target gets hit and now there's a pact between the Germans and the Austrians being invoked and a few dozen million Europeans die. Or a geopolitical hotspot flares up and overnight 20% of the global petroleum supply chain grinds to a halt. Use your imagination. This vuln is a digital magic wand that is trivially usable to cast Avada Kedavra and somebody neglected to tell 99.99% of the Good Guys about it.
How is this different from any other day? Because now we've got a world-changing vuln out in the wild with no distro mitigation on day 1, and who the hell knows how many unscrupulous actors poised to take advantage of it before the fun and games stops. There will be no adults in the room when the miscreants decide to deploy while they still can.
Is this vuln going to start the next world war? Probably not. I don't expect it to and I hope and pray it doesn't. But leaving a vuln like this undisclosed to the very people whose job it is to protect us all is playing with fire. Not matches; more like a 10-grams-less-than-critical mass of plutonium.
sam is right to be pissed and he's doing a very good job of hiding it, because he knows that his users are at the mercy of TPTB in the Linux kernel world. Somebody's head needs to roll for this, and I don't mean some dude the CIA wants to hax0r because he's next on the list.
A Linux LPE is a nothingburger unless you’re relying on the Linux kernel to enforce internal security boundaries, which would simply be foolish.
Now, y'all tell me, since I'm not a web guy. How hard is it going to be to tweak this lovely little pathogen into some kind of browser exploit? It just needs to be combined with a sandbox escape to work on current versions, right? Difficult but quite worth investing the time and effort to develop if that's your line of business. If that happens, every at-risk Tails user is going to have to stay offline for a while, unless they want to play the drone lottery.
Or how about chaining it with any of the as-yet unpatched bugs in gawd-only-knows how many web services out there that have poor input sanitization code? That bug now graduates from a DoS crash causer to a root grab. Good luck stopping it with your fancy AI Behavioral Analysis security tools. They better be fast. The sploit is going to do its work in two packets, maybe three. Fun times.
Lucky for us systems monkeys, it's not like anybody is spending billions of dollars to develop vuln finding AI tools right at this very second. So there shouldn't be many unpatched web services holes.
Oh, wait.
Of course, as the grey hats can already tell you, the really delicious part of this thing is how it's going to become the LPE tool of first resort for any APT that's already inside ur base killin ur doodz.
Nothingburger? This nothingburger is going to root a million OS instances before we know what hit us.
People who run servers that give out shell access to users or randos already needed to contend with this.
Added later: you may find https://gtfobins.org/ fascinating or horrifying.
The whole thread, really bringing me back to comp.security.unix. I'm not complaining! I miss comp.security.unix.
Otherwise it’s not.
Not sure what the solution could or should be, but surely there could be a better, easier mechanism for the kernel to advise all distro maintainers who care, and for those distro maintainers to subscribe in some way. Whether any distro maintainers do so (let alone act on the vuln notifications) would be entirely up to them. There could also be an easier way for end users to see what the distros' policies on this are, so they can take that into account when selecting a distro.
We don’t have to agree, but the site rules are pretty clear that swipes like that aren’t ok.
That kind of distro maintainers and kernel devs communication path already exists: the linux-distros@ mailing list. But since anybody can read it, posting “hey everybody, this is a security patch” has basically the same effect as the security researcher posting, in terms of disclosing the vuln to bad actors.
Given that anybody can make a Linux distro, and Linux distros aren’t generally either capable or interested in background checking their teams or policing their individual security practice, it doesn’t seem possible to have a communication channel that distros can sign up for that lacks this problem.
If you want that, buy a commercial distro of linux, or use Windows. That's a huge part of Microsoft's value proposition to enterprise - they pay people to stay on top of security patches for you. Same with RedHat and others.
Expecting anything of unpaid volunteers is unreasonable.
> THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
There's not really an enforcement mechanism in FOSS like there is in capitalism world, it just comes down to what we want our part of the world to look like. So I think we'd think more clearly if we leave aside the ideas like "who owes who what." I think it's fun to imagine what sort of motivations and incentives there are if we put away the money ones.
Nonsensical string of words with no meaning.
If you want something that someone else isn't giving you, you have the option to try to do it yourself, or try to compel someone else to give you what you want somehow. Feel free to idk pay someone to track the kernel list and 4000 others and send you heads-ups? Try to pass a law to make people do what you want since you don't care about words like "owe"?
Yes, exactly, the opposite of paying, since when you pay someone something they owe you whatever you paid for.
If we leave aside owe, deserve, and earn, we can start discussing things like what we want our kernel ecosystem to look like, how we can make it safer, etc, without being burdened by these concepts.
It's a simple intellectual exercise, that's all. If you're having a strong reaction to it, imo that'd make it even more fun for you to participate.
You want someone to do something for you for some other reason than that they owe you.
They already are doing something for you that they don't owe you. They are writing software that you benefit from. You just want them (or somebody) to do something else that they don't owe you.
They aren't, because they don't owe you and it's not something they want to do for fun, and so since the problem is they don't owe you, you wish to set aside words like "owe".
Well sure. Looks like you found the problem and the solution alright. Why didn't anyone else think of that?
If no one is doing a task you want done because they aren't obligated to, then you seek some other reason besides obligation. Ok, what then?
Do you imagine say a dating website where people compete to look attractive by getting points by doing the best job at finding the most bugs and patches and reporting them to the most downstream consumers the fastest?
Exactly! That's what I'm interested in exploring.
> If no one is doing a task you want done because they aren't obligated to, then you seek some other reason besides obligation. Ok, what then?
That's what I love exploring. Action with no obligation. Have you any examples of that in your life? Nobody obligates me to do the long walks I enjoy where I stick a 360 camera on my head and then upload the footage to Mapillary and other open platforms, I just like to do it, and I want to find other things that I'm motivated to do without obligation, and I'm fascinated by things people do for "no reason." Understanding human motivation is really important to me for some reason.
As to "what then," yes what then? If I run a cashless commune, how do we make sure the toilets get cleaned? That's the whole question, and I love exploring it. If you'd like to experience it yourself, you could always try attending a regional Burn for a bit of a micro version of it, people doing things just for the sake of it.
I'm sorry, I don't quite understand what you mean by the dating app thing.
Who decides who is a trustworthy distro maintainer? In the open source world everyone is equal, no favorites are chosen. If your point is that the distros backed by companies making at least $x million revenue a year should get priority disclosure... pretty sure somebody will take issue with this.
And it's not like a hypothetical issue either. Given the high stakes, bad actors are highly incentivized to masquerade as some small scale niche distro until they get their effectively free zero day CVE.
Linux, like every open source project, is just a bunch of people who are YOLOing it. Not something you use for your Fortune 500 mission-critical infrastructure.
But from what I understand they were not given enough information to know if it was relevant or not. The commit message just said it reverted a change from another commit because there was "no benefit". From the patch itself, it is not at all evident that this is a fix for a critical security bug.
If the commit message says it fixes a security bug, then bad actors immediately know there's a possible exploit there. So maybe it's intentional? (not familiar with the policy for this)
"There is no benefit in operating in-place in algif_aead since the source and destination come from different mappings. Get rid of all the complexity added for in-place operation and just copy the AD directly."
They dropped the ball when they shipped supposedly secure systems whose method for getting alerted to security updates was "hope people reporting to upstream will also notice a mailing list that will alert them".
(Caveat: distros like Ubuntu advertise security updates, so this is on them. I'm not sure Gentoo does that; if they don't, well then no one dropped the ball, because no one represented that Gentoo got prompt security updates.)
“30 days should be enough time” why? Why is 30 days a magic number? Especially in open source.
Yeah, it isn’t the researcher’s problem to tell every distributor of the kernel about the fix or to verify that everyone has it, but fuck, maybe wait until at least someone has the fix, and maybe don’t drop it on a Friday. That is just malicious.
But, even if I agreed with you, how do you propose they tell the patchers this in a way that doesn’t tell the whole world?
And they dropped it on a Wednesday.
If researchers want to showcase their ability (either individually or as an organization) to identify and address security vulnerabilities in complex multi-stakeholder environments, I very much expect them to figure this out. After all, it doesn't make much sense if a company, after commissioning a security review, needs to hire a different firm to handle the vendor interactions, so that identified issues are resolved with minimal impact to the business.
These vendor interactions you're referring to are the company's customers, correct? Are you proposing the company hire another company to manage getting updates to their customers?
Feels like the more sensible process would be for kernel maintainers to announce when a version contains a fix for a high-impact security vulnerability and for distro maintainers to pay attention to that. Could be done without revealing what the vulnerability actually is in most cases, trusting the kernel maintainer's judgement. There does seem to be a public linux-cve-announce mailing list.
No it can’t. The bad actors that should actually worry most people are actively combing through commits on mainstream codebases, using a combination of automation/AI and manual review to pluck vulns out by their remediations.
Expecting a FOSS project to go track down all of its (millions of?) users seems like a very unreasonable expectation, and is well outside of their scope of responsibility.
People have gotten so used to the Github flavour of free-labour, social-network-style FOSS that they've forgotten what all those LICENSE files actually say, which is to make it explicitly clear that the devs are not responsible to you for your issues, up to and including the software setting your house on fire. If you don't like it, you don't have to use it.
They can't, because (responsible) security disclosures are private, _not public_. That's the whole point of the system: notify the developers in private ahead of time (usually 30, 60 or 90 days) so they can write, test and roll-out the fixes before you release the info to the whole world. This is to minimize the time between when bad actors gain access to the exploits vs. when users install the patch. So "keeping up on security disclosures" cannot ever be a 'pull' process.
Usually the maintainers of the big distros are part of (private) security mailinglists and receive such info. Just not in this case it seems.
Sending emails to some big distros would still result with e.g. Gentoo not getting that info because they are not a big distro.
Not ideal, but also: shit happens? It's always a balancing act choosing the lesser of multiple evils and most of the time it seems to work ok-ish, which is probably the best we can hope for ;-P
For this specific "bug" they took care to not mention any security angle in the commit message, making it extremely hard for an outsider to even realize this was a critical patch. I assume this was because they wanted to push the fix without breaking embargo.
The post you are responding to says that it would be nice if they copied literally one mailing list.
Who would curate that list though? You don't need permission from the kernel team to spin up a new distro. I could go and create a fork of Debian or Arch or whatever today and the kernel team would never know (and neither should they).
This is entirely the responsibility of the distros. If you don't like this model, use something like FreeBSD.
You don't need anyone's permission to make a distro, that's true, but if you notify Debian, Canonical, Fedora, Red Hat and Arch you're covering a very large fraction of users; way more than today's 0%. In cases like this, perfect is the enemy of the good.
The name is a misnomer.
Given this was announced when backports weren't ready (and given the POC was at least opaque if not obfuscated), I'm getting the vibe fixing the vuln wasn't as high a priority as making a media splash.
> Note that for Linux kernel vulnerabilities, unless the reporter chooses to bring it to the linux-distros ML, there is no heads-up to distributions.
so, no, the `linux-distros` list doesn't solve the problem.
They openly refuse to do this and have been given authority by MITRE to work against any such process.
There would be a lot of people gloating if this happened to MS.
Linux is a free kernel that literally revolutionized the computing landscape.
There are only 2 words in this term, and neither one even slightly applies.
A sacred cow is called a sacred cow because there is no reason for it to be sacred.
Linux is perfectly subject to criticism, and so not at all sacred.
Linux has earned a stunning amount of respect and gratitude by actually providing stunning utility and quality. IE, it's not just a random object like a cow that everyone decided to worship for no reason.
Spoken as a FreeBSD user who has plenty of critiques of the entire Linux ecosystem.
I agree.
> A sacred cow is called a sacred cow because there is no reason for it to be sacred.
Here we diverge. Linux earns sacred cow status when people interpret legitimate criticism of it as an attack that must be debunked or dismissed. And there's plenty of that happening in this forum; you may not be treating it as a sacred cow, but plenty of people are.
And to expound on why it even matters, it does a disservice to Linux to treat it this way: if you can't engage with its flaws, you'll never help fix them, and instead attack people who try.
It is not the responsibility of the initial reporter to communicate with distributions, but the fact that those responsible failed to do so doesn't give everybody else a free pass.
The thing is, malicious actors are already monitoring most major projects, doing either source analysis or binary analysis to figure out whether changes were made to patch a vulnerability. So, as soon as you actually patch, you really need to disclose, because all you're doing by not disclosing the vulnerability is handing the bad actors a free go. The black hats already know. You need to tell the white hats, too, so they can patch.
Requiring the security researcher to do it is insane. Should a security researcher that identifies a vulnerability in electron.js need to identify every possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.
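The "patch-diffing" dynamic described above can be illustrated with a toy sketch. The heuristics and weights below are purely hypothetical, not any real attacker's tooling: the idea is just that a silent security fix often leaves tell-tale signals in its diff (added bounds checks, new error returns, touched raw-copy calls) that make it trivial to triage at scale.

```python
import re

# Toy heuristic, assumed signals only: score a unified diff for patterns
# that often accompany silent security fixes.
SIGNALS = [
    (r"^\+.*[<>]=?\s*\w*(len|size|count)", 2),          # added comparison against a length
    (r"^\+.*\boverflow\b", 3),                          # overflow mentioned in new code
    (r"^\+.*\breturn -E(INVAL|PERM|FAULT|OVERFLOW)\b", 2),  # new error bail-out
    (r"^[-+].*\bmem(cpy|move|set)\b", 1),               # a raw copy was touched
]

def triage_score(diff: str) -> int:
    """Return a crude 'worth a closer look' score for one commit diff."""
    score = 0
    for line in diff.splitlines():
        for pattern, weight in SIGNALS:
            if re.search(pattern, line):
                score += weight
    return score

# Hypothetical diff resembling a silently-fixed out-of-bounds copy.
fix = """\
--- a/crypto/copy.c
+++ b/crypto/copy.c
-       memcpy(dst, src, n);
+       if (n > dst_size)
+               return -EINVAL;
+       memcpy(dst, src, n);
"""
print(triage_score(fix))  # prints 6 with these toy weights
```

A real pipeline would pull every commit from the stable queue and flag high scorers for manual (or, per the thread, LLM-assisted) review; the point is how little effort that takes compared to coordinating downstream notifications.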
FTFA:
> I see that on the 11th of April 6.19.12 & 6.18.22 were released with the fix backported.
> Longterm 6.12, 6.6, 6.1, 5.15, 5.10 have not received the fix and I don't see anything in the upstream stable queues yet as I write.
I wouldn't go so far as to call this "the kernel devs patched it". Virtually none of the kernels that distros are actually using today have received a fix. This looks like an extremely lackluster response from the kernel security team.
Pretty much the only non-rolling distros that are shipping a fixed kernel are Fedora 44 and Ubuntu 26.04, both released in the last few weeks. Their previous releases both shipped with Linux 6.17, which is still vulnerable today!
But it's been at least 15 years since "reversing means patches are effectively disclosures legible mostly to attackers" became a norm in software security. And that was for closed-source software (most notably Windows). The norms are even laxer for open source.
But this is a false comparison, right? The scope of "Linux distributions" and "electron apps" are orders of magnitude different. If the reporter spot checked one or two of the most popular distributions to see if fixes had been adopted, that seems like an extra level of nice diligence before publicizing the details.
It doesn't seem "insane" as much as "not the most efficient path" as has already been well argued. But it also doesn't seem unreasonable to think in a project of the scope of the Linux kernel, with the potential impact of fairly effective(?) privilege escalation, some extra consideration is reasonable--certainly not "insane" at the very least?
About half the thread we're on reads as if the commenters believe Xint made this vulnerability. They did not: they alerted you to it. It was already there.
> Their job is to bring information to light, not to manage downstreams.
The researchers are also members of a community in which their actions may deal more harm than necessary. Nuance must exist in evaluating "reasonable" and "responsible" in the context of those actions.
If it helps you out any, even though my logic was absolutely the same and just as categorical in 2012 as it is today: there are now multiple automated projects that run every merged Linux commit through frontier models to scope them (the status quo ante of the patch) out for exploitability, and then add them to libraries of automatically-exploitable bugs.
People here are just mad that they heard about the bug. Serious attackers had this the moment it hit the kernel. This whole debate is kind of farcical. It's about a "real time" response this week to a disaster that struck a month ago.
they did it in the established industry standard way that probably every single security researcher you can think of follows (for good reason, i would add).
whoever did the marketing on "responsible disclosure" was a genius.
tptacek says it much better than me: ""Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules."
And it's not as if I'm asking for a lot of effort. One mail to the security team of a popular distro "hey, we have found this LPE that we'll release with exploit next week, it's patched upstream already in this commit, but you don't seem to have picked it up" would likely have been enough.
The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile. Look at exactly what happened with BlueHammer this month. The security researcher went full disclosure because Microsoft didn't listen to their reports.
Disclosure is vital. It's essential. Because the truth is, if a security researcher has found it, it's extremely likely that it's already been found by either black hats or by state actors. Ignorance is not actually protection from exploitation.
The security researcher also has a responsibility to the general public that is still actively using vulnerable software in ignorance. They need to be protected from vendor and developer negligence as well as from exploits. And the only way to protect yourself from an exploit that hasn't yet been patched is to know that it is there.
I'm also not proposing delaying the disclosure to the general public at all. They already waited 30 days with that, that's fine. Just look a bit further than your checklist of only contacting upstream, and send a mail to the distributions if they haven't picked it up a week or two before.
[citation needed]
Is there any evidence that Linux distros (specifically) act in this way? Or a particular distro?
there is ~3 decades of citations you can look at, spread out over every security mailing list, security conference, etc. that you can think of.
one decent start is https://projectzero.google/vulnerability-disclosure-faq.html...
"Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. [...] We used this model of disclosure for over a decade, and the results weren’t particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely! We were optimistic that vendors could do better, but we weren’t seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users.
[...]
While every vulnerability disclosure policy has certain pros and cons, Project Zero has concluded that a 90-day disclosure deadline policy is currently the best option available for user security. Based on our experiences with using this policy for multiple years across thousands of vulnerability reports, we can say that we’re very satisfied with the results.
[...]
For example, we observed a 40% faster response time from one software vendor when comparing bugs reported against the same target over a 7-year period, while another software vendor doubled the regularity of their security updates in response to our policy."
>Linux distros (specifically) act in this way
carving out special exceptions based on nebulous criteria is a bad idea. 90+30 is what has been settled on, and mostly works.
Because a situation where the development team fails to appreciate the severity of a security vulnerability, and has an established procedure that requires the researcher rather than the kernel team to communicate with downstream users, is already a major failure of process. Security is not just patching the vulnerability, and it seems that the Linux kernel developers, or the Linux kernel security team, do not understand that.
This is the result of that failure.
If this were any other software, we'd be here with pitchforks and torches. The researcher gave the developers timed disclosure, and even waited until after the developers had patched the issue. And... it's still a problem.
It's 2026. We're more than 30 years into the Linux ecosystem. I don't believe this bullshit for a moment.
Given how trivially users can implement mitigation, distributions could have done _something_ to protect their users prior to publication date. A handful of messages is all that was required, not "every single downstream" - that is a straw man.
The publication of a bug that trivially gains root on an incredible number of Linux installs that was discovered using an A.I. tool prior to any of the "downstreams" implementing a fix is intentional. I speculate the motivation is free promotion of the A.I. tool.
yeah, distributions could be following the kernel updates more closely and they would have been patched prior to publication. mainline was patched 30 days before publication.
it is not the reporter's responsibility to babysit the linux distributions.
"not caring" would be not disclosing the vulnerability at all, and instead selling it to the highest bidder on one of the private markets
which, given the ridiculous and undeserved lashings the researchers are receiving from people completely outside of the security ecosystem, i would not be surprised if they moved in that direction. they would certainly make more money.
the linux kernel team is in a 10000% better position to communicate to and coordinate their downstreams. it seems completely backwards to me to suggest that the reporter should be responsible for figuring out every possible downstream and opening up separate reports to each of them.
the kernel team should have a process/channel to say "this is important! disclosure is in 30 days" that is received by distro security teams. because this is not the first or last time the kernel will have a local privilege escalation. hoping that every reporter, forever in the future, will take the onus on themselves is a recipe for disappointment.
Gentoo has to take some blame too for not keeping all the kernels they maintain patched in a timely way.
How do you figure that? From what I could tell from the earlier post, the fix has only been backported to 6.18 and later, and as TFA indicates, the distros were not informed of the security implications of this fix. All distros shipping a major kernel version from more than a year ago -- and that includes all LTS kernels -- are vulnerable, regardless of how "timely" their patch schedules follow upstream.
To be fair, I question the wisdom of managing kernels like that across all distros.
the baddies are looking at every patch anyways.
Even if the only purpose of looking at the status is to make yourself look good in marketing materials, it's surprising that it didn't happen.
the vulnerability report was submitted to the kernel security team and appropriate kernel maintainers. those are the people responsible for patching the kernel, which they did 30 days ago.
They patched 2 of 7 supported kernels.
is the reporter of that vulnerability responsible for finding and submitting a vulnerability report to every single piece of software that uses left-pad? all ~millions of them?
or do they submit the report to left-pad, get them to fix it at the source, and trust that the people relying on left-pad will update their software like they should when they see a security-relevant update is available?
Those groups don't exist, to my knowledge. And probably can't, realistically speaking.
> Is your software AI-era safe?
> Copy Fail was surfaced by Xint Code about an hour of scan time against the Linux crypto/ subsystem. [...]
> [Try Xint Code]
More chaos makes their product seem even more attractive.
I get put into a read-only dashboard with ZERO info. is this live? is this static? how do I use it? the API button just leads me to a swagger doc.
this was a failure of the kernel security team, and their stance on communicating security issues with their downstreams.
Hell, Crowdstrike is still purchased.
Sure, they have no legal obligation to disclose, but we all also have no legal obligation to buy their services. Blacklisting bad actors like this is the right move to discourage this kind of behavior.
they did a proper coordinated disclosure, following the industry standard 90+30 process. that is why the exploit dropped 30 days after the patch landed.
the kernel team should have communicated with their downstream about the importance of the patch. that is the kernel security team's responsibility -- and they are much better positioned to do that than crossing your fingers and hoping every reporter will contact every distro every single time there is a vulnerability.
there are very good reasons disclosure works this way, backed by a couple of decades of debate about it.
I just don't see the point in complaining about how shirking the norms of your industry will make you look irresponsible. I don't really care that they could have decided to sell the vulnerability instead. It isn't material.
Tavis Ormandy dropped Zenbleed right onto Twitter. He's doing fine. You can blacklist him if you want; I imagine he's not going to notice.
There is actually no way to give them a friendly heads up, and then do your own thing. The only way not to be bound is by not sending them any notification at all...
I couldn't find a public copy of that.
The best starting point I found for reporting vulnerabilities was: https://github.com/microsoft/MSRC-Security-Research/security...
You can email without agreeing to anything. But for a serious issue Microsoft would obviously try and track down who you are and what jurisdiction you are in.
> MICROSOFT BOUNTY TERMS & CONDITIONS
> Last updated: July 23, 2025
> The Microsoft Bug Bounty Programs Terms and Conditions ("Terms") cover your participation in the Microsoft Bug Bounty Program (the "Program"). These Terms are between you and Microsoft Corporation ("Microsoft," "us" or "we"). By submitting any vulnerabilities to Microsoft or otherwise participating in the Program in any manner, you accept these Terms.
Who knows if it's enforceable.
Those private actors aren't planning to sit around and hold onto these exploits they've hoarded forevermore; they're obviously paying for them so they can one day use them.
We must get public funds to reward ethical disclosure of big impact vulns like this.
Mostly cover citizens within a very limited set of jurisdictions.
Otherwise there's a chance at extradition.
This is not true in many jurisdictions.
We need an anonymous bounty system.
If so, that's a bit naive. In the actual world, that buyer wants to buy more stuff from me, not penalize me.
And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems. (I'm not saying "responsible disclosure" is the correct way to do that, but hoarding vulnerabilities and exploits and selling them to the highest bidder certainly isn't.)
This is how society needs to work.
Or at least went dark ..
You'll learn who the buyers are if you routinely have the really good stuff to sell! If you are offering iOS zero click on a semi-regular basis, the buyer is going to want to try to deal with you directly and preferably offer you a more regular form of employment, if you are interested. Some national governments may offer certain benefits to you, depending on your situation.
All depends on what you have to offer. If you were able to offer this https://arstechnica.com/security/2025/09/microsofts-entra-id... or something of that magnitude, a lot of problems in your life would just go away. The buyers would all be Five Eyes and the intelligence gain of having that kind of access even briefly is priceless.
In a more Western-centric context, imagine if you had a flaw like that, same 'no logs are generated' and 'every single customer account is accessible' but the impacted vendor was Alibaba Cloud. The researcher would get to name their price. That's the real world, that's the world we share. We shouldn't be blind to that.
it wasn't sold for profit, it was openly disclosed.
> And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems.
All that "responsible disclosure" does is keep people from demanding better.
Uh... no? If you mean legally, some people might, depending on jurisdiction. But also, ethically? yes, researchers are ethically obligated to disclose responsibly.
> Just fyi.
...
> Be glad it was disclosed at all. Be glad a patch was available prior to release.
I am glad that a patch was available. Equally I can be glad that the linux community is strong enough to respond quickly, while also being angry that this person behaves unethically.
Likewise, when people in my industry behave poorly, or unethically; I'm now the person ethically obligated to both point it out, and condemn it. Not to become an apologist demanding I should be happy watching bad things happen, when much of the fallout could have been prevented with a bit less incompetence and ignorance.
I'm so glad these so called "researchers" aren't totally evil, I'm so grateful they're only half evil, give them a lollipop.
Whatever, the way they disclosed it isn't much different from no disclosure at all - the exploit would have been identified in the wild and fixed soon thereafter.
"Researchers"...
non-security people always seem to get up in arms about it, but there are very good reasons why the industry has landed on the process it has, hashed out over a few decades.
1. Status quo. Researchers are free to disclose to a vendor, free to sell vulns to legitimate companies, free to do full disclosure if they want. This situation benefits security. Researchers are able to pay their bills while also doing meaningful research into OSS projects that are unable to fund the kind of security audit they need. Harm reduction, of sorts.
2. Everyone is a bad actor. No one is going to do this work for free/for a bounty. Horrible flaws will be found and shared with ransomware gangs and the like. 0day will sell for a percentage of the ransom winnings. Researchers will live like kings, everyone else will suffer.
Which do you prefer?
If it won’t be handled through criminal law then it’ll be handled through civil litigation: Anyone who was exploited as a result of this disclosure should sue the discloser for contributing to the damage they’ve suffered.
In that world, the vulnerability has more value to those who seek to exploit it for their own motives, regardless of the consequences. They hope that no one else stumbles on it and fixes it, preventing them from continuing to use it to do bad things.
In the world where it is disclosed, there is more value in fixing the vulnerability as the maintainer’s reputation is at risk (and potentially monetary loss or legal liability if they are shown to be negligent).
> In computer security, coordinated vulnerability disclosure (CVD, sometimes known as responsible disclosure)
I guess you can learn something new after 36 years.
If you are referring to what you quoted, your pedantry and sharpshooting would result in an incomplete English sentence: "that's why we have the responsible disclosure" is missing a noun. Now that we are firmly in worthless pedantry:
Protocol (n):
1.a. a system of rules that explain the correct conduct and procedures to be followed in formal situations
1.b. a set of conventions governing the treatment and especially the formatting of data in an electronic communications system
If you don't like what I said or disagree, poke holes in factual inaccuracies. However, in the reality that I am pretty sure we all share, responsible disclosure is a well established protocol that is followed by many security researchers, and was imperfectly followed here.
> You: No, I wouldn't, because my own preferences are towards immediate disclosure.
And there it is. You could have said "I don't think responsible disclosure is a good idea" and moved on, but now we have whatever the fuck this is.
Bluffing sure as hell beats incapable of being wrong. I'll take it.
[1]: https://news.ycombinator.com/item?id=47969417
These researchers found a vulnerability in the Linux kernel. They could have just written a blog post and put it online, or not told anybody, or sold it. But instead they decided to tell the Linux kernel devs, and give them time to act before publishing.
And your beef is that you’ve decided they needed to also inform individual downstream projects that use the Linux kernel? Why? Which ones?
No, it's commonly followed practice: https://en.wikipedia.org/wiki/Coordinated_vulnerability_disc...
I'm all for lighting a fire under the developer's ass, but we live in an imperfect world and the biggest problem that we have is end-users. We may have applied the mitigation on day 0, and updated as soon as the kernel landed in our distro - and if some of us didn't then we've even got savvy users in that "don't update fast enough group" (which is fine, which is human, but is said imperfection).
Major distros should at least have gotten a few days of notice for something this catastrophic. It doesn't help that the kernel is fixed if "normies" aren't able to access it on day 0. For reference, the standard is 30 for the developer to fix and 90 for it to land on machines. Even 30+7 would have been a substantial improvement.
Ethical security research involves ethics, and maybe they aren't referenced in university/college any more - but here's what I was taught: https://www.acm.org/code-of-ethics .
> 1.1 Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.
> [...] Computing professionals should consider whether the results of their efforts will [...] and will be broadly accessible.
> 1.2 Avoid harm.
> (Honestly, all of it)
> 2.3 Know and respect existing rules pertaining to professional work.
> 3.1 Ensure that the public good is the central concern during all professional computing work.
> People—including users, customers, colleagues, and others affected directly or indirectly—should always be the central concern in computing.
Maybe other code of ethics for CS exist; I'd like to know which ethics these ethical researchers were following.
no, the standard is 90 days from notification or 30 days from the patch date, typically whichever is sooner.
e.g.
please also note that you are blindly quoting wikipedia articles at people who either currently work in security research, or used to work in security research. while we are not infallible, you should perhaps consider that we at least have real life experience dealing with vulnerability disclosure processes, and aren't just learning about them today from wikipedia. when a room full of experienced professionals are telling you that you are misunderstanding something, that is a sign to step back for a second and maybe reconsider your position.

> There is no such thing as "the responsible disclosure protocol".
And yes, I admit I got dragged down to their level and beat myself with a dumb stick in the process.
Nobody, for what it's worth, is arguing that major distros shouldn't have gotten some kind of notice. The problem is that the entity responsible for doing that isn't the vulnerability research lab. In fact, as a general procedural point, researchers can't go contact downstreams. They might be able to do so in the specific case of Linux, but you've tried to spin that possibility into a binding obligation derived from established practices, which: no. That's not a real thing.
I never said "binding obligation," that is the first time "binding" has appeared in this discussion and was introduced by you. Once again claiming things I have never said. Doing what you are free to do can still be a shitty thing to do.
I am a bluffing moron who knows nothing, you win.
> For reference, the standard is 30 for the developer to fix and 90 for it to land on machines.
I’ve never seen that as a standard anywhere.
Are you thinking of this? https://projectzero.google/vulnerability-disclosure-policy.h...
Please don't put words in my mouth when I have clearly stated the contrary. I used the word "disclosure," that is very different to keeping things secret.
As a user and admin I disagree. Makes one appreciate what a masterful bit of lexical-engineering “Responsible” Disclosure is, kinda like “Secure” (from me, not forme) Boot — “Responsible” Disclosure is 100% about reputation-management for the various corporation/foundation middleman entities sitting between me and my computer.
Those groups don't care that my individual computer is vulnerable but about nobody being able to say “RHEL is vulnerable” or “Ubuntu is vulnerable”. The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk than to be surprised by the fix and hope nothing bad happened in that meantime.
Immediate public disclosure is the only choice that isn't irresponsible as far as I'm concerned.
Even when there is no known use case of the attack (other than the security researcher's)?
> The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk
By the time you hear about it, the money could be gone because 1000 hackers heard about it from the researcher before you did.
> than to be surprised by the fix and hope nothing bad happened in that meantime.
Hope is not a good strategy here.
What could possibly go wrong?
And do you agree with that behaviour?
The real debate here is what went wrong with getting that info downstream, and whose responsibility was that?
No, it's really not.
High severity vulnerabilities are responsibly handled by quietly neutralising them with subtle patches that do not reveal the vulnerability, waiting for those patches to distribute. Then patching or removing the root cause of the vulnerability (at which point opportunists will start to notice), and finally publicly disclosing it when there are already good mitigations in place.
Example: Spectre/Meltdown mitigations.
I've been asked to use this approach myself when reaching out to maintainers. Sometimes it's possible to directly fix the vulnerability as a "side effect" by making a legitimate adjacent change.
That’s what you’re saying here.
https://x.com/spendergrsec/status/2049566830771970483
https://lore.kernel.org/linux-cve-announce/2026042214-CVE-20...
Or is everyone expected to upgrade and reboot every 48 hours for all eternity and just deal with potential regressions all the time?
I think this reflects poorly on the original reporters. If you have a weaponized 700-byte universal local root exploit script ready to go, perhaps you should coordinate with major distros for patches to be available before unleashing it on the world. No matter how "veteran" you are.
(This bug does not technically require a reboot to mitigate).
It's a category error to talk about a disclosure event like this as something that would destabilize someone's fleet operations. The Linux kernel is fallible. So is the x64 architecture. You already have to be ready to lock things down and reboot (or mitigate) at a moment's notice.
Remember: whatever else grumpy sysadmins have to say about this, Xint are the good guys. Contrast them with the bad guys, who have vulnerabilities just as bad as CopyFail but aren't disclosing them at all --- you only find out about them when it's discovered they're actively being exploited. There's no patch at all. There isn't even a characterization of how they work, so that you could quickly see what to seccomp. That's the actual threat environment serious Linux shops operate in.
LPEs are not rare.
Opportunists are the ones who will sell a 0day to bad guys. Or who will drop a 0day publicly to promote their services. And they’ll fight tooth and nail against any actual legal obligation to engage in responsible and coordinated disclosure, because they make more money without that.
As soon as a patch is committed, the clock starts ticking, the exploit will be discovered by reverse engineering recent commits. The commit was made on April 1st, Xint disclosed it on the 29th. If the Kernel Security team had wanted to, they had 28 days to backport patches in the LTS branches...
So, I wouldn't put any blame on Xint there.
Using quotes around something where you’re actually doing a strawman paraphrase of another commenter you disagree with is bad form.
Ubuntu/RHEL is vulnerable and so are most Linux users by extension.
I absolutely 100% agree with this and I'm glad to see somebody saying it. Any system that is one LPE away from being compromised is already insecure.
The only important system that uses it as a security boundary is Android, and there it is mitigated by the fact that APKs need user approval, plus strict SELinux and seccomp policy, plus the GrapheneOS hardening; in this case the mitigations succeeded (https://discuss.grapheneos.org/d/35110-grapheneos-is-protect...)
As for Hyperbola GNU/Linux: they are shifting to OpenBSD, having gotten fed up with the corporate slopware that proprietary Linux became. They will still make Hyperbola BSD GNU-license compatible, from the core to the userland tools.
In my case, I wish the Emacs and GNU developers embraced plotutils and dropped Gnuplot (which is not GNU at all; worse, its license conflicts with the GPL), and made Texinfo independent of LaTeX for producing PDF and HTML files with equations. Groff (troff + pic + eqn) already does that, no TeX Live needed. So can mandoc under OpenBSD, no magic needed, everything in a few MB.
TeX Live is huge (a full install is over 7 GB), and the so-called free FSDG version is not 100% free at all. With just that, GNU Emacs would be truly GNU-standalone, relying on GNU tools for plots under Emacs' Calc and for Texinfo books exported to PDF. A good plus for security.
Once that works, the rest would follow. Also, GNU Hurd being developed with proprietary LLMs/SaaS is a disgrace against what GNU stands for too. They can go back to the right path, but they need the will, for sure.
LPEs on Linux are obscenely commonplace.
I'm honestly unaware of what systems could be put in place to prevent this but expecting people to always do the right thing is fantasy level thinking. I mean I bet the disclosers thought they were doing the right thing, hence why it's a bad thing to rely on.
edit: spelling/grammar.
The kernel security team was given the heads up a month ago. At that point it is their decision.
It's fundamentally their position to not work the way that you describe.
I'd start with Greg's own words. You can probably find more on it from Spender/grsecurity's blog.
As for the latest patch, Greg is currently being forced to clean up a big fucking mess by external parties. And he's miserable about it.
Partly they have a strong belief that all kernel bugs are vulnerabilities and all vulnerabilities are just bugs; sometimes taken to the extreme in both ways (on one hand this case where the vulnerability is almost ignored; on the other hand, I saw cases where a VM panic that could be triggered only by a misbehaving host—which could just choose to stop executing the VM—was given a CVE).
The reason they don't is because Linus and Greg have repeatedly, publicly stated that they don't want to because they don't believe that vulnerabilities conceptually make sense for the linux kernel and they refuse to engage in the process.
That's exactly what I wrote: "they have a strong belief that all kernel bugs are vulnerabilities and all vulnerabilities are just bugs; sometimes taken to the extreme in both ways".
But there is also a question of bandwidth. If a maintainer asks to bring a specific vulnerability to distros-list, the kernel security people will be reasonable. I did it last March.
For a first approximation: Ubuntu, Debian, and RHEL(-derived) to begin with, plus SuSE, which is big in the EU/server space (AIUI):
* https://commandlinux.com/statistics/most-popular-linux-distr...
* https://commandlinux.com/statistics/linux-server-market-shar...
It seems like Gentoo, Arch, Mint, and Slackware could qualify as well:
* https://distrowatch.com/dwres.php?resource=major
U/Deb/RHEL are 'upstream' of a lot of other projects, and fixes would trickle down to Rocky, Alma, etc. Perhaps VM OS in cloud (AWS, Azure) could be a usage gauge as well.
But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal.
And no, the proposed mitigations don't help with half of the distributions out there...
What’s your theory here? What crime?
Also, all kinds of aiding and abetting.
Copying from the comment I was replying to:
> But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal
But it’s not the law anywhere I’m aware of today, and I’d not support it becoming a law.
Instead of that, you’d rather make the law compel free individuals to limit their speech, or to hand over their work to big companies privately, so big companies can save money?
That doesn’t sound like a nice future, if it’s even enforceable at all.
That's beside the point. If people use the official mitigation on https://copy.fail/#mitigation they will not sufficiently protect themselves on mainstream distros like Ubuntu and Debian.
The page also states
> Most major distributions are shipping the fix now.
This text was probably prepared in advance, but this was simply not true at the time of publication.
Edit: As of this writing, most distros including Redhat, Fedora, Debian Stable, do not have patches available in the package repos, though they're being actively worked on.
Considering that the patches have been available for a while, someone surely reversed what they were for and was actually exploiting this in the wild.
In the age of AI, I’d argue that “responsible disclosure” is dead. Arguably even in closed source projects. Just ask Claude to diff the previous version against the patched one and ask whether anything fixed in there could have had security implications.
We’re not there yet, but very soon the only way to responsibly disclose a vulnerability will be immediately.
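For what it's worth, the mechanical half of that workflow is already trivial; only the judgment step needs a model. A toy sketch of the diff-and-flag part (the code snippets and the keyword list are invented for illustration, not from any real CVE):

```python
# Toy sketch of patch-diff triage: diff two versions of a function and flag
# added lines that touch memory/bounds-related identifiers. The snippets and
# the keyword list below are invented for illustration, not from any real CVE.
import difflib

SUSPICIOUS = ("memcpy", "copy_from_user", "copy_to_user", "len", "bounds")

def triage(old: str, new: str) -> list[str]:
    """Return added lines in `new` that mention a suspicious keyword."""
    hits = []
    diff = difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm="")
    for line in diff:
        # '+' marks an added line; '+++' is the diff header, skip it.
        if line.startswith("+") and not line.startswith("+++"):
            if any(k in line for k in SUSPICIOUS):
                hits.append(line[1:].strip())
    return hits

old = "static int do_copy(char *dst, char *src) {\n    memcpy(dst, src, 64);\n}"
new = ("static int do_copy(char *dst, char *src, size_t len) {\n"
       "    if (len > 64) return -1;\n"
       "    memcpy(dst, src, len);\n}")

for hit in triage(old, new):
    print(hit)
```

Feeding the surviving hunks to an LLM for a could-this-be-a-security-fix verdict is then a single prompt per commit.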
Linux kernel is one of the most audited open-source projects ever. I guarantee you that someone did reverse the patch.
> but forgot to tell the distros
Probably an oversight, but irrelevant. The bug was in the linux kernel. It's insane to suggest that they should have notified everyone shipping the linux kernel.
With the way linux is used these days, I'd guess the number of systems with untrusted local users is pretty limited. Even with shared hosting, you generally have root in your VM or container anyway. Unless this enables an escape from that?
There's still the risk that people who run "curl | bash" without care could get bitten, but usually it's "curl | sudo bash" anyway...
Lots of shared hosters don't use VMs or containers. It's some arbitrary number of people logging in to a shared system, each one with a home directory under /home/THE_USER_NAME. I've had several such hosters over the years (thankfully not right now, though).
Things like HPC clusters are multiuser & don't entirely trust their users. If they did we wouldn't need users/groups/permissions etc in the first place.
And then there are users running claude-cli and friends who may just find it convenient to use a local root exploit to remove obstacles.
So containers don't protect you, only a VM.
How so?
But even if you think making unethical decisions in personal self interest is something no one should be criticized for, surely the Linux kernel team ought to have some process for notifying the top distributions of an upcoming LPE, just out of practicality.
Distros are downstream of kernel, that doesn’t entitle them to expect to be contacted directly by every security reporter. That’s not on them. Distros that are big enough should be plugged into the linux security team for notifications.
Security researchers cannot be held responsible for broken lines of communication within the org charts of projects that they study. They’re providing a valuable public service already, how much more do you want?
Yes it does. That's how it's always been done and distros can ship a fix well before it ends up in a kernel release.
Any strategy that assumes the rest of the world is functional, or that makes you personally responsible for fixing all of it, is equally broken; but there is a reasonable middle ground, and sending a few more emails lies within it.
AWS and GCP are downstream another level. Should the reporter also have worked with them? And their customers? And the customers of their customers?
IMO this whole discussion seems like people are annoyed by the security researchers doing god’s work and wish they didn’t exist or think that they should be fully subservient to the projects and companies they are helping for free. The bugs were there before the researchers revealed them!!
Most people in tech think like the techie in this comic strip.
https://xkcd.com/538/
A large percentage of kernel fixes have the potential to be similarly bad. For some the potential isn't even realized until after the fix has shipped.
Every stable release, Greg KH says you must upgrade now because there is something security-relevant in there. This happens at least once a week.
As for shared hosting providers, my sense is that there is always at least one local privilege escalation available to miscreants, making shared hosting safe only where there is a certain amount of trust.
I remember bugs that were similarly bad from my university days 30+ years ago. Has anything substantially changed?
I'd consider a shared hoster which allows users to run their own (native) code and doesn't use VMs for tenant isolation extremely irresponsible in 2026.
I could equally ask: "Who knows how many attackers learned about this vulnerability from this disclosure, and used it before the distributions fixed it?"
So maybe folks should take a break from the kind of armchair quarterbacking that this was “incredibly irresponsible”, as was done upthread, or that the researchers should be blacklisted for life, as a parallel commenter stated.
The hilarious bit is that the idea that they needed to coordinate is clearly broken even in just this example. They did give prior notice to the Linux developers, who issued a patch. And they’re still getting raked over the coals in this comment page by armchair quarterbacks who have decided they needed to coordinate with specific distros. If they’d coordinated with those distros, somebody would have a pet distro that didn’t make the cut and they’d be pissed about that.
There are risks no matter how they do it, and there will be people who are pissed no matter how they do it. Security researchers don’t owe anybody a specific methodology.
So I feel like the argument reduces into "why is it a problem that now anyone could exploit it, if some people were exploiting it already". Which imho isn't a sensible argument because the issue is clearly the amount of people capable of using the exploit for nefarious purposes, which has increased.
“Because we can’t know if there was exploitation by existing parties who had discovered the vulnerability on their own, there are upsides to disclosing earlier so that affected users can take mitigating steps and review their systems for indicators of compromise. Additionally, the more projects the researchers pull into the loop for coordinated disclosure, the higher the likelihood that they further leak the vulnerability to more attackers.”
However, the issue is that we cannot know whether the attack surface has been broadened or narrowed as a consequence of this disclosure, because of how eager it was. If it hadn't been so eager, we could be much more comfortable suggesting that the attack surface has probably been reduced.
Given that the exploit had been living in the Linux code base undetected for so long, and given that the distributions are the principal attack vector, I think it's fair to say that disclosing the exploit before the distributions were ready made the situation worse, and the researcher should reflect on their actions.
The idea about the available exploit space and how the actors within it might, or might not move is a much more interesting avenue of conversation and I thank you for elaborating on your initial comment. <3
I do, however, feel that it's hard to be confident about whether the attack surface has been increased or reduced as a consequence of the eager disclosure. I feel we could make the case either way.
Anything else is inevitably worse for the public good.
Having spent that entire time and then some on both offensive and defensive teams, I assure you longer delays after notification do NOT decrease the overall risk to the public.
There's a reason we've landed where we have as a security community.
It's an advertisement for an unpatched critical exploit and apparently some kind of infosec company.
And if you disclose to just a handful, why ignore the rest?
Aka a white hat professional which should be a prized function richly rewarded. Do you really want these things to be calcified into a government function?
That is unfortunately not true. I left my last one only a few years ago and they're still going strong without me.
The disclosure is private, meaning neither the commit messages nor any public info can leak too much information about the bug. It's usually kept rather discreet.
It is impractical for the kernel to broadcast to all its users privately.
Meaning that either a) distro maintainers should be privy to it, but where does this end?[1] or b) we have the current situation
[1] probably the top 5 distros security teams can just be copied into the private mail. Maybe the kernel security private list can forward the emails to them as well.
Problem is, every other type of communication between distros and kernel is implicit. In commit messages, patches and release notes. So it's an exceptional case.
BTW, with LLMs there's a new issue. It is now cheap to scan the kernel commit log (maybe even linux-next) and ask a model to identify what could be a patch for a private disclosure, then immediately reverse-engineer the patch and exploit it on deployed kernels.
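The scan-the-log step doesn't even need a model to get started; a crude keyword score over commit subjects already ranks candidates, and an LLM only sharpens the ranking. A minimal sketch (the commit subjects and keyword weights are made up for illustration):

```python
# Crude sketch of the commit-log scanning idea: score commit subjects by how
# strongly they look like a quietly landed security fix. The subjects and the
# keyword weights are invented for illustration.
HINTS = {
    "overflow": 3, "use-after-free": 3, "out-of-bounds": 3,
    "bounds check": 2, "sanitize": 2, "validate": 1, "fix": 1,
}

def score(subject: str) -> int:
    """Sum the weights of every hint keyword present in the subject line."""
    s = subject.lower()
    return sum(w for k, w in HINTS.items() if k in s)

commits = [
    "mm: fix out-of-bounds write in page cache copy path",
    "docs: update maintainer entry",
    "net: validate header length before parsing",
]

# Highest-scoring subjects are the ones worth reverse engineering first.
ranked = sorted(commits, key=score, reverse=True)
print(ranked[0])  # the mm: out-of-bounds fix ranks first
```

An LLM pass over only the top of this ranking is what makes the attack cheap at kernel-commit-volume scale.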
None? Because nobody* does hosting using Linux users as a security boundary. It's not the 90s.
* Standard HN disclaimer for people that think that some retro shell box with 10 users disproves "nobody": nobody does not literally mean exactly 0 people in this context.
Maybe it is irresponsible how little attention we pay to software security. Maybe, software developers of all kind should spend an entire year not developing any features at all, but fix all the tech debt of 30 years instead.
Yes, that sounds revolutionary, but I do not see an alternative in an age where AI agents are all you need to find kernel bugs of this scale.
It's a total arsehole'y move to not share with open-source projects (like Debian) but for commercial vendors like Microsoft I don't give a crap.
Now let's not get carried away either: that's a privilege escalation, so it already requires access to a local account. We're not exactly in Jia Tan "I backdoor every SSH out there if your Linux distro is using systemd" territory either.
Maybe a decade of corporations with revenue in the billions, paying peanuts and coffee money, for critical vulnerability disclosures made it....
Yes, this was clearly a marketing stunt to promote Xint code.
I, for one, will never use Xint code and will advise everyone to never use it. To anyone working there: enjoy your 15 minutes, I hope this backfires right in your face.
External security research happens for one of only a few reasons typically:
1) hobbyists who are learning or just like to do it for fun
2) bug bounties (good luck with those in most open source)
3) marketing for security companies
4) non-public research going to CNO/CNE
If you want to kill 3, the output of 1 will not come close to 4 and the public is NOT better off with fewer public bugs.
It is a really, really bad look for Linux, and pours a bit of cold water on all the hype around switching from Windows.
For single user systems (not rigorously defined, I presume it's the intersection of our two definitions which we might be talking about) the nature of the exploit is local privilege escalation, of which there could be many possible, and many mitigations / countermeasures against. This could have suddenly appeared from the ether of "unknown unknowns" for some people.
Those people farther up the food chain still potentially have service accounts, maybe even user accounts for some purposes, perhaps "trusted" services which deliver them code which they deserialize and run once. (Have a pickle.)
severity * impact * likelihood
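That shorthand is just a multiplicative risk score. Spelled out as code (the 1-5 scales and the example scores are illustrative assumptions, not any standard):

```python
# Toy version of the "severity * impact * likelihood" heuristic.
# The 1-5 scales and the example scores below are illustrative assumptions.
def risk(severity: int, impact: int, likelihood: int) -> int:
    """Each factor on a 1-5 scale; a higher product means higher priority."""
    for v in (severity, impact, likelihood):
        assert 1 <= v <= 5, "each factor must be on a 1-5 scale"
    return severity * impact * likelihood

# An LPE on a multi-user host once a PoC is public scores high on all axes;
# the same bug on a single-user laptop mostly drops in likelihood.
print(risk(5, 4, 5), risk(5, 4, 2))  # 100 40
```

The point of the product form is that any one low factor (say, a likelihood of 1 on a machine with no untrusted local users) drags the whole score down.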
Not everyone looking to migrate from Windows 95 plans to run everything as root afterward.
On the copy.fail site:
Not everybody needs or wants to wait for their distro, or plans to patch their IC firmware when a config change will do. No OS is perfect. The awkward rollout for this bug fix is proof of that.
Said no one ever...present post excluded :-))
I disagree. Exploits should be published as soon as they are written, and found vulnerabilities should be disclosed with as much detail as possible, because even if the researchers cannot write an exploit, someone else could.
- this has the advantage of forcing upgrades as soon as possible. No more “we need to see and schedule patching”
- publishing it as soon as possible makes everyone aware of the threat
- it is a learning experience for everyone
- “responsible disclosure” was invented by lazy companies that have zero interest in fixing a problem quickly
Why would they imply it is incumbent on the reporter to liaise with distributions? That seems to assume a high level of familiarity with the Linux project. Vulnerability reporters shouldn’t be responsible for directly working with every downstream consumer of the Linux kernel; what’s the limiting principle there? Should the reporter also be directly talking to all device manufacturers that use Linux on their machines?
IMO reporter did more than enough by responsibly disclosing it to linux and waiting for a patch to land.
Aren’t there people in the linux project itself with authority over and responsibility for security vulnerabilities? One would think they would be the ones notifying downstream distros…
https://docs.kernel.org/process/security-bugs.html
> As such, the kernel security team strongly recommends that as a reporter of a potential security issue you DO NOT contact the “linux-distros” mailing list UNTIL a fix is accepted by the affected code’s maintainers and you have read the distros wiki page above and you fully understand the requirements that contacting “linux-distros” will impose on you and the kernel community.
The bug is in the kernel, so it's OK to notify only the kernel team. Then they should notify the distributions they are in contact with.
The first message about Copy Fail that I see in the archive https://www.openwall.com/lists/oss-security/2026/04/ is from April 29. I ran apt on my Debian 13 yesterday and got the fixed kernel.
Do I expect that every distribution is already patched? I don't. However, each of us chooses the distribution to run. Security can be one of the criteria for the choice. I played it safe and I'm using Debian. Other people can make a different tradeoff, maybe based on their personal threat analysis.
There are people running end of life kernels and distributions in production, or with pinned old kernels especially on ARM SBCs. I know both. Those are other choices made at the user end of the process.
IMHO the disclosure and fix process was run in the proper way from the researcher to the end user.
Make them private? Now you have a nice stream of zero days, long before fixes are available, making bad actors who made it in filthy rich.
They believe there is no difference between being able to get root and not being able to get root? It seems to me that to-be(-root) and not-to-be(-root) are quite different.
IMO it's pretty obviously not a view that they seriously hold, it's just one of those technical justifications people come up with to avoid admitting something they don't want to admit - in this case that Linux has a poor security track record.
I don't agree with the premise, but I do think it's a sincerely held one.
These are smart people. If it wasn't about their own project I really think they'd have a different point of view. I wonder what they say about Microsoft's security bugs for example!
Linus is the reason why kernel team doesn't talk to distros. For them bugs are bugs, security related or not.
https://lkml.iu.edu/hypermail/linux/kernel/1711.2/01701.html...
Literally never. Why would he? He's surrounded by sycophants. And we have Greg for whenever Linus isn't involved anymore, and Greg is just as boneheaded.
Imposing requirements on the reporter? No.
Everyone involved here failed to do the right thing, and hiding behind the lack of written words is weak sauce.
A security researcher's ethical obligations are to protect users over vendors (barring any contractual agreement in place). From what has been discussed in this thread, they meet that bar.
Sure, they could have gone the extra mile to ensure the distros were in a good place to patch before they published the exploit. That's a kindness you can wish for, but don't disparage them for not going that extra mile. It's a bonus.
It's also possible that it simply didn't occur to them to do so this time. There's certainly lessons to be learned either way. I don't know that the right lessons will emerge from hostility.
and this is the problem. It used to be the case that if you were smart enough to find an exploit you were also smart enough to realise what would happen if you irresponsibly disclosed it. I guess these tools have made that pattern no longer apply.
The skills needed to detect exploits are not the same as the skills needed to navigate an informal org chart to the satisfaction of an amorphous audience of end users (i.e., us on HN).
That said… as they are a company that supposedly specializes in this field, and is trying to sell a product, I do believe they should do better. Right now, I don’t have much confidence in their product.
I see this as an organizational failure of the Linux ecosystem. There should be better communication between distro and kernel development.
yes, because 30 days had passed from the time the patch landed in the kernel, as per industry standard.
approximately every security researcher, including the likes of google and other big names you may know, does a 90+30 disclosure, which is what happened here. they do this for good reason, which has been figured out over decades of experience in reporting thousands and thousands of vulnerabilities.
the only security researchers i know of that dont like 90+30 actually argue for shorter timelines (or immediate disclosures).
Is this just down to luck, a quirk in the timing about when Linus merged the fix versus when the release gets cut?
wait, what?
you are in another comment thread, of this very post, calling these reporters bumbling and incompetent for their disclosure. "merely bumblingly incompetent and overly eager to get their marketing pitch out the door" - that is your quote.
you also said "Basic care would involve making sure the patches had made it into the wild before ending the embargo", which is the literal opposite of immediate disclosure.
but now you are saying they should have just dropped it with no reporting at all? because that is what "immediate disclosure" means. pop up the exploit script on twitter and call it done.
If you're going wait a month between landing the patch (possibly notifying attackers), but not notify the people who may get the patch to users, it seems like something was mishandled.
Good for them. But just because some folks cannot afford 24/7 response teams and on-call personnel that doesn't make them or their systems any less important.
Lots of non-profits and academic institutions had to scramble because of the Linux kernel team's position of non-communication to distros.
Google search: https://share.google/aimode/eihDKXZJy94Z5lC1p
and it's beyond me how one could not think of doing this and instead expose everyone and their neighbor to this exploit up front.
I'm certain this is even a felony in some jurisdictions, rightfully so.
I don't think they would have gotten as much flak if it weren't for how the RHEL 14 mention and such were phrased.
This is a security company with a professional(?) communications department banking on pointing fingers at distro maintainers. We are not talking about solo security researchers or academics here.
At this point this is not really white-hat/ethical hacking anymore.
Ofc the kernel-distro security loophole is stupid and should be patched ASAP, but that doesn't absolve this company of wrongdoing.
It has a domain, it has a logo, they were going for maximum impact because it's their business.
here is a good start: https://projectzero.google/vulnerability-disclosure-faq.html...
there is ~3 decades of more context if you search for it.
If I call 911 to report a fire at an oil storage facility - and they ask me to alert the hospital, then phone the neighboring county's Sheriff Dept., and then...yeah. Either I'm way out in the sticks (and known to/trusted by the 911 operator), or else the 911 service is run by children.
I'd hate to be involved in any emergency services. Too many people have opinions on how things should have been done.
https://www.openwall.com/lists/oss-security/2026/05/01/3
> Nope, sorry, we are NOT allowed to notify anyone about anything "ahead of time" otherwise we will have to tell everyone about everything. That's the only policy by which all the legal/governmental agencies have agreed to allow us to operate in, so we are stuck with it.
greg k-h
This is not 20 years ago; the world in which it made sense doesn't exist anymore, but the industry is slow to move on. Just pick a long-term release and update it regularly.
Distros (point release distros) should use LTS kernels and keep up to date with them. Their "we'll maintain our own kernel branches" model either leads to many missed bugfixes, or duplicates Greg K-H's workload internally, for no practical benefit.
If a distro is suspicious of particular patches in the -stable tree, they could maintain a blacklist of them. However, instead of doing that and accruing overhead of possible future merge conflicts, they should hash out their concerns on the -stable mailing list.
I am running this in production right now and it mitigates the attack, with no unexpected side-effects as far as I can see.
> Nope, sorry, we are NOT allowed to notify anyone about anything "ahead of time" otherwise we will have to tell everyone about everything. That's the only policy by which all the legal/governmental agencies have agreed to allow us to operate in, so we are stuck with it.
I'd be interested in knowing more about that policy... Seems that there should be exceptions for the major distros.
Of course, major distros who have contracts with SLA could also pay for someone to be on the kernel security team and get a heads up like that..
And try to define "major distros" in a way that actually means anything viable.
If you just want to count users, then that would only be Android (everything else is a rounding error.) After Android, that would be Yocto, and then Debian. All distros after that are mere fractions of overall users compared to those 3 by number of running systems alone.
If you want to count it as "$ spent on Linux" then that cuts out Android and Yocto and Debian as those distros are free, and would focus purely on the tiny installed base of paid Linux systems, and cut everyone else out.
So what is a fair way to do this other than "we notify no one, and tell everyone to always update their systems to the latest stable releases that we support."
Especially as there is no way for us to determine your use case (i.e. if a specific bug is a vulnerability for you or not.)
About that "That's the only policy by which all the legal/governmental agencies have agreed to allow us to operate in, so we are stuck with it.", you mean that if you disclose selectively, then you become liable for damages? or was it a more direct conversation with legal/governmental agencies?
And for a bug like this, what is the policy for backporting patches to LTS branches? It was corrected in mainline on April 1st but only backported after the public disclosure. Do you delay backporting to minimise attention on the security issue?
I guess that having a patch for that land on all the LTS branch would signal to any would be attacker that it's a significant security issue...
Sorry for all the questions but I'm genuinely interested.
EDIT: Just read your blog post at http://www.kroah.com/log/blog/2026/01/02/linux-kernel-securi... which does answer a lot of my questions...
Edit: for context, I work in embedded and the aarch64 version (PR #42 in the repo) has successfully popped every device I've tried it against except one where I have a custom kernel to work around a driver issue and (looking back at my git logs) accidentally forgot to enable the user-mode API for alg_aead specifically. Lucky mistake.
Given the potential impact a severe security issue in the kernel (like this one), it seems that the only process that is acceptable for government agencies of various countries (that deal with intelligence and national security) is to either keep secrets from everyone, or disclose them to everyone.
Otherwise, the entities on the priority disclosure list would basically have free access to zero day vulnerabilities. Then every country with a national intelligence agency would invent a distro and try to squeeze themselves onto that list, and things would become very political and ugly if the agents of any country can't get into that list...
Letting SUID binaries just "exist" anywhere is a stupendous security issue. If you mount some external storage medium, how are you to verify that none of the SUID binaries on that block device are malicious?
Additionally, this exploit appears to only work if the user executing the SUID binary can also read the SUID binary. There's no reason for non-root users to have read on a SUID binary.
NixOS does this correctly: no SUID in the normal package installation directory `/nix/store`, and with no package leakage outside of that, `nosuid` can safely be used on all other mountpoints. The exception is a single-purpose `/run/wrappers.$hash` directory that safely contains executable-only SUID wrappers.
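The no-read-on-SUID hardening mentioned above is easy to audit for. A minimal sketch (assuming a POSIX filesystem; the directory you point it at, e.g. /usr/bin, is up to you):

```python
# Sketch: find regular files that are both setuid and world-readable,
# i.e. candidates for `chmod o-r`. Does not follow symlinks.
import os
import stat

def readable_setuid(root: str) -> list[str]:
    """Walk `root` and return paths of world-readable setuid regular files."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # vanished or inaccessible entry; skip it
            # S_ISUID = setuid bit, S_IROTH = read permission for "other"
            if stat.S_ISREG(mode) and (mode & stat.S_ISUID) and (mode & stat.S_IROTH):
                hits.append(path)
    return hits
```

On a stock distro this will typically report sudo, passwd, and friends, since they usually ship as mode 4755; `chmod o-r` on each is the corresponding fix.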
The bug that is being exploited gives you basically arbitrary page cache poisoning. At that point it's already game over. Patching a suid program is maybe the easiest way to get a root shell from that but far from the only.
With this vulnerability you can manipulate the page cache. You could also manipulate ld.so to hook into arbitrary system calls, set your uid to 0, or use any of another dozen or so ways to elevate your privileges.
Mount points have nothing to do with this, even if it is always a good idea to disallow suid in user-writable areas and prevent reading suid files; but that's for other reasons. NixOS does nothing to fix this and is just as vulnerable as everyone else.
To execute the binary it needs to be read from disk and loaded into memory.
In fact, if you have read permission but not execute permission on a specific binary, you can still execute it by calling the dynamic linker directly: `/bin/ld.so.1 /path/to/binary` (the linker will read and load the binary and then jump to the entry point without an exec() call).
This is not correct, as when the binary is setuid-someone-else, you are not the one executing it; they are.
Removing world-readability from all setuid-root binaries on the system would be sufficient to kill the PoC script provided for this vulnerability. It would not be sufficient to prevent exploitation though; there are many ways to abuse the ability to write to files you have read access to in order to gain root, for example by using the vulnerability to alter the cached copy of a file in /etc/sudoers.d/, or overwrite /etc/passwd, or /etc/crontab, ... the list goes on.
https://www.bleepingcomputer.com/news/security/new-linux-cop...
https://github.com/theori-io/copy-fail-CVE-2026-31431/issues...
https://github.com/theori-io/copy-fail-CVE-2026-31431/issues...
Basically: sudo grubby --update-kernel=ALL --args=initcall_blacklist=algif_aead_init
sudo reboot
No one serious should consider the kernel (on its own) a good security boundary (and basic containers don’t count either).
For example:
It is responsible for all ethernet and IP layer operations, and handles TCP, etc. (By default, there's ways to move these things into userland, but in 99% of cases, programs open a socket, and exchange buffers with the kernel - the kernel does the rest).
You're telling me that the kernel should not be a security boundary against malformed packets? That it's no big deal if some malformed packet can crash a machine, cause remote code execution, or cause the system to perform poorly (all real things that have been possible in most tcp/ip stacks, including linux)?
Hell (ip|nf)tables firewalling is a security boundary, and it is implemented in the kernel. If you configure a rule and the kernel code handling that rule has a bug that allows bad traffic - isn't this a case of the kernel literally advertising itself as a security boundary and failing?
This is where some genius usually steps in and says "well that's why you use a hardware firewall hurr durr" - but those are just boxes running a TCP/IP stack with help from some chips that can assist the operations. Problems there:
* The hardware or firmware or even the linux kernel running on such boxes may also have bugs, letting bad traffic through to the hosts/servers.
* There are categories of network stack bugs with packets that look like good traffic that can still be exploited on the host's kernel.
(In other words, defense in depth requires the kernel to also be the best security boundary it can be.)
At the same time, you should use application-layer proxies (e.g. HTTP) that have little or no privilege on your system, with nothing else running and as restricted as possible; don't expose more general hosts to direct raw IP traffic from the internet.
The researcher’s job is to surface information. The kernel team’s job, in this architecture, is to patch. The distros’ job is to track. The operator’s job is to pick a distro whose tracking matches the threat model.
When a 30-day disclosure catches you out, the question isn’t who failed. It’s which point on the cost curve your distro choice put you on, and whether that was the point you thought you were on.
https://discourse.nixos.org/t/is-nixos-affected-by-copy-fail...
What's interesting is that their website is also down right now. These seem like specially timed DDoS attacks so maintainers cannot communicate about the issue.
Nobody is DDoSing anything to cover it up.
Copy Fail
https://news.ycombinator.com/item?id=47952181
The distros don't get any involvement until release; welcome to the suck.
Which very clearly results in "bugfixes" (security patches) not making it everywhere in time, because it's simply ridiculous to ask each downstream consumer to rate the severity of everything on their own. It's easy to shit on CVEs (some even put out shit CVEs), but the critics contribute absolutely nothing towards providing a better alternative.
It's quite certain that both the Linux project and the Linux CNA need to take some responsibility and put some effort into communication and into making triage easier.
The solution is not to tell more people that patch xxxxxx is a critical security bugfix that needs distros to roll new kernel versions immediately.
Major vendors (all the cloud providers) will have security teams that can have the bug mitigated in a few minutes once they're notified.
For everyone else...
Part of the solution is that distros need to stop believing that their distro kernel branches are any better than linux-stable, and use linux-stable and engage with the linux-stable list and patchsets if they're concerned about what's going into them.
Part of the solution is that each distro needs a process for pushing critical updates (module blacklists, eBPF patches) to address things like this without forcing all distro users to reboot, which many won't do promptly anyway.
- Debian
- Ubuntu
- Arch
- Amazon/Azure
- Fedora/RHEL
it's the same disclosure policy as google's project zero, and several other major players, so you should probably be trying to ping a lot more people
reporters should not be responsible for finding out and individually reporting to every downstream consumer. blame the kernel security team, who is in a much better position to coordinate notifications to individual distro security teams.
the disclosure itself followed a normal timeline, which you can view at the bottom of their blog post.
So it seems it's not fatal on all unpatched systems.
However, the module not being loaded does suggest that in normal operation you don't need it, so the proposed mitigation of disabling the module is safe in the sense that it won't disrupt anything.
I tried rmmod on all servers and it always returns `ERROR: Module algif_aead is not currently loaded`, which is why I think they're fine. Of course, I'm keeping an eye on https://security-tracker.debian.org/tracker/CVE-2026-31431 for updates.
Well, for one thing, opening an AF_ALG socket, as the exploit does.
But only Trixie (and testing/Sid) are patched (as I type this).
On Bookworm (and Bullseye), you want to add the module to the list of blocked modules. It's a one-line change.
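Concretely, that one line is a modprobe.d drop-in. The file name below is just an example (any file under /etc/modprobe.d/ works); note that an `install ... /bin/false` line is stronger than `blacklist` alone, because it also stops alias-triggered autoloading such as the AF_ALG path:

```
# /etc/modprobe.d/blacklist-algif-aead.conf  (file name is an example)
install algif_aead /bin/false
```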