> 2025-08-11 NVIDIA reiterated the request to postpone disclosure until mid-January 2026.
> 2025-08-12 Quarkslab replied that the bugs were first reported on June 18th, that mid-January was well past the standard 90 days normally agreed for coordinated disclosure, and that we did not see a rationale for postponing publication by, at a minimum, 3 months. Therefore Quarkslab continued with the publication deadline set to September 23rd 2025 and offered to extend the deadline an additional 30 days provided NVIDIA gave us some insight into the full scope of affected products and whether the fixes are to be released as a standalone security fix, as opposed to rolled into a version bump that includes other code changes.
Richest corporation in the world needs 7 months to remedy? Why not 4 years?
> Back in 2022, NVIDIA started distributing the Linux Open GPU Kernel Modules. Since 2024, using these modules is officially "the right move" for both consumer and server hardware. The driver provides multiple kernel modules, the bugs being found in nvidia.ko and nvidia-uvm.ko. They expose ioctls on device files, most of them being accessible to unprivileged users. These ioctls are meant to be used by NVIDIA's proprietary userland binaries and libraries. However, using the header files provided in the kernel modules repository as a basis, it's possible to make direct ioctl calls.
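Concretely, "direct ioctl calls" means something like the sketch below: open one of the driver's device nodes and issue the ioctl yourself instead of going through the proprietary userland. The device path is real; the request number and parameter buffer are placeholders standing in for the escape codes and structs defined in the repository's headers:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    /* control node exposed by nvidia.ko; per-GPU nodes are /dev/nvidia0, /dev/nvidia1, ... */
    int fd = open("/dev/nvidiactl", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char params[64] = {0};   /* placeholder for the real parameter struct */

    /* Hypothetical request number, for illustration only; the real escape codes
       come from the headers in the open-gpu-kernel-modules repository. */
    if (ioctl(fd, 0xC040462AUL, params) < 0)
        perror("ioctl");

    close(fd);
    return 0;
}

The point being made is that most of these ioctls are reachable by unprivileged users, so nothing in the proprietary userland acts as a security boundary.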
If only there were some way to release the source code for your userland programs so that the computing public could look at the code, then offer a fix for a bug such as this.
Unfortunately, so far as I'm aware, there is no way to do this, and having a few people who are working against what has to be a large number of deadlines look at extremely low-level code for very sophisticated software is the only way forward for these things.
"No way to prevent this" say programmers of only languages where this regularly happens.
This only happens if you have the worst version of Tony Hoare's billion-dollar mistake. So C, C++, Zig, Odin and so on, but not Rust.
It's a use-after-free, a category of mistake that's impossible in true GC languages, and also impossible in safe Rust. We have known, for many years, how to not have this problem, but some people who ought to know better insist they can't stop it, exactly like America's gun violence.
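For anyone who hasn't seen the category spelled out, here is a use-after-free in miniature; ordinary C, nothing to do with the driver code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *name = malloc(16);
    if (!name) return 1;
    strcpy(name, "gpu0");
    free(name);              /* the object's lifetime ends here */
    printf("%s\n", name);    /* use-after-free: undefined behaviour */
    return 0;
}

A GC keeps name alive for as long as it is reachable, and safe Rust rejects the equivalent program at compile time.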
It’s semantics. Zig can still have dangling references/use-after-free. You can do something like ‘var foo: *Bar = @intToPtr(*Bar, 0x00)’ but in order to “properly” use the zero address to represent state you have to use ‘var foo: ?*Bar = null’, which is a different type than ‘*Bar’ that the compiler will force you to check before accessing.
It’s the whole “make it easy to write good code, not impossible to write incorrect code” philosophy of the language.
Judging from the article, Zig would have prevented the CVE.
> This includes memory allocations of type NV01_MEMORY_DEVICELESS which are not associated with any device and therefore have the pGpu field of their corresponding MEMORY_DESCRIPTOR structure set to null
This does look like the type of null deref that Zig does prevent.
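Schematically, the pattern being described looks something like this in C; only pGpu, MEMORY_DESCRIPTOR and NV01_MEMORY_DEVICELESS come from the quote above, the rest is invented for illustration:

/* Sketch only; names other than pGpu / MEMORY_DESCRIPTOR are invented. */
struct OBJGPU { int gpuInstance; };      /* stand-in for the real GPU object */

typedef struct {
    struct OBJGPU *pGpu;                 /* NULL for NV01_MEMORY_DEVICELESS allocations */
    /* ... */
} MEMORY_DESCRIPTOR;

/* Nothing in C's type system distinguishes "pointer" from "possibly-NULL pointer",
   so this unchecked access compiles cleanly and faults at runtime when pGpu is NULL. */
int memdesc_gpu_instance(const MEMORY_DESCRIPTOR *pMemDesc)
{
    return pMemDesc->pGpu->gpuInstance;
}

In Zig the field would be declared ?*OBJGPU and the compiler would refuse the bare dereference until the null case is handled.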
Looking at the second issue in the chain, I believe standard Zig would have prevented that as well.
The C code had an error that caused the call to free to be skipped:
threadStateInit(&threadState, THREAD_STATE_FLAGS_NONE);
status = rmapiMapWithSecInfo(/*…*/); // null deref here
threadStateFree(&threadState, THREAD_STATE_FLAGS_NONE);
Zig’s use of ‘defer’ would ensure that free is called even if an error occurred:
threadStateInit(&threadState, THREAD_STATE_FLAGS_NONE);
defer threadStateFree(&threadState, THREAD_STATE_FLAGS_NONE);
status = try rmapiMapWithSecInfo(/*…*/); // null deref here
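For comparison, the usual way C code gets the same guarantee is the goto-cleanup idiom. A sketch using the same function names as above (NV_STATUS and NV_OK are assumed names here, and the elided arguments stay elided):

NV_STATUS status;

threadStateInit(&threadState, THREAD_STATE_FLAGS_NONE);

status = rmapiMapWithSecInfo(/*…*/);
if (status != NV_OK)
    goto done;                 // every early exit has to remember to route through the label

/* … anything else that can fail goes through the same label … */

done:
    threadStateFree(&threadState, THREAD_STATE_FLAGS_NONE);
    return status;

The idiom works, but unlike defer it only works if every exit path remembers to use it, which is exactly the kind of omission that lets the free get skipped.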
Nothing can prevent a sufficiently belligerent programmer from writing bad code. Not even Rust—which I assume you’re advocating for without reading the greater context of this thread.
> If only there were some way to release the source code for your userland programs so that the computing public could look at the code, then offer a fix for a bug such as this.
These bugs are in the already open-sourced kernel modules; the userland components are largely irrelevant as long as an attacker can just invoke the affected ioctl directly.
Counterargument: security by obscurity does work. The common strawman is that it doesn't, but that only holds when obscurity is your only defence.
See Spectre and Meltdown: if they were easy to exploit, unpatched machines would all get pwned just by running the Windows installer, the way Windows XP machines used to back in the day.
If your exploit requires lots of disassembling, decrypting random ad-hoc custom crypto, and even finding what you're looking for in some random 100MB .dll, it just isn't very likely to be found except by the nation-state guys. The signal-to-noise ratio is a wonderful thing. It's much easier to hide something amongst very mundane things (most secrets are boring) than to heavily guard something and advertise "SECRETS ARE HERE". There are quite a few examples of this in various programs and web services; you obviously don't know because you didn't find it!
Heh, good point, but it isn't really true when you invert it :) If you randomly search for stuff, you're very unlikely to find anything; only if you know what you're searching for do you find something...
You're not wrong, but I think it's sort of irrelevant. Rust is cool, but from my understanding a graphics card driver is almost an entire OS in itself. I don't think Nvidia is writing a new driver for each GPU; I think they're using a core driver codebase and making relevant modifications for each card.
My point is that I suspect that the Nvidia driver is a decades-long project, and dropping everything and rewriting in Rust isn't really realistic.
> Richest corporation in the world needs 7 months to remedy? Why not 4 years?
At least until the SEC starts punishing revenue inflation through self-dealing.
Microsoft might hold a patent on this.
> If only there were some way to release the source code for your userland programs so that the computing public could look at the code, then offer a fix for a bug such as this.
"No way to prevent this" says proprietary codebases where this always happens
"No way to prevent this" say programmers of only languages where this regularly happens.
This only happens if you have the worst version of Tony's Billion Dollar Mistake. So C, C++, Zig, Odin and so on but not Rust.
It's a use-after-free, a category of mistake that's impossible in true GC languages, and also impossible in safe Rust. We have known, for many years, how to not have this problem, but some people who ought to know better insist they can't stop it, exactly like America's gun violence.
What is my_ptr->member but unwrapping an optionally null pointer?
> Zig’s use of ‘defer’ would ensure that free is called even if an error occurred
Followed by never touching the variable ever again.
"'No Way to Prevent This,' Says Only Nation Where This Regularly Happens"