Not much we didn't know (you're basically SOL once an owner is compromised); however, we now have a small peek into the actual meat of the social engineering, which is the only interesting news imho: https://github.com/axios/axios/issues/10636#issuecomment-418...
jasonsaayman and voxpelli had useful write-ups from the "head on a swivel" perspective of what to watch out for. Jason mentioned that "the meeting said something on my system was out of date"; they were using a Microsoft meeting client, and that's how they got RCE. Would love more color on that.
An owner being compromised is absolutely survivable on a responsibly run FOSS project with proper commit/review/push signing.
This and every other recent supply chain attack was completely preventable.
So much so I am very comfortable victim blaming at this point.
This is absolutely on the Axios team.
Go set up some smartcards for signing git commits and pushes, publish those keys widely, and mandate signed merge commits so nothing lands on main without two maintainer signatures and there's no single point of failure.
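For concreteness, a minimal sketch of what turning on commit/tag signing looks like. This assumes the GPG key has already been generated on a smartcard and imported into gpg; the repo name is just an example and the key id line is a placeholder:

```shell
# Sketch: enable signing by default in a fresh repo. Assumes a GPG key
# already generated on a smartcard (e.g. a YubiKey) and known to gpg.
git init --quiet signed-repo && cd signed-repo
git config commit.gpgsign true      # sign every commit by default
git config tag.gpgsign true         # sign every tag by default
# git config user.signingkey KEYID  # KEYID is a placeholder for your key id
git config commit.gpgsign           # prints: true
```

Server-side enforcement (rejecting unsigned pushes or merges) is a separate hosting-platform setting; the above only covers the client side.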
Did you investigate the maintainer compromise and publication path? The malicious version was never committed or pushed via git. The maintainer signs his commits, and v1 releases were using OIDC and provenance attestations. The malicious package versions were published locally using the npm cli after the maintainer's machine was compromised via a RAT; there's no way for package maintainers to disable/forbid local publication on npmjs.
It seems the Axios team was largely practicing what you're preaching. To the extent they aren't: it still wouldn't have prevented this compromise.
I cannot find a single signed recent commit on the axios repo; it is totally YOLO mode. Those "signed by GitHub" signatures are meaningless. I stand by my comment in full.
One must sign commits *universally*, *also* sign reviews/merges (multi-party), and then *also* do multi-party signing on releases. Doing only one step of basic supply-chain security unfortunately buys you about as much defense as locking only a single door.
I do, however, assign significant blame to the NPM team for repeatedly refusing optional package-signing support, which would let packages with signing enabled be refused at the server and at the client if unsigned by a quorum of pinned keys. Even aside from that, if packages were signed manually, canary tools could have detected this immediately.
It wasn’t done through git. It was a direct npm publish from the compromised machine. If you read further down in the comments (https://github.com/axios/axios/issues/10636#issuecomment-418...), it seems difficult to pick the right npm settings to prevent this attack.
If I understand it correctly, your suggestions wouldn’t have prevented it, which is evidence that this is not as trivially fixable as you believe it is.
The interesting detail from this thread is that every legitimate v1 release had OIDC provenance attestations and the malicious one didn't, but nobody checks. Even simpler, if you're diffing your lockfile between deploys, a brand new dependency appearing in a patch release is a pretty obvious red flag.
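A sketch of that lockfile red-flag check, assuming npm lockfileVersion 2/3 where each installed package appears as a "node_modules/<name>" key; `old.json` and `new.json` here are stand-in files for the lockfile before and after the suspect release:

```shell
# Stand-in lockfiles for the "before" and "after" states of a deploy.
cat > old.json <<'EOF'
{ "packages": { "": {}, "node_modules/axios": {} } }
EOF
cat > new.json <<'EOF'
{ "packages": { "": {}, "node_modules/axios": {}, "node_modules/evil-dep": {} } }
EOF
# Extract the installed package keys from each lockfile and compare them.
grep -o '"node_modules/[^"]*"' old.json | sort -u > old-deps.txt
grep -o '"node_modules/[^"]*"' new.json | sort -u > new-deps.txt
# Lines only in the new lockfile: brand-new dependencies worth scrutiny.
comm -13 old-deps.txt new-deps.txt
```

In a real pipeline you would run the same comparison against the lockfile from the previous deploy (e.g. via `git diff`) rather than hand-written files.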
npm could solve half of this by letting packages opt into OIDC-only publishing at the registry level. v1 already had provenance attestations but the registry happily accepted the malicious publish without them.
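One check that does exist today, for the consumer side: run inside a project with an installed dependency tree, `npm audit signatures` verifies registry signatures and any provenance attestations for the installed packages, so a release that suddenly lacks the attestations its predecessors carried can be surfaced (it needs network access to the registry, and it only helps if someone actually runs it):

```shell
# Run inside a project with node_modules installed; contacts the registry
# to verify signatures and provenance attestations for each dependency.
npm audit signatures
```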
Looks like a very sophisticated operation, and I feel for the maintainer who had his machine compromised.
The next incarnation of this, I worry, is that the malware hibernates somehow (e.g., if (Date.now() < 1776188434046) { exit(); }) to maximize the damage.
Has any good payload analysis been published yet? Really curious whether this was just a one-and-done info stealer or whether it could potentially have clawed its way deeper into affected systems.
This article[0] investigated the payload. It's a RAT, so it's capable of executing whatever shell commands it receives, instead of just stealing credentials.
That's the reality of modern war. Many countries are likely planting malware on a wide scale. You can't even really prove where an attack originated, so uninvolved countries would also be smart to take advantage of the current conflict. For example, if your operators primarily write German, you would translate your malware to Chinese, Farsi, English, or Hebrew, and take other steps to make it appear to come from one of those warring countries. Any country making a long-term plan involving malware would likely do it around this time.
NPM is designed to let you run untrusted code on your machine. It will never work. There is no game to step up. It's like asking an ostrich to start flying.
It’s far from a complete solution, but to mitigate this specific avenue of supply chain compromise, couldn’t GitHub/npm issue single-purpose physical hardware tokens and allow projects (or even, for the most popular ones, mandate) that maintainers use these hardware tokens as a form of 2FA?
The attacker installed a RAT on the contributor’s machine, so if they had configured TOTP or saved the recovery codes anywhere on that machine, the attacker could defeat 2FA.
Oh, yes, I missed that the TOTP machine was compromised. Would that then imply it would have been okay if codes came from a separate device, e.g. a TOTP app on a Palm OS device with zero network connectivity? (Or maybe these days the easiest airgapped option is an old Android phone that stays in airplane mode...)
All maintainers need to do is code signing. This is a solved problem, but the NPM team has been actively rejecting optional signing support for over a decade now. Even so, maintainers could sign their commits anyway, but most are too lazy to spend the few minutes it takes to prevent themselves from being impersonated.
I ask this on every supply chain security fail: Can we please mandate signing packages? Or at least commits?
NPM rejected PRs to support optional signing multiple times, more than a decade ago now, and this choice has not aged well.
Anyone that cannot take 5 minutes to set up commit signing with a $40 USB smartcard to prevent impersonation has absolutely no business writing widely depended-upon FOSS software.
Anyone that maintains code for others to consume has a basic obligation to do the bare minimum to make sure their reputations are not hijacked by bad actors.
Just sign commits and reviews. It is so easy to stop these attacks that not doing so is like a doctor that refuses to wash their hands between patients.
If you are not going to wash your hands do not be a doctor.
If you are not going to sign your code do not be a FOSS maintainer.
Perhaps, but if it's gotten to the point where millions of people download the unsigned code, signing should probably become required, along with reproducible builds.
Required by who though? If your business etc depends upon some code, it's up to you to ensure its quality, surely? You copy some code onto your machine then it's your codebase, right?
While I think anyone unwilling to sign their code is negligent, I also feel anyone unwilling to ensure credible review of code has been done before pushing it to production is equally negligent.
No. axios (v1 at least; not v0) was set up to publish via OIDC, but there's no option on npmjs for package maintainers to restrict their package to *only* OIDC publishing. The maintainer says his machine was infected via a RAT, so if he was using software-based 2FA, nothing could have prevented this.
[0]: https://safedep.io/axios-npm-supply-chain-compromise/
I feel like npm specifically needs to up their game on static analysis of malicious code embedded in public packages.
Edit: wait, did the attacker intercept the TOTP code as it was entered? Trying to make sense of the thread.
Normalized negligence is still negligence.
Interesting it got caught when it did.