After an incident as widely publicized as Axios, I'd expect dependency auditing, credential rotation, and public incident communication to all be carried out with much more urgency. And if they were going to send this out to all of their users (as they should), I would expect _that_ to happen shortly after publishing the post (why wait 11 days???).
I don't blame you; it took me a while to find the date.
You could equally say that using fetch means that the developers don't know how to use axios.
They do the same thing, except axios does it a little better (when it doesn't pwn you).
Axios predates the availability of fetch in Node by two years, and fetch has never caught up with axios, so there was no reason to switch to fetch. Unless you need to run on both client and server, in which case of course you use fetch.
Fetch is one of those things I keep trying to use, but then sorely regret doing so because it's a bit rubbish.
You're probably reinventing axios functionality, badly, in your code.
It's especially useful when you want consistent behaviour across a large codebase, say you want to detect 401s from your API and redirect to a login page. But you don't want to write that on every page.
Now you can resort to monkey-patching shenanigans, or make your own version of fetch like myCompanyFetch and enforce its use in your linter, or some other rubbish solution.
Or you can just use axios and an interceptor. Clean, elegant.
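To make the idea concrete, here is a minimal sketch of the interceptor pattern in plain JavaScript. The names (`createClient`, `addResponseInterceptor`) and the mocked fetch are hypothetical, invented for illustration; this is not the axios API itself, just the shape of the pattern it gives you for free.

```javascript
// Sketch of the interceptor idea: every response passes through
// registered interceptor functions before the caller ever sees it.
function createClient(fetchImpl) {
  const interceptors = [];
  return {
    // Register a function that inspects/transforms every response.
    addResponseInterceptor(fn) { interceptors.push(fn); },
    async get(url) {
      let response = await fetchImpl(url);
      for (const fn of interceptors) response = await fn(response);
      return response;
    },
  };
}

// Example: detect 401s in ONE place instead of on every page.
const events = [];
// Mocked fetch so the sketch runs without a network.
const fakeFetch = async (url) => ({ status: url.includes("private") ? 401 : 200, url });

const client = createClient(fakeFetch);
client.addResponseInterceptor((res) => {
  if (res.status === 401) events.push("redirect-to-login");
  return res;
});
```

With axios itself this collapses to registering a handler once per app via `axios.interceptors.response.use(...)`, and every request made through that instance gets the behaviour.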
And every project gets to a size where you need that functionality, or it was a toy and who cares what you use.
Axios is something where you get most of that work done for you by the community for free, and a lot of people know it. As long as you don't get pwned because of it. Oh, and you will actually find community packages that integrate with it, versus ourFetch, which, again, nobody knows about or cares exists.
This applies to web frameworks, databases, and other kinds of software and dependencies: if you work with brilliant people, you might succeed rolling your own, but for most teams taking something battle-tested off the shelf is a pretty sane way to go about it.
In this case it’s a relatively small dependency so it’s not the end of the world, but it’s the exact same principle.
An alternative world-view is: "A little copying is better than a little dependency," from https://go-proverbs.github.io
It does become subjective what counts as "small" and "little", though.
I think the ideal model would be depending on upstream code while being able to review ALL of the actual code changes when pulling in new dependency versions (with a nice UI), and being able to vendor things and branch off with a single command whenever you need to. That way you don't have to maintain it yourself by default, but it's trivial to take over when you want to.
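The review half of that workflow is actually possible with npm today, if without the nice UI. As a sketch (the versions below are illustrative, chosen to match the ones discussed in this incident):

```
# Show the actual code changes between two published versions
# of a package before accepting the upgrade (npm 7+):
npm diff --diff=axios@1.14.0 --diff=axios@1.14.1

# Vendor a copy if you want to branch off and maintain it yourself:
npm pack axios@1.14.0   # produces a tarball you can unpack and commit
```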
It's actually surprising that the whole shadcn approach hasn't gotten more popular in front-end development, or anywhere else for that matter: focusing on making code much easier to maintain and to compile/deploy, with less complexity along the way.
It's the difference between using a SQL library and some person on your team writing their own SQL library and everyone having to use it. There's a vast gulf between the two, professionally speaking.
People dissing axios probably suffer from other NIH problems too.
https://github.com/sindresorhus/ky
From the readme:
- Simpler API
- Method shortcuts (ky.post())
- Treats non-2xx status codes as errors (after redirects)
- Retries failed requests
- JSON option
- Timeout support
- Upload and download progress
- Base URL option
- Instances with custom defaults
- Hooks
- Response validation with Standard Schema (Zod, Valibot, etc.)
- TypeScript niceties (e.g., .json() supports generics and defaults to unknown, not any)
Of course, this is only for projects where I have to make a lot of HTTP requests to a lot of different places where these niceties make sense. In most cases, we're usually using a library generated from an OpenAPI specification and fall back to `fetch` only as an escape hatch.
That's a pretty big asterisk though. Taking on a supply chain risk in exchange for reducing developer friction is not worth it in a lot of situations. Every dependency you take increases your risk of getting pwned (especially when it pulls in its own dependencies), and you seriously need to consider whether it's worth that before you install it.
Don't get me wrong, sometimes it is; I'm certainly not going to create my own web framework from scratch, but a web request helper? Maybe not so much.
(Source: have built out much more scuffed variants of this than the one I just described like https://github.com/boehs/ajar)
I guess an LLM can do it just as well. Although that's not something I'm quite ready to admit.
For reference: https://github.com/sampullman/fetch-api/blob/main/lib/fetchA...
What's the reason to switch to something less stable, short or long term? Because it's older, and newer code is always better?
I am totally with you on axios; but why is express shocking, and what do you expect to see in its place? Fastify? Hono? Node:http?
What did I just read?
That GitHub action used to sign their Mac apps.
So they assume the certificate used to sign is compromised.
The risk is not to the existing app, but theoretically someone could hand you a copy of a malicious OpenAI binary, signed with the compromised certificate, impersonating OpenAI. Unlikely, but not impossible.
> At that time, a GitHub Actions workflow we use in the macOS app-signing process downloaded and executed a malicious version of Axios (version 1.14.1)
So if I understand this correctly, their GitHub Actions workflow was free to upgrade the package just like that? Is this normal practice, or is it just shifting blame?
They mention it toward the end:
> The root cause of this incident was a misconfiguration in the GitHub Actions workflow, which we have addressed. Specifically, the action in question used a floating tag, as opposed to a specific commit hash, and did not have a configured minimumReleaseAge for new packages.
Some preventive actions everyone should take:
1. Pin GitHub Actions to SHAs. GitHub sadly doesn't enforce immutable tags (it's opt-in only), but commits are practically non-repeatable, barring a SHA collision. Also, even if an action isn't compromised directly, its latest version might be using a compromised dependency, so by taking the latest of one you get the latest (compromised) of the other. Pinning just prevents auto-updates like this.
2. Use npm ci instead of npm install (the first installs strictly from the lock file; the other may take the latest, depending on how your package.json defines dependencies).
3. Set a minimum release age (e.g. min-release-age=7 in .npmrc). Most recent supply-chain attacks were removed from npm within hours, so adding a delay is similar to a live broadcast "bleep" delay.
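Points 1 and 2 together look roughly like this in a workflow file. The SHA is deliberately a placeholder, not a real pin; resolve the tag you trust to its full commit hash yourself:

```
# .github/workflows/ci.yml (sketch)
steps:
  # Pin to a full commit SHA, not a floating tag like @v4:
  - uses: actions/checkout@<full-commit-sha>  # resolve the tag to a commit and pin it
  - run: npm ci   # install exactly what package-lock.json says, nothing newer
```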
It has become so critical and ubiquitous that it has become a huge target for attackers.
Side note. I'm sure many of you know this, but for those who don't, setting min-release-age=7 in .npmrc (needs npm 11.10+), would have made the malicious axios (@1.14.1 and @0.30.4) invisible to npm install (removed within ~3h). Same for ua-parser-js (caught within hours) and node-ipc (caught in days). It wouldn't have prevented event-stream (over 2 months), but you can't win them all.
Some examples (hat tip to [2]):
  # ~/.config/uv/uv.toml
  exclude-newer = "7 days"

  # ~/.npmrc
  min-release-age=7 # days

  # ~/Library/Preferences/pnpm/rc
  minimum-release-age=10080 # minutes

  # ~/.bunfig.toml
  [install]
  minimumReleaseAge = 604800 # seconds
p.s. Sorry for the plug, but we released a free tool ([3]) to gather all these settings, plus a CLI to auto-configure them. You can set these settings without it, but if you're confused (like me) about what's in minutes, what's in seconds, what's in days, and where each of them lives, it might save you a few keystrokes/prompts (it also ensures you have the right minimum version of the package manager; otherwise you'll have the settings but they'll be ignored).
[0] https://nodejs.org/en/blog/announcements/v18-release-announc...
[1] https://nodejs.org/en/blog/release/v21.0.0
Didn't mean it as an ad btw, the supply chain risk is real though. Axios could be the best HTTP library ever written and it still would've dropped a RAT on your laptop on March 31 without min-release-age set.
Deps should be updated when you need some features or bugfixes from the new versions, not just when Dependabot prompts you to do it.
I see value in Dependabot and tools like it only as a check that your module still passes CI with upgraded dependencies (and if not, then it's worth looking at the failure, to be prepared for the upgrade in the future).