Also, are automated version bumps really such a good thing? Many times I have wasted hours tracking down a bug that was introduced by bumping a library. Sometimes only the patch version of the library is different, so it shouldn't break anything... but it does! It is so much better to update intentionally, test, and deploy. Though this does assume you have a modest number of dependencies, which pretty much excludes any kind of server-side JavaScript project.
(The larger problem here isn’t even Dependabot per se, since all Dependabot does is fire PRs off. The problem is that people then try to automate the merging of those PRs, and end up shooting themselves in the foot with GHA’s more general footguns. It also doesn’t help that, until recently, GitHub’s documentation recommended using these kinds of dangerous triggers for automating Dependabot.)
Really? Dependabot runs on a number of my repositories without my having consciously enabled it.
I've never experienced this. Do you have a `.github/dependabot.yml` file in your repository? That's how it's enabled.
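For reference, even a minimal file is enough to switch version updates on:

```yaml
# Minimal .github/dependabot.yml; version updates stay off until a file
# like this lands on the default branch.
version: 2
updates:
  - package-ecosystem: "npm"   # or "pip", "gomod", "github-actions", ...
    directory: "/"
    schedule:
      interval: "weekly"
```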
(GitHub has muddied the water here a bit by having two related but distinct things with the same name: there's "Dependabot" the subject of this post, and then there's "Dependabot security updates" which are documented separately and appear to operate on a different cycle[1]. I don't know if this latter one is enabled by default or not, but the "normal" one is definitely disabled until you configure it.)
[1]: https://docs.github.com/en/code-security/dependabot/dependab...
Nope. Example: https://github.com/m50d/tierney/pull/55
Do you have a Dependabot entry in your account/org-level applications?
I don't think so. I have no memory of such a thing, and there is no org.
OT: Semantic versioning's major flaw is the presumption that the entire chain of package maintainers is extremely diligent about correctly bumping their version numbers. There have certainly been a few projects that are very good about this, but the vast majority are not.
The only solution to this that I know of is to test: Manually exercise the features with the dependencies, write automated tests that check your critical needs from a dependency, or identify and run the tests from the dependency’s test suite that cover your use cases. (Also: Contribute tests that you want in the suite!)
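One cheap way to wire that in, if you're on GitHub Actions anyway, is to make those contract tests a required check on dependency PRs. A sketch (the make target is hypothetical; substitute whatever runs your critical-path tests):

```yaml
# Run our own contract tests against the bumped dependency on every
# Dependabot PR, so a "patch" bump that breaks us fails before merge.
name: dependency-contract-tests
on: pull_request
jobs:
  test:
    if: github.event.pull_request.user.login == 'dependabot[bot]'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test-dependency-contracts  # hypothetical target
```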
> The new app now calculates the yearly tax summary almost instantaneously. It’s a huge improvement over the previous version, which used to take several seconds. You ship it.
> …and oops, one of your biggest partners has called you to complain. You’ve broken their website, which embeds your app as part of their business management suite. It turns out their code expected your calculation to take at least 5 seconds. Now that it’s faster, users encounter lots of errors and results that don’t make any sense.
> In frustration, you quit your job and return to the construction industry. At least here, no one expects a house upgrade without disruption.
Huge https://xkcd.com/1172/ vibes
I get your question regarding scaling, but that's the job: you can choose to outsource code to 3rd-party libraries, and eternal vigilance is the trade-off.
Assume your 3rd-party dependencies will try to attack you at some point: they could be malicious; they could be hacked; they could be issued a secret court order; they could be corrupted; they could be beaten up until they pushed a change.
Unless you have some sort of contract or other legal protections, and feel comfortable enforcing them, behave accordingly.
0: https://www.wiz.io/blog/github-action-tj-actions-changed-fil...
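One concrete mitigation on the GHA side, directly relevant to the tj-actions incident [0]: pin third-party actions to a full commit SHA instead of a mutable tag, so a moved tag can't silently swap the code you run. (The action name below is hypothetical.)

```yaml
steps:
  # Risky: @v4 is a tag the author (or an attacker with their account) can move.
  - uses: some-org/some-action@v4
  # Safer: an immutable commit hash, with the tag noted for humans.
  - uses: some-org/some-action@3f1b2c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b  # v4
```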
The corollary of reviewing all code on every dependency update is that you should also review all the code of any new dependency you add, including whatever transformations the build process applies (what lands in the package manager may differ from the source), and the same goes for all transitive dependencies.
Same with the language and runtime tooling.
It is too hard to be perfect!
Still have flashbacks from that one time when some dependency in our Go project dropped support for go1.18 in a patch version update, and we almost couldn't rebuild the project before Friday evening. Because obviously /s being literally unable to build the dependency is a backwards-compatible change.
I don’t have the exact exam language in front of me right now but the requirement would be something like “you have some process for learning about, assessing, and mitigating vulnerabilities in software dependencies that you use”.
Enabling an automated scan and version bump tool like dependabot is a common and easy way to prove your organization has those capabilities. But you could implement whatever process you want here and prove that you do it on the schedule you say you do in order to satisfy the audit requirement.
Depends. Do you want to perpetuate the belief that software requires constant maintenance because it's constantly changing? Then yes: automate your version bumps and do it as often as possible.
If you want software to be stable then only update versions when you have a bug.
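For what it's worth, you can configure Dependabot in exactly that spirit: no routine version bumps, security fixes only. If I remember the docs right, setting the PR limit to zero disables version updates while leaving the file in place to configure security updates; verify against the current docs:

```yaml
# Sketch: dependabot.yml exists only to configure security updates.
# A PR limit of 0 disables routine version-update PRs for the ecosystem.
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"          # still required by the schema
    open-pull-requests-limit: 0  # no version-bump PRs
```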
1. Malicious code is injected into some project.
2. People have a chance to pick it up and put it into their code.
3. The malicious code is found, publicized, and people react.
The faster you act after step 1, the better the chance you'll pull the malicious code into your system before the world reaches step 3. Dependabot maximizes the speed of reaction after step 1. If I'm doing things somewhat more manually, I'm much more likely to hear that a dependency has been compromised before I start incorporating it.

Now, just typing this out, it may sound like I'm more freaked out than I actually am. While supply-chain attacks are a problem, they are getting worse, and they will continue to get worse, they are also still an exotic situation bubbling on the fringe of my awareness rather than something I encounter regularly. For a reasonable project, the most likely outcome is that Dependabot widening this exposure window will still have no actual real-world impact, and I'm aware of that.

Where this becomes relevant is if you think of Dependabot and its workflow as a way of managing security risk, because you imagine updates as likely carrying security improvements and that's your primary purpose for using it (as opposed to other uses, such as keeping your system from slowly falling behind in dependencies until it calcifies and can't be updated without a huge degree of effort, a perfectly reasonable threat to which Dependabot is a sensible response). In that case you also need to consider the ways in which it may actually increase your vulnerability to threats like supply-chain attacks.

And of course, projects do not start out with all their vulnerabilities on day one and then monotonically remove them. Many vulnerabilities are introduced later. For each such vulnerability there is a first release that includes it, and for that release, treating the update as if it were simply a Good Thing was in fact not true; anyone who pushed it in as quickly as possible made a mistake. Unfortunately, sometimes hard problems are just hard problems.

Though I have wondered about the idea of something like Dependabot that you could tell: flag known CVEs and security releases right away, but otherwise let things cook for 6 months before automatically building a PR for me. That would radically reduce the risk I'm outlining here.
(In fact, after pondering, I'm kind of reminded of how Debian and a lot of Linux distros work, with their staged Cutting Edge versus Testing versus Stable versus Long Term Support. Dependabot sort of builds in the presumption that you want that Cutting Edge level of updates... but in many cases, no, I really don't. I'd much rather build with Stable or Long Term Support for a lot of things, and dip into the riskier end of the pool for specific things if I need to.)
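As it happens, Dependabot has (or at least recently gained) a `cooldown` setting that delays update PRs for a configurable number of days after a release, which gets close to the "let it cook" idea. The exact keys and caps may differ from this sketch, so check the current docs:

```yaml
# Sketch: let new releases "cook" before Dependabot proposes them.
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
    cooldown:
      default-days: 90        # wait before proposing a new version
      semver-patch-days: 30   # take patch releases somewhat sooner
```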
https://docs.github.com/en/code-security/dependabot/dependab...
https://docs.github.com/en/code-security/dependabot/dependab...
The bottom line with these kinds of things is that virtually nobody should be using `pull_request_target`, even with “trusted” machine actors like Dependabot. It’s a pretty terrible footgun.
[1]: https://www.synacktiv.com/en/publications/github-actions-exp...
If someone wants to merge a bot PR or any other PR by an untrusted third party, they will have to "adopt" the bot commit as their own, sign the commit locally, and then wait for a second human reviewer to do a signed merge.
Not signing code means it could be tampered with in all sorts of ways. Get a Nitrokey and set up git to sign with it, and have your team do the same.
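If you want CI to actually enforce that rather than trust the team's discipline, something like this works (a sketch: it assumes trusted public keys are committed under .github/trusted-keys/, and branch protection's built-in "require signed commits" is the lower-effort alternative):

```yaml
# Fail the PR if any commit in it isn't signed by a trusted key.
name: verify-signatures
on: pull_request
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # need full history to walk the PR's commits
      - run: gpg --import .github/trusted-keys/*.asc
      - run: |
          for sha in $(git rev-list origin/${{ github.base_ref }}..${{ github.event.pull_request.head.sha }}); do
            git verify-commit "$sha"   # non-zero exit fails the job
          done
```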
> Here's the trick: github.actor does not always refer to the actual creator of the Pull Request. It's the user who caused the latest event that triggered the workflow.
Also, pull_request_target is a big red flag in any GHA workflow, and it's even highlighted as dangerous in the GHA docs. It's like running untrusted code with all your secrets handed over to it.
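Condensed, the footgun looks something like this (a sketch of the anti-pattern, not anything you should deploy):

```yaml
# pull_request_target runs in the base repo's privileged context, with
# secrets available, and github.actor is just whoever caused the latest
# triggering event, NOT the PR's author.
name: auto-merge-dependabot
on: pull_request_target
jobs:
  merge:
    if: github.actor == 'dependabot[bot]'   # spoofable, proves nothing
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # untrusted code
      - run: make build   # attacker-controlled scripts, your secrets
```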
For better or worse, it's a pattern that GitHub explicitly documents[1].
(An earlier version of this page also recommended `pull_request_target`, hence the long tail of public repositories that use it.)
[1]: https://docs.github.com/en/code-security/dependabot/working-...
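From memory, the currently documented shape looks roughly like the following; see [1] for the canonical version. It uses a plain `pull_request` trigger, checks the PR author, and gates merging on Dependabot's own metadata:

```yaml
name: dependabot-auto-merge
on: pull_request
permissions:
  contents: write
  pull-requests: write
jobs:
  dependabot:
    if: github.event.pull_request.user.login == 'dependabot[bot]'
    runs-on: ubuntu-latest
    steps:
      - id: metadata
        uses: dependabot/fetch-metadata@v2
        with:
          github-token: "${{ secrets.GITHUB_TOKEN }}"
      # Only auto-merge patch-level bumps; --auto still waits for required checks.
      - if: steps.metadata.outputs.update-type == 'version-update:semver-patch'
        run: gh pr merge --auto --merge "${{ github.event.pull_request.html_url }}"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```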
When you need to add dependencies that manage packages to avoid version conflicts, when you need to add dependencies that check for vulnerabilities in your dependencies, that's when you know you are in too deep.
And it's not like these things are necessary: for the vast majority of systems, the ones with fewer than 100k users, you can just build systems that run for less than $200 per month. I've started working with an empty requirements.txt and an empty package.json, and everything is fine.
Before you call me a dinosaur and claim I might as well be writing assembly: first, I do depend on an operating system and a programming language, and maybe a database if I need the convenience; there's such a thing as nuance. And second, I'm using cutting-edge tech. These things (Linux, Python, MySQL) have existed for less than 50 years! They are in their infancy!
But man, I'm really banking on the ethos of building on the shoulders of "giants" collapsing soon. I hate to be on the side of the hackers, but it's inevitable that something bad will happen if you keep doing shit like this.
It's like having a group of friends who constantly have orgies with random people, and who develop a huge culture around using different types of condoms and prophylactics, giving tips on what works and what doesn't, and switching strategies whenever one of them gets an STD, but never quite abandoning the idea of having massive orgies.
All the while, the cost of developing software is dropping to zero, almost no one uses the GPL, and, surprise surprise, companies build proprietary systems on top of 95% commons software. It sucks to compete with companies and programmers that just npm install 1000 things. Not sure what I'm banking on; maybe a slew of lawsuits that increase the liability for writing bad software and incentivize actually understanding the shit that we build?
G'night, HN.
No? In what world would it be safe to merge code, AI-generated or not, which you haven't reviewed, much less do it automatically without you even knowing it happened?
How do you know that you need the changes (whether bug or CVE)? How do you know the code isn't malicious? How do you know your systems are compatible with the change? How do you know you won't need to perform manual work during the migration?
Relying on a human reviewer, regardless, is a weak guarantee. If your security posture is "Joe shouldn't make mistakes", you still have a weak security posture.
I disagree for the reasons listed above, but let's focus for a moment on 3rd-party dependencies here, versus trusted ones. Given the numerous scenarios I listed above, it's a huge step from "using a 3rd party library" to "trusting the author of it".
We'd have to start with "...and you don't trust the author", because for most 3rd-party dependencies, the author has given us neither sufficient evidence to do so, nor sufficient recourse if that trust is violated.
> Relying on a human reviewer, regardless, is a weak guarantee.
Relying on countless random 3rd parties not to own you, when they know people are pulling in their code as a dependency, is a far, far worse guarantee. How would that strategy have protected someone against this supply-chain attack?:
https://www.wiz.io/blog/github-action-tj-actions-changed-fil...
The problem isn't the auto-merging 1.0.2 - it's the lack of attestation in the PR that appears to be code from the foobar authors who were trusted in version 1.0.1.
Why would you choose to give little review to a dependency?
> The problem isn't the auto-merging 1.0.2 - it's the lack of attestation in the PR that appears to be code from the foobar authors who were trusted in version 1.0.1.
The authors were never trusted and never will be. What was trusted was the code at the commit hash the first time 1.0.1 got tagged, and now a bot is saying you should move away from that trusted code.
Is that a good idea? That depends on, among other things, whether you need to and whether you trust the new code.
The foobar author could have intentionally or unintentionally included an exploit, or they could have had their account hacked and someone else included one on their behalf, or just changed the code behind an existing tag (see my previous comments for a recent example).
I get what you're saying about this being overwhelming. Without it, though, we've seen it's just security theater, because your code is only as strong as its weakest link. More eyeballs on a given release/commit also means more people looking out for something nefarious, though that's counteracted by a shorter time-since-release. Maybe multiple AI agents will make it easier.