That kind of notation, called SCCS/RCS, is the equivalent of finding a rotary phone in a modern office. Nobody uses it in 2005 Windows kernel code unless their programming background goes back decades, to government and military computing environments.
—
The astrophysics lab I worked at in 2006 was still using svn and had a bunch of Fortran with references to systems from the 70s and 80s. The code ran perfectly well thanks to modern optimizing compilers, and the move from VAX to Linux in the 90s had been a surprisingly seamless transition.
It reminds me of a conference talk I've referenced before, "do over or make do", basically arguing that rewriting large amounts of mostly functioning code isn't worth the effort if it can be taped together with modern tools.
That meant the files had the entire "$Revision: 1.3 $" nonsense and a "file changelog" at the top too - though many newer files never bothered to include the tags that would actually get RCS to replace them. Inconsistent as hell.
And while the "family" of devices the software was for traces its origin to the mid '90s, functionally none of the code was older than ~5 years at that time.
Naturally, even with only a few tens of engineers it regularly messed up: commits stepped on each other's toes and the entire tree got corrupted again and again. For fun I wrote a script that read it all and imported the entire history into git - you only had to go back a few years before the entire thing was absolute nonsense.
I have no idea why that was still being used then, but I assume it had been in use from the very start of that entire hardware family. Perhaps as it was fundamentally a "hardware" company - which until surprisingly recently seemed to consider "source control" to be "shared folders on remote machines" - "software" source control wasn't considered a priority.
What’s interesting about the malware in this post is that it goes one step further: instead of exploiting mismatches, it corrupts the computation itself — so every infected system agrees on the same wrong answer!
More broadly: any interpretive mismatch between components creates a failure surface. Sometimes it shows up as a bug, sometimes as an exploit primitive, sometimes as a testing blind spot. You see it everywhere — this paper, IDS vs OS, proxies vs backends, test vs prod, and now LLMs vs “guardrails.”
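A toy sketch of one such mismatch, loosely modeled on the insertion problem from the Ptacek/Newsham paper (the segment data and both policies here are hypothetical, chosen just to show the effect): two components reassemble the same overlapping segments under different conflict-resolution rules and end up seeing different byte streams.

```python
# Hypothetical overlapping "segments": same offset, conflicting payloads.
segments = [(0, b"ATTACK"), (0, b"XXXXXX")]

def reassemble(segs, policy):
    """Rebuild a byte stream; `policy` decides which copy wins on overlap."""
    buf = {}
    for off, data in segs:
        for i, byte in enumerate(data):
            pos = off + i
            if policy == "first-wins" and pos in buf:
                continue        # keep the earlier copy (one plausible monitor behavior)
            buf[pos] = byte     # "last-wins": later copy overwrites (one plausible endpoint behavior)
    return bytes(buf[p] for p in sorted(buf))

print(reassemble(segments, "first-wins"))  # b'ATTACK'
print(reassemble(segments, "last-wins"))   # b'XXXXXX'
```

Identical input, two legitimate-looking interpretations, two different results - that gap is the exploit primitive.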
Fun HN moment for me: as I was about to post this, I noticed a reply from @tptacek himself. His 1998 paper with Newsham (IDS vs OS mismatches) was my first exposure to this idea — and in hindsight it nudged me toward infosec, the Atlanta scene, spam filtering (PG's bayesian stuff) and eventually YC.
https://users.ece.cmu.edu/~adrian/731-sp04/readings/Ptacek-N...
The paper starts with this Einstein quote: "Not everything that is counted counts, and not everything that counts can be counted," which seems quite apt for the malware analyzed here :)
Do you mean skeptical on which government was responsible or that it was in fact a government effort?
I can see how attribution could be debatable (between two main suspects mainly), but are / were there any good arguments against this being a gov effort? I would find it highly unlikely that someone other than a gov could muster up so much domain knowledge, source pristine 0days and be so stealthy at the same time.
I still use RCS today. It's certainly not my preferred option, but my collaborator likes it, and it's not too annoying for me to use.
Rather, this was developed by a team of 6-8 people: maybe two or three of them working on the implant, another engineer handling the exploits and propagation, and yet another building the LP (listening post) and communications channels. They were supported by a scientist with deep knowledge of the process being targeted (say, developing nuclear weapons), and a mathematician who knew how to introduce subtle and undetectable errors.
Every academic institution, every school, all under the radar of recruitment and more. It's difficult to believe, but the network is real.
There are certainly people here on HN who've been solicited, most of whom will never mention it.
It's fun to imagine, though, what tight groups of highly motivated, stupidly intelligent people can do when they collectively commit to doing so - and with a hefty budget to assist.
Perhaps you meant cvs? Subversion only hit 1.0 in 2004, and git appeared in 2005.
The reference to code from the 70s and 80s didn't imply it was version controlled back then, if that's what you meant; by the time I was there it was under svn, and the files still had the old timestamps commented in them.
https://bazaar.abuse.ch/sample/9a10e1faa86a5d39417cae44da5ad...
I'll probably build a Windows XP VM first.
[1] https://en.wikipedia.org/wiki/Assassinations_of_Iranian_nucl...
Obviously it was found by a mathematician, but I still suspect it either wasn't obvious in published research or didn't cause deviations significant enough to make researchers revisit their calculations.
My team ran into some interesting but very small deviations when we moved our iterative solar wind model from 32-bit to 64-bit, but the changes weren't significant enough to revisit or redo prior research wholesale.
I suspect that, like my team in the 2000s, anyone who had data crunched by this bug also revisited it and either concluded the error wasn't significant or redid the work and found the conclusions unchanged.
I am curious now whether this bug was cited in any papers at the time, to give a rough idea of how aware or affected academics were.
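For anyone curious what 32-bit vs 64-bit drift looks like in practice, here's a minimal sketch (a generic running sum, nothing to do with the actual solar wind model) that simulates single precision by rounding through `struct`:

```python
import struct

def f32(x: float) -> float:
    """Round a Python double to the nearest IEEE-754 single-precision value."""
    return struct.unpack("f", struct.pack("f", x))[0]

def accumulate(n: int, single: bool) -> float:
    """Add 0.1 to a running total n times, in simulated 32-bit or native 64-bit."""
    step = f32(0.1) if single else 0.1   # 0.1 is not exactly representable either way
    total = 0.0
    for _ in range(n):
        total = total + step
        if single:
            total = f32(total)           # round the accumulator back to 32 bits each step
    return total

t32 = accumulate(10_000, single=True)
t64 = accumulate(10_000, single=False)
print(t32, t64)  # the 32-bit run ends up roughly 0.1 short of 1000
```

Per-step rounding error is tiny, but iterative codes run millions of steps, so the accumulated drift becomes visible - small enough, usually, that conclusions survive, which matches our experience.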
We had researchers doing what I suppose might be called HPC on Sequent Symmetry machines, which were i386s in the mid-80s and Pentiums by the mid-90s. There were other high-performance x86 SMP boxes that were roughly equivalent (e.g. the NCR 3550). That plus some pretty good x86 FORTRAN compilers (e.g. Lahey) made this reasonable. I also know a lot of folks who had desktop/deskside SMP PPros + FORTRAN to save grant money on the big iron and got useful work out of them.
Basically, x86 was way cheap and had useful amounts of FP. There's a reason x86 displaced RISC; this is one. I'm sure they would have rather used something like an X-MP/48, but one plays the hand one is dealt.
And yes, to be clear, I don’t consider it contributing to “science” if it’s not published, reviewed, and reproducible.
We will probably keep doing it until we encounter an alien intelligence and snap out of it.
Obviously IOCs are presented.
This comment is very exaggerated; I can think of a few more "morally corrupt" things to do.
But indeed, there are many more details in the link you shared. Thanks for posting this!
This LLM style of writing has had its day.
This one has some additional details, based on a talk given by one of the authors.
(@dang - consider re-pointing to this?)
The current article is hard to read.
I was about to respond saying what a terrible article it was, since it reads as if the author has no idea what he's talking about. Its being an attempt to paraphrase the original article would explain that.
Edit: Old link for those wondering, since it got changed: https://hackingpassion.com/fast16-pre-stuxnet-cyber-sabotage...
https://www.theregister.com/2026/04/24/fast16_sabotage_malwa...