118 points by kepler471 95 days ago | 13 comments
ivanjermakov 94 days ago
I had a great experience writing self-modifying programs in SIC-1, a single-instruction programming game: https://store.steampowered.com/app/2124440/SIC1/
ycombinatrix 93 days ago
Cool recommendation, will give it a try.
iamcreasy 82 days ago
Is it possible to mutate the text segment by another process? For example, injecting something malicious instead of exec-ing a shell?
Cloudef 94 days ago
Kaze Emanuar's "Optimizing with Bad Code" video also briefly goes through self-modifying code: https://www.youtube.com/watch?v=4LiP39gJuqE
DrZhvago 94 days ago
Someone correct me if I am wrong, but self-mutating code is not as uncommon as the author portrays it. I thought the whole idea of hotspot optimization in a compiler is essentially self-mutating code.

Also, I spent a moderately successful internship at Microsoft working on dynamic assemblies. I never got deep enough into that to fully understand when and how customers were actually using it.

https://learn.microsoft.com/en-us/dotnet/fundamentals/reflec...

pfdietz 94 days ago
A program that can generate, compile, and execute new code is nothing special in the Common Lisp world. One can build lambda expressions, invoke the compile function on them, and call the resulting compiled functions. One can even assign these functions to the symbol-function slot of symbols, allowing them to be called from pre-existing code that had been making calls to that function named by that symbol.
BenjiWiebe 93 days ago
I know that no other language can match Lisp, but many languages can generate and execute new code, if they're interpreted. Compile, too, if they're JITted. They all require quite a bit of runtime support though.
alcover 94 days ago
I often think this could maybe allow fantastic runtime optimisations. I realise this would be hardly debuggable but still..
barchar 94 days ago
It sometimes can, but you then have to balance the time spent optimizing against the time spent actually doing whatever you were optimizing.

Also on modern chips you must wait quite a number of cycles before executing modified code or endure a catastrophic performance hit. This is ok for loops and stuff, but makes a lot of the really clever stuff pointless.

A debugger's software breakpoints _are_ self-modifying code :)

vbezhenar 94 days ago
I used GNU lightning library once for such optimisation. I think it was ICFPC 2006 task. I had to write an interpreter for virtual machine. Naive approach worked but was slow, so I decided to speed it up a bit using JIT. It wasn't a 100% JIT, I think I just implemented it for loops but it was enough to tremendously speed it up.
userbinator 94 days ago
Programs from the 80s-90s are likely to have such tricks. I have done something similar to "hardcode" semi-constants like frame sizes and quantisers in critical loops related to audio and video decompression, and the performance gain is indeed measurable.
econ 94 days ago
The 80's:

Say you set a flag for some reason, and later you have to check IF it is set. If the condition needs to be checked many times, you replace the check with the code itself (rather than storing a value and testing it somewhere). And if you need to check repeatedly whether something is still true, you overwrite the condition check with no-ops once it no longer is.

Also funny are insanely large loop unrolls with hard-coded values. You could make a kind of rainbow table of those.

alcover 94 days ago
> "hardcode" semi-constants

You mean you somehow avoided a load. But what if the constant was already placed in a register? Also, how could you pinpoint the reference to your constant in the machine code? I'm quite a layman at all this.

ronsor 94 days ago
> Also how could you pinpoint the reference to your constant in the machine code?

Not OP, but often one uses an easily identifiable dummy pattern like 0xC0DECA57 or 0xDEADBEEF which can be substituted without also messing up the machine code.

mananaysiempre 94 days ago
If you’re willing to parse object files (a much easier proposition for ELF than for just about anything else), another option is to have the source code mention the constants as addresses of external symbols, then parse the relocations in the compiled object. Unfortunately, I’ve been unable to figure out a reliable recipe to get a C compiler to emit absolute relocations in position-independent code, even after restricting myself to GCC and Clang for x86 Linux; in some configurations it works and in others you (rather pointlessly) get a PC-relative one followed by an add.
userbinator 94 days ago
All the registers were already taken.

You use a label.

Retr0id 94 days ago
It already does, in the form of JIT compilation.
alcover 94 days ago
OK but I meant in already native code, like in a C program - no bytecode.
lmm 94 days ago
If you are generating or modifying code at runtime then how is that different from bytecode? Standardised bytecodes and JITs are just an organised way of doing the same thing.
connicpu 94 days ago
LuaJIT has a wonderful dynamic code generation system in the form of the DynASM[1] library. You can use it separately from LuaJIT for dynamic runtime code generation to create machine code optimized for a particular problem.

[1]: https://luajit.org/dynasm.html

Retr0id 94 days ago
I mean that, too.
112233 94 days ago
Linux kernel had the same idea, and now they have "static keys". It's both impressive and terrifying.
xixixao 94 days ago
I’ve been thinking a lot about this topic lately, even studying how executables look on arm macOS. My motivation was exploring truly fast incremental compilation for native code.

The only way to do this now on macOS is remapping whole pages as JIT. This makes it quite a challenge but still it might work…

oxcabe 94 days ago
It's impressive how well laid out the content in this article is. The spacing, tables, and code segments all look pristine to me, which is especially helpful given how dense and technical the content is.
f1shy 94 days ago
I have the suspicion that there is a high correlation between how organized the content is, and how organized and clear the mind of the writer is.
AStonesThrow 94 days ago
It was designed by Elves on Christmas Island where Dwarves run the servers and Hobbits operate the power plant
belter 95 days ago
I guess in OpenBSD because of W ^ X this would not work?
mananaysiempre 94 days ago
Not as is, but I think OpenBSD permits you to map the same memory twice, once as W and once as X (which would be a reasonable hoop to jump through for JITs etc., except there’s no portable way to do it). ARM64 MacOS doesn’t even permit that, and you need to use OS-specific incantations[1] that essentially prohibit two JITs coexisting in the same process.

[1] https://developer.apple.com/documentation/apple-silicon/port...

saagarjha 94 days ago
No, the protection is per-thread. You can run the JITs in different threads
rkeene2 94 days ago
In Linux it also needs mprotect() to change the permissions on the page so it can write it. The OpenBSD man page[0] indicates that it supports this as well, though it notes that not all implementations are guaranteed to allow it; my guess is it would generally work.

[0] https://man.openbsd.org/mprotect.2

Retr0id 94 days ago
It's not required on Linux if the ELF headers are set up such that the page is mapped rwx to begin with (but rwx mappings are generally frowned upon from a security perspective).
akdas 94 days ago
I was thinking the same thing. Usually, you'd want to write the new code to a page that you mark as read and write, then switch that page to read and execute. This becomes tricky if the code that's doing the modifying is in the same page as the code being modified.
timewizard 94 days ago
The way it's coded it wouldn't; however, you can map the same shared memory twice. Once with R|W and a second time with R|X. Then you can write into one region and execute out of its mirrored mapping.
Someone 94 days ago
Fun article, but the resulting code is extremely brittle:

- assumes x86_64

- makes the invalid assumption that functions get compiled into a contiguous range of bytes (I’m not aware of any compiler that violates that, but especially with profile-guided optimization or compilers that try to minimize program size, that may not be true, and there is nothing in the standard that guarantees it)

- assumes (as the article acknowledges) that “to determine the length of foo(), we added an empty function, bar(), that immediately follows foo(). By subtracting the address of bar() from foo() we can determine the length in bytes of foo().”. Even simple “all functions align at cache lines” slightly violates that, and I can see a compiler or a linker move the otherwise unused bar away from foo for various reasons.

- makes assumptions about the OS it is running on.

- makes assumptions about the instructions that its source code gets compiled into. For example, in the original example, a sufficiently smart compiler could compile

  void foo(void) {
    int i=0;
    i++;
    printf("i: %d\n", i);
  }
as

  void foo(void) {
    printf("1\n");
  }
or maybe even

  void foo(void) {
    puts("1");
  }
Changing compiler flags can already break this program.

Also, why does this example work without flushing the instruction cache after modifying the code?

nekitamo 94 days ago
For the mainstream OSes (Windows, macOS, Linux, Android), you don't need to flush the instruction cache on most x86 CPUs after modifying the code segment dynamically, but you do on ARM and MIPS.

This has burned me before while writing a binary packer for Android.

znpy 94 days ago
The author clearly explained that the whole article is more a demonstration for illustrative purposes than anything else.

> Changing compiler flags can already break this program.

That's not the point of the article.

saagarjha 94 days ago
They check all those assumptions by disassembling the code.
Cloudef 94 days ago
> self-modifying code > brittle

I mean that is to be very much expected, unless someone comes up with a programming language that fully embraces the concept.