We just released a driver that lets users of just-bash attach a full Archil file system, synced to S3. This lets you run just-bash in an environment where you don't have a full VM and still get high-performance access to data already in your S3 bucket, for things like greps or edits.
At this point why not make the agents use a restricted subset of python, typescript or lua or something.
Bash has been unchanged for decades, but it's not a very nice language.
I know pydantic has been experimenting with https://github.com/pydantic/monty (restricted python) and I think Cloudflare and co were experimenting with giving typescript to agents.
This is a really interesting idea. I wonder if something like Luau would be a good solution here - it's a typed version of Lua meant for sandboxing (built for Roblox scripting) that has a lot of guardrails on it.
I'll add that agents (CC/Codex) very often screw up escaping/quoting in their bash scripts and waste tokens figuring out what happened. It's worse when it's a script they save and reuse, because it's often a code injection vulnerability.
I want them to be better at it, but given how hard it is for me as a human to get it right (which is to say, I get it wrong a lot, especially handling newlines in filenames, or filenames that start with --), I find it hard to fault them too much.
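For what it's worth, the usual way out in a Node-based harness is to never build a shell string at all: pass an argv array so nothing gets re-parsed by a shell. A minimal sketch (the filename here is made up to hit both failure modes mentioned above):

```typescript
import { execFileSync } from "node:child_process";
import { mkdtempSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// A filename that defeats naive shell interpolation: it starts with "--"
// and contains a space, so `rm $name` (and even `rm "$name"`) goes wrong,
// because rm parses the leading "--..." as an option.
const dir = mkdtempSync(join(tmpdir(), "quoting-"));
const name = "--dry-run file.txt";
writeFileSync(join(dir, name), "hello\n");

// Passing an argv array skips the shell entirely: no quoting, no word
// splitting, no injection. The explicit "--" marks the end of options,
// so ls treats what follows as an operand even though it starts with "-".
const out = execFileSync("ls", ["--", name], { cwd: dir, encoding: "utf8" });
console.log(out.trim()); // the literal filename, dashes and all
```

The same trick covers the saved-and-reused case: a script that takes its arguments as an array can't be injected through a filename.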
If you present most LLMs with a run_python tool, they won't realize they can access a standard Linux userspace with it, even if that's explicitly detailed. But give them the spiritually identical tool called run_shell and they'll use it correctly.
Gotta work with what's in the training data I suppose.
Agents really do not care at all how "nice" a language is. You only need to be picky about language if a human is going to be working with the code, and I get the impression that's not the use case here.
just-bash comes with Python installed, so in a way that's what this has done. I've used this for some prototypes with AI tools (via bash-tool), can't really productionise it in our current setup, but it worked very well and was undeniably pretty cool.
> std::slop is a persistent, SQLite-driven C++ CLI agent. It remembers your work through per-session ledgers, providing long-term recall and structured state management. std::slop features built-in Git integration. Its goal is to be an agent for which the context and its use are fully transparent and configurable.
Agreed! It's very notable Codex behavior to prefer Python for scripting purposes.
I keep telling myself to make a good zx skill or agents.md. I really like zx's ergonomics, and its output when it shells out is friendly.
Top comments are Lua. I respect it, and those look like neat tools, but please, not what I want to look at. It would be interesting to see how Lua fares for scripting purposes, though; I haven't done enough I/O with it to know what that would look like. Does it assume some uv wrapper too?
TIL about Monty. A number of people have tried to sandbox Python using Python and userspace tricks, but ultimately they've all concluded that you can't sandbox Python with Python.
Virtual machines are a better workload isolation boundary than containers, which in turn are a better isolation boundary than bubblewrap and a WASM runtime.
> Should a (formally verified) policy engine run within the same WASM runtime, or should it be enforced by the WASM runtime, or by the VM or Container that the WASM runtime runs within?
> How do these userspace policies compare to MAC and DAC implementations like SELinux AVC, AppArmor, Systemd SyscallFilter, and seccomp with containers for example?
It’s cool to see this project and others like it pop up, virtualizing OS primitives like bash and even file systems.
You can build an interface around the Node.js file system interface and get access to some nice tools, like isomorphic-git for instance. Then everything couples nicely with agents.
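isomorphic-git is a good illustration of why this composes: it accepts any object that implements the Node fs API, real or virtual. As a rough sketch of the idea (the `MemFS` class and its tiny method subset are made up for illustration, not any library's actual interface):

```typescript
// A toy in-memory stand-in for a sliver of the Node fs promise API.
// Libraries that take a pluggable `fs` object only care about the shape,
// which is what makes a fully virtual file system workable.
class MemFS {
  private files = new Map<string, string>();

  async readFile(path: string): Promise<string> {
    const data = this.files.get(path);
    if (data === undefined) throw new Error(`ENOENT: ${path}`);
    return data;
  }

  async writeFile(path: string, data: string): Promise<void> {
    this.files.set(path, data);
  }

  async readdir(prefix: string): Promise<string[]> {
    return [...this.files.keys()].filter((p) => p.startsWith(prefix));
  }
}

const memfs = new MemFS();
memfs
  .writeFile("/repo/README.md", "# hello\n")
  .then(() => memfs.readFile("/repo/README.md"))
  .then((text) => console.log(text)); // prints "# hello"
```

Anything written against this shape (a grep, an editor, a git implementation) runs unchanged whether the backing store is memory, disk, or an object store.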
The Unix command-line tools being the most efficient way to use an LLM has been a surprise. I wonder why.
Maybe 'do one thing well'? The piping? The fact that the tools have been around so long so there are so many examples in the training data? Simplicity? All of it?
The success of this project depends on the answer.
Even so, I suspect that something like this will be a far too leaky abstraction.
But Vercel must try because they see the writing on the wall.
If you want a better guess: it's because the man pages for all the tools are likely duplicated across so many media in the LLM training data that there's just an efficient pipeline. They go back to the '70s or whatever.
I'm not convinced. I don't want to rack servers and diagnose bad RAM like it's still the '90s, so I'm paying someone else for the privilege, especially to get POPs closer to customers than I want to drive or fly to set up, and especially in foreign countries where I don't speak the language or know the culture. Fun for a vacation, but a recipe for wasting time and money setting up a local corporate entity and a whole team, when I can just pay GCP or AWS and have a server on the other side of the planet faster than I can book a flight and hotel there.
There's also the maintenance of the server to consider. Vercel or other serverless options (PaaS, Lambda, GCP Functions, etc.) mean there's just less crap for me to manage, because they're dealing with it, and yeah, they charge money for that service. Being able to tell Claude Code "I set up SSH keys and passwordless sudo for you, go fix my shit" works, but then the hard drive is full, so I have to upsize the VPS, and if you're stupid/brave, you can give Claude Code MCP access to Chrome so it can click the buttons in Hetzner to upsize for you. But that's time and tokens spent not working on the product, so at the end of the day I think Vercel is gonna be fine. AI-generated code means many, many more people are trying to build some sort of Internet company, but they'll only discover cheaper options after paying for Vercel becomes painful.
I did a slightly less ambitious prototype a few weeks ago where I added lazy loading of GCS files into the just-bash file system, as well as lots of other on-demand files. It was a lot of fun.
Trying to secure the sandbox the harness is running in seems like the hard way to do things. It's not a bad idea, but I think it'd be easier to focus on isolating the sandbox and securing resources the harness sandbox accesses, since true security requires that anyhow.
What, exactly, is "safe" about TypeScript other than type safety?
TypeScript is just a language anyway. It's the runtime that needs to be contained. In that sense it's no different from any other interpreter or runtime, whether it be Go, Python, Java, or any shell.
In my view this really is best managed by the OS kernel, as the ultimate responsibility for process isolation belongs to it. Relying on userspace solutions to enforce restrictions only gets you so far.
I agree on all counts and that this project is silly on the face of it.
My comment was more that there is a massive cohort of devs who have never done sysadmin work and know nothing of the prior art in this space. TypeScript "feels" safe and familiar and like the right way to accomplish their goals, regardless of whether it actually is.
Interesting concept, but I think the challenge is making the tools compatible with the official ones; otherwise you will get odd behaviour. I think it is useful for very specific scenarios where you want to control the environment with a subset of tools only, while still benefiting from some form of scripting.
This ends up reading files into node.js and then running a command like grep but implemented in JS. I love the concept but isn’t this incredibly slow compared to native cli tools? Building everything in JS on top of just readFile and writeFile interfaces seems pretty limited in what you can do for performance.
I have been playing around with something like this.
I'm not going for compatibility, but something that is a bit hackable. I'm deliberately not having /lib, /share, and /etc, to avoid any confusion that it might be POSIX.
I would not read too much into that doc. In practice, the only missing pieces are extreme edge cases of the kind that aren't even consistent between other implementations of bash.
In practice it works great. I haven't seen a failed command in a while
Incompatibilities don't matter much provided your error messages are actionable - an LLM can hit a problem, read the error message and try again. They'll also remember that solution for the rest of that session.
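The loop described here is simple to sketch. This isn't any particular harness's implementation; `model` and `execute` are stand-ins for a real LLM call and a real sandbox:

```typescript
// Run the model's command; on failure, feed the error text back so the
// next attempt can correct it. Actionable error messages are what make
// the second attempt better than the first.
type Result = { ok: boolean; output: string };

function runWithRetries(
  model: (lastError: string | null) => string,
  execute: (cmd: string) => Result,
  maxAttempts = 3,
): Result {
  let lastError: string | null = null;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const cmd = model(lastError); // the model sees the previous error, if any
    const result = execute(cmd);
    if (result.ok) return result;
    lastError = result.output;
  }
  return { ok: false, output: lastError ?? "" };
}

// Fake run: the "model" tries gsed first, then corrects after the error.
const result = runWithRetries(
  (err) => (err === null ? "gsed -i s/a/b/ f.txt" : "sed -i s/a/b/ f.txt"),
  (cmd) =>
    cmd.startsWith("gsed")
      ? { ok: false, output: "gsed: command not found" }
      : { ok: true, output: "" },
);
console.log(result.ok); // true
```

An unactionable message ("error 1") breaks this loop, because the retry has nothing to steer by; that's the real compatibility bar.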
Because bash is everywhere. Stability is a separate concern. And we know this because LLMs routinely generate deprecated code for libraries that change a lot.
I've been working with the shell long enough that I know just by looking at it.
Anyway, it was rhetorical. I was making a point about portability. Scripts we write today run even on ancient versions, and that's an effort sustained by lots of different interpreters (not only bash).
I'm trying to give sane advice here. Re-implementing bash is a herculean task, and some "small incompatibilities" sometimes reveal themselves as deep architectural dead-ends.
Why couldn’t they name it `agent-bash` then? What’s with all the “just-this”, “super-that” naming?
It's like the developer lost their last remaining brain cells developing it, and when it came to naming it, used the first meaningless word that came up.
After all, you’re limiting discovery with a name like that.