I love seeing people experiment with RTS game UIs as agent orchestration interfaces — the metaphor fits (multiple units, fog of war, resource management). Mostly demos so far, but the creative potential is huge.
The biggest challenge is that as LLM costs drop dramatically each year, the number of agents that can be orchestrated grows by orders of magnitude. So the UI needs to compress this growing information into something meaningful for effective human steerability. A constantly moving target.
What's interesting is that the tooling seems to be moving closer to the metal (CLI, APIs, infrastructure) rather than up toward better visual interfaces.
My bet is that the orchestration infrastructure underneath is more durable than any UI layer. I've been building an orchestration system focused on reusable workflows, observability, and feedback loops — and I think that foundation holds even as the interface keeps changing.
If some tech CEO makes a major announcement on X, it's newsworthy and belongs here. Anything else that's actual news is also fair game ... but all other X posts do not belong here!
But a CEO's random thoughts on X do not.
Aside from that, I've seen few X posts here that didn't follow this pattern, and the ones that didn't were short-lived at the top.
You can just downvote anything you don't like and move along.
Why are they great? Because the interface is simply text. That's really simple and powerful.
I don't understand why I can't lift that simple interface into a web UI in my phone's browser.
It seems like this should be as simple as a "webmux" (tmux on the web), but it's surprisingly elusive.
I would really like something that is a tiny layer on top of the existing great text chat modes.
That way I could use opencode or Gemini or Claude or whatever is next. The less software the better.
Using someone else’s software in the exploration phase is like chewing someone else’s gum.
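For what it's worth, something close to a "webmux" already exists as a thin layer: ttyd serves any terminal program over HTTP/WebSocket. A minimal sketch, assuming ttyd and tmux are installed (the port and session name are arbitrary choices, not anything standard):

```shell
# Serve a tmux session in the browser; -W makes the terminal writable
# (recent ttyd versions are read-only by default).
ttyd -W -p 7681 tmux new-session -A -s agents

# Then open http://<host>:7681 from a phone browser and run
# opencode / gemini / claude inside the session as usual.
# The same session is also attachable from a desktop terminal:
#   tmux attach -t agents
```

Because this wraps the existing CLI directly, there's no extra software layer between you and whichever agent comes next.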
The problem is really one of ops / visibility / delegation / orchestration of agents, but the solution is being mislabelled as an "IDE", which I think is the wrong analogy, though perhaps the right in-between step toward whatever comes next.
I think the key is to combine human and agent task tracking in a single pane of glass.
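Concretely, "one pane of glass" could mean a single task model where the assignee is either a human or an agent. A hypothetical sketch (all names and fields are made up for illustration):

```typescript
// Hypothetical sketch: one task model covering both human and agent work.
type Assignee =
  | { kind: "human"; name: string }
  | { kind: "agent"; model: string; sessionId: string };

interface Task {
  id: string;
  title: string;
  status: "queued" | "in_progress" | "blocked" | "done";
  assignee: Assignee;
  blockedOn?: string[]; // task ids, so agent work can block human work and vice versa
}

const board: Task[] = [
  {
    id: "t1",
    title: "Review onboarding PR",
    status: "in_progress",
    assignee: { kind: "human", name: "sam" },
  },
  {
    id: "t2",
    title: "Fix flaky integration test",
    status: "queued",
    assignee: { kind: "agent", model: "claude", sessionId: "s-42" },
    blockedOn: ["t1"],
  },
];

// One query works for both kinds of worker: that's the single pane of glass.
const active = board.filter((t) => t.status !== "done");
```

The point of the shared model is that dashboards, dependency tracking, and notifications don't need to care whether the worker is a person or an agent.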
We are moving into Codespaces now and it basically gives us an isolated full runtime env with Docker-in-Docker running Postgres. Developers had been trying various things to script worktrees, dealing with jank related to copying files into worktrees, managing git commands to orchestrate all of this, and managing port assignments.
Now with dev containers, we get a full end-to-end stack that we start up using Aspire (https://aspire.dev) which is fantastic because it's programmable.
All the ports get automatically routed and proxied and we get a fully functioning, isolated environment per PR; no fiddling with worktrees, easy to share with product team, etc.
A 64GB developer machine can realistically run ~2 of our full stacks (Pg, Elastic, Redis, Hatchet, Temporal, a bunch of other supporting services). The frontend repo alone is 1.5M+ lines of TS, which will grind a small machine to a halt by itself. In Codespaces? A developer could realistically work on 10 streams of changes at once and let product teams preview each; no hardware restrictions, no juggling worktrees, branches, or git repo state.
I can code from any browser, from my phone, from a MacBook Neo, from a Chromebook. Switching between workstreams? Just switch tabs. Fiddling around with local worktrees seems fine for small, toy projects, but for anything sizable the future seems to be in dev containers.
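For anyone curious what the setup looks like, here's a minimal devcontainer.json sketch in the same spirit: Docker-in-Docker for the service containers, .NET for the Aspire app host, and forwarded ports per service. The image, feature versions, port numbers, and app-host path are assumptions for illustration, not our actual config:

```json
{
  "name": "full-stack-dev",
  "image": "mcr.io/devcontainers/typescript-node:22",
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {},
    "ghcr.io/devcontainers/features/dotnet:2": {}
  },
  "forwardPorts": [5432, 9200, 6379],
  "portsAttributes": {
    "5432": { "label": "Postgres" },
    "9200": { "label": "Elasticsearch" },
    "6379": { "label": "Redis" }
  },
  "postStartCommand": "dotnet run --project ./apphost"
}
```

With something like this, every Codespace (one per PR) comes up with the same isolated stack, and the forwarded ports are what make "share a preview with the product team" a one-click link.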
There are also a lot of projects out there approaching this from many different angles.
Curious what features people would like to see in an Agentic IDE? Would you like to instruct multiple agents in real time (like vibe coding on steroids) or dispatch autonomous agents to solve a long-running task? Something else?
Early days; I'd appreciate any feedback.
They had my attention. Now they've lost it.