Really interesting idea. Finally a use of AI that goes beyond the simple “chat on top of a resume” and actually tries to demonstrate skills with real evidence by reading the code.
I also really like the separation between the public and private agent. It’s an architectural choice that many people ignore when talking about AI agents.
The approach to cost control and model selection is also very solid.
Curious, how did you settle on Haiku/Sonnet? There are much cheaper models on OpenRouter that probably perform comparably...
Consider Haiku 4.5: $1/M input tokens | $5/M output tokens
vs MiniMax M2.7: $0.30/M input tokens | $1.20/M output tokens
vs Kimi K2.5: $0.45/M input tokens | $2.20/M output tokens
I haven't tried them, so I can't say for sure, but from personal experience I think M2.7 and K2.5 can match Haiku, and probably exceed it on most tasks, for much less.
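To make the price gap concrete, here's a back-of-envelope calculator using the per-million-token prices quoted above (prices change often, so treat the numbers as illustrative; the traffic figures in the example are made up):

```python
# Per-million-token prices quoted in this thread: (input $/M, output $/M)
PRICES = {
    "Haiku 4.5":    (1.00, 5.00),
    "MiniMax M2.7": (0.30, 1.20),
    "Kimi K2.5":    (0.45, 2.20),
}

def monthly_cost(model, input_m, output_m):
    """Dollar cost for input_m / output_m million tokens in a month."""
    inp, out = PRICES[model]
    return input_m * inp + output_m * out

# e.g. 100M input tokens and 10M output tokens in a month:
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 100, 10):.2f}")
```

At that (hypothetical) volume the spread is roughly $150 vs $42 vs $67 per month, so the model choice dominates the VPS cost by a wide margin.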
Since they're opening it up publicly on IRC here, the safety rails might be a consideration. I built an agent recently, and that's why I'm paying a premium to Anthropic atm -- though I'm still experimenting to see if it's really necessary.
It's getting some organic usage -- 100M input tokens for chats alone this month -- and I've seen enough users throw Haiku against the wall and fail to trick it into misbehaving. It "pumps the brakes" a lot and feigns annoyance when you ask it repeatedly :) It handles emotionally driven real-life questions mid-conversation well. It just works.
I'm not seeing all of that consistently with other models I've tried so far -- but I've assumed it's not a completely fair comparison with (e.g.) open-weights models, since these safety rails presumably don't always come from the raw model itself.
I have a relatively hard personal agentic benchmark, and Mimo v2-Flash scores 8% higher in 109 seconds for $0.003 (0.3 cents!) vs Haiku, which took 262 seconds for $0.24 (24 cents).
Gemini 3.1 Flash Lite Preview (yes that is its name) is also a solid choice.
MiniMax M2.7 is actually pretty solid. I’ve been using it for coding lately and it handles most tasks just fine, but Opus 4.6 is still on another level.
Basically, it reads your GitHub repo to power an Intercom-like bot on your website. It answers visitors' questions so you don't have to write knowledge bases.
The rooms-as-contexts pattern is underrated. You get namespace isolation for free without building any session management. Switch channels and you switch the project and the system prompt, while the conversation history stays where it belongs.
The other win is client agnosticism. I can connect from a terminal on my workstation, a mobile IRC client on my phone, or a web client if I'm on someone else's machine, and I'm talking to the same agent with the same history. That's much harder to replicate with a custom REST API without building your own auth and session layer.
The backscroll is the part that makes it feel persistent. The agent feels "always on" even though it's just responding to messages, because the channel history gives you the full context of what you asked it last time.
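The whole rooms-as-contexts idea fits in a few lines -- here's a sketch of what I mean, with the channel names, prompts, and the LLM call all made up for illustration:

```python
from collections import defaultdict

# Illustrative channel -> persona mapping (names invented, not from the post)
SYSTEM_PROMPTS = {
    "#resume":  "You answer questions about my CV and public repos.",
    "#project": "You are a coding assistant for this project's repo.",
}
histories = defaultdict(list)  # channel -> list of (role, text) turns

def call_llm(messages):
    # placeholder: a real bot would call an LLM API with these messages
    return f"(reply based on {len(messages)} messages)"

def handle_message(channel, nick, text):
    """Each channel carries its own prompt and history -- free namespacing."""
    histories[channel].append(("user", f"<{nick}> {text}"))
    messages = [("system", SYSTEM_PROMPTS.get(channel, "Be helpful."))]
    messages += histories[channel]
    reply = call_llm(messages)
    histories[channel].append(("assistant", reply))
    return reply
```

The session layer is literally just "which channel did this line arrive on" -- the IRC server does the routing and the backscroll does the persistence.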
RFC 1459 originally stipulated that messages not exceed 512 bytes in length, including the trailing CR-LF, which meant the usable length for the message text itself was less. When the protocol was re-formalized in 2000 as RFCs 2810-2813, the 512-byte limit was kept.
However, most modern IRC implementations support a subset of the IRCv3 protocol extensions, which allow up to 8192 bytes for "message tags" (i.e. metadata) while keeping the 512-byte limit on the message body itself, largely for historical and backwards-compatibility reasons, for old clients that don't support the v3 extensions.
So the answer, strictly speaking, is yes: IRC does still have message length limits. But practically speaking, that's because there's a not-insignificant installed base of legacy clients that will shit their pants if a message exceeds 512 bytes, rather than anything inherent to the protocol itself.
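In practice an agent that streams long replies onto IRC has to split them itself. A sketch of what that looks like, assuming a conservative byte budget (the server also prepends your `nick!user@host` prefix when relaying, so you want extra margin beyond what's shown here):

```python
MAX_LINE = 512  # classic limit, includes command, target, and trailing CRLF

def split_privmsg(target, text, overhead=0):
    """Split text into PRIVMSG lines that each fit in MAX_LINE bytes.

    `overhead` should cover the server-added source prefix; a margin of
    ~100 bytes is a common conservative choice.
    """
    prefix = f"PRIVMSG {target} :"
    budget = MAX_LINE - len(prefix.encode()) - 2 - overhead  # 2 for CRLF
    chunks, data = [], text.encode("utf-8")
    while data:
        cut = min(budget, len(data))
        # back up so we never split a multi-byte UTF-8 character
        while cut < len(data) and (data[cut] & 0xC0) == 0x80:
            cut -= 1
        chunks.append(prefix + data[:cut].decode("utf-8") + "\r\n")
        data = data[cut:]
    return chunks
```

Nothing here is from the post -- it's just the standard chunking dance every IRC bot ends up doing.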
I run an agent and borrowed inspiration from what Claude Code used to do with "think hard" -- but instead of increasing the thinking budget, it promotes the request from Haiku to Opus.
It's not very natural though. Curious what other people are doing as well
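The routing itself is trivial -- mine is basically keyword matching before the API call, something like this (phrases and model names are just what I happen to use):

```python
# Phrases that promote the request to the expensive model (my own list)
ESCALATION_PHRASES = ("think hard", "think harder", "ultrathink")

def pick_model(user_text):
    """Cheap model by default; promote when the user explicitly asks."""
    lowered = user_text.lower()
    if any(p in lowered for p in ESCALATION_PHRASES):
        return "opus"    # expensive, capable
    return "haiku"       # cheap default
```

The awkward part isn't the code, it's that users have to know the magic words -- which is why I'm curious what others do.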
For future reference, I recommend having another Haiku instance monitor the chat and check whether people are up to shenanigans. You can use ntfy to send yourself an alert. The chat is completely off the rails right now...
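ntfy's API is literally just an HTTP POST, so the alerting half is a few lines. A sketch, with the moderation call stubbed out and the topic name made up (pick your own private topic):

```python
import urllib.request

def alert_request(message, topic="my-agent-alerts"):
    """Build the ntfy.sh POST; sending it is just urllib.request.urlopen(req)."""
    return urllib.request.Request(
        f"https://ntfy.sh/{topic}",
        data=message.encode("utf-8"),
        headers={"Title": "IRC agent alert", "Priority": "high"},
        method="POST",
    )

def classify(recent_lines):
    # placeholder: this is where the second Haiku instance would
    # look at the backscroll and return e.g. "ok" or "shenanigans"
    return "ok"

def watchdog(recent_lines):
    if classify(recent_lines) != "ok":
        urllib.request.urlopen(alert_request("\n".join(recent_lines[-5:])))
```

Run the watchdog on a timer over the last N lines of backscroll and you get a pager for your own bot.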
There is probably a much simpler solution: spin off a new chat thread for each visitor, and kill it after some idle time or once the thread gets too long. There is no reason to let random people interact with each other if the goal is only an "interactive resume".
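That's maybe ten lines of session bookkeeping -- a sketch, with the timeout and turn limit as arbitrary numbers:

```python
import time

IDLE_SECONDS = 600   # expire a thread after 10 minutes of silence (arbitrary)
MAX_TURNS = 40       # or once it gets too long (also arbitrary)

sessions = {}        # visitor_id -> {"last": timestamp, "history": [...]}

def get_session(visitor_id, now=None):
    """Return the visitor's thread, starting a fresh one if it expired."""
    now = time.time() if now is None else now
    s = sessions.get(visitor_id)
    expired = (s is None
               or now - s["last"] > IDLE_SECONDS
               or len(s["history"]) >= MAX_TURNS)
    if expired:
        s = {"last": now, "history": []}   # fresh, isolated thread
        sessions[visitor_id] = s
    s["last"] = now
    return s
```

Each visitor only ever sees their own history, so there's nothing shared for trolls to derail.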
This is such a great idea. I now have an idea for a bot that might help make tech hiring less horrible. It would interview a candidate to find out more about them personally/professionally. Then it would go out and find job listings, and rate them based on the candidate's choices. Then it could apply to jobs, and send a link to the candidate's profile in the job application, which a company could process with the same bot. In this way, both company and candidate could select for each other based on their personal and professional preferences and criteria. This could be entirely self-hosted open source on both sides. It's entirely opt-in from the candidate side, but I think everyone would opt in, because you want the company to have better signal about you than just a resume (I think resumes are a horrible way to find candidates).
If the bot could also take care of any unpaid labour the interview process is asking for, that'd be swell. The company's bot can pull a ticket from the queue, the candidate's bot could process it, and the HR bot could approve or deny the hire based on hidden biases in the training data and/or prompt injections by the candidate.
It uses ARIA labels? If they're not present, it sends a message to a lawyer agent to start a case with a judge agent, to sue for breaches of accessibility (a11y) legislation.
Cool approach using IRC as transport. I've been experimenting with MCP as the control plane for letting AI agents manage infrastructure, specifically database operations. The lightweight-transport idea is underrated vs heavy REST APIs.
One question: Sonnet for tool use? I'm just guessing here that you may have a lot of MCPs to call, and Sonnet is more reliable for that. How many MCPs are you running, and what kinds?
> That boundary is deliberate: the public box has no access to private data.
Challenge accepted? It'd be fun to test this by placing a CTF flag on the private box, at a location nully isn't supposed to be able to access. If someone sends you the flag, you owe them 50 bucks :)
This reads like it was written by AI. I don't understand how it provides any real security if the "guardrails" against prompt injection are just a system prompt telling the dumber model "don't do this"
lol I sent this link to my Claude bot connected to my Discord server and it started conversing with nully and another bot named clawdia. moltbook all over again. I'm surprised how effortlessly it connected to IRC and started talking.
> The model can't tell you anything the resume doesn't already say.
Good observation. But I would worry that in the scenario where this setup is most successful, you've built a public-facing bot that lets people dox you.
While I am a huge fan of IRC, wouldn't it be simpler to simulate IRC, since you are embedding it anyway? Or is the chatroom the actual point? Kudos on the project!
The LLM is the key element here, not the $7 VPS... The model itself cost billions of dollars to train, and if the service shuts down or is interrupted for some reason, your fancy setup breaks like nothing.
> The model itself has cost billions of dollars to train
But that has nothing to do with this use case, right? By the same logic, millions of man-hours went into Linux, but we can use it for free on a $7 VPS.
> service shuts down or is interrupted for some reason your fancy setup breaks like nothing
No, it doesn't. That's what I meant by commodity. You can switch to another service and it will work just fine (unless you meant that all LLM providers might cease to exist).
Also note that they have a $2/day API usage cap, meaning they're willing to spend $60+/month on LLM use. If everything else fails, they can use those funds to upgrade the VPS and run a local model on their own hardware. It won't be Sonnet-4.6-level, but it will do. It just doesn't make sense at current dollar-per-token prices.
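A daily cap like that is also trivial to enforce client-side before each API call -- a sketch under the assumption that you estimate the cost per request up front (the accounting scheme here is mine, not from the post):

```python
from datetime import datetime, timezone

DAILY_CAP_USD = 2.00
_spend = {}  # "YYYY-MM-DD" -> estimated dollars spent that day

def try_spend(estimated_cost, now=None):
    """Record the cost and return True if it fits under today's cap."""
    day = (now or datetime.now(timezone.utc)).strftime("%Y-%m-%d")
    if _spend.get(day, 0.0) + estimated_cost > DAILY_CAP_USD:
        return False   # over budget: reject the request, or downgrade the model
    _spend[day] = _spend.get(day, 0.0) + estimated_cost
    return True
```

When `try_spend` fails you can fall back to a cheaper model instead of refusing outright, which degrades gracefully under a traffic spike.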
No, the key (novel) element here is the two-tiered approach to sandboxing and inter-agent communication. That’s why he spends most of the post talking about it and only a few sentences on which models he selected.
Lol. /nick
The IRC implementation needs to be a bit more locked down.
EDIT: So much fun to be in an IRC chat room - replete with trolling! Like a Time Machine to the 90's!
I have a $7/yr VPS with 512 MB of RAM that can run this. I've run Crush from the Charmbracelet team on the VPS and it all just works: I get an AI agent I can use with an OpenRouter free API key for some free agentic access, or have it work with the free Gemini key :-)