Pretty nifty. As of now, the code doesn't compile: there's some stray "span" stuff in codegen.rs[1], and it's trying to format `Warning` which doesn't implement `Display` in main.rs[2].
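For the second one, something like this sketch would make `{}` formatting work (I'm guessing at Warning's fields; the actual type in main.rs may differ):

use std::fmt;

// Hypothetical sketch: the real Warning type likely has different fields.
struct Warning {
    message: String,
}

impl fmt::Display for Warning {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "warning: {}", self.message)
    }
}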
Fixing these, it runs mostly as advertised, but it seems to assume that one-letter types are always generic parameters, so it's impossible to (for example) generate this:
struct X;
enum A {
    P(X),
    Q
}
Trying this:
(struct X)
(enum A (P X) Q)
produces this:
struct X;
enum A<P, X> { Q }
while using a multi-letter type like `String`:
(enum A (P String) Q)
produces the expected:
enum A { P(String), Q }
One way to solve this would be to always require the generics annotation and let it be empty when there are no generics, but when I tried that it did something weird.
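Another way, sketched below with made-up names (I haven't checked how codegen.rs actually tracks generics), would be to resolve a name against the item's declared generics list instead of guessing from its length:

use std::collections::HashSet;

// Hypothetical fix: treat a name as a type parameter only if the item
// actually declared it, rather than inferring from one-letter names.
fn is_generic_param(name: &str, declared: &HashSet<String>) -> bool {
    declared.contains(name)
}

fn main() {
    let declared: HashSet<String> = HashSet::from(["T".to_string()]);
    assert!(is_generic_param("T", &declared));
    assert!(!is_generic_param("X", &declared)); // X stays a concrete type
}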
The website seems to have some bugs on mobile (seen on Chrome 147.0.7727.137):
- Cannot horizontally scroll the code snippets on the homepage when they overflow. The scroll bars appear, but swiping the snippet does nothing.
- Footer links are unresponsive (the Loon, GitHub, and MIT Licence links).
- On the changelog page, scrolling makes the hamburger menu hide the release dates behind it.
- The hamburger close chevron looks misaligned (not sure if this was a deliberate choice).
I like the ubiquitous type inference. It reminds me a bit of ELSA for Emacs Lisp: https://github.com/emacs-elsa/Elsa. In particular, type-aware macros have been on my wishlist forever: there's no good reason I shouldn't be able to write, e.g., an Elisp or CL/SBCL compiler macro that specializes an operation based on its inferred type. In normal lisps, it's hard to get even the declared types.
That said, I wish that part of Loon were less coupled to the allocation model. What made you opt for mandatory manual memory management in an otherwise high-level language? And for effects?
There are two things common in language design that, honestly, strike me as unnecessary:
1. manual allocation and lifetime stacking, and
2. algebraic effects.
On 1: I think we often conflate the benefits of Rust-style mutability-xor-aliasing reference discipline with the benefits of using literal malloc and free. You can achieve the former without necessitating the latter, and I think it leads to a nicer language experience.
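A quick illustration of the separation: the borrow discipline below involves no heap at all, never mind literal malloc and free.

fn main() {
    let mut n = 0u32; // a plain stack value; no allocator in sight

    let a = &n;
    let b = &n; // any number of shared borrows may coexist
    println!("{a} {b}");

    let m = &mut n; // a unique borrow excludes all others...
    *m += 1;
    // println!("{a}"); // ...so this would not compile if uncommented
    println!("{n}");
}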
It's just not true that GC "comes with latency spikes, higher memory usage, and unpredictable pauses" in any meaningful way with modern implementations of the concept. If anything, it leads to more consistent latency (no synchronous Drop of huge trees at unpredictable times) and better memory use (because good GCs use compressed pointers and compaction).
On 2: I get algebraic effects for delimited continuations. But lately I've seen people using non-flow-magical effects for everything. If you need to talk to a database, pick a database interface and pass an object implementing the interface to the code that needs it. Effects do basically the same thing, but implicitly.
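A sketch of that explicit style in Rust (the Database trait and Postgres type here are made up for illustration):

// Hypothetical interface: the capability is a named value, not ambient.
trait Database {
    fn query(&self, sql: &str) -> String;
}

struct Postgres;

impl Database for Postgres {
    fn query(&self, sql: &str) -> String {
        format!("postgres result for {sql:?}")
    }
}

// The dependency shows up in the signature, much like an effect row,
// but the value's provenance is visible at every call site.
fn report(db: &impl Database) -> String {
    db.query("SELECT count(*) FROM users")
}

fn main() {
    println!("{}", report(&Postgres));
}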
I always saw algebraic effects as a more ergonomic alternative to functor/applicative/monad for managing I/O and otherwise impure code. If you aren't particularly concerned with that level of purity, then yeah, it's "just" an indirect way to write an interface.
I've found that in practice, people use effect systems as dynamic-extent globals, like DEFVAR-ed variables in Lisp.
"Oh, it's not a global. Globals are bad. Effects are typed and blend into the function signature. Totally different and non-bad."
No. Typing the effects doesn't help: oh, sure, in Koka I can say that my function's type signature includes the "database connection" effect. Okay, that's a type. Where does the value backing that type come from? Thin air? No, the value backing an effect comes from the innermost handler, the identity of which, in a large program, is going to be hard to figure out.
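To make this concrete, here's the same "innermost handler wins" shape as a dynamic-extent variable in Rust (all names hypothetical):

use std::cell::RefCell;

// A dynamic-extent "global", like a DEFVAR-ed special variable: the
// innermost enclosing with_db determines what current_db sees, just as
// the innermost handler backs an effect.
thread_local! {
    static DB: RefCell<Option<String>> = RefCell::new(None);
}

fn with_db<R>(url: &str, body: impl FnOnce() -> R) -> R {
    let prev = DB.with(|db| db.replace(Some(url.to_string())));
    let result = body();
    DB.with(|db| *db.borrow_mut() = prev); // restore the outer binding
    result
}

fn current_db() -> String {
    DB.with(|db| db.borrow().clone()).expect("no enclosing handler")
}

fn deep_in_the_program() {
    // Which database is this? Go find the innermost with_db.
    println!("querying {}", current_db());
}

fn main() {
    with_db("postgres://prod", || {
        with_db("sqlite://test", deep_in_the_program);
    });
}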
Like all global variables, the sorts of "effects" currently in vogue will lead to sadness at scale. Globals don't stop being bad when we call them something else: they're still bits of ambient authority that frustrate local reasoning. It's as if everyone started smoking again but called cigarettes "mist popsicles" and claimed that they didn't cause cancer.
There's no way around writing down names for the capabilities we give a program and propagating these names from one part of the program to another. Every scheme to somehow free us from this chore is just smuggling in ambient authority by another name. Ambient authority is seductive. At small scales, it's fine. Better than fine! Beautiful. Then, one day, as your program scales and its maintainership churns, you find you have no idea who implements what.
Software engineering develops antibodies against these seductions. The problem is that the antibodies are name-based, so when we dress up old, bad ideas with new names, we have to re-learn why they're bad.
P.S. You might object: "You're talking about dynamic-extent effects. What about lexically scoped effect systems? Those fix the problems with dynamic-extent effects."
Sure. Lexical effects are better. That's why every decent language already has a "lexically scoped effect system". It's called let-over-lambda, or, if you squint, an "object". We've come full circle.
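In Rust terms, that let-over-lambda version is just a closure capturing an explicitly named capability (sketch, hypothetical names):

struct Db {
    url: String,
}

// The "let": a lexical binding of the capability...
fn make_query(db: Db) -> impl Fn(&str) -> String {
    // ...and the "lambda": a closure that closes over it.
    move |sql| format!("running {sql:?} against {}", db.url)
}

fn main() {
    let query = make_query(Db { url: "postgres://localhost".into() });
    println!("{}", query("SELECT 1"));
}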
I think some comments are missing the upside of it being precisely Rust, without any new semantics. If you want a Lisp that compiles to machine code, Common Lisp can get reasonably efficient. The purpose of bringing Rust into it is to surface Rust-specific semantics -- which many people quite like!
If you already have the ability to express the grammar productions in Rust that allow for optionally specified types (e.g. variable declaration), then you have the ability to express lifetimes and the turbofish (which is just a curious way to call a generic function with an explicit type parameter). The only weird thing would be that Lisp uses the apostrophe character for something very different from Rust, but you could just pick some other way to denote lifetimes.
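(For reference, the turbofish in ordinary Rust:)

fn main() {
    // Turbofish: supplying the type parameter explicitly at the call site.
    let n = "42".parse::<i32>().unwrap();
    // Without it, the target type must be inferable from context.
    let m: i32 = "42".parse().unwrap();
    assert_eq!(n, m);
}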
Type F must be a function that's generic over any possible lifetime 'a, with a single argument that's a reference with lifetime 'a to a tuple of two numbers, and returns a reference with the same lifetime 'a to an 8-bit number.
The full code is usually something like:
fn foo<F>(callback: F) where for<'a> F: ...
Which is a generic function foo that takes an argument of type F, where F must be...
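Filled in, a complete version of that bound might look like the sketch below; I've assumed the tuple is two u8s, since the exact numeric types weren't specified.

// F must work for every lifetime 'a: take &'a (u8, u8), return &'a u8.
fn foo<F>(callback: F)
where
    for<'a> F: Fn(&'a (u8, u8)) -> &'a u8,
{
    let pair = (3, 7);
    println!("{}", callback(&pair));
}

// A plain fn item satisfies the higher-ranked bound with no fuss.
fn first(t: &(u8, u8)) -> &u8 {
    &t.0
}

fn main() {
    foo(first);
}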
It seems like this is more like writing Rust in an s-expression syntax instead of having a proper lisp dialect that compiles to Rust, which is cool I guess but not very interesting.
It's quite weird-looking for someone who's done any amount of lisp programming.
Yeah, it sort of reminds me of the microcode assembly of a few of the Lisp machines, which, while written in s-expressions, was also clearly not Lisp itself. But it could be an interesting target for some Lisp macros.
A let that defines variables with a lifetime beyond the scope of the expression? Yeah, that's really unusual. And it's not even the oddest-looking thing in the first example block of code.
So if I wanted to actually use this and I write some rust-but-lisp code and there's a compile error, will it show me a nice error message with an arrow pointing to where the error happened in my lisp code?
Can I use the amazing `rust-analyzer` LSP to get cool IDE features?
I suspect the answer is no, but these might be good further prompts to use.
Unfortunately, given the clear LLM basis of this project, s-expressions aren't a great choice. I've found coding agents struggle really hard with s-expression parentheses matching.
Much better to give them something more M-expr styled; I think a grammar that is LL(1) probably helps in that regard.
Basically, the more you can piggyback on the training-data depth for Algol-style and Pythonic languages, the better.
That's absolutely not true: I've vibecoded an app for myself in CL, and Opus/Sonnet had zero problems with parens and types. Add an MCP to work with the REPL, and it's much smoother than Go in my experience.
That has definitely not been my experience as of late. I have produced multiple largeish Clojure projects with AI that have been perfectly formatted and functional. Perhaps you were using an older or smaller model? I am admittedly using Claude with higher-end models and mid-to-high effort, but it has been working great for me for months at this point.
Nope, but to be fair, when you're working on your own novel s-exprs you don't have an LSP to guide the coding agent. I imagine it works a lot better in the context of a known and well-understood language environment like Clojure, CL, Scheme, etc. The other option would be to write an LSP in a non-s-expr language to ensure that no turn can end with mismatched parens, for example.
Greenspun's tenth rule was formulated in a time before things like first-class functions were commonplace in industrial languages. Rust supports not just functional programming idioms but outright Scheme-style macros; it's out of scope for Greenspun's.
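For instance, declarative macros in Rust are hygienic in the same spirit as Scheme's:

// A macro-introduced binding can't capture or be captured by the
// caller's identifiers.
macro_rules! swap_vals {
    ($a:ident, $b:ident) => {{
        let tmp = $a; // this `tmp` is hygienic
        $a = $b;
        $b = tmp;
    }};
}

fn main() {
    // The caller has its own `tmp`, and nothing clashes.
    let (mut tmp, mut y) = (1, 2);
    swap_vals!(tmp, y);
    assert_eq!((tmp, y), (2, 1));
}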
Yes, but you could do the same by transforming Rust's ASTs. The only downside is that your input format is different from the format you are transforming. But the upside is that readability is much improved, which matters because code is typically read far more often than it is written.
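A minimal sketch of that route, using the syn and quote crates (the round-trip here is trivial on purpose; a real transform would rewrite the AST in between):

// [dependencies]
// syn = { version = "2", features = ["full"] }
// quote = "1"

use quote::quote;
use syn::ItemEnum;

fn main() {
    // Parse Rust source into a typed AST...
    let parsed: ItemEnum =
        syn::parse_str("enum A { P(String), Q }").expect("valid enum");
    // ...transform `parsed` here, then print it back out as Rust tokens.
    println!("{}", quote!(#parsed));
}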
How do you change the syntax without breaking backward compatibility? I guess you could change the names of most key functions between releases. But to stay compatible with Rust you would need to make breaking changes every release.
>S-expression syntax parsers are not hard to write.
I'm not sure I quite understand the point of your comment.
Are you implying that LLMs should be used for code that's very hard to write? I feel like the best use of LLMs is to automate the easy stuff so that I can focus on the hard stuff.
To everyone shaming the project for "not implementing enough": you can definitely help me with it.
To everyone shaming the project for being "LLM slop": sure, but that's the reason something like this can exist in the first place. The point isn't to be a finished, production-ready product. The point is to be an interesting work, and just a sly bit silly.
Scheme already has hygienic macros; I don't get why you'd vibecode a worse (less battle-tested, LLM-generated) replacement. I'm not sure why this hit the front page, to be honest, because it doesn't seem noteworthy or interesting. (Anyone and their mother can vibecode something like this in eight hours.)