Haskell is, admittedly, probably the most powerful widely used (or even somewhat widely used) language for doing this, but the general pattern works really well in Rust and TypeScript too, and it is one of my very favorite tools for writing better code.
I also really like doing things like User -> LoggedInUser -> AccessControlledLoggedInUser to prevent the kind of really obvious AuthZ bugs people make in web applications time and time again.
I've found this pattern to be massively underutilized in industry.
https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
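The `User -> LoggedInUser -> AccessControlledLoggedInUser` chain mentioned above can be sketched with branded types in TypeScript. All the names below are hypothetical illustrations (not anyone's production API), and the checks are stubbed out with booleans:

```typescript
// Each brand can only be minted by the function that performed the check,
// so a handler that demands the strongest type can't be handed a bare User.
type User = { id: string };
type LoggedInUser = User & { readonly __auth: "logged-in" };
type AccessControlledLoggedInUser = LoggedInUser & { readonly __authz: "access-checked" };

// Only this function can produce a LoggedInUser (session check is a stub here).
function requireLogin(u: User, sessionValid: boolean): LoggedInUser | null {
  return sessionValid ? (u as LoggedInUser) : null;
}

// Only this function can produce an AccessControlledLoggedInUser.
function requireAccess(u: LoggedInUser, hasRole: boolean): AccessControlledLoggedInUser | null {
  return hasRole ? (u as AccessControlledLoggedInUser) : null;
}

// Handlers that need AuthZ take the strongest type; passing a plain
// User here is a compile-time error, which is the whole point.
function deleteAccount(u: AccessControlledLoggedInUser): string {
  return `deleted ${u.id}`;
}
```

The chain forces the AuthN and AuthZ checks to happen in order before the sensitive operation can even type-check.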
You do not need Haskell for that; e.g., it works in Python too (via pydantic or attrs data classes).
My experience is that OCaml is more powerful than Rust for enforcing this sort of type safety, because you have GADTs, which give you more expressive power, and polymorphic variants and object types (record row types), which give you more convenience. And the module system and functors, of course.
You also avoid some of the abstraction limitations and difficulties that come from the Rust borrow checker in places where garbage collection is just fine.
type NewType<T, Name> = T & { readonly __brand: Name };
type Qwert = NewType<string>
I don't really see a big problem here? I'm not sure if `NewType` in your comment is supposed to stand in for a specific newtype (in which case it probably doesn't need to be generic[1]) or if it's supposed to be a general-purpose type constructor for any newtype (in which case it should take a second type parameter to let me distinguish e.g. `EmailAddress` from `Password`[2]). The use of `unique symbol`s is also only really necessary if you want to keep the brand private, to force users to go through a validation function or whatnot; otherwise you can just use string literal types.
I agree these incantations aren't big problems (it all falls out naturally from knowledge of TypeScript's type system, and can be abstracted away as per my comment in [2]), but the fact that you goofed in the very comment where you were trying to make that point is causing me to second-guess myself.
[0]: https://github.com/microsoft/TypeScript/blob/v6.0.3/src/lib/...
There are helper libraries to ease this (zod supports branded types, I think?), but I guess my general point is that while typescript might give you the ingredients you need to implement type safety in cases like this if you try really hard and remember all your rules everywhere, it doesn't come naturally so it's hard to maintain at scale.
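For reference, the two-parameter `NewType` discussed above might look like this. This is a sketch, not a library's API, and `parseEmail`'s validation is deliberately simplistic:

```typescript
// Generic newtype constructor: the second parameter distinguishes brands,
// so EmailAddress and Password are incompatible despite both wrapping string.
type NewType<T, Name extends string> = T & { readonly __brand: Name };

type EmailAddress = NewType<string, "EmailAddress">;
type Password = NewType<string, "Password">;

// With the brand kept out of reach, a validation function like this is
// the only sanctioned way to obtain an EmailAddress.
function parseEmail(s: string): EmailAddress | null {
  return s.includes("@") ? (s as EmailAddress) : null;
}
```

A function taking `EmailAddress` then rejects a raw `string` or a `Password` at compile time, even though all three are plain strings at runtime.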
Somehow, it feels like a better solution than these complicated type systems. Does any other language do this outside BEAM?
But I did want to add something the article also touches on: types can be not only about ensuring safety or correctness at runtime, but also about representing knowledge by encoding the theory of how the code is supposed to work as far as is practical, in a way that is durable as contributors come and go from a codebase.
Admittedly this can come at the cost of making it slower to experiment on or evolve the code, so you have to think about how strongly you want to enforce something, to avoid the rigidity being more painful than valuable. But it's generally a win for helping someone new to a codebase understand it before they change it.
Imagine you have to distinguish between unescaped and escaped strings for security purposes. Even with a dynamically typed language, you can keep escaped strings as an Escaped class, with escape(str)->Escaped and dangerouslyAssumeEscaped(str)->Escaped functions (or static methods). There's a performance cost to this, so that's a tradeoff you have to weigh, but it is possible.
Another way of doing this is Application Hungarian[1], though that relies on the programmer more than it does on the compiler.
[1] https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...
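The `Escaped` class idea can be sketched in TypeScript. The replacement rules in `escape` below are illustrative only, not a complete escaping implementation, and the names are hypothetical:

```typescript
// A wrapper whose private constructor forces all construction through
// the two named entry points, exactly as described above.
class Escaped {
  private constructor(readonly value: string) {}

  // The checked path: actually escapes the string.
  static escape(s: string): Escaped {
    const escaped = s
      .replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;");
    return new Escaped(escaped);
  }

  // The loudly-named unchecked path for strings already known to be safe.
  static dangerouslyAssumeEscaped(s: string): Escaped {
    return new Escaped(s);
  }
}

// Sinks accept only Escaped, so a raw string can't sneak through.
function render(e: Escaped): string {
  return e.value;
}
```

At runtime this costs an allocation per wrapped string, which is the performance trade-off the comment mentions.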
That part is (de facto) required for dynamically typed languages, but not for statically typed ones where the newtype constructor/deconstructor can be elided at compile time. Rust and C++ especially both do the latter by having true value types available for wrappers that evaporate into zero extra machine code.
But then just this moment I wondered: do any major runtimes using models with no static type info manage to do full newtype elision in the JIT and only box on the deopt path? What about for models with some static type info but no value types, like Java? (Java's model would imply trickiness around mutability, but it might be possible to detect the easy cases still.) I don't remember any, but it could've shown up when I wasn't looking.
As for other JVM languages like Kotlin and Scala, they have basically what "newtype" is, but it can only be completely erased in the byte code when they have a single field.
What I'm imagining for my curiosity about the dynamic case would look more like “JS/Lua/whatever engine detects that in frob(x) calls, x is always shaped like { foo: ‹string› } and its object identity is unused, so it replaces the calling convention for frob internally, then propagates that to any further callers”, and it might do the same thing when storing one of those in fields of other objects of known shapes, etc. until eventually it hits a boundary where the constraint isn't known to hold and has to be ready to materialize the wrapper object there.
Kotlin and Scala sound like they're doing the Rust/C++ thing at the bytecode level, if it's being “erased”, so just the static case again but with different concrete levels for machine vs language.
You can do it in Assembly. That doesn't mean it's cost effective.
The Confucian philosophy that people act like water coming down a mountain, seeking the path of least resistance, comes into play.
Haskell, OCaml, F#, and their ilk can yield beautiful natural domain languages where using the types wrong is cost prohibitive. In languages without those guarantees, every developer needs discipline to avoid shortcuts, review burden increases, and time-pressure discussions get rehashed.
And of course Rust and TypeScript were heavily influenced by Haskell... they just don't mention it and call things by different names, to avoid the "monads are scary, I need to write a tutorial" effect. Though it's less about monads and more about things like type classes.
Imitation is the sincerest form of flattery.
Haskell type classes are not classes (like Java or PHP classes); they are comparable to Rust traits -- which are different from PHP traits which are comparable to Java/C# interfaces (with default impls; if you just want contracts you have... PHP interfaces).
A fundamental difference is that you can instantiate/implement a type class (or Rust trait) for any* type, compared to interfaces where each class declares the interfaces it implements. You can therefore create generic (forall) instances, higher kinded type classes, etc.
Actually, in modern Java you can simulate the type-class approach with a mix of interfaces and default method implementations.
In C# you can get a more straightforward version of the experience with the extension types introduced in C# 13.
Then we have yet another way to approach type classes in Scala, with traits and implicits.
And so on, as I haven't yet run out of examples.
On our last product, we decided to start switching from TypeScript to Rust on the backend because we got tired of crashes. I consider that one of the greatest technical mistakes I've ever made, as our productivity slowed massively. I'll share just two time-draining issues that only occur in Rust: (1) Writing higher-order functions (e.g. a function to open a database connection, do something, and then close it -- yes, I know you can use RAII for this particular example), which is trivial in Haskell and TypeScript and JavaScript and C++ and PHP, turned out to be so impossible in Rust [even after asking Rust-expert friends for help] that I learned to just give up and never try, though it sometimes worked to write a macro instead. (2) It happened many times that I would attempt a refactoring, spend all day fixing type errors, finally get to the top-level file, get a type error that was actually caused somewhere else by basic parts of the design, and conclude that the entire refactoring I had attempted was impossible and I needed to revert everything.
On top of that, Rust is the only modern language I can name where using a value by its interface instead of its concrete type lies somewhere between advanced and impossible, depending on what exactly you're doing.
I came away concluding that application code (as opposed to systems or library code) should, to a first approximation, never be written in Rust.
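For concreteness, the open-use-close higher-order function the comment above calls trivial in TypeScript looks roughly like this. `Connection`, `connect`, and `close` are hypothetical stand-ins for a real database driver:

```typescript
// A toy connection type standing in for a real driver handle.
type Connection = { open: boolean };

function connect(): Connection {
  return { open: true };
}

function close(conn: Connection): void {
  conn.open = false;
}

// The higher-order function: the callback borrows the connection,
// and cleanup runs even if the callback throws.
function withConnection<T>(body: (conn: Connection) => T): T {
  const conn = connect();
  try {
    return body(conn);
  } finally {
    close(conn);
  }
}
```

Usage is just `withConnection(conn => doQuery(conn))`; the caller can never forget to close, and the pattern composes with any callback.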
However I share your conclusion, outside scenarios where having automated resource management as the main approach is either technically impossible, or a waste of time trying to change pervasive culture, I don't see much need for Rust.
In fact those that write comments about wanting a Rust but without borrow checker, the answer already exists.
Way back as an undergrad in 2011, I contributed to Plaid, a JVM language whose main feature is based on affine and linear types. I'm one of the very few people in the world who knew what borrowing is before Rust had it. So I know first-hand that borrow-checking is perfectly compatible with garbage collection.
This is also not strange for those in the Rust community with type systems experience, hence the Roadmap 2026 proposals for a more ergonomic experience.
Thus we have Linear Haskell, Swift 6 ownership, D ownership, Koka, Hylo, Chapel, OxCaml, Scala Capabilities, Ada/SPARK proofs, Idris, F*, Dafny,....
(2) In such situations the compiler (type system or borrow checker) is telling you that what you wanted to do has hidden bugs, and therefore refuses to compile. Usually that's a good thing.
(3) &dyn Trait
(2) No, it stems from a compiler limitation (imposed in large part by the need for static memory layout), not because there's anything intrinsically buggy about doing this.
(3) Look up "dyn-compatibility", for the largest, but not the only, problem with doing this.
Aside from having vibes of "I've chosen to get hit weekly in the face with a baseball bat, but have learned to like it, and so should you" it's also seldom true.
All three of these examples are also quite easy to do with C and C++. It's not about garbage collection.
The only other language that I think gets close to Rust's ergonomics is Kotlin, but it suffers from having too many possibilities for abstraction.
In my line of work we don't do web servers from scratch; we use Lego pieces, like with enterprise integrations.
Think Sitecore, Dynamics, SharePoint, Optimizely, Contentful, SAP, MongoDB, Stripe, PayPal, Adobe, SQL Server, Oracle, DB2, ...
Axum offers very little over existing .NET, Java, nodejs SDKs provided by those vendors.
The things I found quite difficult or impossible in Rust were, to me, such basic patterns for modularity and removing duplication that it's really shocking these complaints are not more common.
I currently have but two hypotheses for why.
First, the second problem I mentioned only comes from using tokio, which causes your top-level program to secretly be using a defunctionalized continuation data type, derived from exactly where in other files you put your `await`s, that might not be `Send`. If you're not using tokio, you won't experience that issue.
Second... I was kinda told to just give up on deduplication and have lots of copy+pasted code. This raises the very uncomfortable hypothesis that Rust aficionados are some combination of people who came to Rust early and never learned traditional software design and don't know what they're missing, and people who were raised on traditional good software engineering but then got hit with Rust's metaphorical baseball bat of lack-of-modularity over and over until they got used to being hit with a baseball bat as a normal pain of life.
I don't like either of these explanations (esp. with tokio seeming quite dominant), so I'm awaiting an explanation that makes more sense. https://xkcd.com/3210/
Many things are plainly not permitted, either because the borrow-checker isn't clever enough, or the pattern is unsafe (without garbage collection and so on).
Many functional/Haskell patterns simply can not be translated directly to Rust.
A deeply-baked assumption of Rust is that your memory layout is static. Dynamic memory layout is perfectly compatible with manual memory management, but Rust does not readily support it because of its demands for static memory layout.
A very easy place to see this is the difference in decorator types between Rust and other languages like Java. Java's legacy File/reader API has you write things like `new PrintWriter(new BufferedWriter(new FileWriter("foo.txt")))`, where each layer adds some functionality to the base layer. The resulting value has principal type `PrintWriter` and can be used through the `Writer` interface.
The equivalent code in Rust would give you a value of type `PrintWriter<BufferedWriter<FileWriter>>`, which can only be passed to functions that expect exactly that type and not, say, a `PrintWriter<BufferedWriter<StringStream>>`. You would solve this by using a generic function that takes a `T: Writer` parameter and gets compiled separately for every use site, thus contributing to Rust's infamous slow build times.
It would be perfectly sane, and desirable for application code, to be able to pass around a PrintWriter value as an owned pointer to a PrintWriter struct which contains an owned pointer to a BufferedWriter struct which contains an owned pointer to a FileWriter struct. You could even have each pointer actually be to a Writer value of unknown size, and thus recover modularity.
In Rust, there is sometimes a painful and very fragile way to do this: have each writer type contain a `Box<dyn Writer>`, effectively the same as the Java solution above. This works, except that, if one day you want to add a method to the `Writer` trait that breaks dyn-compatibility, you will no longer be able to do this and will need to rewrite all code that uses this type.
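For contrast with the Rust situation described above, here is a minimal sketch of how a structurally typed language like TypeScript lets a decorator stack hide behind a single interface (all names are hypothetical):

```typescript
// The common interface every layer exposes.
interface Writer {
  write(s: string): void;
}

// A base sink that just accumulates output.
class StringSink implements Writer {
  buffer = "";
  write(s: string): void {
    this.buffer += s;
  }
}

// A decorator: wraps any Writer and is itself just a Writer to callers,
// so the concrete stack never leaks into function signatures.
class Uppercased implements Writer {
  constructor(private inner: Writer) {}
  write(s: string): void {
    this.inner.write(s.toUpperCase());
  }
}

// Callers see only Writer, regardless of how deep the stack is.
function greet(w: Writer): void {
  w.write("hello");
}
```

This is the "principal type `PrintWriter`, used through the `Writer` interface" property of the Java example, without per-call-site monomorphization.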
This is definitely not the case and is unnecessarily insulting.
The truth is that some things are harder in Rust but a) often those things are best avoided anyway (e.g. callbacks), and b) it's worth the trade-off because of the other good things it allows.
Surely as a Haskell user of all things you must understand that sometimes making things harder is worth the trade-off. Yay, everything is pure! Great for many reasons. Now how do I add logging to this deeply nested function?
I know that it's insulting! And it doesn't make sense, because I generally think Rust programmers are smart people. But right now, it's the only explanation I've got, so it is alas necessarily insulting. So please, please, please give me a better explanation that actually makes sense.
> The truth is that some things are harder in Rust but a) often those things are best avoided anyway (e.g. callbacks), and b) it's worth the trade-off because of the other good things it allows.
This sounds like the seeds of a better explanation, but it needs a lot more to actually suffice. E.g.: why are callbacks best avoided anyway, when they're virtually required for a large number of important programming patterns? (In more technical language: they're effectively the only way to eliminate duplication in non-leaf-expressions. In even more technical language: they're the way to do second-order anti-unification.)
> Surely as a Haskell user of all things you must understand that sometimes making things harder is worth the trade-off. Yay, everything is pure! Great for many reasons. Now how do I add logging to this deeply nested function?
And this is a great illustration of the difference. First, you will seldom find Haskell programmers trying to argue that, actually, things like deeply-nested logging that everyone wants are actually "best avoided anyway." Second, you'll actually get a solution if you ask about them -- in this case, to either use MTL-style, to use a fixed alias for your monad stack, or that unsafePerformIO isn't actually that bad.
BTW, similar to my unpleasant conclusion for Rust above, I have another unpleasant conclusion for Haskell: Haskell is incredible for medium-sized programs, but it has its own missing modularity features that make it non-ideal for large programs (e.g.: >50k lines). But this is a much smaller problem than it sounds because Haskell is so compact that, while many projects can be huge, very few individual codebases will need to approach that size.
Look up "callback hell". Basically they encourage spaghetti.
> you'll actually get a solution if you ask about them
You got solutions to your problems didn't you? Macros are a perfectly reasonable thing to use in Rust, even if they are best avoided where possible. Exactly like unsafePerformIO.
If you were expecting Rust to work perfectly in every situation... well it doesn't. GUI programming in particular is still awkward, and async Rust has more footguns than anyone is happy with.
Despite that it's still probably the best language we have for a surprisingly large range of domains.
I think they meant that in Haskell it is very easy to write code that is unreadable to outsiders.
As a customer of Mercury, it's truly one of the critical companies in my toolkit, and I can't help but feel that their choice of Haskell made their progress, development, and overall journey that much better. I realize you can make this argument with most languages, and it's not to say that an FP language like Haskell is a recipe for success, but this intentional decision, particularly pre-"vibe coding" and the LLM era, seems particularly prescient, combined of course with the engineering culture detailed in the post.
> The problem is that we cannot trust code we cannot instrument. If a third-party binding makes HTTP calls through concrete functions, we have no way to add tracing, no way to inject timeouts tuned to our SLOs, no way to simulate partner outages in testing, and no way to explain the 400ms gap in a trace except by squinting at it and developing theories. So we write our own. More work upfront, but the clients we write are observable by construction, because we built them that way from the start.
Some people call this "high-level," too.
I will say, though, that 2 million lines of code is much less code than it sounds like at first glance, especially for a company in a highly-regulated space like finance, plus a few years of progress.
Absolutely not an objective metric, but I have found that Haskell just has a different "aspect ratio": line count may be somewhat lower, but the word count is largely the same as in more imperative OOP languages.
a) Haskell's reputation for terseness partially comes from its overrepresentation in academic / category-theoretic circles, where it's typically fine to say things like `St M -> C T`. But for real software it's a lot more useful to say things like `TransactionState Debit -> Verified Transaction` etc etc.
b) The other part of Haskell's terse reputation is cultural, something extending back to LISP: people being way too clever about saving lines with inscrutable tricks or macros. I imagine that stuff is discouraged at a finance company like Mercury in favor of clarity and readability: e.g. perhaps the linter makes you split monadic stuff into pedantic multiline do expressions even if you can do it in a one-liner with >> and >>=.
> This is not a complaint about volunteer maintainers. It is simply one of the ambient risks of building serious systems on a smaller ecosystem.
And so instead of paying the lib authors who already have domain expertise and know their codebase, they chose to rewrite it from scratch/fork without contributing back. So classic.
it's so easy to scout when a company has this haskell philosophy. either by the interviewers themselves or by the bloggers they hired to guide their team.
the trick? i just..lie. "oh yeah i'm super pragmatic. i'm not hardline about haskell. i don't think you should be fancy." see how easy it is? i am suddenly hired and got a fat raise. and if the company moves off haskell? i quit immediately, get another haskell job, and talk to my former coworkers on the way out to embolden them to do the same.
it helps that i have the "real world" stuff on my resume.
i rode the 2010s job hopping ride as a haskeller doing this. each time a 20-30% raise. and i get to still write haskell. and i am always a top percentile haskeller at the company so i can code however tf i want lolol. suddenly - singletons, Generics, HKD!
so here's to earning another million bucks "noodling around with Haskell" :cheers:
Congrats I guess? Not sure where the abuse/guilt comes from.
I've made all my money over a decade in Haskell. Millions. Paid for all my stuff.
It all started with a recruiter on LinkedIn
I've been this person, and I've worked with this kind of person, and been the victim of this kind of person. They love language X, or framework Y, and are convinced that so many problems in front of them are shaped in a way that would be solved through the application of it.
They now have a hammer and they go searching for nails to hit with it.
I've been in shops that used Haskell, and it was... fine? It's I guess nice for people who enjoy writing in it -- I prefer other FP languages personally. I like nerdy things like that and used to hang out on Lambda the Ultimate or whatever. But I don't think there's any real secret powers in Haskell or most other tools. I've been burned too many times by that kind of approach.
If only cross-compilation became easy, so that I could develop on my M-series Macs and deploy to x64/AMD64 Linux servers.
>statically linking Haskell binaries is quite a challenge
>build requirements really slow down the process. I have to use dockers to help cache dependencies and avoid recompiling things that have not changed, but it is still slow and puts out large binaries.
Also, the Docker-based deployment takes a lot of time as it needs to recompile each module. While you can cache some part of it, it's still slow.
Meanwhile, with Go it's painless. And I am not the only one having this issue:
https://news.ycombinator.com/item?id=47957624#47972671
Such a shame: Haskell is a beautiful and performant language, but builds are still slow.
The Mercury site also looks way better than most other banks I have ever used (load speed is also very good). At the risk of seeming like a shill (I'm not), I'm tempted to try them out.
what does that mean?
In languages with option types, if you want to weaken the type requirement for a function parameter, or strengthen the guarantee for a return type, you have to change the code at every call site. E.g., if you have a function which you can improve by changing
- a parameter Foo to Option<Foo> or
- a return value Option<Bar> to Bar
you would have to change the code at all call sites. Which could be anything between annoying and practically impossible.
In languages that solve null pointer errors instead with untagged union types (like TypeScript or Scala 3), this problem doesn't occur. So you can change
- a parameter Foo to Foo | Null or
- a return value Bar | Null to Bar
and all call sites of the function can remain unchanged, since the type system knows that weakening the type requirement for a parameter, or strengthening the promise for a return type, is a safe change that can't cause a type error.
So yes, option types do avoid null pointer exceptions, but they solve the issue in a very suboptimal way.
If you were calling a function which might return null (String | Null), you will already have null handling at the call site, but if you now change that function such that it never returns null (String), you still have the (now unnecessary) null handling, but this doesn't hurt and you don't have to change anything at the call site.
Likewise, if you were passing a String to a function that doesn't accept null (String), the call site already made sure that the parameter isn't null, and if you change the function so that it does now accept null (String | Null), again nothing needs to be changed at the call site.
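A minimal TypeScript sketch of the parameter-weakening case just described (assuming `strictNullChecks`; the function and its name are hypothetical):

```typescript
// Before the change, the signature was:
//   function greeting(name: string): string
// and every call site passed a plain string.

// After weakening the parameter to accept null, those old call sites
// still type-check with no edits, because string is a subtype of string | null.
function greeting(name: string | null): string {
  return name === null ? "hello, stranger" : `hello, ${name}`;
}

// An old call site, unchanged:
const msg = greeting("ada");
```

The return-type direction works symmetrically: narrowing `string | null` to `string` leaves callers' existing null handling valid, just redundant.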
I must admit I've never had this problem in application development. In fact, I do want to change my callers, because strengthening the contract is an opportunity to simplify the call sites: they no longer have to handle the optionality. The change might carry some semantic meaning too: why are you getting x instead of Maybe x all of a sudden? Are there some other things you should reconsider in the callers? I can see how it could be useful in library development, but there are also patterns to account for this that are idiomatic to Haskell.
I don't think Clojure has untagged union types like TypeScript or Scala.
> but null is a high price to pay for this convenience.
Why would it be? Untagged unions prevent null pointer errors just as much as option types do, only they don't have the discussed disadvantages of option types.
That's literally what they explain in the rest of the comment.
Actually, I think you can just change the concrete argument `Foo` to a type-class constraint in Haskell as well. The function would become something like `foo :: ToMaybeFoo a => a -> .. ->`, and you would implement a `ToMaybeFoo` instance for `Foo` and `Maybe Foo`.
Agree that this is more involved than typescript, but you get to keep `null` away from your code...
> but you get to keep `null` away from your code...
I don't think this would be desirable once we have eliminated null pointer exceptions with untagged unions.
It is quite simple. Instead of accepting a concrete type `Foo`, the function is changed to accept any type that can be converted to `Option<Foo>`. Since both `Foo` and `Option<Foo>` can be converted to `Option<Foo>`, the existing call sites that pass a `Foo` do not require changing.
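A rough TypeScript analogue of that idea (Haskell would use a type-class instance; `Option` and `toOption` here are hypothetical helpers, not a library API):

```typescript
// A minimal Option type.
type Option<T> = { some: true; value: T } | { some: false };

// Normalizer that accepts either a plain value or an Option,
// playing the role of the ToMaybeFoo conversion described above.
function toOption<T>(x: T | Option<T>): Option<T> {
  if (typeof x === "object" && x !== null && "some" in x) {
    return x as Option<T>;
  }
  return { some: true, value: x as T };
}

// The parameter widens from number to number | Option<number>,
// so existing call sites that pass a plain number keep compiling.
function describe(x: number | Option<number>): string {
  const opt = toOption<number>(x);
  return opt.some ? `got ${opt.value}` : "nothing";
}
```

Note the structural check in `toOption` is only safe because the plain values here are primitives; an object type with its own `some` field would need a real tagged encoding.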
>A couple million lines of Haskell, maintained by people who learned the language on the job, at a company that moves huge amounts of money? The conventional wisdom says this should be a disaster, but surprisingly, it isn't. The system we've built has worked well for years, through hypergrowth, through the SVB crisis that sent $2 billion in new deposits our way in five days, through regulatory examinations, through all the ordinary and extraordinary things that happen to a financial system at scale.
This one is quite telling. Do people have counter examples?
Obviously Mercury is successful, and obviously Haskell is how they did it. So it's essential to their success. Would it be instrumental to anyone else's anywhere else doing anything else? Can't possibly know, I don't think.
You can still compare lines of code and bug rate over the same period of time.
Being able to minimize boilerplate and have strong refactoring and bug resistant types is a huge edge.
The only problem is their ecosystems are limited so you might spend more time than you like implementing an API or binding a system library.
Try a better programming language next time, dagnabbit!!!
(There will be downvotes I suppose. More lines of code the better?)
I’ve been using Mercury for 5 years. In that time, I’ve been able to wire transfer money without having to worry it might disappear (functionally impossible at certain other banks), created hundreds of virtual debit cards each with their own limit and pulling from different accounts, created dozens of accounts (a “place to put money”) named by function (each of my household utilities gets its own account, with an automatic rule to pull in money whenever it gets paid out), and… well, I think that covers everything.
This has given me unprecedented insight into my financial life. I know exactly how much I spend on groceries, on each utility, and on entertainment. I can project ahead and get a burn rate for my household. And my ex wife uses it too, on the same login, which is as easy as “make an account named with her first name” and a corresponding virtual debit card.
I’m convinced the only reason people don’t use Mercury is that they don’t know what they’re missing.
You have to pay for personal banking (a couple hundred a year iirc), but the business banking is free. If you want to try them out, you can start an LLC for a few dollars (at least in Missouri) and get overnight access to Mercury. All that’s required is your EIN.
They’ve been one of the single best products I’ve ever used. The sole wrinkle was when they canceled all their existing virtual cards due to reasons, which threw my recurring billing into chaos. But every great company is allowed at least one mega annoyance, and that one was a blip.
If you’re wondering whether to try them out, the answer is yes, and I’m excited for you to discover how cool it is. https://www.mercury.com
Very well could be true because I had no idea who or what they are.
Do they have strong low level automation support for the customer programmatically even for personal accounts? I use ledger for plaintext accounting for both personal and business and sync of data is slightly annoying, perhaps Mercury’s products solve that trivially?
I made this to solve it https://sras.me/accounts/
Feel free to use it as it stores data on your browser's local storage only. For syncing between devices, you would be able to use Google firebase's free tier and export your accounts (after compressing and encrypting) there and import from another device. Let me know if you want to try it..
I thought that was the whole point of banks - does money randomly disappear when people do wire transfers?