> For the uninitiated, Linear is a project management tool that feels impossibly fast. Click an issue and it opens instantly. Update a status and watch in a second browser: it updates almost as fast as the source. No loading states, no page refreshes, just instant interactions.
How garbage has the web become when a low-latency click action qualifies as "impossibly fast"? This is ridiculous.
I also winced at "impossibly fast" and realized it must refer to some technical perspective that is lost on most users. I'm not a frontend dev, but I use Linear, and I'd say I didn't notice the speed; it seems to work about the same as any other web app. I don't doubt it's got cool optimizations, but I think they're lost on most people who use it. (I don't mean to say optimization isn't cool.)
A web request to a data center, even with a very fast backend server, will struggle to beat 8ms (a 120Hz display) or even 16ms (a 60Hz display), the budget for painting a navigation in the next frame. You need to have the data local to the device, and ideally already in memory, to hit 8ms navigation.
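To make the arithmetic explicit, here is a small sketch (function names and example figures are mine) of the frame-budget math behind that claim:

```typescript
// Frame budget: time between two displayed frames at a given refresh rate.
function frameBudgetMs(refreshHz: number): number {
  return 1000 / refreshHz;
}

// Can a network round trip plus server time fit inside one frame?
function fitsInFrame(rttMs: number, serverMs: number, refreshHz: number): boolean {
  return rttMs + serverMs <= frameBudgetMs(refreshHz);
}

console.log(frameBudgetMs(120).toFixed(1)); // "8.3" ms at 120 Hz
console.log(frameBudgetMs(60).toFixed(1));  // "16.7" ms at 60 Hz

// Even a modest 20 ms RTT plus a 5 ms handler blows both budgets,
// while a local in-memory read (~0 ms) leaves the whole frame for rendering.
console.log(fitsInFrame(20, 5, 60));  // false
console.log(fitsInFrame(0, 0, 120)); // true
```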
This is not the point; other numbers matter more than yours.
In 2005 we wrote entire games for browsers without any frontend framework (jQuery hadn't been invented yet) and managed to generate responses in under 80ms in PHP. Most users had their first bytes in 200ms and it felt instant to them, because browsers are incredibly fast when treated right.
So the Internet was indeed much faster then than it is now. Just look at GitHub. They used to be fast. Now they rewrite their frontend in React and it feels sluggish and slow.
> Now they rewrite their frontend in React and it feels sluggish and slow.
And they decided to drop legacy features such as <a> tags and broke browser navigation in their new code viewer. Right-clicking a file to open it in a new tab doesn't work.
Unless you are running some really complicated globally distributed backend, your roundtrip will always be higher than 80ms for all users outside your immediate geographical area. And the techniques to "fix" this usually only mitigate the problem in read scenarios.
The techniques Linear uses are not so much about backend performance and can be applicable for any client-server setup really. Not a JS/web specific problem.
My take is that a performant backend gets you so much runway that you can reduce a lot of complexity in the frontend. And yes, sometimes that means having globally distributed databases.
But the industry is going the other way: building frontends that try to hide slow backends, and in doing so handling so much state (and visual fluff) that they get fatter and slower every day.
> Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area.
The bottleneck is not the roundtrip time. It is the bloated and inefficient frontend frameworks, and the insane architectures built around them.
This is not magic. It's using standard web technologies (SSE), and a fast and efficient event processing system (NATS), all in a fraction of the size and complexity of modern web frameworks and stacks.
Sure, we can say that this is an ideal scenario, that the server is geographically close, and that we can't escape the laws of physics, but there's a world of difference between a web UI updating in even 200ms and the abysmal state of most modern web apps. The UX can be vastly improved by addressing the source of the bottleneck, starting by rethinking how web apps are built and deployed from first principles, which is what Datastar does.
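For readers unfamiliar with SSE: it really is just a long-lived HTTP response in a simple text format. Here is a minimal Node sketch (not Datastar's actual code; the helper name and the timer driving events are illustrative, where a real server would be fed by something like a NATS subscription):

```typescript
import http from "node:http";

// Serialize one server-sent event per the text/event-stream wire format:
// an "event:" line, one "data:" line per payload line, and a blank line to end.
function formatSSE(event: string, data: string): string {
  const dataLines = data.split("\n").map((l) => `data: ${l}`).join("\n");
  return `event: ${event}\n${dataLines}\n\n`;
}

const server = http.createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  // A timer stands in for incoming events from a message bus.
  const timer = setInterval(() => {
    res.write(formatSSE("tick", JSON.stringify({ now: Date.now() })));
  }, 1000);
  req.on("close", () => clearInterval(timer));
});
// server.listen(8080);
```

On the client, a plain `EventSource` subscribes to this stream; no framework required.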
Actually, if you live near a city, the edge network is a 6ms RTT away, i.e. 3ms in each direction. So if, say, a virtual-scroll frontend is windowing over a server-side array retained in memory, you can get there and back over a websocket, inclusive of the windowing, stream records in and out of the DOM at the edges of the viewport, and paint the frame, all within the 8ms 120Hz frame budget, with the device idle and only the visible result set in client memory. That's a 120Hz network. Even if you don't live near a city, you can probably still hit 60Hz. It is not 2005 anymore. We have massively multiplayer video games and competitive multiplayer shooters, and we can render them in the cloud now. Linear is office software, not e-sports; we're not running it on the subway or in Africa. And AI happens in the cloud; Linear's website lead text is about agents.
I can't help but feel this is missing the point. Next-refresh click latency is a fantastic ideal; we're just not even close to that.
For me, on the web today, the click feedback for a large website like YouTube is 2 seconds for the first change and 4 seconds for content display. 4000 milliseconds. And I'm not even on some bad connection in Africa; this is a gigabit connection with 12ms of latency according to fast.com.
If you can bring that down to even 200ms, that'll feel comparatively instantaneous to me. When the whole internet feels like that, we can talk about taking it to 16ms.
Linear is actually so slow for me that I dread having to go into it and do stuff. I don't care if the ticket takes 500ms to load; just give me the ticket, not a fake blinking cursor for 10 seconds or random refreshes while it (slowly) tries to re-sync.
Everything I read about Linear screams over-engineering to me. It is just a ticket tracker, and a rather painful one to use at that.
This seems to be endemic to the space, though; e.g. Asana tried to invent their own language at one point.
Trite remark. The author was referring to behaviour that has nothing to do with “how the web has become.”
It is specifically to do with behaviour that is enabled by using shared resources (like IndexedDB across multiple tabs), which is not simple HTML.
To do something similar over the network, you have until the next frame deadline. That's 8-16ms, RTT included. So 4ms out and 4ms back, with 0ms budget for processing. Good luck!
I was also surprised to read this, because Linear has always felt a little sluggish to me.
I just profiled it to double-check. On an M4 MacBook Pro, clicking between the "Inbox" and "My issues" tabs takes about 100ms to 150ms. Opening an issue, or navigating from an issue back to the list of issues, takes about 80ms. Each navigation includes one function call which blocks the main thread for 50ms - perhaps a React rendering function?
Linear has done very good work to optimise away network activity, but their performance bottleneck has now moved elsewhere. They've already made impressive improvements over the status quo (about 500ms to 1500ms for most dynamic content), so it would be great to see them close that last gap and achieve single-frame responsiveness.
I developed an open-source task management software based on CRDT with a local-first approach. The motivation was that I primarily manage personal tasks without needing collaboration features, and tools like Linear are overly complex for my use case.
This architecture offers several advantages:
1. Data is stored locally, resulting in extremely fast software response times
2. Supports convenient full database export and import
3. Server-side logic is lightweight, requiring minimal performance overhead and development complexity, with all business logic implemented on the client
4. Simplified feature development, requiring only local logic operations
There are also some limitations:
1. Only suitable for text data storage; object storage services are recommended for images and large files
2. Synchronization-related code requires extra caution in development, as bugs could have serious consequences
3. Implementing collaborative features with end-to-end encryption is relatively complex
The technical architecture is designed as follows:
1. Built on the Loro CRDT open-source library, allowing me to focus on business logic development
2. Data processing flow:
User operations trigger CRDT model updates, which export JSON state to update the UI. Simultaneously, data is written to the local database and synchronized with the server.
3. The local storage layer is abstracted through three unified interfaces (list, save, read), using platform-appropriate storage solutions: IndexedDB for browsers, file system for Electron desktop, and Capacitor Filesystem for iOS and Android.
4. Implemented end-to-end encryption and incremental synchronization. Before syncing, the system calculates differences based on server and client versions, encrypts data using AES before uploading. The server maintains a base version with its content and incremental patches between versions. When accumulated patches reach a certain size, the system uploads an encrypted full database as the new base version, keeping subsequent patches lightweight.
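The base-plus-patches scheme in step 4 can be sketched roughly like this (names and the byte threshold are mine, and encryption is omitted for brevity):

```typescript
// The server stores one full snapshot ("base") plus incremental patches.
// When accumulated patches grow past a threshold, the client uploads a
// fresh full snapshot as the new base and the patch list resets.
interface ServerState {
  base: string;      // encrypted full export in the real system
  patches: string[]; // encrypted diffs applied on top of the base
}

const COMPACT_THRESHOLD_BYTES = 1024; // illustrative

function pushPatch(state: ServerState, patch: string, fullSnapshot: string): ServerState {
  const patchBytes = state.patches.concat(patch).reduce((n, p) => n + p.length, 0);
  if (patchBytes >= COMPACT_THRESHOLD_BYTES) {
    // Compact: the snapshot becomes the new base, patches start over.
    return { base: fullSnapshot, patches: [] };
  }
  return { base: state.base, patches: [...state.patches, patch] };
}

// A client syncing from scratch downloads the base and replays every patch.
function materialize(state: ServerState, apply: (doc: string, patch: string) => string): string {
  return state.patches.reduce(apply, state.base);
}
```

Keeping patches small keeps both upload cost and cold-sync replay time bounded, which is the point of periodically rolling the base forward.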
Local-First & Sync-Engines are the future. Here's a great filterable datatable overview of the local-first framework landscape:
https://www.localfirst.fm/landscape
My favorite so far is Triplit.dev (which can also be combined with TanStack DB); 2 more I like to explore are PowerSync and NextGraph. Also, the recent LocalFirst Conf has some great videos, currently watching the NextGraph one (https://www.youtube.com/watch?v=gaadDmZWIzE).
It's starting to feel to me that a lot of tech is just converging on other platforms' solutions. This, for example, sounds incredibly similar to how a mobile app works (on the surface). Of course it goes the other way too, with mobile tech taking declarative UIs from the web.
I've been very impressed by Jazz -- it enables great DX (you're mostly writing sync, imperative code) and great UX (everything feels instant, you can work offline, etc).
Main problems I have are related to distribution and longevity -- as the article mentions, it only grows in data (which is not a big deal if most clients don't have to see that), and another thing I think is more important is that it's lacking good solutions for public indexes that change very often (you can in theory have a public readable list of ids). However, I recently spoke with Anselm, who said these things have solutions in the works.
All in all local-first benefits often come with a lot of costs that are not critical to most use cases (such as the need for much more state). But if Jazz figures out the main weaknesses it has compared to traditional central server solutions, it's basically a very good replacement for something like Firebase's Firestore in just about every regard.
ElectricSQL and TanStack DB are great, but I wonder why they focus so much on local first for the web over other platforms, as in, I see mobile being the primary local first use case since you may not always have internet. In contrast, typically if you're using a web browser to any capacity, you'll have internet.
Also the former technologies are local first in theory but without conflict resolution they can break down easily. This has been from my experience making mobile apps that need to be local first, which led me to using CRDTs for that use case.
Because building local first with web technologies is infinitely harder than building local first with native app toolkits.
Native app is installed and available offline by default. Website needs a bunch of weird shenanigans to use AppManifest or ServiceWorker which is more like a bunch of parts you can maybe use to build available offline.
Native apps can just… make files, read and write from files with whatever 30 year old C code, and the files will be there on your storage. Web you have to fuck around with IndexedDB (total pain in the ass), localStorage (completely insufficient for any serious scale, will drop concurrent writes), or OriginPrivateFileSystem. User needs to visit regularly (at least once a month?) or Apple will erase all the local browser state. You can use JavaScript or hit C code with a wrench until it builds for WASM w/ Emscripten, and even then struggle to make sync C deal with waiting on async web APIs.
Apple has offered CoreData + CloudKit since 2015, a completed first party solution for local apps that sync, no backend required. I’m not a Google enthusiast, maybe Firebase is their equivalent? Idk.
Well .... that's all true, until you want to deploy. Historically deploying desktop apps has been a pain in the ass. App stores barely help. That's why devs put up with the web's problems.
Ad: unless you use Conveyor, my company's product, which makes it as easy as shipping a web app (nearly):
You are expected to bring your own runtime. It can ship anything but has integrated support for Electron and JVM apps, Flutter works too although Flutter Desktop is a bit weak.
And if you didn't like or care to learn CoreData? Just jam a SQLite DB into your application and read from it; it's just C. This was already working before Angular or even Backbone.
In this case it's not about being able to use the product at all, but the joy from using an incredibly fast and responsive product, which therefore you want to use local-first.
Local first is super interesting and absolutely needed. I think most of the bugs I run into with web apps have to do with sync, exacerbated by poor internet connectivity. The local properties don't interest me as much as request ordering and explicit transactions. You aren't guaranteed that requests resolve in order, which can result in a lot of inconsistencies. These local-first sync abstractions are a bit like bringing a bazooka to a water gun fight; it would be interesting to see some halfway approaches to this problem.
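One such halfway approach to the ordering problem is a simple client-side mutation queue: each request starts only after the previous one settles, so writes reach the server in the order the user made them. A sketch (the `send` callback stands in for whatever your app uses to call the API):

```typescript
// Serialize mutations: each request begins only after its predecessor
// settles, guaranteeing server-side effects apply in user order.
class MutationQueue {
  private tail: Promise<unknown> = Promise.resolve();

  enqueue<T>(send: () => Promise<T>): Promise<T> {
    const next = this.tail.then(send, send); // run even if a predecessor failed
    this.tail = next.catch(() => undefined); // keep the chain alive on errors
    return next;
  }
}
```

Each `enqueue` call returns that request's own promise, so callers can still await results or handle failures individually, without any request overtaking another.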
Tight coupling to MongoDB, fragmented ecosystem / packages, and react came out soon after and kind of stole its lunch money.
It also had some pretty serious performance bottlenecks, especially when observing large tables for changes that need to be synced to subscribing clients.
I agree though, it was a great framework for its day. Auth bootstrapping in particular was absolutely painless.
A non-relational, document-oriented pubsub architecture based on MongoDB, good for not much more than chat apps. For toy apps (in 2012-2016), use Firebase (also for chat apps); for CRUD-spectrum and enterprise apps, use SQL. And then React happened and consumed the entire spectrum of frontend architectures, bringing us to GraphQL, which didn't, but the hype wave left little oxygen remaining for anything else. (Even if it had, Meteor was still not better.)
I'm also building a local-first editor and rolling my own CRDTs. There are enormous challenges in making it work. Take the storage-size issue mentioned in the blog: I ended up going with yjs's approach, which only increments the clock for upserts, and on deletion removes the content and keeps only the deleted item ids, which can be compressed efficiently since most ids are contiguous.
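That "contiguous ids compress well" trick is essentially interval encoding. A sketch with made-up names (not the author's actual code, and not yjs's either):

```typescript
// Store tombstones as [start, end] ranges instead of individual ids.
// Runs of consecutive ids (the common case, since ids are allocated in
// sequence) collapse to a single pair.
function compressIds(sortedIds: number[]): Array<[number, number]> {
  const ranges: Array<[number, number]> = [];
  for (const id of sortedIds) {
    const last = ranges[ranges.length - 1];
    if (last && id === last[1] + 1) {
      last[1] = id;          // extend the current run
    } else {
      ranges.push([id, id]); // start a new run
    }
  }
  return ranges;
}

function expandIds(ranges: Array<[number, number]>): number[] {
  const ids: number[] = [];
  for (const [start, end] of ranges) {
    for (let id = start; id <= end; id++) ids.push(id);
  }
  return ids;
}
```

A million sequentially deleted items costs one pair of integers instead of a million tombstones.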
In case you missed it and it's relevant, there was an automerge v3 announcement posted the other day here which claimed some nice compression numbers as well
As far as I know, automerge uses a DAG history log and garbage-collects by comparing the version clock heads of the two clients. That is different from yjs. I haven't followed their compression approach in v3 yet; I'll check when I have time.
Local first is amazing. I have been building a local first application for Invoicing since 2020 called Upcount https://www.upcount.app/.
First I used PouchDB which is also awesome https://pouchdb.com/ but now switched to SQLite and Turso https://turso.tech/ which seems to fit my needs much better.
How is this approach better than using react-query with a persisted store that periodically syncs local storage with the server? Perhaps I am missing something.
That approach is precisely what the new TanStack DB does, which, if you don't know already, has the same creator as React Query. The former extends the latter's principles to syncing via ElectricSQL; the two organizations have a partnership with each other.
The latency is off the critical path with local first. You sync changes over the network sure, but your local mutations are stored directly and immediately in a local DB.
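A sketch of that critical path (all names illustrative): the mutation hits local state synchronously, and the network sync is a background concern:

```typescript
interface Task { id: string; title: string; done: boolean; }

// Local store: reads and writes are synchronous in-memory operations,
// so the UI never waits on the network.
const local = new Map<string, Task>();
const outbox: Task[] = []; // pending changes to push to the server

function updateTask(task: Task): void {
  local.set(task.id, task); // 1. apply locally, immediately
  outbox.push(task);        // 2. queue for background sync
}

// Background sync loop; `pushToServer` stands in for the real transport.
async function flush(pushToServer: (t: Task) => Promise<void>): Promise<void> {
  while (outbox.length > 0) {
    const next = outbox[0];
    await pushToServer(next); // network latency lives here, off the critical path
    outbox.shift();           // drop only after the server acknowledged
  }
}
```

Periodic polling with react-query still puts a round trip between the click and the UI update; here the round trip only affects how quickly other devices see the change.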
Me neither. Considering we are talking about collaborative network applications, you are losing the single source of truth (the server database) with the local-first approach. And it just adds so much more complexity. Also, as your app grows, you probably end up implementing the business logic twice: on the server and locally. I really do not get it.
Fair enough, I thought what I'd originally written for that section was too wordy, so I asked Claude to rewrite it. I'll go a bit lighter on the AI editing next time. Here's most of the original with the code examples omitted:
Watching Tuomas' initial talk about Linear's realtime sync, one of the most appealing aspects of their design was the reactive object graph they developed. They've essentially made it possible for frontend development to be done as if it's just operating on local state, reading/writing objects in an almost Active Record style.
The reason this is appealing is that when prototyping a new system, you typically need to write an API route or rpc operation for every new interaction your UI performs. The flow often looks like:
- Think of the API operation you want to call
- Implement that handler/controller/whatever based on your architecture/framework
- Design the request/response objects and share them between the backend/frontend code
- Potentially, write the database migration that will unlock this new feature
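Concretely, even a tiny status-toggle interaction tends to mean all of the above. A sketch of the conventional flow (every name and route here is hypothetical):

```typescript
// Shared between backend and frontend: the request/response contract.
interface UpdateStatusRequest { issueId: string; status: "todo" | "done"; }
interface UpdateStatusResponse { ok: boolean; }

// Backend: one handler per interaction.
async function handleUpdateStatus(req: UpdateStatusRequest): Promise<UpdateStatusResponse> {
  // ...validate, write to the database, maybe after running a migration...
  return { ok: true };
}

// Frontend: one fetch wrapper per interaction.
async function updateStatus(issueId: string, status: "todo" | "done"): Promise<boolean> {
  const payload: UpdateStatusRequest = { issueId, status };
  const res = await fetch("/api/issues/status", { // hypothetical route
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return ((await res.json()) as UpdateStatusResponse).ok;
}
```

With a reactive object graph, all of this collapses to something like `issue.status = "done"`, with persistence and sync handled underneath.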
Jazz has the same benefits of the sync + observable object graph. Write a schema in the client code, run the Jazz sync server or use Jazz Cloud, then just work with what feel like plain JS objects.
Yep, but LLMs being used as aid in writing blog posts is a relatively new phenomenon.
I'm on the fence. IMO ivape offers a hypothesis but presents it as fact. That isn't a good starting point for a discussion (though it's a common mistake), but it doesn't prove they are wrong either. Btw, I believe the HN guidelines encourage you to take the positive angle, at least for comments.
As for the topic at hand: local-first means you end up with a cache, either in memory or on disk. If you've got the RAM and NVMe, you might as well use it for performance. Back in the day not much could be cached, and your connection was often too lousy or not 24/7. So you ended up with software distribution via 3.5-inch floppies or CD-ROM. Larger distributions used a gigantic disk cache, either centralized (Usenet) or distributed (BitTorrent). But the "you might as well use it" issue is that it introduces sloppiness. If you develop with huge constraints, you are disciplined into being deterred from starting, failing fast, or succeeding efficiently. We hardly ever hear about all the deterrence and failures.
That's an interesting question. I will open a Jira ticket to schedule a meeting with the Product Team, and they will assign you several stories with acceptance criteria written by AI.