191 points by maguay 2 days ago | 23 comments
pjc50 1 day ago
> SMTP “didn’t win because it was ‘better,’” he argued, but “just because it was easier to implement.”

Yes - and this is actually really important! It's true of most of the important early internet technologies. It's the entire reason "internet" standards won over "telco" (in this case ITU) standards - the latter could only be deployed by big coordinated efforts, while internet standards let individual decentralized admins hook their sites together.

Did any of the ITU standards win? In the end, internet swallowed telephones and everything is now VOIP. I think the last of the X standards left is X509?

MisterTea 1 day ago
> It's the entire reason "internet" standards won over "telco" (in this case ITU) standards - the latter could only be deployed by big coordinated efforts,

Anyone remember the promise of ATM networking in the 90's? It was telecom-grade, circuit-switched networking that would handle voice, video and data down one pipe. Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes. You called a computer as if it were a telephone (or maybe that was Datakit?) and ATM handed the user a byte stream like TCP. Imagine never needing an IP stack or setting traffic priority because the network already handles the QoS. Was it simple to deploy? No. Was it cheap? Nooohooohooohooo. Was Ethernet any of those? YES AND YES. ATM was superior but lost to the simpler and cheaper Ethernet, which was pretty crappy in its early days (thinnet, thicknet, terminators, vampire taps, AUI, etc.) but good enough.

The funny part is this had the unintended consequence of needing to reinvent the wheel once you get to the point where you need telecom-sized infrastructure. Ethernet had to adapt to deterministic real-time needs, so various hacks and standards have been developed to paper over these deficiencies - which is what TSN is: reinventing ATM's determinism. In addition we also now have OTN, yet another protocol layered over the various other protocols to mux everything down a big fat pipe to the other end, which allows Ethernet (and IP/ATM/etc) to ride deterministically between data-centers.

pjc50 1 day ago
> Ethernet had to adapt to deterministic real-time needs

Without being able to get too into the telco detail, I think the lesson was that hard realtime is both much harder to achieve and not actually needed. People will happily chat over nondeterministic Zoom and Discord.

It's both psychological and slightly paradoxical. Once you let go of saying "the system MUST GUARANTEE this property", you get a much cheaper, better, more versatile and higher bandwidth system that ends up meeting the property anyway.

silvestrov 2 hours ago
> not actually needed

What you need is more than enough bandwidth.

Think of the difference between a highway with few cars versus a highway filled to the brim with cars. In the latter case traffic slows to a crawl even for ambulances.

It seems like it was just cheaper and easier to build more bandwidth than it was to add traffic priority handling to internet connectivity.

wat10000 11 hours ago
I saw a story once, which may well be completely made up, about why AT&T got out of the cell phone business. They had a research project, but reliability was an issue. They couldn't see a way to do better than 1 dropped call in 10,000. Their standard for POTS at the time was 1 in 2 billion.

Seeing that the tech would never be good enough, they sold off the whole thing for cheap. Years later, they bought it back for way, way more money because they desperately needed to get into the cell phone business that was clearly headed to the moon.

I totally understand the pride they had in the reliability of their system, but it turns out that dropped calls just aren't that big of a deal when you can quickly redial and reconnect.

projektfu 10 hours ago
Seems a little sus. AT&T basically created the cellular mobile phone, and built up an analog, then digital system (D-AMPS/TDMA). AT&T sort of sold out the mobile business in 2004 to Cingular (BellSouth) because TDMA was a dead end. They then bought BellSouth back in 2006 and carried on with CDMA.

Those old phones had a long range. It was hard to make small ones because the old AT&T towers were much farther apart, up to 40km. Meanwhile, their competitors focused on smaller coverage areas (e.g. 2km or less for PCS) and better tech (CDMA), and it seemed to pay off.

SllX 5 hours ago
This is a minor detail, but the "AT&T" that bought BellSouth in 2006 was the AT&T formerly known as SBC which bought the husk of Ma Bell and rebranded itself, i.e. the AT&T we have today.
otabdeveloper4 1 hour ago
> People will happily chat over nondeterministic Zoom and Discord.

Well, not "happily". (Doesn't every video conference do the "hold on, can you hear me? I have wifi issues" dance every other day?) But it works on a good day.

johannes1234321 37 minutes ago
At work it became mostly flawless. Everybody is used to it and people can jump into calls quickly when chat discussion etc. doesn't suffice. The glitches are on a comparable level to physical meetings, where somebody comes late and disturbs everyone while getting settled, or somebody speaks too quietly for the room.

In my club, however, when there is a virtual club meeting, where people don't have frequent video meetings, there is always somebody with trouble ... often the same person.

burnished 12 hours ago
Yeah, big differences between an absolute guarantee and "we'll take as much as we can get"
EvanAnderson 1 day ago
ATM was superior in the context of a bill-by-the-byte telco-style network where oversubscribed links could be carefully planned. The "impedance mismatch" of IP's unreliable datagram delivery with ATM's guaranteed cell delivery created situations where ATM switches could effectively need unlimited buffer RAM to make their delivery guarantees, even if the cells contained IP datagrams that could just be discarded with no ill consequences.

There's likely an element of the "layering TCP on TCP" problem going on, too.

The classic popular treatment of the subject is: https://www.wired.com/1996/10/atm-3/

pseudohadamard 10 hours ago
It was designed by people who were trying to digitally emulate 1920s copper-wire circuits at a time when the entire world was moving to packet-switched digital data. I remember visiting a large telco at the time and having to tell them about this new thing called ADSL that was going to steamroller them if they weren't careful. "Nooo... no, that's not real, you can't do that over a phone line, not possible. And even if it was it'll never take off, if anyone really wants a digital link they can go with our X.25 or ISDN offerings".

When I pointed out in a previous post how much X.400 sucked, even that never got anywhere near X.25. X.25 is the absolute zero on any networking scale, the scale starts with X.25 at -273degC and goes up from there.

convolvatron 13 hours ago
atm did not have cell delivery guarantees. it did have per-connection qos negotiation that could include the loss probability as one of the many metrics that were supported. the only way to provide 'zero loss' is to implement hop-by-hop error detection and retransmission, which is only really done in HPC networks, and some satellite transport schemes where the loss is high and bursty and the latency is high.

however, actually building a functional routing infrastructure that supported QOS was pretty intractable. that was one of several nails in ATM's coffin (I worked a little on the PNNI routing proposal).

edit: I should have admitted that yes, loss does have a relationship to queue depth, but that doesn't result in infinite queues here. it does mean that we have to know the link delay and the target bandwidth and have per-flow queue accounting, which isn't a whole lot better really. some work was done with statistical queue methods that had simpler hardware controllers - but the whole thing was indeed a mess.

kstrauser 13 hours ago
I was there for ATM, and I'm so freaking glad it lost. It's a prime example of "a camel is a horse designed by committee". A 53 byte cell with a 48 byte payload? Of course! What an excellent idea! We definitely want a 10% overhead on a ludicrously small packet, just so it has tolerable voice latencies if you scale it down to run on a 64Kb DS0, never mind that literally everything in the industry was scaling up to fatter pipes.

ATM was nifty if you had a requirement of establishing voice-style, i.e. billable, connections. No thanks. It was an interesting technology but hopelessly hobbled by the desire to emulate a voice call that fit into a standard invoice line.
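For what it's worth, the overhead figure above checks out as a quick back-of-the-envelope calculation (cell and payload sizes as stated in the comment):

```python
# ATM cell overhead: a 53-byte cell carrying a 48-byte payload,
# so 5 bytes of header per cell (figures from the comment above).
CELL_BYTES = 53
PAYLOAD_BYTES = 48

overhead = (CELL_BYTES - PAYLOAD_BYTES) / CELL_BYTES
print(f"header overhead: {overhead:.1%}")  # prints "header overhead: 9.4%"
```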

bigfatkitten 12 hours ago
If you’re primarily concerned with shuffling low latency voice around the place, and you want to do hardware forwarding on relatively inexpensive silicon, then that cell size is entirely sensible.

That approach of course didn’t age well when voice almost became a niche application.

pseudohadamard 10 hours ago
Thus its acronym, A Technical Mistake. Or, from the telco side, A Tariffing Mechanism.
convolvatron 13 hours ago
note that it was 'tolerable latency without echo cancellation in France'; most other places had long enough latency that they needed echo cancellation anyway. and of course now everything needs echo cancellation.

I think standards are important, and I'm sad that no one bothers anymore, but stuff like this and the inclusion of interlace in digital video for that little 3 year window when it might have mattered does really sour one on the process.

kstrauser 13 hours ago
I'd forgotten about the French connection here.

BTW, I searched Kagi for "tolerable latency without echo cancellation in France" and saw your comment. Wow. I didn't realize web crawlers were that current these days.

MiddleEndian 9 hours ago
Not The Silliest Contrivance to happen to video standards :P
pimlottc 7 hours ago
My college went all-in on ATM-over-fiber and wired all the dorm rooms with it. It was a PITA. Of course no computers came with ATM support and the cards cost $400+ each, so the school had hundreds of cards that they would “lease” out to students each year. There would be a huge “install depot” at the start of the year where students brought in their (desktop) computers and volunteers would open them up, install the cards, install drivers and configure them for our network.

For Linux heads, it was doubly annoying, as ATM was not directly supported in the kernel. You had to download a separate patch to compile the necessary modules, then install and run three separate system daemons, all with the correct arguments for our network, just to get a working network device. And of course you had to download all the necessary packages with another computer, since you couldn’t get online yet. This was the early 2000s, so WiFi was not really common yet.

Even once you got online, one of the daemons would randomly crash every so often and you’d have to restart to get back online. It was such a pain.

p_l 1 day ago
Pretty sure TSN is unrelated to ATM determinism, and comes from a completely separate area (replacing custom field buses where timing and contention are more important than bandwidth). Some of ATM's complexity came from wanting to deliver the same quality of experience as plesiochronous networks provided for voice (that's how it got the weird cell size).

Once those requirements were dropped (partially because people just started to accept weird echo), the replacement became MPLS and whatever you can send IP over, where Ethernet sometimes shows up as packaging around the IP frame but has little relation to Ethernet otherwise.

MisterTea 1 day ago
Not directly related but a consequence.
p_l 1 day ago
ATM semantics and TSN semantics are quite different, the closest overlap would be in AFDX (avionics full duplex ethernet) except AFDX creates static circuits
nofriend 9 hours ago
Was it actually superior though? The usual treatment is that packet switching works better at the scale of the internet. With voice, hogging a whole line works, but for the internet it makes more sense to slow everybody down when congestion occurs rather than preventing some people from connecting at all. I get why the telecoms would have you waste your bandwidth reserving a connection you don't need, and I get why they would try and sell that as a superior solution because of some nonsense about reliability, but I don't see it as providing much benefit to the user.
somat 7 hours ago
One reason I heard the internet works as well as it does is that it inverts the Bell System: where the Bell System was a smart network with dumb edge devices, the internet is a dumb network with smart edge devices. The reason this is supposed to be better is that it is much, much easier to upgrade the edge devices than the whole network.

And this sort of checks out: most of the complaints about the internet architecture arise when someone starts putting smart middleboxes in a load-bearing capacity, and then it becomes hard to deploy new edge devices.

rayiner 13 hours ago
> Instead of carelessly flinging packets into the ether like an savage, you had a deterministic network of pipes

I love this. Ethernet is such shit. What do you mean the only way to handle a high-speed to lower-speed link transition is to just drop a bunch of packets? Or to send PAUSE frames, which work so poorly that everyone disables flow control.

packetlost 12 hours ago
Wait, are you serious? This is how it works?
rayiner 10 hours ago
Yes: https://fasterdata.es.net/performance-testing/troubleshootin.... A simplistic TCP server will blast packets on the link as fast as it can, up to the size of the TCP receive window. At that point it’ll stop transmitting and wait for an ACK from the client before sending another window’s worth of packets.

To handle a speed transition without dropping packets, the switch or router at the congestion point needs to be able to buffer the whole receive window. It can hold the packets and then dribble them out over the lower speed link. The server won’t send more packets until the client consumes the window and sends an ACK.

But in practice the receive window for an Internet scale link (say 1 gigabit at 20 ms latency) is several megabytes. If the receive window was smaller than that, the server would spend too much time waiting for ACKs to be able to saturate the link. It’s impractical to have several MB of buffer in front of every speed transition.

Instead what happens is that some switch or router buffer will overflow and drop packets. The packet loss will cause the receive window, and transfer rate, to collapse. The server will then send packets with a small window so it goes through. Then the window will slowly grow until there’s packet loss again. Rinse and repeat. That’s what causes the saw-tooth pattern you see on the linked page.
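The numbers above can be sketched out: a toy calculation of the bandwidth-delay product, plus an additive-increase/multiplicative-decrease loop that traces the sawtooth (link figures taken from the comment; the buffer size is invented for illustration):

```python
# Bandwidth-delay product: the amount of data "in flight" on a full
# link, i.e. roughly the receive window needed to keep it saturated.
# Link figures (1 Gbit/s, 20 ms RTT) come from the comment above.
link_bps = 1_000_000_000
rtt_s = 0.020
bdp_bytes = link_bps / 8 * rtt_s
print(f"window needed to fill the pipe: {bdp_bytes / 1e6:.1f} MB")  # 2.5 MB

# Toy AIMD loop (not a real TCP): the window grows until it overflows
# a much smaller bottleneck buffer, "loses a packet", and collapses.
buffer_limit = 512 * 1024   # hypothetical switch buffer at the speed transition
window = 64 * 1024
trace = []
for _ in range(40):
    trace.append(window)
    if window > buffer_limit:    # buffer overflow -> packet loss
        window //= 2             # multiplicative decrease
    else:
        window += 32 * 1024      # additive increase per round trip
# `trace` now rises, halves, and rises again -- the classic sawtooth.
```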

EvanAnderson 10 hours ago
Heh heh. If that shocks you, search engine for "bufferbloat" and prepare to be horrified.
Spooky23 8 hours ago
I experienced this with a VDI project when we mistakenly got 25Gb links delivered to the hosts.

We were expecting to get some sort of unbelievably fast internet experience, but it was awful as the internet gateway was 1 Gb or something similar.

wat10000 10 hours ago
This is how old-school TCP figures out how fast it can send data, regardless of the underlying transport. It ramps up the speed until it starts seeing packet loss, then backs off. It will try increasing speed again after a bit, in case there's now more capacity, and back off again if there's loss.
bombcar 9 hours ago
You can achieve a bit of performance here by tuning it so it will never exceed the true speed of the link - which is only really useful when you know what that is and can guarantee it.
themafia 10 hours ago
Anyone remember the incredible disrepute of the phone company in the 80s?

We just wanted our own stuff. We did not want to coordinate with a proprietary vendor to network or be charged by the byte to do so.

cyberax 13 hours ago
And for a while, telco engineers tried to retrofit Internet to their purposes.

I worked on a network that used RSVP ( https://en.wikipedia.org/wiki/Resource_Reservation_Protocol ) to emulate the old circuit-switched topology. It was kinda amazing to see how it could carve guaranteed-bandwidth paths through the network fabric.

Of course, it also never really worked with dynamic routing and brought in tons of complexity with stuck states. In our network, it eventually was just removed entirely in favor of 1gbit links with VLANs for priority/normal traffic.

fmajid 1 day ago
I started my career at France Telecom's R&D lab in Caen, Normandy. They had their own home-grown X.400 email client, and even though they could have set up a SMTP server for free, they deliberately chose to MX to a paid SMTP to X.400 gateway out of OSI ideology.

It was complete garbage.

Another lab of theirs made a Winsock that would use ATM SVCs instead of TCP, and proudly produced a brochure extolling their achievement: "Web protocol without having to use TCP". Because clearly it was TCP hindering adoption of the Web /s

The Bellhead vs. Nethead was a real thing back then. To paraphrase an old saying about IBM, Telcos think if they piss on something, it improves the flavor.

One of the jobs I had applied out of college was to lead Schengen's central police database (think stolen car reports, arrest warrants etc) which would federate national databases. For some unfathomable reason, they chose X.400 as messaging bus for that replication, and endured massive delays and cost overruns for that reason. I guess I dodged a bullet by not going there.

ajb 57 minutes ago
The rivalry continues in the fibre era, with ITU's GPON and successors competing with IEEE EPON etc. ITU does seem to have lost out comprehensively at layer 3. They do some stuff like OAM which is only interesting at Telco scale, although in the mobile era bodies like ETSI are more relevant.

The other difference from that era, and even the early internet era, is how much is no longer standardised at all, but decided by global monopolies. Back then it was a given that everything would at least need to interoperate at the national level. But we may be returning to that.

jcranmer 1 day ago
WebPKI is derived from X.509, but I don't think X.509 lives on anymore. X.500 was stripped down to form LDAP, which is still in very heavy use today. There's still some X.400 systems in existence. I think some of the early cellphone generations may have used the ITU standards in the physical layer?

Of course, the biggest--and weirdest--success of the ITU standards is that the OSI model is still frequently the way networking stacks are described in educational materials, despite the fact that it bears no relation to how any of the networking stack was developed or is used. If you really dig into how the OSI model is supposed to work, one of the layers described only matters for teletypes--which were a dying, if not dead, technology when the model was developed in the first place.

chuckadams 14 hours ago
There's an entire book devoted to ripping up the OSI model: https://docs.google.com/document/d/1iL0fYmMmariFoSvLd9U5nPVH...
matheusmoreira 8 hours ago
What an interesting read. Thank you for posting it.
MrDrMcCoy 11 hours ago
Everyone who knows what the OSI model is should read at least some of this book.
pzb 8 hours ago
X.509 absolutely lives on -- https://www.itu.int/rec/t-rec-x.509 last update was October 2024. However WebPKI uses PKIX which is fairly stubbornly stuck on RFC5280.

On the ITU side, they have made improvements including allowing a plain fully qualified domain name as the subject of a certificate, as an alternative to sequence of set of attributes.

tosti 6 hours ago
If you mean the presentation layer, hard disagree. Not thinking about presentation creates problems. For example, Go treating ASCII headers as UTF-8 caused trouble. Similarly, glossing over an HTTP/2 vs HTTP/1.1 mismatch caused trouble for reverse proxies.

Now I'm young enough not to have seen teletypes in an actual production use setting, but I've never heard anyone suggesting the presentation layer was for teletypes. That's just Google-level FUD.

jcranmer 6 hours ago
No, it was the session layer.
otabdeveloper4 34 minutes ago
TLS is our session layer.
pseudohadamard 9 hours ago

> X.500 was stripped down to form LDAP

No, LDAP was a student project from UMich that somehow gained mindshare because (a) it wasn't ISO, and (b) it cleverly had an 'L' in front of it. It's now more complex and heavyweight than the original DAP, but people think it isn't because of that original clever bit of marketing.
otabdeveloper4 33 minutes ago
It's still lightweight. I wrote a working implementation in a literal weekend.
lukeh 6 hours ago
Well, it started off simpler, but, yes.
rstuart4133 1 day ago
Doh! Of course it was easier to implement. IETF wants a working open source implementation before standardising.

Have you ever tried to implement an ITU standard from just reading the specs? It's hard. Firstly you have to spend a lot of money just to buy the specs. Then you find the spec is written by somebody who has a proprietary product, and is tiptoeing along a line that reveals enough information to keep the standards body happy (ie, has enough info to make it worthwhile to purchase the specification), and not revealing the secret sauce in their implementation.

I've done it, and it's an absolute nightmare. The IETF RFCs are a breath of fresh air in comparison. Not only can you read the source, there are example implementations!

And if you think that didn't lead to a better outcome, you're kidding yourself. The ITU process naturally leads to a small number of large engineering orgs publishing just enough information so they can interoperate, while keeping enough hidden that the required investment discourages the rise of smaller competitors. The result is that even now I can (and do) run my own email server. If the overly complicated bureaucratic ITU standards had won the day, I'm sure email would have been run by a small number of CompuServe-like rent-seeking parasites for decades.

ogurechny 1 day ago
Given that the general public uses social network services for electronic messaging today, and those don't even pretend they want to be interoperable, we've got parasites of a totally different class on top of the Internet infrastructure.
deepsun 7 hours ago
Remember jabber/xmpp? At least they tried to interoperate. Google Talk at the beginning had interoperability as its main feature, but Google quickly scrapped that.

UPDATE: some say that's because XMPP was too encompassing a standard (if a format allows you to do too much, it loses usefulness - like saying a binary file format can store anything). IMO that's not the reason; they could have just supported their own subset. They scrapped interoperability purely for competitive reasons, IMO.

jech 13 hours ago
> IETF wants a working open source implementation before standardising.

I don't think that's IETF policy. Individual IETF working groups decide whether to request publication of an RFC, and the availability of open source implementations is a strong argument in favour of publication, but not a hard requirement.

If the IETF standards are sometimes useful, it's more a matter of culture than of policy.

pseudohadamard 10 hours ago
A great example of this was PKIX, whose policy was "we'll publish it as a standard and someone else will have to figure out how to make it work". There are 20-year-old standards-track PKIX documents that have no known implementations.
zzo38computer 8 hours ago
I have been told that ITU specifications are deliberately confusing so that they can sell consulting services.

However, I think DER is good (and is better than BER, PER, etc in my opinion). (I did make up a variant with a few additional types, though.)

OID is also a good idea, although I had thought they should add another arc for being based on various kind of other identifiers (telephone numbers, domain names, etc) together with a date for which that identifier is valid (to avoid issues with reassigned identifiers) as well as possibility of automatic delegation for some types (so that e.g. if you register an account on another system then you can get a free OID from it too; there is a bit of difficulty in some cases but it might be possible). (I have written a file about how to do this, although I did not publish it yet.)
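As a concrete illustration of the tag-length-value structure DER uses, here is a minimal hand-rolled encoder for one type (for illustration only; real code would use an ASN.1 library such as pyasn1):

```python
# Minimal sketch of DER's tag-length-value encoding, hand-rolled for
# illustration. DER's appeal: exactly one valid encoding per value.

def der_length(n: int) -> bytes:
    """Definite length: short form below 128, long form otherwise."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def der_integer(value: int) -> bytes:
    """INTEGER (tag 0x02) with minimal two's-complement content."""
    nbytes = max(1, (value.bit_length() + 8) // 8)  # +1 bit for the sign
    content = value.to_bytes(nbytes, "big", signed=True)
    return b"\x02" + der_length(len(content)) + content

# 65537, the common RSA public exponent, encodes as 02 03 01 00 01.
print(der_integer(65537).hex())  # prints "0203010001"
```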

agwa 1 day ago
I'll note that while X.509 certificates are deployed widely on the Internet, they are not deployed in the manner the ITU intended. There is no global X.500 directory and Distinguished Names are just opaque identifiers that are used to help find issuers during chain building. That hardly counts as a win for the ITU in my book.
justsomehnguy 12 hours ago
And in some usages the CN isn't even looked at.
unscaled 6 hours ago
LDAP might have won over DAP, but it's still heavily based on the X.500-family of standards. Unlike SMTP (which is a completely different standard), LDAP is strongly based on DAP and other X.500 family standards.

Besides LDAP and X.509, you've got old standards that were very successful for a while. I'm perhaps a little bit too young for this, but I vaguely remember X.25 practically dominated large-scale networking, and for a while inter-network TCP/IP was often run over X.25. X.25 eventually disappeared because it was replaced by newer technology, but it didn't lose to any contemporary standard.

And if you're looking for new technology, CTAP (X.1278) is a part of the WebAuthn standard, which does seem to be winning.

I'm pretty sure there are other X-standards common in the telco industry, but even if we just look at the software industry, some ITU-T standards won out. This is not to say they weren't complex or that we didn't have simpler alternatives, but sometimes the complex standard does win out. The "worse is better" story is not always true.

The OP article is definitely wrong about this:

> “Of all the things OSI has produced, one could point to X.400 as being the most successful,

There are many OSI standards that are more successful than X.400, by the sheer virtue of X.400 being an objective failure. But even putting that aside, there are X-family standards that are truly successful and ubiquitous. X.500 and X.509 are strong contenders, but the real winner is ASN.1 (the X.680/690 family, originally X.208/X.209).

ASN.1 is everywhere: it's obviously present in other ITU-T based standards like LDAP, X.509, CTAP and X.400, but it's also been widely adopted outside of ITU-T in the cryptography world: the PKCS standards (used for RSA, DSA, ECDSA, DH and ECDH key storage and signatures), Kerberos, S/MIME, TLS. It's also common in some widely used non-cryptographic protocols like SNMP and EMV (chip-and-PIN and contactless payment for credit cards). Even if you're using JOSE or COSE or SSH (which are not based on ASN.1), ASN.1-based PKCS standards are often still used for storing the keys. And this is completely ignoring all the telco standards. ASN.1 is everywhere.

lukeasch21 11 hours ago
X.25 and other ITU specs won out massively in aviation, and they are just recently starting to go through the slow painful process of moving to IP. We'll probably see it hanging around for at least another 15 years in that sector.
userbinator 11 hours ago
H.261-264 video codecs, depending on your definition of "win".
bigfatkitten 13 hours ago
> In the end, internet swallowed telephones and everything is now VOIP.

Using ITU voice codecs!

buttocks 59 minutes ago
And still, ITU modem standards and T.30 … yes, fax still lives!
ghaff 1 day ago
And you could add any number of the big standards group-based standards that a great deal of blood, sweat, and tears were poured into. Not universally the case, but more true than false.
SV_BubbleTime 1 day ago
As X.509 goes, I doubt many could explain it offhand, with BER, DER and others being subsets of ASN.1, and other obscura.

I’ve never been a fan

AnimalMuppet 10 hours ago
At the time, when there were so many different platforms still in existence, "easier to implement" was in fact a major component of "better".
pseudohadamard 1 day ago
It's not so much that SMTP won, it's that X.400 lost because it suuuuucked. Anyone who's ever had to work with that piece of s*t, as opposed to rhapsodising over what it could theoretically do, can tell you stories about this. It made Microsoft Mail and Lotus Notes look good in comparison. Notes actually did X.400, so imagine Notes but even suckier.
p_l 1 day ago
A lot of the IETF standards winning came down to vendors avoiding work, even when paid for.

Another factor was NIH in some important places.

Yet another was that ITU standards promoted the use of compilers generating serialization code from a schema, and that required having that compiler. One thing I found out from trying to rescue some old Unix OSI code was that the compiler most popular at many universities was apparently total crap.

In comparison, you could plop a grad student with telnet to experiment with SMTP. Nobody cared that it was shitty, because it was not supposed to be used long. And then nobody wanted to invest in better.

pabs3 1 day ago
The critical part of that quote is "Like a car with no brakes or seatbelts."
pjc50 1 day ago
It doesn't seem to have worked out like that? You might as well say "like a car without a man walking in front of it with a red flag" https://en.wikipedia.org/wiki/Red_flag_traffic_laws
bragr 1 day ago
That's a partisan framing. Another framing could be that SMTP is the golf cart SMBs were asking for, not the car they were being sold.
msla 1 day ago
Yes, the TCP/IP protocol stack beat the OSI protocol stack comprehensively, even down to four layers beating out seven unless you're so wedded to the Magic Number of Seven that you see Session as distinct from Application in the modern world, like how Newton was so wedded to seeing Seven Shades of Light in a spectrum he was sure to note indigo as distinct from violet in the rainbow.

(Presentation and Session are currently taught in terms of CSS and cookies in HTML and HTTP, respectively. When the web stack became Officially Part of the Officiously Official Network Stack is quite beyond me, and rather implies that you must confound the Web and the Internet in order to get the Correct Layering.)

https://computer.rip/2021-03-27-the-actual-osi-model.html - The Actual OSI Model

> I have said before that I believe that teaching modern students the OSI model as an approach to networking is a fundamental mistake that makes the concepts less clear rather than more. The major reason for this is simple: the OSI model was prescriptive of a specific network stack designed alongside it, and that network stack is not the one we use today. In fact, the TCP/IP stack we use today was intentionally designed differently from the OSI model for practical reasons.

> The OSI model is not some "ideal" model of networking, it is not a "gold standard" or even a "useful reference." It's the architecture of a specific network stack that failed to gain significant real-world adoption.

addaon 12 hours ago
I still think the missing opportunity with e-mail was for the USPS (back in the US-dominant internet days) to take a leading role and implement "e-stamps." Provide a subscription service that managed a per-user account, cost a 1¢ stamp to send a message, and guaranteed delivery of messages received with a 1¢ stamp on them -- with the received stamp value being put in the user's account, so a user who received more mail than they sent would never spend a penny. (Messages received from other services could be rejected, delivered, or binned for later inspection at the user's discretion.) This would have the obvious downside of centralizing a major early-Internet feature (although federation is certainly possible as well), but it would have the upside of penalizing companies sending millions of e-mails, but not users using it for person-to-person communication, or companies using it for per-(valuable)-customer communication. We could have had a world without spam… and if USPS took 10% off the top (0.9¢ of each incoming message given to the user account), or similar, I could imagine it having a big impact on their budgetary issues.
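The accounting in that scheme is simple enough to sketch (the 1¢ stamp and 10% cut are the figures proposed above; the names are invented for illustration):

```python
# Toy ledger for the hypothetical "e-stamp" scheme described above:
# sending costs the sender a 1-cent stamp, and the receiver is
# credited the stamp minus a 10% USPS cut.
from collections import defaultdict

STAMP_CENTS = 1.0
USPS_CUT = 0.10

balances = defaultdict(float)

def send(sender: str, receiver: str) -> None:
    balances[sender] -= STAMP_CENTS
    balances[receiver] += STAMP_CENTS * (1 - USPS_CUT)

# Two people exchanging mail roughly break even...
send("alice", "bob")
send("bob", "alice")
print(round(balances["alice"], 2))  # prints "-0.1" (a tenth of a cent)

# ...while a bulk mailer pays full freight: a million messages cost
# 1,000,000 cents = $10,000, regardless of how many get read.
spam_cost_dollars = 1_000_000 * STAMP_CENTS / 100
print(spam_cost_dollars)  # prints "10000.0"
```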
halJordan 11 hours ago
The physical USPS works because the USPS controls every inbox and every outbox; everyone has to have an inbox/outbox with the single carrier, and no one can actually reject or refuse mail. It would have all the downsides of iMessage, plus the government reading your email, because it's not an encrypted protocol. Spam exists in the real world; this wouldn't have worked either.
addaon 10 hours ago
> Spam exists in the real world, this wouldn't have worked either.

A two-or-more order-of-magnitude reduction in a problem seems like a good start and a worthwhile step, not something to disregard because it's not 100%…

petcat 27 minutes ago
The USPS is paid by spammers to ensure delivery of their physical spam. Do you think they wouldn't have also accepted payment from spammers to ensure delivery of their internet spam?
waynecochran 9 hours ago
Yes. I don't know if this is exactly the recipe, but something akin to this could have .. no, should have .. existed. Probably 1¢ is too much. Also, full public key encryption and digital signatures should be easily integrated by now as well. I know, the whole trust problem ... yadda yadda ... I hardly read my email at all anymore; I want everyone who needs to get hold of me to not rely on email.
Ferdinandpferd 11 hours ago
I found the artificial-cost ideas interesting at the time, but I think the ad landscape shows it doesn't really work. All but the least sinister scammers happily pay well for ads, and they'd have to be actively prevented from buying them unless financial regulations could stop them laundering proceeds back into more ads.
TZubiri 11 hours ago
It's worth noting the difference between a fixed cost for sending a message, and a fixed inventory of messaging, and an auction bid system where bids are maximized by competition unless bidders form a cartel.

Funnily enough, if collusion is prohibited, the goal of such a law would be more competition, but the result is more mergers and monopolies, up until the point where antitrust kicks in and ad-hoc limits the monopoly, so each industry ends up with 1 bidder, or 2-3 tops

kelnos 10 hours ago
> We could have had a world without spam

I doubt it. USPS charges everyone to send snail mail, and I get plenty of spam in my mailbox. I end up with way more spam in my snail mailbox than in my email inbox, since the latter has filtering.

pembrook 11 hours ago
Not sure it was a big missed opportunity to create a communication protocol that...financially penalizes communication?

Sounds like a really fast way to kill a network instead of growing it into a 4B-daily-active-user staple like email is today. You'd basically ensure that email would ONLY be spam, because marketers would be the only ones willing to spend money to reach people.

Every time I see someone suggest micropayments on HN I have to wonder if people here have any understanding of how actual humans are. Turning every action on your network into a purchase decision is a good way to ensure nobody ever does anything on your network and thus it never becomes a network.

Humans will always gravitate toward the lowest friction way to achieve their goals. So immediately some private company would introduce a free communication channel as a loss leader instead, theirs would grow faster, and then they'd monetize via ads once their network reached critical mass (see also, whatsapp). Killing the more egalitarian decentralized protocol in the process.

addaon 11 hours ago
Not all communication has positive value. 99.9% of the e-mail I receive not only has no value in itself, but the overhead of managing it, ignoring it, and categorizing it is highly negative -- and decreases the value of the valuable e-mail I receive, because I can't be arsed to check it promptly or consistently because of the overhead of the dreck. But as others point out, even charging money would only reduce spam by an order of magnitude or two, not entirely -- and since I send 1 - 10 actual e-mails a week, I only need to receive a dozen a week to never pay a penny.

My primary goal is not to send e-mail for free -- my primary goal is to have reliable, low-overhead communication with humans. Having this sponsored by spammers is a fine start, but even if I paid a dollar a year or so, that would be much lower overhead than even a day's worth of looking through spam is today (at the rate I value my time -- but even if you value your time orders of magnitudes less, the payoff is there).

kevin_thibedeau 11 hours ago
This is what Xanadu and OSI were going to deliver: real world pay services recast on electronic networks. That could never compete against unmetered communication delivered by the likes of FidoNet, Compuserve, and the open internet protocols.
altairprime 11 hours ago
[dead]
twobitshifter 11 hours ago
My physical mailbox full of junk mail says that spam would still exist.
WalterBright 11 hours ago
3 or 4 items, sure. But my email account gets several hundred per day.
maguay 8 hours ago
And yet, when the USPS did deliver email (via paper, no less, with their E-COM system), over half of the message volume was sent by one mass-mailer: https://buttondown.com/blog/the-e-com-story

Afraid the spammers will always be with us.

TZubiri 11 hours ago
Have you heard of Hashcash? It proposes a similar mechanism, proof-of-work postage for email, with some interesting theoretical consequences.
ogurechny 1 day ago
An article from Microsoft Systems Journal in 1993 ends with a bunch of different electronic mail addresses:

https://jacobfilipp.com/MSJ/1993-vol8/qawindows.pdf

By 1995, the “Internet” e-mail address was the only remaining one.

SllX 5 hours ago
1993 "socials".
jerjerjer 1 day ago
> If the history of email had gone somewhat differently, the last email you sent could have been rescinded or superseded by a newer version when you accidentally wrote the wrong thing. It could have auto-destructed if not read by midnight.

Immutability is one of the best things about email.

Gigachad 12 hours ago
As a platform for sending invoices and official communications it’s fine. As a way for people to talk with each other it sucks. These days I’m of the opinion that most messaging should just be auto deleted after a month. If there’s something particularly important you want to keep, note it down. Otherwise just let it be forgotten.
silon42 1 day ago
Certainly it should be immutable if read.
amelius 23 minutes ago
Yes, messaging apps like WhatsApp have some very desirable features that are missing in e-mail.

I wish someone would write some RFCs and e-mail could get an update.

PunchyHamster 1 day ago
> C=no; ADMD=; PRMD=uninett; O=uninett; S=alvestrand; G=harald

that would be a very annoying way to write e-mail addresses, and no less prone to typosquatting (if anything, more)

Both standards lacked the hindsight we have today, but X.400 would just be added complexity (as years of tacked-on extensions built upon it) that makes error-free parsing harder

msla 1 day ago
Plus, having to change email addresses when you physically move, in addition to when you change providers, would be immensely annoying.
giantrobot 14 hours ago
Ah, but the solution was an X.500 directory where you just look up the recipient! So you never type the e-mail address; you just look up "Joe Smith" to send them an e-mail, like looking them up in the phone book. Ignore the fact that the directory may return multiple Joe Smiths at the same large organization, or not return the Joe Smyth you wanted to message, or that there's not even a hint of anonymity with such directories. Oh, and the internal organization of a company could be easily enumerated from the outside.
philipstorry 1 day ago
SMTP won because it was simpler, but it's probably good to look at why it was simpler.

SMTP handled routing by piggybacking on DNS. When an email arrives, the SMTP server looks at the domain part of the address, does an MX query, and then attempts to transfer the message to the results of that query.

Very simple. And, it turns out, immensely scalable.

You don't need to maintain any routing information unless you're overriding DNS for some reason - perhaps an internal secure mail transfer method between companies that are close partners, or are in a merger process.
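A minimal sketch of that routing step, using a made-up in-memory table in place of a live DNS MX lookup (the hostnames and preference values below are hypothetical):

```python
# Stand-in for DNS: domain -> list of (preference, exchange) MX records.
# A real MTA would query DNS here instead of consulting a dict.
MX_TABLE = {
    "example.com": [(20, "backup-mx.example.com"), (10, "mx1.example.com")],
    "example.org": [(10, "mail.example.org")],
}

def route_message(rcpt_addr: str) -> str:
    """Return the next-hop host for a recipient, as an MTA would:
    take the domain part of the address, look up its MX records,
    and pick the lowest-preference (most preferred) exchange."""
    domain = rcpt_addr.rsplit("@", 1)[1].lower()
    records = MX_TABLE.get(domain)
    if not records:
        # SMTP's fallback when no MX exists: deliver to the domain's
        # own host (its A record).
        return domain
    preference, exchange = min(records)  # tuples sort by preference first
    return exchange

print(route_message("harald@example.com"))  # mx1.example.com
```

No per-organisation route tables, no coordination: anyone who publishes an MX record is instantly reachable.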

By contrast X.400 requires your mail infrastructure to have defined routes for other organisations. No route? No transfer.

I remember setting up X.400 connectors for both Lotus Notes/Domino and for Microsoft Exchange in the mid to late 90s, but I didn't do it very often - because SMTP took over incredibly quickly.

An X.400 infrastructure would gain new routes slowly and methodically. That was a barrier to expanding the use of email.

Often X.400 was just a temporary patch during a mail migration - you'd create an artificial split in the X.400 infrastructure between the two mail systems, with the old product on one side and the new target platform on the other. That would allow you to route mails within the same organisation whilst you were in the migration period. You got rid of that the very moment your last mailbox was moved, as it was often a fragile thing...

The only thing worse than X.400 for email was the "workgroup" level of mail servers like MS Mail/cc:Mail. If I recall correctly they could sometimes be set up so your email address was effectively a list of hops on the route. This was because there was no centralised infrastructure to speak of - every mail server was just its own little island. It might have connections to other mail servers, but there was no overarching directory or configuration infrastructure shared by all servers.

If that was the case then your email address would be "johnsmith @ hop1 @ hop2 @ hop3" on one mail server, but for someone on the mail server at hop1 your email address would be "johnsmith @ hop2 @ hop3", and so on. It was an absolute nightmare for big companies, and one of the many reasons that those products were killed off in favour of their bigger siblings.

rogerbinns 1 day ago
> ... why it was simpler.

In the early 90s I implemented a gateway between Novell email and X.400. What amused me most was that X.400 specified an exclusive enumerated list of reasons why email couldn't be delivered, including "recipient is dead". At the X.400 protocol level this was a binary number. SMTP uses a three-digit number for the general category, followed by a free-form line of text. Many other Internet standards, including HTTP, use the same pattern.

It was already obvious at the time that the X.400 field was insufficient, yet also impractical for mail administrators to ensure was complete and correct.

That was the underlying problem with the X.400 and similar where they covered everything in advance as part of the spec, while Internet standards were more pragmatic.
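That pattern, a coarse numeric category plus free-form text, is easy to sketch. The reply lines below are representative examples, not from any particular server:

```python
def parse_smtp_reply(line: str):
    """Split an SMTP-style reply into its 3-digit code, coarse
    category, and free-form human-readable text (RFC 5321 style)."""
    code, _, text = line.partition(" ")
    categories = {
        "2": "success",
        "3": "intermediate",
        "4": "transient failure",
        "5": "permanent failure",
    }
    return int(code), categories[code[0]], text

# The text portion can say anything, including failure reasons no
# committee anticipated -- there is no enumerated list to keep complete.
print(parse_smtp_reply("550 5.1.1 Recipient address rejected"))
```

The first digit is all a client needs to act on; the rest is for humans, which is exactly why the scheme never needed a "recipient is dead" code.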

chuckadams 13 hours ago
> so your email address was effectively a list of hops on the route

Who can forget addresses like "utzoo!watmath!clyde!burl!ulysses!allegra!mit-eddie!rms@mit-prep"

somat 11 hours ago
Ehhh.. This is a bit revisionist, for a couple of reasons.

1. SMTP predates DNS, and really even most of the internet; early mail was carried over UUCP links rather than TCP/IP.

2. Early email used bang paths (remember those?), where the route, or a partial route, was baked into the address.
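A bang path was just the route spelled out hop by hop, so parsing one is trivial. A minimal sketch of the pure bang-path form (hybrid addresses with a trailing @host, as in chuckadams's example elsewhere in the thread, also existed):

```python
def parse_bang_path(path: str):
    """Split a UUCP-style bang path into its relay hops and the final
    user. Each named host had to explicitly forward the message to the
    next, so the route itself WAS the address."""
    *hops, user = path.split("!")
    return hops, user

hops, user = parse_bang_path("utzoo!watmath!clyde!burl!rms")
print(hops)  # ['utzoo', 'watmath', 'clyde', 'burl']
print(user)  # rms
```

If any intermediate host went away, your address stopped working, which is exactly the fragility MX-based routing later removed.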

themafia 10 hours ago
Of course, for reliability, you could even bake multiple paths into the envelope address.
pnw 13 hours ago
My first job at college was wrangling campus email, both X.400 and SMTP. As the article points out, SMTP won out because it was simple and developed in the open, not buried in standards committees, and SMTP code was widely available. It was the Cathedral and the Bazaar hypothesis playing out in real time.

Just seeing that X.400 notation is giving me bad memories!

thund 10 hours ago
wow, how to romanticize X.400 ...

- poor Internet fit: assumed managed, trusted networks

- some promises depended on all participating systems behaving honestly

- once a message reaches another server, you cannot guarantee it isn't copied, backed up, or logged

- X.400 read receipts: more reliable but also more privacy invasive

- X.400 metadata: carries a lot of routing, classification, and organizational info leading to potential privacy leaks

- SMTP is ugly but observable, you don't need a standard specialist to debug issues

grandinj 1 hour ago
Yeah, as someone who had to implement a protocol stack to talk to an X.400 server, it was not fun at all. Weird encodings, a monster spec, and all sorts of server-specific quirks you had to get exactly right if you wanted the server to accept your email.

Compared to that, when I implemented RFC821/822 (i.e. SMTP) mail, the hardest part was the weird line-encodings, but other than that, the spec was ___so___ nicely readable and pragmatic.

throwaway_ocr 1 day ago
X.400 is still in use today for things like sending invoices and orders through EDI.

Yes, it is a pain to manage. Yes, it is all still mostly running on 20+-year-old hardware and software.

It is slightly ironic that the main way we communicate X.400 addresses between parties is through modern email.

roryirvine 1 day ago
Is that actually true today? When I was doing EDI stuff ~20 years ago, it was mostly done using FTP, with some forward-thinking orgs moving to SFTP or (HTTPS-based) AS2.

I see that Wikipedia claims that "X.400 is quite widely implemented[citation needed], especially for EDI services", and that might once have been the case - but I doubt it was particularly widespread even at the time that article was first written. It's worth noting that that [citation needed] tag dates from October 2008!

foresto 13 hours ago
> The ugly addressing? It “provides solutions to certain problems and is ugly for good reason,” Betanov explains. “Make it less ugly, and it immediately loses functionality. Thus, the solution is not to make addressing nicer, but to hide it from the user,” something both internet email and X.400-powered software could easily do with headers, not so much with addresses.

Reminds me of IPv6. ;)

fulafel 1 day ago
For anyone wondering about the rest of the X standards, they're at: https://www.itu.int/itu-t/recommendations/index.aspx?ser=X

For example from 2023: X.1095: Entity authentication service for pet animals using telebiometrics

gadders 1 day ago
My first business card when I was working for a tech company had an X.400 address on it. Nobody was memorising that. Or writing it down quickly.
ExoticPearTree 1 day ago
This is an example of how simplicity won over features.

Not even then, when people with access to computers numbered probably in the thousands, would anyone have liked to type "C=no; ADMD=; PRMD=uninett; O=uninett; S=alvestrand; G=harald", as in the example from the article.

rjsw 1 day ago
You were not supposed to type it out, you looked it up using your X.500 directory.
bombcar 1 day ago
All we need is an x.500 directory of all addresses in the world, which won't be abused by anyone at anytime!
slackfan 1 day ago
However did we live during the era of the White Pages phone directory?
toast0 1 day ago
Sure, but then you have the problem of figuring out which Sarah Connor in Los Angeles.

To say nothing of popular names.

aworks 1 day ago
My name is not particularly common although I was the first to claim firstname.lastname@gmail.com. I've been getting email intended for other people with the same name for decades.

I've seen estimates that there are only 10,000 people with my last name in the US. Back in the days of local telephone directories, I was always the only one with that last name.

Internet scaling is an interesting thing. I don't know if I feel less unique or that I'm in an exclusive club.

kstrauser 13 hours ago
I registered [my HN username]@yahoo.com many, many years ago. Once a year I log into that mail account and I'm always amazed at how many other people have decided to give out that email, at Yahoo! of all places, as their own. Why? Just, why?
bombcar 1 day ago
Spam and scam had to work on a human scale, via locals paid something resembling a living wage, not automated machines sending millions a second or people working for pennies a day.

I want a phone that can only ring if the source of the call is within artillery range.

thaumasiotes 1 day ago
Is this an example of simplicity winning over features, or an example of features that are advertised but don't exist failing to win over the competition?

Some examples from the article:

> You could have messaged an entire organization or department

This is a mailing list.

> So it was possible, say, for one implementation of X.400 to offer X.400 features like recalling a message, in theory at least, when such guarantees would fail as soon as messages left their walled garden. But “they couldn't buck the rules of physics,” Borenstein concluded. Once a message reached another server, the X.400 implementations could say that an email was recalled or permanently deleted, but there was no way to prove that it hadn’t been backed up surreptitiously.

This is a feature that (1) is in the spec, and also (2) is impossible to implement. That's not a real feature. It's a bug in the spec.

> You don’t email with X.400 today. That is, unless you work in aviation, where AMHS communications for sharing flight plans and more are still based on X.400 standards (which enables, among other things, prioritizing messages and sending them to the tower at an airport instead of a specific individual).

This is... also a mailing list. There's nothing difficult about having an email address for the tower. That email could go to one person, or many people. What's the difference supposed to be? What "feature" are we saying X.400 has that email didn't start with?

jech 13 hours ago
>> You could have messaged an entire organization or department

> This is a mailing list.

The way I understand it, the layering is different. In X.400, multicasting was a feature of the protocol. An SMTP mailing list, on the other hand, is an endpoint that terminates a protocol transaction, and then initiates one transaction for each final recipient.

I guess it boils down to where it is preferable to have the extra complexity: the ITU-T protocols invariably prefer to put it inside the network, while the Internet protocols prefer to put it at the endpoints. The SMTP protocol is simple, and therefore the mailing list software needs to be complex.
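That layering difference can be sketched: an SMTP-style list expander terminates one inbound transaction and fans out fresh per-recipient transactions, all outside the protocol itself. The list name and member addresses below are made up:

```python
# Stand-in subscriber database for a hypothetical list.
LISTS = {
    "dev@example.com": ["alice@a.test", "bob@b.test", "carol@c.test"],
}

def expand_list(mail_from: str, rcpt_to: str, body: str):
    """Accept one inbound message addressed to the list, then emit one
    new (envelope sender, recipient, body) transaction per subscriber.
    The envelope sender is rewritten to the list's bounce address, the
    usual convention so bounces return to the list, not the author."""
    bounce = "dev-bounces@example.com"
    return [(bounce, member, body) for member in LISTS[rcpt_to]]

txns = expand_list("harald@uninett.no", "dev@example.com", "hello")
print(len(txns))  # 3
```

The SMTP network never sees "multicast"; it just sees three ordinary messages, which is the endpoint-complexity trade-off described above.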

elzbardico 1 day ago
Working, free implementations are better than a perfect specification supported only barely and incompletely by closed, expensive implementations.
ChrisMarshallNY 10 hours ago
Argh. That red book. I may still have my copy around, somewhere.

X.400 was an “all things, to all men” solution; kinda like TIFF, for images.

I worked on an X.400 product, that never got out of the crib.

You could do things like specify the route that the email took, which was important, because there was support for microtransactions, all along the way. You could do things like pay extra for “premium delivery,” and “registered”-like messages.

It was really crazy. It did work, though.

The issue with specs like that, however, is they only ever get partially implemented. If you have an infrastructure, composed of many partial steps, it can be a mess.

cwillu 1 day ago
“If the history of email had gone somewhat differently, the last email you sent could have been rescinded or superseded by a newer version when you accidentally wrote the wrong thing. It could have been scheduled to arrive an hour from now. It could have auto-destructed if not read by midnight.”

That would have required a lot of changes to computing history beyond simply email, and I doubt many of them would have been improvements.

sinnickal 12 hours ago
Having PP flashbacks right now.. You weren't there man... you don't know!
EvanAnderson 1 day ago
The X.400 world would have had different spam economics because metered usage by your telco (who would be acting as a "Value Added Network" provider and delivering your X.400 mail) would likely have been the norm. As other comments have pointed out, this is still A Thing today with X.400 VANs being used for EDI.
computersuck 1 day ago
More like X.400 times convoluted
dreamcompiler 1 day ago
Gall's Law:

"A complex system that works is invariably found to have evolved from a simple system that worked."

https://lawsofsoftwareengineering.com/laws/galls-law/

In my naive youth I always thought top-down design was the sensible way to build systems. But after witnessing so many of them fail miserably, I now agree with Gall.

beng-nl 1 day ago
Well said. And similarly, it always seems to be the simple, bottom-up, "let's just build something simple and minimal that works" projects that get iterated on that do well, then start to strain when the technical debt and complexity accumulate.
jgalt212 2 days ago
> You could have been notified when the message was read a full 15 years before email had something similar tacked on.

Thanks to email security scanners this feature is largely broken.

And so are single click to unsubscribe links. So much so that we have to put our unsubscribe page behind a captcha.

rant over

hilariously 1 day ago
Not trying to be rude, but if you put your unsubscribe page behind a captcha, I am going to mark you as spam and move on.
AdamN 1 day ago
I don't unsubscribe unless I explicitly subscribed in the past. If I did not subscribe in the first place, then it's spam (exception for small businesses who may not know better, in which case I'll delete or unsubscribe).
sam_lowry_ 1 day ago
There are unsubscribe headers that are used by mail user agents like mutt to unsubscribe from mailing list managers like mailman.

These are "scanner-proof" so far but support in clients like Outlook or Gmail is non-existent.

chuckadams 13 hours ago
Gmail not only understands the List-Unsubscribe header, it requires it for bulk deliverability.
kstrauser 13 hours ago
I'll try unsubscribing once if it looks like a legitimate org, like someone I actually did business with but didn't expect them to email me. After that, it's going to the junk box to train the server what spam looks like.
throw0101d 1 day ago
> You could have been notified when the message was read a full 15 years before email had something similar tacked on.

Which spammers and marketers would have loved.

I have "load remote content" disabled on my e-mail client so that tracking graphics/pixels do not leak such information to the sender.

jgalt212 1 day ago
> I have "load remote content" disabled on my e-mail client so that tracking graphics/pixels do not leak such information to the sender.

Often that's meaningless, as email scanner software will load and inspect all links and images regardless of the human's email client preferences. It basically comes down to whether Constant Contact, or similar, can detect if a link was clicked by security software or an actual human. And security software wants to look like an actual human, because if security software looks like security software, it's very easy for bad actors to serve safe payloads to security software and malware payloads to human actors.

dpark 1 day ago
Are you saying that email scanners were not only fetching the unsubscribe link but also submitting the “unsubscribe” button/form on the page?

I find this hard to believe since everyone else seems to manage this without a Captcha.

yencabulator 1 day ago
It does sound like they made an HTTP GET request have side effects.
dpark 1 day ago
I suspect that’s exactly what they did. And then they “solved” it with a Captcha. Conveniently I bet human unsubscribes also dropped when that was instituted.
cap11235 1 day ago
If I cannot just click a button and unsubscribe, guess what, you are malicious spam.
bombcar 1 day ago
And if you can't figure out how to make an unsubscribe page that doesn't require a captcha (and is triggered by email scanners) you are incompetent. Claude can figure it out.
sam_lowry_ 1 day ago
> and is triggered by email scanners

Did you mean "and is NOT triggered by email scanners"?

AFAIU, "email scanners" get more aggressive over time, so there is no once-and-forever solution. I guess AI-enabled email scanners can attempt to solve captchas as well.

lxgr 12 hours ago
Yeah, use `List-Unsubscribe`. Has the additional advantage that I don't need to find the "unsubscribe" link at the bottom of some bloaty HTML, works across languages etc.

If the email scanner of your recipient insists on clicking "unsubscribe" on their behalf without that being the desired outcome, that's not on you to prevent.

bombcar 1 day ago
I mean if your unsubscribe link unsubscribes someone just because Microsoft Email Phishing for Copilot visited the link to see if it was a Virus, then you need to “get gud” as the kids say.
sam_lowry_ 1 day ago
The fault lies with Microsoft Email Phishing for Copilot or with the European Commission.
jgalt212 1 day ago
Email scanners don't exactly publish the methods by which you can reliably determine if a page was loaded or a link was clicked by a security scanner. If they did not appear human, then they'd be easy to trick and then not do their security job well.
lxgr 12 hours ago
> we have to put our unsubscribe page behind a captcha.

Hope you're not ever sending email to EU residents!

Have you ever heard of `List-Unsubscribe`? It solves your problem without massively annoying people and breaking accessibility and/or the law.
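For reference, a sketch of setting those headers with Python's stdlib; the URL and addresses are placeholders. Per RFC 8058, one-click clients send a POST to the HTTPS URL, so a scanner merely GET-ting the link causes no unsubscribe:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Monthly newsletter"
msg["From"] = "news@example.com"
msg["To"] = "reader@example.org"
# RFC 2369: both a mailto fallback and an HTTPS endpoint.
msg["List-Unsubscribe"] = (
    "<mailto:unsub@example.com?subject=unsubscribe>, "
    "<https://example.com/unsub?token=opaque>"
)
# RFC 8058: one-click clients POST this exact body to the HTTPS URL,
# so a plain GET by link-scanning security software has no effect.
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("Newsletter body goes here.")
```

The token in the URL should identify the subscription server-side; the unsubscribe action happens only on the signed POST, never on a GET.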

WhyNotHugo 1 day ago
I think you're referring to things like tracking pixels, whereas the author was likely referring to _actual_ email read receipts, where the sender can request a read receipt, and the receiver's MUA will prompt them to send one.
jgalt212 1 day ago
Yes, same feature, different implementations.
trollbridge 1 day ago
No, it’s largely broken because of spam. I don’t want to be signed up to your useless email marketing list, and I want to use an email client that makes unsubscribing as easy as possible.
jgalt212 1 day ago
> I don’t want to be signed up to your useless email marketing list,

useless is in the eye of the beholder.

dpark 1 day ago
If I didn’t specifically opt in to receiving marketing emails (and no, failing to opt out is not the same), they are spam. I’ve never heard anyone say “I’m sure glad this company added me to their email list without my request.”

The fact that you happen to work on a mailing list product does not change that reality.

jgalt212 1 day ago
I hear what you're saying, but irrespective of how one landed on such a list, the unsubscribe mechanism is broken. e.g. It's entirely possible and likely you've subscribed to one or more marketing lists, newsletters, transaction emails, etc that you want to be on, but your security software inadvertently unsubscribed you (without your permission).
lxgr 12 hours ago
No, it's not, because I don't use shitty security solutions.

If other people do and you are making me jump through hoops as a result to preserve your conversion rate, I'm reporting you to the relevant regulator.

> the unsubscribe mechanism is broken

Which one?

Are you saying some security solutions actually send a `List-Unsubscribe`/`List-Unsubscribe-Post` compliant HTTP POST with the correct payload, or do you think a URL in the email body is the gold standard of allowing people to unsubscribe?

Or are you just telling yourself that rationalization to avoid acknowledging that you're probably causing massive annoyance to many recipients?

dpark 1 day ago
I think this is extremely unlikely. Firstly because I almost never subscribe to newsletters or marketing lists. But also because I don’t believe my security software is submitting POSTs on random forms it finds links to. That would be insane behavior.

I can believe someone, somewhere has insane security software that does stuff like that. But I don’t believe it’s common.

AnimalMuppet 13 hours ago
That I want to be on? No. What usually happens is that I give my email to somebody (an auto repair place, say), for one-time use, and they add me to their marketing mailing list, even though that is not what I gave them my email for. That is not a list that I want to be on and willingly subscribed to.
mohamed_azeem 1 day ago
[dead]
kstrauser 13 hours ago
Useless is in the eye of the recipient. The sender doesn't get a vote.
lxgr 12 hours ago
And guess who's the beholder of your spam...
xnx 1 day ago
> Thanks to email security scanners this feature is largely broken.

One person's feature is another's anti-feature. I'm glad it's dead.

PunchyHamster 1 day ago
waiting for the inevitable "gmail bad, why does it spam-filter my emails so much" rant
a-dub 1 day ago
i once did a contract for a company that built a product around connectors for legacy lan e-mail products and an x.400 mta. it was a gigantic steaming pile of shit and made me appreciate the simple internet protocols so much more than i already did.