I don't think v6 is the absolute pinnacle of protocol design, but whenever anybody says it's bad and tries to come up with a better alternative, they end up coming up with something equivalent to IPv6. If people consistently can't do better than v6, then I'd say v6 is probably pretty decent.
Not just that. Almost every single thing people think up that's "better" is something that was considered and rejected by the IPv6 design process, almost always for well-considered reasons.
The complaints I hear are pretty much all ignorance, except one: long addresses. That is a genuine inconvenience, and the encoding is kind of crap. Fixing the human-readable address encoding would help.
If you add *any* address bits you've already broken protocol compatibility and you need to upgrade the entire world. While you're already upgrading the entire world, you should add so many address bits that we'll never need more, because it costs the same, and you may as well fix those other niggling problems as well, right?
It still would have been a ton of work, but we could have just had what IPv6 claimed to be: IPv4 with bigger addresses. Except after the upgrade, there'd be no parallel system. And all of DJB's points apply: https://cr.yp.to/djbdns/ipv6mess.html
The people involved in core Internet protocol design were used to the net being a largely walled garden of governments, corporations, universities, and a small number of BBSes and niche ISPs.
Major protocol upgrades had happened before, not just for the core protocol but all kinds of other then-core services.
It had been a while but not that long, I think less than 20 years, and last time it was pretty easy. They assumed they could design something better and phase it in and all the members of the Internet community would just do the right thing.
That’s probably what made them feel they could push a more radical upgrade.
Unfortunately they started this right as the massive tsunami of Internet commercialization hit. Since V6 was too new, everyone went with V4. Now all of a sudden you had thousands of times more nodes, sites, and personnel, all of them steeped in IPv4 and rushing to ship on top of it. You also lost the small-town atmosphere of the early net, where admins were a club and could coordinate things.
Had V6 launched five years earlier V4 would probably be dead.
V6 usage will probably keep creeping up, but as it stands we will likely be dual-stack forever. Once the installed base and sunk cost are this high, the design is fixed and can never be changed without a heavy-handed measure like a government mandate.
In DNS64, whenever your DNS resolver encounters an IPv4-only site, it translates it to an IPv6 address under a translator prefix, and returns that address to the client. The client connects to the translator server via that address, and the translator server opens an IPv4 connection to the website. Your side of the network is IPv6-only, not even running tunneled v4.
This only breaks things to about the same small extent that the introduction of NAT did.
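The synthesis step that DNS64 performs can be sketched in a few lines. This assumes the well-known NAT64 prefix 64:ff9b::/96 from RFC 6052; real deployments may use a network-specific prefix instead.

```python
import ipaddress

# How a DNS64 resolver synthesizes an AAAA answer for a v4-only host:
# embed the IPv4 address in the translator prefix. The translator
# (NAT64) later extracts the low 32 bits to open the v4 connection.
NAT64_PREFIX = ipaddress.IPv6Address("64:ff9b::")

def synthesize(v4):
    return ipaddress.IPv6Address(int(NAT64_PREFIX) | int(ipaddress.IPv4Address(v4)))

print(synthesize("192.0.2.1"))  # 64:ff9b::c000:201
```

The client never sees the IPv4 address at all; it just connects to the synthesized v6 address.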
Yes! They need an alternate encoding form that distills to the same addresses.
My machine's link-local IPv6 address is "fe90::6329:c59:ad67:4b52%8"
If I try to paste that into the address bar in Edge or Chrome (with the https://) it does an internet search on that string! No way around it.
I have to do workarounds like: "http://fe90::6329:c59:ad67:4b52%8.ipv6-literal.net:8081/"
All to test the IPv6 interface on a web server I'm running on my local machine.
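For what it's worth, RFC 6874 defines a bracketed URI form where the zone ID's "%" is percent-encoded as %25. Tools like curl generally accept it, though browser support is poor, which is exactly why workarounds like ipv6-literal.net exist. A small illustrative helper (the address, zone, and port are just the ones from the comment above):

```python
from urllib.parse import quote

def v6_literal_url(addr, zone, port, path="/"):
    """Format a link-local IPv6 literal for a URL per RFC 6874:
    brackets around the address, with the zone ID attached via
    an escaped percent sign (%25)."""
    host = addr if zone is None else "%s%%25%s" % (addr, quote(zone))
    return "http://[%s]:%d%s" % (host, port, path)

print(v6_literal_url("fe90::6329:c59:ad67:4b52", "8", 8081))
# http://[fe90::6329:c59:ad67:4b52%258]:8081/
```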
Yes, it denies simple P2P connectivity. World doesn't need it. Consumers are behind firewalls either way. We need a way for consumers to connect to a server. That's all.
With IPv4 + NAT, you have a public IP address. That public address goes to your router. Your router can forward any port to any machine on your LAN. I used to run Minecraft servers from a residential connection on IPv4, it was fine. Never had to call the ISP.
In many countries they don't have enough, so you have CGNAT.
Still, I do think that the solution of, "one IPv4 address per household + NAT" is a perfectly good system. I view the IPv6 mentality of giving each computer in the world a globally unique IPv6 address as a non-goal.
If you are giving out public IPs then you aren't really NAT'ing.
I also deployed it as a pilot on an internal network. Other than getting direct IPv6 connectivity to some services, which sometimes gave us better performance, it conferred no advantage to us.
IPv6 is great for phones where you don't expect any inbound traffic. Even then, every US carrier is using Carrier NAT to route and proxy traffic for their own purposes.
I don't want our communications infrastructures to be just for consumers.
Unfortunately, the internet is used for a lot more than using one of the six gigantic centralized websites.
Speaking of that, why don't we just keep ipv4 for ourselves and let them eat ipv6?
Worth pointing out that this article was written by the now-CEO of Tailscale. I don't know if "The world doesn't need P2P connectivity" is a compelling take.
I do wish ISPs would refrain from intentionally breaking things though. It ought to be illegal for them to block specific ports or filter specific sorts of traffic absent a pressing and active security concern.
Roughly, it's my belief that an IPv6 world makes it easier for centralizing forces and harder for local p2p or p2p-esque ones; e.g. an IPv6 world would have likely made it easier to do bad things like "charge for individual internet user in a home."
The decentralization of "routing power" is more a good thing than bad, what you pay for in complexity you get back in "power to the people."
This idea comes up in every HN conversation about IPv6, and so I suppose this time it's my turn to point out RFC 8981[0]. tl;dr: typically, machines which receive IPv6 address assignment via SLAAC (functional equivalent of DHCP) periodically cycle their addresses. Supposed to offer pretty effective protection against host-counting.
The only reason it's around is the sunk-cost fallacy and people stuck in decades-old tech debt. A new protocol designed today would be different, much the same as Rust is different from Ada. SD-WAN wasn't a thing in 1998; the cost of chips and the demands of mobile customers weren't either. Supply/demand economics have changed the very requirements behind the protocol.
Even concepts like source and destination addressing should be re-thought. The very concept of a network layer protocol that doesn't incorporate 0RTT encryption by default is ridiculous in 2026. Even protocols like ND, ARP, RA, DHCP and many more are insecure by default. Why is my device just trusting random claims that a neighbor has a specific address without authentication? Why is it connecting to a network (any! wired,wireless, why does it matter, this is a network layer concern) without authenticating the network's security and identity authority? I despise the corporatized term "zero trust" but this is what it means more or less.
People don't talk about security, trust, identity and more, because ipv6 was designed to save networking gear vendors money, and any new costly features better come with revenue streams like SD-WAN hosting by those same companies. There are lots and lots of new things a new layer-3 protocol could bring to the scene. But security aside, the main thing would be replacing numbered addressing with identity-based addressing.
It all comes down to how much money it costs the participants of the RFC committees. Given how dependent the world is on this tech, I'm hoping governments intervene. It's sad that this is the tech we're passing to future generations. We'll be setting up colonies on Mars, and troubleshooting addressing and security issues like it's 2005.
That's false. Firstly, rfc1883 was published in 1995 which means work started some time before that, and the RFC process included operating system vendors and RIR administrators. The primary author of rfc1883 worked at Xerox Parc, and the primary author of rfc1885 worked at DEC. Neither were networking gear companies.
https://www.ietf.org/process/rfcs/
> Proposed Standard (PS). The first official stage. Many standards never progress beyond this level.
> Draft Standard. An intermediate stage that is no longer used for new standards.
> Internet Standard. The final stage, when the standard is shown to be interoperable and widely deployed.
I don't know much about MPLS and only know IP routing, but the quote above sounds very hand-wavy. How do you route "identity-based addressing"?
It is far from hand-waving. Right now we have numeric addressing, where routers look at bits and perform ASIC-friendly bitwise (and other) operations on that number to forward a lot of traffic really fast for cheap.
Identity and trust establishment won't be part of the regular data flow, but at network connection time, each end-device will discover the network authority it has connected to, and build trust that allows it to validate identities in that network, including address assignments, neighbor discovery, name resolution and verification, authorized traffic forwarders (routers) and more.
After the connection is established and the network is trusted, as part of the connection establishment, the network authority designates how addressing should be done. If Alice's iPhone wants to connect to Bob's server, it will encrypt the data and, as part of a very slim header, designate Bob's server's cryptographic identifier, the destination service identifier, and its own cryptographic identifier for the first packet. To reduce overhead, subsequent traffic can use a simple hash of the connection identifiers mentioned earlier.
When devices come online in the network, their cryptographic identifiers become known to the entire network, including intermediate routers. Routing protocols work with the identity authority of the network to build forwarding tables based on cryptographic identifiers and, for established sessions, session IDs.
"Cryptographic identifier" is also not a hand-wavy term. What it means must be dynamic, so as to avoid protocol updates like the v4-to-v6 one over addressing. V6 presumed that just having lots of bits is enough. An ideal protocol will allow the network itself to communicate the identifier type and byte size. For example, an FQDN or an IPv4-style address could be used directly, or a public-key hash using a hash algorithm of the network's choice. So long as the devices in the network support it, and the end device supports it, it should work fine.
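A toy encoding for such a network-negotiated identifier might be a simple type/length/value layout. Everything here, type codes included, is hypothetical, just to show the concept isn't magic:

```python
import hashlib

# Hypothetical self-describing identifier: one type byte and one
# length byte in front of the identifier bytes, so the network can
# choose (and later change) the identifier kind without a protocol
# revision. The type codes are made up for illustration.
ID_IPV4, ID_FQDN, ID_KEYHASH_SHA256 = 1, 2, 3

def encode_id(kind, payload):
    return bytes([kind, len(payload)]) + payload

def decode_id(buf):
    kind, length = buf[0], buf[1]
    return kind, buf[2:2 + length]

pubkey_hash = hashlib.sha256(b"alice-public-key").digest()
wire = encode_id(ID_KEYHASH_SHA256, pubkey_hash)
assert decode_id(wire) == (ID_KEYHASH_SHA256, pubkey_hash)
```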
Internet addressing can use a scheme like this, but it doesn't need to. IPv6 took the wrong approach with NAT: it got rid of it instead of formalizing it. We'll always need to translate addresses. But the internet is actually well positioned for this, due to the prevalence of certificate authorities, though it will require rethinking foundational protocols like DNS, BGP, and the PKI infrastructure.
But my original point wasn't this, it was that tech has come far, our requirements today are different than 30 years ago. Even the OSI layered model is outdated, among other things.
This is just a proposal I thought of as I was typing this; smarter people who can sit down and think through the problem can come up with better protocols. I only proposed it to demonstrate the concept isn't hand-wavy or ridiculous.
IPv6 was relatively rushed to meet the address-shortage issue of IPv4 while at the same time trying to solve lots of other problems. The next network-layer protocol (and we do need one) should have the goal of making networking as a whole adaptable to new and unforeseen requirements (that's why I suggested the network authority be the one to dictate the addressing scheme, and, with it, be responsible for translating it if needed). We're being held back, not just in tech but as a species, because of this short-sighted protocol design! Exaggerated as that statement might sound, it is true.
I'll reserve further discussion on the topic for when it is required, but I hope this prevents more dismissive responses.
You wouldn't need TLS. This scheme I just thought of would actually decentralize/federate PKI a lot more. If you have a public address assigned, your ISP is the IP-CA. I don't want to get into the details of my DNS-replacement idea, but similar to network operators being authorities over the addresses they're responsible for, whoever issued you a network name is also the identity authority over that name (so DNS registrars would be CAs).

Ideally, every device would be named, and the people with logical control over the address would also be responsible for the name and for all identity authentication and claims over those addresses and names. You won't have freaking Google and browsers dictating which CA roots to trust; it will instead be the network you're joining that does that (whether that's your DHCP server or your ISP is up for debate, but I prefer the former).

Ideally, your public key hash is your address. Others reach you by resolving your public key from your identity, and traffic is sent to your public key (or see my sibling comment for the concept of cryptographic identity). All names would of course be free, but what we call "DNS" today would survive as an alias to those proper names. So your device might be guelo.lan123.yourisp.country, but a registrar might sell you a guelo.com alias that points to the former name.
The implications of this scheme are wild, think about it!
Rogue trust providers will be a problem, but only within their own domain. Right now, random CA roots can issue certificates for anything. With the scheme I proposed, your country can mess with its own traffic, as can your ISP, as can you over your LAN. You won't be able to spoof traffic for a different LAN or ISP using their name.
Solve all the problems at their foundations!
Which public key you want to route to is above the network layer.
I think they "shipped it" and washed their hands of it.
But I think there should have been more iterations, until we got a little more ipv4+ and less ipv6.
Everything since has been round after round of RFCs trying to adapt IPv4 workarounds to the IPv6 world.
https://news.ycombinator.com/item?id=14986324 (2017)
https://news.ycombinator.com/item?id=20167686 (2019)
The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=37116487 - Aug 2023 (306 comments)
The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=25568766 - Dec 2020 (131 comments)
The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=20167686 - June 2019 (238 comments)
The world in which IPv6 was a good design - https://news.ycombinator.com/item?id=14986324 - Aug 2017 (191 comments)
Otherwise, the networking history part of this post is amazing. I haven't gotten to the IPv6 part yet.
For instance, IPv6's NDP is built on actual IPv6 packets (ICMPv6), rather than some spoofed IP-lookalike thing. No layering violation, and, thanks to multicasting, no need to dump a bunch of broadcast traffic on the layer 2 network.
Only if the L2 network actually supports L2 multicast. Ethernet doesn't, unless your switches are intelligent enough. With cheap Ethernet switches, multicast is simulated by broadcast.
And actually, you can never avoid a layering violation. The only thing that NDP avoids is filling in the source/destination IP portions with placeholders. In NDP, you fill the destination with some multicast IPv6 address. But that is window dressing. You still need to know that this L3-multicast IPv6 address corresponds to a L2-multicast MAC address (or just do L2 broadcast). The NDP source you fill with an L3 IPv6 address that is directly derived from your L2 MAC address. And you still get back a MAC address for each IPv6 address and have to keep both in a table. So there are still tons of layering violations where the L2 addresses either have direct 1:1 correspondences to L3 addresses, or you have to keep L2/L3 translation tables and L3 protocols where the L3 part needs to know which L2 protocol it is running on, otherwise the table couldn't be filled.
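As a concrete illustration of that 1:1 correspondence, here is a sketch of the solicited-node mapping NDP relies on: the last 24 bits of the IPv6 address select the ff02::1:ff00:0/104 multicast group, and the last 32 bits of that group in turn select the 33:33:xx:xx:xx:xx Ethernet multicast MAC (per RFC 4291 and RFC 2464).

```python
import ipaddress

def solicited_node(addr):
    """Map an IPv6 address to its solicited-node multicast group
    (ff02::1:ffXX:XXXX, copying the address's last 24 bits) and then
    to the Ethernet multicast MAC for that group (33:33 plus the
    group's last 32 bits): the L2/L3 correspondence discussed above."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    group = ipaddress.IPv6Address(base | low24)
    low32 = int(group) & 0xFFFFFFFF
    mac = "33:33:" + ":".join("%02x" % ((low32 >> s) & 0xFF) for s in (24, 16, 8, 0))
    return str(group), mac

print(solicited_node("fe80::6329:c59:ad67:4b52"))
# ('ff02::1:ff67:4b52', '33:33:ff:67:4b:52')
```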
It's pretty silly anyway. NDP is more of a layering violation than ARP, because now IPv6 has a stupid circular dependency on itself. Mapping L3 addresses to L2 is a layer below 3, it is not part of layer 3, it is part of the sub-layer that adapts between 2 and 3. DHCP should be part of that sub-layer, too.
Did you know that for every kind of network that IP can run on top of, there's a whole separate standard specifying how to adapt one to the other? RFC 894 specifies how to run IP over Ethernet networks. RFC 2225 specifies how to run IP over ATM networks.
The only thing one should really really really avoid is the TCP mistake of not just having some minimally necessary glue, but that tight coupling of TCP connections to IP addresses in the layer below.
True, but outside bottom-barrel switches, any switch that's not super old should support multicast, no?
Regarding the rest of your comment, I really don't see how all those things count as layering violations. Yes, there is tight coupling (well, more like direct correspondence) between L2 and L3 addresses. However, these multicast addresses are actual addresses furnished by IPv6; nodes answer on them. The semantic correspondence between L2 and L3 is basically an implementation detail. Whereas ARP even needs its own EtherType!
And, yes, nodes need to keep state. But why is that relevant to whether or not this is a layering violation? When two layers are separate, they need to be combined somewhere ("gluing the layers together"). Whether the glue keeps state seems irrelevant. But again, I'm just a sysadmin.
NDP may very well be a nicer protocol than ARP, but following the logic of the article, the neighbor solicitation part of NDP would be just as unnecessary as ARP.
I think SLAAC came from a world where computers were expensive, DHCP servers were separate machines, and they wanted to eliminate them. But we are in a world where computers are cheap and every router can run DHCP.
We could have had easy config with DHCPv6 giving out MAC-based addresses by default. Autoconfig would still work on link-local.
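For reference, MAC-based addressing here means the modified EUI-64 scheme from RFC 4291, Appendix A. A quick sketch of the derivation (the MAC is made up):

```python
def eui64_iid(mac):
    """Modified EUI-64 interface identifier from a MAC address
    (RFC 4291, Appendix A): insert ff:fe between the two halves
    of the MAC and flip the universal/local bit of the first octet."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    return ":".join("%x" % ((eui[i] << 8) | eui[i + 1]) for i in range(0, 8, 2))

print(eui64_iid("00:1a:2b:3c:4d:5e"))  # 21a:2bff:fe3c:4d5e
```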
Regardless, wasn't IPv6 meant to provide more IP addresses because of IPv4 exhaustion and NAT?
My Xbox tells me my network sucks because it doesn't have ipv6, but this is a very North-American perspective regardless.
Nit: per RFC8064[0], most modern, non-server devices do/should configure their addresses with "semantically opaque interface identifiers"[1] rather than using their MAC address/EUI64. That stable address gets used for inbound traffic, and then outbound traffic can use temporary/privacy addresses that are randomized and change over time.
Statelessness is accomplished simply by virtue of devices self-assigning addresses using SLAAC, rather than some centralized configuration thing like DHCPv6.
[0] https://datatracker.ietf.org/doc/rfc8064/ [1] https://datatracker.ietf.org/doc/rfc7217/
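A minimal sketch of the RFC 7217 idea: the interface identifier is a keyed hash over the prefix and interface, so it is stable within a given network but exposes no MAC address. The PRF choice (HMAC-SHA256) and the exact inputs here are illustrative; the RFC leaves those to the implementation.

```python
import hashlib, hmac

def stable_opaque_iid(prefix, ifname, secret, dad_counter=0):
    """RFC 7217-style 'semantically opaque' interface identifier:
    a keyed hash over the prefix, interface name, and a duplicate-
    address-detection counter. Stable per network, MAC-free, and
    different on every network (since the prefix changes)."""
    msg = ("%s|%s|%d" % (prefix, ifname, dad_counter)).encode()
    iid = hmac.new(secret, msg, hashlib.sha256).digest()[:8]
    return ":".join("%x" % ((iid[i] << 8) | iid[i + 1]) for i in range(0, 8, 2))

# Same network yields the same IID; a different prefix yields a new one.
print(stable_opaque_iid("2001:db8::/64", "eth0", b"per-host secret"))
```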
Pretty sure that it's complaining about lack of UPnP. Which, yes, would not be an issue if we had ipv6... but ironically consoles have typically been slow to adopt ipv6 support themselves, so I'm curious whether the Xbox even supports it.
Steam having issues makes sense given it's been around for ages. Meta Quest is an all-new OS and codebase, yet they managed to bork ipv6. Super annoying.
Xbox Live has had it for years, because IPv6 means no NAT and lower latency. It's been there since at least the 360.
There's one point I don't really get, and I would be glad if someone could clarify it for me. The author says that even over wifi, the CSMA/CD protocol is not used anymore. Then how does it actually work?
Discussing this, the author explains:
> If you have two wifi stations connected to the same access point, they don't talk to each other directly, even when they can hear each other just fine.
So each station still has to decide at some point whether what it's hearing is for it or not, since it could be another station talking to the AP, or the AP talking to another station. How is that done, if not using CSMA/CD (or something very similar at least)?
AFAIK, WiFi has always been doing CSMA/CA and starting with the 802.11ax standard also OFDMA. See https://en.wikipedia.org/wiki/Hidden_node_problem#Background
Thanks for your link, that helped clarify this for me!
WiFi is different, of course. However, as the author wrote, your WiFi devices always go through the access point, where they use 802.11 RTS/CTS messages to request and receive permission to send packets. All nodes can see the CTS being broadcast, so they know that somebody is sending something. So even CSMA/CA is getting less useful.
For non-WiFi, we don't use CD because everything is full duplex and every conversation has its own lane, all the way down to the port level on the switches, so there can never be a collision. The algorithm might still be there, but it isn't used.

For WiFi, CD can never work: "detecting" a collision on a radio is pointless. We need to "avoid" instead, because it's a shared medium, so CA is a necessity. But now, in 2026, we don't actually need or use it as much, since WiFi functions more like a switch: with OFDMA and RF beam steering at the PHY (the actual radio-frequency side), signals from other nearby devices are cancelled out, creating bi-directional lanes that function much like a switch's ports.
The article is good and presents an (opinionated) view of how the IETF operates and what happens inside it. We actually need an IETF equivalent for AI. It really is a meritocracy, even though of late the big companies try to corrupt it or get their way; academia is still the driver and steers it, and all votes count when working groups self-organize. (My last IETF was 2018, so I'm not sure how it is now in the 2020s.)
Wifi is in any case not considered a bus network, but rather a star-topology network.
Also funny it was made in 1990 and it only recently reached 50% adoption.
And how the fuck does anything in between know where to route it? The article glows like a blazing beacon of ignorance about everything in between.
The whole problem with mobile IP is "how do we get intermediate devices to know where to go?" We're back to:
> The problem with ethernet addresses is they're assigned sequentially at the factory, so they can't be hierarchical.
Which the author hinted at, then forgot. We can't have globally routable, unique, random-esque IDs precisely because addressing has to be hierarchical. Keeping the connection flow ID at L4 instead of L3+L4 changes very little. Yeah, you can technically roam the client, except how the fuck would the server know where to send the packet back when the L3 address changes? It would have to get a client packet with the updated L3 address, and until then all packets would go into the void.
But hey, at least it's some progress? NOPE. Nothing at the protocol layer can be trusted before authentication; it would make DoS attacks far easier (just flood the host with a bunch of random UUIDs), and you would still end up doing it the QUIC way of just re-implementing all of that stuff behind encryption.
As for L3 packets going into the void: yeah, they're gonna get lost, can't be helped. But the server also isn't going to get any L4 acks for those packets. So when a new L3 connection is created and the L4 session recovered, the lost packets just get replayed over the new L3 connection.
This is not, technically, true. We could have globally-routable, unique, random-esque IDs if every routing device in the network had the capacity to store and switch on a full table of those IDs.
I'm not saying this is feasible, mind you, just that it's not impossible.
Outside of ignoring the laws of physics, this isn't very useful speculation.
Because the IP address changed; classic routing still works. Their point is about identifying a session with a session token rather than with something non-constant (the IP of the client).
Instead of identifying the "TCP" socket with (src ip, src port, dst ip, dst port), they use (src uuid, dst uuid) which allows flows to keep working when you change IP addresses. Just like you can change networks and still have your browser still logged in to most websites.
The packets carrying those UUIDs still are regular old IP packets, UDP in the case of QUIC. Only the server needs to track anything, and only has to change the dst ip of outgoing packets.
As for flooding and DDoS, that’s what handshakes are for, and QUIC already does it (disclaimer: never dug deep in how QUIC works so I can’t explain the mechanism here).
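A toy sketch (deliberately not real QUIC) of what "identify the session by a token, not the 4-tuple" means on the server side:

```python
# Sessions are keyed by a connection ID instead of (ip, port, ip, port).
# The server simply records whatever source address the latest packet
# arrived from, so a client that changes networks keeps its session;
# only the return address in outgoing packets changes.
sessions = {}

def on_packet(conn_id, src_addr, payload):
    s = sessions.setdefault(conn_id, {"peer": src_addr, "received": []})
    s["peer"] = src_addr  # migration: just update the return address
    s["received"].append(payload)
    return s

on_packet(b"\x01" * 8, ("203.0.113.5", 5000), b"hello")
s = on_packet(b"\x01" * 8, ("198.51.100.9", 6000), b"again")  # client moved
assert s["peer"] == ("198.51.100.9", 6000) and len(s["received"]) == 2
```

Real QUIC layers a handshake and address validation on top of this, precisely because of the flooding concern above.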
One of the problems we have is when we're born we don't question anything. It just is the way it is. This, of course, lets us do things in the world much more quickly than if we had to learn everything from basic principles, but it's a disadvantage too. It means we get stuck in these local optima and can't get out. Each successive generation only finally learns enough to change anything fundamental once they're already too old and set in their ways doing the standard thing.
How I wish we could have a new generation of network engineers who just say "fuck this shit" and build their own internet.
I don't know about you personally, but every grade-school, high-school, and college-level instructor I ever had would probably vehemently disagree with this statement about me. I remember at least one 70-year-old college instructor becoming visibly irritated that I would ask what research supported the assertions he made.
It was somewhat unexpected to find section headings such as "Is IPv6 a failure?" in the product support documentation, but I thought it was interesting and informative nonetheless.
And doing so would improve nothing, and be no different from the IPv6 rollout. You have to ship new code to every 'network element' to support an "IPv4+" protocol. Just like with IPv6.
So you have to update DNS to create new resource record types ("A" is hard-coded to 32 bits) to support the new longer addresses, and have all user-land code start asking for, using, and understanding the new record replies. Just like with IPv6. (A lot of legacy code did not have room in its data structures for multiple reply types: sure, you'd get the "A", but unless you updated the code to get the "A+" address (for "IPv4+" addresses) you could never get at the longer address… just like IPv6 needed code updates to recognize AAAA, otherwise you were A-only.)
You need to update socket APIs to hold new data structures for longer addresses so your app can tell the kernel to send packets to the new addresses. Just like with IPv6. In any 'address extension' plan the legacy code cannot use the new address space; you have to:
* update the IP stack (like with IPv6)
* tell applications about new DNS records (like IPv6)
* set up translation layers for legacy-only code to reach extended-only destination (like IPv6 with DNS64/NAT64, CLAT, etc)
You're updating the exact same code paths in both the "IPv4+" and IPv6 scenarios: dual-stack, DNS, socket address structures, dealing with legacy-only code that is never touched to deal with the larger address space.
Deploying the new "IPv4+" code will take time, and partial deployment of IPv4+ is no different from partial deployment of IPv6: you have islands of it and have to fall back to the legacy plain-IPv4 protocol when the new protocol fails to connect.
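The socket-API point can be illustrated with the family-agnostic pattern that application code needs either way; a minimal sketch using getaddrinfo():

```python
import socket

def connect_any(host, port):
    """Family-agnostic connect: walk getaddrinfo() results (AAAA and
    A alike, in the resolver's preference order) and return the first
    socket that connects. This is the code path that needs updating
    for any longer-address scheme, whether IPv6 or an "IPv4+"."""
    err = None
    for family, typ, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, typ, proto)
        try:
            s.connect(addr)
            return s
        except OSError as e:
            s.close()
            err = e
    raise err if err else OSError("no addresses for %s" % host)
```

Legacy code that calls gethostbyname() and stuffs the result into a fixed 32-bit field has to be rewritten into something like this regardless of which extension scheme wins.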
That explains it. Like I wrote two years ago¹:
The eternal problem with companies like Tailscale (and Cloudflare, Google, etc. etc.) is that, by solving a problem with the modern internet which the internet should have been designed to solve by itself, like simple end-to-end secure connectivity, Tailscale becomes incentivized to keep the problem. What the internet would need is something like IPv6 with automatic encryption via IPSEC, with IKE provided by DNSSEC. But Tailscale has every incentive to prevent such things to be widely and compatibly implemented, because it would destroy their business. Their whole business depends on the problem persisting.
I understand the appeal of this vision, but I think history has shown that it's not consistent with the realities of incremental deployment. One of the most important factors in successful deployment is the number of different independent actors who need to change in order to get some value; the lower this number the easier it is to get deployment. By very rough analogy to the effectiveness of medical treatments, we might call it the Number To Treat (NTT).
By comparison to the technologies which occupy the same ecological niches on the current Internet, all of the technologies you list have comparatively higher NTT values. First, they require changing the operating system[0], which has proven to be a major barrier. The vast majority of new protocols deployed in the past 20 years have been implementable at the application layer (compare TLS and QUIC to IPsec). The reason for this is obviously that the application can unilaterally implement and get value right away without waiting for the OS.
IPv6 requires you not only to update your OS but basically everyone else on the Internet to upgrade to IPv6. By contrast, you can just throw a NAT on your network and presto, you have new IP addresses. It's not perfect, but it's fast and easy. Even the WebPKI has somewhat better NTT properties than DNSSEC: you can get a certificate for any domain you own without waiting for your TLD to start signing (admittedly less of an issue now, but we're well into path dependency).
Even if we stipulate that the specific technologies you mention would by better than the alternatives if we had them -- which I don't -- being incrementally deployable is a huge part of good design.
[0] DNSSEC doesn't strictly require this, but if you want it to integrate with IKE, it does.
There are plenty of anarchists and disaster aid groups interested in building a more decentralized alternative to the internet. Meshtastic, AnoNet, Reticulum, MeshCore, etc are all evidence of that
Then there's also stuff like Dave Ackley's robust-first computing that's looking towards a completely different paradigm for computing in general that focuses on robustness.
There were many of us who, even when it was still IPng (IP Next Generation) in the mid-1990s, tried to get it working and spent a significant amount of effort to do so, only to be hit with unrealistic ideological demands that blocked our ability to deploy it, especially given the limitations of the security tools back in the day.
Remember, when IPng started, even large regional ISPs like XMission had finger servers, many people used telnet, and Slackware actually shipped with telnet enabled and no root password by default!!! I used both to `wall` a coworker who was late to work because he was playing tw2000.
Back then we had really bad application firewalls like AltaVista, PIX was just being invented, and the large surveillance-capitalism market simply didn't exist.
The IAB hampered deployment by choosing hills to die on without providing real alternatives, and didn't relent until IPv4 exhaustion became a problem, by which point they had lost their battle, because everyone had been forced into CGNAT etc. because of the IETF, not in spite of it.
The IAB and IETF were living in an MIT ITS mindset when the real world was making that model hazardous and impossible. End-to-end transparency may be 'pretty' to some people, but it wasn't what customers needed. When they wrote the RFCs so that other services would simply fail and time out if you enabled IPv6 locally without ISP support, they burned a lot of goodwill, and everyone just started ripping out the IPv6 stack and running IPv4 only.
IMHO, like almost all tech failures, it didn't fail on technical merits; it failed on ignorance of users' needs and a refusal to consider them, insisting that adopters just had to drink their particular flavor of Kool-Aid or stick with IPv4, and until forced, most people chose the latter.
That behavior is due to the same politics mentioned above.
A few more pragmatic decisions, or at least more empathetic guidance, would have dramatically changed the acceptance of IPv6.
You would only see a timeout to an AAAA record if the connection attempt to the A record already failed. Some software (looking at you, apt-get) will only print the last connection failure instead of all of them, so you don't see the failure to connect to the A record. I've seen people blame v6 for this even though they don't have v6 and it's 100% caused by their v4 breaking.
Run `getent ahosts example.com` to see the order your system sorts addresses into. `wget example.com` (wget 1.x only though) is also nice, because it prints the addresses and tries to connect to them in turn, printing every error.
I mean... adding v6 is the right thing to do either way, but "AAAA is higher priority than A" isn't the reason.
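As a sketch of the difference described above, here is roughly what the wget-style behavior looks like: try every address `getaddrinfo` returns, in order, and keep every failure instead of only the last one. (`connect_verbose` and its defaults are my own illustration, not apt's or wget's actual code.)

```python
import socket

def connect_verbose(host, port=80, timeout=3):
    """Try each address getaddrinfo returns, in order; keep every failure."""
    errors = []
    for family, kind, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, kind, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)
            return s, errors          # connected; earlier failures are preserved
        except OSError as e:
            errors.append((addr[0], e))  # record the v6 AND the v4 failures
            s.close()
    return None, errors               # everything failed; report all of it
```

Software that prints only the last element of `errors` (or only the final exception) is exactly what hides a broken v4 path behind a v6 timeout message.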
There is an expired 6man draft that explains some of the issues here.
https://www.ietf.org/archive/id/draft-buraglio-6man-rfc6724-...
To be clear, I go and clean out the temporary fixes for dual stack problems, but you want some more info so here it is.
$ grep 'apt.systemd.daily' /var/log/syslog.1 | grep '^2026-04-16T01:09' | wc -l
86375
$ grep 'apt.systemd.daily' /var/log/syslog.1 | grep '^2026-04-16T01:09' | head -n 1
2026-04-16T01:09:15.276295-06:00 MrBig apt.systemd.daily[45660]: /usr/bin/unattended-upgrade:2736: Warning: W:Tried to start delayed item http://us.archive.ubuntu.com/ubuntu questing-updates/main amd64 bpftool amd64 <snip>
$ grep 'apt.systemd.daily' /var/log/syslog.1 | grep '^2026-04-16T01:09' | head -n 1 | wc -c
8116
The IPv6 AAAA timeout was shown to be the problem; adding `Acquire::ForceIPv4 "true";` fixed it on several hosts.
$ getent ahosts us.archive.ubuntu.com
91.189.91.81 STREAM us.archive.ubuntu.com
91.189.91.81 DGRAM
91.189.91.81 RAW
91.189.91.82 STREAM
91.189.91.82 DGRAM
91.189.91.82 RAW
91.189.91.83 STREAM
91.189.91.83 DGRAM
91.189.91.83 RAW
2620:2d:4002:1::101 STREAM
2620:2d:4002:1::101 DGRAM
2620:2d:4002:1::101 RAW
2620:2d:4002:1::102 STREAM
2620:2d:4002:1::102 DGRAM
2620:2d:4002:1::102 RAW
2620:2d:4002:1::103 STREAM
2620:2d:4002:1::103 DGRAM
2620:2d:4002:1::103 RAW
There are no non-`fe80::` (link-local) IPv6 addresses on the host.
$ ip a | grep inet6
inet6 ::1/128 scope host noprefixroute
inet6 fe80::786a:e338:3957:b331/64 scope link noprefixroute
inet6 fe80::a10c:eae9:9a49:c94d/64 scope link noprefixroute
So to be clear, I removed my temporary IPv4-only apt config, but there are a million places for this to be brittle, and you see people working around it with sysctl net.ipv6.conf.*, netplan, systemd-networkd, NetworkManager, etc., plus individual clients. Note:
https://datatracker.ietf.org/doc/html/rfc6724#section-2.1
And note how "::/0" (native IPv6, precedence 40) ranks above "::ffff:0:0/96" (IPv4-mapped addresses, precedence 35).
And the preceding text:
> If an implementation is not configurable or has not been configured, then it SHOULD operate according to the algorithms specified here in conjunction with the following default policy table:
One could argue that GUA destinations should just be ignored on hosts without a non-link-local IPv6 address... and in a perfect world they would be.
But as covered in the first link in this post, this is not as easy or clear as expected, and people tend to err towards following RFC 6724, which states, just below the above reference:
> Another effect of the default policy table is to prefer communication using IPv6 addresses to communication using IPv4 addresses, if matching source addresses are available.
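For illustration, the RFC 6724 section 2.1 default policy table behaves as a longest-prefix-match lookup; the sketch below shows only the precedence lookup (not the full destination-address-selection algorithm), which is enough to see why a global IPv6 destination sorts ahead of an IPv4 destination.

```python
import ipaddress

# RFC 6724 section 2.1 default policy table: (prefix, precedence, label)
DEFAULT_POLICY = [
    ("::1/128",       50,  0),
    ("::/0",          40,  1),
    ("::ffff:0:0/96", 35,  4),
    ("2002::/16",     30,  2),
    ("2001::/32",      5,  5),
    ("fc00::/7",       3, 13),
    ("::/96",          1,  3),
    ("fec0::/10",      1, 11),
    ("3ffe::/16",      1, 12),
]
TABLE = [(ipaddress.ip_network(p), prec, label) for p, prec, label in DEFAULT_POLICY]

def precedence(addr: str) -> int:
    """Longest-prefix match against the policy table; IPv4 addresses are
    looked up as IPv4-mapped IPv6 addresses, per RFC 6724."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        ip = ipaddress.ip_address("::ffff:" + addr)
    matches = [(net.prefixlen, prec) for net, prec, _ in TABLE if ip in net]
    return max(matches)[1]   # longest matching prefix wins

# A GUA destination matches ::/0 (precedence 40); an IPv4 destination
# matches ::ffff:0:0/96 (precedence 35) -- the "prefer IPv6" behavior
# quoted above falls straight out of the table.
```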
I am not an IPv6 hater...just giving observations that when you introduce a breaking change, and add additional friction, it dramatically reduces adoption.
Many companies I have been at basically just implement enough to meet Federal Government requirements and often intentionally strip it out of the backend to avoid the brittleness it caused.
I am old enough to remember when I could just ask for an ASN and a portable class C, and how nice that was. In theory IPv6 should have allowed for that in some form... I am just frustrated with how it has devolved into an intractable 'wicked problem' when there was a path.
The fact that people don't acknowledge the pain for users, often due to situations beyond their control, is a symptom of that problem. Ubuntu should never have even requested an AAAA record on the above system, and yes, it only does so because of politics and RFC requirements.
user@ubuntu-server:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 25.10
Release: 25.10
Codename: questing
user@ubuntu-server:~$ uname -a
Linux ubuntu-server 6.17.0-7-generic #7-Ubuntu SMP PREEMPT_DYNAMIC Sat Oct 18 10:10:29 UTC 2025 x86_64 GNU/Linux
user@ubuntu-server:~$ getent ahosts us.archive.ubuntu.com
91.189.91.82 STREAM us.archive.ubuntu.com
91.189.91.82 DGRAM
91.189.91.82 RAW
91.189.91.81 STREAM
91.189.91.81 DGRAM
91.189.91.81 RAW
91.189.91.83 STREAM
91.189.91.83 DGRAM
91.189.91.83 RAW
2620:2d:4002:1::102 STREAM
2620:2d:4002:1::102 DGRAM
2620:2d:4002:1::102 RAW
2620:2d:4002:1::101 STREAM
2620:2d:4002:1::101 DGRAM
2620:2d:4002:1::101 RAW
2620:2d:4002:1::103 STREAM
2620:2d:4002:1::103 DGRAM
2620:2d:4002:1::103 RAW
user@ubuntu-server:~$ ip --oneline link | grep -v lo: | awk '{ print $2 }'
enp0s3:
user@ubuntu-server:~$ ip addr | grep inet6
inet6 ::1/128 scope host noprefixroute
inet6 fe80::5054:98ff:fe00:64a9/64 scope link proto kernel_ll
user@ubuntu-server:~$ fgrep -r -e us.archive /etc/apt/
/etc/apt/sources.list.d/ubuntu.sources:URIs: http://us.archive.ubuntu.com/ubuntu/
user@ubuntu-server:~$ sudo apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu questing InRelease
Get:2 http://security.ubuntu.com/ubuntu questing-security InRelease [136 kB]
<snip>
Get:43 http://security.ubuntu.com/ubuntu questing-security/multiverse amd64 c-n-f Metadata [252 B]
Fetched 2,602 kB in 3s (968 kB/s)
Reading package lists... Done
I didn't think to wrap that in 'time', but it only took a few seconds to run... more than two and less than thirty.
The IPv6 packet capture running during all that reveals that it never tried to reach out over v6 (but that my multicast group querier is happily running):
user@ubuntu-server:~$ sudo tcpdump -i enp0s3 -s 0 -n 'ip6 or icmp6'
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), snapshot length 262144 bytes
22:16:44.327503 IP6 fe80::5054:98ff:fe00:64a9 > ff02::2: ICMP6, router solicitation, length 16
22:17:35.823917 IP6 fe80::<REDACTED> > ff02::1: HBH ICMP6, multicast listener query v2 [gaddr ::], length 28
22:17:41.706930 IP6 fe80::5054:98ff:fe00:64a9 > ff02::16: HBH ICMP6, multicast listener report v2, 1 group record(s), length 28
I even manually ran unattended-upgrade, which looks to have succeeded. Other than unanswered router solicitations and multicast group query membership chatter, there continued to be no IPv6 communication at all, and none of the messages you reported appeared either in /var/log/syslog or on the terminal.
user@ubuntu-server:~$ sudo /usr/bin/unattended-upgrade
user@ubuntu-server:~$ sudo grep -e 'Tried to start delayed item' /var/log/syslog
user@ubuntu-server:~$
What am I doing wrong?

The problem isn't the happy path; the problem is when things fail, and that Linux in particular made it really hard to reliably disable [0].
Once that hits someone's vagrant or ansible code, it tends to stick forever, because they don't see the value until they try to migrate, then it causes a mess.
The last update on the original post link [1] explains this. The IPv4 host being down, not having a response, it being the third Tuesday while Aquarius is rising into whatever, etc. can invoke it. It causes pain, and it is complex and convoluted to disable when you aren't using it, so people are afraid to re-enable it.
[0] https://wiki.archlinux.org/title/IPv6#Disable_IPv6 [1] https://tailscale.com/blog/two-internets-both-flakey
But aside from that, I actually do think we could have baked address extensions into the existing packet format's option fields and had a gradual upgrade that relied on that awful bodge that was (and is) NAT, and had a successful transition wherein NAT died a well-deserved death by now. :-)
I do think that the IETF didn't realize it was losing its agency, so it's very likely that TUBA would have made the difference: not for any technical reason, but because it would have come a few years earlier, when people were still listening.
The fact that IS-IS survived as a relevant IP routing protocol says a lot on its own.
In the beginning it was an experiment and should have been ambitious; the IETF had just moved to CIDR, which bought almost a decade of time, and they should have aimed high.
It is just that when you significantly change a system, you need to show users how to accomplish the work they are doing with the old system, even if how they do it changes. If you can't communicate a way to replace their old needs, or how the new system fits needs you could never have predicted, you need to be flexible and demonstrate that ability.
If you look at the National Telecommunications and Information Administration [Docket No. 160810714-6714-01] comments:
Microsoft: https://www.ntia.gov/sites/default/files/publications/micros... ARIN: https://www.ntia.gov/sites/default/files/publications/arin_c...
You will see that the address-space argument is the only real one they make. It isn't a coincidence that RFC 7599 came about ~20 years later, when 160810714-6714-01 and federal requirements for IPv6 were being discussed.
If you look at the #nanog discussions between RFC 1883 (IPv6) being published (late 1995) and IPv4 exhaustion in early 2011, it wasn't just the IAB that was having philosophical discussions around this.
Both RFC 3484 and RFC 6724 suffered from the lack of executive sponsorship called out in the above public comments. And the following from RFC 6724's intro is often ignored in favor of rote compliance:
> They do not override choices made by applications or upper-layer protocols, nor do they preclude the development of more advanced mechanisms for address selection.
There are many ways that could have played out differently, but I noticed Avery Pennarun's last update to that post pretty much says the same thing in different words.
https://tailscale.com/blog/two-internets-both-flakey
> IPv6 was created in a new environment of fear, scalability concerns, and Second System Effect. As we covered last time, its goal was to replace The Internet with a New Internet — one that wouldn’t make all the same mistakes. It would have fewer hacks. And we’d upgrade to it incrementally over a few years, just as we did when upgrading to newer versions of IP and TCP back in the old days
So all the fairy tales about IP being invented for nuclear war were a lie? The moment the military started moving around, IP became useless?
For smaller internets, protocols such as RIP (limited to 16 hops) broadcast routing information from each still-working router to the other routers. Each router built a picture of the internet (simplifying a bit here; RIP and similar protocols used "distance vector" routing, but other, more advanced routing protocols did each have a picture of the internet). So when a packet arrived at a router, that router could forward the packet towards its destination. Such protocols are "interior" routing protocols, used within an ISP's network.
The Internet is too big for such automatic routing and uses an "exterior" routing protocol called BGP. This protocol routes packets from one ISP to the next, using route and connectivity information input by humans. (Again I'm simplifying a bit.)
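As a toy illustration of the distance-vector idea mentioned above, here is a RIP-flavored sketch: each router repeatedly learns its neighbors' tables and keeps the shortest hop count, with 16 meaning unreachable. The topology and router names are made up, and real RIP also handles timeouts, split horizon, triggered updates, etc.

```python
# Toy distance-vector exchange in the spirit of RIP (not a real implementation).
INFINITY = 16  # RIP's hop limit: 16 hops means "unreachable"

def distance_vector(links):
    """links: dict router -> set of directly connected routers (cost 1 each).
    Returns dict router -> {destination: hop_count} after convergence."""
    routers = set(links)
    dist = {r: {d: (0 if d == r else INFINITY) for d in routers} for r in routers}
    changed = True
    while changed:                      # iterate until no table changes
        changed = False
        for r in routers:
            for n in links[r]:          # "receive" neighbor n's advertisement
                for dest, d in dist[n].items():
                    if d + 1 < dist[r][dest]:
                        dist[r][dest] = d + 1   # shorter path via n; adopt it
                        changed = True
    return dist
```

With a line topology A - B - C, each router converges on hop counts to every destination without any node ever holding a map of the links, which is exactly the "route around damage" property, at small scale.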
Wifi uses entirely different protocols to route packets between cells.
Fun fact: wifi is not an acronym for anything, the inventors simply liked how it sounded.
Most certainly it's a reference to "Sci-Fi" or "Hi-Fi".
IP + some dynamic routing handles the situation of "the connection site got nuked and we need to route around it"; it's just not in the protocol, it's an additional layer on top of it.
Wi-Fi and ethernet also have different IPs. And what if you also add Wi-Fi peer-to-peer (Airdrop-ish), Wi-Fi Tunneled Direct Link Setup (literally Chromecast)?
If a vendor implemented simultaneous dual-band (DBDC) Wi-Fi, it can connect to both 2.4 GHz and 5 GHz at the same time, each with its own MAC & IP, because you're trying to connect to the same network on a different band. Or route packets from a 'wan' Wi-Fi to a 'lan' Wi-Fi (share internet from (BSS) infrastructure Wi-Fi A to a new (IBSS) ad-hoc Wi-Fi network B with your smartphone as the gateway, on Android).
There's also 802.11p, the IEEE 802.11 amendment for wireless access in vehicular environments (WAVE), and EV chargers with IP over the CCS protocol, etc. If all cars need to be 'connected' and 'have a unique address', NAT / CGNAT also isn't cutting it.
There's also IoT. Thread is ipv6 because it's the alternative to routing whatever between wan / lan / zigbee / Z-Wave / etc with a specific gateway at a remote point in the mesh network.
And how about the new DHCP / DNS specs for IPv6: you can now share encrypted DNS servers, DHCP client-ID, unique OUID, etc. etc.
It's an infuriating post, really. As if IP were only designed for a small-scale VPN / overlay network service such as Tailscale.
Mobile IP actually wanted to do this; it just never took off (not least because both endpoints need to understand it to get route optimization). I think some Windows versions actually had partial Mobile IPv6 support.