Grover's algorithm is provably optimal [0]. No quantum algorithm will ever find an n-bit key by queries to any reasonable sort of oracle faster than Grover's algorithm, and Grover's algorithm is way too slow to be a serious problem.
But symmetric ciphers are not black boxes. They're mostly built on some variant of a Feistel network, which is a very nice construction for turning a messy function into an invertible function that, in a potentially very strong sense, acts like a cryptographically secure permutation.
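To see why the construction is invertible no matter how messy the round function is, here's a toy sketch (the round function `f` below is made up purely for illustration, not any real cipher):

```python
def feistel_encrypt(left, right, round_keys, f):
    # One Feistel round: (L, R) -> (R, L XOR f(R, k)).
    # Invertible regardless of f, because decryption only ever
    # re-computes f; it never has to invert it.
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys, f):
    # Undo the rounds in reverse order with the same f.
    for k in reversed(round_keys):
        left, right = right ^ f(left, k), left
    return left, right

# Toy, non-cryptographic round function (an assumption for illustration).
f = lambda x, k: (x * 0x9E3779B1 + k) & 0xFFFF

ct = feistel_encrypt(0x1234, 0xABCD, [7, 13, 42], f)
assert feistel_decrypt(*ct, [7, 13, 42], f) == (0x1234, 0xABCD)
```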
When I was in grad school, one project I contemplated but never spent any real time on was trying to either generate a real security proof for quantum attacks on Feistel networks or to come up with interesting quantum attacks. And there is indeed an interesting quantum attack against 3-round Feistel networks [1].
This is interesting, because, depending on what sort of security is needed, three or four rounds of Feistel network are sufficient against classical attack [2].
Now ciphers like AES have many more than 3 rounds, so hopefully they're fine. But maybe they're not. My intuition is that there is probably a reasonably small n1 and a reasonably small n2 >= n1 (probably constants, maybe logarithmic in the number of bits) for which there is no quantum algorithm that can break symmetric crypto given classical query access even to the round functions (n1) or quantum query access to the round functions (n2) [3], but I'm not aware of any proof of anything of the sort. And my intuition definitely should not be trusted fully! (Maybe, even if I'm wrong, there is still a number of rounds that is sufficient for security against query access to the entire cipher.)
[0] The classic result is https://arxiv.org/abs/quant-ph/9701001 and there are newer, more exact results, e.g. https://arxiv.org/abs/0810.3647
[1] https://ieeexplore.ieee.org/document/5513654
[2] https://en.wikipedia.org/wiki/Feistel_cipher
[3] It would be extremely cool if someone built quantum computers and networks and storage such that two parties that don't trust each other could actually communicate and exchange (interesting [5]) qubits. I've written some fun papers on the possible implications of this. If we ever get the technology, then it might actually be meaningful to consider things like chosen-quantum-ciphertext attacks against a classical symmetric cipher. But that's many, many years away, and, in any case, an attacker will only ever get to do a quantum query attack against a cryptosystem if a victim lets them. [4] Otherwise all queries will be classical.
[4] Or in very complex settings where there is an obfuscated black box, for example. This may be relevant for zk-snarks or similar constructions.
[5] I don’t consider the optical qubits exchanged in commercial devices that supposedly implement quantum key distribution to be interesting. To the vendors of such devices, sorry.
The arguments of OP also apply to this kind of cipher.
Besides the possibility that quantum computers might be able to accelerate attacks against iterated block ciphers whose number of rounds falls below some threshold, there is also a risk that is specific to AES, not shared by other ciphers.
Recovering the secret key of any cipher when you have a small amount of known plaintext is equivalent to solving a huge system of equations, much too big to be solved by any known method.
To ensure that this system of equations is very big, most ciphers built by composing simple operations take care to mix operations from distinct algebraic groups, typically 3 or more. The reason is that operations that appear simple in one group appear very complex in the others, so if you mix simple operations from 3 groups, the corresponding system of equations written in any one of those groups is very complex. This technique of mixing simple operations from at least 3 algebraic groups was introduced by the block cipher IDEA, as a more software-friendly alternative to non-linear functions implemented with look-up tables, as in DES.
An example is the set of 3 algebraic groups used in the so-called ARX ciphers (add-rotate-xor, like ChaCha20), where the 3 groups correspond to arithmetic modulo 2^N (addition), modulo 2^N-1 (rotation) and modulo 2 (XOR).
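A minimal sketch of those three operations at work: ChaCha20's quarter round (the round structure follows RFC 8439) uses nothing but addition mod 2^32, rotation, and XOR.

```python
MASK = 0xFFFFFFFF  # 32-bit words

def rotl(x, n):
    # Rotate a 32-bit word left by n bits.
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    # Only the three ARX operations appear: +, <<<, ^.
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d
```

Each individual step is trivially invertible in its own group; it is only the mixing of the three groups across many rounds that makes the combined equations hard.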
Unlike such ciphers, AES uses algebraic operations in a single finite field, GF(2^8), but instead of using only simple operations it also uses a rather complex non-linear operation, inversion in GF(2^8), and relies on it to ensure that the system of equations for key recovery becomes complex enough after sufficiently many rounds.
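A short sketch of that inversion (toy code; real AES implementations use lookup tables or hardware instructions, not this loop):

```python
def gf_mul(a, b):
    # Multiplication in GF(2^8) modulo AES's reduction polynomial
    # x^8 + x^4 + x^3 + x + 1 (0x11B).
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def gf_inv(a):
    # a**254 == a**(-1), since a**255 == 1 for every nonzero a.
    if a == 0:
        return 0  # AES's S-box maps 0 to 0 by convention
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

# FIPS-197's worked example: {57} * {83} = {c1}.
assert gf_mul(0x57, 0x83) == 0xC1
assert all(gf_mul(x, gf_inv(x)) == 1 for x in range(1, 256))
```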
Because of this rather simple algebraic structure of AES, it has been speculated that someone might discover a method for solving systems of equations of this kind. For now, it seems very unlikely that anyone will succeed.
Even if solving this system of equations appears infeasible by classical means, perhaps someone will discover a quantum algorithm that accelerates solving this particular kind of system.
I have mentioned this risk for completeness, but I believe that this risk is negligible.
AES could be modified in a trivial way, requiring no hardware changes in most CPUs but only software changes, to make that system of equations much more complex, so that it would defeat any possible quantum improvement. An example of such a change would be to replace some XOR operations in AES with 32-bit or 64-bit additions (i.e. additions modulo 2^32 or 2^64). The only problem is that there may be devices whose firmware cannot be updated, and encrypted data recorded in the past would not benefit from future upgrades.
However, as I have said, I believe that the risk of AES being affected by some future equation-solving algorithm remains negligible.
As far as the Grover speedup goes, it's already optimal. Ω(sqrt(N)) queries is the proven lower bound for unstructured search.
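To put a number on the quadratic speedup, here's a back-of-the-envelope sketch:

```python
from math import pi, sqrt

def grover_queries(key_bits):
    # Grover needs about (pi/4) * sqrt(N) oracle queries to search
    # a keyspace of N = 2**key_bits candidates.
    return (pi / 4) * sqrt(2 ** key_bits)

# A 128-bit key still costs on the order of 2**64 *sequential* quantum
# queries, and doubling the key size restores the full security margin.
assert 2**63 < grover_queries(128) < 2**64
assert 2**127 < grover_queries(256) < 2**128
```

The queries are inherently sequential, which is a big part of why Grover is not considered a practical threat against 256-bit keys.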
Basically, the best they've proven is something like O(n * inverse_ackermann(n)), but it seems likely the algorithm actually runs in O(n). We also already have a randomised algorithm for this problem that runs in O(n) expected time on worst case input. The expectation is over the random choices.
https://en.wikipedia.org/wiki/Expected_linear_time_MST_algor...
As far as I know, the current state of AES-256 is something like "this attack breaks AES in 2**254 instead of 2**256, given something like 2**80 bits of ciphertext to work with in the first place". That's nice for getting papers into crypto conferences, but not something to lose sleep over yet. An AI trained on the entirety of LNCS and ePrint might be a different matter, though.
That and side-channels, but we've known about those for a while.
Whether AES or ChaCha holds up better in the face of AI is an interesting open question for which I can't offer anything better than a coin flip.
What is going on?
I think an analogy would be: imagine you are driving across North America in a car, but your engine is broken. The mechanic is nearby, so you put it in neutral and push it.
If someone said "well, it took you half an hour to push it to the mechanic, so it will take the rest of your life to get it across North America", that would be the wrong takeaway. If the mechanic actually fixes the engine, you'll go quite fast quite quickly. On the other hand, maybe it's just broken and can't be fixed. Either way, how fast you can push it has no bearing on how fast the mechanic can fix it or how fast it will go after it's fixed.
Maybe people will figure out quantum computers, maybe they won't, but the timeline of "factoring" 15 is pretty unrelated.
In the context of cryptography, keep in mind it's hard to change algorithms, and cryptographers have to plan for the future. They are interested in questions like: is there a > 1% chance that a quantum computer will break real crypto in the next 15 years? I think the vibe has shifted to that sounding plausible. It doesn't necessarily mean it will happen; it's just become prudent to plan for that eventuality, and now is when you would have to start.
https://bas.westerbaan.name/notes/2026/04/02/factoring.html
It doesn't say much by itself, but it has four very good links on the subject. One of these has a picture of the smallest known factor-21 circuit, which is vastly larger than the factor-15 circuit and comparable to those for much larger numbers. Another is Scott Aaronson's article comparing the demand to factor small numbers to demanding a "small nuclear explosion": if it's 1940 and you can't yet make a small nuclear explosion, that doesn't mean you're much farther away from a big one.
I’ve seen so much change so fast my assumption is someone did it already and preprints are making the rounds.
This assumes that there will not be other problems that arise. I suspect that "error correcting" thousands of qubits entangled with one another will be one of those problems.
To get useful results, a quantum computer needs all of its qubits to stay entangled with each other until the entire group collapses into the result. With current technology, it is very difficult for a reasonably sized group of qubits to stay coherently entangled, so it can only solve problems that are also relatively easy to solve on classical computers.
If someone today were to figure out how to keep large numbers of qubits entangled, then quantum computing would instantly be able to break any encryption that isn't quantum safe. It's not something that we are slowly working toward; it's a breakthrough, and we can't predict when, or even if, it will happen.
Shor's and Grover's are still algorithms that require a massive number of steps...
If an attacker can break the symmetric encryption in a reasonable amount of time, they can capture the output and break it later.
In addition, how are you doing the key rotation? You have to have some way of authenticating with the rotation service, and what is to stop them from breaking THAT key, and getting their own new certificate? Or breaking the trusted root authority and giving themselves a key?
I agree. The point I am trying to make is that even for asymmetric encryption (which is far more vulnerable), there are still plausible ways to make a quantum break more difficult.
The only thing that could compromise this scheme, aside from breaking the signing keys, would be to have TLS broken to the extent that viewing real-time traffic is possible. Any TLS break delayed by more than 15 minutes would be worthless.
It sounds like you’re talking about breaking TLS’s key exchange? Why would this not have the usual issue of being able to decrypt recorded traffic at any time in the future?
Edit: If it’s because the plaintext isn’t useful, as knorker got at in a sibling comment… I sure hope we aren’t still using classical TLS by the time requiring it to be broken in 1 minute instead of 15 is considered a mitigation. Post-quantum TLS already exists and is being deployed…
What makes you say that? This is the store now decrypt later attack, and it's anything but worthless.
Oh, worthless for your oauth? Uh… but how do you bootstrap the trust? Sounds to me like you need post quantum to carry the whole thing anyway.
Or you mean one key signs the next? OK, so your bet is that, within the lifetime of an RSA key, RSA can't be cracked?
Why in the world would anyone want to depend on that? Surely you will also pair it with PQ?
There are enough order-of-magnitude breakthroughs needed between today and scalable quantum error correction that it makes no sense to try to guess the exact order of magnitude of the attacks that will be feasible.
Either you believe they won't happen, in which case you can keep using long-term ECDSA keys, or you believe they will happen, in which case they are likely to overshoot your rotation period.
I don't know what the quantum future holds, but if quantum actually happens then I have low faith in your plan.
I think there are too many unknowns to bet it all on one horse.
So, if we have to change all of our infrastructure due to a supposed quantum computing threat, I'd go with HybridPQ for asymmetric encryption.
I don't think I understand the threat model you are using here?
If that’s the case, would the time eventually become basically irrelevant with enough compute? For instance, if what’s now a data center could fit in the palm of your hand (the way early room-sized computers compare to today's phones). If compute somehow becomes incredibly well optimized, or we switch to something new, the way microprocessors were once the next big thing, would that then be a threat to 128-bit symmetric keys?
Compute has seen something like a 5-10 order-of-magnitude increase over the last 40 years in terms of instructions per second. We would need an additional 20-30 orders of magnitude to make brute force even close to achievable in a reasonable time frame. That isn't happening with how we make computers today.
Keep in mind that computers today have features approaching the size of a single atom, switching frequencies at which the time to cross a single chip from one end to the other spans multiple cycles, and power densities that require operating at the physical limits of heat transfer for matter at ambient conditions.
We can squeeze it quite a bit further, sure. But anything like 20-30 orders of magnitude is just laughable even with an infinite supply of unobtanium and fairy dust.
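To put rough numbers on that, a back-of-the-envelope sketch (the 10**18 guesses/second figure is an assumption, roughly an exascale machine doing one key trial per operation):

```python
SECONDS_PER_YEAR = 3600 * 24 * 365

def brute_force_years(key_bits, guesses_per_second):
    # Worst-case time to enumerate a 2**key_bits keyspace.
    return 2 ** key_bits / guesses_per_second / SECONDS_PER_YEAR

# Even at 10**18 guesses per second, a 128-bit key takes on the order
# of 10**13 years, roughly a thousand times the age of the universe.
assert 1e13 < brute_force_years(128, 1e18) < 1.2e13
```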
None of those are remotely practical, even imagining quantum computers that become as fast (and small! and long-term coherent!) as classical computers.
WPA3 moved from symmetric AES to ECDH, which is vulnerable to quantum attacks. Gonna be a tonne of IoT inverter waste.
They say the 's' in IoT stands for secure, and in my experience that is true. Pretty much nothing is getting thrown out just because it isn't secure.
...but even if they had, what realistically could they have done about it? ML-KEM was only standardized in 2024 [1].
also, the addition of ECDH in WPA3 was to address an existing, very real, not-theoretical attack [2]:
> WPA and WPA2 do not provide forward secrecy, meaning that once an adverse person discovers the pre-shared key, they can potentially decrypt all packets encrypted using that PSK transmitted in the future and even past, which could be passively and silently collected by the attacker. This also means an attacker can silently capture and decrypt others' packets if a WPA-protected access point is provided free of charge at a public place, because its password is usually shared to anyone in that place.
0: https://en.wikipedia.org/wiki/Wi-Fi_Protected_Access#WPA3
1: https://en.wikipedia.org/wiki/ML-KEM
2: https://en.wikipedia.org/wiki/Wi-Fi_Protected_Access#Lack_of...
why do you have to assume that?
you're at Acme Coffeeshop. their wifi password is "greatcoffee" and it's printed next to the cash register where all customers can see it.
with WPA2 you have to consider N possible adversaries - Acme Coffee themselves, as well as every single other person at the coffeeshop.
...and also anyone else within signal range of their AP. maybe I live in an apartment above the coffeeshop, and think "lol it'd be fun to collect all that traffic and see if any of it is unencrypted".
with WPA3 you only have to consider the single possible adversary, the coffeeshop themselves.
that was also one of the things fixed [0] in WPA3.
it sounds like you don't consider it relevant to your personal threat model. but the experts in charge of the standard apparently thought it was important to have in general.
0: https://en.wikipedia.org/wiki/Opportunistic_Wireless_Encrypt...
Grover attacks are very blatantly impractical. When someone describes Grover-type attacks in the same breath as Shor-type attacks, without caveats, that's a red flag.
One wonderful thing about Filippo is that when it is possible for him to give concrete advice, he gives it, and brings receipts.
Thanks Filippo!
Do you suppose the OpenSSH maintainers considered the cost when implementing those algorithms? Perhaps they did, but decided the benefits were worth it.
And for ECC, I know many are using 2^255 - 19 / Curve25519 because it's unlikely to be backdoored, but it's only 256 bits... Can't we find, say, 2^2047 - 19 (just making that one up) and be safe for a while too?
Basically: for RSA and ECC, is there anything preventing us from using keys 10x bigger?
That's correct. The quantum computer needs to be "sufficiently larger" than your RSA key.
> Basically: for RSA and ECC, is there anything preventing us from using keys 10x bigger?
For RSA things get very unwieldy (but not technically infeasible) beyond 8192 bits. For ECC there are different challenges, some of which have nothing to do with the underlying cryptography itself: one good example is how the OpenSSH team still haven't bothered supporting Ed448, because they consider it unnecessary.
the time to run the algorithm scales cubically - a 10x bigger key means ~1000x more time required.
but it remains exponentially faster than classical factoring. just 1 minute becomes ~1 day, 1 day becomes ~3 years. still "easily" broken
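checking those numbers under the cubic-scaling assumption above:

```python
def scaled_time(base_seconds, key_size_ratio):
    # if runtime scales ~cubically in key size (assumption above),
    # a 10x larger key costs 10**3 = 1000x more time
    return base_seconds * key_size_ratio ** 3

assert round(scaled_time(60, 10) / 3600, 1) == 16.7           # 1 min -> ~17 hours
assert round(scaled_time(86400, 10) / 86400 / 365, 1) == 2.7  # 1 day -> ~2.7 years
```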
you can run benchmarks yourself: openssl speed rsa1024 rsa2048
also this (slightly dated) javamex writeup covers this well: https://www.javamex.com/tutorials/cryptography/rsa_key_lengt...
tldr: the trade-off is between better performance and how many years the data needs to stay confidential
every encryption scheme has at least one way to be decrypted.
fidelity of information is one use of encryption: if you apply the solution and get garbage, something is wrong, somewhere.
occultation of information is another use, one that is commonly abused by extending undue trust. under the proviso that encryption will eventually be broken, you can't trust encryption to keep a secret forever, but you can keep it secret long enough that it is no longer applicable to an attack, or slightly askew use case; thus aggressive rotation of keys becomes desirable
One-time pads [0] are actually impossible to break, but they're pretty tricky to use: you must never ever reuse them, they must be truly random, and you need some way to share them between both parties (which isn't easy, since they need to be at least as large as all the data you ever want to transmit).
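A tiny sketch of the mechanics, and of why reuse is fatal:

```python
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"attack at dawn"
pad = os.urandom(len(msg))        # truly random, used once
ct = otp_xor(msg, pad)
assert otp_xor(ct, pad) == msg    # XOR is its own inverse

# Why reuse is fatal: XORing two ciphertexts under the same pad
# cancels the pad out and leaks the XOR of the two plaintexts.
msg2 = b"retreat at dusk"[:len(msg)]
ct2 = otp_xor(msg2, pad)
assert otp_xor(ct, ct2) == otp_xor(msg, msg2)
```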
if you know something about the content, e.g. that it is for russians, or americans.
you can use frequency analysis to identify vowels. that works against a simple substitution cypher that relies on low frequency of use [one-time use] and messages that aren't kept brief.
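a quick sketch of that kind of frequency count (toy code):

```python
from collections import Counter

def letter_frequencies(text):
    # relative frequency of each letter, most common first
    letters = [c for c in text.lower() if c.isalpha()]
    return {c: n / len(letters) for c, n in Counter(letters).most_common()}

freqs = letter_frequencies("the quick brown fox jumps over the lazy dog " * 50)
# a simple substitution cipher relabels the letters but preserves the
# *shape* of this distribution, which is exactly what the analyst exploits.
```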
when you further substitute numbers for words, you gain more room for verbosity.
if the stakes are high, your message in the clear should only be useful for a limited time, to the point that it is no longer actionable.
i'm very familiar with one-time pads, random and keyed.
they are a little simple; you can use a triaxial scheme, or a tensor-like scheme, for more legroom and more complexity.
depending on what you are doing, it may be necessary to not carry any pads, but to have access at some point to agreed-upon keys, in order to generate a pad on the spot. or even work in your head, if you have skill. e.g. jackdwlovemybigsphnxfqurtz as a weak example.
Right, which is why I didn't quote that part :)
> you can use a frequency analysis to identify vowels.
That will help in many cases, but not against a properly used one-time pad.
> but to have access at some point, to agreed upon keys, in order to generate a pad on the spot
That's not really a one-time pad then, that's just a stream cipher. Which do work better than one-time pads in the vast majority of cases, aside from not being "perfectly" secure.
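For illustration, a toy stream cipher in that spirit: a keystream derived from an agreed key, hashed in counter mode (a sketch only, not a vetted construction; real systems use ChaCha20 or AES-CTR):

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream: hash(key || nonce || counter).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def stream_encrypt(key, nonce, data):
    # XOR with the keystream; encryption and decryption are the same op.
    return bytes(d ^ k for d, k in zip(data, keystream(key, nonce, len(data))))

msg = b"meet at the usual place"
ct = stream_encrypt(b"shared secret", b"nonce-01", msg)
assert stream_encrypt(b"shared secret", b"nonce-01", ct) == msg
```

The pad is "generated on the spot" from the shared key, which is exactly what makes it a stream cipher rather than a true one-time pad: its security now rests on the keystream generator, not on information-theoretic perfection.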