You are assuming that progress on factoring will be smooth, but this is unlikely to be true. The scaling challenges of quantum computers are very front-loaded. I know this sounds crazy, but there is a sense in which the step from 15 to 21 is larger than the step from 21 to 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139 (the RSA100 challenge number).
Consider the neutral atom proposal from TFA. They say they need tens of thousands of qubits to attack 256 bit keys. Existing machines have demonstrated six thousand atom qubits [1]. Since the size is ~halfway there, why haven't the existing machines broken 128 bit keys yet? Basically: because they still need to improve gate fidelity, do the system integration to combine various pieces that have so far only been demonstrated separately, and solve some other problems. These dense block codes have minimum sizes and minimum qubit qualities that must be satisfied for the code to function at all. In that kind of situation, gradual improvement can take you surprisingly suddenly from "the dense code isn't working yet, so I can't factor 21" to "the dense code is working great now, so I can factor RSA100". Probably things won't play out quite like that... but if your job is to be prepared for quantum attacks then you really need to worry about those kinds of scenarios.
The best proposal I have heard for rescuing P2SH wallets after cryptographically relevant quantum computers exist is to require vulnerable wallets to precommit to transactions a day ahead of time. The precommitment doesn't reveal the public key. When the public key must be exposed as part of the actual transaction, an attacker cannot redirect the transaction for at least one day because they don't have a valid precommitment to point to yet.
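The scheme can be sketched in a few lines. Everything here (function names, the one-day delay constant, the hash choice) is illustrative, not an actual Bitcoin proposal or API:

```python
import hashlib
import time

# Toy sketch of the precommit-then-reveal idea described above.
# All names and constants are hypothetical.

DELAY = 24 * 60 * 60  # required age of a commitment, in seconds

commitments = {}  # commitment hash -> time it was recorded

def precommit(raw_tx: bytes, now: float) -> None:
    # The commitment is just a hash, so it reveals nothing about the
    # public key inside the transaction.
    commitments[hashlib.sha256(raw_tx).hexdigest()] = now

def accept_spend(raw_tx: bytes, now: float) -> bool:
    # The revealed transaction is only valid if a matching commitment
    # was recorded at least DELAY seconds earlier. An attacker who first
    # learns the public key at reveal time has no day-old commitment for
    # a redirected transaction, so they must wait out the delay.
    t = commitments.get(hashlib.sha256(raw_tx).hexdigest())
    return t is not None and now - t >= DELAY

honest_tx = b"spend utxo 123 to address abc (exposes pubkey)"
t0 = time.time()
precommit(honest_tx, t0)
print(accept_spend(honest_tx, t0 + DELAY + 1))    # True: committed a day ago
attacker_tx = b"spend utxo 123 to attacker address"
print(accept_spend(attacker_tx, t0 + DELAY + 1))  # False: no prior commitment
```

The key property is that the commitment binds the full transaction before the public key ever hits the network, so quantum-breaking the key at reveal time is too late.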
> [0.1% gate error rate] is still wildly out of reach
This is false. When Fowler et al assumed in 2012 that 0.1% gate error rates would be reached [0], that was optimistic. Now it's frankly a bit overly conservative. All the big architectures are approaching or surpassing 0.1% gate error rates.
From 2022 to 2024, the Google team improved their mean two qubit gate error rate from 0.6% [1] to 0.4% [2]. Quantinuum's Helios has a two qubit gate error rate of 0.08% [3]. IBM has Heron processors available on their cloud service with two qubit gate error rates ranging from 0.2% to 0.7% [4]. Neutral atom machines have demonstrated 0.5% gate error rates [5].
What do you mean? The original 2019 supremacy experiment was eventually simulated, as better classical methods were found, but the followups are still holding strong (for example [4] and [5]).
There was recently a series of blog posts by Dominik Hangleiter summarizing the situation: [1][2][3].
Agree. Scott is exactly correct when he just straight calls it crap.
It's inaccurate to say it wins on small numbers because on small numbers you would use classical computers. By the time you get to numbers that take more than a minute to factor classically, and start dreaming of quantum computers, you're well beyond the size where you could tractably do the proposed state preparation.
That slide deck is arguing that even correct work on quantum attacks should be treated as a negligible priority, or as a distraction. TFA is complaining that JVG isn't even correct. Those are pretty different concerns.
To be clear, I think that slide deck will be looked back upon as naive. In particular, it makes the classic mistake of assuming that the size of the number factored should grow smoothly over time. That's naive because 15 is such a huge cost outlier, and because quantum error correction front-loads the costs. See [1] and [2] for details.
The very first demonstration of factoring 15 with a quantum computer, back in 2001, used a valid modular exponentiation circuit [1].
The trickiest part of the circuit is that they compile the conditional multiplication by 4 (mod 15) into two controlled swaps. That's a very elegant way to do the multiplication, but most modular multiplication circuits are much more complex. 15 is a huge outlier in the difficulty of actually doing the modular exponentiation, which is why 15 is so far the only number that's been factored by a quantum computer while meeting the bar of "yes, you actually have to do the modular exponentiation required by Shor's algorithm".
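The two-controlled-swap compilation is easy to sanity-check classically. A quick script (mine, not from the 2001 paper): on 4 bits, multiplying by 4 mod 15 is a rotation of the bits by two positions, and a 4-bit rotation by two is exactly two disjoint swaps, so conditioned on an exponent qubit it becomes two Fredkin gates:

```python
# Check that x -> 4*x mod 15 on 4-bit values is a 2-position bit
# rotation, which decomposes into two disjoint swaps
# (bit0 <-> bit2 and bit1 <-> bit3).

def times4_mod15(x: int) -> int:
    return (4 * x) % 15

def rot2(x: int) -> int:
    # rotate a 4-bit value left by two positions
    return ((x << 2) | (x >> 2)) & 0b1111

def two_swaps(x: int) -> int:
    # swap bit0 <-> bit2 and bit1 <-> bit3
    b = [(x >> i) & 1 for i in range(4)]
    b[0], b[2] = b[2], b[0]
    b[1], b[3] = b[3], b[1]
    return sum(bit << i for i, bit in enumerate(b))

# The group action never touches 0 or 15, so check 1..14:
for x in range(1, 15):
    assert times4_mod15(x) == rot2(x) == two_swaps(x)
print("4*x mod 15 = rotate by 2 = two swaps, for all x in 1..14")
```

This is the special structure the comment below is pointing at: for 15, every coprime multiplier has a decomposition about this cheap.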
Would other Mersenne numbers admit the same trick? If so, factoring 2047 would be really interesting to see. It's still well within the toy range, but it's big enough that it would be a lot easier to believe the quantum computer was doing something. (15 is so small that guessing the only odd number between 1 and sqrt(15), namely 3, is guaranteed to produce a correct factor.)
No, 15 is unique in that every multiplication by a known constant coprime to 15 corresponds to a bit rotation and/or a bitwise complement. For 2047 that only occurs for a teeny tiny fraction of the selectable multipliers.
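A quick script makes that fraction concrete (my own check, taking "cheap" to mean a rotation optionally composed with a bitwise complement, since for n = 2^k - 1 multiplying by 2 rotates the k-bit representation and multiplying by n - a complements the bits of multiplying by a):

```python
from math import gcd

def cheap_multipliers(k: int) -> tuple[int, int]:
    # Returns (number of "cheap" multipliers, number of coprime multipliers)
    # for n = 2^k - 1.
    n = (1 << k) - 1
    rotations = {pow(2, i, n) for i in range(k)}    # 1, 2, 4, ..., 2^(k-1)
    cheap = rotations | {n - r for r in rotations}  # plus their complements
    coprime = sum(1 for a in range(1, n) if gcd(a, n) == 1)
    return len(cheap), coprime

print(cheap_multipliers(4))   # n = 15:   (8, 8)   -> every multiplier is cheap
print(cheap_multipliers(11))  # n = 2047: (22, 1936) -> about 1% are cheap
```

So for 15 all 8 coprime multipliers are cheap, while for 2047 only 22 of the 1936 coprime multipliers are, and a random base almost never lands on one.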
Shor's algorithm specifies that you should pick the base (which determines the multipliers) at random. Deliberately picking a rare base whose circuit happens to be cheap really does start to overlap with using knowledge of the factors to build the circuit. By far the biggest cheat is to "somehow" pick a number g such that g^2 = 1 (mod n) but g isn't 1 or n-1, because that's exactly the number Shor's algorithm is searching for, and the whole thing collapses into triviality.
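The collapse is easy to see classically. Given such a g, the factors fall out of two gcds, since n divides (g-1)(g+1) but neither factor alone. This is standard number theory, illustrated here with a brute-force search standing in for the quantum period-finding:

```python
from math import gcd

def factor_from_sqrt1(n: int) -> tuple[int, int]:
    # Find a nontrivial square root of 1 mod n, i.e. g with g^2 = 1 (mod n)
    # and g not equal to 1 or n-1. (Shor's algorithm produces such a g via
    # period finding; here we just brute-force one.)
    for g in range(2, n - 1):
        if g * g % n == 1:
            # n | (g-1)(g+1) but divides neither factor alone, so each
            # gcd picks out a proper factor of n.
            return gcd(g - 1, n), gcd(g + 1, n)
    raise ValueError("no nontrivial square root of 1 (n may be a prime power)")

print(factor_from_sqrt1(15))    # g = 4 works: (3, 5)
print(factor_from_sqrt1(2047))  # (23, 89)
```

Which is why "somehow" knowing such a g ahead of time is equivalent to already knowing the factorization.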
For each chick they do 24 trials divided into 4 blocks with retraining on the ambiguous shape and actual rewards after each block. During the actual tests they didn't give rewards. In figure 1 they show the data bucketed by trial index. It's a bit surprising it doesn't show any apparent effect vs trial number, e.g. the first trial after retraining being slightly different.
I have to admit I'm super skeptical there's not some stupid mistake here. Definitely thought provoking. But I wish they'd kept iteratively removing elements until the correlation stopped happening, so they could nail down causation more precisely.
I do agree my skepticism level rises extremely high with any experimental psychology result. There are just so many ways to bias results, in addition to the "do enough experiments and one of them will get a statistically unlikely result" problem.
This group does a lot of work like this: https://www.dpg.unipd.it/en/compcog/publications ... so it's tempting to think they keep trying things until something odd happens (kind of like physicists who look for fifth forces... eventually they find something odd, but often it's just an experimental issue they need to understand further).
> Using simple simulations, we show that this pattern arises naturally from collider bias when selection into elite samples depends on both early and adult performance. Consequently, associations estimated within elite samples are descriptively accurate for the selected population, but causally misleading, and should not be used to infer developmental mechanisms
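The quoted claim is easy to reproduce. A toy simulation of that selection effect (my own code, not the paper's; the effect sizes and threshold are made up for illustration):

```python
import random

# Early performance genuinely helps adult performance in the full
# population, but selecting an "elite" sample on a combination of both
# variables (the collider) reverses the association within the sample.

random.seed(0)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

early = [random.gauss(0, 1) for _ in range(200_000)]
# True positive causal effect of early on adult performance:
adult = [0.5 * e + random.gauss(0, 1) for e in early]

# Elite selection depends on BOTH early and adult performance:
elite = [(e, a) for e, a in zip(early, adult) if e + a > 2.5]
elite_early = [e for e, a in elite]
elite_adult = [a for e, a in elite]

print(round(pearson(early, adult), 2))             # clearly positive
print(round(pearson(elite_early, elite_adult), 2)) # negative within the elite
```

Same data-generating process, opposite sign of the association, purely from how the sample was selected.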
Is that paper in print? I can't seem to find if it was peer reviewed.
If the paper is true, then, yeesh! That's a pretty big miss on the part of Güllich et al.
Reading through the very short paper there, it seems not to have gone through review yet (typos, misspellings, etc). It's also not clear whether the data in the tables and the figure come from Güllich's work or are simulations meant to illustrate the authors' point ("True and estimated covariate effects in the presence of simulated collider bias in the full and selected samples"). Being clearer about where the data comes from would help the argument, but I may have just missed a sentence.
I'll be interested to see where this goes. That Güllich managed to get the paper into Science in the first place lends some credence to the idea that they considered something as simple as Berkson's paradox and accounted for it. It's not every day you get something as 'soft' as that paper into Science, after all. If not, then wow, standards for review really have slipped!
> By late 2024 the biggest numbers that had been factored by an actual digital quantum computer had 35 bits (citing https://arxiv.org/pdf/2410.14397v1 )
This is incorrect. The cited reference says "N <= 35". That N is the number being factored, not the number of bits in the number. Also, footnote a of that paper points out (correctly) that the circuits that were used likely needed knowledge of the factors to create (e.g. as explained in https://arxiv.org/abs/1301.7007 ). As far as I know, only N=15 has been factored on a quantum computer in a no-shenanigans way.
It's conceivable that current ion trap machines could do a no-shenanigans N=21... but anyone judging progress in quantum computing by largest-number-factored is looking at the wrong metric (for now). You won't see that metric move meaningfully until quantum error correction is done spinning up.
Its fame comes from the simplicity of its construction rather than its utility elsewhere in mathematics.
For example, Graham's number is pretty famous, but it's more of a historical artifact than a foundational building block. Other examples of non-foundational fame would be the famous integers 42, 69, and 420.
[1]: https://www.nature.com/articles/s41586-025-09641-4