lkraider on Nostr:

My AI reading, trying to predict the failure mode:

If that classical Planck-local picture is true, the most likely failure is not “the chip explodes at 1609 qubits.” It would look more like a scaling pathology.

The paper itself only goes this far: either large quantum computers succeed and pressure classical Planck-local models, or they keep failing “with no apparent technical reasons,” which could be the first hint of a limit to quantum mechanics. It does not predict a detailed failure mode. It also gives several different thresholds - about 525, 806, 1050, and 1609 logical qubits - depending on how much hidden classical computation and communication you allow. So the relevant quantity is not a single raw qubit ceiling, but a complexity ceiling tied to demonstrated equivalent classical operations and rate density. 
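
To make that concrete: if those thresholds were derived by comparing the 2^n amplitudes of an n-qubit state against a fixed classical operation budget for the hidden substrate, the arithmetic would look roughly like this. Purely illustrative - the paper's actual accounting may differ, and the budgets here are back-solved from the quoted thresholds, not sourced.

```python
# Purely illustrative: if a threshold of n logical qubits reflects a
# hidden classical substrate with an operation budget B, and simulating
# n qubits takes on the order of 2**n operations, then B ~ 2**n.
# These budgets are back-solved from the quoted thresholds, not sourced.
import math

for n in (525, 806, 1050, 1609):
    digits = n * math.log10(2)  # 2**n expressed as a power of ten
    print(f"n = {n:4d} logical qubits  <=>  ~10^{digits:.0f} classical operations")
```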

If I pressure-test the idea, the main possibilities are these.
• A soft wall, not a hard wall. Small and medium fault-tolerant demos keep working. Then, as logical depth, entanglement volume, or algorithmic complexity rises, improvement stalls: you keep adding engineering effort, but the logical error rate stops dropping enough to support deeper computations.
• Correlated errors that refuse to factorize. Standard fault tolerance assumes noise is mostly local, weakly correlated, and tame enough that bigger codes help. A deeper Planck-local limit would probably show up as extra cross-qubit, history-dependent, or many-body correlations. In plain terms: the machine would look normal at small scale, then start producing “mysteriously coordinated” failures once the hidden classical substrate is overloaded.
• Error correction plateaus or backfires. One of the clearest signatures would be that increasing code distance, adding more ancillas, or running more rounds of syndrome extraction no longer improves logical fidelity as theory predicts. At first that looks like bad engineering. It becomes interesting only if the plateau persists across architectures and labs (a toy model of this plateau follows the list).
• Complexity-dependent failure. The device might still do useful quantum sensing, chemistry, optimization, or shallow circuits, but repeatedly fail on the workloads that most strongly exploit the exponential state space - especially long, verified, fault-tolerant runs of Shor-type arithmetic. That would matter for Bitcoin because the dangerous task is not “any quantum computation,” but sustained fault-tolerant execution deep enough to solve elliptic-curve discrete logs.
• Verification drift toward classically compressible outputs. If the hidden classical substrate cannot really sustain the full quantum evolution, the machine may start returning outputs that look subtly more classical than they should. Not random garbage - something worse for the hypothesis test: outputs that are biased, truncated, or easier to simulate classically than the claimed circuit should allow.
• Cross-platform convergence on the same cliff. This is the strongest sign. If superconducting, trapped-ion, neutral-atom, and photonic systems all hit a similar complexity wall after very different engineering improvements, then “just another hardware problem” becomes less convincing.
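
Here is the toy model of the plateau signature, assuming the standard surface-code suppression law p_L ≈ A·(p/p_th)^((d+1)/2) with made-up constants. The hypothetical correlated floor p_corr is what a Planck-local limit might look like in the data: distance keeps paying until the floor takes over, then stops.

```python
# Toy model: logical error rate vs. code distance d under the standard
# surface-code suppression law, with an optional irreducible floor.
# A, p, p_th, and p_corr are assumed constants, not from the paper.
A, p, p_th = 0.1, 1e-3, 1e-2

def p_logical(d, p_corr=0.0):
    """Logical error rate at code distance d, plus an optional correlated floor."""
    return A * (p / p_th) ** ((d + 1) / 2) + p_corr

for d in (3, 5, 7, 9, 11):
    clean = p_logical(d)
    walled = p_logical(d, p_corr=1e-6)
    print(f"d={d:2d}  ideal={clean:.2e}  with correlated floor={walled:.2e}")
```

Without the floor, each step in code distance buys about an order of magnitude; with it, the curves converge around the floor and further distance stops paying.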

For Bitcoin specifically, the first visible symptom would probably be this: teams keep improving qubit counts and individual gate fidelities, but the full stack needed for a practical Shor attack on secp256k1 never closes. You would see repeated inability to maintain the logical depth and logical fidelity needed for long modular arithmetic and phase estimation, even though smaller benchmark tasks keep improving. In that scenario, Bitcoin is not saved by a theorem. It is saved by a stubborn scaling failure in fault-tolerant QC.
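
Some rough arithmetic on why that stack is so hard to close, using the commonly cited Roetteler et al. (2017) resource estimates for a 256-bit elliptic curve: roughly 2,330 logical qubits and on the order of 1.3e11 Toffoli gates. All constants here are order-of-magnitude assumptions, not from the paper being discussed.

```python
# Rough arithmetic: what logical fidelity a Shor attack on a 256-bit
# curve demands, using commonly cited estimates (Roetteler et al. 2017).
# All figures are order-of-magnitude assumptions.
import math

logical_ops = 1.3e11            # Toffoli count, order of magnitude

# To succeed with probability ~0.5, each logical operation must fail
# with probability well below 1/logical_ops.
max_per_op_error = 0.5 / logical_ops
print(f"required logical error per op < {max_per_op_error:.1e}")

# Under p_L = A*(p/p_th)**((d+1)/2), the code distance needed to reach
# that error rate (assumed A=0.1 and p/p_th=0.1):
A, ratio = 0.1, 0.1
d = 2 * math.log(max_per_op_error / A) / math.log(ratio) - 1
print(f"needed code distance d ≈ {math.ceil(d)}")
```

The exact d depends entirely on the assumed constants, but the point survives them: per-operation logical error in the 10^-12 range is the real gate, not the headline qubit count.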

The strongest opposing view is that every one of these signatures can be mimicked by ordinary engineering pain - crosstalk, calibration drift, leakage, thermal issues, bad decoders, fabrication limits, control electronics bottlenecks, or flawed noise models. So “QC keeps failing” is weak evidence by itself. To count as evidence for a Planck-local classical limit, the failure would need three properties at once:
• it appears at a complexity threshold, not just a hardware threshold,
• it survives across very different qubit platforms,
• it cannot be removed by better isolation, control, decoding, or architecture.

My best recommendation is to watch error correction, not headline qubit counts. The most telling failure would be an unexplained, reproducible breakdown in the expected scaling law of logical error suppression as circuits get deeper and more entangled.
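
Concretely, the watch-item is the suppression factor Λ: the ratio by which logical error drops per step in code distance. A minimal sketch of the fit, with invented plateau data standing in for measurements:

```python
# Minimal sketch: fit the suppression factor Lambda from logical error
# rates at increasing code distance, and expose a plateau.
# The data below is invented to illustrate a stall, not measured.
import numpy as np

distances = np.array([3, 5, 7, 9, 11])
p_log = np.array([3e-2, 8e-3, 2e-3, 1.1e-3, 1.0e-3])  # hypothetical plateau

# Under p_L(d) ∝ Lambda**(-(d+1)/2), log p_L is linear in d with
# slope = -ln(Lambda)/2, so Lambda = exp(-2*slope).
slope, _ = np.polyfit(distances, np.log(p_log), 1)
print(f"fitted Lambda ≈ {np.exp(-2 * slope):.2f}")  # Lambda <= 1: no suppression

# Pairwise Lambda between consecutive distances shows where it stalls:
for d0, d1, q0, q1 in zip(distances, distances[1:], p_log, p_log[1:]):
    print(f"d={d0}->{d1}: Lambda = {q0/q1:.2f}")
```

A healthy machine keeps Λ comfortably above 1 as d grows; the telling signature would be Λ sliding toward 1 at a reproducible depth or entanglement volume, on more than one platform.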

Now the argument against that recommendation: even that would still not prove a Planck-local classical substrate. It might only prove that our fault-tolerance assumptions were too optimistic, that a hidden noise source was missed, or that the practical overhead for cryptographically relevant QC was much larger than expected.

So the clean answer is this: if the classical Planck-local explanation were true, I would expect a persistent, architecture-independent scaling cliff - most likely seen as irreducible correlated noise and failure of logical error correction to keep buying more reliable depth. Not a dramatic qubit-number wall, but a failure to cross the depth-and-fidelity threshold needed for cryptographically relevant attacks.