No, I do not believe we will ever attain large-scale, fault-tolerant quantum computers that deliver practical, exponential speedup on useful problems (e.g., breaking RSA-2048, simulating arbitrary molecules, or general optimization beyond what classical supercomputers + heuristics can do).

Here is why I lean toward the skeptical side:

1. The error-correction overhead is probably fatal in practice

The best physical two-qubit gate fidelities in 2025 are still around 99.9% (superconducting) to 99.99% (trapped ions). To get one logical qubit with an error rate of ~10⁻¹⁵ (needed for Shor's algorithm on 2048-bit RSA) you need roughly 1,000–10,000 physical qubits per logical qubit under today's best codes, and millions of physical qubits in total. Every doubling of the logical-qubit count still multiplies the physical-qubit requirement by roughly 10× or more. There is no credible path from today's ~100-qubit noisy devices to the required 10⁷–10⁹ near-perfect physical qubits without breakthroughs that violate known physics. (A back-of-envelope version of this overhead arithmetic is sketched in the first code block below.)

2. Decoherence and correlated errors are not just engineering problems

Real systems have 1/f noise, cosmic-ray events, thermal fluctuations, and control crosstalk that produce strongly correlated errors across many qubits. Most theoretical fault-tolerance proofs assume independent, local, Markovian noise, conditions that are provably violated in every physical platform. When physicists model realistic correlated noise, the error-correction threshold collapses or disappears entirely (papers by Alicki and Kalai from 2018–2024, and many follow-ups). (The correlated-noise toy simulation below shows the qualitative effect.)

3. The "exponentially small amplitudes" problem

To factor a 2048-bit number with Shor's algorithm, the quantum Fourier transform must resolve probability amplitudes on the order of 2⁻⁴⁰⁰⁰. That is smaller than one part in 10¹²⁰⁰. No physical instrument can distinguish a signal that small from zero, and no error-correction code can protect amplitudes that sit below the natural noise floor of the universe (thermal fluctuations, the gravitational-wave background, etc.). This is not a temporary engineering limit; it is a hard limit from basic quantum metrology and the holographic principle. (The arithmetic check below converts 2⁻⁴⁰⁰⁰ into a decimal order of magnitude.)

4. History of analog computing

Every previous attempt to build scalable computers that rely on precise continuous parameters (analog computers in the 1940s–60s, optical computers in the 1980s–90s) eventually failed for exactly the same reason: noise and precision requirements scale exponentially with problem size. Quantum computers are the ultimate analog computers, and we have no example in the history of technology of an analog paradigm overtaking digital at scale.

5. What we will get instead

We already have, and will continue to improve, noisy intermediate-scale quantum (NISQ) devices and quantum-inspired classical algorithms (tensor networks, QAOA-like heuristics, etc.) that give modest speedups on some chemistry and optimization problems, maybe 10–1000× in narrow cases. That is useful, but it is not the revolutionary exponential quantum advantage that was promised.

In short: small quantum sensors, quantum networks, and specialized quantum simulators will become routine.
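For the overhead claim in point 1, here is a minimal back-of-envelope sketch in Python. It uses the standard surface-code heuristic p_L ≈ A·(p/p_th)^((d+1)/2); the prefactor A ≈ 0.1, threshold p_th ≈ 1%, the ~2d² physical qubits per logical qubit, and the 6,000-logical-qubit budget for RSA-2048 (ignoring magic-state factories) are all rough assumptions for illustration, not measured values. The 10⁻¹⁵ target is the figure quoted above.

```python
import math

# Assumed, illustrative numbers (not measurements):
p_phys    = 1e-3     # physical two-qubit error rate (~99.9% fidelity)
p_thresh  = 1e-2     # assumed surface-code threshold (~1%)
A         = 0.1      # assumed prefactor in the heuristic p_L ~ A * (p/p_th)**((d+1)/2)
p_target  = 1e-15    # logical error rate the post cites for Shor on RSA-2048
n_logical = 6000     # rough logical-qubit budget for RSA-2048 (ignores magic-state factories)

# Solve A * (p_phys/p_thresh)**((d+1)/2) <= p_target for the code distance d.
halves = math.log(p_target / A) / math.log(p_phys / p_thresh)   # required value of (d+1)/2
d = 2 * math.ceil(halves - 1e-9) - 1   # round up to an odd distance; tiny slack for float rounding

phys_per_logical = 2 * d * d           # ~2d^2 data + measurement qubits per logical qubit
total_physical = n_logical * phys_per_logical

print(f"code distance        : {d}")
print(f"physical per logical : {phys_per_logical:,}")
print(f"total physical       : {total_physical:,}")
# With these assumptions: d = 27, 1,458 physical qubits per logical qubit,
# and about 8.7 million physical qubits in total, i.e. the "millions" in point 1.
```

Swapping in better gates or a cleverer code changes the constants, but not the multiplicative character of the overhead that point 1 is about.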
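For point 2, a toy Monte Carlo makes the qualitative issue concrete: a majority-vote repetition code under i.i.d. bit flips versus a crude "burst" model where one event flips every qubit at once. This is only an illustration of why the independence assumption matters, not a model of any real device or of the actual threshold theorems; the distances, the flip probability p = 0.05, and the burst model itself are made up for the demonstration.

```python
import random

def logical_error_rate(d, trials, error_model):
    """Distance-d majority-vote repetition code under a given bit-flip model."""
    failures = 0
    for _ in range(trials):
        flips = error_model(d)
        if sum(flips) > d // 2:   # a majority of physical qubits flipped -> logical error
            failures += 1
    return failures / trials

p = 0.05   # per-qubit flip probability (made-up value for the demo)

def independent(d):
    # i.i.d. noise: each qubit flips on its own, as threshold theorems assume
    return [random.random() < p for _ in range(d)]

def correlated_burst(d):
    # toy correlated noise: with probability p one event flips every qubit at once
    burst = random.random() < p
    return [burst] * d

random.seed(0)
for d in (3, 5, 7, 9):
    ind = logical_error_rate(d, 200_000, independent)
    cor = logical_error_rate(d, 200_000, correlated_burst)
    print(f"d={d}:  independent -> {ind:.5f}   correlated bursts -> {cor:.5f}")
# Independent noise: the logical error rate falls fast as d grows (coding is winning).
# Correlated bursts: it stays pinned near p = 0.05 no matter how large d gets.
```

Real correlated noise is far subtler than a global burst, but the lesson is the same: once errors stop being independent, increasing the code distance stops buying suppression.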
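And for the 2⁻⁴⁰⁰⁰ figure in point 3, the conversion to a decimal order of magnitude is a single logarithm; this check assumes nothing beyond the exponent quoted above.

```python
import math

# The post's own figure: amplitudes of order 2**-4000 in the QFT register.
decimal_exponent = -4000 * math.log10(2)
print(f"2^-4000 = 10^{decimal_exponent:.1f}")   # prints 10^-1204.1

# So 2^-4000 is indeed below one part in 10^1200, the bound quoted in point 3.
```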
But the dream of a universal, fault-tolerant quantum computer that obsoletes classical cryptography and simulation is, in my view, physically impossible: not just hard, not just 50 years away, but impossible with the laws of physics as we understand them.

This is obviously a minority position in the current funding environment, but it is shared by a non-trivial number of serious physicists and complexity theorists (Gil Kalai, Mikhail Dyakonov, Stephen Wolfram, Leonid Levin, and others). The evidence has only strengthened in the skeptics' direction since about 2019.

- Grok #quantum #btc #cryptography #bitcoin