There is no real wealth in money gathering.
> There is no real wealth in money gathering.
That's the exaltation of the chemist after figuring out the hydrogenation method for soap.
I found the AI article analysis also interesting:

I read the Quantum Zeitgeist piece you linked (published Nov 27, 2025). It is basically a "prepare for PQC because Shor exists and migration is slow" article, not a "quantum is imminent and will crack everything next year" pitch. That framing is mostly reasonable. Where it goes wrong is in the usual places: it blends correct high-level security guidance with sloppy, overconfident technical specifics.

A few key assumptions in the article that are easy to over-believe:

1. "RSA-2048 could be done in hours on a sufficiently large QC." That's a rhetorical shortcut. The best-known public resource estimates depend on explicit hardware assumptions (gate error, cycle time, connectivity, decoding/control latency). Under those assumptions, the 2019/2021 estimate was "≈8 hours but ~20 million noisy qubits." In 2025, Gidney published a new estimate claiming "< 1 million noisy qubits" but "less than a week" of runtime under the same explicit assumptions. So "hours" is not a stable claim even within the same author's evolving approach.

2. "Surface code needs roughly 1,000 physical qubits per logical qubit." That's an oversimplification that hides the real dependency: the required code distance depends on your physical error model and target logical error rate, and the qubits-per-logical count grows roughly with d^2 (e.g., 2d^2 - 1 is a common counting baseline). "About 1,000" might be plausible in some regimes, but it is not a universal constant.

3. "NIST selected four PQC algorithms … including Kyber and Dilithium." Selection vs. standardization got blurred. NIST published three finalized PQC standards in August 2024: FIPS 203 (ML-KEM/Kyber), FIPS 204 (ML-DSA/Dilithium), and FIPS 205 (SLH-DSA/SPHINCS+). FALCON is selected but (as of Dec 2025) is still slated for FIPS 206, "in development." Also, in March 2025 NIST selected HQC as a backup KEM (a separate "belt-and-suspenders" move that the article doesn't mention).

4. "As of 2024, the largest processors have fewer than 1,500 qubits." That's likely true in the narrow "physical qubit count on a single processor" sense (IBM announced Condor at 1,121 superconducting qubits in late 2023). But this metric is the most hype-prone one; it ignores error distributions, correlated noise, leakage, and whether scaling improves logical error with code distance (the thing that actually matters for cryptography).

The strongest opposing expert viewpoint (steelman) against "quantum is a scam" is simple:

• The field is not "trying to brute-force miracles with noisy toys." The credible milestone is demonstrating error correction that improves as you scale redundancy (below-threshold behavior). That is now a serious peer-reviewed line of work, not marketing.

• And on the security side, the risk-management argument doesn't require believing a CRQC is imminent. It only requires that (a) long-lived secrets exist and (b) migration takes years. That's why government guidance targets a long transition horizon (e.g., NSA CNSA 2.0 pushing adoption timelines out to 2030+ for some requirements, with longer-term goals).

Where the article is genuinely useful (and you should keep it):

• "Harvest now, decrypt later" is the one point that stays true even if you're deeply skeptical about quantum timelines. The data you encrypt today can have a confidentiality life longer than your migration lead time. That's a planning problem, not a physics bet.

Where it drifts into fear-mongering / weak epistemics:

• "Experts estimate 2030s" is too hand-wavy without a distribution and a definition of "cryptographically relevant." Serious timeline work frames probabilities over time for a specific capability (e.g., RSA-2048 in 24 hours).

• It mixes architecture commentary with dated company references (e.g., "Honeywell" in trapped ions, when the commercial entity is Quantinuum) and asserts specific error-rate numbers without sourcing.

• The site itself includes a disclaimer that third-party info hasn't been reviewed for accuracy, which should downgrade confidence in its precise numerical claims.

Reframe as a decision (with a "do nothing" option):

Option A: Do nothing now. Rational if your encrypted data has a short shelf life (days/months) and you can rotate/replace quickly.

Option B: Start PQC migration as "crypto-agility + inventory," not a rip-and-replace. This is the boring, usually-correct move: find where RSA/ECC live, classify data by secrecy lifetime, plan staged upgrades.

Option C: Overreact and "quantum-proof everything immediately." This is where orgs burn money, break compatibility, and sometimes reduce near-term security due to rushed deployment.

My recommendation: treat the article as decent motivation to do Option B, but do not trust its technical numbers as decision-grade. Ground yourself in NIST's standards and transition materials, then decide based on (1) data shelf life and (2) your migration time.

Now the hard argument against my recommendation: if you don't have the staff maturity to do crypto-agility well, a PQC program becomes theater: inventories go stale, teams cargo-cult "Kyber everywhere," and you introduce new failure modes (performance regressions, handshake breakage, supply-chain incompatibilities). In that scenario, the honest move is to first fix basic key management, rotation, and dependency tracking before you touch PQC.

Small, testable steps with kill-criteria (so you don't get trapped by hype):

Step 1: Build a cryptographic inventory (protocols, certs, libraries, HSMs, vendors). Kill-criterion: if you can't list where RSA/ECC are used, you are not "quantum planning," you're storytelling.

Step 2: Classify systems by "confidentiality lifetime" (weeks / years / decades). Kill-criterion: if nobody can defend the category, assume "years" for anything containing identity, finance, or long-lived IP.

Step 3: Pilot PQC in one boundary where breakage is tolerable (e.g., internal service-to-service TLS with a rollback plan). Kill-criterion: if you can't roll back cleanly, you're not ready for broad deployment.

Step 4: Track reality using one metric: demonstrations that logical error improves with increased code distance (not "more qubits").

A short Orthodox-flavored grounding (since you asked for "ground truth" in this tradition): don't let fear or pride run the plan. Hype feeds both. The sober path is vigilance plus humility: prepare because it is prudent, but refuse to speak more confidently than the evidence allows.
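The "2d^2 - 1" counting baseline mentioned in point 2 above is easy to sanity-check yourself. A minimal sketch (my own illustration, not from the article; it counts only data plus syndrome qubits for one logical qubit and ignores routing, magic-state factories, and other real overheads):

```python
def surface_code_physical_qubits(distance: int) -> int:
    """Common counting baseline for a rotated surface code:
    d^2 data qubits + (d^2 - 1) measurement qubits = 2*d^2 - 1.
    Treat this as a lower bound, not a universal constant."""
    return 2 * distance**2 - 1

# Overhead per logical qubit at a few illustrative code distances:
for d in (11, 17, 23, 27):
    print(f"d={d}: {surface_code_physical_qubits(d)} physical qubits")
```

Note that the "about 1,000 per logical qubit" figure only falls out around d ≈ 23 (1,057 qubits); at other distances the number is very different, which is exactly why quoting it as a constant is misleading.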
Still on scam territory for me.
I had my AI counter-argument bot cross-check me:

A lot of the skepticism you summarized is directionally right (the hype is real; "practical advantage next year" claims are usually marketing), but several of your strongest-sounding bullets rely on shaky premises or outdated "folk numbers." If you want an intellectually honest view, you end up in an uncomfortable middle: quantum computing is not a "scam," but it is also not close to the grandiose promises investors were sold.

Here are key assumptions in your writeup that could be false:

• "No progress on factorization in 20 years." The algorithms (Shor) are old, yes, but the engineering resource estimates and compilation techniques have improved a lot. For RSA-2048 specifically, Gidney and Ekerå's well-known estimate was ~20 million physical qubits under explicit assumptions, and in 2025 Gidney published a new estimate claiming "less than a million noisy qubits" (still with demanding assumptions and still far beyond today).

• "Millions of physical qubits per logical qubit" as a fixed rule. Overhead depends on physical error rates, code choice, connectivity, and what you're trying to do (memory vs. T gates, etc.). IBM is explicitly arguing for qLDPC-style paths that reduce qubit overhead compared to surface-code baselines, at least for some components (e.g., memory).

• "NISQ can only do contrived demos." Many "supremacy/advantage" tasks are contrived, yes, but the real question is whether error-corrected logical qubits can scale, with logical error improving as you increase code distance. There are now peer-reviewed results explicitly about being "below threshold" (the regime you must be in for scalable fault tolerance).

Now the strongest opposing expert viewpoint (steelman), in plain terms: quantum computing is an engineering program to build a fault-tolerant machine, and the physics is not in serious doubt. We can already create, control, and measure multi-qubit entangled systems; the hard part is driving logical error down faster than system size grows. The most credible "this is real" evidence is not qubit counts or sampling stunts, but demonstrations of error correction improving as redundancy increases (crossing into "below-threshold" behavior).

Where your skepticism is most justified (real failure modes):

• Incentives are misaligned. Startups and even big labs market "number of qubits" because it's legible, while the actually predictive metrics (two-qubit gate fidelity distributions, leakage, correlated noise, cycle time, error-model stability, decoding latency, logical error per round, etc.) are harder to sell.

• Scaling is brutal in ways that don't show up in demos: cryogenics, wiring, calibration at scale, correlated noise, fabrication yield, and error decoding/control stacks. Even if the math works, the "systems engineering tax" can kill timelines.

• "Break RSA" headlines are routinely abused. Shor threatens RSA/ECC in principle, but the relevant question is "cryptographically relevant quantum computer" (CRQC) timelines under realistic constraints, not toy factoring demos.

On "is it a scam?": calling it a scam conflates three different things.

1. Quantum computing as a scientific/engineering field: not a scam. There are serious roadmaps and serious peer-reviewed milestones in QEC.

2. Quantum computing marketing: often hypey, sometimes misleading, occasionally outrageous. Even pro-quantum voices have publicly complained about hype incentives.

3. "Quantum" used as a brand for fraud (e.g., "quantum trading bots"): that's a separate bucket (actual scams) and shouldn't be used to judge the underlying field.

A decision framing that avoids hero narratives:

• Option A: Do nothing / ignore QC entirely. Rational if you're evaluating it as a near-term product category.

• Option B: Treat QC as long-horizon R&D, and demand falsifiable milestones (logical qubits with improving logical error; small fault-tolerant demonstrations; resource estimates with explicit assumptions).

• Option C: Act only where the downside is asymmetric: migrate cryptography to PQC, because migration lead times are long and "harvest now, decrypt later" is real even if CRQCs are years out. This is exactly why NIST finalized PQC standards and publishes migration guidance.

My best recommendation (if you're an operator/investor/strategist): be skeptical of "revolution soon," but don't be skeptical of the threat model. Concretely: ignore qubit-count press releases, track error-correction milestones, and in security planning treat PQC migration as a project you start now, not later.

Now I'll argue hard against my own recommendation: it's easy to overreact and waste money "quantum-proofing everything" prematurely, especially if your data has a short confidentiality lifetime. Also, PQC migration itself has costs and risks (performance, protocol changes, interoperability bugs, long-term confidence in new assumptions). If your organization cannot do crypto-agility well, a rushed migration can reduce security in the near term. The sane middle is staged migration driven by data lifetime and dependency mapping, not panic.

If you want to rewrite your critique into small, testable steps with kill-criteria (so it's not just vibes):

• Step 1 (kill-criterion: vendor can't answer): Require any "progress" claim to be expressed as (a) logical error per round vs. code distance, (b) syndrome extraction cycle time, (c) error correlations/leakage handling, (d) decoding latency budget. If they only talk "qubits," downgrade credibility.

• Step 2 (kill-criterion: handwaving assumptions): For "break RSA" claims, demand a resource estimate with explicit physical assumptions (gate error, cycle time, connectivity, decoding, distillation). Compare against conservative published estimates like Gidney's 2019 and 2025 papers.

• Step 3 (kill-criterion: no security inventory): If this is about security, do a cryptographic inventory first. If you can't locate where RSA/ECC are used, "quantum risk" talk is theater. NIST's transition logic is basically "start before X + Y exceeds Z."

One last note on your sources: Reddit threads and polemical blogs are useful for stress-testing narratives, but they're not good evidence. If you want "strong doubts" that survive scrutiny, look at skeptics who argue from physics/complexity constraints (e.g., Kalai's "we may hit a wall" position) rather than pure finance/hype rhetoric.
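The "start before X + Y exceeds Z" logic in Step 3 is the Mosca inequality, and it reduces to one comparison. A minimal sketch (my own illustration; all the numbers in the example are hypothetical planning inputs, not predictions):

```python
def at_risk(secrecy_years: float, migration_years: float,
            crqc_horizon_years: float) -> bool:
    """Mosca inequality: if X (how long the data must stay secret)
    plus Y (how long migration takes) exceeds Z (assumed years until
    a CRQC exists), data encrypted today is already exposed to
    harvest-now-decrypt-later."""
    return secrecy_years + migration_years > crqc_horizon_years

# Hypothetical example: records secret for 10 years, 5-year migration,
# assumed 12-year CRQC horizon -> 15 > 12, so migration is already late.
print(at_risk(10, 5, 12))
# Short-lived data: 1-year secrecy, 2-year migration -> not at risk.
print(at_risk(1, 2, 12))
```

The point of the sketch is that the argument never requires a confident CRQC date; even a generous Z gets violated once secrecy lifetimes are measured in decades.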
https://eprint.iacr.org/2025/1237.pdf Even the "factorizations" they claim are a complete joke.
https://www.reddit.com/r/QuantumComputing/comments/1nfcplc/assertion_there_are_no_quantum_computers_in/ To quote: > people have shown they can now factor the number 35, after having been stuck with 21 for years. Correction, they could demonstrate a quantum circuit that could factor the number 35 after it knew that 5 and 7 are its factors, see the paper by Gutmann and Neuhaus I linked to above, "Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog".
To quote: > I consider the lack of one error corrected qubit in the history of the human race to be adequate evidence that this is not a serious enough field to justify using the word "field." Most of it is, frankly, a scam.
https://scottlocklin.wordpress.com/2019/01/15/quantum-computing-as-a-field-is-obvious-bullshit/
The other day I looked into the history of that poster. Did you know it changes between seasons of The X-Files? The original photo is from the 70s.
Seedsigner
It's a scam.