
Question: Fedimints and oracles. Could the oracle be the non-funded mint?
So every week I get a msg: borrow against your bitcoin. So, fiat debt or bitcoin debt. Is debt the problem?
So if OP_RETURN bytes were bullets, and nodes were guns, I would have to limit/filter my magazines to 10 bullets. Because with larger-size mags, more people can die. Not sure which side of freedom I’m on.
Saw this post and had to make a print. Thanks for the look. https://www.printables.com/model/1419522-yellows-hippopowtamus
Call it.
Prepare. A lockdown is not off the table. https://x.com/rbreich/status/1967953311794958769?s=46&t=m2CAF7PUUoC-Jl6gyTr1eg
So I asked an LLM this question. Here’s a breakdown of the arguments for and against increasing the OP_RETURN (aka datacarrier) limit in Bitcoin Core 30, from the perspective of its developers, and what those arguments imply: the core devs’ rationale, the trade-offs, and the counterarguments.

⸻

What Core 30 is proposing / what change was merged

• Currently, there is an OP_RETURN (datacarrier) default limit of 80 bytes of arbitrary data in an output.
• In Bitcoin Core v30 (targeted for ~October 2025), the policy on OP_RETURN is being relaxed: the datacarrier limit is removed (i.e. much larger data is allowed in OP_RETURN outputs). The new limit becomes something near the maximum transaction size, or is constrained by block size/weight limits.
• One report says the new policy allows ~4 MB (the block weight limit or max block size) of OP_RETURN data in a transaction output.
• There are command-line parameters to enforce stricter limits (for node operators), but some of those are being deprecated or may become warning-only.

⸻

Arguments for increasing / removing the OP_RETURN limit (core devs’ side)

1. Cleaner standard behaviour / avoiding “workarounds”
• With the 80-byte limit, users who want to embed larger data are forced into non-OP_RETURN methods (e.g. putting data in outputs, script hacks, or creating outputs that remain “unspent”). Some of those are worse for the UTXO set or for node resource management. The idea is that by allowing larger OP_RETURNs, operators and users will prefer this prunable, clearly defined output type over hacky alternatives.
• Some devs argue the existing policy is arbitrary and mismatched: data embedding already happens via other channels, even though OP_RETURN is restricted. The restriction encourages “less clean” methods.

2. UTXO set / node resource preservation
• OP_RETURN outputs are provably unspendable, so they do not create UTXOs and don’t bloat the UTXO set. This is a big point: most data methods that misuse other kinds of outputs leave UTXOs behind, which must be stored forever by nodes, increasing state size. Allowing more data in OP_RETURN (which is prunable/unspendable) is seen as less harmful in that dimension.

3. Consistency with relay / miner practices
• Some miners already accept larger OP_RETURN or data-heavy transactions, or make side arrangements. So the current “standardness” policies may not reflect what’s actually happening in mining. Allowing larger OP_RETURNs aligns the default relay policy with miner behaviour and expected usage.

4. Censorship resistance & permissionlessness
• One of the values of Bitcoin, from the dev side, is that nodes should not pick and choose what “valid” transaction content they relay in a way that stifles legitimate use cases. Restricting data payloads can be seen as a soft form of censorship or gatekeeping. Expanding OP_RETURN allows more free expression (decentralized identity, proofs, anchoring, metadata, etc.) in a well-defined, standardized way.

5. Better defaults, node operator options
• While increasing the default, core devs note that node operators will still have parameter flags to enforce stricter limits if they choose. People who want a “minimal” node can still run with smaller limits; others can allow more. A more modular approach.

6. Prevent “worse” unintended consequences
• Because of the limit, people use worse alternatives that cost the network more state, are more expensive to relay, and are more fragile. Offering a clean, relayed, default-policy path for larger data may reduce centralization pressures (e.g. data embedding only via private miner APIs) and reduce complexity.

⸻

Potential risks / trade-offs core devs acknowledge (and critics’ arguments)

1. Block weight & bandwidth / storage
• Larger OP_RETURNs make transactions larger, increasing bandwidth and storage needs for nodes when syncing, relaying, and storing full history. Even if an OP_RETURN doesn’t create new UTXOs, the data must still be stored in blocks forever once mined. More data = larger blocks or fewer regular transactions per block, which could raise the cost of running a full node in disk, bandwidth, and CPU verification.

2. Relay spam / mempool abuse
• Bigger OP_RETURN outputs mean someone might flood the mempool with large, low-utility “junk” transactions, crowding out regular payments. That affects mempool policy and bandwidth, and could raise costs for “useful” transactions. Core devs believe fee markets will mitigate this: if you send large data, you pay proportionally more. But opponents doubt whether fee markets are a sufficient deterrent in practice.

3. Decentralization / node resource requirements
• If nodes must store and verify larger data payloads, the bar rises for hardware, bandwidth, and storage. Node operators with limited resources may be burdened, which could mean fewer full nodes and less decentralization. This concern is frequently raised.

4. “Garbage on chain” / non-financial use & ideological purity
• Some community members (purists) argue that Bitcoin’s purpose is financial (payments, store of value, settlement) and that embedding non-monetary data (images, documents, large arbitrary data) is “mission creep.” They worry the change encourages more “artifacts” or “junk,” and could shift incentives away from financial transactions.

5. Backward compatibility / standardness / policy fragmentation
• Even though this is not a consensus change (everyone can still enforce their own policies), there’s a risk of fragmentation: nodes opting into different relay policies and limits, causing inconsistencies in what gets relayed and accepted. That could make behavior unpredictable for wallets and developers.

6. User cost & miner incentives
• Larger data means larger transactions, and therefore higher fees, which might price out certain uses. Miners might prioritize large-data OP_RETURNs if fees are good, changing the transaction mix. There are questions about whether users will pay enough to make it practical.

⸻

Balancing / why core devs think the trade-offs are worth it

• They see the existing limit as pushing people towards “worse” patterns that are more harmful than a cleaner, larger OP_RETURN: outputs that remain unspent (increasing the UTXO set), or data embedded in scriptPubKeys or non-standard outputs. A cleaner standard behavior is more predictable and safer.
• They trust the fee market to disincentivize abuse: embedding megabytes of arbitrary data increases transaction size, so fees scale with it. If mempools get congested, miners will likely ignore low-fee, data-heavy transactions, reducing abuse.
• They argue for maximal flexibility for users, respecting Bitcoin’s permissionless, censorship-resistant ethos. Limiting data via policy is a kind of self-censorship; letting users put data in via OP_RETURN is more transparent and standardized.
• Node operators can still configure stricter limits. The default is more permissive, but it doesn’t force anyone to accept large payloads. It’s about making the default more aligned with reality.
• Allowing larger OP_RETURNs “on the books” means better tooling, better standards, and less reliance on special-case APIs or proprietary services to embed data. That could decentralize those use cases.

⸻

What “100k” vs “~4 MB / block weight” means

• Some reports state that the merged pull request sets a new default datacarrier limit around 100,000 bytes (~100 kB) for OP_RETURN outputs.
• Others report the limit is effectively removed, with data constrained only by overall transaction/block size limits (so maybe up to ~4 MB of weight, depending on how many OP_RETURN outputs one adds).
• The difference matters: 100 kB is large next to 80 bytes but modest next to megabytes. A full block (in weight or serialized size) caps the upper bound anyway, so OP_RETURN is bounded by block limits even if policy allows more.

⸻

Summary of the core devs’ justification

The 80-byte limit for OP_RETURN is an antiquated policy that causes more harm than good via workarounds, contributes (indirectly) to UTXO bloat, and misaligns node relay policy with what miners already accept. Removing or raising it improves standards and flexibility, and preserves cleaner on-chain data embedding with less collateral damage. By letting node operators enforce stricter limits locally, you get permissiveness by default plus configurability. The fee market (plus mempool eviction and relay policies) serves as the economic deterrent against abuse.

So what I get out of all this is: the devs want standardisation for relays. This all seems to be a housekeeping thing. At least now I get their reasons. So: put crap in a prunable place, or continue the hackathon.
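For the “node operators can still enforce stricter limits” point: the historical knobs are the `-datacarrier` and `-datacarriersize` options, which can also live in bitcoin.conf. A sketch of a pre-v30-style strict config (v30 reportedly deprecates or downgrades these to warning-only, so check your version’s release notes before relying on them):

```ini
# bitcoin.conf fragment (pre-v30 behaviour; illustrative)
datacarrier=1        # relay OP_RETURN outputs at all (0 = refuse)
datacarriersize=83   # max OP_RETURN scriptPubKey size (~80 payload bytes)
```

Note these are relay-policy settings only: they affect what your node forwards and keeps in its mempool, not what is consensus-valid in a mined block.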
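To make the “provably unspendable, 80-byte default” point concrete, here’s a minimal sketch of what an OP_RETURN scriptPubKey looks like on the wire and how the historical standardness check worked. This is illustrative only: the opcode values (0x6a for OP_RETURN, 0x4c/0x4d for OP_PUSHDATA1/2) are from Bitcoin Script, and the 83-byte figure is the commonly cited pre-v30 script-size default (1 opcode + push overhead + 80 payload bytes); real nodes use Bitcoin Core’s own serialization, not this helper.

```python
# Sketch: build an OP_RETURN scriptPubKey and check it against the
# historical ~80-byte datacarrier default. Illustrative only.

OP_RETURN = 0x6a
OP_PUSHDATA1 = 0x4c
OP_PUSHDATA2 = 0x4d
OLD_DEFAULT_SCRIPT_BYTES = 83  # OP_RETURN + push overhead + 80 data bytes

def op_return_script(payload: bytes) -> bytes:
    """Serialize OP_RETURN <payload> using minimal push encoding."""
    n = len(payload)
    if n <= 75:                       # direct push: opcode IS the length
        push = bytes([n])
    elif n <= 255:                    # OP_PUSHDATA1 <1-byte length>
        push = bytes([OP_PUSHDATA1, n])
    else:                             # OP_PUSHDATA2 <2-byte LE length>
        push = bytes([OP_PUSHDATA2]) + n.to_bytes(2, "little")
    return bytes([OP_RETURN]) + push + payload

def relayed_under_old_default(payload: bytes) -> bool:
    """Would this script pass the old 83-byte standardness check?"""
    return len(op_return_script(payload)) <= OLD_DEFAULT_SCRIPT_BYTES

print(relayed_under_old_default(b"x" * 80))   # 1 + 2 + 80 = 83 bytes -> True
print(relayed_under_old_default(b"x" * 100))  # 103 bytes -> False
```

The unspendability comes from OP_RETURN itself: any script starting with it fails immediately when executed, so nodes can drop the output from the UTXO set the moment it’s mined.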
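The “fee market as deterrent” argument above can be sketched with rough numbers. This is a toy model, not real vsize accounting: the 150-vbyte overhead is an assumed skeleton for one input plus a change output, and the push/length overheads are approximated, so treat the absolute values as illustrative; only the linear scaling is the point.

```python
# Sketch: how embedding cost scales with OP_RETURN payload size.
# Toy model: vsize ~= fixed overhead + output bytes. Real vsize
# depends on input/output types and the segwit witness discount.

TX_OVERHEAD_VBYTES = 150  # assumed: ~1 input + change output skeleton

def embed_fee_sats(payload_len: int, feerate_sat_per_vb: float) -> float:
    script_len = 1 + 3 + payload_len   # OP_RETURN + push overhead (approx)
    output_len = 8 + 1 + script_len    # 8-byte value + length prefix + script
    return (TX_OVERHEAD_VBYTES + output_len) * feerate_sat_per_vb

for size in (80, 10_000, 100_000):
    print(size, embed_fee_sats(size, 10.0))
```

At 10 sat/vB an 80-byte payload costs a couple of thousand sats, while ~100 kB costs on the order of a million sats: fees grow linearly with data, which is exactly the deterrent the devs are counting on and the critics doubt is strong enough.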
This might be old but I just saw it. https://x.com/lollubee/status/1851807437885706407
I’m using Knots because of Datum Gateway. Is there a way to use Core the same way? I have a feeling I’m being used through fear. I tend to go against that play.
I have this feeling that I’m being played. Like when I was young in the ’60s: a series of assassinations moving a society in a certain direction. Just a feeling.
How does bitcoin play into the dead internet theory?
I have come to a realization: I am a user. I do not consume.
This is the ’80s all over again: networks not able to talk to one another because of hardware and protocol issues. LoRa vs Bluetooth. Beta vs VHS. Brat Pack vs cyberpunks. Now that would be a good death match.
Fascinated with Technology