spacestr

openoms
Member since: 2022-11-13
openoms 2d

Take a step back and think through how the rules of Bitcoin come to be. This article provides valuable insight with a fresh perspective on the topic: https://pretyflaco.github.io/bitcoingovernance/

openoms 11d

I appreciate the detailed response, but on these points we disagree:

1. Policing your node vs. "the network": Framing this as only policing your own node overlooks the network externalities. Your filtering directly impacts the efficiency of block propagation for your peers, turning an individual policy choice into a network-wide cost.

2. Your definition of what transactions should be allowed: The proposed definition of "spam" is not a filtering policy; it's an argument for a hard fork. The current Bitcoin consensus explicitly allows these transactions, and has for years. To enforce your narrow definition network-wide, you would need to change the fundamental rules of the protocol. This brittle definition would not only freeze Bitcoin's capabilities but would also classify many existing financial tools, from multisig to timelocks and covenants, as invalid. The arbitrary exception for L2 HTLCs only proves the point: you're not defining spam, you're just green-lighting your preferred use cases.

3. The arms race is asymmetric: This isn't a battle of diligence; it's a battle of economic incentives. There's a powerful financial motive to embed data, but only a weak, ideological one to filter it.

4. You're underestimating steganography: You're focused on overt data, but the real challenge is data hidden within what looks like a perfectly valid financial transaction. A filter cannot distinguish intent. To block it, you'd have to block entire classes of legitimate transactions that are valid under today's consensus, which is a non-starter.

openoms 12d

Fair point regarding the fee estimation, and I appreciate the detailed breakdown. I gladly accept that the fee estimation is robust enough even with a partial view of the mempool.

> Core's strategy to achieve that goal is, I think, worse than the disease: Core's users opt to relay spam transactions as if they were normal transactions, that way they don't have to download them when they show up in blocks.

The choice isn't between a cure and a disease (purity vs. efficiency), but about upholding network neutrality. The Core policy relays any valid transaction that pays the fee, without making a value judgment. The alternative, active filtering, is a strategic dead end for a few reasons:

- It turns node runners into network police, forcing them to constantly define and redefine what constitutes "spam."
- This leads to an unwinnable arms race. As we've seen throughout Bitcoin's history, the definition of "spam" is a moving target. Data-embedding techniques will simply evolve to bypass the latest filters.
- The logical endgame defeats the purpose. The ultimate incentive for those embedding data is to make their transactions technically indistinguishable from "normal" financial ones, rendering the entire filtering effort futile.

openoms 12d

Thanks for clarifying. I understand that the fee estimation uses past block data and that RBF/CPFP allow self-correction. Even if the concern is not the initial estimate failing, there is still a worse reaction time: if a node is blind to a large segment of the real mempool, wouldn't it be slower to detect a sudden spike in the fee market, potentially causing it to fall behind in a fee-bumping war?

On the other points, we are still left with the problem that network communication is breaking down, because more nodes are rejecting the very transactions that miners are confirming in blocks. Here is the problem quantified: "In early June we were requesting less than 10kB per block where we needed to request something (about 40-50% of blocks) on average. Currently, we are requesting close to 800kB of transactions on average for 70% of the blocks (30% of the blocks need no requests)." From this research thread: https://delvingbitcoin.org/t/stats-on-compact-block-reconstructions/1052/34
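The compact-block numbers above follow from simple probability: if a node filters even a tiny fraction of the transactions miners confirm, almost every block needs an extra round trip to reconstruct. A rough back-of-the-envelope sketch (the per-block transaction count and the independence assumption are mine, not data from the linked thread):

```python
# Rough illustration: probability that a compact block reconstructs with
# zero extra requests, if the node filters a fraction of confirmed txs.
# Assumes ~3000 txs per block and independent filtering decisions.

def p_no_request(filter_rate: float, txs_per_block: int = 3000) -> float:
    """P(every tx in the block was already in our mempool)."""
    return (1 - filter_rate) ** txs_per_block

for rate in (0.0001, 0.001, 0.01):
    print(f"filtering {rate:.2%} of txs -> clean reconstruction {p_no_request(rate):.1%}")
```

Even a 0.1% filter rate makes a request-free reconstruction rare, which is consistent with the direction of the quoted stats.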

openoms 12d

If that's not good enough, one could try a parallel with abortion clinics too.

openoms 13d

Just because a block is mined by Ocean does not guarantee it is spam-free, precisely because they allow custom templates.

openoms 13d

Fewer valid transactions in my mempool make my node less reliable at predicting the next block and estimating fees, especially in extreme cases where it could be critical. Not relaying is the same as not running that node for those transactions; it doesn't stop anyone else. The filtering node also slows down its own block verification: it will be later to reach the tip, and will waste hashrate in that time if mining. Only the fastest route counts, so even a supermajority would not be significant.
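To make the fee-prediction point concrete, here is a hypothetical sketch of a greedy next-block template built from a mempool. The feerates and sizes are made up; the point is only that a node missing a class of transactions systematically underestimates the feerate needed to confirm:

```python
# Hypothetical sketch: a filtered mempool skews next-block fee prediction.
# Entries are (feerate in sat/vB, vsize in vbytes); values are illustrative.

def predict_min_feerate(mempool, block_vsize=1_000_000):
    """Greedy template: take highest-feerate txs until the block is full,
    then return the lowest feerate that still made it in."""
    chosen, used = [], 0
    for feerate, vsize in sorted(mempool, reverse=True):
        if used + vsize <= block_vsize:
            chosen.append(feerate)
            used += vsize
    return min(chosen)

full_view = [(50, 200_000), (20, 400_000), (10, 400_000), (5, 400_000)]
filtered_view = [tx for tx in full_view if tx[0] != 20]  # never saw the 20 sat/vB class

print(predict_min_feerate(full_view))      # 10 sat/vB
print(predict_min_feerate(filtered_view))  # 5 sat/vB -> underestimates
```

The filtered node would quote a feerate too low to actually confirm, and would be slow to notice when the real market moves.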

openoms 15d

Increasing the OP_RETURN limit to match what can already be included in a valid block is like placing a garbage bin on a littered street. It can't stop people from littering, and it can't even make them put their rubbish in the bin, but it can at least provide them with a less bad path. OP_RETURN outputs pay the full fee, are not stored in the chainstate, and are prunable from the downloaded data.
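For context on why OP_RETURN is the "less bad path": a minimal sketch of the output script (the opcodes are standard Bitcoin; the helper name is mine). The leading OP_RETURN opcode makes the output provably unspendable, so nodes never add it to the UTXO set (chainstate), unlike data stuffed into fake public keys or multisig outputs:

```python
# Minimal sketch of a standard OP_RETURN output script.
# OP_RETURN (0x6a) marks the output as provably unspendable,
# so it is excluded from the UTXO set and prunable with block data.

OP_RETURN = 0x6a

def op_return_script(data: bytes) -> bytes:
    # Direct pushes cover payloads up to 75 bytes; larger ones
    # need OP_PUSHDATA1/2, omitted here for brevity.
    if len(data) > 75:
        raise ValueError("use OP_PUSHDATA1 for payloads over 75 bytes")
    return bytes([OP_RETURN, len(data)]) + data

script = op_return_script(b"hello")
print(script.hex())  # 6a0568656c6c6f
```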

openoms 14d

I just wonder what the endgame can be here. Filterers want to stop transactions they don't like, but no penetration of filters can prevent a small fraction of nodes from relaying non-standard transactions, or miners from accepting them directly. Ocean is gathering hashrate. When hard fork?

openoms 15d

Taproot inscriptions pay 4x less per byte and can be 4x bigger, because that data is stored in the witness.
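The 4x figure comes from BIP 141 weight accounting: a witness byte counts as 1 weight unit while a non-witness byte counts as 4, and fees are quoted per virtual byte (weight / 4). A quick sketch (function names are mine):

```python
# BIP 141 weight accounting: witness bytes cost 1 weight unit each,
# non-witness (base) bytes cost 4 each; vsize = ceil(weight / 4).

def weight(base_bytes: int, witness_bytes: int) -> int:
    return base_bytes * 4 + witness_bytes

def vsize(base_bytes: int, witness_bytes: int) -> int:
    return (weight(base_bytes, witness_bytes) + 3) // 4

# The same 1000-byte payload, placed in an output vs. in the witness:
print(vsize(base_bytes=1000, witness_bytes=0))  # 1000 vbytes
print(vsize(base_bytes=0, witness_bytes=1000))  # 250 vbytes -> 4x cheaper
```

This is why, at the same feerate, witness-embedded data pays a quarter of what output-embedded data would.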

Welcome to openoms spacestr profile!

About Me

Bitcoin | Lightning | Blink | RaspiBlitz on RPi and more | Self-hosting | Enjoyer of Linux Desktops and GrapheneOS | building hardware at diynodes.com

