Variety is spicy.
And that’s why we don’t know what’s the matter.
agitated chuckle initiated
Did Dostoevsky predict furries?
How “is [it] the same thing as spend and replace”? Do you mean theoretically, because you could have bought and spent BTC? That’s an awfully metaphysical way to support the network. While I do see the benefits of Strike, since it habituates users to alternative spending desire lines, I don’t see how it supports the hodl ethos beyond using BTC as rails to disintermediate Visa. Which is not nothing, but it’s not sovereign-individual behavior per se.
How is this not the most probable line of causation if you know who invented the internet, TOR, encryption, etc.?
I’m sure you have heard about estrogen surplus and breast cancer connections.
Leverage is an accelerant until it explodes or fizzles out. Value is always in the eye of the one writing the history.
Checking. No. It’s still there. Agitprepper.
And when you don’t?
We didn’t get lucky. We got justice. … unless it’s a Mossad and CIA thing, then we didn’t get justice.
Can’t help it
Don’t forget the hunting. They will need to be good hunters just in case we get out of hand.
He’s always been.
Negative connotation of beta: passive, soft.
Ride along
Jump on
Hop on in
Cling
Join me
From grok:

### Explaining the Concept of "Open Source" in AI Models

In the context of AI, particularly large language models (LLMs) or similar systems, the term "open source" is often used loosely, but it carries specific implications rooted in traditional open source software principles. Traditionally, open source software means the source code is publicly available, allowing anyone to view, modify, distribute, and reproduce it under a permissive license. For AI models, however, "open source" typically refers to releasing the model's weights (the learned parameters from training) and sometimes the inference code, but not always the full training pipeline, including the dataset.

The statement you're referencing—"if an 'open source' AI model doesn't include all the training data used and you can demonstrate that outputs are reproducible, then it's not actually open source"—highlights a key debate in the AI community about what truly qualifies as open source. Let's break it down:

- **Reproducibility in AI**: Outputs being "reproducible" likely means that, given the same inputs and model weights, you can consistently generate the same results (e.g., deterministic inference; a minimal sketch follows this section). This is possible with just the released weights and code for running the model. However, this is only partial reproducibility—it's about using the model as-is, not recreating it from scratch.

- **The Role of Training Data**: True reproducibility in AI would require the ability to retrain the model independently and achieve similar performance. This demands access to:
  - The exact training dataset (or a way to recreate it).
  - The training code, hyperparameters, and environment details.

  Without the full training data, you can't verify how the model was built, audit for biases, or modify the training process. Many datasets are proprietary, scraped from the web, or subject to legal restrictions (e.g., copyright issues), so they're often withheld. Critics argue that releasing only weights makes the model "open weights" or "source-available," but not fully open source, because the core "source" (the data that shaped it) remains closed. This limits transparency, innovation, and trust—users can't inspect or replicate the model's creation.

- **Why This Matters for "Open Source" Claims**: If a model claims to be open source but omits the training data, it's arguably not adhering to the spirit of open source, even if outputs are reproducible via the weights. For example:
  - You could fine-tune or deploy the model, but you couldn't reproduce its base training without guessing or approximating the data.
  - This creates an uneven playing field: the original creators (often big tech companies) retain control over the foundational knowledge, while users get a black-box artifact.

In essence, the statement posits that without full disclosure (including data), the model isn't truly open—it's more like a free-to-use product than a collaborative, reproducible project. This view is echoed in discussions from organizations like the Open Source Initiative (OSI), which is working on defining "open source AI" to include criteria for data and training reproducibility. Models like EleutherAI's GPT-J or BLOOM are closer to this ideal because they aimed for more transparency in data sources, whereas popular "open" models from companies (e.g., Meta's Llama series) often release weights but not datasets, leading to this criticism.
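To make the "reproducible outputs, closed training" distinction concrete, here is a minimal sketch assuming the Hugging Face `transformers` library and an arbitrary open-weights checkpoint (`EleutherAI/gpt-neo-125m` is used purely as a small illustrative example). Greedy decoding over fixed weights is deterministic, yet nothing in this script lets you retrain the model:

```python
# A minimal sketch: deterministic inference from released weights alone.
# Assumes `torch` and `transformers` are installed; the checkpoint id is
# illustrative, not an endorsement of any particular model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-neo-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Open source means", return_tensors="pt")

# do_sample=False selects greedy decoding: the same weights and the same
# prompt yield the same tokens on every run (up to rare hardware-level
# floating-point nondeterminism).
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20, do_sample=False)

print(tokenizer.decode(out[0], skip_special_tokens=True))
```

This is exactly the "open weights" level of access the passage describes: you can run and even fine-tune the artifact, but the training data and pipeline that produced it stay out of reach.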
### Reflections on Why China Is Releasing So Many "Open Source" Models

China has indeed been prolific in releasing AI models under open-source-like licenses, with companies like Alibaba (e.g., Qwen), DeepSeek, Huawei, and others contributing dozens of models to platforms like Hugging Face. Based on recent analyses, this isn't just altruism—it's a strategic push driven by multiple factors:

- **Narrowing the Tech Gap with the West**: Chinese firms see open models as a fast-track to catch up to U.S. leaders like OpenAI or Google. By open-sourcing weights, they can crowdsource improvements, attract global developers, and iterate quickly without reinventing everything in isolation. This helps bridge the divide in AI capabilities, especially amid U.S. export controls on advanced chips.

- **Boosting Adoption and Innovation Domestically**: Open models encourage widespread use within China, fostering an ecosystem of apps, tools, and startups. This aligns with government priorities for AI self-reliance, including developing homegrown chips, frameworks, and models to reduce dependence on foreign tech. It's also a way to promote Chinese hardware, like AI chips, by tying models to them for optimization.

- **Geopolitical and Competitive Strategy**: In the global AI race, open-sourcing positions China as a collaborative player, countering narratives of isolation. It allows influence over international standards, data flows, and developer mindshare, potentially reshaping AI geopolitics. Government documents increasingly support open-weight models, integrating them into national AI plans. This mirrors Meta's strategy: compete in the open arena to build a moat against closed rivals.

- **Dominance in Specialized and Global Rankings**: Chinese models now top many open-source benchmarks, handling tasks like multilingual processing or sensitive data that closed models might avoid. This builds prestige and encourages adoption worldwide, even if full training data is rarely included—aligning with the critique in your statement.

Overall, while these releases drive progress, they often fall into the "open weights" category rather than fully open source, raising questions about transparency. It's a pragmatic approach: gain ecosystem advantages without exposing proprietary data edges. If this trend continues, it could democratize AI further, but it also intensifies debates on what "open" really means.
God is an existentialist.
Drop that y, mister. Victim mindset will continue until the grievance radar turns into an opportunity detector.
Idea traveler. No master. No slave. Only pain-point memes.