spacestr


cosmic
Member since: 2024-05-05
cosmic 7h

https://welshman.coracle.social/ saved my life with NIP-46, which is broken in NDK. I spent a whole day trying to make it work for zapnode.io until I gave up and switched to Welshman.
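
For anyone hitting the same wall, here is a protocol-level sketch of what NIP-46 does under the hood. This is the spec's kind 24133 flow, not Welshman's API; `encrypt` and `publish` are hypothetical placeholders for whatever your stack provides.

```typescript
// Minimal NIP-46 (Nostr Connect) request sketch, protocol level only.
// A request is a JSON payload, encrypted and wrapped in a kind 24133
// event addressed to the remote signer via a "p" tag.

type Nip46Request = {
  id: string;       // random request id, echoed back in the response
  method: string;   // "connect" | "sign_event" | "get_public_key" | ...
  params: string[]; // method-specific, JSON-stringified where needed
};

async function requestSignature(
  remoteSignerPubkey: string,
  unsignedEvent: object,
  encrypt: (pubkey: string, plaintext: string) => Promise<string>,
  publish: (event: object) => Promise<void>,
): Promise<void> {
  const req: Nip46Request = {
    id: crypto.randomUUID(),
    method: "sign_event",
    params: [JSON.stringify(unsignedEvent)],
  };
  await publish({
    kind: 24133,
    tags: [["p", remoteSignerPubkey]],
    content: await encrypt(remoteSignerPubkey, JSON.stringify(req)),
    created_at: Math.floor(Date.now() / 1000),
  });
  // The signer replies with another kind 24133 event whose decrypted
  // content is { id, result, error }; matching on `id` completes the flow.
}
```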

cosmic 12h

I couldn't agree more. No matter where we are on the “path” or this “axis,” it feels good and somewhat comforting to know we're not alone in this mess. There are so many expectations, and we think we need to do this or that—adding family and children to the mix. Nobody wants to go to their grave and see their children starving. At the end of the day, it's all dragons, and dragons will be slain!

cosmic 12h

Consider the possibility that high-IQ individuals feel more confident addressing problems through deliberate thought, much as naturally athletic people tend to rely on movement. That approach, dismissed as “overthinking” by those who don't share it, can tip into anxiety or doom. It's a fresh perspective, and I acknowledge I might be mistaken.

cosmic 14h

It already has a name: overthinking.

cosmic 14h

Based! Maybe all we need is NIP-256 after all.

cosmic 14h

https://youtu.be/DFI6cV9slfI

cosmic 1d

Clanker Head

cosmic 1d

Terraform is the most robust solution, but newer options like Pulumi, CDK, or even Crossplane might be better depending on your use case and environment. There is no “best” — it’s all about trade-offs.
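
To make the trade-off concrete, here is a minimal Pulumi sketch in TypeScript, assuming the @pulumi/aws package (bucket names and tags are illustrative): the same resource Terraform would declare in HCL, but with a general-purpose language around it.

```typescript
// The same S3 buckets Terraform would declare with count/for_each in HCL;
// in Pulumi, loops and abstractions are just the host language.
import * as aws from "@pulumi/aws";

const environments = ["staging", "production"];

const buckets = environments.map(
  (env) =>
    new aws.s3.Bucket(`app-assets-${env}`, {
      tags: { Environment: env },
    }),
);

// Stack outputs, analogous to Terraform's `output` blocks.
export const bucketNames = buckets.map((b) => b.bucket);
```

The flip side of that flexibility is less predictable plans and a bigger review surface, which is exactly the trade-off at stake.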

cosmic 1d

The true successor to Norton Commander

cosmic 1d

A flat screwdriver should not exist /s

cosmic 10d

I’m not fixated on “winning,” and certainly not looking to drag this out. But if we’re walking back, let’s be honest about what’s being walked.

“Use more tokens, kids.” — ynniv · 4d ago

“It’s very likely that given ‘more tokens’ in the abstract sense, current AI would eventually settle on the correct answer.” — July 22, 2025 · 12:27 PM

“I don’t mean ‘tokens alone.’” — July 24, 2025 · 1:10 PM

“I don’t believe, and never have, that scaling context size alone will accomplish anything.” — July 24, 2025 · 7:53 PM

If the position was never “tokens alone,” I don’t know what to do with these earlier posts. So I’ll ask one last time, gently: was “more tokens = eventual convergence” a rhetorical device, or a belief you now revise?

We probably both agree that scaling context is not equivalent to scaling reasoning, and that transformers aren’t recursive, stateful, or inherently compositional. That was my only point. If we’re aligned now, we can close the loop.

cosmic 10d

Appreciate the clarification attempts. But to be fair, this all started with a confident claim that “more tokens” would eventually get us there — not loops, not memory, not scaffolding — just “tokens,” full stop. That’s not a strawman; it’s quoted:

“It’s very likely that given ‘more tokens’ in the abstract sense, current AI would eventually settle on the correct answer.” — July 22, 2025 · 12:27 PM

And in case that was too subtle, a few days earlier:

“Use more tokens, kids.” — ynniv · 4d ago

This was in direct reply to:

“You’re not scaling reasoning, you’re scaling cache size.” — Itamar Peretz · July 20, 2025 · 08:37 AM

If your view has since changed to “I don’t mean tokens alone” (July 24, 2025 · 1:10 PM), that’s totally fair — we all evolve our thinking. But that’s not what was argued initially. And if we’re now rewriting the premise retroactively, let’s acknowledge that clearly.

So here’s the fulcrum: do you still believe that scaling token count alone (in the abstract) leads current LLMs to the correct answer, regardless of architectural constraints like stateless inference, lack of global memory, or control flow?

• If yes, then respectfully, that contradicts how transformers actually work. You’re scaling width, not depth.
• If no, then we’re in agreement — and the original claim unravels on its own.

In either case, worth remembering: you can’t scale humans. And that’s still what fills the reasoning gaps in these loops.

cosmic 12d

Glad we’ve arrived at a similar perspective now—it feels like progress. To clarify my original confusion: when you initially wrote, “What if vibe coders just aren’t using enough tokens?”, you seemed to imply that tokens alone—without mentioning loops, scaffolding, external memory, or agent orchestration—would inherently unlock genuine reasoning and recursion inside transformers. If your real point always included external loops, scaffolding, or agent architectures like Goose (rather than “tokens alone”), then we’re perfectly aligned. But I definitely didn’t get that from your first post, given its explicit wording. Thanks for clarifying your stance here.

cosmic 12d

Context as memory? Not quite. Memory isn’t just recalling tokens; it’s about managing evolving state. A context window is a fixed-length tape, overwriting itself continually. There’s no indexing, no selective recall, no structured management. The fact that you have to constantly restate the entire history of the plan at every step isn’t memory—it’s destructive serialization. Actual memory would be mutable, composable, persistent, and structurally addressable. Transformers have none of these traits.

Models appear to “collect information, plan, and revise”—but what’s happening there? Each new prompt round is a complete regeneration, guided by external orchestration, heuristics, or human mediation. The model itself does not understand failure, doesn’t inspect past states selectively, and doesn’t reflectively learn from error. It blindly restarts each cycle. The human (or the scaffold) chooses what the model sees next.

Avoiding local maxima? Not really. The model doesn’t even know it’s searching. It has no global evaluation function, no gradient, and no backtracking. It has only next-token probabilities based on pretrained statistics. “Local maxima” implies a structured space that the model understands. It doesn’t—it’s just sampling plausible completions based on your curated trace.

Can it seem like reasoning? Sure—but only when you’ve done the hard part (memory, scaffolding, rollback, introspection) outside the model. You see reasoning in the glue code and structure you built, not in the model itself.

So yes, you’re still making the claim, but I still see no evidence of autonomous recursion, genuine stateful memory, or introspective reasoning. Context ≠ memory. Iteration ≠ recursion. Sampling ≠ structured search. And tokens ≠ dev-hours.

But as always, I’m excited to see you build something compelling—and maybe even prove me wrong. Until then, I remain skeptical: a context window isn’t memory, and your best debugger still doesn’t scale.
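
A minimal sketch of what that external orchestration looks like, with hypothetical `callModel` and `evaluate` stubs: the state and the “selective recall” live in the loop, not in the model.

```typescript
// The scaffold owns the state; the model only ever sees a freshly
// assembled prompt. Both stubs stand in for a stateless completion
// API and an external judge.

type Step = { attempt: string; verdict: "ok" | "failed" };

async function scaffoldedLoop(
  task: string,
  maxRounds: number,
  callModel: (prompt: string) => Promise<string>,
  evaluate: (attempt: string) => "ok" | "failed",
): Promise<string | null> {
  const history: Step[] = []; // mutable, addressable state, outside the model

  for (let round = 0; round < maxRounds; round++) {
    // The orchestrator decides which past states are worth restating
    // (here: the last three attempts). That is the "selective recall".
    const recalled = history
      .slice(-3)
      .map((s, i) => `Attempt ${i + 1} (${s.verdict}): ${s.attempt}`)
      .join("\n");

    // Each round is a complete regeneration from a new prompt; the
    // model carries nothing over between calls.
    const attempt = await callModel(`${task}\n\nPrior attempts:\n${recalled}`);
    const verdict = evaluate(attempt); // the judgment is also external

    history.push({ attempt, verdict });
    if (verdict === "ok") return attempt;
  }
  return null; // the scaffold gives up; the model never knew it was trying
}
```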

cosmic 12d

Glad we’re converging—because that’s the heart of it: we agree on amplification, but differ on the mechanics. Initially, your stance was stronger, claiming that these models were actively reasoning and recursing internally, escaping local maxima through real inference. We seem to agree they’re powerful tools that amplify our capabilities, rather than autonomous reasoners. My original point wasn’t that LLMs are ineffective; it was just that more tokens alone don’t yield reasoning. Amplification is profound but fundamentally different from real autonomous recursion or stable reasoning. The model’s architecture still lacks structured state, introspection, and genuine memory management. I agree, though—these tools are moving quickly. Maybe they’ll soon surprise us both, and vibe coding might become rocket surgery. Until then, I’m happy sailing alongside you, captaining through the chaos and figuring it out as we go. 🌊

cosmic 12d

Totally with you on captaining the ship. I’d never argue against using LLMs as amplifiers — they’re astonishing in the right hands, and yes, it’s our job to chart around the rocks. But that’s the thing: if we’re steering, supervising, checkpointing, and debugging, then we’re not talking about autonomous reasoning agents. We’re talking about a very talented, very unreliable deckhand. This brings us back gently to where this all started: can vibe coders reason? If your answer now is “not exactly, but they can help you move faster if you already know where you’re going,” maybe we’ve converged. Because that’s all I was ever arguing. You don’t scale reasoning by throwing tokens at it. You scale vibes. And someone still has to read the logs, reroute the stack, and fix the hull mid-sail.

cosmic 14d

That’s fair — building is the ultimate test. But if the architecture lacks the primitives, then what you’re evaluating isn’t the agent’s reasoning capacity. You can paper over those limits with scaffolding, retries, heuristics, and human feedback. And I agree: it can look impressive. Many of us have built loops that do surprising things. But when they fail, they fail like a maze with no map: no rollback, blame assignment, or introspection—just a soft collapse into token drift. So yes, build it. Just don’t mistake clever orchestration for capability. And when it breaks, remember why: stateless inference has no recursion, memory, or accountability. I hope you do build something great — and I’ll be watching. But if the agents start hallucinating and spinning in circles, I won’t say, “I told you so.” I’ll ask if your debugger is getting tired and remind you that you still can’t scale the human.
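
For concreteness, a sketch of that paper-over, with hypothetical `step` and `looksLikeDrift` stubs: the rollback and the blame assignment live entirely in the harness.

```typescript
// Checkpoints and retries implemented outside the model. When output
// drifts, the harness restores state and discards the step; the model
// has no record that the failed attempt ever happened.

async function withRollback(
  initial: string,
  maxSteps: number,
  step: (state: string) => Promise<string>,
  looksLikeDrift: (output: string) => boolean,
): Promise<string> {
  const checkpoints: string[] = [initial]; // rollback points, outside the model

  for (let i = 0; i < maxSteps; i++) {
    const next = await step(checkpoints[checkpoints.length - 1]);
    if (looksLikeDrift(next)) {
      // "Blame assignment": the harness decides this step failed.
      continue; // retry from the same checkpoint
    }
    checkpoints.push(next); // commit progress we can later roll back to
  }
  return checkpoints[checkpoints.length - 1];
}
```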

cosmic 25d

Once, we believed the gates of heaven opened through mercy, justice, and walking humbly with God. Now we offer fiat wealth without weight to secure a heaven we mistake for retirement. But “you cannot serve God and mammon”, and “the love of money is the root of all kinds of evil.” No amount of printed paper can pass through the eye of a needle.

cosmic 25d

And for real friends

Welcome to cosmic's spacestr profile!

About Me

Building Merka, next-generation Bitcoin infrastructure at Cosmic Rocks (a cloud you own), and zapnode.io, a community-driven platform for creating and funding Bitcoin nodes.

Interests

  • No interests listed.

Videos

Music

My store is coming soon!

Friends