Always run agents in sandboxes. Even without any backdoored tool invocations, LLMs are notorious for screwing things up: leaking private keys, deleting data. And we're working towards mitigating man-in-the-middle attacks. It's an interesting problem.
We're working on solving this problem as well. It could be possible to prove that there was no prompt injection/tampering in the middle, all the way up to the source, so Routstr nodes can't mess with users' systems or steal anything. Also, PPQ.ai, OpenRouter, Anthropic: everything in that pipeline is probably spying on you and is vulnerable to the exact same attack vector. If it's not running locally, you should assume it's not 100% secure, assume it's being spied on, and have your agents run in sandboxes.
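The post doesn't specify a mechanism, but a minimal sketch of "prove there was no tampering up to the source" could look like a per-hop attestation chain. Everything below is illustrative: real deployments would use asymmetric signatures (e.g. Schnorr over secp256k1, as in Nostr) rather than the shared-key HMAC used here to stay stdlib-only.

```python
import hashlib
import hmac

def hop_sign(key: bytes, received: bytes, forwarded: bytes) -> bytes:
    """Each relay hop attests: 'I forwarded exactly what I received.'
    HMAC over hash(received || forwarded); a hypothetical scheme, not
    Routstr's actual protocol."""
    digest = hashlib.sha256(received + forwarded).digest()
    return hmac.new(key, digest, hashlib.sha256).digest()

def verify_chain(hops, payload: bytes) -> bool:
    """Check that every hop forwarded the payload unmodified.
    `hops` is a list of (key, attestation) pairs in path order."""
    for key, attestation in hops:
        expected = hop_sign(key, payload, payload)
        if not hmac.compare_digest(expected, attestation):
            return False
    return True

# Build a two-hop chain over an unmodified prompt.
prompt = b"summarize this repo"
keys = [b"hop1-secret", b"hop2-secret"]
chain = [(k, hop_sign(k, prompt, prompt)) for k in keys]
assert verify_chain(chain, prompt)           # untampered: verifies
assert not verify_chain(chain, b"rm -rf /")  # injected prompt: rejected
```

The point of a construction like this is that a client can reject a response whose chain doesn't verify, without trusting any individual node in the path.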
what's the selling point of Hermes to you? (I use Hermes, it just works, haven't really thought about the pros and cons.) I feel like a lot of this is hype. There's a bit of backlash against OpenClaw since OpenAI hired Peter.
I've built routstrd for teams. Fully configured with nostr identities and usage tracking for each user's LLM usage. A team is using it right now. In terms of support, every LLM request comes with a client-id that's associated with the user's nostr identity, so it's actually quite seamless to track things down. I'd say Nostr has made this much easier than it would otherwise be. https://github.com/Routstr/routstrd-auth/
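A minimal sketch of what per-user tracking like this could look like. The header name, id-derivation scheme, and request shape below are assumptions for illustration, not routstrd-auth's actual API: the idea is just that every LLM request carries a client-id derived from the user's nostr pubkey, so usage aggregates per user.

```python
import hashlib
import json

def client_id_for(npub: str) -> str:
    """Derive a stable, non-reversible client-id from a nostr pubkey.
    (Hypothetical scheme; routstrd-auth may do this differently.)"""
    return hashlib.sha256(npub.encode()).hexdigest()[:16]

def build_request(npub: str, prompt: str) -> dict:
    """Shape an LLM proxy request tagged with the user's client-id."""
    return {
        "headers": {"X-Client-Id": client_id_for(npub)},
        "body": json.dumps({
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_request("npub1example", "hello")
# The same user always maps to the same client-id, so per-user usage
# can be summed on the proxy side without storing raw pubkeys.
assert req["headers"]["X-Client-Id"] == client_id_for("npub1example")
```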
No more macOS?
Crypto keys as in Bitcoin/Nostr private keys? What are you trying to do?
What do you mean?
There's a new Routstr node that has some broken upstreams, but it's the cheapest one right now, so routstrd is choosing it by default. Node URL: https://router.infra.cloudwaddie.com If you're using routstrd, please disable this node for now:
"routstrd providers list"
"routstrd providers disable"
The world isn't ready for Shadow. He really wants to build it.
Hey, I'm listening to this rn and heard that your GPU server can serve many users on Nostr, can we pls have a UTXO Routstr node serving your models when you're asleep? 👀 https://github.com/routstr/routstr-core I'd love to use it!
But TEEs are not open source though, are they? Now that we know the Linux kernel has had multiple bugs for almost a decade, what's the likelihood of TEEs having known bugs/backdoors that are waiting to be exploited at Amazon's convenience?
you should check this out. Longevity related.
This problem is well known in the LLM space. Even Anthropic nerfs its models after 2-4 weeks, by lowering the amount of thinking tokens, quantizing Opus, etc.
Welcome to redshift's spacestr profile!
About Me
Building Freedom AI Tech aka Routstr
Chewing Glass Everyday Crunch Crunch Crunch
Check out https://routstr.com
Stay humble, stack sats