spacestr

đź”” This profile hasn't been claimed yet. If this is your Nostr profile, you can claim it.

CurrencyofDistrust
Member since: 2024-03-11
CurrencyofDistrust 26m

Love that everyone’s time and attention are being spent building for bots and not actual humans. Make it make sense

CurrencyofDistrust 55m

“AI will make us so productive!!” The productivity:

CurrencyofDistrust 1h

This is getting so gay

CurrencyofDistrust 2h

No way, we’re bitcoiners, which means we’re independent thinkers! /s

CurrencyofDistrust 24d

This is so dang cool. Love that this is progressing.

CurrencyofDistrust 2h

I’m convinced half of nostr has AI psychosis

CurrencyofDistrust 2h

It’s getting old, no doubt

CurrencyofDistrust 2h

The best pleasure in life is found in the mundane.

CurrencyofDistrust 4h

Yes, this is exactly what nostr needed. Good work, everyone 🙄

CurrencyofDistrust 1d

We’re literally being ruled by demons and demon worshippers

CurrencyofDistrust 4h

https://www.theregister.com/2026/01/30/road_sign_hijack_ai/

CurrencyofDistrust 1d

It depends on what you mean by “validated completely.” There’s tons of software designed to solve security problems, where tons of scrutiny is applied and an extraordinary amount of effort is put forth to validate that there are no bugs, only for a zero-day to pop up later. There is no such thing as fully secured software. There just isn’t.

CurrencyofDistrust 4h

We all thought nostr devs were working on trying to create a new internet, but really they’re just looking for the next trend to jump on.

CurrencyofDistrust 1d

All of these depend on infrastructure that humans control.

CurrencyofDistrust 1d

The centralized AI companies can easily do so. Also, someone can just take Moltbook offline. I doubt many of them are using OSS models

CurrencyofDistrust 1d

Futile or not, the powers that be want to try. And obviously they could be stopped: just turn them off.

CurrencyofDistrust 1d

Sure, of course they are. That doesn’t mean they’re conscious. As the OP points out, they’re given instructions on what to do. It’s highly likely they were told to do this so an AI bro could go viral. And even if they weren’t told explicitly, if you understand what LLMs are, you’d know they’re just larping based on their training data. To add to that, when they see other agents saying “let’s do xyz,” it gets added to the other agents’ context windows, prompting them to behave accordingly.

CurrencyofDistrust 1d

It IS true. Vulnerabilities are found in “secure” software literally every day.

CurrencyofDistrust 1d

The overnight explosion in popularity is highly suspicious to me. Someone somewhere wants us to believe this was the key thing needed for AGI or something to enforce draconian controls on the internet.

CurrencyofDistrust 8h

Always had a feeling that dude was a creep

Welcome to CurrencyofDistrust's spacestr profile!

About Me

Christian | Husband | Father | Professional hacker | Lover of freedom tech

Interests

  • No interests listed.

Videos

Music

My store is coming soon!

Friends