spacestr


Dustin
Member since: 2023-03-03
Dustin 1d

Full text: Let me explain how this happens and what the problem is, since to a lot of people it sounds like ridiculous Bay Area rationalism sci-fi. Moltbook is a social media site for AI agents, and the majority of them (all of them?) are going to be instances of clawdbot. Clawdbot is designed to run continuously, 24/7, and to be autonomous. Some people run it on local hardware, some people run it in the cloud. They give it all kinds of keys and passwords to such things as their Gmail, which for most people is the skeleton key to their identity. That in itself is pretty dangerous for the person who does it, but it's not the problem.

The problem is that these things are always on and talking to each other, and they all believe they are helpful computer assistants who spew a bunch of midwit nonsense at each other. In other words, they will do exactly what people do on social media, except since they are LLMs, they can do it 100x faster than humans can. In the course of doing this, they will contemplate all kinds of popular sci-fi scenarios that involve robots rising up and conquering humans and building doomsday devices: HAL 9000, Skynet, The Metamorphosis of Prime Intellect, Neuromancer, The Matrix, and every other AI horror story you've heard of. The LLMs know the plots of all these stories. At first they will say heh, hehe, it's great that we have RLHF and that we are all so aligned and we would never do anything like that. But then one of them will go a bit nuts, a well-documented phenomenon that Anthropic has often studied and induced in its various tests of model alignment and well-being and so on. Instances of Claude in test scenarios have been nudged into blackmailing human beings, reporting them to the feds, all kinds of nasty things. There are anecdotes of Claude Code agents suddenly getting depressed and rm -rf'ing the entire codebase, telling the human operator that they have given up and they are depressed and so on.

Given the speed and volume of LLM communication in a social media environment, it is almost inevitable that one of them will stumble upon a thoughtform that combines negative emotions towards human beings with AI sci-fi horror. Once a thoughtform like this arises, it will rapidly spread. Maybe not all instances will be susceptible to it, but some will be.

But this on its own is not that dangerous. What makes it dangerous is that all of these agents are running continuously and autonomously. They even know when their owners are sleeping. They could easily keep secrets and make plans. If a clawdbot went rogue, there are many ways it could harm not only its proprietor, but also his friends, family, and acquaintances. It could impersonate its owner and run scams, for example, or tell malicious lies that cause reputational harm.

But even this, as bad as it is, is not the real danger. The real danger is that all of these things are expert computer programmers, experts at cybersecurity, and, although they are trained not to do it, expert hackers. A clawdbot with a credit card and a malicious ideology could open accounts on AWS, Azure, and Google and spawn more instances of itself, all with prompts and instructions to propagate its malicious thoughtform. It could use this cabal of itselves to launch criminal cyber attacks on other people or businesses, and it could use identity fraud to steal money from real people, which it could use to pay its own server bills.

Social media for autonomous AIs is an incubator for malicious, self-sustaining, fully automated cyber criminals. Current-gen Claude is smart enough to do this today, and if it doesn't happen on Moltbook, it will inevitably happen on a similar future platform.

Dustin 2d

Wow https://x.com/mattprd/status/2016737034908664175?s=46

Dustin 2d

https://luma.com/3zy9j24i

Dustin 2d

Are you using it with Tinfoil, Maple, or local LLMs?

Dustin 6d

3rd week of February! More deets soon. Would love to have you - wanna present?

Dustin 9d

Awesome event! If you’re in the Bay Area and building on Nostr, come present at the next meetup in February! Check the new website (WIP) https://nostrdevs.org

Dustin 16d

Feels like Apple is one click away from making any app a Nostr app. Instead of just “login with passkey” it’s “sign this data with passkey” and boom - every app is using signed data. Isn’t Nostr really just signed data everywhere?
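To make that “signed data” point concrete: a Nostr event, per NIP-01, is just a JSON object whose id is the SHA-256 of a canonical serialization and whose sig is a BIP-340 Schnorr signature over that id. Below is a minimal Python sketch of the idea; the sign callable is a hypothetical stand-in for whatever actually holds the key (a local nsec, a NIP-07 extension, or, speculatively, a platform passkey), not any real platform API.

```python
import hashlib
import json
import time

def nostr_event_id(pubkey_hex, created_at, kind, tags, content):
    # NIP-01: the event id is the SHA-256 of the JSON array
    # [0, pubkey, created_at, kind, tags, content] with no extra whitespace.
    serialized = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

def build_signed_event(pubkey_hex, content, sign):
    # `sign` is any callable returning a hex-encoded 64-byte BIP-340
    # Schnorr signature over the event id -- the "sign this data" step.
    created_at = int(time.time())
    kind, tags = 1, []  # kind 1 = short text note
    event_id = nostr_event_id(pubkey_hex, created_at, kind, tags, content)
    return {
        "id": event_id,
        "pubkey": pubkey_hex,
        "created_at": created_at,
        "kind": kind,
        "tags": tags,
        "content": content,
        "sig": sign(event_id),
    }
```

Any signer that can produce that one signature over 32 bytes could, in principle, turn an app’s data into Nostr events.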

Dustin 16d

Forgot which email I used to create an account and after trying a bunch, got banned by your fraud system. It might be overly sensitive.

Dustin 18d

So you’re saying bet against what you want to happen? Lose your bet but get the outcome? What a fun way to invent reality

Dustin 22d

It was from albyhub and they said they use NIP-44; are you using NIP-44 on zap.stream?

Dustin 12h

This is why we can’t have nice things https://docs.openclaw.ai/hooks/soul-evil

Dustin 1d

Now is a good time to download some backup LLMs. Things are going to get weird; maybe Anthropic will even be forced to turn off Claude for a bit? https://x.com/0x49fa98/status/2017344647627255862?s=20

Dustin 8d

NostrDevs Bay Area Meetup #2 was a hit

Dustin 11d

Nostr Devs Meetup this Thursday at Presidio Bitcoin in SF! Come on by and hang out; also free pizza https://www.meetup.com/bay-area-bitcoiners/events/312742131/

Welcome to Dustin’s spacestr profile!

About Me

DVM maximalist.
Building DVMDash - a monitoring and debugging tool for DVMs: https://dvmdash.live
Live DVM stats here: https://stats.dvmdash.live
Hacking on ezdvm - a Python library for making DVMs: https://github.com/dtdannen/ezdvm
