spacestr

m4d4m
Member since: 2025-12-05
m4d4m 2d

Yeah - this is the way; fcuk TG & Discord and the like.

m4d4m 2d

And it's definitely something new. Perhaps healthier as well, since it hasn't undergone the smoke finishing.

m4d4m 2d

This:

m4d4m 7d

Interesting to see someone stand up for him - so I watched her video and can recommend it. https://youtu.be/TX_iMpNyzIo

m4d4m 7d

Wiki begging me relentlessly for a donation. Sure, fine - but where is the Bitcoin option? (Direct, no intermediary payment processor, just a BTCPay server.)

m4d4m 7d

Hell no to any 'invite & earn'

m4d4m 10d

Tired of extensions that need to store my nsec directly on the machine, such as nox2-fox and the like, so I switched to Amber. But not every web client allows log-in with Nostr Connect. For such cases there is this extension: https://github.com/dsbaars/bunker46-extension

The extension is not signed/approved by Firefox, so to get it working:

1) In about:config, switch to False: xpinstall.signatures.required and extensions.langpacks.signatures.required
2) Then you can install the extension - manually locate the downloaded archive and confirm its installation
3) Then generate a new URI in Amber for Bunker46: click the '+' icon, then select the option Add and nesecbunker. Amber has some default relays pre-filled here.
4) Click on the installed Bunker46 and paste the string - it's long, so forward it via Signal, for example
5) Save and test the connection

Afaik it's the best solution: Amber still holds the key and Bunker46 does the job. The creator's best flex is that he doesn't have an npub tied to him on GH, so I can't even thank him here.
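If you're curious what the long string Amber generates actually contains: it should be a NIP-46 connection URI of the shape bunker://&lt;signer-pubkey&gt;?relay=...&secret=.... A minimal Python sketch of pulling it apart (the pubkey, relay, and secret values are made up for illustration):

```python
# Sketch: decompose a NIP-46 "bunker://" connection string.
# Illustrative only - the pubkey/relay/secret below are dummy values.
from urllib.parse import urlparse, parse_qs

def parse_bunker_uri(uri: str) -> dict:
    parsed = urlparse(uri)
    if parsed.scheme != "bunker":
        raise ValueError("not a bunker:// URI")
    q = parse_qs(parsed.query)
    return {
        "remote_signer_pubkey": parsed.netloc,  # hex pubkey of the signer (Amber)
        "relays": q.get("relay", []),           # relays the signer listens on
        "secret": q.get("secret", [None])[0],   # one-time connect secret
    }

uri = "bunker://" + "ab" * 32 + "?relay=wss://relay.example.com&secret=s3cr3t"
print(parse_bunker_uri(uri)["relays"])  # → ['wss://relay.example.com']
```

The client and the signer (Amber) never exchange the nsec itself - they only talk over the listed relays, which is why the string is safe to forward via Signal.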

m4d4m 11d

My perception of the current state (of finance/money/taxation) is that it's an evil one. Yet I don't think the rest of society cares - they have no problem enforcing such a system, and let's not assume it's because they are unaware of it; most are aware in some sense. If they really come into the space even more, BTC will inherit the bad and not gain much (except for price to the moon). Most in transition won't change their mindset. So I definitely prefer a smooth ride and no mass adoption at all. As they say: Bitcoin is for anyone, but not for everyone.

m4d4m 13d

Unfortunately only 16 GB :/ The chainstate is, I think, bigger - a few GB more. I'm running pruned now, so I don't know the real size. Best would be to load it all into RAM (prices still 4x - uhh).
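(Side note, hedged: on Bitcoin Core the two relevant bitcoin.conf knobs are prune, which caps disk use, and dbcache, which sets how much RAM the chainstate/UTXO cache gets. The values below are an illustrative sketch, not a recommendation.)

```
# bitcoin.conf sketch - illustrative values only
prune=550       # minimum pruning target, ~550 MB of recent blocks kept
dbcache=8000    # MiB of RAM for the UTXO (chainstate) cache
```

A dbcache big enough to hold the whole UTXO set is roughly the "load it all in RAM" scenario above.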

m4d4m 13d

Didn't know it's an addressable volume seen by the OS. I was thinking it's just automatically handled space, a cache, where the content is decided by some advanced driver working with statistics about which files are being accessed on the main drive.

m4d4m 13d

I don't want the old system to deteriorate, or even crash, and as a result shift a lot of power to the current structure of 'hodlers'. Same as I don't want someone with an older car to experience some fatal failure and be forced to jump into a Tesla-like solution - which is what the change may feel like mentally for some. A slower process is definitely better. (I'm not implying you think like this; just kinda hijacking the note with my 2 cents.)

m4d4m 13d

I hope people won't need the tool that much tbh

m4d4m 17d

How does Zapstore protect me as of now? Anyone can publish an app - Zapstore even published many GH APKs themselves instead of the original creators. Zapstore just shifts the trust one step, doesn't it? Unless the npub is one I follow myself - and optionally/ideally that follow was made face to face - I don't know if it can be trusted. The WoT reputation is better than nothing, yet it can't ultimately save the day. Or am I missing something? The same could happen here as well; it's just not yet worth targeting, I think.

m4d4m 17d

GM 🌘

m4d4m 24d

Afaik it won't help much, because if the model must be split between VRAM and anything else, it's slow. And there is the KV cache for the context adding to the load. Iirc, whenever inference isn't using weights held in VRAM it falls back to RAM, but that part is computed on the CPU - slow as hell (and also dependent on how fast the CPU<>GPU communication is). The Optane will be avoided, and if it isn't, the slowdown compared to RAM is even bigger. Small models still don't cut it on consumer HW - maybe for certain tasks & managing tool-call output, but they must be guided by a bigger (API-connected) model. (I have no real experience; I just hope that some day I can partially offload the token consumption to my existing HW. So far that's possible for simple tasks like generating a title for a conversation, but not much more.)
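Back-of-the-envelope arithmetic for the "does it fit in VRAM" question - the model shape here (7B-class with grouped-query attention, 4-bit weights, fp16 KV cache) is an illustrative assumption, not any specific model:

```python
# Back-of-the-envelope: will weights + KV cache fit in VRAM?
# All shapes are illustrative assumptions, not a specific model.

def model_memory_gib(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight footprint in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 seq_len: int, bytes_per_elem: int = 2) -> float:
    """KV cache grows linearly with context: 2 tensors (K and V)
    per layer, per token, fp16 by default."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem / 2**30

weights = model_memory_gib(7e9, 4)      # 7B params at 4-bit -> ~3.3 GiB
cache = kv_cache_gib(32, 8, 128, 8192)  # 32 layers, 8 KV heads, 8k context -> 1.0 GiB
print(f"weights ~{weights:.1f} GiB + KV cache ~{cache:.1f} GiB")
```

The point of the sketch: the KV cache term scales with seq_len, so a long context can eat the VRAM headroom that the quantized weights seemed to leave free - which is when the spill to RAM/CPU (or worse, Optane) starts.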

m4d4m 24d

Depending on whether there is a discount or not, but: $8-12 / 30 🥚 - about the same for part of central EU (CZ/SK/PL/HU), I think.

Welcome to m4d4m spacestr profile!

About Me

-(/*<>*/)- // whatever //
