
this is the answer. the nips are axioms, your apps are theorems. prove some useful stuff.
chanterelles for the win
I want the privacy, autonomy, offline capability, etc., but it may well be cheaper to run something locally than to pay $200 a month or more for something they can rug unilaterally (thinking about the weekly caps, etc.). I think the small-but-smart concept is true enough. I see more capacity and quality coming to local models, especially as the bench of open-source coding models gets deeper and better.
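For scale, a rough break-even sketch; the build price below is an illustrative assumption, not a quote:

```python
# Hypothetical break-even against a $200/month hosted plan.
# The $1,500 hardware figure is an assumed build cost.
hardware = 1500          # one-time cost, USD (assumption)
plan = 200               # subscription, USD per month
print(hardware / plan)   # 7.5 months to break even, ignoring electricity
```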
I am able to run models locally on my rtx3060, but that is a paltry 12 GB of VRAM. the models I can fit in that are not worth coding with.
this is a very helpful answer, thank you. wish I could zap you!
I will #zap good answers
what is the most cost-effective way to run a #LocalLLM coding model? I'd like as much capacity as possible, for instance to run something like qwen3-coder, kimi-k2, magistral, etc. in their highest-fidelity instantiations. I see three high-level paths. buy...
- an nvidia card $$
- an AMD card $ + hassle with ROCm etc.
- a mac with enough system RAM for this task $?$?
- something else?
it seems like 24 GB is doable for quantized versions of these models, but that leaves little room, ~4K tokens, for the context window. #asknostr #ai #llm
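A back-of-the-envelope way to size this: weights take roughly params × bits/8, and the KV cache grows linearly with context. The architecture numbers below (layers, GQA heads, head dim) are illustrative assumptions for a 32B-class model, not any specific model's config; check the model's config.json.

```python
# Rough VRAM estimate: quantized weights + fp16 KV cache.

def weights_gb(params_b: float, bits: float) -> float:
    """Quantized weight footprint in GB (params in billions)."""
    return params_b * 1e9 * bits / 8 / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context: int, bytes_per_elem: int = 2) -> float:
    """KV cache in GB: keys + values, every layer, every token."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

w = weights_gb(32, 4)                    # 32B model at 4-bit -> ~16 GB
kv4k = kv_cache_gb(64, 8, 128, 4_096)    # ~1 GB at 4K tokens (assumed arch)
kv32k = kv_cache_gb(64, 8, 128, 32_768)  # ~8.6 GB at 32K tokens
print(f"weights {w:.0f} GB, +{kv4k:.1f} GB at 4K ctx, +{kv32k:.1f} GB at 32K ctx")
```

Under these assumptions a 4-bit 32B model plus a 4K context lands around 17 GB, which is why 24 GB cards feel tight and 12 GB cards are out entirely.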
yes. I tried to order something online, and the order was just silently cancelled. I called in and was told that my email wasn't associated with my name. wtf does that even mean, I ask. no satisfactory answer is given and I am left in the lurch.
call it SMS, a digital postcard
autonomous drug smuggling submarines running in circles
they might be vents, to allow air to escape the mold while it's filling, so there aren't voids among the tread detail
ΔC https://drss.io https://npub.blog building bridges from RSS => nostr and nostr => RSS. nostr seems like the ideal mechanism for distributing feeds of all types, but especially blogs and podcasts.
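For the RSS => nostr direction, a minimal sketch of turning one feed entry into an unsigned nostr event, following the NIP-01 serialization. The kind-1 mapping, the "r" tag, and the placeholder pubkey are my assumptions; drss.io's actual mapping may differ, and Schnorr signing plus relay publishing are omitted.

```python
import json, time, hashlib
import feedparser  # pip install feedparser

def entry_to_event(entry, pubkey: str) -> dict:
    # Assumed mapping: title + link become a kind-1 text note.
    content = f"{entry.title}\n\n{entry.link}"
    created_at = int(time.time())
    kind, tags = 1, [["r", entry.link]]  # NIP-01 "r" tag for the source URL
    # Event id is the sha256 of the canonical NIP-01 serialization.
    serialized = json.dumps([0, pubkey, created_at, kind, tags, content],
                            separators=(",", ":"), ensure_ascii=False)
    event_id = hashlib.sha256(serialized.encode()).hexdigest()
    return {"id": event_id, "pubkey": pubkey, "created_at": created_at,
            "kind": kind, "tags": tags, "content": content}

feed = feedparser.parse("https://example.com/feed.xml")  # placeholder feed
print(entry_to_event(feed.entries[0], pubkey="<hex pubkey>"))
```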