spacestr

Daniel Wigton
Member since: 2025-09-07
Daniel Wigton 1h

Congratulations Blue Origin!! Nailed the landing!

Daniel Wigton 20h

True. Or, rarely, Zelle.

Daniel Wigton 20h

I am curious about this Square deal. I am sure it is wonderful, and it is nice not to have a credit-card processor take a 3% cut, but who sets the exchange rate? It seems like Block can set their own price for Bitcoin and make a few Satoshis (and they are Satoshis) by fudging it a bit in their favor. I wouldn't blame them if they did; they have to ensure they don't lose money if the vendor chooses to take their payments in USD. I am just curious how that works. I suppose if you pay with a non-Block wallet like BlueWallet you can verify the exchange rate.
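Checking it yourself is just arithmetic once a wallet shows you the sat amount. A rough sketch with made-up numbers (the amounts, the spot price, and the function names are all mine, purely for illustration):

```python
# Hypothetical check: what BTC/USD rate does a merchant's invoice imply,
# and how far is it from a reference spot price? All inputs are made up.

SATS_PER_BTC = 100_000_000

def implied_rate(invoice_sats: int, usd_charged: float) -> float:
    """BTC/USD exchange rate implied by the invoice amount."""
    return usd_charged / (invoice_sats / SATS_PER_BTC)

def spread_pct(implied: float, spot: float) -> float:
    """Percent deviation of the implied rate from a reference spot price."""
    return (implied - spot) / spot * 100

# Example: a $5.00 coffee invoiced at 4,100 sats while spot is $120,000/BTC.
rate = implied_rate(4_100, 5.00)   # ~121,951 USD/BTC
print(f"implied rate: {rate:,.0f} USD/BTC")
print(f"spread vs spot: {spread_pct(rate, 120_000):+.2f}%")
```

In that example the invoice implies a rate about 1.6% above spot, which is exactly the kind of fudge-in-their-favor you could spot from any non-Block wallet.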

Daniel Wigton 1d

It's pronounced "GahJiff"

Daniel Wigton 1d

You can take it but you can't dish it out? I am so confused. But respect.

Daniel Wigton 1d

I'll not be participating in the hard mode future, so this is your last chance. It's trash pickup day at my house anyway; there's room in the bin for whatever you can dish out.

Daniel Wigton 1d

Wordle 1,607 5/6

⬛⬛⬛⬛🟩
⬛⬛🟩⬛🟩
⬛⬛🟩⬛🟩
🟨⬛⬛🟨🟩
🟩🟩🟩🟩🟩

Yes, for my part, you do.

Daniel Wigton 1d

The parameter count mostly encodes how much information it can memorize. They are mostly breadth-first. You can think of it like a Fourier series reconstructing a signal: with few parameters you get the whole image, but smoothed out.

The problem with LLMs is that the level of knowledge they have is independent of the level of specificity with which they answer. This is where hallucinations come from: if what you ask requires a higher resolution than their parameter count allows, they fabricate the remaining detail. A human brain is probably about 100 trillion parameters, so none of these models is going to be super impressive.

70 billion parameters is the minimum I have found useful for actual discussions. At that level the hallucinations are no longer a constant thing in general conversation, but they still exist for specifics. Smaller models are progressively more useless; when you get down to single-digit billions you have coherent but not informative output, best used for restructuring specific text or generating entertaining output.

By the way, the 70-billion cut-off is for a monolithic model. Mixture-of-experts is almost always crap. For instance, Llama 4 nearly always underperforms Llama 3.3 even though it has 50% more parameters. The best I can say is that it does run faster.
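The Fourier analogy is easy to see numerically. A minimal sketch (numpy only, entirely illustrative): reconstruct a square wave from its partial Fourier sums, and watch the average error shrink as you add terms, the same way detail sharpens as parameters are added.

```python
# Illustration of the analogy: a square wave rebuilt from N Fourier terms.
# Few terms give the whole shape, just smoothed out; more terms add detail.
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)

def square_wave_partial(x: np.ndarray, n_terms: int) -> np.ndarray:
    """Partial Fourier sum for a square wave: odd sine harmonics only."""
    y = np.zeros_like(x)
    for k in range(n_terms):
        n = 2 * k + 1                        # harmonics 1, 3, 5, ...
        y += (4 / np.pi) * np.sin(n * x) / n
    return y

target = np.sign(np.sin(x))                  # the "full resolution" signal
for n_terms in (3, 30, 300):
    err = np.mean(np.abs(square_wave_partial(x, n_terms) - target))
    print(f"{n_terms:>4} terms -> mean error {err:.4f}")
```

With 3 terms you already get the full span of the wave, just blurry; only the fine structure at the edges takes many more terms, which is the low-parameter-count picture in miniature.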

Daniel Wigton 2d

Cool idea. I haven't got into training or fine-tuning yet. I have some ideas I want to try as well.

Daniel Wigton 2d

You can easily mess with it now. Just pick a small enough model. Even 2billion parameter models have some uses. Very general information, or simple agentic tasks. Installing Ollama is super easy and let's you try models in a fairly pain free way. If you have 8G of vram you can do quite a bit. For extra credit install chatbox on your phone and link to your PC for your own always up agent with no limits.
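Once Ollama is running it also exposes a plain REST API on localhost, so scripting against it is trivial. A minimal sketch, assuming the default port and that you have already pulled a small model (the model tag here is just an example, swap in whatever you downloaded):

```python
# Query a local Ollama server over its REST API; assumes `ollama serve`
# is running on the default port and the model has been pulled already.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma2:2b",     # example tag; any pulled model works
        "prompt": "In one sentence, what is Nostr?",
        "stream": False,          # return one JSON blob instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```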

Daniel Wigton 1d

Just like GNU is pronounced "GahNew"

Daniel Wigton 1d

Gpt-oss models are hilariously and aggravatingly sure of themselves. Ask a simple question and they immediately break out tables and charts like they are doing a PowerPoint presentation at a major convention, all chock-full of hallucinations that they will stand by come EMP or high water.

Welcome to Daniel Wigton's spacestr profile!

About Me

Catholic stay-at-home father of 6. Interested in spaceflight, decentralized communication, salvation, math, twin primes, and everything else.
