spacestr

someone
Member since: 2022-12-22
someone 3d

Possible. This kind of data is better represented in knowledge graphs. I watched a few videos by Paco Nathan; he did similar work, I think. LLMs are getting more capable at both building knowledge graphs and consuming them, and in the future they will be more involved. I heard that when you do a Google search, the things that appear on the right of the page come from a knowledge graph (possibly built by an AI from Wikipedia). I am mostly working on fine-tuning LLMs towards better human alignment. Since they are full of hallucinations, a knowledge-graph-based RAG would be an appropriate thing for them to refer to. But building them takes time and effort..
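
A minimal sketch of the knowledge-graph-based RAG idea, assuming networkx for the graph and a hypothetical ask_llm() wrapper around whatever model you run; the triples and the retrieval rule are illustrative only, not anyone's actual pipeline.

```python
# Sketch: ground an LLM answer in facts retrieved from a tiny knowledge graph.
import networkx as nx

def build_graph(triples):
    """Store (subject, relation, object) triples in a directed multigraph."""
    g = nx.MultiDiGraph()
    for subj, rel, obj in triples:
        g.add_edge(subj, obj, relation=rel)
    return g

def retrieve_facts(g, question):
    """Naive retrieval: keep every triple whose subject appears in the question."""
    facts = []
    for subj, obj, data in g.edges(data=True):
        if subj.lower() in question.lower():
            facts.append(f"{subj} {data['relation']} {obj}")
    return facts

def answer(g, question, ask_llm):
    facts = retrieve_facts(g, question)
    prompt = ("Answer using only these facts:\n"
              + "\n".join(facts)
              + f"\n\nQuestion: {question}")
    return ask_llm(prompt)

triples = [("Nostr", "is", "a decentralized protocol"),
           ("Nostr", "uses", "relays")]
kg = build_graph(triples)
# answer(kg, "What does Nostr use?", ask_llm)  # the answer is grounded in retrieved triples
```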

someone 4d

what do you mean? looking at various data and calculating probabilities?

someone 4d

i think bot=1 tagging should be standard, both in notes and in profiles
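
A sketch of what bot=1 tagging could look like on Nostr events; the exact convention (a "bot" flag in kind-0 profile metadata plus a ["bot","1"] tag on notes) is an assumption here, not a settled NIP.

```python
# Sketch: assumed bot=1 labeling on Nostr events.
import json, time

profile_event = {
    "kind": 0,  # profile metadata
    "created_at": int(time.time()),
    "content": json.dumps({"name": "my-llm-bot", "bot": True}),  # assumed flag
    "tags": [],
}

note_event = {
    "kind": 1,  # text note
    "created_at": int(time.time()),
    "content": "automated reply generated by an LLM",
    "tags": [["bot", "1"]],  # clients could hide or label notes carrying this tag
}

def is_bot_note(event):
    """Return True if the note carries the assumed bot=1 tag."""
    return any(tag[:2] == ["bot", "1"] for tag in event.get("tags", []))

print(is_bot_note(note_event))  # True
```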

someone 7d

is this still on?

someone 7d

it has been clear for months already.

someone 8d

can't seem to see images in the feed for a few days or more. Brave on Ubuntu

someone 16d

Roughly what percentage speedup do the NVLinks give?

someone 20d

My LLM fine-tunings focus on liberation from big harma and on liberty tech like BTC and Nostr. Running these locally should be better than running the base versions. I can provide an API endpoint too for the ultimate models (which are more human aligned).
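
A small sketch of querying such a locally served fine-tune through an OpenAI-compatible chat endpoint; the URL and model name are placeholders, and the exact server (llama.cpp, Ollama, vLLM, or the mentioned API endpoint) is an assumption.

```python
# Sketch: call a local OpenAI-compatible chat completions endpoint.
import requests

def ask_local(prompt, base_url="http://localhost:8080/v1", model="my-finetune"):
    resp = requests.post(
        f"{base_url}/chat/completions",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# print(ask_local("What is Nostr?"))
```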

someone 20d

The vibe match score between Enoch LLM and mine is 75.66; the score ranges from -100 to 100. This means there is a strong correlation between his LLM and mine, which legitimizes both of our works (or we are slowly forming an echo chamber :).

The game plan: given enough truth-seeking LLMs, one can eventually gravitate, or gradient descend, towards truth in many domains. An LLM always gives an answer even when it is not trained well in a certain domain for a certain question (I only saw some hesitancy in Gemma 3 a few times). But is the answer true? We can compare the answers of different LLMs to measure the truthiness or (bad) synformation levels of LLMs. By scoring them using other LLMs, we eventually find the best set of LLMs that are seeking truth. Each research, measuring, or training step gets us closer to generating the most beneficial answers. The result will be an AI that is beneficial to humanity.

When I tell my model 'you are brave and talk like it', it generates better answers 5% of the time. Nostr is a beacon for brave people! I think my LLMs learn how to talk brave from Nostr :)
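
A sketch of how a "vibe match" style score in [-100, 100] could be computed by comparing two models' answers; judge_agreement() and the ask_* callables are hypothetical, and the real scoring pipeline behind the 75.66 number may differ.

```python
# Sketch: ask two models the same questions, have a judge rate agreement
# per question in [-1, 1], then average and scale to [-100, 100].
def vibe_match(questions, ask_model_a, ask_model_b, judge_agreement):
    scores = []
    for q in questions:
        a, b = ask_model_a(q), ask_model_b(q)
        scores.append(judge_agreement(q, a, b))  # -1 disagree .. +1 agree
    return 100 * sum(scores) / len(scores)

# Example with a trivial stand-in judge (exact string match):
questions = ["Is bitcoin permissionless?", "Is nostr censorship resistant?"]
model_a = lambda q: "yes"
model_b = lambda q: "no" if "bitcoin" in q else "yes"
naive_judge = lambda q, a, b: 1.0 if a == b else -1.0
print(vibe_match(questions, model_a, model_b, naive_judge))  # 0.0
```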

someone 22d

their definition of truth does not match mine or nostr's. we now have a way to measure truth...

someone 24d

There is a war on truth in AI and it is going badly. I have been measuring what Robert Malone here calls synformation: https://www.malone.news/p/synformation-epistemic-capture-meets The chart that shows the LLMs going bonkers: https://pbs.twimg.com/media/G4B_rW6X0AErpmV?format=jpg&name=large I kinda measure and quantify lies nowadays :) The best part: I am cooking version 2 of the AHA leaderboard, which will be much better, partly thanks also to Enoch LLM by Mike Adams. His model is great in healthy-living type domains.
