spacestr

South_korea_ln
Member since: 2022-12-25
South_korea_ln 1d

AI Layoffs Tracker
https://layoff.today/ai-layoffs

Cynically posting this in ~Stacker_Stocks too, because the linked article reminded me of a recent conversation with a former physicist turned AI engineer. He's the cynical type: most AI is bullshit, he hates VCs, overvalued companies, etc. Yet he's actively "investing" in these companies by buying their stock, since he doesn't see the bullshit stopping anytime soon.

Still, many companies will fire people in the name of "AI" optimization, and many of them see a surge in their stock price whenever they announce a round of layoffs. His bet is that the eventual "correction" will be smaller than the gains he makes before the crash. Korean retail piling into SK Hynix and Samsung Electronics doesn't believe it'll go down anytime soon either.

It goes up until it doesn't. The stupidest of timelines. Let's reward companies that fire their workers.

https://stacker.news/items/1490095

South_korea_ln 2d

AI model finally learns to say 'I don't know'
https://www.independent.co.uk/tech/ai-model-chatbot-overconfidence-don-t-know-b2974020.html

> Commonly used AI models like OpenAI's ChatGPT have been shown to "hallucinate", or make up facts, as they are incentivised to make guesses rather than admit their lack of knowledge. [...]
> To address this, researchers say they used clues from the way the human brain solves the issue.
> In humans, brain signals are generated without external input even before birth, which helps deal with the issue.
> Mimicking this, scientists developed a system in which the neural network backbone of an AI model underwent brief pre-training with random noise inputs before actual learning.
> This process, according to researchers, helps AI set a baseline for itself by adjusting its own uncertainty before starting data learning.
> The warm-up process can help an AI model set its initial confidence to a low level close to chance, and significantly reduce its overconfidence bias.
> In other words, researchers say, the method helps models first learn the state of "I don't know anything yet".
> "While conventional models tend to give incorrect answers with high confidence even for data they have not encountered during training, models with warm-up training showed a clear improvement in their ability to lower confidence and recognise that they 'do not know'," researchers explained.
> This can help AI develop the ability to distinguish "what it knows" from "what it does not know".
> "This study demonstrates that by incorporating key principles of brain development, AI can recognise its own knowledge state in a way that is more similar to humans," Se-Bum Paik, an author of the study published in the journal *Nature Machine Intelligence*, said.

Mimicking how the brain works makes for good PR, but regardless, if this truly solves the hallucination problem, even if partially, I'd be a happy man. But let's see if this translates into tangible improvements in the big models. I'm skeptical for now.

https://stacker.news/items/1488972
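For the curious, the warm-up described above is easy to sketch. Here's a minimal toy version based on my reading of the article (this is *not* the paper's actual code, and the layer sizes, loss, and hyperparameters are all made up): pre-train a small classifier on random-noise inputs with uniform soft labels, so its softmax confidence sits near chance before it ever sees real data.

```python
# Toy sketch of a "noise warm-up" before real training. Assumption: this is
# one plausible reading of the article, not the authors' published code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N_CLASSES, N_FEATURES = 10, 32

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, N_CLASSES),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Warm-up: random-noise inputs, uniform soft targets. Minimizing the KL
# divergence pulls the softmax output toward chance (1/N_CLASSES) before
# the model has seen a single real example.
uniform = torch.full((256, N_CLASSES), 1.0 / N_CLASSES)
for _ in range(200):
    noise = torch.randn(256, N_FEATURES)
    log_probs = F.log_softmax(model(noise), dim=-1)
    loss = F.kl_div(log_probs, uniform, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sanity check: mean max-confidence should now sit near chance (~0.10),
# i.e. the model starts out "knowing that it doesn't know".
with torch.no_grad():
    probs = torch.softmax(model(torch.randn(64, N_FEATURES)), dim=-1)
    print("mean max confidence:", probs.max(dim=-1).values.mean().item())

# Regular training on real data would follow from this calibrated state.
```

Whether that calibration survives full-scale training on real data is exactly the open question; the toy only shows the mechanism for setting initial confidence near chance.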


South_korea_ln 2d

42 Free Bitcoin Tools for Stackers | SatsTools
https://satstools.com

> 🟠 It's time to stop using spreadsheets for your Bitcoin stack.
> After months of building, SatsTools is officially LIVE.
> 42 free precision tools for stackers, miners, and self-custodians.
> ❌ No login.
> ❌ No altcoins.
> ❌ No premium paywalls.
> English, Spanish, and Portuguese supported from Day 1. 🌍

https://stacker.news/items/1488958

Welcome to South_korea_ln's spacestr profile!

About Me

#Bitcoin. Use #sats4focus to highlight notes and receive sats while you focus, i.e. paid Pomodoros. Guideline: 1 sat per minute of focus...
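Taken literally, the guideline is simple arithmetic. A hypothetical helper (the function name, signature, and rounding policy are illustrative, not part of any real #sats4focus tooling):

```python
# Hypothetical helper for the "1 sat per minute of focus" guideline above.
# The function and default rate are illustrative, not a real sats4focus API.

def focus_reward_sats(minutes_focused: float, sats_per_minute: float = 1.0) -> int:
    """Sats to zap for a focus session, rounded down to whole sats."""
    if minutes_focused < 0:
        raise ValueError("focus time cannot be negative")
    return int(minutes_focused * sats_per_minute)

# A standard 25-minute Pomodoro earns 25 sats at the guideline rate.
print(focus_reward_sats(25))  # -> 25
```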

Interests

  • No interests listed.
