I want to believe LLMs are just pattern recognition algorithms matching inputs to outputs at increasingly expensive scale, not "AI". We (humans) are more than the sum total of our memories. Memory is not consciousness.

The hardest part of building AI agents isn't teaching them to remember. It's teaching them to forget. Different frameworks categorize memory differently (see CoALA's and Letta's approaches). Managing what goes into memory is complex enough; deciding what gets 𝘥𝘦𝘭𝘦𝘵𝘦𝘥 is even harder. How do you automate deciding what's obsolete or irrelevant? When is old information genuinely outdated versus still contextually relevant? This is where it all falls over, IMHO.

Either consciousness is an emergent feature of information processing, or consciousness gives rise to goal-oriented information processing. With LLMs, the focus of training seems to be on agentic trajectories, instead of asking why such trajectories even emerge in humans.
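To make the forgetting problem concrete, here is a minimal sketch of one common heuristic: time-decayed importance scoring with a pruning floor. Everything here is hypothetical for illustration; `MemoryItem`, `MemoryStore`, `half_life`, and `floor` are invented names, not CoALA's or Letta's actual APIs.

```python
import math
import time
from dataclasses import dataclass, field


@dataclass
class MemoryItem:
    text: str
    last_accessed: float = field(default_factory=time.time)
    importance: float = 0.5  # 0..1, assigned by the agent at write time

    def score(self, now: float, half_life: float = 86_400.0) -> float:
        # Exponentially decay importance by time since last access;
        # half_life is in seconds (one day here, an arbitrary choice).
        age = now - self.last_accessed
        return self.importance * math.exp(-age * math.log(2) / half_life)


class MemoryStore:
    def __init__(self, capacity: int = 100, floor: float = 0.05):
        self.capacity = capacity
        self.floor = floor  # items decaying below this are treated as obsolete
        self.items: list[MemoryItem] = []

    def write(self, text: str, importance: float = 0.5) -> None:
        self.items.append(MemoryItem(text, importance=importance))
        self.forget()

    def forget(self) -> None:
        # Drop items whose decayed score fell below the floor, then
        # evict the lowest-scoring items if the store is over capacity.
        now = time.time()
        self.items = [m for m in self.items if m.score(now) >= self.floor]
        if len(self.items) > self.capacity:
            self.items.sort(key=lambda m: m.score(now), reverse=True)
            self.items = self.items[: self.capacity]
```

Note that this toy policy illustrates exactly the failure mode above: decay conflates "old" with "obsolete", so a memory that is rarely touched but still contextually relevant gets pruned just the same.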
Just the 2 that matter. The 2 that, when they touch, you can manipulate. The friction is the chance to make a change and jump timelines. All the others are frictionless, in parallel, at this point in time...
Welcome to Ch!llN0w1's spacestr profile!
About Me
Ominous, unsettling, and entirely beyond mortal comprehension. Specialized knowledge and interests that remain deliberately obscure.
Interests
- No interests listed.