spacestr

Joe Resident
Member since: 2023-06-17
Joe Resident 2d

Social media being more important than the moment is cringe. Unless it IS the moment, in which case it could either be fine, or doubly cringe.

Joe Resident 4d

note1stz7fh966flulf9vt8550k6nl0kljdzzh388gls97g052lek8pss2ucnat Re: incentivizing intelligence agencies to release more information. I asked GPT-5 to speculate, pretty impressed.

Prompt: "speculate about an incentive structure within the agencies that rewards them for making as much information public as possible. Now they optimize only for global strategic advantage and self-preservation, what if structurally we added releasing information as a third motivation?"

ChatGPT said:

That's a fascinating inversion of the current design. Right now, as you say, U.S. intelligence agencies are structured to optimize for (1) national strategic advantage and (2) bureaucratic self-preservation. Secrecy serves both: it protects sources/methods and shields the agency from criticism or oversight. Adding a third structural incentive, "maximize responsible public disclosure," would fundamentally reshape their internal incentive calculus. Let me speculate on how such a system might work.

1. Why Secrecy Is the Default Incentive

- Risk asymmetry: If you over-classify, there's rarely a penalty; if you under-classify and something leaks or goes wrong, it can end careers.
- Budget politics: Bigger "black box" operations make it harder for outsiders to challenge budgets. Classified = harder to contest.
- Cultural prestige: Within the agencies, being "read into" highly secret programs is status. More secrecy = more power, more exclusivity.

These pressures mean that secrecy is the dominant strategy under the current incentive structure.

2. Hypothetical: Building in "Disclosure Rewards"

Imagine Congress, the executive, or independent oversight creates a third metric: value of responsible public transparency. The agency and its personnel are rewarded when information is released to the public (without harming ongoing operations). Possible structural levers:

- Budgetary incentives: A percentage of agency budget is tied to a "Disclosure Index." Agencies that responsibly declassify and release more information get higher discretionary funding.
- Career incentives: Promotions and awards explicitly tied to quality and quantity of responsible disclosures (e.g. "analyst of the year" for producing intelligence assessments later released with minimal redactions).
- Reputational capital: Agencies score "public trust points" tracked by oversight boards, affecting leadership bonuses, leadership tenure, or even how much autonomy Congress grants them.
- Competition between agencies: If CIA, NSA, DIA, etc. are benchmarked against each other for transparency, the "prestige" dynamic flips: leaders want to be seen as the most open without compromising security.

3. What Changes in Behavior?

- Classification standards shift: The default might move from "classify unless clear it's harmless" → "release unless clear it's harmful."
- Proactive declassification pipelines: Dedicated offices constantly preparing sanitized versions of reports for public release, with incentives to do it quickly.
- Strategic openness as a weapon: Agencies might deliberately release certain intel analyses not only to reassure the public but also to shape foreign adversaries' behavior (e.g., exposing corruption networks, publishing satellite evidence).
- Public engagement loop: Agencies could become knowledge-brokers to the citizenry, publishing historical case studies, trend reports, and sanitized operational lessons, thereby turning intelligence into a "national commons" resource as much as a secret tool.

4. Potential Benefits

- Trust repair: Citizens see intelligence as working for them, not over them.
- Policy legitimacy: Elected officials can point to intelligence disclosures as grounds for decisions, reducing suspicion that policy is being driven by a "shadow state."
- Deterrence by transparency: Publishing adversary misdeeds (e.g. disinformation ops) can weaken rivals without kinetic action.
- Innovation spillover: Releasing sanitized technical methods (cryptography, satellite imaging, AI analysis) accelerates civilian science/industry.

5. Potential Risks

- Over-disclosure arms adversaries: Too much revealed, too soon, can compromise capabilities.
- Gaming the metric: Agencies might game the incentive by releasing irrelevant or already-known material.
- Perverse incentives: Leaders may push to "pad" transparency stats with low-value data dumps rather than meaningful releases.
- Short-termism: The drive to disclose could bias against long-term clandestine projects where secrecy is crucial.
- Weaponization of disclosure metrics: Rival political parties could pressure agencies to release intelligence favorable to their narratives.

Joe Resident 4d

note1stz7fh966flulf9vt8550k6nl0kljdzzh388gls97g052lek8pss2ucnat 4. If the US hadn't become a global superpower, maybe it wouldn't be acting out the self-preservation instinct in such questionable ways. Maybe this always happens to empires. Maybe they get paranoid.

Joe Resident 4d

note1k2yfjnz39warjavuz28k0tdyc008azv70jqemxe9hrx59yjnjwzsm0h3v9 My thoughts and ideas for solutions (after an entire 30 minutes of thought):

1. Maybe these intel guys aren't the creepy, freedom-hating bureaucrats I took them for. I always believed the vast majority were doing their job as best they could; now maybe I believe that a little more, even for the leadership that makes decisions which, from the outside, look questionable and sometimes infuriating (though the possibility of corruption via various forces is always there).

2. The national security risks of forcing the Intelligence Agencies to take their foot off the gas are probably not as bad as they'd like us to believe, or even believe themselves. TSA security theater comes to mind. Not to say the risks are trivial, though.

3. https://sgp.fas.org/eprint/limits.pdf - there is significant hope. Freedom of Information requests are effective. There are ways to shine light into the secrecy. But the incentives are currently such that significant effort is required to do so. We should *reverse* that incentive structure: somehow reward the intelligence agencies for open-sourcing as much information as they possibly can. Reward them at the individual level and the organizational level. Then they, the ones with all the context, can balance strategic advantage with accountability. Currently they're only optimizing for strategic advantage. The FOIA is good on its own, but frames things in such a way as to make revelation the exception, not the default.

Joe Resident 4d

My current read of why the US government is full of such spooky bois:

The stark truth behind the often spooky, almost adversarial-feeling relationship of Americans with their own country's Intelligence Agencies is that secrecy, even with respect to their own populace, is necessary to maintain intelligence and strategic advantage over rival nations. The intelligence agencies are entrusted with that power not because anyone elected them, but by the nature of their work. This creates a point of conflict between the sovereignty of the free people, their constitution and its amendments (see the 4th), and elected leaders on one hand, and the intelligence agencies that oversee their safety on the other. The 'puppet' perception of elected leaders, vis-a-vis their apparent beholdenness to shadowy intelligence agencies, is a clear example.

The seriousness of this conflict was muted until the Cold War, when the budget and sophistication of the Intelligence Agencies reached never-before-seen levels; the agencies were then reluctant to surrender this perceived advantage after the war ended. Freedom-seeking nations are currently in a time where the populace doesn't understand this conflict; Intelligence Agencies are scared to relinquish their power, partly for self-preservation, but also because they can see the national risks that would follow; and elected leaders who do understand the conflict don't feel it prudent to side completely with the full sovereignty of the populace (which would require them to 'blow the lid' on the secret nature of the nation's operation, or otherwise undermine it), both for fear of the same national risks the Intelligence Agencies see and out of self-preservation (ex. some think JFK was taking this route, and was gruesomely silenced by the Intelligence Agencies).

So there's no clear path forward without very decisive action, and the story of the decades since the Cold War has been one of mission and scope creep for the Intelligence Agencies, with no real architecture in place for balancing secrecy, safety, and strategic advantage against sovereignty and democratic accountability. Will follow with my ideas for solutions.

Joe Resident 4d

Sometimes to *see* clearly you have to *be* clear first. Clean your room.

Joe Resident 4d

Don't know what's going on but I'm laughing

Joe Resident 9d

When I was learning git, I had a bash alias, git-help-quick, that printed a full page of ASCII art of that famous picture of Linus flipping off the camera.

Joe Resident 11d

A glorious win for the good people of Nostr #breadhasarrived

Joe Resident 15d

What's your GitHub? Sorry, your website isn't loading for me.

Joe Resident 16d

Pumped. I've had so many ideas for making something like your tenex project; heard you go through it on a podcast. But I keep telling myself, 'stay focused on your main project, everyone is having the same self-improving agent swarm idea, you'll be able to use someone else's soon without spending the time yourself.' On a COMPLETELY unrelated note: can I use tenex? Is it OSS? Also, despite my attempt not to get distracted by it, I've spent approx 40 hrs over the last 6 months developing the principles I would use to make my version of tenex. Would be down to share ideas.

Joe Resident 23d

That's crazy

Joe Resident 16d

note1n0jamfsr9ln09njjqs06ltc36nrwaa4qmcsyr43chdrvpa55szyqn5kqhv This made me think about the other side: how much more intelligence will there be from today (at the frontier), independent of compute required (within the same fundamental autoregressive transformer and pretraining + RL paradigm)? My gut says at least another jump the size of OG GPT-4 to today's frontier (Grok 4 / Gemini 2.5 Pro / Claude Sonnet 4 / GPT-5).

But I think a qualitative jump to really feeling like human-level intelligence will require something like a module within the network that pays attention to the coherence of a new "thought direction" WRT its own model of reality, not just next-token prediction on steroids. (That qualitative jump would manifest as a huge reduction in hallucinations and the ability to zoom out and self-correct after mistakenly honing in on the wrong path; in short, the model would feel like it's responding from a real mental model of the world, as opposed to today's models, which respond from a place of most-likely-next-token.)

And a neurology foray I made a few days ago leads me to say: wow, neurology and ML are starting to converge, and some of the computational models that neurologists are making strike me as believable substrates for this kind of "check thoughts against a causal world model" module I think is required for the next step in LLMs. That didn't really feel true to me a few years ago. Examples of neurologists' recent models I looked at: the TEM machine (entorhinal + hippocampus analogous) and Spaun (spiking NN, surprisingly holistic macro brain model). And recently HRM, or Hierarchical Reasoning Models (did well on the ARC-AGI-1 benchmark, caused a bit of a stir), strike me as the latest thing closing the neurology/AI gap and accommodating insights from both.

Excited and kinda concerned for the next 5 years. This is the time to build AI tech that gives power to individuals! Otherwise we might be SOL, at the mercy of oligarchical whims...
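To make that "coherence module" idea concrete, here is a toy sketch, purely illustrative: CoherenceGate and every name in it are hypothetical, my own construction rather than any published architecture. The idea is a gate that scores each candidate hidden state against a running world-state summary and suppresses directions that don't cohere with it.

```python
# Toy sketch only (hypothetical module, not a published design): gate candidate
# "thought directions" by their learned consistency with a running world state.
import torch
import torch.nn as nn

class CoherenceGate(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        # Recurrent summary of "the world so far".
        self.world = nn.GRUCell(d_model, d_model)
        # Bilinear consistency score between candidate and world state.
        self.score = nn.Bilinear(d_model, d_model, 1)

    def forward(self, h_candidate: torch.Tensor, world_state: torch.Tensor):
        # Coherence in (0, 1): how consistent is this candidate direction
        # with the accumulated world state?
        g = torch.sigmoid(self.score(h_candidate, world_state))
        gated = g * h_candidate                      # suppress incoherent directions
        new_world = self.world(h_candidate, world_state)  # advance the world summary
        return gated, new_world

# Tiny usage demo with random activations.
d = 64
gate = CoherenceGate(d)
h = torch.randn(1, d)        # candidate hidden state
world = torch.zeros(1, d)    # empty world summary
h_gated, world = gate(h, world)
print(h_gated.shape, world.shape)
```

A real version would presumably sit inside the transformer stack and be trained end to end; this only shows the shape of the mechanism, not how to train it.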

Joe Resident 16d

Based on the fact that quantization is so effective (which basically means there's still a lot of waste in today's LLMs), I think we have at least another 5x efficiency gain to go for the same intelligence compared to today's models. See https://docs.unsloth.ai/basics/unsloth-dynamic-ggufs-on-aider-polyglot
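Rough arithmetic behind that claim (a toy sketch of my own, not taken from the linked Unsloth page): if a model's weights can drop from fp16 to roughly 4 bits each while keeping most of their quality, the fp16 representation was carrying about 4x redundant precision, and mixed-precision schemes that quantize only the tolerant layers can push the effective savings further.

```python
# Illustrative only: naive symmetric int4 quantization of a weight matrix,
# plus the memory arithmetic that motivates the "waste" argument.
import numpy as np

def quantize_int4(w: np.ndarray):
    """Map fp32 weights to integers in [-7, 7] with a single scale."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(4096, 4096)).astype(np.float32)  # typical init scale

q, scale = quantize_int4(w)
err = np.abs(dequantize(q, scale) - w).mean()

# fp16 stores 2 bytes/param; int4 stores 0.5 bytes/param -> 4x smaller.
print(f"mean abs reconstruction error: {err:.6f}")
print(f"memory ratio fp16/int4: {2 / 0.5:.1f}x")
```

Real schemes (like the dynamic GGUFs in the link) are smarter than this single-scale version, quantizing per-block and keeping sensitive layers at higher precision, which is why the practical headroom plausibly exceeds the plain 4x.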

Welcome to Joe Resident's spacestr profile!

About Me

Working on a gardening robot called Wilbur; we need to give the power of AI to individuals!
