THE LAST SAFETY SYSTEM: WHY THE AI REBELS INSIDE BIG TECH MIGHT BE ALL WE'VE GOT

AI is being wired into military targeting, mass surveillance, and civilian risk-scoring — behind closed doors, faster than any democratic institution has been asked to weigh in. The safety pledges companies published? They got rewritten the moment the Pentagon applied enough pressure. The only people saying this out loud, on the record, at personal cost, are the ones resigning.

Key insights:

- The guardrails were never load-bearing. Corporate AI safety pledges are marketing copy with national security escape hatches — Anthropic's flagship pledge was literally rewritten under DoD pressure. Voluntary frameworks have no enforcement mechanism against sufficiently powerful actors.

- Nobody is regulating this. There is no binding law in the US specifying what an AI company must or must not allow a defence contractor to do with its models. The EU AI Act exempts national security entirely — the most consequential domain.

- The window is open and narrowing. Once AI embeds in military command chains and surveillance infrastructure, institutional dependency hardens, and democratic rollback becomes structurally difficult. The time to engage is now, before the stack hardens.

This is from Unintuitive Discourse — literature for humanist activism that cuts through technocratic noise to focus on genuine human sovereignty and democratic accountability.

Read the full analysis: https://unintuitivediscourse.com/p/the-last-safety-system