I was curious what Richard Stallman thinks about modern AI.

"Pretend Intelligence" is his central objection. Stallman warns that "nowadays, people often use the term artificial intelligence for things that aren't intelligent at all" ([Slashdot](https://news.slashdot.org/story/26/01/25/1930244/richard-stallman-critiques-ai-connected-cars-smartphones-and-drm)). He coined the term "Pretend Intelligence" (PI) as a counter to the marketing hype, arguing that calling LLMs "AI" endorses a false claim and encourages people to trust systems they shouldn't. He calls LLMs "generators" because they generate text without understanding what that text means, making mistakes "without batting a virtual eyelash," which means nothing they produce can be trusted.

Stallman points to the real-world harm of people believing LLM output because they assume the system understood what it generated. He cites the example of a lawyer who asked a chatbot to provide relevant legal citations, and the chatbot invented plausible-looking references to non-existent cases. (https://techrights.org/n/2025/03/07/LLM_Slop_Versus_Richard_Stallman.shtml)

He is particularly alarmed about LLM-generated content flooding the internet with low-quality, unverified text, what he and others call "slop," degrading the overall quality of information on the web.

His broader concern ties into his free software philosophy: if a program users don't control is doing their computing, it controls them. He urges people not to entrust their computing to systems whose operators aren't accountable to them. (https://techrights.org/n/2025/04/24/Richard_Stallman_Can_Explain_to_Oxford_Artificial_Intelligence_.shtml) He sees cloud-based AI as another vector for the centralized data surveillance he has always opposed: user inputs, queries, and data flow to servers controlled by corporations with no accountability to the user.

The short version of his position: LLMs are sophisticated autocomplete dressed up in dangerous marketing language, and the real harm is that society is building trust in systems that don't understand, don't reason, and are controlled by entities with no loyalty to the user.