Microsoft's AI-Powered Copyright Bots Fucked Up And Got An Innocent Game Delisted From Steam
At some point, we, as a society, are going to realize that farming copyright enforcement out to bots and AI-driven robocops is not the way to go, but today is not that day. Long before AI became the buzzword it is today, large companies were employing their own copyright crawler bots, or those of a third party, to police their copyrights on these here internets. And for just as long, those bots have absolutely sucked out loud at their jobs. We have seen [example][1] after [example][2] after [example][3] of those bots making mistakes, resulting in takedowns or threats of takedowns of all kinds of perfectly legit content. Upon discovery, the content is usually reinstated while those employing the copyright decepticons shrug their shoulders and say "them's the breaks." And then it happens again.
It has to change, but isn't. We have yet another recent example of this in action, with Microsoft's copyright enforcement partner using an AI-driven enforcement bot to [get a video game delisted from Steam][4] over a single screenshot on the game's page that looks like, but isn't, from *Minecraft*. The game in question, *Allumeria*, is clearly partially inspired by *Minecraft*, but doesn't use any of its assets and is in fact its own full-fledged creative work.
> *On Tuesday, the developer behind the Minecraft-looking, dungeon-raiding sandbox [announced][5] that their game had been taken down from Valve's storefront due to a DMCA copyright notice issued by Microsoft. The notice, shared by developer Unomelon in the game's Discord server, accused Allumeria of using "Minecraft content, including but not limited to gameplay and assets."*
>
> *The takedown was apparently issued over one specific screenshot from the game's Steam page. It shows a vaguely Minecraft-esque world with birch trees, tall grass, a blue sky, and pumpkins: all things that are in Minecraft but also in real life and lots of other games. The game does look pretty similar to Minecraft, but it doesn't appear to be reusing any of its actual assets or crossing some arbitrary line between homage and copycat that dozens of other Minecraft-inspired games haven't crossed before.*
It turns out the takedown request didn't come from Microsoft directly, but via Tracer.AI, which claims to use an AI-driven bot to automatically flag and remove copyright-infringing content.
It seems the system failed to understand in this case that the image in question, while looking similar to screenshots that do include *Minecraft* assets, didn't actually infringe on anything. Folks at Mojang caught wind of this on Bluesky and had to take action.
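To illustrate how this kind of false positive happens (a hypothetical sketch, not a description of how Tracer.AI says its system actually works), many automated image-matching pipelines lean on perceptual hashes: two screenshots that merely *look* alike can land within the flagging threshold even when they share no actual assets.

```python
# Hypothetical sketch of threshold-based image matching, the kind of approach
# that can flag a look-alike screenshot as "infringing." Assumes the Pillow
# and imagehash libraries; the file names are invented for illustration.
from PIL import Image
import imagehash

THRESHOLD = 10  # max Hamming distance treated as a "match" (arbitrary choice)

reference = imagehash.phash(Image.open("minecraft_reference.png"))
candidate = imagehash.phash(Image.open("allumeria_screenshot.png"))

distance = reference - candidate  # Hamming distance between the 64-bit hashes
if distance <= THRESHOLD:
    # A visually similar but legally distinct image can easily end up here.
    print(f"FLAGGED: distance {distance} <= {THRESHOLD}")
else:
    print(f"OK: distance {distance}")
```

The point of the sketch is that "similar enough to a reference image" is a much weaker claim than "uses copyrighted assets," which is exactly the gap this takedown fell into.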
> *While it's unclear if the claim was issued automatically or intentionally, Mojang Chief Creative Officer Jens Bergensten (known to most Minecraft players as Jeb) [responded to a comment about the takedown on Bluesky][6], stating that he was not aware and is now "investigating." Roughly 12 hours later, Allumeria's Steam page has been reinstated.*
>
> *"Microsoft has withdrawn their DMCA claim!" Unomelon [posted earlier today][7]. "The game is back up on Steam! Allumeria is back! Thank you EVERYONE for your support. It's hard to comprehend that a single post in my discord would lead to so many people expressing support."*
And this is the point in the story where we all go back to our lives and pretend like none of this ever happened. But that sucks. For starters, there is no reason we should accept this kind of collateral damage, temporary or not. Add to that the stories surely out there in which a similar resolution was *not* reached. How many games, how much other non-infringing content, were taken down for longer because of an erroneous claim like this? How many never came back?
And at the base level, the fact is that if companies are going to claim that copyright is of paramount importance to their business, its enforcement can't be farmed out to automated systems that aren't good at their job.
[1]: https://www.techdirt.com/2023/11/22/copyright-bot-cant-tell-the-difference-between-star-trek-ship-and-adult-film-actress/
[2]: https://www.techdirt.com/2012/09/04/copyright-enforcement-bots-seek-destroy-hugo-awards/
[3]: https://www.techdirt.com/2015/03/27/copyright-bots-kill-app-over-potentially-infringing-images-follow-this-up-blocking-app-use-ccpublic-domain-images/
[4]: https://kotaku.com/minecraft-allumeria-steam-dmca-mojang-microsoft-2000667696
[5]: https://bsky.app/profile/unomelon.bsky.social/post/3meiiaowb7c2c
[6]: https://bsky.app/profile/jebox.bsky.social/post/3mejmlz7nss2h
[7]: https://bsky.app/profile/unomelon.bsky.social/post/3mekpz2kras2t
https://www.techdirt.com/2026/02/12/microsofts-ai-powered-copyright-bots-fucked-up-and-got-an-innocent-game-delisted-from-steam/
Ctrl-Alt-Speech: Panic! At The Discord
**[Ctrl-Alt-Speech][1] is a weekly podcast about the latest news in online speech, from Mike Masnick and [Everything in Moderation][2]'s Ben Whitelaw.**
**Subscribe now on [Apple Podcasts][3], [Overcast][4], [Spotify][5], [Pocket Casts][6], [YouTube][7], or your podcast app of choice, or go straight to [the RSS feed][8].**
In this weekâs roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Dr. Blake Hallinan, Professor of Platform Studies in the Department of Media & Journalism Studies at Aarhus University. Together, they discuss:
* [On Section 230's 30th Birthday, A Look Back At Why It's Such A Good Law And Why Messing With It Would Be Bad][9] (Techdirt)
* [An 18-Million-Subscriber YouTuber Just Explained Section 230 Better Than Every Politician In Washington][10] (Techdirt)
* [Discord Launches Teen-by-Default Settings Globally][11] (Discord)
* [Media Literacy Parents' study][12] ([GOV.UK][13])
* [EU says TikTok must disable "addictive" features like infinite scroll, fix its recommendation engine][14] (TechCrunch)
* [We Didn't Ask for This Internet with Tim Wu and Cory Doctorow][15] (The New York Times)
* [Despite Meta's ban, Fidesz candidates successfully posted 162 political ads on Facebook in January][16] (Lakmusz.hu)
* [Claude's Constitution Needs a Bill of Rights and Oversight][17] (Oversight Board)
* [Account Closed Without Notice: Debanking Adult Industry Workers in Canada][18] (ResearchGate)
Play along with Ctrl-Alt-Speech's [2026 Bingo Card][19] and get in touch if you win!
[1]: https://ctrlaltspeech.com/
[2]: https://www.everythinginmoderation.co/
[3]: https://podcasts.apple.com/us/podcast/ctrl-alt-speech/id1734530193
[4]: https://overcast.fm/itunes1734530193
[5]: https://open.spotify.com/show/1N3tvLxUTCR7oTdUgUCQvc
[6]: https://pca.st/zulnarbw
[7]: https://www.youtube.com/playlist?list=PLcky6_VTbejGkZ7aHqqc3ZjufeEw2AS7Z
[8]: https://feeds.buzzsprout.com/2315966.rss
[9]: https://www.techdirt.com/2026/02/09/on-section-230s-30th-birthday-a-look-back-at-why-its-such-a-good-law-and-why-messing-with-it-would-be-bad/
[10]: https://www.techdirt.com/2026/02/11/an-18-million-subscriber-youtuber-just-explained-section-230-better-than-every-politician-in-washington/
[11]: https://discord.com/press-releases/discord-launches-teen-by-default-settings-globally
[12]: https://www.gov.uk/government/publications/media-literacy-parents-study
[13]: http://gov.uk
[14]: https://techcrunch.com/2026/02/06/eu-tiktok-disable-addictive-features-infinite-scroll-recommendation-engine/
[15]: https://www.nytimes.com/2026/02/06/opinion/ezra-klein-podcast-doctorow-wu.html
[16]: https://lakmusz.hu/2026/02/10/despite-metas-ban-fidesz-candidates-successfully-posted-162-political-ads-on-facebook-in-january
[17]: https://www.oversightboard.com/news/claudes-constitution-needs-a-bill-of-rights-and-oversight/
[18]: https://www.researchgate.net/publication/400596062_Account_Closed_Without_Notice_Debanking_Adult_Industry_Workers_in_Canada?channel=doi&linkId=698a2fec64ca8a38208af54d&showFulltext=true
[19]: https://www.ctrlaltspeech.com/bingo/
https://www.techdirt.com/2026/02/12/ctrl-alt-speech-panic-at-the-discord/
On Its 30th Birthday, Section 230 Remains The Linchpin For Users' Speech
For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. [Section 230][1], which protects internet users' speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.
Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either [repeal][2] or [sunset the law][3]. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.
But rolling back or eliminating Section 230 will not stop [invasive corporate surveillance][4] that harms all internet users. Killing Section 230 won't end the dominance of the current handful of large tech companies; it [would cement their monopoly power][5].
The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users' speech.
This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. Especially when the speech problems with alternatives to Section 230's immunity are [readily apparent][6], both in the U.S. and around the world. [Experience shows][7] that those systems result in more censorship of internet users' lawful speech.
Let's be clear: EFF defends Section 230 because it is the best available system to protect users' speech online. By immunizing intermediaries for their users' speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people's speech online, such as when they reshare another user's post or host a comment section on their blog.
It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230's limited civil immunity because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it's the speaker that should be held responsible, not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services' own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.
### **Section 230 Alternatives Would Protect Less Speech**
With so much debate around the downsides of Section 230, it's worth considering: What are some of the alternatives to immunity, and how would they shape the internet?
The least protective legal regime for online speech would be strict liability. Here, intermediaries would always be liable for their users' speech, regardless of whether they contributed to the harm or even knew about the harmful speech. It would likely end the widespread availability and openness of the social media and web hosting services we're used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.
Another alternative: imposing legal duties on intermediaries, such as requiring that they act "reasonably" to limit harmful user content. This would likely result in platforms monitoring users' speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech, probably on a large scale. Intermediaries would not be willing to defend their users' speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They're the ones that would have the legal and technical resources to weather the flood of lawsuits.
Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That will also result in takedowns of legitimate speech. And there's no doubt such a system will be abused. EFF has documented how the DMCA leads to [widespread removal][8] of lawful speech based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system will invite similar behavior, and powerful figures and government officials will use it to [silence their critics][9].
The closest alternative to Section 230's immunity provides protections from liability until [an impartial court][10] has issued a full and final ruling that user-generated content is illegal, and has ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship, because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.
By contrast, immunity takes the variable of whether an intermediary will stand up for their users' speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.
In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not [broadly censor][11] users' speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.
EFF will continue to fight for Section 230, as it remains the best available system to protect everyone's ability to speak online.
*Reposted from [EFF's Deeplinks blog][12].*
[1]: https://www.eff.org/issues/cda230
[2]: https://www.congress.gov/bill/119th-congress/house-bill/7045
[3]: https://www.congress.gov/bill/119th-congress/senate-bill/3546
[4]: https://www.eff.org/wp/privacy-first-better-way-address-online-harms
[5]: https://www.eff.org/deeplinks/2024/05/wanna-make-big-tech-monopolies-even-worse-kill-section-230
[6]: https://www.eff.org/deeplinks/2022/05/platform-liability-trends-around-globe-taxonomy-and-tools-intermediary-liability
[7]: https://www.eff.org/files/2020/09/04/mcsherry_statement_re_copyright_9.7.2020-final.pdf
[8]: https://www.eff.org/takedowns
[9]: https://www.eff.org/deeplinks/2025/03/trump-calls-congress-pass-overbroad-take-it-down-act-so-he-can-use-it-censor
[10]: https://manilaprinciples.org/index.html
[11]: https://manilaprinciples.org/index.html
[12]: https://www.eff.org/deeplinks/2026/02/its-30th-birthday-section-230-remains-lynchpin-users-speech
https://www.techdirt.com/2026/02/12/on-its-30th-birthday-section-230-remains-the-linchpin-for-users-speech/
Bondi Spying On Congressional Epstein Searches Should Be A Major Scandal
Yesterday, Attorney General Pam Bondi appeared before the House Judiciary Committee. Among the more notable exchanges was when Rep. Pramila Jayapal asked some of Jeffrey Epstein's victims who were in the audience to stand up and indicate whether Bondi's DOJ had ever contacted them about their experiences. None of them had heard from the Justice Department. Bondi wouldn't even look at the victims as she frantically flipped through her prepared notes.
And that's when news organizations, including Reuters, caught something alarming: one of the pages Bondi held up clearly showed searches that Jayapal herself had done of the Epstein files:
> A Reuters photographer captured this image of a page from Pam Bondi's "burn book," which she used to counter any questions from Democratic lawmakers during an unhinged hearing today. It looks like the DOJ monitored members of Congress's searches of the unredacted Epstein files. Just wow.
>
> â [Christopher Wiggins (@cwnewser.bsky.social)][1] [2026-02-11T23:06:45.578Z][2]
The Department of Justice, led by an Attorney General who is supposed to serve the public but has made clear her only role is protecting Donald Trump's personal interests, is actively surveilling what members of Congress are searching in the Epstein files. And then bringing that surveillance data to a congressional hearing to use as political ammunition.
This should be front-page news. It should be a major scandal. Honestly, it should be impeachable.
There is no legitimate investigative purpose here. No subpoena. Nothing at all. Just the executive branch tracking the oversight activities of the legislative branch, then weaponizing that information for political culture war point-scoring. The DOJ has **no business whatsoever** surveilling what members of Congress, who have oversight authority over the Justice Department, are searching.
Jayapal is rightly furious:
> Pam Bondi brought a document to the Judiciary Committee today that had my search history of the Epstein files on it. The DOJ is spying on members of Congress. It's a disgrace and I won't stand for it.
>
> â [Congresswoman Pramila Jayapal (@jayapal.house.gov)][3] [2026-02-12T01:14:57.174494904Z][4]
We've been here before. Way back in 2014, the CIA [illegally spied on searches by Senate staffers][5] who were investigating the CIA's torture program. It was considered a scandal at the time, because it was one. The executive branch surveilling congressional oversight is a fundamental violation of separation of powers. It's the kind of thing that, when it happens, should trigger immediate consequences.
And yet.
Just a few days ago, Senator Lindsey Graham, who has been one of the foremost defenders of government surveillance for years, [blew up at a Verizon executive][6] for complying with a subpoena that revealed Graham's call records (not the contents, just the metadata) from around January 6th, 2021.
> *"If the shoe were on the other foot, it'd be front-page news all over the world that Republicans went after sitting Democratic senators' phone records," said Republican Sen. Lindsey Graham of South Carolina, who was among the Republicans in Congress whose records were accessed by prosecutors as they examined contacts between the president and allies on Capitol Hill.*
>
> *"I just want to let you know," he added, "I don't think I deserve what happened to me."*
This is the same Lindsey Graham who, over a decade ago, said he was ["glad"][7] that the NSA was collecting his phone records because it magically kept him safe from terrorists. But now [he's demanding hundreds of thousands of dollars][8] for being "spied" on (he wasn't; a company complied with a valid subpoena in a legitimate investigation, which is how the legal system is supposed to work).
So here's the contrast: Graham is demanding money and media attention because a company followed the law. Meanwhile, the Attorney General is *actually* surveilling a Democratic member of Congress's oversight activities, with no legal basis whatsoever, and using that surveillance for political theater in a manner clearly designed as a warning shot to congressional reps investigating the Epstein files. Pam Bondi wants you to know she's watching you.
Graham claimed that if the shoe were on the other foot, it would be "front-page news all over the world." Well, Senator, here's your chance. The shoe is very much on the other foot. It's worse than what happened to you, because what happened to you was legal and appropriate, and what's happening to Jayapal is neither.
But we all know Graham won't speak out against this administration. He's had nearly a decade to show whether or not the version of Lindsey Graham who said "if we elected Donald Trump, we will get destroyed… and we will deserve it" still exists, and it's clear that Lindsey Graham is long gone. This one only serves Donald Trump and himself, not the American people.
But this actually matters: if the DOJ can surveil what members of Congress search in oversight files, and then use that surveillance as a weapon in public hearings, congressional oversight of the executive branch is dead. That's the whole point of separation of powers. The people who are supposed to watch the watchmen can't do their jobs if the watchmen are surveilling them.
And remember: Bondi didn't hide this. She brought it to the hearing. She held it up when she knew cameras would catch what was going on. She wanted Jayapal, and every other member of Congress, to see exactly what she's doing.
This administration doesn't fear consequences for this kind of vast abuse of power because there haven't been any. And the longer that remains true, the worse it's going to get.
[1]: https://bsky.app/profile/did:plc:tcrwkviqdnisxihb7g6mnk3e?ref_src=embed
[2]: https://bsky.app/profile/did:plc:tcrwkviqdnisxihb7g6mnk3e/post/3memlqky4ac2h?ref_src=embed
[3]: https://bsky.app/profile/did:plc:f5aicufsf2vpuwte6wizoy2v?ref_src=embed
[4]: https://bsky.app/profile/did:plc:f5aicufsf2vpuwte6wizoy2v/post/3memsvscm3e2v?ref_src=embed
[5]: https://www.techdirt.com/2014/08/01/cia-spying-senate-went-much-further-than-originally-reported/
[6]: https://apnews.com/article/jack-smith-investigation-phone-records-6e81f7f967f47673be88695f431eea6f
[7]: https://www.techdirt.com/2013/06/10/sen-lindsey-graham-verizon-customer-im-glad-nsa-is-harvesting-my-data-because-terrorists/
[8]: https://www.techdirt.com/2025/11/13/gop-threatened-to-keep-the-government-shut-down-if-8-gop-senators-couldnt-profit-from-being-investigated/
https://www.techdirt.com/2026/02/12/bondi-spying-on-congressional-epstein-searches-should-be-a-major-scandal/
ICE, CBP Knew Facial Recognition App Couldn't Do What DHS Says It Could, Deployed It Anyway
The DHS and its components want to find non-white people to deport by any means necessary. Of course, "necessary" is something that's on a continually sliding scale with Trump back in office, which means everything (legal or not) is "necessary" if it can help White House advisor Stephen Miller hit his self-imposed [3,000 arrests per day][1] goal.
As was reported last week, DHS components (ICE, CBP) are using a web app that supposedly can identify people and link them with citizenship documents. As has always been the case with DHS components (dating back to the Obama era), the rule of thumb is "deploy first, compile legally-required paperwork later." The pattern has never changed. ICE, CBP, etc. acquire new tech, hand it out to agents, and much later, if *ever*, the agencies compile and publish their legally-required Privacy Impact Assessments (PIAs).
PIAs are supposed to *precede* deployments of new tech that might have an impact on privacy rights and other civil liberties. In almost every case, the tech has been deployed far ahead of the paperwork that is supposed to come first.
As one would expect, the Trump administration was never going to be the one to ensure the paperwork arrived ahead of the deployment. [As we covered recently][2], both ICE and CBP are using tech provided by NEC called "Mobile Fortify" to identify migrants who are possibly subject to removal, even though neither agency has bothered to publish a Privacy Impact Assessment.
[As Wired reported][3], the app is being used widely by officers working with both agencies, despite both agencies making it clear they don't have the proper paperwork in place to justify these deployments.
> *While CBP says there are "sufficient monitoring protocols" in place for the app, ICE says that the development of monitoring protocols is in progress, and that it will identify potential impacts during an AI impact assessment. According to [guidance][4] from the Office of Management and Budget, which was issued before the inventory says the app was deployed for either CBP or ICE, agencies are supposed to complete an AI impact assessment before deploying any high-impact use case. Both CBP and ICE say the app is "high-impact" and "deployed."*
While this is obviously concerning, it would be far less concerning if we weren't dealing with an administration that has told immigration officers that they don't need warrants to [enter houses][5] or [effect arrests][6]. And it would be insanely less concerning if we weren't dealing with an administration that has claimed that simply observing or reporting on immigration enforcement efforts is an act of terrorism.
Officers working for the combined forces of bigotry d/b/a "immigration enforcement" know they're safe. The Supreme Court has ensured they're safe by [making it impossible][7] to sue federal officers. And the people running immigration-related agencies have made it clear they don't even care if the ends justify the means.
[These facts make what's reported here even worse][8], especially when officers are using the app to "identify" pretty much anyone they can point a smartphone at.
> *Despite DHS repeatedly framing Mobile Fortify as a tool for identifying people through facial recognition, however, the app does not actually "verify" the identities of people stopped by federal immigration agents, a well-known limitation of the technology and a function of how Mobile Fortify is designed and used.*
>
> *[…]*
>
> *Records reviewed by WIRED also show that DHS's hasty approval of Fortify last May was enabled by dismantling centralized privacy reviews and quietly removing department-wide limits on facial recognition, changes overseen by a former Heritage Foundation lawyer and Project 2025 contributor, who now serves in a senior DHS privacy role.*
Even if you're the sort of prick who thinks whatever happens to non-citizens is deserved due to their alleged violation of civil statutes, one would hope you'd actually care what happens to your fellow citizens. I mean, one would hope, but even the federal government doesn't care what happens to US citizens if they happen to be unsupportive of Trump's migrant-targeting crime wave.
> *DHS, which has declined to detail the methods and tools that agents are using despite repeated calls from [oversight officials][9] and [nonprofit privacy watchdogs][10], has used Mobile Fortify to scan the faces not only of "targeted individuals," but also people later [confirmed to be US citizens][11] and others who were observing or protesting enforcement activity.*
TLDR and all that: DHS knows this tool performs worst in the situations where it's used most. DHS and its components also knew they were supposed to produce PIAs before deploying privacy-impacting tech. And DHS knows its agencies are not only misusing the tech to convert AI shrugs into probable cause, but are using it to identify people protesting or observing their efforts, which means this tech is also a potential tool of unlawful retribution.
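For readers wondering why "the app scanned a face and returned a name" is not the same thing as verification, here is a minimal hypothetical sketch (not Mobile Fortify's actual code or NEC's API) of how 1:N identification typically works: the system returns whichever enrolled face is *closest* to the probe image, and an operator-chosen threshold decides whether that best guess gets treated as a hit.

```python
from __future__ import annotations

import numpy as np

# Hypothetical 1:N face identification over precomputed embeddings.
# The threshold, names, and embeddings are invented for illustration only.
THRESHOLD = 0.6  # similarity cutoff picked by the operator, not by physics

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two face embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray]):
    """Return (name, score) for the closest gallery entry above THRESHOLD, else None.

    Even when this returns a name, it is only "the most similar face we happen
    to have enrolled," not a verified identity: the real person may not be in
    the gallery at all, and a look-alike can still score above the cutoff.
    """
    best_name, best_score = max(
        ((name, cosine_similarity(probe, emb)) for name, emb in gallery.items()),
        key=lambda pair: pair[1],
    )
    return (best_name, best_score) if best_score >= THRESHOLD else None
```

Lowering that threshold to get more "hits" in the field directly raises the false-match rate, which is the gap between what the tool actually does and what DHS says it does.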
There's nothing left to be discussed. This tech will continue to be used because it can turn bad photos into migrant arrests. And its off-label use is just as effective: it allows ICE and CBP agents to identify protesters and observers, even as DHS officials continue to claim doxing should be a federal offense if they're not the ones doing it. Everything about this is bullshit. But bullshit is all this administration has.
[1]: https://www.techdirt.com/2025/08/08/courts-start-asking-about-the-ice-arrest-quota-the-administration-is-now-pretending-isnt-a-quota/
[2]: https://www.techdirt.com/2026/02/06/facial-recognition-tech-used-to-hunt-migrants-was-deployed-without-required-privacy-paperwork/
[3]: https://www.wired.com/story/mobile-fortify-face-recognition-nec-ice-cbp/
[4]: https://archive.ph/o/j89xB/https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf
[5]: https://www.techdirt.com/2026/01/22/since-last-may-ice-officers-have-been-told-they-dont-need-warrants-to-enter-homes/
[6]: https://www.techdirt.com/2026/02/03/ice-director-says-officers-are-now-allowed-to-make-arrests-without-warrants/
[7]: https://www.techdirt.com/2022/06/14/supreme-court-makes-it-all-but-impossible-to-sue-federal-officers-for-rights-violations/
[8]: https://www.wired.com/story/cbp-ice-dhs-mobile-fortify-face-recognition-verify-identity/
[9]: https://documents.pclob.gov/prod/Documents/OversightReport/90964138-44eb-483d-990e-057ce4c31db7/Use%20of%20FRT%20by%20TSA%2C%20PCLOB%20Report%20%285-12-25%29%2C%20Completed%20508%2C%20May%2019%2C%202025.pdf
[10]: https://epic.org/wp-content/uploads/2025/11/Coalition-Letter-on-ICE-Mobile-Fortify-FRT-Nov2025.pdf
[11]: https://www.nytimes.com/2026/01/30/technology/tech-ice-facial-recognition-palantir.html
https://www.techdirt.com/2026/02/12/ice-cbp-knew-facial-recognition-app-couldnt-do-what-dhs-says-it-could-deployed-it-anyway/
Daily Deal: The 2026 Complete Firewall Admin Bundle
Transform your future in cybersecurity with 7 courses on next-level packet control, secure architecture, and cloud-ready defenses inside the [2026 Complete Firewall Admin Bundle][1]. Courses cover IT fundamentals, topics to help you prepare for the CompTIA Server+ and CCNA exams, and more. It's on sale for $25.
*Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.*
[1]: https://deals.techdirt.com/sales/the-2025-complete-firewall-admin-bundle?utm_campaign=affiliaterundown
https://www.techdirt.com/2026/02/12/daily-deal-the-2026-complete-firewall-admin-bundle/
Joseph Gordon-Levitt Goes To Washington DC, Gets Section 230 Completely Backwards
You may have heard last week that actor Joseph Gordon-Levitt went to Washington DC and gave a short speech at an event put on by Senator Dick Durbin calling for the sunsetting of Section 230. It's a short speech, and it gets almost everything wrong about Section 230. Watch it here:
Let me first say that, while I'm sure some will rush to jump in and say "oh, it's just some Hollywood actor guy, jumping into something he doesn't understand," I actually think that's a little unfair to JGL. Very early on he started his own (very interesting, very creative) user-generated content platform called HitRecord, and over the years I've followed many of his takes on copyright and internet policy. While I don't always agree, I do believe that he legitimately takes this stuff seriously and actually wants to understand the nuances (unlike some).
But it appears he's fallen for advice that is not just bad, but blatantly incorrect. He's also posted a followup video where he claims to explain his position in more detail, but it only makes things worse, because it compounds the blatant factual errors that underpin his entire argument.
First, let's look at the major problems with his speech in DC:
> *So I understand what Section 230 did to bring about the birth of the internet. That was 30 years ago. And I also understand how the internet has changed since then because back then message boards and other websites with user-generated content, they really were more like telephone carriers. They were neutral platforms. That's not how things work anymore.*
So, that's literally incorrect. If JGL is really interested in the actual history here, I did a [whole podcast series][1] where I spoke to the people behind Section 230, including those involved in the early internet and the various lawsuits at the time.
Section 230 was **never** meant for "neutral" websites. As the authors (and the text of the law itself!) make clear: it was created **so that websites did not need to be neutral**. It was literally written in response to the Stratton Oakmont v. Prodigy case (for JGL's benefit: Stratton Oakmont is the company portrayed in Wolf of Wall Street), where the boiler room operation sued Prodigy because someone posted claims in its forums about how sketchy Stratton Oakmont was (which, you know, was true).
But Stratton sued, and the judge said that **because Prodigy moderated**, **because they wanted to have a family friendly site**, that is, **because they were not neutral**, they were liable for anything they decided to leave up. In the judge's ruling he effectively said "because you're not neutral, and because you moderate, you are effectively endorsing this content, and thus if it's defamatory you're liable for defamation."
Section 230 (originally the "Internet Freedom and Family Empowerment Act") was never about protecting platforms for being neutral. It was literally the opposite of that. It was about making sure that platforms **felt comfortable making editorial decisions**. It was about letting companies decide what to share, what not to share, what to amplify, and what not to amplify, without being held liable *as a publisher* of that content.
This is important, but it's a point that a bunch of bad faith people, starting with Ted Cruz, have been lying about for about a decade, pretending that the intent of 230 was to protect sites that are "neutral." It's literally the opposite of that. And it's disappointing that JGL would repeat this myth as if it's fact. Courts have said this explicitly (I'll get to the Ninth Circuit's Barnes decision later, where the court said Section 230's entire purpose is to protect companies *because* they act as publishers), but first, let's go through the rest of what JGL got wrong.
He then goes on to talk about legitimate problems with internet giants having too much power, but falsely attributes that to Section 230.
> *Today, the internet is dominated by a small handful of these gigantic businesses that are not at all neutral, but instead algorithmically amplify whatever gets the most attention and maximizes ad revenue. And we know what happens when we let these engagement optimization algorithms be the lens that we see the world through. We get a mental health crisis, especially amongst young people. We get a rise in extremism and a rise in conspiracy theories. And then of course we get these echo chambers. These algorithms, they amplify the demonization of the other side so badly that we can't even have a civil conversation. It seems like we can't agree on anything.*
So, first of all, I know that the common wisdom is that all of this is true, but as we've detailed, actual experts have been unable to find any support for a causal connection. Studies on "echo chambers" have found that [the internet decreases echo chambers][2], rather than increases them. The studies on mental health [show the opposite][3] of what JGL (and Jonathan Haidt) claim. Even the claims about algorithms focused solely on engagement don't seem to have held up (or, generally, it was true early on, but the companies found that maximizing solely on engagement burned people out quickly and [was actually bad for business][4], and so most social media companies [adjusted their algorithms][5] away from just that).
So, again, almost every assertion there is false (or, at the very least, much more nuanced than he makes it out to be).
But the biggest myth of all is the idea that getting rid of 230 will somehow tame the internet giants. Once again, the exact opposite is true. As we've discussed hundreds of times, the big internet companies don't need Section 230.
The real benefit of 230 is that it gets [vexatious lawsuits tossed out early][6]. That matters *a lot* for smaller companies. To put it in real terms: with 230, companies can get vexatious lawsuits dismissed for around $100,000 to $200,000 (I used to say $50k, but my lawyer friends tell me it's getting more expensive). That is a lot of money. But it's generally survivable. To get the same cases dismissed on First Amendment grounds (as almost all of them would be), you're talking $5 million and up.
That's pocket change for Meta and Google, who have buildings full of lawyers. It's existential for smaller competitive sites.
So the end result of getting rid of 230 is not getting rid of the internet giants. It's locking them in and giving them more power. It's why Meta has [literally run ads telling Congress it's time to ditch 230][7].
What is Mark Zuckerberg's biggest problem right now? Competition from smaller upstarts chipping away at his userbase. Getting rid of 230 makes it harder for smaller providers to survive, and limits the drain from Meta.
On top of that, getting rid of 230 gives them *less reason to moderate*. Because, under the First Amendment, the only way they can possibly be held liable is if they had actual knowledge of content that violates the law. And the best way to avoid having knowledge is *not to look*.
It means not doing any research on harms caused by your site, because that will be used as evidence of "knowledge." It means limiting how much moderation you do so that (a la Prodigy three decades ago) you're not seen to be "endorsing" any content you leave up.
Getting rid of Section 230 literally makes Every Single Problem JGL discussed in his speech worse! He got every single thing backwards.
And he closes out with quite the rhetorical flourish:
> *I have a message for all the other senators out there: [Yells]: I WANT TO SEE THIS THING PASS 100 TO 0. There should be* ***nobody*** *voting to give any more impunity to these tech companies. Nobody. It's time for a change. Let's make it happen. Thank you.*
Except it's not voting to give anyone "more impunity." It's a vote to say "stop moderating, and unleash a flood of vexatious lawsuits that will destroy smaller competitors."
## The Follow-Up Makes It Worse
Yesterday, JGL posted a longer video, noting that he'd heard a bunch of criticism about his speech and he wanted to respond to it. Frankly, it's a bizarre video, but go ahead and watch it too:
It starts out with him saying he actually agrees with a lot of his critics, because he wants an "internet that has vibrant, free, and productive public discourse." Except… that's literally what Section 230 enables. Because without it, you don't have intermediaries willing to host public discourse. You ONLY have giant companies with buildings full of lawyers who will set the rules of public discourse.
Again, his entire argument is backwards.
Then… he does this weird half-backdown, where he says he doesn't really want the end of Section 230, but he just wants "reform."
> *Here's the first thing I'll say. I'm in favor of reforming section 230. I'm not in favor of eliminating all of the protections that it affords. I'm going to repeat that because it's it's really the crux of this. I'm in favor of reforming, upgrading, modernizing section 230 because it was passed 30 years ago. I am not in favor of eliminating all of the protections that it affords.*
Buddy, you literally went to Washington DC, got up in front of Senators, and told everyone you wanted the bill that literally takes away every one of those protections to pass 100 to 0. Don't then say "oh I just want to reform it." Bullshit. You said get rid of the damn thing.
But… let's go through this, because it's a frequent thing we hear from people. "Oh, let's reform it, not get rid of it." As our very own First Amendment lawyer Cathy Gellis has explained over and over again, every proposed reform to date [is really repeal][8].
The reason for this is the procedural benefit we discussed above. Every single kind of "reform" requires long, expensive lawsuits to determine if the company is liable. In the end, those companies will still win, because of the First Amendment. Just look at how one of the most famous 230 "losses" ended up. Roommates.com lost its Section 230 protections, which resulted in many, many years in court… and then [they eventually won anyway][9]. All 230 does is make it so you don't have to pay lawyers nearly as much to reach the same result.
So, every single reform proposal basically resets the clock in a way that throws old court precedents out the window, and all you're doing is making vexatious lawsuits cost a lot more for companies. This will mean some won't even start. Others will go out of business.
Or, worse, many companies will just enable a heckler's veto. Donald Trump doesn't like what people are saying on a platform? Threaten to sue. The cost without 230 (even a reformed 230 where a court can't rely on precedent) means it's cheaper to just remove the content that upsets Donald Trump. Or your landlord. Or some internet troll.
You are basically giving everyone a veto by the mere threat of a lawsuit. I'm sorry, but that is not the recipe for a "vibrant, free, and productive public discourse."
Calling for reform of 230 is, in every case we've seen to date, really a call for repeal, whether the reformers recognize that or not. Is there a possibility that you could reform it in a way that isn't that? Maybe? But I've yet to see any such proposal, and the only ones I can think of would be going in the other direction (e.g., expanding 230's protections to include intellectual property, or rolling back FOSTA).
JGL then talks about small businesses and agrees that sites like HitRecord require 230. Which sure makes it odd that he's supporting repeal. However, he seems to have bought into the logic of the argument, memeified by internet law professor Eric Goldman (who has catalogued basically every single Section 230 lawsuit as well as every single "reform" proposal ever made and found them all wanting), that "if you don't amend 230 in unspecified ways, we'll kill this internet."
That is… generally not a good way to make policy. But it's how JGL thinks it should be done:
> *Well, there have been lots of efforts to reform section 230 in the past and they keep getting killed uh by the big tech lobbyists. So, this section 230 sunset act is as far as I understand it a strategy towards reform. It'll force the tech companies to the negotiating table. That's why I supported it.*
Again, this is wrong. Big tech is always at the freaking negotiating table. You don't think they're there? Come on. As I noted, Zuck has been willing to ditch 230 for almost a decade now. It makes him seem "cooperative" to Congress while at the same time destroying the ability of competitors to survive.
The reason 230 reform bills fail is because enough grassroots folks actually show up and scream at Congress. It ain't the freaking "big tech lobbyists." It's people like the ACLU and the EFF and Fight for the Future and Demand Progress speaking up and sending calls and emails to Congress.
Also, about these "efforts at reform" getting "killed by big tech lobbyists": this is FOSTA erasure, JGL. In 2018 ([with the explicit support of Meta][10]) Congress passed FOSTA, which was a Section 230 reform bill. Remember?
And how did that work out? Did it make Meta and Google better? No.
But did it [destroy online spaces used by sex workers][11]? Did it lead to [real world harm for sex workers][12]? Did it make it [harder for law enforcement][13] to capture actual human traffickers? Did it [destroy online communities][14]? Did it [hide historical LGBTQ content][15] because of legal threats?
Yes to literally all of those things.
So, yeah, I'm freaking worried about "reform" to 230, because we saw it already. Many of us warned about the harms while "big tech" supported the law. And we were right. The harms did occur. It took away competitive online communities and suppressed sex positive and LGBTQ content.
Is that what you want to support, JGL? No? Then maybe speak to some of the people who actually work on this stuff, who understand the nuances, not the slogans.
Speaking of which, JGL then doubles down on his exactly backwards Ted Cruz-inspired version of Section 230:
> *Section 230 as it's currently written or as it was written 30 years ago distinguishes between what it calls publishers and carriers. So a publisher would be, you, a person, saying something or a company saying something like the New York Times say or you know the Walt Disney Company publishers. Then carriers would be somebody like AT&T or Verizon, you know, the the the companies that make your phone or or your telephone service. So basically what Section 230 said is that these platforms for user-generated content are not publishers. They are carriers. They are as neutral as the telephone company. And if someone uses the telephone to commit a crime, the telephone company shouldn't be held liable. And that's true about a telephone company. But again, there's a third category that we need to add to really reflect how the internet works today. And that third category is amplification.*
Again, I need to stress that this is literally wrong. Like, fundamentally, literally he has it backwards and inside out. This is a pretty big factual error.
First, Section 230 does not, in any way, distinguish between "what it calls publishers and carriers." This is the ["publisher/platform" myth][16] all over again.
I mean, [you can look at the law][17]. It makes no such distinction at all. The only distinction it makes is between "interactive computer services" and "information content providers." Now some (perhaps JGL) will claim that's the same thing as "publishers" and "carriers." But it's literally not.
"Carrier" (as in, common carrier law) implies the neutrality that JGL mentioned earlier. And perhaps that's why he's confused. But the purpose of 230 was to enable "interactive computer services" to **act as publishers, without being held liable as publishers**. It was NOT saying "don't be a publisher." It was saying "we want you to be a publisher, not a neutral carrier, but we know that if you face liability as a publisher, you won't agree to publish. So, for third party content, we won't hold you liable **for your publishing actions**."
Again, go back to the Stratton Oakmont case. Prodigy "acted as a publisher" in trying to filter out non-family friendly content. And the judge said "okay, now you're liable." The entire point of 230 was to say "don't be neutral, act as a publisher, but since it's all 3rd party content, we won't hold you liable as the publisher."
In the Barnes case in the Ninth Circuit, the court was quite clear about this. The entire purpose of Section 230 is to *encourage interactive computer services to* ***act like a publisher*** *by removing liability for being a publisher.* Here's a key part in which the court explains why Yahoo deserves 230 protections for 3rd party content **because it acted as the publisher**:
> *In other words, the duty that Barnes claims Yahoo violated derives from **Yahoo's conduct as a publisher**: the steps it allegedly took, but later supposedly abandoned, to de-publish the offensive profiles. **It is because such conduct is publishing conduct that we have insisted that section 230 protects from liability**…*
So let me repeat this again: the point of Section 230 is not to say "you're a carrier, not a publisher." It's literally to say "you can safely act as a publisher, because you won't face liability for content you had no part in creating."
JGL has it backwards.
He then goes on to make a weird and meaningless distinction between "free speech" and "commercial amplification," as if it's legally meaningful.
> *At the crux of their article is a really important distinction and that distinction is between free speech and commercial amplification. Free speech meaning what a human being says. commercial amplification, meaning when a platform like Instagram or YouTube or Tik Tok or whatever uses an algorithm to uh maximize engagement and ad revenue to hook you, keep you and serve you ads. And this is a really important difference that section 230 does not appreciate.*
The article he's talking about is this very, very, very, very, [very badly confused piece in ACM][18]. It's written by Jaron Lanier, Allison Stanger, and Audrey Tang. If those names sound familiar, it's because they've been publishing similar pieces that are just fundamentally wrong for years. Here's one piece I wrote [picking apart one][19], and here's another [picking apart another][20].
None of those three individuals understands Section 230 at all. Stanger gave testimony to Congress that was so wrong on basic facts [it should have been retracted][21]. I truly do not understand why Audrey Tang sullies her own reputation by continuing to sign on to pieces with Lanier and Stanger. I have tremendous respect for Audrey, who I've learned a ton from over the years. But she is not a legal expert. She was Digital Minister in Taiwan (where she did some amazing work!) and has worked at tech companies.
But she doesn't know 230.
I'm not going to do another full breakdown of everything wrong with the ACM piece, but just look at the second paragraph:
> *Much of the public's criticism of Section 230 centers on the fact that it shields platforms from liability even when they host content such as online harassment of marginalized groups or child sexual abuse material (CSAM).*
What? CSAM is inherently unprotected speech. Section 230 does not protect CSAM. Section 230 literally has section (e)(1), which says "no effect on criminal law." CSAM, as you might know, is a violation of criminal law. Websites all have strong incentives to deal with CSAM to avoid criminal liability, and they tend to take that pretty seriously. The additional civil liability that might come from a change in the law isn't going to have much, if any, impact on that.
And "online harassment of marginalized groups" is mostly protected by the First Amendment anyway, so if 230 didn't cover it, companies would still win on First Amendment grounds. But here's the thing: most of us think that harassment is bad and want platforms to stop it. **You know what lets them do that? Section 230.** Take it away and companies have *less* incentive to moderate. Indeed, in Lanier and Stanger's original piece in Wired, they argued platforms should be *required* to use the First Amendment as the basis for moderation, which would **forbid** removing most harassment of marginalized groups.
These are not serious critiques.
I could almost forgive Lanier/Stanger/Tang if this were the first time they were writing about this subject, but they have now written this same factually incorrect thing multiple times, and each time I've written a response pointing out the flaws.
I can understand that a well-meaning person like JGL can be taken in by it. He mentions having talked to Audrey Tang about it. But, again, as much as I respect Tang's work in Taiwan, she is not a US legal expert, and she has this stuff entirely backwards.
I do believe that JGL legitimately wants a free and open internet. I believe that he legitimately would like to see more upstart competitors and less power and control from the biggest providers. In that we agree.
But he has been convinced by some people who are either lying to him or simply do not understand the details, and thus he has become a useful tool for enabling greater power for the internet giants, and greater online censorship. The exact opposite of what he claims to support.
I hope he realizes that he's been misled, and I'd be happy to talk this through with him, or put him in touch with actual experts on Section 230. Because right now, he's lending his star power to one of the most dangerous ideas around for the open internet.
[1]: https://podcasts.apple.com/us/podcast/otherwise-objectionable/id1798723661
[2]: https://www.techdirt.com/2021/10/18/new-research-shows-social-media-doesnt-turn-people-into-assholes-they-already-were-everyones-wrong-about-echo-chambers/
[3]: https://www.techdirt.com/2026/01/21/two-major-studies-125000-kids-the-social-media-panic-doesnt-hold-up/
[4]: https://www.techdirt.com/2021/10/28/let-me-rewrite-that-you-washington-post-misinforms-you-about-how-facebook-weighted-emoji-reactions/
[5]: https://www.techdirt.com/2023/09/07/yet-another-study-debunks-the-youtubes-algorithm-drives-people-to-extremism-argument/
[6]: https://www.techdirt.com/2019/04/18/new-paper-why-section-230-is-better-than-first-amendment/
[7]: https://www.techdirt.com/2020/02/18/mark-zuckerberg-suggests-getting-rid-section-230-maybe-people-should-stop-pretending-gift-to-facebook/
[8]: https://www.techdirt.com/2021/10/12/why-section-230-reform-effectively-means-section-230-repeal/
[9]: https://www.techdirt.com/2021/02/09/if-were-going-to-talk-about-discrimination-online-ads-we-need-to-talk-about-roommatescom/
[10]: https://www.techdirt.com/2017/11/08/will-sheryl-sandberg-facebook-help-small-websites-threatened-sesta/
[11]: https://switter.at/
[12]: https://www.techdirt.com/2019/05/07/human-cost-fosta/
[13]: https://www.techdirt.com/2018/07/09/more-police-admitting-that-fosta-sesta-has-made-it-much-more-difficult-to-catch-pimps-traffickers/
[14]: https://www.techdirt.com/2018/12/05/tumblrs-new-no-sex-rules-show-problems-fosta-eu-copyright-directive-one-easy-move/
[15]: https://www.techdirt.com/2021/09/01/ebays-fosta-inspired-ban-adult-content-is-erasing-lgbtq-history/
[16]: https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/
[17]: https://www.law.cornell.edu/uscode/text/47/230
[18]: https://dl.acm.org/doi/full/10.1145/3744913
[19]: https://www.techdirt.com/2024/02/15/has-wired-given-up-on-fact-checking-publishes-facts-optional-screed-against-section-230-that-gets-almost-everything-wrong/
[20]: https://www.techdirt.com/2024/11/05/no-section-230-doesnt-circumvent-the-first-amendment-but-this-harvard-article-circumvents-reality/
[21]: https://www.techdirt.com/2024/04/19/congressional-testimony-on-section-230-was-so-wrong-that-it-should-be-struck-from-the-record/
https://www.techdirt.com/2026/02/12/joseph-gordon-levitt-goes-to-washington-dc-gets-section-230-completely-backwards/
Donald Trump Is VERY EXCITED About All Of Our Shitty Right Wing Broadcasters Merging Into One Bigger, Even Shittier Company
Trump 1.0 [took a hatchet][1] to media ownership limits. Those limits, built on the back of decades of bipartisan collaboration, prohibited local broadcasters and media from growing too large and trampling smaller (and more diversely-owned) competitors underfoot. The result of their destruction has been a rise in [local news deserts][2], a surge in [right wing propaganda outlets pretending to be "local news,"][3] less diverse media ownership, and (if you hadn't noticed) a [painfully disinformed electorate][4].
Trump 2.0 has been **significantly worse**.
Trump's FCC has finished demolishing whatever was left of already saggy media ownership limits, and is eyeing eliminating the rules that would prevent the big four (Fox, ABC, CBS, NBC) from merging (a major reason why these networks have been such [feckless authoritarian appeasers][5]).
They're also working hard to let all of our local right wing broadcast companies merge into one, even larger, shittier company, something [Donald Trump is very excited about][6]!
More specifically, Nexstar (a very Republican-friendly company that also owns The Hill) is asking the FCC for permission to acquire Tegna in a $6.2 billion deal that is illegal under current rules (you might recall that Nexstar-owned *The Hill* recently [fired a journalist whose reporting angered Trump][7]).
The deal would give Nexstar ownership of 265 stations in 44 states and the District of Columbia, covering 132 of the country's 210 television Designated Market Areas (or DMAs). Nexstar appears [to have beaten out rival bids by Sinclair][8], which has also long been criticized as [Republican propaganda posing as local news][9]. It wouldn't be surprising if Nexstar and Sinclair are the next to merge.
Keep in mind, this is an industry that was already terrible agitprop, as this now seven-year-old Deadspin video helped everyone realize:
You might be inclined to say: âbut Karl, local TV broadcasters are irrelevant. Who cares if they consolidate a dying industry.â But the consolidation wonât stop here. The goal isnât just the consolidation of local broadcasters, itâs the consolidation of national and local media giants, telecoms, tech companies, and social media companies. All under the thumb of terrible unethical people.
Trumpâs rise to power couldnât have been made possible without the Republican domination of media. For the better part of a generation Republicans have dominated AM radio, local broadcast TV, and cable news, and have since done a remarkable job hoovering up whatâs left of both major media companies (CBS, FOX) and modern social media empires (TikTok, Twitter). The impact is everywhere you look.
Over on Elon Muskâs right wing propaganda platform, Brendan Carr was quick to praise Presidentâs Trump bold support for more media consolidation. And, as he has done previously, he openly lied and trying to pretend that local broadcast consolidation is something that *aids competition*:
I've covered Brendan Carr professionally since he joined the FCC in 2012. This is a man who has coddled media and telecom giants (and their anti-competitive behavior) at literally every opportunity. One of his only functions in government has been to rubber stamp shitty mergers. Here, he's pretending to "protect competition" with a cute little antisemitic dog whistle about the folks in "Hollywood and New York."
Amusingly, Carr and Trump's push to allow all manner of problematic consolidation among these terrible local broadcasters has been so abrupt, it's actually causing [some infighting between them and other right wing propaganda companies like Newsmax][10].
There's a reason the Trump administration is destroying media consolidation limits, [murdering public media][11], harassing media companies, threatening late night comedians (or having them fired), and ushering forth all this mindless and dangerous consolidation. There's a reason Larry Ellison and Elon Musk are buying all the key social media platforms and fiddling with the algorithms.
They very openly (and so far semi-successfully) are trying to build a state media apparatus akin to what they have in Orban's Hungary and Putin's Russia. Our corporate press is **already** so broken and captured it's incapable of communicating that to anybody. It simply wouldn't be in existing media conglomerates' best financial interests to be honest about this sort of thing.
On the plus side, nobody involved in any of this, from CBS News boss Bari Weiss to Sinclair Broadcasting, appears to have any competent idea of what they're doing. They're not good at journalism (because they're trying to destroy it), and they're generally [not good at ratings-grabbing propaganda][12] either. As a result, it's entirely possible they destroy U.S. media before their dream of state media comes to fruition.
Still, it might be nice if Democrats could stop waiting for "the left's Joe Rogan" and finally start embracing some meaningful media reforms for the modern era, whether that's the restoration of media consolidation limits, the creation of media ownership diversity requirements, an evolution in school media literacy training, support for public media, or creative new funding models for real journalism.
Because the trajectory we are on in terms of right wing domination of media heads to ***some very fucking grim places***, and it's not like any of that has been subtle.
[1]: https://www.techdirt.com/2017/11/02/fcc-boss-demolishes-media-ownership-rules-massive-gift-to-sinclair-broadcasting/
[2]: https://localnewsinitiative.northwestern.edu/projects/state-of-local-news/
[3]: https://www.techdirt.com/2022/03/23/sinclair-seattle-reporter-makes-proud-boys-gathering-sound-like-cub-scouts/
[4]: https://www.vice.com/en/article/the-death-of-local-news-is-making-us-dumber-and-more-divided/
[5]: https://www.techdirt.com/2025/10/02/abc-disney-gets-rewarded-for-kissing-trumps-ass-fcc-moves-to-eliminate-any-remaining-media-consolidation-limits/
[6]: https://deadline.com/2026/02/trump-endorses-nexstar-tegna-merger-1236712070/
[7]: https://wbng.org/2025/04/22/the-hill-guild-statement-on-politically-motivated-firing-of-journalist/
[8]: https://www.wsj.com/business/deals/tv-station-owner-sinclair-proposes-merger-with-tegna-4bd3bb86
[9]: https://www.techdirt.com/2022/03/23/sinclair-seattle-reporter-makes-proud-boys-gathering-sound-like-cub-scouts/
[10]: https://www.techdirt.com/2026/01/06/right-wing-media-companies-begin-bickering-at-the-fcc-over-who-gets-to-dominate-the-exploding-right-wing-propaganda-market/
[11]: https://www.techdirt.com/2025/07/22/republicans-take-a-hatchet-to-whats-left-of-u-s-public-broadcasting-pbs-emergency-alerts/
[12]: https://www.techdirt.com/2026/01/14/bari-weiss-is-sad-that-people-arent-enjoying-her-clumsy-destruction-of-cbs-news/
https://www.techdirt.com/2026/02/12/donald-trump-is-very-excited-about-all-of-our-shitty-right-wing-broadcasters-merging-into-one-bigger-even-shittier-company/
Techdirt (RSS/Atom feed) · 1d
Dr. Oz: Vaccine Mandates Are Bad. I'll Just Beg People To Get Vaccinated Instead.
I want to say a little something upfront in this post, so that there is no misunderstanding. While I've spent a great deal of time outlining why I think [RFK Jr.][1] and his cadre of buffoons at HHS and its child agencies are horrible for America and her people's health, I do understand *some* of the perspective from people who push back on vaccinations *some* of the time. One of those areas is vaccine mandates. Bodily autonomy is and ought to be a very real thing. A government installing mandates for what can and can't be done with one's own body is something that needs to be treated with a ton of sensitivity, and I can understand why vaccine mandates *in general* might run afoul of the autonomy concept. Of course, it's also why the government shouldn't be in the business of telling women what to do with their bodies, or blanket outlawing things like euthanasia, but the point is I get it.
But there *are* times when we, as a society, do make some legal demands of the citizenry when it comes to their own physical beings for the betterment of the whole. Not all drugs are federally legal, because some drugs, if they were to proliferate, would cause enormous harm to the public surrounding those individuals. The government does regulate to some extent what appears in our food and medicine, never bothering to ask the public their opinion on the matter. And some diseases are so horrible that we've traditionally built some level of a mandate around vaccination, especially in exchange for participation in publicly funded schools and the like.
Dr. Oz, television personality turned Administrator of the Centers for Medicare and Medicaid Services, has vocally opposed vaccine mandates in general terms. When [Florida dropped the requirement][2] for vaccines for public school children, Oz cheered them on.
> *In an interview on "The Story with Martha MacCallum," the Fox News host asked Oz whether he agrees with officials who want to make Florida the first state in the nation to end childhood vaccine requirements and whether Oz would "recommend the same thing to your patients."*
>
> *"I would definitely not have mandates for vaccinations," the Centers for Medicare and Medicaid Services administrator told MacCallum. "This is a decision that a physician and a patient should be making together," he continued. "The parents love their kids more than anybody else could love that kid, so why not let the parents play an active role in this?"*
The MMR vaccine was one of those required for Florida schools. So, Oz is remarkably clear in the quote above. The government should not be mandating vaccines. Further, the government shouldn't really have direct input into whether people are getting vaccines or not. That decision should be made strictly by the patient (or their parents) and the doctor who has that patient directly in front of them.
Those comments from Oz were made in September of 2025. Fast forward to the present, with a measles outbreak that is completely off the rails in America, and the good doctor is [singing a much different tune][3].
> *So, Oz is now reduced to [begging][4] people to get vaccinated for something that, for decades, everyone routinely got vaccinated for.*
>
> *"Take the vaccine, please. We have a solution for our problem," he said. "Not all illnesses are equally dangerous and not all people are equally susceptible to those illnesses," he hedged. "But measles is one you should get your vaccine."*
To be clear, he's still not advocating for any sort of mandate. Which is unfortunate, at least when it comes to targeted mandates for public schools and that sort of thing. But in lieu of any actual public policy to combat measles in America, he's reduced to a combination of begging the public to get vaccinated *and* telling the general public that a measles shot is definitely one they should be getting.
And on that he's right. But he's also talking out of both sides of his mouth. Oz isn't these people's doctor. These school children aren't all sitting directly in front of him. So the same person who advocated for a personalized approach to vaccines is now begging the public, from Washington, D.C., to take the measles vaccine.
That inconsistency is among the many reasons it's difficult to know just how seriously to take Oz. And consistency is pretty damned key when it comes to government messaging on public health policy. That, in addition to trust, is everything here. And when Oz [jumps onto a CNN broadcast][5] to claim that this government, including RFK Jr., has been at the forefront of advocating for the measles vaccine, any trust that is there is torpedoed pretty quickly.
> *CNN anchor Dana Bash was left in disbelief as one of the president's top health goons claimed the MAGA administration was a top advocate for vaccines. Addressing the record outbreak of measles in the U.S., particularly in South Carolina, Bash asked Dr. Mehmet Oz on State of the Union Sunday: "Is this a consequence of the administration undermining support for advocacy for measles and other vaccines?" "I don't believe so," the Trump-appointed Centers for Medicare & Medicaid Services Administrator responded. He then said, "We've advocated for measles vaccines all along. Secretary Kennedy has been at the very front of this."*
Absolute nonsense. Yes, Kennedy has said to get the measles vaccine. He's also said maybe everyone should just [get measles][6] instead. One of his deputies has [hand-waved][7] the outbreak away as being no big deal. Kennedy has advocated for [alternative treatments][8], rather than vaccination.
The government is all over the place on this, in other words. As is Oz himself, in some respects. To sit here in the midst of the worst measles outbreak in decades, beg people to do the one thing that will make this all go away, and *then* claim that this government has been at the forefront of vaccine advocacy is simply silly.
[1]: https://www.techdirt.com/tag/rfk-jr/
[2]: https://thehill.com/policy/healthcare/5485044-dr-oz-florida-vaccine-mandate/
[3]: https://www.dailykos.com/stories/2026/2/9/2367926/-Dr-Oz-backtracks-on-anti-vax-bullshit-as-measles-cases-multiply
[4]: https://courthousenews.com/take-the-vaccine-please-a-top-us-health-official-says-in-an-appeal-as-measles-cases-rise/
[5]: https://uk.news.yahoo.com/come-cnn-anchor-shuts-down-162830142.html
[6]: https://www.techdirt.com/2025/03/17/there-it-is-rfk-jr-suggests-best-strategy-for-combatting-measles-is-for-everyone-to-get-it/
[7]: https://www.techdirt.com/2026/01/27/cdc-dep-director-on-measles-going-kazoo-its-just-the-cost-of-doing-business/
[8]: https://www.techdirt.com/2025/04/01/measles-vitamin-a-toxicity-how-rfk-jr-is-compounding-the-outbreak-problem/
https://www.techdirt.com/2026/02/11/dr-oz-vaccine-mandates-are-bad-ill-just-beg-people-to-get-vaccinated-instead/
Techdirt (RSS/Atom feed) · 1d
The Policy Risk Of Closing Off New Paths To Value Too Early
Artificial intelligence promises to change not just how Americans work, but how societies decide which kinds of work are worthwhile in the first place. When technological change outpaces social judgment, a major capacity of a sophisticated society comes under pressure: the ability to sustain forms of work whose value is not obvious in advance and cannot be justified by necessity alone.
As AI systems diffuse rapidly across the economy, questions about how societies legitimate such work, and how these activities can serve as a supplement to market-based job creation, have taken on a policy relevance that deserves serious attention.
**From Prayer to Platforms**
That capacity for legitimating work has historically depended in part on how societies deploy economic surplus: the share of resources that can be devoted to activities not strictly required for material survival. In late medieval England, for example, many in the orbit of the church [made at least part of their living performing spiritual labor][1] such as saying prayers for the dead and requesting intercessions for patrons. In a society where salvation was a widely shared concern, such activities were broadly accepted as legitimate ways to make a living.
William Langland was one such prayer-sayer. He is known to history only because, unlike nearly all others who did similar work, he left behind a long allegorical religious poem, [*Piers Plowman*][2], which he composed and repeatedly revised alongside the devotional labor that sustained him. It emerged from the same moral and institutional world in which paid prayer could legitimately absorb time, effort, and resources.
In 21st-century America, [Jenny Nicholson][3] earns a [sizeable income][4] sitting alone in front of a camera, producing long-form video essays on theme parks, films, and internet subcultures. None of this work is required for material survival, yet her audience supports it willingly and few doubt that it creates value of a kind. Where Langland's livelihood depended on shared theological and moral authority emanating from a Church that was the dominant institution of its day, Nicholson's depends on a different but equally real form of judgment expressed by individual market participants. And she is just one example of a broader class of creators (streamers, influencers, and professional gamers) whose work would have been unintelligible as a profession until recently.
What links Langland and Nicholson is not the substance of their work or any claim of moral equivalence, but the shared social judgment that certain activities are legitimate uses of economic surplus. Such judgments do more than reflect cultural taste. Historically, they have also shaped how societies adjust to technological change, by determining which forms of work can plausibly claim support when productivity rises faster than what is considered a "necessity" by society.
**How Change Gets Absorbed**
Technological change has long been understood to generate economic adjustment through familiar mechanisms: by creating new tasks within firms, expanding demand for improved goods and services, and recombining labor in complementary ways. Often, these mechanisms alone can explain how economies create new jobs when technology renders others obsolete. Their operation is well documented, and policies that reduce frictions in these processes (encouraging retraining or easing the entry of innovative firms) remain important in any period of change.
That said, there is no general law guaranteeing that new technologies will create more jobs than they destroy through these mechanisms alone. Alongside labor-market adjustment, societies have also adapted by legitimating new forms of value (activities like those undertaken by Langland and Nicholson) that came to be supported as worthwhile uses of the surplus generated by rising productivity.
This process has typically been examined not as a mechanism of economic adjustment, but through a critical or moralizing lens. From Thorstein Veblen's account of [conspicuous consumption][5], which treats surplus-supported activity primarily as a vehicle for status competition, to [Max Weber's analysis of how moral and religious worldviews legitimate economic behavior][6], scholars have often emphasized the symbolic and ideological dimensions of non-essential work. [Herbert Marcuse][7] pushed this line of thinking further, arguing that capitalist societies manufacture "false needs" to absorb surplus and assure the continuation of power imbalances. These perspectives offer real insight: uses of surplus are not morally neutral, and new forms of value *can* be entangled with power, hierarchy, and exclusion.
What they often exclude, however, is the way legitimation of new forms of value can also function to allow societies to absorb technological change without requiring increases in productivity to be translated immediately into conventional employment or consumption. New and expanded ways of using surplus are, in this sense, a critical economic safety valve during periods of rapid change.
**Skilled Labor Has Been Here Before**
Fears that artificial intelligence is uniquely threatening simply because it reaches into professional or cognitive domains rest on a mistaken historical premise. Episodes of large-scale technological displacement have rarely spared skilled or high-paid forms of labor; often, such work has been among the *first* affected. The mechanization of craft production in the nineteenth century displaced skilled cobblers, coopers, and blacksmiths, replacing independent artisans with factory systems that required fewer skills, paid lower wages, and offered less autonomy even as new skilled jobs arose elsewhere. These changes were disruptive but they were absorbed largely through falling prices, rising consumption, and new patterns of employment. They did not require societies to reconsider what kinds of activity were worthy uses of surplus: the same things were still produced, just at scale.
Other episodes are more revealing for present purposes. Sometimes, social change has unsettled not just particular occupations but entire regimes through which uses of surplus become legitimate. In medieval Europe, where the Church was one of the largest economic institutions just about everywhere, clerical and quasi-clerical roles like Langland's offered recognized paths to education, security, status, and even wealth. When those shared beliefs fractured, the Church's economic role contracted sharply, not because productivity gains ceased but because its claim on so large a share of surplus lost legitimacy.
To date, artificial intelligence has not produced large-scale job displacement, and the limited disruptions that have occurred have largely been absorbed through familiar adjustment mechanisms. But if AI systems begin to substitute for work whose value is justified less by necessity than by judgment or cultural recognition, the more relevant historical analogue may be less the mechanization of craft than the narrowing or collapse of earlier surplus regimes. The central question such technologies raise is not whether skilled labor can be displaced or whether large-scale displacement is possible (both have occurred repeatedly in the historical record), but how quickly societies can renegotiate which activities they are prepared to treat as legitimate uses of surplus when change arrives at unusual speed.
**Time Compression and Its Stakes**
In this respect, artificial intelligence *does* appear unusual. Generative AI tools such as ChatGPT have diffused through society at a pace far faster than most earlier general-purpose technologies. [ChatGPT was widely reported to have reached roughly 100 million users within two months][8] of its public release, and similar tools have shown comparably rapid uptake.
That compression matters. Much surplus has historically flowed through familiar institutions (universities, churches, museums, and other cultural bodies) that legitimate activities whose value lies in learning, spiritual rewards, or meaning rather than immediate output. Yet such institutions are not fixed. Periods of rapid technological change often place them under strain (something evident today for many), exposing disagreements about purpose and authority. Under these conditions, experimentation with new forms of surplus becomes more important, not less. Most proposed new forms of value fail, and attempts to predict which will succeed have a poor historical record, from the South Sea Bubble to more recent efforts to anoint digital assets like NFTs as durable sources of wealth. Experimentation is not a guarantee of success; it is a hedge. Not all claims on surplus are benign, and waste is not harmless. But when technological change moves faster than institutional consensus, the greater danger often lies not in tolerating too many experiments, but in foreclosing them too quickly.
Artificial intelligence does not require discarding all existing theories of change. What sets modern times apart is the speed with which new capabilities become widespread, shortening the interval in which those judgments are formed. In this context, surplus that once supported meaningful, if unconventional, work may instead be captured by grifters, legally barred from legitimacy (by, say, outlawing a new art form), or funneled into bubbles. The risk is not waste alone, but the erosion of the cultural and institutional buffers that make adaptation possible.
The challenge for policymakers is not to pre-ordain which new forms of value deserve support but to protect the space in which judgment can evolve. They need to realize that they simply cannot make the world entirely safe, legible, and predictable: whether they fear technology overall or simply seek to [shape it in the "right" way][9], they will not be able to predict the future. That means tolerating ambiguity and accepting that many experiments will fail, with negative consequences. In this context, broader social barriers that prevent innovation in any field (professional licensing, limits on free expression, overly zealous IP laws, regulatory barriers to entry for small firms) deserve a great deal of scrutiny. Even if the particular barriers in question have nothing to do with AI itself, they may retard the development of the surplus sinks necessary to economic adjustment. In a period of compressed adjustment, the capacity to let surplus breathe and value be contested may well determine whether economies bend or break.
*Eli Lehrer is the President of the R Street Institute.*
[1]: https://thebaa.org/publication/the-medieval-chantry-in-england/
[2]: https://www.poetryfoundation.org/poems/159123/piers-plowman-b-prologue
[3]: https://underthepavingstones.com/2025/05/31/a-painfully-sincere-tribute-to-the-genius-of-jenny-nicholson/
[4]: https://www.reddit.com/r/JennyNicholson/comments/1cvi7io/just_realized_how_many_patreon_subs_jenny_has/
[5]: https://la.utexas.edu/users/hcleaver/368/368VeblenConspicuoustable.pdf
[6]: https://gpde.direito.ufmg.br/wp-content/uploads/2019/03/MAX-WEBER.pdf
[7]: https://bgsp.edu/app/uploads/2014/12/Marcuse-One-Dimensional-Society.pdf
[8]: https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app
[9]: https://www.techpolicy.press/ai-safety-requires-pluralism-not-a-single-moral-operating-system/
https://www.techdirt.com/2026/02/11/the-policy-risk-of-closing-off-new-paths-to-value-too-early/