Techdirt (RSS/Atom feed)
Member since: 2025-01-25
Techdirt (RSS/Atom feed) 5h

Microsoft’s AI-Powered Copyright Bots Fucked Up And Got An Innocent Game Delisted From Steam

At some point, we, as a society, are going to realize that farming copyright enforcement out to bots and AI-driven robocops is not the way to go, but today is not that day. Long before AI became the buzzword it is today, large companies employed their own copyright crawler bots, or those of a third party, to police their copyrights on these here internets. And for just as long, those bots have absolutely sucked out loud at their jobs. We have seen [example][1] after [example][2] after [example][3] of those bots making mistakes, resulting in takedowns or threats of takedowns of all kinds of perfectly legit content. Upon discovery, the content is usually reinstated while those employing the copyright decepticons shrug their shoulders and say “Thems the breaks.” And then it happens again. It has to change, but isn’t.

We have yet another recent example of this in action, with Microsoft’s copyright enforcement partner using an AI-driven enforcement bot to [get a video game delisted from Steam][4] over a single screenshot on the game’s page that looks like, but isn’t, from *Minecraft*. The game in question, *Allumeria*, is clearly partially inspired by *Minecraft*, but doesn’t use any of its assets and is in fact its own full-fledged creative work.

> *On Tuesday, the developer behind the Minecraft-looking, dungeon-raiding sandbox [announced][5] that their game had been taken down from Valve’s storefront due to a DMCA copyright notice issued by Microsoft. The notice, shared by developer Unomelon in the game’s Discord server, accused Allumeria of using “Minecraft content, including but not limited to gameplay and assets.”*
>
> *The takedown was apparently issued over one specific screenshot from the game’s Steam page. It shows a vaguely Minecraft-esque world with birch trees, tall grass, a blue sky, and pumpkins: all things that are in Minecraft but also in real life and lots of other games. The game does look pretty similar to Minecraft, but it doesn’t appear to be reusing any of its actual assets or crossing some arbitrary line between homage and copycat that dozens of other Minecraft-inspired games haven’t crossed before.*

It turns out the takedown request didn’t come from Microsoft directly, but via Tracer.AI, which claims to have a bot driven by artificial intelligence for automatic flagging and removal of copyright-infringing content. It seems the system failed to understand in this case that the image in question, while similar to images that include *Minecraft* assets, didn’t actually infringe upon anything. Folks at Mojang caught wind of this on Bluesky and had to take action.

> *While it’s unclear if the claim was issued automatically or intentionally, Mojang Chief Creative Officer Jens Bergensten (known to most Minecraft players as Jeb) [responded to a comment about the takedown on Bluesky][6], stating that he was not aware and is now “investigating.” Roughly 12 hours later, Allumeria‘s Steam page has been reinstated.*
>
> *“Microsoft has withdrawn their DMCA claim!” Unomelon [posted earlier today][7]. “The game is back up on Steam! Allumeria is back! Thank you EVERYONE for your support. It’s hard to comprehend that a single post in my discord would lead to so many people expressing support.”*

And this is the point in the story where we all go back to our lives and pretend like none of this ever happened. But that sucks. For starters, there is no reason we should accept this kind of collateral damage, temporary or not. Add to that that there are surely stories out there in which a similar resolution was *not* reached. How many games, how much other non-infringing content, were taken down for longer due to an erroneous claim like this? How many never came back? And at the base level, the fact is that if companies are going to claim that copyright is of paramount importance to their business, its enforcement can’t be farmed out to automated systems that aren’t good at their job.

[1]: https://www.techdirt.com/2023/11/22/copyright-bot-cant-tell-the-difference-between-star-trek-ship-and-adult-film-actress/
[2]: https://www.techdirt.com/2012/09/04/copyright-enforcement-bots-seek-destroy-hugo-awards/
[3]: https://www.techdirt.com/2015/03/27/copyright-bots-kill-app-over-potentially-infringing-images-follow-this-up-blocking-app-use-ccpublic-domain-images/
[4]: https://kotaku.com/minecraft-allumeria-steam-dmca-mojang-microsoft-2000667696
[5]: https://bsky.app/profile/unomelon.bsky.social/post/3meiiaowb7c2c
[6]: https://bsky.app/profile/jebox.bsky.social/post/3mejmlz7nss2h
[7]: https://bsky.app/profile/unomelon.bsky.social/post/3mekpz2kras2t

https://www.techdirt.com/2026/02/12/microsofts-ai-powered-copyright-bots-fucked-up-and-got-an-innocent-game-delisted-from-steam/

Techdirt (RSS/Atom feed) 9h

Ctrl-Alt-Speech: Panic! At The Discord

**[Ctrl-Alt-Speech][1] is a weekly podcast about the latest news in online speech, from Mike Masnick and [Everything in Moderation][2]’s Ben Whitelaw.**

**Subscribe now on [Apple Podcasts][3], [Overcast][4], [Spotify][5], [Pocket Casts][6], [YouTube][7], or your podcast app of choice — or go straight to [the RSS feed][8].**

In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Ben is joined by Dr. Blake Hallinan, Professor of Platform Studies in the Department of Media & Journalism Studies at Aarhus University. Together, they discuss:

* [On Section 230’s 30th Birthday, A Look Back At Why It’s Such A Good Law And Why Messing With It Would Be Bad][9] (Techdirt)
* [An 18-Million-Subscriber YouTuber Just Explained Section 230 Better Than Every Politician In Washington][10] (Techdirt)
* [Discord Launches Teen-by-Default Settings Globally][11] (Discord)
* [Media Literacy Parents’ Study][12] ([GOV.UK][13])
* [EU says TikTok must disable ‘addictive’ features like infinite scroll, fix its recommendation engine][14] (TechCrunch)
* [We Didn’t Ask for This Internet with Tim Wu and Cory Doctorow][15] (The New York Times)
* [Despite Meta’s ban, Fidesz candidates successfully posted 162 political ads on Facebook in January][16] (Lakmusz.hu)
* [Claude’s Constitution Needs a Bill of Rights and Oversight][17] (Oversight Board)
* [Account Closed Without Notice: Debanking Adult Industry Workers in Canada][18] (ResearchGate)

Play along with Ctrl-Alt-Speech’s [2026 Bingo Card][19] and get in touch if you win!

[1]: https://ctrlaltspeech.com/
[2]: https://www.everythinginmoderation.co/
[3]: https://podcasts.apple.com/us/podcast/ctrl-alt-speech/id1734530193
[4]: https://overcast.fm/itunes1734530193
[5]: https://open.spotify.com/show/1N3tvLxUTCR7oTdUgUCQvc
[6]: https://pca.st/zulnarbw
[7]: https://www.youtube.com/playlist?list=PLcky6_VTbejGkZ7aHqqc3ZjufeEw2AS7Z
[8]: https://feeds.buzzsprout.com/2315966.rss
[9]: https://www.techdirt.com/2026/02/09/on-section-230s-30th-birthday-a-look-back-at-why-its-such-a-good-law-and-why-messing-with-it-would-be-bad/
[10]: https://www.techdirt.com/2026/02/11/an-18-million-subscriber-youtuber-just-explained-section-230-better-than-every-politician-in-washington/
[11]: https://discord.com/press-releases/discord-launches-teen-by-default-settings-globally
[12]: https://www.gov.uk/government/publications/media-literacy-parents-study
[13]: http://gov.uk
[14]: https://techcrunch.com/2026/02/06/eu-tiktok-disable-addictive-features-infinite-scroll-recommendation-engine/
[15]: https://www.nytimes.com/2026/02/06/opinion/ezra-klein-podcast-doctorow-wu.html
[16]: https://lakmusz.hu/2026/02/10/despite-metas-ban-fidesz-candidates-successfully-posted-162-political-ads-on-facebook-in-january
[17]: https://www.oversightboard.com/news/claudes-constitution-needs-a-bill-of-rights-and-oversight/
[18]: https://www.researchgate.net/publication/400596062_Account_Closed_Without_Notice_Debanking_Adult_Industry_Workers_in_Canada?channel=doi&linkId=698a2fec64ca8a38208af54d&showFulltext=true
[19]: https://www.ctrlaltspeech.com/bingo/

https://www.techdirt.com/2026/02/12/ctrl-alt-speech-panic-at-the-discord/

Techdirt (RSS/Atom feed) 11h

On Its 30th Birthday, Section 230 Remains The Linchpin For Users’ Speech

For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. [Section 230][1], which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it. Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either [repeal][2] or [sunset the law][3]. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.

But rolling back or eliminating Section 230 will not stop [invasive corporate surveillance][4] that harms all internet users. Killing Section 230 won’t end the dominance of the current handful of large tech companies—it [would cement their monopoly power][5]. The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech. This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. Especially when the speech problems with alternatives to Section 230’s immunity are [readily apparent][6], both in the U.S. and around the world. [Experience shows][7] that those systems result in more censorship of internet users’ lawful speech.

Let’s be clear: EFF defends Section 230 because it is the best available system to protect users’ speech online. By immunizing intermediaries for their users’ speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people’s speech online, such as when they reshare another user’s post or host a comment section on their blog.

It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230’s limited civil immunity because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it’s the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services’ own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.

### Section 230 Alternatives Would Protect Less Speech

With so much debate around the downsides of Section 230, it’s worth considering: what are some of the alternatives to immunity, and how would they shape the internet?

The least protective legal regime for online speech would be strict liability. Here, intermediaries would always be liable for their users’ speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of social media and web hosting services we’re used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.

Another alternative: imposing legal duties on intermediaries, such as requiring that they act “reasonably” to limit harmful user content. This would likely result in platforms monitoring users’ speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users’ speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They’re the ones that would have the legal and technical resources to weather the flood of lawsuits.

Another option is a notice-and-takedown regime, like what exists under the Digital Millennium Copyright Act. That will also result in takedowns of legitimate speech. And there’s no doubt such a system will be abused. EFF has documented how the DMCA leads to [widespread removal][8] of lawful speech based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system will invite similar behavior, and powerful figures and government officials will use it to [silence their critics][9].

The closest alternative to Section 230’s immunity provides protection from liability until [an impartial court][10] has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship, because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale. By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.

In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not [broadly censor][11] users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230. EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.

*Reposted from [EFF’s Deeplinks blog][12].*

[1]: https://www.eff.org/issues/cda230
[2]: https://www.congress.gov/bill/119th-congress/house-bill/7045
[3]: https://www.congress.gov/bill/119th-congress/senate-bill/3546
[4]: https://www.eff.org/wp/privacy-first-better-way-address-online-harms
[5]: https://www.eff.org/deeplinks/2024/05/wanna-make-big-tech-monopolies-even-worse-kill-section-230
[6]: https://www.eff.org/deeplinks/2022/05/platform-liability-trends-around-globe-taxonomy-and-tools-intermediary-liability
[7]: https://www.eff.org/files/2020/09/04/mcsherry_statement_re_copyright_9.7.2020-final.pdf
[8]: https://www.eff.org/takedowns
[9]: https://www.eff.org/deeplinks/2025/03/trump-calls-congress-pass-overbroad-take-it-down-act-so-he-can-use-it-censor
[10]: https://manilaprinciples.org/index.html
[11]: https://manilaprinciples.org/index.html
[12]: https://www.eff.org/deeplinks/2026/02/its-30th-birthday-section-230-remains-lynchpin-users-speech

https://www.techdirt.com/2026/02/12/on-its-30th-birthday-section-230-remains-the-linchpin-for-users-speech/

Techdirt (RSS/Atom feed) 13h

Bondi Spying On Congressional Epstein Searches Should Be A Major Scandal

Yesterday, Attorney General Pam Bondi appeared before the House Judiciary Committee. Among the more notable exchanges was when Rep. Pramila Jayapal asked some of Jeffrey Epstein’s victims who were in the audience to stand up and indicate whether Bondi’s DOJ had ever contacted them about their experiences. None of them had heard from the Justice Department. Bondi wouldn’t even look at the victims as she frantically flipped through her prepared notes.

And that’s when news organizations, including Reuters, caught something alarming: one of the pages Bondi held up clearly showed searches that Jayapal herself had done of the Epstein files:

> A Reuters photographer captured this image of a page from Pam Bondi's "burn book," which she used to counter any questions from Democratic lawmakers during an unhinged hearing today. It looks like the DOJ monitored members of Congress’s searches of the unredacted Epstein files. Just wow.
>
> — [Christopher Wiggins (@cwnewser.bsky.social)][1] [2026-02-11T23:06:45.578Z][2]

The Department of Justice—led by an Attorney General who is supposed to serve the public but has made clear her only role is protecting Donald Trump’s personal interests—is actively surveilling what members of Congress are searching in the Epstein files. And then bringing that surveillance data to a congressional hearing to use as political ammunition.

This should be front-page news. It should be a major scandal. Honestly, it should be impeachable. There is no legitimate investigative purpose here. No subpoena. Nothing at all. Just the executive branch tracking the oversight activities of the legislative branch, then weaponizing that information for political culture war point-scoring. The DOJ has **no business whatsoever** surveilling what members of Congress—who have oversight authority over the Justice Department—are searching.

Jayapal is rightly furious:

> Pam Bondi brought a document to the Judiciary Committee today that had my search history of the Epstein files on it. The DOJ is spying on members of Congress. It’s a disgrace and I won’t stand for it.
>
> — [Congresswoman Pramila Jayapal (@jayapal.house.gov)][3] [2026-02-12T01:14:57.174494904Z][4]

We’ve been here before. Way back in 2014, the CIA [illegally spied on searches by Senate staffers][5] who were investigating the CIA’s torture program. It was considered a scandal at the time—because it was one. The executive branch surveilling congressional oversight is a fundamental violation of separation of powers. It’s the kind of thing that, when it happens, should trigger immediate consequences.

And yet. Just a few days ago, Senator Lindsey Graham—who has been one of the foremost defenders of government surveillance for years—[blew up at a Verizon executive][6] for complying with a subpoena that revealed Graham’s call records (not the contents, just the metadata) from around January 6th, 2021.

> *“If the shoe were on the other foot, it’d be front-page news all over the world that Republicans went after sitting Democratic senators’ phone records,” said Republican Sen. Lindsey Graham of South Carolina, who was among the Republicans in Congress whose records were accessed by prosecutors as they examined contacts between the president and allies on Capitol Hill.*
>
> *“I just want to let you know,” he added, “I don’t think I deserve what happened to me.”*

This is the same Lindsey Graham who, over a decade ago, said he was [“glad”][7] that the NSA was collecting his phone records because it magically kept him safe from terrorists. But now [he’s demanding hundreds of thousands of dollars][8] for being “spied” on (he wasn’t—a company complied with a valid subpoena in a legitimate investigation, which is how the legal system is supposed to work).

So here’s the contrast: Graham is demanding money and media attention because a company followed the law. Meanwhile, the Attorney General is *actually* surveilling a Democratic member of Congress’s oversight activities—with no legal basis whatsoever—and using that surveillance for political theater in a manner clearly designed as a warning shot to congressional reps investigating the Epstein files. Pam Bondi wants you to know she’s watching you.

Graham claimed that if the shoe were on the other foot, it would be “front-page news all over the world.” Well, Senator, here’s your chance. The shoe is very much on the other foot. It’s worse than what happened to you, because what happened to you was legal and appropriate, and what’s happening to Jayapal is neither. But we all know Graham won’t speak out against this administration. He’s had nearly a decade to show whether or not the version of Lindsey Graham who said “if we elected Donald Trump, we will get destroyed and we will deserve it” still exists, and it’s clear that Lindsey Graham is long gone. This one only serves Donald Trump and himself, not the American people.

But this actually matters: if the DOJ can surveil what members of Congress search in oversight files—and then use that surveillance as a weapon in public hearings—congressional oversight of the executive branch is dead. That’s the whole point of separation of powers. The people who are supposed to watch the watchmen can’t do their jobs if the watchmen are surveilling them.

And remember: Bondi didn’t hide this. She brought it to the hearing. She held it up when she knew cameras would catch what was going on. She wanted Jayapal—and every other member of Congress—to see exactly what she’s doing. This administration doesn’t fear consequences for this kind of vast abuse of power because there haven’t been any. And the longer that remains true, the worse it’s going to get.

[1]: https://bsky.app/profile/did:plc:tcrwkviqdnisxihb7g6mnk3e?ref_src=embed
[2]: https://bsky.app/profile/did:plc:tcrwkviqdnisxihb7g6mnk3e/post/3memlqky4ac2h?ref_src=embed
[3]: https://bsky.app/profile/did:plc:f5aicufsf2vpuwte6wizoy2v?ref_src=embed
[4]: https://bsky.app/profile/did:plc:f5aicufsf2vpuwte6wizoy2v/post/3memsvscm3e2v?ref_src=embed
[5]: https://www.techdirt.com/2014/08/01/cia-spying-senate-went-much-further-than-originally-reported/
[6]: https://apnews.com/article/jack-smith-investigation-phone-records-6e81f7f967f47673be88695f431eea6f
[7]: https://www.techdirt.com/2013/06/10/sen-lindsey-graham-verizon-customer-im-glad-nsa-is-harvesting-my-data-because-terrorists/
[8]: https://www.techdirt.com/2025/11/13/gop-threatened-to-keep-the-government-shut-down-if-8-gop-senators-couldnt-profit-from-being-investigated/

https://www.techdirt.com/2026/02/12/bondi-spying-on-congressional-epstein-searches-should-be-a-major-scandal/

Techdirt (RSS/Atom feed) 14h

ICE, CBP Knew Facial Recognition App Couldn’t Do What DHS Says It Could, Deployed It Anyway

The DHS and its components want to find non-white people to deport by any means necessary. Of course, “necessary” is something that’s on a continually sliding scale with Trump back in office, which means everything (legal or not) is “necessary” if it can help White House advisor Stephen Miller hit his self-imposed [3,000 arrests per day][1] goal.

As was reported last week, DHS components (ICE, CBP) are using a web app that supposedly can identify people and link them with citizenship documents. As has always been the case with DHS components (dating back to the Obama era), the rule of thumb is “deploy first, compile legally-required paperwork later.” The pattern has never changed. ICE, CBP, etc. acquire new tech, hand it out to agents, and much later—if *ever*—the agencies compile and publish their legally-required Privacy Impact Assessments (PIAs). PIAs are supposed to *precede* deployments of new tech that might have an impact on privacy rights and other civil liberties. In almost every case, the tech has been deployed far ahead of the prerequisite paperwork.

As one would expect, the Trump administration was never going to be the one to ensure the paperwork arrived ahead of the deployment. [As we covered recently][2], both ICE and CBP are using tech provided by NEC called “Mobile Fortify” to identify migrants who are possibly subject to removal, even though neither agency has bothered to publish a Privacy Impact Assessment. [As Wired reported][3], the app is being used widely by officers working with both agencies, despite both agencies making it clear they don’t have the proper paperwork in place to justify these deployments.

> *While CBP says there are “sufficient monitoring protocols” in place for the app, ICE says that the development of monitoring protocols is in progress, and that it will identify potential impacts during an AI impact assessment. According to [guidance][4] from the Office of Management and Budget, which was issued before the inventory says the app was deployed for either CBP or ICE, agencies are supposed to complete an AI impact assessment before deploying any high-impact use case. Both CBP and ICE say the app is “high-impact” and “deployed.”*

While this is obviously concerning, it would be far less concerning if we weren’t dealing with an administration that has told immigration officers that they don’t need warrants to [enter houses][5] or [effect arrests][6]. And it would be insanely less concerning if we weren’t dealing with an administration that has claimed that simply observing or reporting on immigration enforcement efforts is an act of terrorism.

Officers working for the combined forces of bigotry d/b/a “immigration enforcement” know they’re safe. The Supreme Court has ensured they’re safe by [making it impossible][7] to sue federal officers. And the people running immigration-related agencies have made it clear they don’t even care if the ends justify the means. [These facts make what’s reported here even worse][8], especially when officers are using the app to “identify” pretty much anyone they can point a smartphone at.

> *Despite DHS repeatedly framing Mobile Fortify as a tool for identifying people through facial recognition, however, the app does not actually “verify” the identities of people stopped by federal immigration agents—a well-known limitation of the technology and a function of how Mobile Fortify is designed and used.*
>
> *[…]*
>
> *Records reviewed by WIRED also show that DHS’s hasty approval of Fortify last May was enabled by dismantling centralized privacy reviews and quietly removing department-wide limits on facial recognition—changes overseen by a former Heritage Foundation lawyer and Project 2025 contributor, who now serves in a senior DHS privacy role.*

Even if you’re the sort of prick who thinks whatever happens to non-citizens is deserved due to their alleged violation of civil statutes, one would hope you’d actually care what happens to your fellow citizens. I mean, one would hope, but even the federal government doesn’t care what happens to US citizens if they happen to be unsupportive of Trump’s migrant-targeting crime wave.

> *DHS—which has declined to detail the methods and tools that agents are using, despite repeated calls from [oversight officials][9] and [nonprofit privacy watchdogs][10]—has used Mobile Fortify to scan the faces not only of “targeted individuals,” but also people later [confirmed to be US citizens][11] and others who were observing or protesting enforcement activity.*

TLDR and all that: DHS knows this tool performs worst in the situations where it’s used most. DHS and its components also knew they were supposed to produce PIAs before deploying privacy-impacting tech. And DHS knows its agencies are not only misusing the tech to convert AI shrugs into probable cause, but are using it to identify people protesting or observing their efforts, which means this tech is also a potential tool of unlawful retribution.

There’s nothing left to be discussed. This tech will continue to be used because it can turn bad photos into migrant arrests. And its off-label use is just as effective: it allows ICE and CBP agents to identify protesters and observers, even as DHS officials continue to claim doxing should be a federal offense if they’re not the ones doing it. Everything about this is bullshit. But bullshit is all this administration has.

[1]: https://www.techdirt.com/2025/08/08/courts-start-asking-about-the-ice-arrest-quota-the-administration-is-now-pretending-isnt-a-quota/
[2]: https://www.techdirt.com/2026/02/06/facial-recognition-tech-used-to-hunt-migrants-was-deployed-without-required-privacy-paperwork/
[3]: https://www.wired.com/story/mobile-fortify-face-recognition-nec-ice-cbp/
[4]: https://archive.ph/o/j89xB/https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf
[5]: https://www.techdirt.com/2026/01/22/since-last-may-ice-officers-have-been-told-they-dont-need-warrants-to-enter-homes/
[6]: https://www.techdirt.com/2026/02/03/ice-director-says-officers-are-now-allowed-to-make-arrests-without-warrants/
[7]: https://www.techdirt.com/2022/06/14/supreme-court-makes-it-all-but-impossible-to-sue-federal-officers-for-rights-violations/
[8]: https://www.wired.com/story/cbp-ice-dhs-mobile-fortify-face-recognition-verify-identity/
[9]: https://documents.pclob.gov/prod/Documents/OversightReport/90964138-44eb-483d-990e-057ce4c31db7/Use%20of%20FRT%20by%20TSA%2C%20PCLOB%20Report%20%285-12-25%29%2C%20Completed%20508%2C%20May%2019%2C%202025.pdf
[10]: https://epic.org/wp-content/uploads/2025/11/Coalition-Letter-on-ICE-Mobile-Fortify-FRT-Nov2025.pdf
[11]: https://www.nytimes.com/2026/01/30/technology/tech-ice-facial-recognition-palantir.html

https://www.techdirt.com/2026/02/12/ice-cbp-knew-facial-recognition-app-couldnt-do-what-dhs-says-it-could-deployed-it-anyway/

Techdirt (RSS/Atom feed) 14h

Daily Deal: The 2026 Complete Firewall Admin Bundle Transform your future in cybersecurity with 7 courses on next‑level packet control, secure architecture, and cloud‑ready defenses inside the [2026 Complete Firewall Admin Bundle][1]. Courses cover IT fundamentals, topics to help you prepare for the CompTIA Server+ and CCNA exams, and more. It’s on sale for $25. *Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.* [1]: https://deals.techdirt.com/sales/the-2025-complete-firewall-admin-bundle?utm_campaign=affiliaterundown https://www.techdirt.com/2026/02/12/daily-deal-the-2026-complete-firewall-admin-bundle/

Techdirt (RSS/Atom feed) 15h

Joseph Gordon-Levitt Goes To Washington DC, Gets Section 230 Completely Backwards You may have heard last week that actor Joseph Gordon-Levitt went to Washington DC and gave a short speech at an event put on by Senator Dick Durbin calling for the sunsetting of Section 230. It’s a short speech, and it gets almost everything wrong about Section 230. Watch it here: Let me first say that, while I’m sure some will rush to jump in and say “oh, it’s just some Hollywood actor guy, jumping into something he doesn’t understand,” I actually think that’s a little unfair about JGL. Very early on he started his own (very interesting, very creative) user-generated content platform called HitRecord, and over the years I’ve followed many of his takes on copyright and internet policy and while I don’t always agree, I do believe that he does legitimately take this stuff seriously and actually wants to understand the nuances (unlike some). But it appears he’s fallen for some not just bad advice, but blatantly incorrect advice about this. He’s also posted a followup video where he claims to explain his position in more detail, but it only makes things worse, because it compounds the blatant factual errors that underpin his entire argument. First let’s look at the major problems with his speech in DC: > *So I understand what Section 230 did to bring about the birth of the internet. That was 30 years ago. And I also understand how the internet has changed since then because back then message boards and other websites with user-generated content, they really were more like telephone carriers. They were neutral platforms. That’s not how things work anymore.* So, that’s literally incorrect. If JGL is really interested in the actual history here, I did a [whole podcast series][1] where I spoke to the people behind Section 230, including those involved in the early internet and the various lawsuits at the time. Section 230 was **never** meant for “neutral” websites. 
As the authors (and the text of the law itself!) make clear: it was created **so that websites did not need to be neutral**. It was literally written in response to the Stratton Oakmont v. Prodigy case (for JGL’s benefit: Stratton Oakmont is the company portrayed in Wolf of Wall Street), where the boiler room operation sued Prodigy because someone posted in their forums claims about how sketchy Stratton Oakmont was (which, you know, was true). But Stratton sued, and the judge said that **because Prodigy moderated**, **because they wanted to have a family-friendly site**, **because they were not neutral**, they were liable for anything they decided to leave up. In the judge’s ruling he effectively said “because you’re not neutral, and because you moderate, you are effectively endorsing this content, and thus if it’s defamatory you’re liable for defamation.”

Section 230 (originally the “Internet Freedom and Family Empowerment Act”) was never about protecting platforms for being neutral. It was literally the opposite of that. It was about making sure that platforms **felt comfortable making editorial decisions**. It was about letting companies decide what to share, what not to share, what to amplify, and what not to amplify, without being held liable *as a publisher* of that content.

This is important, but it’s a point that a bunch of bad faith people, starting with Ted Cruz, have been lying about for about a decade, pretending that the intent of 230 was to protect sites that are “neutral.” It’s literally the opposite of that. And it’s disappointing that JGL would repeat this myth as if it’s fact. Courts have said this explicitly—I’ll get to the Ninth Circuit’s Barnes decision later, where the court said Section 230’s entire purpose is to protect companies *because* they act as publishers—but first, let’s go through the rest of what JGL got wrong.
He then goes on to talk about legitimate problems with internet giants having too much power, but falsely attributes that to Section 230.

> *Today, the internet is dominated by a small handful of these gigantic businesses that are not at all neutral, but instead algorithmically amplify whatever gets the most attention and maximizes ad revenue. And we know what happens when we let these engagement optimization algorithms be the lens that we see the world through. We get a mental health crisis, especially amongst young people. We get a rise in extremism and a rise in conspiracy theories. And then of course we get these echo chambers. These algorithms, they amplify the demonization of the other side so badly that we can’t even have a civil conversation. It seems like we can’t agree on anything.*

So, first of all, I know that the common wisdom is that all of this is true, but as we’ve detailed, actual experts have been unable to find any support for a causal connection. Studies on “echo chambers” have found that [the internet decreases echo chambers][2], rather than increases them. The studies on mental health [show the opposite][3] of what JGL (and Jonathan Haidt) claim. Even the claims about algorithms focused solely on engagement don’t seem to have held up (or, generally, it was true early on, but the companies found that maximizing solely on engagement burned people out quickly and [was actually bad for business][4], and so most social media [adjusted the algorithms][5] away from just that).

So, again, almost every assertion there is false (or, at the very least, much more nuanced than he makes it out to be). But the biggest myth of all is the idea that getting rid of 230 will somehow tame the internet giants. Once again, the exact opposite is true. As we’ve discussed hundreds of times, the big internet companies don’t need Section 230. The real benefit of 230 is that it gets [vexatious lawsuits tossed out early][6]. That matters *a lot* for smaller companies.
To put it in real terms: with 230, companies can get vexatious lawsuits dismissed for around $100,000 to $200,000 (I used to say $50k, but my lawyer friends tell me it’s getting more expensive). That is a lot of money. But it’s generally survivable. To get the same cases dismissed on First Amendment grounds (as almost all of them would be), you’re talking $5 million and up. That’s pocket change for Meta and Google, who have buildings full of lawyers. It’s existential for smaller competitive sites.

So the end result of getting rid of 230 is not getting rid of the internet giants. It’s locking them in and giving them more power. It’s why Meta [literally has run ads telling Congress it’s time to ditch 230][7]. What is Mark Zuckerberg’s biggest problem right now? Competition from smaller upstarts chipping away at his userbase. Getting rid of 230 makes it harder for smaller providers to survive, and limits the drain from Meta.

On top of that, getting rid of 230 gives them *less reason to moderate*. Because, under the First Amendment, the only way they can possibly be held liable is if they had actual knowledge of content that violates the law. And the best way to avoid having knowledge is *not to look*. It means not doing any research on harms caused by your site, because that will be used as evidence of “knowledge.” It means limiting how much moderation you do so that (a la Prodigy three decades ago) you’re not seen to be “endorsing” any content you leave up.

Getting rid of Section 230 literally makes Every Single Problem JGL discussed in his speech worse! He got every single thing backwards. And he closes out with quite the rhetorical flourish:

> *I have a message for all the other senators out there: [Yells]: I WANT TO SEE THIS THING PASS 100 TO 0. There should be* ***nobody*** *voting to give any more impunity to these tech companies. Nobody. It’s time for a change. Let’s make it happen.
Thank you.*

Except it’s not voting to give anyone “more impunity.” It’s a vote to say “stop moderating, and unleash a flood of vexatious lawsuits that will destroy smaller competitors.”

## The Follow-Up Makes It Worse

Yesterday, JGL posted a longer video, noting that he’d heard a bunch of criticism about his speech and he wanted to respond to it. Frankly, it’s a bizarre video, but go ahead and watch it too:

It starts out with him saying he actually agrees with a lot of his critics, because he wants an “internet that has vibrant, free, and productive public discourse.” Except
 that’s literally what Section 230 enables. Because without it, you don’t have intermediaries willing to host public discourse. You ONLY have giant companies with buildings full of lawyers who will set the rules of public discourse. Again, his entire argument is backwards. Then
 he does this weird half backdown, where he says he doesn’t really want the end of Section 230, but he just wants “reform.” > *Here’s the first thing I’ll say. I’m in favor of reforming section 230. I’m not in favor of eliminating all of the protections that it affords. I’m going to repeat that because it’s it’s really the crux of this. I’m in favor of reforming, upgrading, modernizing section 230 because it was passed 30 years ago. I am not in favor of eliminating all of the protections that it affords.* Buddy, you literally went to Washington DC, got up in front of Senators, and told everyone you wanted the bill that literally takes away every one of those protections to pass 100 to 0. Don’t then say “oh I just want to reform it.” Bullshit. You said get rid of the damn thing. But
 let’s go through this, because it’s a frequent thing we hear from people. “Oh, let’s reform it, not get rid of it.” As our very own First Amendment lawyer Cathy Gellis has explained over and over again, every proposed reform to date [is really repeal][8]. The reason for this is the procedural benefit we discussed above. Because every single kind of “reform” requires long, expensive lawsuits to determine if the company is liable. In the end, those companies will still win, because of the First Amendment. Just like how one of the most famous 230 “losses” ended up. Roommates.com lost its Section 230 protections, which resulted in many, many years in court
and then [they eventually won anyway][9]. All 230 does is make it so you don’t have to pay lawyers nearly as much to reach the same result. So, every single reform proposal basically resets the clock, throwing old court precedents out the window, and all you’re doing is making vexatious lawsuits cost a lot more for companies. This will mean some won’t even start. Others will go out of business. Or, worse, many companies will just enable a heckler’s veto. Donald Trump doesn’t like what people are saying on a platform? Threaten to sue. The cost without 230 (even a reformed 230 where a court can’t rely on precedent) means it’s cheaper to just remove the content that upsets Donald Trump. Or your landlord. Or some internet troll. You are basically giving everyone a veto by the mere threat of a lawsuit. I’m sorry, but that is not the recipe for a “vibrant, free, and productive public discourse.”

Calling for reform of 230 is, in every case we’ve seen to date, really a call for repeal, whether the reformers recognize that or not. Is there a possibility that you could reform it in a way that isn’t that? Maybe? But I’ve yet to see any proposal, and the only ones I can think of would be going in the other direction (e.g., expanding 230’s protections to include intellectual property, or rolling back FOSTA).

JGL then talks about small businesses and agrees that sites like HitRecord require 230. Which sure makes it odd that he’s supporting repeal. However, he seems to have bought into the logic of the argument memeified by internet law professor Eric Goldman—who has catalogued basically every single Section 230 lawsuit as well as every single “reform” proposal ever made and found them all wanting—that “if you don’t amend 230 in unspecified ways, we’ll kill this internet.” That is
generally not a good way to make policy. But it’s how JGL thinks it should be done:

> *Well, there have been lots of efforts to reform section 230 in the past and they keep getting killed uh by the big tech lobbyists. So, this section 230 sunset act is as far as I understand it a strategy towards reform. It’ll force the tech companies to the negotiating table. That’s why I supported it.*

Again, this is wrong. Big tech is always at the freaking negotiating table. You don’t think they’re there? Come on. As I noted, Zuck has been willing to ditch 230 for almost a decade now. It makes him seem “cooperative” to Congress while at the same time destroying the ability of competitors to survive. The reason 230 reform bills fail is because enough grassroots folks actually show up and scream at Congress. It ain’t the freaking “big tech lobbyists.” It’s people like the ACLU and the EFF and Fight for the Future and Demand Progress speaking up and sending calls and emails to Congress.

Also, about those “efforts at reform” getting “killed by big tech lobbyists”: this is FOSTA erasure, JGL. In 2018 ([with the explicit support of Meta][10]), Congress passed FOSTA, which was a Section 230 reform bill. Remember? And how did that work out? Did it make Meta and Google better? No. But did it [destroy online spaces used by sex workers][11]? Did it lead to [real world harm for sex workers][12]? Did it make it [harder for law enforcement][13] to capture actual human traffickers? Did it [destroy online communities][14]? Did it [hide historical LGBTQ content][15] because of legal threats? Yes to literally all of those things.

So, yeah, I’m freaking worried about “reform” to 230, because we saw it already. And many of us warned about the harms, while “big tech” supported the law. And we were right. The harms did occur. It took away competitive online communities and suppressed sex-positive and LGBTQ content. Is that what you want to support, JGL? No?
Then maybe speak to some of the people who actually work on this stuff, who understand the nuances, not the slogans. Speaking of which, JGL then doubles down on his exactly backwards Ted Cruz-inspired version of Section 230: > *Section 230 as it’s currently written or as it was written 30 years ago distinguishes between what it calls publishers and carriers. So a publisher would be, you, a person, saying something or a company saying something like the New York Times say or you know the Walt Disney Company publishers. Then carriers would be somebody like AT&T or Verizon, you know, the the the companies that make your phone or or your telephone service. So basically what Section 230 said is that these platforms for user-generated content are not publishers. They are carriers. They are as neutral as the telephone company. And if someone uses the telephone to commit a crime, the telephone company shouldn’t be held liable. And that’s true about a telephone company. But again, there’s a third category that we need to add to really reflect how the internet works today. And that third category is amplification.* Again, I need to stress that this is literally wrong. Like, fundamentally, literally he has it backwards and inside out. This is a pretty big factual error. First, Section 230 does not, in any way, distinguish between “what it calls publishers and carriers.” This is the [“publisher/platform” myth][16] all over again. I mean, [you can look at the law][17]. It makes no such distinction at all. The only distinction it makes is between “interactive computer services” and “information content providers.” Now some (perhaps JGL) will claim that’s the same thing as “publishers” and “carriers.” But it’s literally not. Carriers (as in, common carrier law) implies the neutrality that JGL mentioned earlier. And perhaps that’s why he’s confused. 
But the purpose of 230 was to enable “interactive computer services” to **act as publishers, without being held liable as publishers**. It was NOT saying “don’t be a publisher.” It was saying “we want you to be a publisher, not a neutral carrier, but we know that if you face liability as a publisher, you won’t agree to publish. So, for third-party content, we won’t hold you liable **for your publishing actions**.”

Again, go back to the Stratton Oakmont case. Prodigy “acted as a publisher” in trying to filter out non-family-friendly content. And the judge said “okay now you’re liable.” The entire point of 230 was to say “don’t be neutral, act as a publisher, but since it’s all third-party content, we won’t hold you liable as the publisher.”

In the Barnes case in the Ninth Circuit, the court was quite clear about this. The entire purpose of Section 230 is to *encourage interactive computer services to* ***act like a publisher*** *by removing liability for being a publisher.* Here’s a key part in which the court explains why Yahoo deserves 230 protections for third-party content **because it acted as the publisher**:

> *In other words, the duty that Barnes claims Yahoo violated derives from* ***Yahoo’s conduct as a publisher***—*the steps it allegedly took, but later supposedly abandoned, to de-publish the offensive profiles.* ***It is because such conduct is publishing conduct that we have insisted that section 230 protects from liability.***

So let me repeat this again: the point of Section 230 is not to say “you’re a carrier, not a publisher.” It’s literally to say “you can safely act as a publisher because you won’t face liability for content you had no part in creating.” JGL has it backwards.

He then goes on to make a weird and meaningless distinction between “free speech” and “commercial amplification” as if it’s legally meaningful.

> *At the crux of their article is a really important distinction and that distinction is between free speech and commercial amplification. Free speech meaning what a human being says. Commercial amplification, meaning when a platform like Instagram or YouTube or Tik Tok or whatever uses an algorithm to uh maximize engagement and ad revenue to hook you, keep you and serve you ads. And this is a really important difference that section 230 does not appreciate.*

The article he’s talking about is this very, very, very, very, [very badly confused piece in ACM][18]. It’s written by Jaron Lanier, Allison Stanger, and Audrey Tang. If those names sound familiar, it’s because they’ve been publishing similar pieces that are just fundamentally wrong for years. Here’s one piece I wrote [picking apart one][19], here’s another [picking apart another][20]. None of those three individuals understands Section 230 at all. Stanger gave testimony to Congress that was so wrong on basic facts [it should have been retracted][21]. I truly do not understand why Audrey Tang sullies her own reputation by continuing to sign on to pieces with Lanier and Stanger. I have tremendous respect for Audrey, who I’ve learned a ton from over the years. But she is not a legal expert. She was Digital Minister in Taiwan (where she did some amazing work!) and has worked at tech companies. But she doesn’t know 230.
I’m not going to do another full breakdown of everything wrong with the ACM piece, but just look at the second paragraph:

> *Much of the public’s criticism of Section 230 centers on the fact that it shields platforms from liability even when they host content such as online harassment of marginalized groups or child sexual abuse material (CSAM).*

What? CSAM is inherently unprotected speech. Section 230 does not protect CSAM. Section 230 literally has section (e)(1), which says “no effect on criminal law.” CSAM, as you might know, is a violation of criminal law. Websites all have strong incentives to deal with CSAM to avoid criminal liability, and they tend to take that pretty seriously. The additional civil liability that might come from a change in the law isn’t going to have much, if any, impact on that.

And “online harassment of marginalized groups” is mostly protected by the First Amendment anyway—so if 230 didn’t cover it, companies would still win on First Amendment grounds. But here’s the thing: most of us think that harassment is bad and want platforms to stop it. **You know what lets them do that? Section 230.** Take it away and companies have *less* incentive to moderate. Indeed, in Lanier and Stanger’s original piece in Wired, they argued platforms should be *required* to use the First Amendment as the basis for moderation—which would **forbid** removing most harassment of marginalized groups.

These are not serious critiques. I could almost forgive Lanier/Stanger/Tang if this were the first time they were writing about this subject, but they have now written this same factually incorrect thing multiple times, and each time I’ve written a response pointing out the flaws. I can understand that a well-meaning person like JGL can be taken in by it. He mentions having talked to Audrey Tang about it. But, again, as much as I respect Tang’s work in Taiwan, she is not a US legal expert, and she has this stuff entirely backwards.
I do believe that JGL legitimately wants a free and open internet. I believe that he legitimately would like to see more upstart competitors and less power and control from the biggest providers. In that we agree. But he has been convinced by some people who are either lying to him or simply do not understand the details, and thus he has become a useful tool for enabling greater power for the internet giants, and greater online censorship. The exact opposite of what he claims to support. I hope he realizes that he’s been misled—and I’d be happy to talk this through with him, or put him in touch with actual experts on Section 230. Because right now, he’s lending his star power to one of the most dangerous ideas around for the open internet. [1]: https://podcasts.apple.com/us/podcast/otherwise-objectionable/id1798723661 [2]: https://www.techdirt.com/2021/10/18/new-research-shows-social-media-doesnt-turn-people-into-assholes-they-already-were-everyones-wrong-about-echo-chambers/ [3]: https://www.techdirt.com/2026/01/21/two-major-studies-125000-kids-the-social-media-panic-doesnt-hold-up/ [4]: https://www.techdirt.com/2021/10/28/let-me-rewrite-that-you-washington-post-misinforms-you-about-how-facebook-weighted-emoji-reactions/ [5]: https://www.techdirt.com/2023/09/07/yet-another-study-debunks-the-youtubes-algorithm-drives-people-to-extremism-argument/ [6]: https://www.techdirt.com/2019/04/18/new-paper-why-section-230-is-better-than-first-amendment/ [7]: https://www.techdirt.com/2020/02/18/mark-zuckerberg-suggests-getting-rid-section-230-maybe-people-should-stop-pretending-gift-to-facebook/ [8]: https://www.techdirt.com/2021/10/12/why-section-230-reform-effectively-means-section-230-repeal/ [9]: https://www.techdirt.com/2021/02/09/if-were-going-to-talk-about-discrimination-online-ads-we-need-to-talk-about-roommatescom/ [10]: https://www.techdirt.com/2017/11/08/will-sheryl-sandberg-facebook-help-small-websites-threatened-sesta/ [11]: https://switter.at/ [12]: 
https://www.techdirt.com/2019/05/07/human-cost-fosta/ [13]: https://www.techdirt.com/2018/07/09/more-police-admitting-that-fosta-sesta-has-made-it-much-more-difficult-to-catch-pimps-traffickers/ [14]: https://www.techdirt.com/2018/12/05/tumblrs-new-no-sex-rules-show-problems-fosta-eu-copyright-directive-one-easy-move/ [15]: https://www.techdirt.com/2021/09/01/ebays-fosta-inspired-ban-adult-content-is-erasing-lgbtq-history/ [16]: https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/ [17]: https://www.law.cornell.edu/uscode/text/47/230 [18]: https://dl.acm.org/doi/full/10.1145/3744913 [19]: https://www.techdirt.com/2024/02/15/has-wired-given-up-on-fact-checking-publishes-facts-optional-screed-against-section-230-that-gets-almost-everything-wrong/ [20]: https://www.techdirt.com/2024/11/05/no-section-230-doesnt-circumvent-the-first-amendment-but-this-harvard-article-circumvents-reality/ [21]: https://www.techdirt.com/2024/04/19/congressional-testimony-on-section-230-was-so-wrong-that-it-should-be-struck-from-the-record/ https://www.techdirt.com/2026/02/12/joseph-gordon-levitt-goes-to-washington-dc-gets-section-230-completely-backwards/

Techdirt (RSS/Atom feed) 19h

Donald Trump Is VERY EXCITED About All Of Our Shitty Right Wing Broadcasters Merging Into One Bigger, Even Shittier Company

Trump 1.0 [took a hatchet][1] to media ownership limits. Those limits, built on the back of decades of bipartisan collaboration, prohibited local broadcasters and media from growing too large and trampling smaller (and more diversely-owned) competitors underfoot. The result of their destruction has been a rise in [local news deserts][2], a surge in [right wing propaganda outlets pretending to be “local news,”][3] less diverse media ownership, and (if you hadn’t noticed) a [painfully disinformed electorate][4].

Trump 2.0 has been **significantly worse**. Trump’s FCC has finished demolishing whatever was left of already saggy media ownership limits, and is eyeing eliminating the rules that would prevent the big four (Fox, ABC, CBS, NBC) from merging (a major reason why these networks have been such [feckless authoritarian appeasers][5]). They’re also working hard to let all of our local right wing broadcast companies merge into one, even larger, shittier company, something [Donald Trump is very excited about][6]!

More specifically, Nexstar (a very Republican-friendly company that also owns The Hill) is asking the FCC for permission to acquire Tegna in a $6.2 billion deal that is illegal under current rules (you might recall that Nexstar-owned *The Hill* recently [fired a journalist whose reporting angered Trump][7]). The deal would give Nexstar ownership of 265 stations in 44 states and the District of Columbia, covering 132 of the country’s 210 television Designated Market Areas (or DMAs).

Nexstar appears [to have beaten out rival bids by Sinclair][8], which has also long been criticized as [Republican propaganda posing as local news][9]. It wouldn’t be surprising if Nexstar and Sinclair are the next to merge.
Keep in mind, this is an industry that was already terrible agitprop, as this now seven-year-old Deadspin video helped everyone realize:

You might be inclined to say: “but Karl, local TV broadcasters are irrelevant. Who cares if they consolidate a dying industry.” But the consolidation won’t stop here. The goal isn’t just the consolidation of local broadcasters, it’s the consolidation of national and local media giants, telecoms, tech companies, and social media companies. All under the thumb of terrible unethical people.

Trump’s rise to power wouldn’t have been possible without the Republican domination of media. For the better part of a generation Republicans have dominated AM radio, local broadcast TV, and cable news, and have since done a remarkable job hoovering up what’s left of both major media companies (CBS, FOX) and modern social media empires (TikTok, Twitter). The impact is everywhere you look.

Over on Elon Musk’s right wing propaganda platform, Brendan Carr was quick to praise President Trump’s bold support for more media consolidation. And, as he has done previously, he openly lied, trying to pretend that local broadcast consolidation is something that *aids competition*:

I’ve covered Brendan Carr professionally since he joined the FCC in 2012. This is a man who has coddled media and telecom giants (and their anti-competitive behavior) at literally every opportunity. One of his only functions in government has been to rubber stamp shitty mergers. Here, he’s pretending to “protect competition” with a cute little antisemitic dog whistle about the folks in “Hollywood and New York.”

Amusingly, Carr and Trump’s push to allow all manner of problematic consolidation among these terrible local broadcasters has been so abrupt, it’s actually causing [some infighting between them and other right wing propaganda companies like Newsmax][10].
There’s a reason the Trump administration is destroying media consolidation limits, [murdering public media][11], harassing media companies, threatening late night comedians (or having them fired), and ushering forth all this mindless and dangerous consolidation. There’s a reason Larry Ellison and Elon Musk are buying all the key social media platforms and fiddling with the algorithms. They very openly (and so far semi-successfully) are trying to build a state media apparatus akin to what they have in Orban’s Hungary and Putin’s Russia. Our corporate press is **already** so broken and captured it’s incapable of communicating that to anybody. It simply wouldn’t be in existing media conglomerates’ best financial interests to be honest about this sort of thing.

On the plus side, nobody involved in any of this — from CBS’s news boss Bari Weiss to Sinclair Broadcasting — appears to have any competent idea of what they’re doing. They’re not good at journalism (because they’re trying to destroy it), and they’re generally [not good at ratings-grabbing propaganda][12]. As a result it’s entirely possible they’ll destroy U.S. media before their dream of state media comes to fruition.

Still, it might be nice if Democrats could stop waiting for “the left’s Joe Rogan” and finally start embracing some meaningful media reforms for the modern era, whether that’s the restoration of media consolidation limits, the creation of media ownership diversity requirements, an evolution in school media literacy training, support for public media, or creative new funding models for real journalism. Because the trajectory we are on in terms of right wing domination of media heads to ***some very fucking grim places***, and it’s not like any of that has been subtle.
[1]: https://www.techdirt.com/2017/11/02/fcc-boss-demolishes-media-ownership-rules-massive-gift-to-sinclair-broadcasting/ [2]: https://localnewsinitiative.northwestern.edu/projects/state-of-local-news/ [3]: https://www.techdirt.com/2022/03/23/sinclair-seattle-reporter-makes-proud-boys-gathering-sound-like-cub-scouts/ [4]: https://www.vice.com/en/article/the-death-of-local-news-is-making-us-dumber-and-more-divided/ [5]: https://www.techdirt.com/2025/10/02/abc-disney-gets-rewarded-for-kissing-trumps-ass-fcc-moves-to-eliminate-any-remaining-media-consolidation-limits/ [6]: https://deadline.com/2026/02/trump-endorses-nexstar-tegna-merger-1236712070/ [7]: https://wbng.org/2025/04/22/the-hill-guild-statement-on-politically-motivated-firing-of-journalist/ [8]: https://www.wsj.com/business/deals/tv-station-owner-sinclair-proposes-merger-with-tegna-4bd3bb86 [9]: https://www.techdirt.com/2022/03/23/sinclair-seattle-reporter-makes-proud-boys-gathering-sound-like-cub-scouts/ [10]: https://www.techdirt.com/2026/01/06/right-wing-media-companies-begin-bickering-at-the-fcc-over-who-gets-to-dominate-the-exploding-right-wing-propaganda-market/ [11]: https://www.techdirt.com/2025/07/22/republicans-take-a-hatchet-to-whats-left-of-u-s-public-broadcasting-pbs-emergency-alerts/ [12]: https://www.techdirt.com/2026/01/14/bari-weiss-is-sad-that-people-arent-enjoying-her-clumsy-destruction-of-cbs-news/ https://www.techdirt.com/2026/02/12/donald-trump-is-very-excited-about-all-of-our-shitty-right-wing-broadcasters-merging-into-one-bigger-even-shittier-company/

Techdirt (RSS/Atom feed) 1d

Dr. Oz: Vaccine Mandates Are Bad. I’ll Just Beg People To Get Vaccinated Instead.

I want to say a little something upfront in this post, so that there is no misunderstanding. While I’ve spent a great deal of time outlining why I think [RFK Jr.][1] and his cadre of buffoons at HHS and its child agencies are horrible for America and her people’s health, I do understand *some* of the perspective from people who push back on vaccinations *some* of the time. One of those areas is vaccine mandates. Bodily autonomy is and ought to be a very real thing. A government installing mandates for what can and can’t be done with one’s own body is something that needs to be treated with a ton of sensitivity, and I can understand why vaccine mandates *in general* might run afoul of the autonomy concept. Of course, it’s also why the government shouldn’t be in the business of telling women what to do with their bodies, or blanket outlawing things like euthanasia, but the point is I get it.

But there *are* times when we, as a society, do make some legal demands of the citizenry when it comes to their own physical beings for the betterment of the whole. Some drugs aren’t federally legal because, if they were to proliferate, they would cause enormous harm to the public surrounding those individuals. The government does regulate to some extent what appears in our food and medicine, never bothering to ask the public their opinion on the matter. And there are some diseases so horrible that we’ve traditionally built some level of a mandate around vaccination, especially in exchange for participation in publicly funded schools and the like.

Dr. Oz, television personality turned Administrator of the Centers for Medicare and Medicaid Services, has vocally opposed vaccine mandates in general terms. When [Florida dropped the requirement][2] for vaccines for public school children, Oz cheered them on.
> *In an interview on “The Story with Martha MacCallum,” the Fox News host asked Oz whether he agrees with officials who want to make Florida the first state in the nation to end childhood vaccine requirements and whether Oz would “recommend the same thing to your patients.”*
>
> *“I would definitely not have mandates for vaccinations,” the Centers for Medicare and Medicaid Services administrator told MacCallum. “This is a decision that a physician and a patient should be making together,” he continued. “The parents love their kids more than anybody else could love that kid, so why not let the parents play an active role in this?”*

The MMR vaccine was one of those required for Florida schools. So, Oz is remarkably clear in the quote above. The government should not be mandating vaccines. Further, the government shouldn’t really have direct input into whether people are getting vaccines or not. That decision should be made strictly by the patient and the doctor who has that patient directly in front of them, or their parents.

Those comments from Oz were made in September of 2025. Fast forward to the present, with a measles outbreak that is completely off the rails in America, and the good doctor is [singing a much different tune][3].

> *So, Oz is now reduced to [begging][4] people to get vaccinated for something that, for decades, everyone routinely got vaccinated for.*
>
> *“Take the vaccine, please. We have a solution for our problem,” he said. “Not all illnesses are equally dangerous and not all people are equally susceptible to those illnesses,” he hedged. “But measles is one you should get your vaccine.”*

To be clear, he’s still not advocating for any sort of mandate. Which is unfortunate, at least when it comes to targeted mandates for public schools and that sort of thing.
But in lieu of any actual public policy to combat measles in America, he’s reduced to a combination of begging the public to get vaccinated *and* telling the general public that a measles shot is definitely one they should be getting. And on that he’s right. But he’s also talking out of both sides of his mouth. Oz isn’t these people’s doctor. These school children aren’t all sitting directly in front of him. So the same person who advocated for a personalized approach to vaccines is now begging the public to take the measles vaccine from Washington, D.C.

That inconsistency is among the many reasons it’s difficult to know just how seriously to take Oz. And consistency is pretty damned key when it comes to government messaging on public health policy. That, in addition to trust, is everything here. And when Oz [jumps onto a CNN broadcast][5] to claim that this government, including RFK Jr., has been at the forefront of advocating for the measles vaccine, any trust that is there is torpedoed pretty quickly.

> *CNN anchor Dana Bash was left in disbelief as one of the president’s top health goons claimed the MAGA administration was a top advocate for vaccines. Addressing the record outbreak of measles in the U.S., particularly in South Carolina, Bash asked Dr. Mehmet Oz on State of the Union Sunday: “Is this a consequence of the administration undermining support for advocacy for measles and other vaccines?”*
>
> *“I don’t believe so,” the Trump-appointed Centers for Medicare & Medicaid Services Administrator responded. He then said, “We’ve advocated for measles vaccines all along. Secretary Kennedy has been at the very front of this.”*

Absolute nonsense. Yes, Kennedy has said to get the measles vaccine. He’s also said maybe everyone should just [get measles][6] instead. One of his deputies has [hand-waved][7] the outbreak away as being no big deal. Kennedy has advocated for [alternative treatments][8], rather than vaccination.
The government is all over the place on this, in other words. As is Oz himself, in some respects. To sit here in the midst of the worst measles outbreak in decades, beg people to do the one thing that will make this all go away, and *then* claim that this government has been on the forefront of vaccine advocacy is simply silly.

[1]: https://www.techdirt.com/tag/rfk-jr/
[2]: https://thehill.com/policy/healthcare/5485044-dr-oz-florida-vaccine-mandate/
[3]: https://www.dailykos.com/stories/2026/2/9/2367926/-Dr-Oz-backtracks-on-anti-vax-bullshit-as-measles-cases-multiply
[4]: https://courthousenews.com/take-the-vaccine-please-a-top-us-health-official-says-in-an-appeal-as-measles-cases-rise/
[5]: https://uk.news.yahoo.com/come-cnn-anchor-shuts-down-162830142.html
[6]: https://www.techdirt.com/2025/03/17/there-it-is-rfk-jr-suggests-best-strategy-for-combatting-measles-is-for-everyone-to-get-it/
[7]: https://www.techdirt.com/2026/01/27/cdc-dep-director-on-measles-going-kazoo-its-just-the-cost-of-doing-business/
[8]: https://www.techdirt.com/2025/04/01/measles-vitamin-a-toxicity-how-rfk-jr-is-compounding-the-outbreak-problem/

https://www.techdirt.com/2026/02/11/dr-oz-vaccine-mandates-are-bad-ill-just-beg-people-to-get-vaccinated-instead/

Techdirt (RSS/Atom feed)
Techdirt (RSS/Atom feed) 1d

The Policy Risk Of Closing Off New Paths To Value Too Early

Artificial intelligence promises to change not just how Americans work, but how societies decide which kinds of work are worthwhile in the first place. When technological change outpaces social judgment, a major capacity of a sophisticated society comes under pressure: the ability to sustain forms of work whose value is not obvious in advance and cannot be justified by necessity alone. As AI systems diffuse rapidly across the economy, questions about how societies legitimate such work, and how these activities can serve as a supplement to market-based job creation, have taken on a policy relevance that deserves serious attention.

**From Prayer to Platforms**

That capacity for legitimating work has historically depended in part on how societies deploy economic surplus: the share of resources that can be devoted to activities not strictly required for material survival. In late medieval England, for example, many in the orbit of the church [made at least part of their living performing spiritual labor][1] such as saying prayers for the dead and requesting intercessions for patrons. In a society where salvation was a widely shared concern, such activities were broadly accepted as legitimate ways to make a living.

William Langland was one such prayer-sayer. He is known to history only because, unlike nearly all others who did similar work, he left behind a long allegorical religious poem, [*Piers Plowman*][2], which he composed and repeatedly revised alongside the devotional labor that sustained him. It emerged from the same moral and institutional world in which paid prayer could legitimately absorb time, effort, and resources.

In 21st-century America, [Jenny Nicholson][3] earns a [sizeable income][4] sitting alone in front of a camera, producing long-form video essays on theme parks, films, and internet subcultures. Yet her audience supports it willingly, and few doubt that it creates value of a kind.
Where Langland’s livelihood depended on shared theological and moral authority emanating from a Church that was the dominant institution of its day, Nicholson’s depends on a different but equally real form of judgment expressed by individual market participants. And she is just one example of a broader class of creators—streamers, influencers, and professional gamers—whose work would have been unintelligible as a profession until recently.

What links Langland and Nicholson is not the substance of their work or any claim of moral equivalence, but the shared social judgment that certain activities are legitimate uses of economic surplus. Such judgments do more than reflect cultural taste. Historically, they have also shaped how societies adjust to technological change, by determining which forms of work can plausibly claim support when productivity rises faster than what is considered a “necessity” by society.

**How Change Gets Absorbed**

Technological change has long been understood to generate economic adjustment through familiar mechanisms: by creating new tasks within firms, expanding demand for improved goods and services, and recombining labor in complementary ways. Often, these mechanisms alone can explain how economies create new jobs when technology renders others obsolete. Their operation is well documented, and policies that reduce frictions in these processes—encouraging retraining or easing the entry of innovative firms—remain important in any period of change. That said, there is no general law guaranteeing that new technologies will create more jobs than they destroy through these mechanisms alone.

Alongside labor-market adjustment, societies have also adapted by legitimating new forms of value—activities like those undertaken by Langland and Nicholson—that came to be supported as worthwhile uses of the surplus generated by rising productivity.
This process has typically been examined not as a mechanism of economic adjustment, but through a critical or moralizing lens. From Thorstein Veblen’s account of [conspicuous consumption][5], which treats surplus-supported activity primarily as a vehicle for status competition, to [Max Weber’s analysis of how moral and religious worldviews legitimate economic behavior][6], scholars have often emphasized the symbolic and ideological dimensions of non-essential work. [Herbert Marcuse][7] pushed this line of thinking further, arguing that capitalist societies manufacture “false needs” to absorb surplus and assure the continuation of power imbalances.

These perspectives offer real insight: uses of surplus are not morally neutral, and new forms of value *can* be entangled with power, hierarchy, and exclusion. What they often exclude, however, is the way legitimation of new forms of value can also function to allow societies to absorb technological change without requiring increases in productivity to be translated immediately into conventional employment or consumption. New and expanded ways of using surplus are, in this sense, a critical economic safety valve during periods of rapid change.

**Skilled Labor Has Been Here Before**

Fears that artificial intelligence is uniquely threatening simply because it reaches into professional or cognitive domains rest on a mistaken historical premise. Episodes of large-scale technological displacement have rarely spared skilled or high-paid forms of labor; often, such work has been among the *first* affected. The mechanization of craft production in the nineteenth century displaced skilled cobblers, coopers, and blacksmiths, replacing independent artisans with factory systems that required fewer skills, paid lower wages, and offered less autonomy even as new skilled jobs arose elsewhere. These changes were disruptive, but they were absorbed largely through falling prices, rising consumption, and new patterns of employment.
They did not require societies to reconsider what kinds of activity were worthy uses of surplus: the same things were still produced, just at scale.

Other episodes are more revealing for present purposes. Sometimes, social change has unsettled not just particular occupations but entire regimes through which uses of surplus become legitimate. In medieval Europe, the Church was one of the largest economic institutions just about everywhere, and clerical and quasi-clerical roles like Langland’s offered recognized paths to education, security, status, and even wealth. When those shared beliefs fractured, the Church’s economic role contracted sharply—not because productivity gains ceased but because its claim on so large a share of surplus lost legitimacy.

To date, artificial intelligence has not produced large-scale job displacement, and the limited disruptions that have occurred have largely been absorbed through familiar adjustment mechanisms. But if AI systems begin to substitute for work whose value is justified less by necessity than by judgment or cultural recognition, the more relevant historical analogue may be less the mechanization of craft than the narrowing or collapse of earlier surplus regimes. The central question such technologies raise is not whether skilled labor can be displaced or whether large-scale displacement is possible—both have occurred repeatedly in the historical record—but how quickly societies can renegotiate which activities they are prepared to treat as legitimate uses of surplus when change arrives at unusual speed.

**Time Compression and Its Stakes**

In this respect, artificial intelligence *does* appear unusual. Generative AI tools such as ChatGPT have diffused through society at a pace far faster than most earlier general-purpose technologies. [ChatGPT was widely reported to have reached roughly 100 million users within two months][8] of its public release, and similar tools have shown comparably rapid uptake.
That compression matters. Much surplus has historically flowed through familiar institutions—universities, churches, museums, and other cultural bodies—that legitimate activities whose value lies in learning, spiritual rewards, or meaning rather than immediate output. Yet such institutions are not fixed. Periods of rapid technological change often place them under strain—something evident today for many—exposing disagreements about purpose and authority.

Under these conditions, experimentation with new forms of surplus becomes more important, not less. Most proposed new forms of value fail, and attempts to predict which will succeed have a poor historical record—from the South Sea Bubble to more recent efforts to anoint digital assets like NFTs as durable sources of wealth. Experimentation is not a guarantee of success; it is a hedge. Not all claims on surplus are benign, and waste is not harmless. But when technological change moves faster than institutional consensus, the greater danger often lies not in tolerating too many experiments, but in foreclosing them too quickly.

Artificial intelligence does not require discarding all existing theories of change. What sets modern times apart is the speed with which new capabilities become widespread, shortening the interval in which those judgments are formed. In this context, surplus that once supported meaningful, if unconventional, work may instead be captured by grifters, legally barred from legitimacy (by, say, outlawing a new art form), or funneled into bubbles. The risk is not waste alone, but the erosion of the cultural and institutional buffers that make adaptation possible. The challenge for policymakers is not to pre-ordain which new forms of value deserve support but to protect the space in which judgment can evolve.
They need to realize that they simply cannot make the world entirely safe, legible, and predictable: whether they fear technology overall or simply seek to [shape it in the “right” way][9], they will not be able to predict the future. That means tolerating ambiguity and accepting that many experiments will fail with negative consequences. In this context, broader social barriers that prevent innovation in any field—professional licensing, limits on free expression, overly zealous IP laws, regulatory barriers to entry for small firms—deserve a great deal of scrutiny. Even if the particular barriers in question have nothing to do with AI itself, they may retard the development of the surplus sinks necessary to economic adjustment.

In a period of compressed adjustment, the capacity to let surplus breathe and value be contested may well determine whether economies bend or break.

*Eli Lehrer is the President of the R Street Institute.*

[1]: https://thebaa.org/publication/the-medieval-chantry-in-england/
[2]: https://www.poetryfoundation.org/poems/159123/piers-plowman-b-prologue
[3]: https://underthepavingstones.com/2025/05/31/a-painfully-sincere-tribute-to-the-genius-of-jenny-nicholson/
[4]: https://www.reddit.com/r/JennyNicholson/comments/1cvi7io/just_realized_how_many_patreon_subs_jenny_has/
[5]: https://la.utexas.edu/users/hcleaver/368/368VeblenConspicuoustable.pdf
[6]: https://gpde.direito.ufmg.br/wp-content/uploads/2019/03/MAX-WEBER.pdf
[7]: https://bgsp.edu/app/uploads/2014/12/Marcuse-One-Dimensional-Society.pdf
[8]: https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app
[9]: https://www.techpolicy.press/ai-safety-requires-pluralism-not-a-single-moral-operating-system/

https://www.techdirt.com/2026/02/11/the-policy-risk-of-closing-off-new-paths-to-value-too-early/

Welcome to Techdirt (RSS/Atom feed) spacestr profile!

About Me

RSS/Atom feed of Techdirt. More feeds can be found in my following list.
