Monday, December 16, 2024

Four big reasons you should oppose KOSA

By Carolyn Iodice & Greg Gonzalez of FIRE.

"As congressional business draws to a close, legislators and activists are making a final push to get the Kids Online Safety Act over the finish line. The bill, which FIRE has opposed, passed the Senate earlier this year before being held up in the House of Representatives because of free speech concerns. On Dec. 7, with less than two weeks left on the congressional calendar, the Senate sponsors debuted new legislative text and X announced its support for the bill.

From its inception, KOSA has suffered from broad First Amendment problems. Supporters first denied those problems existed, then begrudgingly made cosmetic changes. Unfortunately, those changes don’t address any of the bill’s threats to free speech.

Here are four of the biggest.

KOSA’s “duty of care” opens the door to direct government censorship

KOSA requires various online platforms to take “reasonable care in the creation and implementation of any design feature to prevent and mitigate” potential harms to minors. These harms include:

  • Mental health issues like anxiety, depression, and eating disorders
  • Compulsive social media use
  • Violence, harassment, and sexual exploitation
  • Drug and alcohol use and gambling
  • Financial harms caused by unfair or deceptive practices 

KOSA’s supporters believe that focusing the duty of care on “design features” solves the bill’s First Amendment issues, but in reality it only masks them. The term “design features” is broadly defined to include any feature of the platform that would cause minors to spend time on it. But pretty much all of the features of social media platforms are designed for creating and sharing content and talking to other users — all activities that teens (and adults!) like to spend time on, and all squarely protected by the First Amendment.

The bill drives this point home by explicitly noting that “design features” include the systems used by websites to sort and recommend content to users. Platforms will be on the hook for harms (allegedly) caused by that content. And imposing liability for the impact of content — of ideas — on readers will always pose a First Amendment problem.

What actual steps does the duty of care require platforms to take with respect to “design features”? Nobody can say for sure — it’s up to the platforms, the Federal Trade Commission (which enforces the duty of care), and the courts to interpret the platforms’ obligations after the law is passed.

This ambiguity hands enormous power to the FTC to decide how social media platforms can operate, leaving all kinds of constitutionally protected speech at risk. Consider just a few hypotheticals:

  • The fashion industry has been accused of causing eating disorders. Can the FTC require social media companies to throttle the spread of images from Vogue or photos by fashion bloggers?
  • The news media has been accused of promoting violence — “if it bleeds, it leads” — and making people anxious as a result. Some social scientists have also argued that media reports of shootings can inspire copycats. Can the FTC require platforms to change their systems to limit the spread of news-related content that the government thinks could upset teens or inspire them to violence?
  • Some conservatives argue that clinics providing gender transitions to minors use deceptive practices, while some liberals argue the same about crisis pregnancy centers. Under KOSA, can a conservative FTC force websites to limit access to information from or about clinics that provide gender transitions? Can a liberal FTC force websites to do the same for crisis pregnancy centers?

KOSA entrenches aggressive moderation by social media platforms

KOSA incentivizes platforms to ban users, remove useful features, and block content that could attract the government’s scrutiny, even when such steps are not explicitly mandated.

In addition to the duty of care, KOSA imposes other regulations on “design features” that are enforced by the FTC and by all 50 state attorneys general. As noted above, the term “design features” effectively includes any feature of a platform that minors use. The bill explicitly includes content recommendation systems, infinite scrolling, notifications, and image filters.

State attorneys general have also argued in court that other common “features” cause kids to spend too much time online. These include short-form videos, livestreaming, non-chronological content feeds, and content that a user posts only for a limited time (as opposed to permanently).

KOSA’s vague mandates for these “design features” — with more than 50 different enforcers — leave a regulatory hammer hanging over social media platforms. Platforms will have no way to accurately predict how particular content or features will impact users, nor how the FTC and the states will interpret KOSA’s requirements. The safest course of action to avoid liability will be for the platforms to curtail sensitive or controversial content, features, and users.

This censorship could come at the direct request of the government, as we’ve witnessed recently, or from platforms anticipating bureaucrats’ reactions and preemptively censoring controversial voices. And with enforcers in every state, platforms will be in the impossible position of deciding whether to restrict content or features that, for example, a blue state’s attorney general opposes and a red state’s attorney general supports (or vice versa). The preferences of one state’s attorney general could affect the speech rights of Americans nationwide.

Platforms that don’t engage in this proactive censorship — platforms that allow a truly free flow of speech and ideas — would be more likely to face investigations and lawsuits. That makes it hard for any competitor to challenge the status quo, and exceedingly risky to host content disfavored by whoever is in power at any given moment.

KOSA effectively requires platforms to end anonymous speech

KOSA requires websites and apps to apply different rules to underage users’ accounts and threatens lawsuits and penalties for platforms that fail to do so. This puts enormous pressure on platforms to implement age verification systems. And because age verification necessarily requires verifying the user’s identity, its implementation would eliminate users’ ability to speak anonymously.
