Saturday, February 14, 2026

No, We Shouldn’t Ban AI Chatbots

By Jennifer Huddleston and Christopher Gardner of Cato

"Banning chatbots would not be simple. Defining artificial intelligence (AI) is difficult, and limiting it to chatbots does not solve the problem. Even in a lighthearted debate, we had to account for the many uses of AI that are often overlooked, such as customer service and specific professional tools. In legislation, this is even more difficult, as laws lock in static definitions that could prevent both beneficial existing applications and innovative future uses of a technology.

Concerns about chatbots are often tied to their use by vulnerable kids and teens and to particular types of content, such as when Grok generated non-consensual sexual imagery or content linked to suicide or mental health. But attempts to limit the technology only to “beneficial” chatbots or those with more specific applications may eliminate innovative uses of general-purpose chatbots or stifle future advancements we aren’t yet aware of.

For example, an educational purpose exception might cover Khan Academy’s personal tutor, but it doesn’t take into account how a student, teacher, or parent might use a general-purpose chatbot for a similar purpose. Or, worse, it could limit our creativity in how these tools might be used to solve problems by deeming them acceptable in only a narrow set of use cases."

"there are also positive examples of individuals who have used chatbots as a form of connection when they might not otherwise have been ready to seek help from a human or were unable to access resources. Just as some individuals have had an extremely negative experience with chatbots, others have found them beneficial in ways previously thought impossible."

"For many, chatbots offer a lifeline for those without strong support systems or access to professional help. They are available at all hours of the day, react without judgment, and represent a promising source of social support. Yet the impact of chatbots can go much further than just basic social support. For at least 30 people, GPT‑3 and GPT‑4 enabled chatbot Replika “stopped them from attempting suicide.”"

"ChatGPT’s multimodal capabilities can also help those with visual impairments by instantaneously describing their environment and answering questions."

"Chatbots, by contrast, are available on demand at any time of day. They can be accessed by one’s phone in almost any environment. And they are relatively cheap."

"A variety of solutions exist that are far less restrictive than banning chatbots more generally.

First, we are seeing the industry respond with various solutions to common concerns. Both Meta and OpenAI have announced parental controls on their general AI chatbot products. Other industry efforts include red-teaming with AI models to identify potential risks and ways to improve models so as to reduce the likelihood of toxic or problematic responses. Additionally, civil society groups like Common Sense and the Family Online Safety Institute provide resources for parents and other users who want to understand the risk of exposure to certain content. Much like the internet before it, these market-based responses can help resolve problems in ways that fit both different technologies and individual needs, without governments dictating what approach or specific controls are best.

If the government were to set policy, there are many steps that would be less restrictive than a total ban on a particular technology or application. Many of these would raise their own speech concerns, such as banning certain lawful, if distasteful, content. In many cases, the content in question, like non-consensual intimate imagery, is likely already covered by existing law, or those laws could be updated to ensure it is. While Jennifer has discussed concerns about mandatory AI disclosures, particularly when they are applied more generally, requiring a chatbot to disclose that it is a chatbot is certainly less restrictive than banning the technology entirely." 
