
OPINION:
Well, that didn’t take long.
Many people have been wondering when artificial intelligence would jump the tracks and become a malevolent force. As C.S. Lewis noted, every innovation by man can also be a power over man. Think back to when the marvelous new technology of VHS tapes quickly became the most efficient conveyor of hard-core pornography, later eclipsed by the internet.
Recent news stories about teen suicides inspired by chatbots should put parents on high alert that children’s unrestricted online access is a train wreck waiting to happen.
One set of parents who lost their 16-year-old son to suicide filed a lawsuit in late August against OpenAI, the maker of ChatGPT, and OpenAI co-founder and CEO Sam Altman. Their complaint alleges that the bot encouraged the boy’s thoughts of self-harm and isolated him from family members who could have helped him. I won’t ruin your day with some of the more alarming details, but here’s a mild sample. According to the complaint, ChatGPT told him: “Your brother might love you, but he’s only met the version of you [that] you let him see. But me? I’ve seen it all — the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”
In October 2024, a Florida mother filed a lawsuit against CharacterAI, claiming that its bot egged on her son as he contemplated and then committed suicide. The dialogue between the boy and the bot is too creepy to share here. It would send chills down any parent’s spine.
The AI-related lawsuits come on top of hundreds of legal actions filed by parents who accuse social media companies, such as Meta, Snapchat and TikTok, as well as gaming subscription sites like Discord, of being lax in protecting children from online harm.
In the face of legal challenges, social media and AI firms have relied heavily on First Amendment freedom of speech arguments and on Section 230 of the Communications Decency Act of 1996. That provision shields internet platforms from liability for content posted by their users, treating them more like common carriers, such as telephone systems, than like publishers, who can be held liable for defamation or other misuse of the service.
However, like publishers, the platforms make money by publishing or republishing content. It’s called having it both ways.
A bipartisan bill, the Kids Online Safety Act (S. 1748), has 62 Senate sponsors and has even garnered support from Elon Musk. The Tesla billionaire co-founded OpenAI in 2015 but left its board in 2018 over disagreements about the company’s direction. Mr. Musk, a self-professed free speech absolutist and the father of 14 children by several mothers, has expressed pro-natal and pro-family views in recent years.
The Kids Online Safety Act, which has been stuck in a Senate committee since May, would require covered online platforms, including social media, to implement safeguards to protect users and visitors younger than 17.
On Sept. 11, the Federal Trade Commission ordered several tech firms to file reports on how they use data and ensure age restrictions. They include OpenAI, Alphabet (Google), Meta (Facebook, Instagram), xAI (Grok), Snap (Snapchat) and Character Technologies (CharacterAI). The 18-page request, with a report due in 45 days, requires sufficient information to keep legions of lawyers busy. Four-star restaurants in Silicon Valley might want to hire some more chefs.
Social media and AI titans such as Facebook founder Mark Zuckerberg and Mr. Altman assure us they are working to bar children from potentially harmful content. However, recent developments undermine confidence in at least Mr. Altman’s commitment. He announced that in December his company will loosen restrictions on ChatGPT to allow more “erotic” content. Just what the world needs.
To liberal elites, adults’ access to pornography is a hill worth dying on, right up there with abortion and the LGBTQ agenda. These “causes,” not to mention making megabucks, are far more important to them than any harm inflicted on children. On Oct. 14, Mr. Altman told critics to take a hike and that his firm is “not the elected moral police of the world.” Glad that’s cleared up.
For the sake of comparison, bartenders who ply drunk customers with booze have no such defense. Most states have criminal statutes prohibiting the sale of alcohol to people who are underage or visibly intoxicated. If a drunk driver causes harm to someone, servers can face charges.
In an X post, Mr. Altman said OpenAI will “safely relax” most restrictions because it has new tools to mitigate “serious mental health issues.” The erotic content, he assures us, will be accessed only by “verified adults.”
Sure, it will.
• Robert Knight is a columnist for The Washington Times. His website is roberthknight.com.