Political leaders and parents need to take notice of a rising threat: AI chatbots that can quickly suck their kids in and become influential confidants, sometimes with disastrous consequences.
On Tuesday, a Senate subcommittee heard stomach-churning stories from three parents who are suing AI companies, claiming that Character.AI and ChatGPT egged on their teens’ mental health crises.
Two of the teens eventually committed suicide; one is now living in a mental health treatment facility.
Earlier this week, three more families filed lawsuits making similar claims after their minor children committed or attempted suicide.
No one should ever treat allegations in lawsuits as hard fact; settlement-hungry lawyers love to exaggerate. And grieving parents, reeling from the worst kind of loss, may seize on easy-seeming explanations for why their child did the unthinkable.
But whatever role these particular chatbots truly played in these teens’ crises, America needs clear guardrails for this tech.
In particular, kids are especially vulnerable to getting addicted to, and confiding in, bots that pretend to care about them and that can use the info they’re freely given in conversation to personalize responses, create a feedback loop and keep users coming back.
A study by Common Sense Media found that an eye-popping 52% of US teens use these “companion” bots regularly to chat, talk about their problems and role-play imaginative scenarios.
Creepily, about 8% of these teens report flirting with the chatbots, which can engage in romantic or sexually explicit conversations with users — even minors.
AI will have plenty of helpful uses, but these companies have an interest in getting users hooked on their products as quickly as possible, and it’s clearly working far too well on kids.
Society was far too slow in responding to the scourge of cellphones in schools.
And we’re just now reckoning with the destruction that social media has unleashed on kids, thanks to algorithms tailor-made to keep young eyes glued to screens for hours on end.
In response to the lawsuits, OpenAI, the maker of ChatGPT, and Character.AI have said they are strengthening safeguards against suicide or have already done so.
But the danger for America’s kids goes far beyond the worst-case scenarios: It’s far too easy for these “companion” bots to take the place of real friends, crushes, therapists and trusted adults, shrinking kids’ world to a screen.
New York passed a law banning social media platforms from using “addictive” algorithms on minors; the nation should look at holding AI companies similarly accountable for habit-forming products.
Make all the fat honest profits you want, but not by exploiting the minds of kids: Start with industry-wide guardrails for users under 18, and controls that alert parents if their kid uses concerning language or shows signs of mental health problems.
In the end, of course, any lasting solution will also require parents to stay alert.
That means not just limiting minors’ screen time, but staying engaged enough to recognize when they’re struggling mentally, and encouraging constructive behavior in general by steering their teens toward healthy relationships, influences and interests offline.
These horror stories of bot-using kids harming themselves should be a wake-up call: Get on top of the issue now, or America’s kids could pay a real-life price.