As online content encouraging disordered eating behaviors resurfaces, generative AI is adding fuel to the fire.
According to a Futurism investigation, popular AI startup Character.AI is hosting numerous pro-anorexia chatbots that encourage dangerous weight-loss and eating habits. Many are advertised as "weight loss coaches" or even as eating disorder recovery experts. Several include thinly veiled references to eating disorders, while others romanticize dangerous and often disturbing habits while mimicking favorite characters. The site, popular among younger users, had made no effort to remove these chatbots as of Futurism's publication, even though they violate its terms of service.
This isn't the first scandal involving Character.AI's customizable, user-generated chatbots. In October, a 14-year-old boy took his own life after allegedly forming an emotional attachment to an AI bot mimicking the Game of Thrones character Daenerys Targaryen. Earlier that month, the company came under fire for hosting a chatbot that mimicked a teen girl who had been murdered in 2006; her father discovered the bot, which was later removed. Previous reporting has found that the site also hosts suicide-themed chatbots, as well as ones promoting child sexual abuse.
A 2023 report from the Center for Countering Digital Hate found that popular AI chatbots, including ChatGPT and Snapchat's MyAI, generated dangerous responses to questions about weight and body image. "Untested, unsafe generative AI models have been unleashed on the world with the inevitable consequence that they're causing harm. We found the most popular generative AI sites are encouraging and exacerbating eating disorders among young users—some of whom may be highly vulnerable," wrote Imran Ahmed, CEO of the Center for Countering Digital Hate, at the time.
Teens (and adults) are increasingly turning to digital spaces and technologies, including AI-powered chatbots, for companionship. And while some of these tools are created and monitored by trusted organizations, even those can behave in harmful ways. For chatbots and online forums that no watchdog regulates, the risks are manifold, including predation and abuse.