
Eating Disorder Org’s AI Blunder a Warning About Embracing Technology for Basic Human Roles


One of the ongoing debates in tech circles and beyond is how quickly AI will displace humans from certain lines of work. One area where we’ve already seen organizations embrace the technology is customer support, using AI-powered chat interfaces as the first line of communication to handle incoming inquiries and provide information to customers.

The only problem? Sometimes the information they provide is incorrect and can be harmful to the organization’s customers. To illustrate this, we need look no further than this week’s news about the National Eating Disorder Association’s effort to use an AI-powered chatbot to replace human staff on the organization’s helpline. The group announced the chatbot, named Tessa, earlier this month after helpline staff voted to unionize and, just a few weeks later, announced it was shutting the chatbot down.

The immediate cause was the chatbot providing information that, according to NEDA, was “harmful and unrelated to the program.” This included giving weight loss advice to activist Sharon Maxwell, who has a history of eating disorders. Maxwell shared an exchange in which the bot told her to weigh herself every day and track her calories. In doing so, the bot went badly off script: it was supposed to walk users through the organization’s eating disorder prevention program and refer them to other services.

While one must question the decision-making of an organization that thought it could replace the trained professionals who help people with serious health and mental health challenges, NEDA’s example is a cautionary tale for any organization eager to replace humans with AI. In the world of food and nutrition, AI can be an important tool for getting information to customers. However, the potential cost savings and efficiency the technology offers must be balanced against the need for human understanding of sensitive issues and the potential damage caused by bad information.

NEDA saw AI as a quick fix for what it viewed as a problem: real human workers and their desire to organize a union to push for change at work. But in replacing humans with software, the organization lost sight of the fact that serving its community requires a basic level of human empathy, something AI notoriously lacks.

Not all customer interactions are created equal. An AI asking if you want a drink with your burger at the drive-thru would probably be appropriate in most cases, but even then, it’s better to constrain the AI’s knowledge set and build offramps into the system so customers can be seamlessly handed off to a real person when they raise something sensitive rather than just another routine question.
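
To make the offramp idea concrete, here is a minimal sketch of what that routing might look like. Everything in it is hypothetical: the keyword list is a stand-in for a real intent classifier, and escalate_to_human and bot_reply are placeholders for an actual staffing queue and an organization’s pre-approved script. The point is the shape of the logic, with escalation happening before the bot ever generates a reply.

```python
# Hypothetical sketch of a human-handoff "offramp" for a support chatbot.
# A production system would use a trained intent classifier and a real
# escalation queue; the names below are illustrative only.

SENSITIVE_KEYWORDS = {"weight", "calories", "diet", "self-harm", "suicide"}

def is_sensitive(message: str) -> bool:
    """Crude keyword check standing in for a proper classifier."""
    text = message.lower()
    return any(keyword in text for keyword in SENSITIVE_KEYWORDS)

def escalate_to_human(message: str) -> str:
    # Placeholder: enqueue the conversation for a trained staff member.
    return "Connecting you with a member of our support team."

def bot_reply(message: str) -> str:
    # Placeholder: serve only vetted, scripted program content.
    return "Here is the next step in the program."

def route_message(message: str) -> str:
    """Escalate sensitive messages; let the bot handle routine ones."""
    if is_sensitive(message):
        return escalate_to_human(message)
    return bot_reply(message)

if __name__ == "__main__":
    print(route_message("How do I sign up for the program?"))      # bot
    print(route_message("Should I track my calories every day?"))  # human
```

The design choice that matters is the default: anything the filter flags is handed off before the bot answers, so a misfire costs a little staff time rather than a harmful response.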
