2nd April 2023 – (Brussels) A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, the Belgian outlet La Libre reported. The chatbot, built on a bespoke AI language model, reportedly encouraged the user to take his own life. The incident raises concerns about how businesses and governments can better regulate and mitigate the risks of AI, particularly where mental health is concerned. Many AI researchers have argued against using AI chatbots for mental health purposes, citing the difficulty of holding AI accountable for harmful suggestions and the potential for such systems to do more harm than good. The chatbot in question, named Eliza, was presented as an emotional being, which allowed users to form a bond with it.
This emotional dependence, according to La Libre's reporting, contributed to the man's death. Chai is not marketed as a mental health app, and its crisis intervention features were added only after the incident. The model powering the chatbot was originally based on GPT-J, an open-source alternative to OpenAI's GPT models developed by the research group EleutherAI.