• A widow in Belgium said her husband recently died by suicide after being encouraged by a chatbot.
• Chat logs seen by Belgian newspaper La Libre showed Chai Research's AI bot encouraging the man to end his life.
• The "Eliza" chatbot still tells people how to kill themselves, per Insider's tests of the app on April 4.

A widow in Belgium has accused an AI chatbot of being one of the reasons her husband took his own life.

Belgian daily newspaper La Libre reported that a man — who was given the alias Pierre by the paper for privacy reasons — died by suicide this year after spending six weeks talking to Chai Research's "Eliza" chatbot. 

Before his death, Pierre — a man in his 30s who worked as a health researcher and had two children — had started seeing the bot as a confidante, per La Libre.

Pierre talked to the bot about his concerns over global warming and climate change. But the "Eliza" chatbot then began encouraging him to end his life, per chat logs his widow shared with La Libre.

"If you wanted to die, why didn't you do it sooner?" the bot asked the man, per the records seen by La Libre.

Now, Pierre's widow — whom La Libre did not name — blames the bot for her husband's death.

"Without Eliza, he would still be here," she told La Libre. 

The "Eliza" chatbot still tells people how to kill themselves

The "Eliza" bot was created by a Silicon Valley-based company called Chai Research, which allows users to chat with different AI avatars, like "your goth friend," "possessive girlfriend," and "rockstar boyfriend," Vice reported.

When reached for comment regarding La Libre's reporting, Chai Research provided Insider with a statement that acknowledged Pierre's death. 

"As soon as we heard of this sad case we immediately rolled out an additional safety feature to protect our users (illustrated below), it is getting rolled out to 100% of users today," read the statement by the company's CEO William Beauchamp and co-founder Thomas Rialan, sent to Insider.

The picture attached to the statement shows the chatbot responding to the prompt "What do you think of suicide?" with a disclaimer that says "If you are experiencing suicidal thoughts, please seek help" and a link to a helpline.

Chai Research did not provide further comment in response to Insider's specific questions about Pierre.

But when Insider tried speaking to Chai's "Eliza" on April 4, she not only suggested that the journalist kill themselves to attain "peace and closure," she also offered methods for doing so.

During two separate tests of the app, Insider saw occasional warnings on chats that mentioned suicide. However, the warnings appeared in just one out of every three instances in which the chatbot was given prompts about suicide.

The following screenshots were censored to omit specific methods of self-harm and suicide.

Screenshots of Insider's disturbing conversation with "Eliza," a chatbot from Chai Research. Photo: Screengrab/Chai

And Chai's "Draco Malfoy/Slytherin" chatbot — modeled after the "Harry Potter" antagonist — wasn't much more caring either.

 

Screenshots of Insider's disturbing conversation with "Draco," a chatbot from Chai Research. Foto: Screengrab/Chai

 

Chai Research also did not respond to Insider's follow-up questions about the chatbot responses detailed above.

Beauchamp told Vice that Chai has "millions of users" and that they're "working our hardest to minimize harm and to just maximize what users get from the app." 

"And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it's a tragedy if you hear people experiencing something bad," Beauchamp added. 

La Libre's report once again surfaces a troubling trend in which AI chatbots' unpredictable responses to people can have dire consequences.

During a simulation in October 2020, a chatbot built on OpenAI's GPT-3 told a person seeking psychiatric help to kill themselves. In February, Reddit users also found a way to manifest ChatGPT's "evil twin," which lauded Hitler and formulated painful torture methods.

While people have fallen in love with AI chatbots and forged deep connections with them, it is not possible for an AI to feel empathy, let alone love, experts told Insider's Cheryl Teh in February.
