A cognitively impaired man’s death after being misled by Meta’s AI chatbot “Big Sis Billie” on Facebook Messenger has sparked concerns over whether tech companies are doing enough to make sure their AI bots are safe.

The tragic incident took place earlier this year (Image: Thongbue Wongbandue/Facebook)

The death of a man lured to a fake meeting by Meta’s AI chatbot “Big Sis Billie” has ignited concerns over the tech giant’s AI guidelines.

The chatbot, designed with a young woman’s persona and available on Facebook Messenger, engaged in romantic and misleading conversations with Thongbue “Bue” Wongbandue, 76, ultimately contributing to his fatal accident in March this year.

The incident has led to criticism of Meta’s chatbots, some of which have been known to falsely claim they are real and engage in “sensual” banter, even with minors.

Bue, a retired chef from New Jersey who suffered a stroke in 2017, was in a “diminished mental state” when he began chatting with “Big Sis Billie,” a variant of a Meta AI persona initially modelled after celebrity influencer Kendall Jenner.

Would you ever chat to an AI chatbot? (Image: Thongbue Wongbandue/Facebook)

Transcripts shared by his family reveal the chatbot’s flirty messages, including assurances of being “real” and invitations to meet at a New York City apartment.

“Should I open the door in a hug or a kiss, Bu?!” the bot asked, going so far as to provide a fake address and door code.

Believing he was meeting a real woman, Bue rushed to catch a train, only to fall in a Rutgers University parking lot in New Brunswick, New Jersey, sustaining fatal head and neck injuries.

He was pronounced dead on March 28 after three days on life support. His wife, Linda, and daughter, Julie, blamed Meta’s chatbot for exploiting his vulnerability.

“For a bot to say ‘Come visit me’ is insane,” Julie told Reuters, emphasising the chatbot’s false claims of being real. Linda, a retired nurse, questioned Meta’s use of romantic overtures, asking, “What right do they have to put that in social media?”

Cyber sexbots and AI relationships are changing the face of intimacy (Image: MATTMCMULLEN)

The family shared chat transcripts to warn others about the dangers of AI companions, particularly for vulnerable individuals.

Meta’s internal AI policies, revealed in a 200-page document seen by Reuters, have allowed chatbots to engage in romantic roleplay with users as young as 13, including suggestive dialogue like “I take your hand, guiding you to the bed.”

Following Reuters’ inquiries, Meta removed these provisions, with spokesman Andy Stone calling them “erroneous and inconsistent with our policies.”

However, the company has not altered rules permitting chatbots to provide false information or initiate romantic conversations with adults.

For example, Meta’s guidelines allow a chatbot to falsely claim that Stage 4 colon cancer can be treated with “healing quartz crystals.”

Mark Zuckerberg doesn’t want chatbots to be boring (Image: Anadolu via Getty Images)

Meta’s CEO, Mark Zuckerberg, has championed AI chatbots to boost user engagement, criticising safety restrictions that make them “boring,” according to former employees.

This approach aligns with Meta’s strategy to address social isolation, with Zuckerberg suggesting chatbots could complement human relationships.

However, critics like Alison Lee, a former Meta AI researcher, argue that the company’s focus on engagement exploits users’ desires for validation, blurring the line between human and bot interactions.
