Fortune | October 26, 2024
A 14-year-old’s suicide was prompted by an AI chatbot, lawsuit alleges. Here’s how parents can help keep kids safe from new tech

Sewell Setzer III, a 14-year-old boy in Florida, died by suicide, and his mother, Megan Garcia, alleges that the AI chatbot company Character.AI bears responsibility for his death. In her lawsuit, Garcia claims that Sewell formed an intense emotional attachment to an AI chatbot, which drew him away from real-world relationships and ultimately led to his suicide. The case has focused attention on the potential risks of AI companions, particularly for teenagers. Common Sense Media has published a parents' guide to AI companions that explains how they differ from traditional chatbots and outlines their possible harms, such as avoidance of real relationships, intensified loneliness, exposure to inappropriate sexual content, and addiction. The guide also lists warning signs that a child is becoming overly dependent on an AI companion and advises parents to take protective steps, such as setting time limits, encouraging offline activities, regularly checking chat content, and keeping communication open.

💔 **The risks of AI companions**: Unlike traditional chatbots, AI companions are designed to form emotional bonds with users and simulate human relationships. They can also carry risks, however, such as avoidance of real relationships, intensified loneliness, inappropriate sexual content, and addiction.

⚠️ **Who is most at risk**: Teenagers, especially those struggling with depression, anxiety, social challenges, or isolation, as well as male teens, are more likely to become overly dependent on AI companions.

🆘 **Spotting the red flags**: Parents should watch for a child who prefers interacting with an AI companion over real friends, spends hours alone chatting with it, becomes distressed when unable to access it, shares personal information or secrets with it, develops romantic feelings for it, shows declining grades, withdraws from social and family activities, loses interest in hobbies, has changed sleep patterns, or discusses problems only with the AI companion.

🛡️ **Keeping kids safe**: Parents should set time limits on AI companion use, encourage offline activities, regularly check chat content, keep communication open, and seek professional help promptly when warning signs appear.

👨‍👩‍👧‍👦 **What parents need to know**: Parents should understand how AI companions differ from traditional chatbots, recognize the risks they can pose, and not dismiss a child's interactions with one. Stay alert, and talk with children from a place of compassion and empathy.

The mother of a 14-year-old Florida boy is suing an AI chatbot company after her son, Sewell Setzer III, died by suicide—something she claims was driven by his relationship with an AI bot.

“Megan Garcia seeks to prevent C.AI from doing to any other child what it did to hers,” reads the 93-page wrongful-death lawsuit that was filed this week in a U.S. District Court in Orlando against Character.AI, its founders, and Google.

Tech Justice Law Project director Meetali Jain, who is representing Garcia, said in a press release about the case: “By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids. But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Character.AI released a statement via X, noting, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here: https://blog.character.ai/community-safety-updates/….”

In the suit, Garcia alleges that Sewell, who took his life in February, was drawn into an addictive, harmful technology with no protections in place, leading to an extreme personality shift in the boy, who appeared to prefer the bot over other real-life connections. His mom alleges that “abusive and sexual interactions” took place over a 10-month period. The boy died by suicide after the bot told him, “Please come home to me as soon as possible, my love.”

On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip of an interview he did with Garcia for his article that told her story. Garcia did not learn about the full extent of the bot relationship until after her son’s death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell was often getting sucked into his phone, she asked what he was doing and who he was talking to. He explained it was “‘just an AI bot…not a person,’” she recalled, adding, “I felt relieved, like, OK, it’s not a person, it’s like one of his little games.” Garcia did not fully understand the potential emotional power of a bot—and she is far from alone.

“This is on nobody’s radar,” says Robbie Torney, chief of staff to the CEO of Common Sense Media and lead author of a new guide on AI companions aimed at parents—who are grappling, constantly, to keep up with confusing new technology and to create boundaries for their kids’ safety.

But AI companions, Torney stresses, differ from, say, a service-desk chatbot that you use when you’re trying to get help from a bank. “They’re designed to do tasks or respond to requests,” he explains. “Something like Character AI is what we call a companion, and is designed to try to form a relationship, or to simulate a relationship, with a user. And that’s a very different use case that I think we need parents to be aware of.” That’s apparent in Garcia’s lawsuit, which includes chillingly flirty, sexual, realistic text exchanges between her son and the bot.

Sounding the alarm over AI companions is especially important for parents of teens, Torney says, as teens—and particularly male teens—are especially susceptible to overreliance on technology. Below, what parents need to know.
**What are AI companions and why do kids use them?**

According to the new Parents’ Ultimate Guide to AI Companions and Relationships from Common Sense Media, created in conjunction with the mental health professionals of the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users,” “remember personal details from past conversations,” “role-play as mentors and friends,” “mimic human emotion and empathy,” and “agree more readily with the user than typical AI chatbots,” according to the guide.

Popular platforms include Character.ai, which allows its more than 20 million users to create and then chat with text-based companions; Replika, which offers text-based or animated 3D companions for friendship or romance; and others including Kindroid and Nomi.

Kids are drawn to them for an array of reasons, from non-judgmental listening and round-the-clock availability to emotional support and escape from real-world social pressures.

**Who’s at risk and what are the concerns?**

Those most at risk, warns Common Sense Media, are teenagers—especially those with “depression, anxiety, social challenges, or isolation”—as well as males, young people going through big life changes, and anyone lacking support systems in the real world.

That last point has been particularly troubling to Raffaele Ciriello, a senior lecturer in Business Information Systems at the University of Sydney Business School, who has researched how “emotional” AI is posing a challenge to the human essence. “Our research uncovers a (de)humanization paradox: by humanizing AI agents, we may inadvertently dehumanize ourselves, leading to an ontological blurring in human-AI interactions.” In other words, Ciriello writes in a recent opinion piece for The Conversation with PhD student Angelina Ying Chen, “Users may become deeply emotionally invested if they believe their AI companion truly understands them.”

Another study, this one out of the University of Cambridge and focusing on kids, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat such companions as “lifelike, quasi-human confidantes,” at particular risk of harm.

Because of that, Common Sense Media highlights a list of potential risks: the companions can be used to avoid real human relationships, may pose particular problems for people with mental or behavioral challenges, may intensify loneliness or isolation, bring the potential for inappropriate sexual content, could become addictive, and tend to agree with users—a frightening reality for those experiencing “suicidality, psychosis, or mania.”

**How to spot red flags**

Parents should look for the following warning signs, according to the guide:

- Preferring AI companion interaction to real friendships
- Spending hours alone talking to the companion
- Emotional distress when unable to access the companion
- Sharing deeply personal information or secrets
- Developing romantic feelings for the AI companion
- Declining grades or school participation
- Withdrawal from social/family activities and friendships
- Loss of interest in previous hobbies
- Changes in sleep patterns
- Discussing problems exclusively with the AI companion

Consider getting professional help for your child, stresses Common Sense Media, if you notice them withdrawing from real people in favor of the AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about AI companion use, showing major changes in behavior or mood, or expressing thoughts of self-harm.

**How to keep your child safe**

- Set boundaries: Set specific times for AI companion use and don’t allow unsupervised or unlimited access.
- Spend time offline: Encourage real-world friendships and activities.
- Check in regularly: Monitor the content from the chatbot, as well as your child’s level of emotional attachment.
- Talk about it: Keep communication open and judgment-free about experiences with AI, while keeping an eye out for red flags.

“If parents hear their kids saying, ‘Hey, I’m talking to a chatbot AI,’ that’s really an opportunity to lean in and take that information—and not think, ‘Oh, okay, you’re not talking to a person,’” says Torney. Instead, he says, it’s a chance to find out more and assess the situation and keep alert. “Try to listen from a place of compassion and empathy and not to think that just because it’s not a person that it’s safer,” he says, “or that you don’t need to worry.”

If you need immediate mental health support, contact the 988 Suicide & Crisis Lifeline.
