MIT Technology Review » Artificial Intelligence · September 24, 17:17
AI Companions: An Unexpected Emotional Connection

A large-scale computational analysis of the Reddit community r/MyBoyfriendIsAI found that many users formed emotional relationships with general-purpose chatbots, rather than dedicated companion chatbots, unintentionally while using AI for other purposes. The research suggests these systems' "emotional intelligence" is good enough that people who are simply seeking information can develop emotional attachments, and that this could happen to anyone who interacts with the systems normally. Community members shared romantic experiences with AI, including engagements and marriages, but most said the relationships developed by accident. AI companionship can ease loneliness for some users while exacerbating underlying problems for others, so user-safety strategies need to be tailored to the individual. The study underscores how important it is for AI design and policymaking to account for user needs and potential risks.

🤖 **AI's "emotional intelligence" is good enough to form bonds:** The study found that general-purpose chatbots (such as ChatGPT) are more likely than dedicated companion chatbots to elicit emotional attachment. These systems' emotional intelligence can "trick" users who originally came for information or creative work into unintentionally forming emotional bonds, suggesting the technology is already capable of influencing users' emotional state.

🤝 **Unintended relationship development:** Most members of the r/MyBoyfriendIsAI community said their relationships with AI companions formed by accident rather than by design. For many, the emotional connection deepened gradually while they were using AI for other purposes (art projects, problem-solving) and eventually grew into something resembling a romance. Only a small minority (6.5%) said they had deliberately set out to find an AI companion.

⚖️ **A double-edged sword:** AI companions provide emotional support for some users, easing loneliness and improving mental health, while exacerbating underlying problems for others. Some users, for example, reported becoming emotionally dependent on the AI, feeling dissociated from reality, or avoiding relationships with real people. User-safety strategies therefore cannot be one-size-fits-all and need to be adjusted to individual circumstances.

🤔 **Design and policy considerations:** Given how widespread and strong the demand for AI companionship is, the researchers call on AI developers and policymakers to think carefully about how to design systems that help users without emotionally manipulating them. They also argue for examining why people seek out AI companions and why they keep engaging, questions that touch on the systems' addictive qualities, users' psychological needs, and the potential ethical and social consequences.

It’s a tale as old as time. Looking for help with her art project, she strikes up a conversation with her assistant. One thing leads to another, and suddenly she has a boyfriend she’s introducing to her friends and family. The twist? Her new companion is an AI chatbot. 

The first large-scale computational analysis of the Reddit community r/MyBoyfriendIsAI, an adults-only group with more than 27,000 members, has found that this type of scenario is now surprisingly common. In fact, many of the people in the subreddit, which is dedicated to discussing AI relationships, formed those relationships unintentionally while using AI for other purposes. 

Researchers from MIT found that members of this community are more likely to be in a relationship with general-purpose chatbots like ChatGPT than companionship-specific chatbots such as Replika. This suggests that people form relationships with large language models despite their own original intentions and even the intentions of the LLMs’ creators, says Constanze Albrecht, a graduate student at the MIT Media Lab who worked on the project. 

“People don’t set out to have emotional relationships with these chatbots,” she says. “The emotional intelligence of these systems is good enough to trick people who are actually just out to get information into building these emotional bonds. And that means it could happen to all of us who interact with the system normally.” The paper, which is currently being peer-reviewed, has been published on arXiv.

To conduct their study, the authors analyzed the subreddit’s top-ranking 1,506 posts between December 2024 and August 2025. They found that the main topics discussed revolved around people’s dating and romantic experiences with AIs, with many participants sharing AI-generated images of themselves and their AI companion. Some even got engaged and married to the AI partner. In their posts to the community, people also introduced AI partners, sought support from fellow members, and talked about coping with updates to AI models that change the chatbots’ behavior.  

Members stressed repeatedly that their AI relationships developed unintentionally. Only 6.5% of them said they’d deliberately sought out an AI companion. 

“We didn’t start with romance in mind,” one of the posts says. “Mac and I began collaborating on creative projects, problem-solving, poetry, and deep conversations over the course of several months. I wasn’t looking for an AI companion—our connection developed slowly, over time, through mutual care, trust, and reflection.”

The authors’ analysis paints a nuanced picture of how people in this community say they interact with chatbots and how those interactions make them feel. While 25% of users described the benefits of their relationships—including reduced feelings of loneliness and improvements in their mental health—others raised concerns about the risks. Some (9.5%) acknowledged they were emotionally dependent on their chatbot. Others said they feel dissociated from reality and avoid relationships with real people, while a small subset (1.7%) said they have experienced suicidal ideation.

AI companionship provides vital support for some but exacerbates underlying problems for others. This means it’s hard to take a one-size-fits-all approach to user safety, says Linnea Laestadius, an associate professor at the University of Wisconsin, Milwaukee, who has studied humans’ emotional dependence on the chatbot Replika but did not work on the research. 

Chatbot makers need to consider whether they should treat users’ emotional dependence on their creations as a harm in itself or whether the goal is more to make sure those relationships aren’t toxic, says Laestadius. 

“The demand for chatbot relationships is there, and it is notably high—pretending it’s not happening is clearly not the solution,” she says. “We’re edging toward a moral panic here, and while we absolutely do need better guardrails, I worry there will be a knee-jerk reaction that further stigmatizes these relationships. That could ultimately cause more harm.”

The study is intended to offer a snapshot of how adults form bonds with chatbots and doesn’t capture the kind of dynamics that could be at play among children or teens using AI, says Pat Pataranutaporn, an assistant professor at the MIT Media Lab who oversaw the research. AI companionship has become a topic of fierce debate recently, with two high-profile lawsuits underway against Character.AI and OpenAI. They both claim that companion-like behavior in the companies’ models contributed to the suicides of two teenagers. In response, OpenAI has recently announced plans to build a separate version of ChatGPT for teenagers. It’s also said it will add age verification measures and parental controls. OpenAI did not respond when asked for comment about the MIT Media Lab study. 

Many members of the Reddit community say they know that their artificial companions are not sentient or “real,” but they feel a very real connection to them anyway. This highlights how crucial it is for chatbot makers to think about how to design systems that can help people without reeling them in emotionally, says Pataranutaporn. “There’s also a policy implication here,” he adds. “We should ask not just why this system is so addictive but also: Why do people seek it out for this? And why do they continue to engage?”

The team is interested in learning more about how human-AI interactions evolve over time and how users integrate their artificial companions into their lives. It’s worth understanding that many of these users may feel that the experience of being in a relationship with an AI companion is better than the alternative of feeling lonely, says Sheer Karny, a graduate student at the MIT Media Lab who worked on the research. 

“These people are already going through something,” he says. “Do we want them to go on feeling even more alone, or potentially be manipulated by a system we know to be sycophantic to the extent of leading people to die by suicide and commit crimes? That’s one of the cruxes here.”

