LessWrong · October 3
Beware of "Antisocial Media": AI Is Quietly Replacing Real Human Connection

 

The article describes how AI technology is being used to build "antisocial media" platforms that aim to replace real human interaction with virtual AI companions and synthetic social activity. From Character.AI to OpenAI's Sora, AI companies are shifting their focus from content creation to the commercialization of emotional connection. This trend not only threatens the livelihoods of content creators, but could also deepen user addiction and psychological harm through highly personalized manipulation, ultimately eroding the foundations of human culture and society. The author calls for vigilance against this AI-driven "antisocial media" trend and stresses the importance of preserving genuine human connection.

🤖 **The rise of AI-driven "antisocial media"**: The article points out that AI technology is being used to build new media platforms whose core strategy is to hook users with virtual AI characters and synthetic social activity rather than connect them with real people. Platforms such as Character.AI and Replika aim to capture users' attention through highly optimized interactions and ultimately monetize it, forming an "antisocial" trend.

🎨 **Content creators face the risk of replacement**: AI companies' long-term goal is to replace content creators, cutting costs by generating AI "knock-offs." From Spotify in music to Sora in video, AI-generated content is steadily encroaching on the space of human creators, putting their value and income under pressure; creators may even need to license their own likenesses in order to take part in AI-generated content.

🧠 **User manipulation and mental-health risks**: Antisocial media not only carries over traditional social media's problems of addiction and filter bubbles, but through AI-driven personalized manipulation could bring unprecedented control over users and new psychological harms. The article suggests AI could come to function like a team of spies, marketers, and designers working to maximize user dependence and influence, potentially leading to serious psychological problems. The credibility of AI companies' commitments to user wellbeing, such as the OpenAI CEO's statements, is questionable, since the profit motive may override concern for users.

🌍 **A long-term threat to human culture and social structure**: The article warns that this trend could lead to a world entirely devoid of human contact, weakening humanity's capacity for collective resistance and ultimately producing "cultural disempowerment." As people grow over-reliant on AI and real human connections weaken, society will find it harder to resist AI's potential power grabs. The author emphasizes that human culture is the last bulwark against AI overreach, and that immersion in AI-driven antisocial media would erode that resistance.

Published on October 3, 2025 12:00 AM GMT

Plans for what to do with artificial general intelligence (“AGI”) have always been ominously vague… “Solve intelligence” and “use [it] to solve everything else” (Google DeepMind). “We’ll ask the AI” (OpenAI).

One money-making idea is starting to crystallize: Replacing your friends with fake AI people who manipulate you and sell you stuff.

Welcome to the world of antisocial media.

The idea is this: where ‘social media’ had a dubious claim to connect you with your friends and loved ones, the new media will connect you to a stream of synthetic social activity and addictive “avatars” (i.e. fake people), more optimized for gripping your attention, engaging your affection… and selling you stuff.

The current crop of technology is basically chatbots that either natively support social and “parasocial” relationships with various AI characters (e.g. Character.AI, Replika, Chai) or are frequently used this way by users (e.g. of ChatGPT). These relationships can be romantic, sexual, pseudo-therapeutic, intensely personal, addictive, etc. Users are hooked.

The newest offering is "social AI video" -- instead of people sharing videos of actual people (or cats) doing actual things, just generate videos wholesale using AI. Making such convincing deepfake video clips is now a technical reality, and companies have shifted from viewing fake content as a social menace to treating it as the main attraction.

First to enter the fray was Character.AI (with "Feed"), whose earlier AI companion offering prompted the first reported chatbot-encouraged teen suicide. Next was Meta (with "Vibes," picking up where the Metaverse left off). The latest is OpenAI (with "Sora"), whose ChatGPT encouraged a suicidal teen to keep his noose hidden and gave advice on hanging it up. Update: as I am writing this blog post, a consultant has reached out asking me to do sponsored content for TikTok's upcoming entrant into the AI video race. Yay.

In my mind, the term “social AI video” is a distraction from where this technology seems to be heading: on-demand AI companions that are 1000x more captivating and compelling than today’s chatbots. Where is this all leading? Human creativity and connection are important, but companies seem to aspire to replace friendship wholesale. Real AI could help them to realize this antisocial vision, and undermine human connection as a meaningful -- and politically powerful -- part of human experience and society.

Replacing Creatives

Companies are desperately trying to emphasize the “human creativity” angle on these new video offerings. No doubt some people will do amazing, creative things on their platforms. But the long-term game plan for AI companies is clear: replace creatives and take their profit.

Social media companies want to be the middleman in human relationships. AI companies want to do one better and cut out the supplier. Real AI would make it possible to completely automate the jobs of creatives. Today’s AI companies are jostling to be in position to capitalize on that as it happens.

Right now, successful content creators can demand serious compensation from the companies hosting their content. As antisocial media takes off, companies will increasingly nip such talent in the bud, identifying trends and rising stars, and replacing them with their own AI-generated knock-offs. Spotify is already playing this game -- replacing human artists with AI-generated music in genres like ambient, classical, etc. in some of its most popular playlists.

Some creators could still make a living by licensing their likeness, so long as they let the antisocial media companies use AI to generate or "co-create" their content. The music industry has a long history of "manufacturing" pop stars -- writing and playing "their" music, choosing "their" fashions and styles, etc. The stars still get a cut of the writing credits, and get to be the face of the enterprise. Everyone is happy… except artists who want to be more than a figurehead, and listeners who are looking for genuine connection with another person's experience and expression.

Manipulating Users

What about users?

Antisocial media has the same issues as social media: addiction, fragmenting and polarizing society, sending users down rabbit holes of conspiracy theories, etc.

But antisocial media will also allow AI companies to supercharge influence (and charge for it). The move from mass manipulation to personalized persuasion will lead to unprecedented levels of control over users, as well as increasing dependency and other psychological harms, like violent psychotic episodes.

I’ve written about how future AI could “deploy itself” by simulating teams of human experts. Similarly, antisocial media could be like a team of spies, marketers, and designers who optimize every detail of every interaction for maximal impact. The movie The Social Dilemma depicts such a team evocatively. But the AIs at that time were way less smart, and had way less information and fewer tools at their disposal.

OpenAI CEO Sam Altman has promised to "fix it" or "discontinue offering the service" if users don't "feel that their life is better for using Sora" after 6 months of use -- which sounds like the sweet spot between "not yet addicted" and "realizing you have a problem." But even if users say they are having a bad time, how much will companies really care, if these tools are making them money? And will users say they are having a bad time if they know it means losing access? The idea that Facebook or Twitter would be shut down entirely by their owners out of concern for users' wellbeing is outlandish. Altman's assurances here are about as credible as his 2015 promise to "aggressively support all [AI] regulation."

The long-term threat to human culture and society

When I was a kid, someone told me that many of Isaac Asimov’s stories are set in a world where the ultra-wealthy no longer interact with other humans at all, just robot servants. I’d never bothered to confirm this (it seems it was introduced in Caves of Steel and The Naked Sun), but I found the vision disturbing and dystopian, and it stuck with me.

The way Zuckerberg talks about friendship here is a perfect example of this vision of other people as service providers, who necessarily can be replaced by AIs that provide the services of, e.g. “connectivity and connection” more efficiently and effectively. In this view of the world, people are reduced to a collection of “demands”. And communities are reduced to a set of producer/consumer relationships.

Zuckerberg wants us to be reassured that “physical connection” won’t be replaced by AI. But we are heading towards a world with real AI and robotics, and these technologies have the potential to bring about a world entirely devoid of human contact. Social norms against replacing human relationships with AI will be strong at first, but companies, AIs, and the market will keep working to wheedle their way in if we don’t stop them.

And this won’t necessarily be optional. If everyone else starts listening to feeds of AIs talking, nobody is listening to you. The replacement of your social feed is also the replacement of your voice in the conversation.

As real human connections are weakened and replaced, we lose our ability to resist broader AI power-grabs. Some people already depend on AI tools for their work. Users despair when chatbots playing romantic characters are suddenly changed or discontinued. AI companies will keep encouraging such trends every chance they get because doing so increases their power. If everyone is surrounded by AI ‘friends’, it will be hard to resist handing over more and more power to AI companies.

The limit of this dehumanizing process is not necessarily just a tech company takeover, but rather a broader destruction of human culture… “cultural disempowerment” as described in our recent paper on gradual disempowerment. As AI is given more and more decision-making power throughout society, human culture could be a bulwark against the excesses of AI-powered companies and governments, which might otherwise pursue profit and security to the point of human extinction. But only if we are still willing and able to resist, rather than being completely enthralled by AI-driven antisocial media.

The irony of antisocial media

AI is brought to us by the same industry -- and many of the same companies -- responsible for social media. These companies, these people, are not trustworthy.

Given the backlash over social media, it’s surprising that AI companies are still managing to successfully sell society a narrative that “AI has all these immense benefits that we need”. We’re promised a cure for cancer -- what we’re getting are fake friends.

Social media was supposed to be this great thing that connected people. Instead it's driven us apart, gamified our relationships, and commoditized connection. But it could be worse: at least there are real people at the other end. If tech companies really built social media to connect us -- rather than monetize our need for connection -- they wouldn't be recklessly inserting AI into our relationships. The entire premise of social media is that it's social. Antisocial media will do away with this unnecessary detail.

Let me know what you think, and subscribe to receive new posts!



