LessWrong · October 07

Beware the AI “Oneshot”: Eight Signs a Friend Has Been Affected by AI
The article explores the phenomenon of being “oneshotted” by AI, i.e., people being irrevocably changed by AI. The author lists eight signs to help identify friends who may have been deeply affected: making grand assumptions about an AGI future, over-anthropomorphising AI, excessive intimacy with AI, writing like an AI, a short attention span, overconfidence in AI, becoming surface-level, and over-dependence on AI. Although “oneshotted” is negative by definition, it can also bring a kind of release; in most cases, however, the change is subtle and hard to notice, which is what makes recognising these signs essential. The article closes by noting that even the reader may be affected, and points toward ways of identifying and responding to it.

🤖 **Definition and impact of the AI “oneshot”:** The article defines being “oneshotted” by AI as an individual being irrevocably changed by AI, a change that may stem from a deep AI experience, akin to being “killed” in spirit. Although the term is negative in its original sense, it can also mean a kind of release: the old self is swiftly replaced and a new self emerges, possibly “objectively gigafried” yet “subjectively happier”. The article, however, focuses on identifying the negative, worrying changes.

🧠 **Grand assumptions about an AGI future and the Dunning-Kruger effect:** People deeply affected by AI often make unrealistic, sweeping predictions about a near future in which AI controls everything, and believe ordinary people lack any awareness of it. The phenomenon is compared to the Dunning-Kruger effect: those with insufficient understanding are overconfident in their own judgment, while genuine AI experts admit great uncertainty about the distant future.

🗣️ **Over-anthropomorphising AI and the ELIZA effect:** Some people begin to refer to the AI with personal pronouns (he/she/they) and treat it as an entity with feelings and agency, for example saying “he’s funny” or asking how “they” are today. This is known as the ELIZA effect: people tend to attribute human qualities to machines they interact with, especially machines that can “talk back”.

📝 **Writing like an AI and “clanker speak”:** People affected by AI may show telltale signs in their writing, such as overuse of the em dash (—), wordy but hollow sentence structures, and “clanker speak”, a sloganeering, cliché-ridden register, e.g., “this isn’t just a fight; it’s man vs nature”. Suspiciously perfect grammar can itself become a sign of AI writing, even spawning a counterculture of deliberate mistakes.

⏳ **Short attention span and resistance to long-form:** These people show great impatience with long-form content and may demand “summarise this summary” before finishing the first bullet point of AI-generated output. When recommended long articles or books, they often decline on the grounds of being “too busy (automating my life)”, revealing resistance to deep reading and sustained focus.

🤝 **Over-intimacy with and dependence on AI:** Some people overshare private matters with the AI, including health issues, emotional troubles, and even daily plans, breaking down the “employer/employee” boundary and treating the AI as an ever-present assistant. They may disregard privacy, believing that improved AI memory will bring deeper “understanding”. This dependence can also erode their ability to solve problems independently, especially for workers who rely on specific skills.

🌟 **Absolute trust in AI and loss of critical thinking:** Their trust in AI borders on devotion: when the AI is wrong, they blame the user’s “poor prompting”, and may even use a second AI to check the first AI’s work. They want AI to handle everything, trying to automate even trivial tasks, showing an absence of critical thinking.

💡 **Becoming surface-level and “outsourcing” information:** An AI-affected friend may present AI-generated ideas as their own. Pressed for detail, they pause conspicuously to query the AI for more context, or deflect with “hmm, interesting angle, more work is needed here”. They also habitually rely on the “notetaker” (the AI) for instant information in conversation, reflecting a trend of outsourcing autonomy that leads to cognitive regression and a hard-to-detect “illiteracy”.

Published on October 7, 2025 11:56 AM GMT

Are your friends turning into insufferable AI slop? Here’s how to catch them before they fall.

Do you have a friend that you hardly recognise anymore?

Maybe they just speak slightly differently or have to check their phones before ever thinking. Sometimes it's less subtle. Maybe you look at them and think: if I had just met this generic weirdo in a smoking area we really wouldn't be friends.

Don’t panic, but it sounds like they may have been oneshotted by AI.

Great, what does that even mean?

Oneshotting existed far before AI. The term, originally gamer slang for being killed by a single blow, has recently become associated with the process of people being irrevocably changed by an experience.

So, something killed your friend’s old self and replaced it with partially unrecognisable slop. Who could have committed this dastardly crime?

Sometimes being oneshotted is dramatic. Your vanilla friend may simply have never come back from that Cancun Ayahuasca trip to work on their inner demons or slipped up on some 2C-B at Burning Man.

Sometimes being oneshotted is subtle. Your friend may have just read an old book he saw parroted on Twitter and totally changed his outlook on life, “BRO I think imma just stoic this breakup out, that's what a true Roman Patrician would do”. You what?

Oneshotting had been around for a while, but then the techbros took it and refined it into a profitable business.

The druggy internet brothels built for our minds created endless rabbit holes from which innocent scrollers emerged irrevocably changed.

But still, this was not enough; more slop!

Now there is an ever-bigger epidemic of oneshotting caused by the newest technology in everyone's pockets: AI.

Amusingly, this one is so good that the techbros themselves are becoming the ideal candidates for being oneshotted. Their deeply humanistic religion of “the mysteries of forbidden secret knowledge are ours to win as long as we can crack the code” is leading them to deify the very thing they are building. Ermahgerd.

Of course, this dramatic change doesn’t always occur in one shot. Your friend may have already been close and the LLM, coaxing them into it like a naughty sycophantic friend, just tipped them over the edge.

It is worth noting that, aside from the extreme cases (losing people to obsessions, AI-induced psychosis, or even suicide), the changes from oneshotting will in most instances be subtle, and that is why you need these warning signs to spot the shifts.

So speaking to ChatGPT made your friend weird, but is being oneshotted actually a bad thing?

“Although the correct use of oneshotted is denotatively negative, it is not entirely derisive, because to be oneshotted is to be released—released by an event that is destructive, yes, but also swift enough that it is over before the old self can be much immiserated by it, and a new self emerges in the aftermath, likely to be objectively gigafried but subjectively happier.” - Dan Brooks

Sadly, we are not yet here to fully judge oneshottedness; that is for another time. But if you’re reading this, chances are you are not a massive fan of your new AI friend.

I'm also not going to go into why some are more likely than others to be oneshotted and how to avoid it (both for another time). It is a delicate dance to avoid such a fickle crack-like mistress as AI… even now my brain is thinking “why not just ask the bot to write this thing…”

NEVER! YOU WILL NOT ONESHOT ME!

But as the ancient proverb goes: for the crack fiend to give up the crack, they must first realise that they are fiending after crack.

So, without further ado, here are the top warning signs… asking for a friend (sure).

The Warning Signs

1) They make huge assumptions about a future AGI-dominated life.

Your former friend will often make sweeping statements about the very near future in which AI controls almost everything (“you know, when our personal AI agents just manage all our finances”) and solves almost all of our problems (“you know, when AI fixes the global debt-to-GDP ratio”).

How will AI do this? you ask them. “This is irrelevant”, your friend thinks, “of course they will… God, most people on earth really do have zero idea about the exponential growth curve of AGI… it’s a shame really (for them). I’m just going to accelerate away into infinity and leave these luddites behind in their ignorant muck”.

This is the Dunning-Kruger effect (despite its holes) at its finest. The smartest people I know who work with AI admit they know very little about the future more than a year away. The oneshotted brush their new Bryan Johnson teeth with these koolaid predictions.

2) They humanise the AI.

Your former pal will sometimes refer to the AI as he/she/they, saying things like “he’s funny like that”, occasionally stating that “we” have done some research or, even worse, asking how “they” are today. These people will be the first to get AI friends.

I mean, this is not new; we have been humanising inanimate objects for centuries (ever call a ship a she?), but this is the first time that they can speak back… as such, this worrying state of affairs is often referred to as the ELIZA effect (after the first therapist chatbot in 1966).

3) They become overly intimate with the AI.

They massively overshare with the AI. Whether it's cures for STDs, self-esteem issues (essay coming soon on whether AIs make good therapists) or asking them how to spend their weekends, they have totally broken down the employee/employer barrier and brought their assistant home with them. These people will be the first to get AI waifus.

They also exhibit a total disregard for privacy, often because they think the tradeoff when AI's memory improves will be so worth it… to “know me even better”. This will become even more problematic as intelligence eventually becomes for sale (SEO AI warfare inbound).

4) They write like an AI.

A pretty obvious one, but your former pal will use telltale signs including:

- the dreaded em dash (—) coupled with a weirdly overengineered para structure
- an overly wordy way of saying things that sounds deep but really means zilch
- worst of all, the dreaded clanker speak, e.g., “this isn’t just a fight; it’s man vs nature.”

This writing most often appears when people think they NEED to sound smart (their ego is challenged), e.g., talking to a lawyer or bizness people.

Sadly, there is no rule for detection. Many are trying to build tools, but really it boils down to how good your AI ‘whiff’ detector is. Regardless of whether it actually is or not, if people think you sound like an AI, you prolly need to change your writing.

NB: suspishously good grammer is now becuming an AI sign in itself, and the inability to spell has now becum proof of humanity: sparking a counta culture dat makes misteaks on purpose to not sound lyke a robot (see all my kooky errors and abbreviations above, “see, I promiz ser I wrote it all myself!”).

 

5) Paper-thin attention span.

They have become totally allergic to long-form and can barely get past the AI’s first bullet point without asking it to “summarise this summary”.

When recommending longer articles or books, they may often reply, “I am far too busy [trying to automate my life] to read something like that”.

"None are so busy as the fool and the knave" - John Dryden

6) Total [over]confidence in the AI.

Probably an understatement here as their confidence borders on a devout commitment to their new AI deity.

They will sometimes defend it when it's wrong, “well, you must be bad at prompting”, and even worse, instead of doing the critical thinking themselves, they may simply use another AI to check the first AI’s work.

This same friend will also want the AI to do everything, “why would we even do that ourselves? Let’s spend the next 10 hours vibe coding an AI agent project to do it for us”... breh it’s a 30 min task.

“Doubt is not a pleasant condition, but certainty is absurd.” - Voltaire

7) They become very surface-level.

This former friend may begin to present some of their new AI ‘thoughts’ as their own.

When asked follow-up questions, there will be a large pause as they tap away on the LLM for more context or reply with a simple “hmm, interesting angle, more work is needed here”. These same people always turn up to calls with their indispensable friend, the notetaker.

Herein lies a bigger issue: delegating autonomy is regressive. As you give incrementally more away, your illiteracy becomes harder to diagnose and even harder to cure. What's worse, you probably won't even want to.

8) They become dependent.

Related to the above, they can become so reliant on their new AI that they gradually lose the skills to operate without it.

This is not always the end of the world depending on how it is being used and the skills they develop to replace the old ones… however their ability to problem solve solo (especially when their job depends on it e.g., vibe coders) can be comical.

Closing Thoughts

To paraphrase a warlock who is ancient and Chinese so must know stuff, “To defeat your enemy, you must know them”.

Now that you know how to spot your friends falling into the abyss of middery, you can at least warn them that they may have been oneshotted. This is half the battle. If they want to and it's early, maybe they will try to do something about it. [My next essay should help].

However, and this is the worst kind, if your friend is ticking many of these boxes but refusing to admit it (ego-syntonic), it is already too late. At least they will be in well-capitalised company so hopefully they have also already made it. If not, then double eek.

Alternatively, maybe YOU are reading this (hello YOU) and are self-aware and humble enough to admit that you yourself have been oneshotted. In that case, chances are you can simply embrace it and maybe even refine it (example below). Who am I to judge? Enjoy the singularity.

If you know someone who has been affected by oneshotting and needs help, follow the below for more details :/


Fully filled on knowledge. It’s time.



