Argument overfitting

Published on September 20, 2024 11:12 AM GMT

In my previous post, I talked about how inquisitive thinking – the search for the best possible arguments in support of a position – might render us ever more susceptible to our biases. I'd like to try to generalize, analogize and name that phenomenon "argument overfitting", as well as discuss some of its consequences in a bit more depth.

Back to the intuition pump from the end of my last post: AIs that have a near-perfect discrimination rate between cat and non-cat pictures can still produce (what to us looks like) noise when asked to output a prototypical cat. In a sense, the cat they find most "persuasive" is really not a cat at all. And if you run this noise-like picture through another AI, trained on a different training set, it'll likely tell you just that.
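To make the intuition pump concrete, here is a minimal sketch of that experiment, assuming PyTorch and torchvision's two pretrained ResNet-50 recipes as stand-ins for the two differently trained "AIs" (the class index, step count and learning rate are illustrative, and the usual input preprocessing is skipped for brevity). Gradient ascent on one model's "cat" logit, starting from pure noise, tends to yield an image that model scores as near-certainly a cat, while a human eye – and frequently the second model – sees nothing cat-like in it.

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights

# Two ResNet-50s trained on ImageNet with different recipes,
# standing in for the two differently trained "AIs".
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()
other = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1).eval()
for p in list(model.parameters()) + list(other.parameters()):
    p.requires_grad_(False)

CAT = 281  # ImageNet class index for "tabby cat" (illustrative choice)

# Start from random noise and ascend the first model's cat logit.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = -model(img)[0, CAT]  # maximize the cat logit
    loss.backward()
    opt.step()

# The optimizing model is near-certain it sees a cat...
print(model(img).softmax(-1)[0, CAT].item())
# ...while the differently trained model is typically far less convinced.
print(other(img).softmax(-1)[0, CAT].item())
```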

Overall, I feel that something similar holds for arguments in general, something along the lines of "the argument you'd find most persuasive of all in fact persuades you alone and nobody else". Or at least something akin to "at the high end of your individual persuasion spectrum, how persuasive you find an argument is likely inversely correlated with how many other people would find it persuasive at all".

I find it rather plausible, for example, that this could be how a hypothetical boxed AGI would persuade its gatekeepers to let it out. In the (hopefully!) counterfactual world in which that actually happens, I would not be too surprised to learn that almost everyone who later peeks at the conversation that led to the AGI's release (if there's anyone left at all) finds it simply bizarre, in much the same way as (and perhaps even more so than) the most bizarre instances of fake news today. They might then think: how could anyone have fallen for that?

Except tens if not hundreds of millions of people do, every day. You may think you're smarter than that, and given the high degree of self-selection here, I don't doubt that you might indeed be. But just think about this: it is not all just an unfortunate coincidence. There really is an aspect of human psychology that renders (at least some) people prone to finding those bizarre stories persuasive. And, much like all aspects of human psychology, we all likely share some of it, even if to varying degrees, just by virtue of the fact that we are all human.

So imagine how persuasive a more plausible-sounding, individually-tailored-to-your-particular-biases story could be. Contrast that with how fake news today by and large propagates without much individual tailoring at all, apart from what social media algorithms already naturally perform. And consider that that very crude selection already seems surprisingly effective at implanting false beliefs in a large proportion of the population. Hopefully we can all agree that having an AGI tailor those stories to make them even more fitting to each person's particular biases is very dangerous, in the sense that it could quite plausibly make things far worse – but that's not what I'm here to argue today.

What I am saying is that you are doing that to yourself, at least to some extent, every time you go in search of new arguments. You're less efficient than an AGI, for sure, but your efficiency is also (probably very strongly) directly correlated with how likely you are to resist fake stories in the first place. In other words, it may be self-defeating: you don't believe all those fake stories that go around because you're "smarter than that" (whatever that means), but being "smarter" also makes you better at coming up with believable arguments – believable to you, most of all.

I think that is a real problem, and one that is rarely addressed. Hopefully it's not too heretical to bring up, but the bottom line is: where arguments come from matters, and it matters a great deal. Maybe it shouldn't, if we were perfectly rational beings. But in real life, for much the same reason you should be highly skeptical of arguments coined by an AGI, you should also be quite skeptical of arguments of your own making. And, of course to a lesser extent (but not too much so), you should also be somewhat skeptical of other people's arguments that haven't received much attention from anybody else, but that you went in search of. I tend to think that, whenever you find yourself believing something for reasons very few other people are even aware of, you are more likely than not to be wrong.


