少点错误 · September 8

The Myth of "Dehumanization": Intuition and Rationality in Decision-Making

This article examines what "dehumanization" actually means, arguing that it is better described as a failure to humanize than as an active act of dehumanizing. The author notes that human decision-making is driven mostly by intuition (System 1), with rationality (System 2) accounting for only a small share. This intuition-driven mode is a product of evolution and inclines us toward sympathy for the "in-group" and relative indifference to the "out-group." The article further argues that many seemingly rational arguments are in fact rationalizations of decisions already made intuitively, especially in politics and business; "dehumanizing" rhetoric may arise when no more acceptable rational explanation for a behavior can be found, rather than being what drives the decision. Instead of focusing on dehumanizing rhetoric, the author suggests intervening at the level of the social network: reducing segregation between groups and promoting economic equity to lower intergroup violence and conflict, which is more effective than trying to make everyone rational.

💡 **Intuition-dominated decisions and the reality of "failing to humanize"**: The article points out that the vast majority of human decisions are driven by intuition (System 1) rather than rationality (System 2), a pattern shaped by evolution. We therefore rarely "dehumanize" others actively; instead we fail to humanize the out-group, favoring our own group while remaining indifferent to outsiders, for example turning a blind eye to the brutal labor behind cell phone production. This indifference is the default state and requires no extra effort.

⚖️ **Rational justification as rationalization**: Many arguments and opinions are in fact rationalizations of decisions already made intuitively. People tend to invent rational explanations for their behavior even when it was not based on reason. In politics, for example, people tend to support their own party's policies and, when asked why, defend them with seemingly rational reasons while greatly underestimating the influence of group affiliation (the bandwagon effect). Rational thinking thus mostly serves existing positions rather than making decisions on its own.

🤝 **The effectiveness of network-level interventions**: Since dehumanizing rhetoric may be only a symptom of deeper problems, and getting individuals to reason their way out of bias does not scale, the article argues that intervening at the level of the social network is more effective. Options include keeping groups from meeting at all (morally problematic and liable to produce implicit violence), mixing groups and promoting contact (such as mixed stands for rival fans), and, most importantly, economic equity: reducing inequality and poverty. These measures lower intergroup tension and conflict at the root.

🧠 **The cost of humanizing and in-group preferences**: Even deliberate, rational efforts to humanize carry a high cognitive cost and are often shaped by in-group preferences; for example, people empathize more readily with conflicts that matter to their own group's values. When people are tired or under stress, the prefrontal cortex, which supports rational thought, works less well, making it even harder to override entrenched group biases. Expecting everyone to empathize rationally when doing so does not align with their group's interests is therefore unrealistic.

Published on September 7, 2025 10:45 PM GMT

I don't think dehumanization is actually a thing.

More precisely, I think it's almost always better to think of “failures to humanize” rather than of “dehumanizing”.

Since the seventies, we've known that rational thinking mediates only a small proportion of our decisions. In his book Thinking, Fast and Slow, Daniel Kahneman showed how people make decisions in two different ways:

    System 1: Intuitive, automatic, quick, implicit, and habitual.
    System 2: Rational, costly, slow, explicit, and infrequent.

Only a small minority of our decisions are rational. The rest of our decision-making automatically follows patterns that proved useful in our evolution. Some heuristics became problems when our environment changed and they were no longer adaptive.

When our brains evolved, our communities were small. There was no need to empathize with outsiders. With scarce resources, sharing with others could be harmful to one's own group.

Despite the world having changed, not treating the "out-group" as we treat our own remains wired in our brains. This easily allows us to enjoy cell phones without thinking about the brutal practices in mineral exploitation that help keep our phones cheap.

Not humanizing the "out-group" is actually the default and requires no active effort. Despite this, we speak of "dehumanizing" as an action, when it's usually better to look at the problem the other way around.

The rational consensus of treating humans well

Throughout history, human rationality has produced treaties on treating everyone equally, especially since modernity. In practice, these implicitly propose treating everyone as we would treat our "in-group."

When I hear people asking to treat others like human beings, what I interpret is: treat others how we would treat human beings if we met Enlightenment philosophers' moral standards. Otherwise, I find the phrase meaningless, because it's impossible not to treat a human like a human.

Enlightenment philosophers thought we were fundamentally rational, but we now know this to be false. Even so, modern states and human rights organizations are founded on Enlightenment intuitions.

It seems true that we treat people better when behaving "rationally." There's evidence that we empathize more with others when making the rational effort to imagine their emotions. Asking ourselves if what we're doing is right and then explicitly choosing what to do increases the probability of treating people in the "out-group" as we would treat people in the "in-group." Unfortunately, for most people, reason almost never makes the choices.

Most arguments are rationalizations

On top of that, when we make a heuristic decision and are asked why we made it, we tend to respond with a rationalizing argument.

Suppose someone asks us what we think about a public policy defended by our political party. Most people would say they agree. Most people would also disagree with the same policy when told it's defended by the opposing party. This reveals that a key factor in making that decision is the bandwagon effect among the in-group.

However, when asked why they agree or disagree with the policy, most people give explanations about why they think it's good or bad, while grossly underestimating the importance of the bandwagon effect.

This effect generalizes. Even though most decisions depend on heuristics, people tend to give explanations as if decisions had been rational. In fact, they usually end up convinced that the decisions were actually founded on rational thought in the first place.

Moral psychologist Jonathan Haidt hypothesizes that most of our rational thinking serves to invent explanations, not to actually make decisions. From his perspective, rationalizations help us justify our actions to others.

Taking this into account, when focusing on "dehumanizing discourse," we might actually be focusing on rationalizations of previous heuristics. If so, we would be looking at effects, not drivers, of social violence—in other words, addressing symptoms instead of actual causes of the problems.

I think that most dehumanizing discourse actually stems from situations in which a heuristic decision has already harmed people in the out-group, an explanation is demanded, and no more palatable rationalization is available.

For example, tragedies always happen. I could donate much more than I actually do. No one asks for explanations, but if they did, I could argue that it's impossible to help everyone, or that some problems are systemic and require systemic solutions. I don't need to compare anyone to an animal to give explanations.

Some businesses rely on third-world semi-slave labor. There are open-pit mines that corrode human lungs and sweatshops that steal the childhood of kids. When held accountable, businessmen can give economic arguments to show that someone else would do the same if it weren't them. Again, no need for animal comparisons.

However, when a journalist asks a minister about bombing innocent children, it's hard to give an explanation that sounds nice. Suddenly, the only reasonable explanation the minister finds is to compare the victims to savage dogs.

At the psychological level, the three cases are similar. The minor inconvenience of losing money, making less money, or worrying about a conflict feels more important than the huge impact these decisions have on distant people. They look different when giving explanations, but that isn't what really mattered when making the decision. The default is not to worry about others' tragedies, and this is only sometimes rationalized through "dehumanization."

When the minister claims that children are dogs, the damage has already been done.

Humanizing is hard

Well-meaning efforts to fight "dehumanization" on the rational level have not scaled. They tried and failed to stop the Holocaust, the Armenian genocide, the Cambodian genocide, the Holodomor, the Bengal famine, etc. Large-scale responses that require generalized attention and rational thought frequently fail to happen, because rational thought is costly and infrequent.

I think that the cases in which we empathize with people in the out-group are mostly explained by causes that our in-group cares about.

In his book Behave, Robert Sapolsky cites an observation about implicit racism. When seeing people from an out-group, most humans show activation in the amygdala, a part of the brain usually related to stress responses. However, in people who identify as progressive, the prefrontal cortex sends a signal to the amygdala that stops the stress reaction. This doesn't happen as much in people whose in-group doesn't value tolerance.

The prefrontal cortex is related to rational thought and tends to deactivate when we're tired or hungry.

From an evolutionary perspective, and building on Jonathan Haidt's observation, it seems that even the rational overriding of our in-group bias depends on our in-group preferences. I find it curious that most of my friends empathize with either Israelis or Palestinians, while none of them have strong feelings about ethnic conflicts in Myanmar.

I suspect that making the rational effort to empathize with others when this doesn't follow our in-group's values is extremely costly. Even though we can push ourselves and our friends to do it, we can't expect it to scale. It goes against our heuristics and incentives, and we can't expect everyone to just do it.

So if dehumanization is not a thing, and making an effort to humanize doesn't scale, can we do something?

Network-level interventions

Many of our heuristics react predictably to our social network. Maybe it's easier to change the social network in a way that reduces violent or harmful heuristics than to change hard-wired patterns in our brains.

For example, Robert Sapolsky mentioned three mechanisms that proved successful in mitigating violence between groups. The first two are morally problematic.

The first solution is simple: If groups don't meet, friction doesn't arise. This is the preferred option for racists and xenophobes. Even though it can mitigate explicit violence, it doesn't necessarily stop implicit forms of violence like ghettos or segregation.

The second solution is assimilation. When populations of different groups are mixed so that most of everyone's neighbors are from other groups, violence levels drop as well. It's as if virtual frontiers between groups are dissolved. For example, violence at soccer matches is more common when fans are separated than when there are mixed stands.

Unfortunately, Thomas Schelling's work shows that it's hard for full assimilation to emerge naturally. The preference of not being a small minority among our own neighbors is enough for partial segregation to emerge (with "patches" full of people of the same group).
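This dynamic is easy to see in simulation. Below is a minimal sketch of a Schelling-style segregation model, not Schelling's original formulation; the grid size, empty-cell fraction, and 30% tolerance threshold are illustrative assumptions. Each agent merely refuses to be a small minority in its own neighborhood, yet same-group patches emerge anyway.

```python
import random

# Minimal Schelling-style segregation sketch (hypothetical parameters).
SIZE = 30        # grid is SIZE x SIZE, wrapping at the edges
EMPTY = 0.1      # fraction of empty cells
THRESHOLD = 0.3  # minimum fraction of same-group neighbors an agent tolerates

def init_grid():
    grid = [[None] * SIZE for _ in range(SIZE)]
    for x in range(SIZE):
        for y in range(SIZE):
            if random.random() >= EMPTY:
                grid[x][y] = random.choice((1, 2))  # two groups, roughly equal
    return grid

def same_group_fraction(grid, x, y):
    group, same, total = grid[x][y], 0, 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            n = grid[(x + dx) % SIZE][(y + dy) % SIZE]  # toroidal wrap-around
            if n is not None:
                total += 1
                same += (n == group)
    return 1.0 if total == 0 else same / total

def sweep(grid):
    """Move every unhappy agent to a random empty cell; return how many moved."""
    unhappy = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if grid[x][y] is not None
               and same_group_fraction(grid, x, y) < THRESHOLD]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    random.shuffle(unhappy)
    moved = 0
    for (x, y) in unhappy:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None
        empties.append((x, y))  # the vacated cell becomes available
        moved += 1
    return moved

grid = init_grid()
for step in range(200):
    if sweep(grid) == 0:
        break  # everyone is content, yet same-group "patches" have formed
```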

The third mechanism Sapolsky suggests is economic equity. When inequality and/or poverty decrease, it's less likely that brutality between groups will emerge.

Human violence is usually associated with the perception of hierarchies. We share the heuristic of being violent to guard our rank with orangutans and chimpanzees. But human hierarchies are the largest in the animal kingdom, and extreme inequality drives violence levels to rise.

I wrote this because I suspect that fixing our attention on "dehumanizing" discourse may be futile. If it emerges from upstream causes, we shouldn't try to dissipate the smoke to stop a fire.

When trying to address social-level biases, it may be easier to address the root influences in the social network than to expect everyone to behave rationally.


