AI Snake Oil · September 12
AI Snake Oil: A Guide to Separating AI Hype from Reality

AI Snake Oil examines the current state of artificial intelligence and stresses the importance of separating AI hype from real applications. The authors argue that although AI plays a positive role in many fields, both its potential and its risks are often exaggerated. The book distinguishes predictive AI from generative AI and takes a cautious view of the former, arguing that it can lead to unjust decisions about people. The authors' goal is to provide the foundational knowledge readers need to understand AI, recognize where it genuinely delivers value, and view its development from a pragmatic perspective that avoids extremes, so that AI's role in society can be seen more clearly.

📚 **Separating AI's real value from the hype**: The core of the book is helping readers identify where AI actually delivers value rather than being misled by overblown marketing. The authors note that many beneficial AI applications are already taken for granted, while the "AI" label tends to stick to technologies that are still immature or controversial. By providing foundational knowledge, the book aims to equip readers to tell genuine AI progress from claims that merely sound impressive.

⚖️ **The risks of predictive AI and the double-edged nature of generative AI**: The book is deeply wary of "predictive AI" used to forecast individuals' future behavior and outcomes, warning that it can lead to discrimination and injustice in hiring, healthcare, criminal justice, and other domains. "Generative AI", by contrast, is seen as a technology with positive long-term potential alongside real risks of misuse; the authors liken its rollout to handing everyone in the world a free buzzsaw, underscoring how complicated and chaotic its use can be.

🚀 **A pragmatic vision for AI that avoids extreme narratives**: The authors' vision for AI is not utopian; it emphasizes building reliable tools that quietly improve our lives. They identify with none of the three seemingly warring camps of AI safety, e/acc (effective accelerationism), and AI ethics, and argue that the AI debate should not be so polarized. The book offers an evidence-based view of AI's development that rejects both doomsaying and excessive optimism, and encourages attention to AI's actual impacts and to common ground on regulation.

The AI Snake Oil book was published last week. We’re grateful for the level of interest — it’s sold about 8,000 copies so far. We’ve received many questions about the book, both its substance and the writing process. Here are the most common ones.

Why don’t you recognize the benefits of AI?

We do! The book is not an anti-technology screed. If our point was that all AI is useless, we wouldn’t need a whole book to say it. It’s precisely because of AI’s usefulness in many areas that hype and snake oil have been successful — it’s hard for people to tell these apart, and we hope our book can help.

We also recognize that the harms we describe are usually not due to technology alone; far more often, AI acts as an amplifier of existing problems in our society. A recurring pattern we point out in the book is that “broken AI is appealing to broken institutions” (Chapter 8).

What’s your optimistic vision for AI, then?

There’s a humorous definition of AI that says “AI is whatever hasn’t been done yet”. When an AI application starts working reliably, it disappears into the background of our digital or physical world. We take it for granted. And we stop calling it AI. When a technology is new, doesn’t work reliably, and has double-edged societal implications, we’re more likely to call it AI. So it’s easy to miss that AI already plays a huge positive role in our lives.

There’s a long list of applications that would have been called AI at one point but probably wouldn’t be today: robot vacuum cleaners, web search, autopilot in planes, autocomplete, handwriting recognition, speech recognition, spam filtering, and even spell check. These are the kinds of AI we want more of — reliable tools that quietly make our lives better.

Many AI applications that make the news for the wrong reasons today — such as self-driving cars due to occasional crashes — are undergoing this transition (although, as we point out in the book, it has taken far longer than developers and CEOs anticipated). We think people will eventually take self-driving cars for granted as part of our physical environment. 

Adapting to these changes won’t be straightforward. It will lead to job loss, require changes to transportation infrastructure and urban planning, and have various ripple effects. But it will have been a good thing, because the safety impact of reliable self-driving tech can’t be overstated.

What’s the central message of the book?

AI is an umbrella term for a set of loosely related technologies and applications. To answer questions about the benefits or risks of AI, its societal impact, or how we should approach the tech, we need to break it down. And that’s what we do in the book. 

We’re broadly negative about predictive AI, a term we use to refer to AI that’s used to make decisions about people based on predictions about their future behavior or outcomes. It’s used in criminal risk prediction, hiring, healthcare, and many other consequential domains. Our chapters on predictive AI have many horror stories of people denied life opportunities because of algorithmic predictions.

It’s hard to predict the future, and AI doesn’t change that. This is not because of a limitation of the technology, but because of inherent limits to predicting human behavior, limits grounded in sociology. (The book owes a huge debt to Princeton sociologist Matt Salganik; our collaboration with him informed and inspired the book.)

Generative AI, on the other hand, is a double-edged technology. We are broadly positive about it in the long run, and emphasize that it is useful to essentially every knowledge worker. But its rollout has been chaotic, and misuse has been widespread. It’s as if everyone in the world has simultaneously been given the equivalent of a free buzzsaw.

What else is in the book?

See the overview of the chapters here.

Isn’t your book going to be outdated soon?

We know that book publishing moves at a slower timescale than AI. So the book is about the foundational knowledge needed to separate real advances from hype, rather than commentary on breaking developments. In writing every chapter, and every paragraph, we asked ourselves: will this be relevant in five years? This also means that there’s very little overlap between the newsletter and the book. 

There seem to be three warring camps: AI safety, e/acc, and AI ethics. Which one are you in?

The AI discourse is polarized because of differing opinions about which AI risks matter, how serious and urgent they are, and what to do about them. In broad strokes:

- The AI safety camp focuses on catastrophic, even existential, risks from future, more powerful AI systems.
- The e/acc (effective accelerationism) camp holds that the benefits of rapid AI development far outweigh its risks and resists efforts to slow it down.
- The AI ethics camp focuses on AI's present-day harms, such as bias, discrimination, and threats to civil rights.

In the past, the two of us worked on AI ethics and saw ourselves as part of that community. But we no longer identify with any of these labels. We view the polarization as counterproductive. We used to subscribe to the “distraction” view but no longer do. The fact that safety concerns have made AI policy a priority has increased, not decreased, policymakers’ attention to issues of AI and civil rights. These two communities both want AI regulation, and should focus on their common ground rather than their differences.

These days, much of our technical and policy work is on AI safety, but we have explained how we have a different perspective from the mainstream of the AI safety community. We see our role as engaging seriously with safety concerns and presenting an evidence-based vision of the future of advanced AI that rejects both apocalyptic and utopian narratives.

How long did it take to write the book?

It depends on what one means by writing the book. The book is not just an explainer, and developing a book’s worth of genuinely new, scholarly ideas takes a long time. Here’s a brief timeline:

What was the writing process like?

Doing the bulk of the writing in a year required a lot of things to go right. Here’s the process we used.

We hope you like the end result. Let us know what you think, in the comments or on Amazon.
