AI Snake Oil (September 12)
AI Snake Oil: An Introduction to and In-Depth Look at the Book

*AI Snake Oil* aims to expose the gap between hype and reality in artificial intelligence. Through careful analysis, authors Arvind Narayanan and Sayash Kapoor help readers distinguish AI technologies that genuinely work from exaggerated claims. The book explains why AI is a confusing "umbrella" term and examines different applications in turn: predictive AI, generative AI, and AI used for social media content moderation. This post outlines the book's structure, covering the limitations of predictive AI, the difficulty of predicting the future, the history of generative AI, assessments of AI's existential risk, the challenges of content moderation, why myths about AI arise, and where AI snake oil goes from here. The book has drawn positive reviews from outlets including the New Yorker and Kirkus Reviews, and the post includes purchase links and event information.

📚 **The core goal of *AI Snake Oil*: separating AI hype from reality** Written by Arvind Narayanan and Sayash Kapoor, the book helps readers distinguish AI technologies that genuinely work from the exaggerated claims of "AI snake oil." Treating AI as an umbrella term covering many technologies, it examines predictive AI, generative AI, and other applications separately, offering a critical lens on AI's real capabilities and potential risks.

🔍 **The limitations and challenges of predictive AI** The post details where predictive AI falls short in practice, such as the "major real-world misfires" that occur when it is used to make predictions in hiring, finance, education, and other domains. Through case studies, the book examines why predictive AI fails to live up to its developers' promises, and explores the inherent difficulty of predicting the future, with or without AI, in complex settings such as individuals' life outcomes, the success of cultural products, or pandemics.

💡 **The history and future of generative AI** The book traces generative AI's enormous progress over the past decade and situates it within more than seventy years of computing history. By revisiting this trajectory, it helps readers better understand current achievements in generative AI and form more reasonable expectations about where the technology is headed. The post also touches on AI's future as a "normal technology" and its implications for work, regulation, and societal resilience.

⚠️ **Existential risk, content moderation, and the persistence of AI myths** *AI Snake Oil* also critically evaluates common claims about AI's existential risk, identifying flaws and fallacies in the popular discussion. It analyzes AI's use and limitations in social media content moderation, and explores why myths about AI, including exaggerations of its capabilities and reach, arise and persist. The authors encourage readers to read AI news with critical skepticism and to recognize attempts to sell "AI snake oil."

🌟 **Critical acclaim and broad impact** The book has received positive reviews from authoritative outlets including the New Yorker, Kirkus Reviews, Publishers Weekly, and Forbes, and has been named one of the must-read tech books of 2024. Reviewers highlight its value in separating fact from opinion, presenting clear arguments, and calling readers to action, finding it important for policymakers, professionals, and everyday users alike in approaching and using AI more carefully.

The first chapter of the AI snake oil book is available online. It is 30 pages long and summarizes the book’s main arguments. If you haven't ordered the book yet, we hope that reading the introductory chapter will convince you to get yourself a copy.

Update (September 2025): It has been a year since the release of AI Snake Oil. In the time since its release, the two of us have given talks, appeared on podcasts, published exercises to accompany the book, and written a new preface and epilogue for the paperback edition of the book. The book was included in Nature’s list of the 10 best books of 2024, Bloomberg’s 49 best books of 2024, and Forbes’s 10 must-read tech books of 2024. It has received many positive reviews, including in the New Yorker. We are grateful to readers of the book for engaging deeply with its ideas.

We have now started working on our next project together, AI as Normal Technology. The project picks up where AI Snake Oil left off: whereas AI Snake Oil was an attempt to understand the present and near-term impacts of AI, AI as Normal Technology is a framework to think about its future impacts. The new name of this newsletter reflects this change. We hope you will follow along.

The single most confusing thing about AI

Our book is about demystifying AI, so right out of the gate we address what we think is the single most confusing thing about it: 

AI is an umbrella term for a set of loosely related technologies

Because AI is an umbrella term, we treat each type of AI differently. We have chapters on predictive AI, generative AI, and AI used for social media content moderation. We also have a chapter on whether AI is an existential risk. We conclude with a discussion of why AI snake oil persists and what the future might hold. By AI snake oil we mean AI applications that do not (and perhaps cannot) work. Our book is a guide to identifying AI snake oil and AI hype. We also look at AI that is harmful even if it works well — such as face recognition used for mass surveillance.

While the book is meant for a broad audience, it does not simply rehash the arguments we have made in our papers or on this newsletter. We make scholarly contributions and we wrote the book to be suitable for adoption in courses. We will soon release exercises and class discussion questions to accompany the book.

What's in the book

Chapter 1: Introduction. We begin with a summary of our main arguments in the book. We discuss the definition of AI (and more importantly, why it is hard to come up with one), how AI is an umbrella term, what we mean by AI Snake Oil, and who the book is for. 

Generative AI has made huge strides in the last decade. On the other hand, predictive AI is used for predicting outcomes to make consequential decisions in hiring, banking, insurance, education, and more. While predictive AI can find broad statistical patterns in data, it is marketed as far more than that, leading to major real-world misfires. Finally, we discuss the benefits and limitations of AI for content moderation on social media.

We also tell the story of what led the two of us to write the book. The entire first chapter is now available online.

Chapter 2: How predictive AI goes wrong. Predictive AI is used to make predictions about people—will a defendant fail to show up for trial? Is a patient at high risk of negative health outcomes? Will a student drop out of college? These predictions are then used to make consequential decisions. Developers claim predictive AI is groundbreaking, but in reality it suffers from a number of shortcomings that are hard to fix. 

We have discussed the failures of predictive AI in this blog. But in the book, we go much deeper through case studies to show how predictive AI fails to live up to the promises made by its developers.
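To make concrete what "predictive AI" typically means in practice, here is a minimal illustrative sketch (our own, not from the book): many such products boil down to a statistical risk score over a handful of features, with a cutoff that triggers a consequential decision. The feature names, weights, and cutoff below are all hypothetical.

```python
# Illustrative sketch of a predictive-AI-style risk score (hypothetical
# features and weights, not any real product's model).
from dataclasses import dataclass

@dataclass
class Student:
    gpa: float           # grade point average, 0.0-4.0
    absences: int        # days absent this term
    credits_behind: int  # credits short of the on-track pace

def dropout_risk(s: Student) -> float:
    """Toy linear risk score mapped into [0, 1] — a stand-in for the
    logistic-regression-style models many predictive AI tools use."""
    score = 0.5 * (4.0 - s.gpa) + 0.05 * s.absences + 0.1 * s.credits_behind
    return min(1.0, max(0.0, score / 4.0))

# The consequential decision: flag students above a cutoff for intervention.
CUTOFF = 0.5
students = [Student(gpa=3.8, absences=2, credits_behind=0),
            Student(gpa=2.1, absences=15, credits_behind=9)]
flags = [dropout_risk(s) >= CUTOFF for s in students]
print(flags)  # [False, True]: only the second student is flagged
```

The simplicity is the point: systems marketed as groundbreaking AI often reduce to pattern-matching of this kind, which is why the book's case studies focus on the gap between the marketing and what such scores can actually deliver.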

Chapter 3: Can AI predict the future? Are the shortcomings of predictive AI inherent, or can they be resolved? In this chapter, we look at why predicting the future is hard — with or without AI. While we have made consistent progress in some domains such as weather prediction, we argue that this progress cannot translate to other settings, such as individuals' life outcomes, the success of cultural products like books and movies, or pandemics. 

Since much of our newsletter is focused on topics of current interest, this is a topic that we have never written about here. Yet, it is foundational knowledge that can help you build intuition around when we should expect predictions to be accurate.
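One way to build that intuition is a toy simulation (our illustration, not the authors'): when an outcome is largely driven by chance, even an oracle that knows each person's true underlying probability exactly cannot predict very accurately, because the remaining error is irreducible.

```python
# Hedged toy simulation: an oracle predictor that knows each individual's
# true outcome probability p still cannot beat the noise in the outcome.
import random

random.seed(0)
n = 100_000
correct = 0
for _ in range(n):
    p = random.random()            # true probability of the outcome, uniform
    outcome = random.random() < p  # the outcome itself is still a coin flip
    prediction = p >= 0.5          # best possible deterministic prediction
    correct += (prediction == outcome)

accuracy = correct / n
print(f"oracle accuracy: {accuracy:.2f}")  # around 0.75, far from 1.0
```

With p uniform on [0, 1], the oracle's expected accuracy is E[max(p, 1 - p)] = 0.75: no amount of better modeling can close the remaining gap. Domains like weather prediction improve because the signal is dense and feedback is fast; individual life outcomes look more like this simulation.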

Chapter 4: The long road to generative AI. Recent advances in generative AI can seem sudden, but they build on a series of improvements over seven decades. In this chapter, we retrace the history of computing advances that led to generative AI. While we have written a lot about current trends in generative AI, in the book, we look at its past. This is crucial for understanding what to expect in the future. 

Chapter 5: Is advanced AI an existential threat? Claims about AI wiping out humanity are common. Here, we critically evaluate claims about AI's existential risk and find several shortcomings and fallacies in popular discussion of x-risk. We discuss approaches to defending against AI risks that improve societal resilience regardless of the threat of advanced AI.

Chapter 6: Why can't AI fix social media? One area where AI is heavily used is content moderation on social media platforms. We discuss the current state of AI use on social media, and highlight seven reasons why improvements in AI alone are unlikely to solve platforms' content moderation woes. We haven't written about content moderation in this newsletter.

Chapter 7: Why do myths about AI persist? Companies, researchers, and journalists all contribute to AI hype. We discuss how myths about AI are created and how they persist. In the process, we hope to give you the tools to read AI news with the appropriate skepticism and identify attempts to sell you snake oil.

Chapter 8: Where do we go from here? While the previous chapter focuses on the supply of snake oil, in the last chapter, we look at where the demand for AI snake oil comes from. We also look at the impact of AI on the future of work, the role and limitations of regulation, and conclude with vignettes of the many possible futures ahead of us. We have the agency to determine which path we end up on, and each of us can play a role.

We hope you will find the book useful and look forward to hearing what you think. 

Early reviews

Book launch events

Podcasts and interviews

We’ve been on many other podcasts that will air around the time of the book’s release, and we will keep this list updated.

Purchase links

The book is available to preorder internationally on Amazon.
