Lorien Pratt · September 25
The Cyclical Problem in AI Development

AI development has not been smooth sailing; it has gone through repeated cycles of boom and bust. The author argues that over-hype in today's AI field is feeding irrational expectations, the pattern that leads to so-called "AI winters." These cyclical swings not only distort public perception of AI, they also hold back genuinely valuable AI applications. The article proposes five countermeasures: ground research in real-world needs, shift from an algorithm orientation to a problem orientation, move resources from new-algorithm research to implementation and integration, reward applied research, and insist on transparency around technology readiness levels. Together, these measures aim to help the AI field escape its boom-and-bust cycle and develop more sustainably.

🌱 AI has been through several boom-and-bust cycles; today's irrational expectations are setting up another "AI winter," a pattern that distorts public perception of AI and holds back genuinely valuable AI applications.

🔍 The article proposes five strategies for breaking the cycle: ground research in real-world needs, shift from an algorithm orientation to a problem orientation, move resources from new-algorithm research to implementation and integration, reward applied research, and insist on transparency around technology readiness levels.

🚀 On moving resources from new algorithms to implementation and integration, the author points to the H2O framework as an example of a practical AI tool that has delivered breakthrough results on real projects, while many data scientists stay focused on algorithm competitions rather than real applications.

🤝 On supporting applied research, the author notes that applied work was long dismissed as a job for the "B" students, and that the field now needs more first-rate practitioners driving real AI applications, not only more foundational research.

📊 On transparency around technology readiness levels, the author recommends adopting an ML-specific TRL model and requiring TRL disclosure in peer-reviewed publications, so that a TRL1 prototype is not mistaken for a TRL7 MVP and resources are not wasted.

🤖 On decision intelligence (DI), the author argues that DI systematizes the way the AI stack is connected to human stakeholders, addressing the common failure of AI projects that start from the data rather than from the problem.

The introduction of the cotton gin wasn’t accompanied by an entire genre of Hollywood movies dedicated to the gin “singularity”. Nor did we usher in the Golden Age of telecommunications with blockbuster killer “phone web” stories.

Artificial Intelligence is different. Like other disruptive technologies, it is having far-ranging effects, good and bad. But uniquely, quality AI information is clouded by the AI apocalypse narrative. If you google the field, you’ll be challenged to separate medical imaging wheat from AGI chaff. (Don’t tell anyone, friends, there’s no magic here, it’s just math.) AI alone is no more likely to take over the world than is your calculator. Well, unless it’s used as a deniability smokescreen: “It’s not my fault the killer robot smashed your house, it was the AI that did it”.

Honestly, what worries me most about AGI is the distraction it creates from the real ways that AI can make a massive positive difference in our lives. And the Winter/Summer AI cycle is a massive dampener.

AI winter system and status

AI hype is nonlinear: a bit of AI hype starts a flywheel effect, often pollinated by well-meaning journalists. AI hype is particularly mutant, which means that those of us trying to do some good in this world have faced a series of summer/winter cycles where Dutch tulip-like exuberance has led, inevitably, to a burst bubble.

This hit me in the face personally: after riding the 1980s AI wave, by 1995 just saying “artificial intelligence” out loud pigeonholed me as a fuddy-duddy, so I rebranded myself as an “analytics” and data expert for a decade or two.

Felix Hovsepian wrote a good set of CliffsNotes on the AI Winter story today, including pointers to hypebuster Roger Schank. And Mark Saroufim’s viral insider’s critique casts a stark eye on academic AI incrementalism and the underlying risk/economic dynamics that have broken our social contract with basic research. We’re stuck in a strange attractor, and we’ll get out either abruptly against our will, or we’ll remove the attractor altogether by getting real.

Top five ways to stop the summer/winter oscillation

    1. Ground everything in reality. For early research, if you can’t at least name the decision or use case that your data and model could or should support, then you haven’t done your homework, and you shouldn’t be published. For more advanced research, you must provide rigorous results, tested at scale, on a nontrivial problem (and yes, showing results on non-training data; believe it or not, I still have to say this).

    2. Shift from being solution-based and algorithm-focused to being problem-based and decision-focused. As @thingskatedid puts it, “computers are magnificent, incredible achievements. unfortunately we run software on them.” Which breaks, a lot, increasingly in ML, without an engineering discipline (including design, planning, construction, and QA) that teaches us how to stop that, and which starts with ensuring that systems are “fit for purpose”.

    3. Shift resources from new algorithms to implementation and integration. I use the simple and powerful H2O framework (with R for orchestration) for most of my applied AI work, and it’s plenty, having given my projects breakthrough results dozens of times over the years. It feels like most of my clients are just trying to drive to the corner store, yet most data scientists I’ve met are trained as Formula 1 (ahem, TensorFlow) mechanics, trying to win the next Kaggle competition. The diminishing return curve from this sort of work—compared to serious productization, AI orchestration, MLOps, and ML-specific software engineering strategies—was crossed long ago. (A minimal sketch of this kind of workaday setup appears after this list.)

    4. Along these lines, support new incentives to reward and applaud applied—or at least use-inspired—research. This is harder than foundational AI, and it more desperately needs good practitioners. It’s been a dirty name—for the “B” students—until now, and this needs to change.

    5. Insist on transparency around Technology Readiness Levels (TRLs) when you read, write, cover, or review ML stories. Has this algorithm/system been proven in the lab or in the field? On a toy problem or at scale? By one or by thousands? By academics alone, or is anybody making money off of it? As I’ve spoken about a lot, we will waste resources and go astray if we mistake a TRL1 prototype for, say, a TRL7 MVP. My ML customers have fallen into this rabbit hole more times than you’d like to know, and I have to gently tell them “No, I’m sorry, but that reinforcement learning / genetic algorithm / <fill in favorite sexy AI-related tech> is not mature enough for you to profit from it within the next five years”.

    So yeah, sex sells. Even in the nerdy halls of backpropagationdom.

    Alexander Lavin and Gregory Renard have a great ML-specific TRL model, which I suggest be adopted by all ML peer-review publications, along with a TRL disclosure rule. (A toy sketch of what such a disclosure could look like also follows the list.)

    Finally, embrace and support the emerging field of decision intelligence (DI), which democratizes and systematizes the way that we connect the AI stack to human stakeholders. It’s unnecessarily gnarly today to stand up a new AI project—or even to figure out where AI fits into a situation—so we tend to fall back on starting with the data (the solution) instead of a potential end user’s problem. DI fixes that.
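    To make item 3 concrete, here is a minimal sketch of that kind of workaday setup. It is not my actual project code: it uses H2O’s Python API rather than the R orchestration mentioned above, and the churn file and column names are hypothetical placeholders. It also reports performance on a held-out split rather than the training frame, which is the bar item 1 sets.

        # Minimal H2O sketch (Python API): fit on one split, report on another.
        # File name and column names are hypothetical placeholders.
        import h2o
        from h2o.estimators import H2OGradientBoostingEstimator

        h2o.init()  # starts, or attaches to, a local H2O cluster

        frame = h2o.import_file("customer_churn.csv")            # hypothetical data
        train, test = frame.split_frame(ratios=[0.8], seed=42)   # held-out split

        target = "churned"                                        # hypothetical label column
        predictors = [c for c in frame.columns if c != target]
        train[target] = train[target].asfactor()                  # treat the label as categorical
        test[target] = test[target].asfactor()

        model = H2OGradientBoostingEstimator(ntrees=100, seed=42)
        model.train(x=predictors, y=target, training_frame=train)

        # Report results on non-training data, not on the frame the model was fit to.
        perf = model.model_performance(test_data=test)
        print("Held-out AUC:", perf.auc())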
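    On item 5, a TRL disclosure does not have to be elaborate. The toy sketch below shows one hypothetical shape it could take, plus a naive reviewer check; the fields and the gating rule are illustrative assumptions, not the actual ML-specific scale, for which see Lavin and Renard.

        # Toy sketch of a TRL disclosure record and a naive reviewer check.
        # Fields and the gating rule are illustrative assumptions only.
        from dataclasses import dataclass

        @dataclass
        class TRLDisclosure:
            level: int               # e.g. 1 = lab prototype, 7 = fielded MVP (the article's framing)
            evidence: str            # what was actually demonstrated
            tested_at_scale: bool    # nontrivial problem, non-training data?
            independent_users: int   # anyone outside the authors' lab relying on it?

        def review_gate(d: TRLDisclosure) -> str:
            """Flag claims whose stated maturity outruns the reported evidence."""
            if d.level >= 7 and not (d.tested_at_scale and d.independent_users > 0):
                return "Claimed maturity not supported; treat as a lab prototype."
            return f"Disclosed TRL {d.level}: {d.evidence}"

        print(review_gate(TRLDisclosure(level=7, evidence="demo on a toy benchmark",
                                        tested_at_scale=False, independent_users=0)))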
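    And on decision intelligence: one hypothetical way to force the problem-first ordering is to write down the decision, the desired outcome, and the levers before any model or dataset is chosen. The structure and field names below are illustrative only, not a DI standard.

        # Hypothetical "problem-first" project brief; field names are illustrative.
        from dataclasses import dataclass, field

        @dataclass
        class DecisionBrief:
            decision: str                  # the action a human stakeholder will take
            desired_outcome: str           # what "better" looks like for that stakeholder
            levers: list[str]              # what the decision-maker can actually change
            predictions_needed: list[str]  # only now: what ML could usefully predict
            data_available: list[str] = field(default_factory=list)

        brief = DecisionBrief(
            decision="Which customers get a retention offer this month?",
            desired_outcome="Reduce churn without blowing the promotion budget",
            levers=["offer type", "offer timing", "which segment we contact"],
            predictions_needed=["probability a customer churns in the next 90 days"],
            data_available=["billing history", "support tickets"],
        )

        # The ML work is scoped to predictions_needed, instead of starting from
        # whatever data happens to be lying around.
        print(brief.decision, "->", brief.predictions_needed)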

If you’re new to AI/ML or a journalist, please know that AI is a “fake news” siren. Friends don’t share AI hype with friends. Take a few minutes to learn about the AI Winter. Don’t share unvetted clickbaity drek. And go high, not low: earn your clicks with substance, not fluff, please.

Or, if you’re a senior technologist, please use your influence to nudge this ship away from the upcoming rocks. Without some courageous reprioritization, we’re headed to another winter.

Can I help your own AI/ML project to get real? Book me for a free consultation or send email.
