FORTUNE | October 23, 06:04
Open Letter Calls for a Pause on Superintelligence Development

An open letter initiated by the Future of Life Institute has been signed by AI pioneers including Geoffrey Hinton, Yoshua Bengio, and Stuart Russell; business leaders such as Richard Branson and Steve Wozniak; and celebrities including Joseph Gordon-Levitt, will.i.am, and Prince Harry and Meghan. The letter calls on leading AI labs to pause the development of "superintelligence" that would surpass human cognitive abilities until it can be made safe and controllable and has broad public buy-in. Polling released alongside the letter shows that a majority of Americans support strong regulation of advanced AI and believe superintelligence should not be developed until it is proven safe.

✍️ An open letter initiated by the Future of Life Institute calling for a pause on superintelligence development has drawn many prominent signatories, including AI pioneers, business leaders, actors, and members of royalty.

📊 Polling conducted alongside the open letter found that a majority of U.S. adults support strong regulation of advanced AI and believe superintelligence should not be developed until it can be shown to be safe and controllable.

🛑 The letter notes that several leading AI labs, including Meta, Google DeepMind, and OpenAI, are actively pursuing superintelligence, and calls on them to halt that work until there is broad scientific consensus that it can be done safely and controllably, along with strong public buy-in.

⚠️ The signatories argue that the pursuit of superintelligence carries serious risks of economic displacement, disempowerment, and threats to national security and civil liberties, and they accuse tech companies of advancing this potentially dangerous technology without safeguards, oversight, or public consent.

The letter’s more notable signatories include AI pioneer and Nobel laureate Geoffrey Hinton, other AI luminaries such as Yoshua Bengio and Stuart Russell, as well as business leaders such as Virgin cofounder Richard Branson and Apple cofounder Steve Wozniak. It was also signed by celebrities, including actor Joseph Gordon-Levitt, who recently expressed concerns over Meta’s AI products, will.i.am, and Prince Harry and Meghan, Duke and Duchess of Sussex. Policy and national security figures as diverse as Trump ally and strategist Steve Bannon and Mike Mullen, chairman of the Joint Chiefs of Staff under Presidents George W. Bush and Barack Obama, also appear on the list of more than 1,000 other signatories.

New polling conducted alongside the open letter, which was written and circulated by the nonprofit Future of Life Institute, found that the public generally agreed with the call for a moratorium on the development of superpowerful AI technology.

In the U.S., the polling found that only 5% of adults support the status quo of unregulated development of advanced AI, while 64% agreed superintelligence shouldn't be developed until it's provably safe and controllable. The poll also found that 73% want robust regulation of advanced AI.

“95% of Americans don’t want a race to superintelligence, and experts want to ban it,” Future of Life president Max Tegmark said in the statement.

Superintelligence is broadly defined as a type of artificial intelligence capable of outperforming the entirety of humanity at most cognitive tasks. There is currently no consensus on when or if superintelligence will be achieved, and timelines suggested by experts are speculative. Some more aggressive estimates have said superintelligence could be achieved by the late 2020s, while more conservative views delay it much further or question the current tech’s ability to achieve it at all.

Several leading AI labs, including Meta, Google DeepMind, and OpenAI, are actively pursuing this level of advanced AI. The letter calls on these leading AI labs to halt their pursuit of these capabilities until there is a “broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

“Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years,” Yoshua Bengio, Turing Award–winning computer scientist, who along with Hinton is considered one of the “godfathers” of AI, said in a statement. “To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future,” he said.

The signatories claim that the pursuit of superintelligence raises serious risks of economic displacement and disempowerment, and threatens national security as well as civil liberties. The letter accuses tech companies of pursuing this potentially dangerous technology without guardrails, oversight, or broad public consent.

“To get the most from what AI has to offer mankind, there is simply no need to reach for the unknowable and highly risky goal of superintelligence, which is by far a frontier too far. By definition, this would result in a power that we could neither understand nor control,” actor Stephen Fry said in the statement.
