All Content from Business Insider — October 22, 19:23
Over 900 Public Figures Call for a Pause on Superintelligent AI Development

More than 900 public figures from technology, business, the arts, and media have signed a statement calling for a pause on the development of superintelligent AI until there is broad scientific consensus on its safety and controllability, along with public approval. Signatories include AI pioneers, business leaders, and political figures, who cite risks ranging from mass job losses to loss of control and even human extinction. The statement, organized by the Future of Life Institute, aims to ensure that AI development serves humanity rather than replaces it. Despite these concerns, some argue that superintelligent AI is still years away and will be controllable when it arrives.

🤖 **A broad coalition of public figures calls for a pause on superintelligent AI**: A statement signed by more than 900 prominent individuals, including AI pioneers, business leaders, tech-company cofounders, and political figures, expresses deep concern about the development of superintelligent AI and calls for a halt to such work until its safety and controllability are assured. It marks a large-scale collective intervention in the debate over AI's direction.

💡 **Concerns about superintelligent AI center on a few key risks**: The signatories' main worries are large-scale unemployment, as AI could displace vast numbers of human jobs; the potential loss of control over AI systems, especially once their capabilities far exceed humans'; and, most gravely, the possibility of an existential threat, with AI potentially causing human extinction. These concerns have intensified as AI technology has advanced rapidly.

⚖️ **A call for an AI development framework grounded in safety and consensus**: The statement's core demand is "a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in." This is not a demand to stop AI research entirely, but an insistence that human safety and well-being take priority as the technology develops, with public participation and scientific validation ensured.

🚀 **A measured debate over AI's development path**: Despite the 900-plus signatories, views within the AI field diverge. Some experts argue that superintelligent AI may still be decades away and will be controllable through technical means when it arrives. Meta's chief AI scientist Yann LeCun, for example, has said humans will be the "boss" of superintelligent systems. The debate underscores the difficulty of balancing innovation against risk in AI's development.

Prince Harry, will.i.am, and Steve Bannon signed a statement calling for a halt to developing superintelligent AI.

What do Steve Bannon, will.i.am, and Prince Harry have in common? They're all concerned about superintelligent AI that surpasses human intelligence.

They are among more than 900 public figures from business, tech, the arts, and media who have called for a ban on the development of the technology until there's a scientific consensus that it can be done safely.

Two of the "godfathers of AI," Yoshua Bengio and Geoffrey Hinton, are also among the statement's signatories, alongside business leaders such as Apple cofounder Steve Wozniak and Virgin founder Richard Branson.

Bannon, a former strategist for Donald Trump, joins political figures from the left, such as former Democratic US Rep. Joe Crowley, who added their names to the list, which continued to grow following its publication on Wednesday.

"We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in," the statement, organized by the Future of Life Institute, a nonprofit, said.

Those worried about the potential of powerful AI have raised concerns about job losses, the loss of control over AI systems, and the possibility of human extinction. Those concerns have grown in recent years as companies like OpenAI and Google have launched increasingly advanced AI models.

"The future of AI should serve humanity, not replace it," said Prince Harry in a statement. "The true test of progress will be not how fast we move, but how wisely we steer."

However, others say AI superintelligence could take decades to achieve and will be controllable when it does arrive.

Yann LeCun, one of the "godfathers of AI" and chief AI scientist at Meta, said in March that humans would be the "boss" of superintelligent systems.

It's the latest statement organized by the Future of Life Institute, which has published several public statements raising concerns about the development of AI since it was founded in 2014. The nonprofit has previously received financial support from Elon Musk, whose company, xAI, has developed the AI chatbot Grok.

"This is not a ban or even a moratorium in the usual sense," said Stuart Russell, a professor of computer science at the University of California, Berkeley, in a statement accompanying his signature.

"It's simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?"

Read the original article on Business Insider

