The Verge - Artificial Intelligence, September 23
A Call for AI “Red Lines”: The International Community Seeks Consensus

More than 200 former heads of state, diplomats, Nobel laureates, AI leaders, and scientists have jointly called for an international agreement on “red lines” for AI, prohibiting behaviors such as AI impersonating humans or self-replicating. The “Global Call for AI Red Lines” initiative was launched by the French Center for AI Safety (CeSIA), The Future Society, and UC Berkeley’s Center for Human-Compatible Artificial Intelligence, with the goal of reaching an “international political agreement” by the end of 2026. The initiative emphasizes preventing large-scale, irreversible risks from AI rather than reacting after the fact. Although regional rules such as the EU AI Act already exist, a global consensus has yet to be established. The initiative’s backers argue that voluntary corporate pledges alone cannot deliver effective regulation; in the long run, an independent global institution with real authority is needed to define, monitor, and enforce AI red lines, so that AI develops safely rather than pursuing innovation at the expense of safety.

🤝 **International call for AI “red lines”:** More than 200 prominent figures from politics, academia, science, and the AI field have signed the “Global Call for AI Red Lines” initiative, which urges governments to reach an international political agreement by the end of 2026 that clearly marks what AI must never do, such as impersonating humans or self-replicating. The aim is to prevent large-scale, irreversible risks from AI at the source.

⚖️ **Building a global AI regulatory framework:** Although regional regulations such as the EU AI Act already exist, global consensus on AI safety remains thin. The initiative’s backers argue that voluntary commitments made inside companies cannot deliver effective safeguards or enforcement; in the long run, an independent global institution with real binding power is needed to define, monitor, and enforce AI “red lines.”

💡 **Safety and innovation can coexist:** Addressing concerns that AI regulation would hinder economic development and innovation, Professor Stuart Russell, one of the initiative’s backers, argued that AI can be developed for economic purposes while avoiding uncontrollable artificial general intelligence (AGI). He stressed that the AI industry should learn from the development of nuclear power, building safety in from the design stage and demonstrating that it works, rather than pushing ahead before the potential risks are clearly understood.

🌐 **Advancing global cooperation and accountability:** The initiative is led by CeSIA, The Future Society, and UC Berkeley’s Center for Human-Compatible Artificial Intelligence, and has drawn attention at international venues such as the United Nations. Nobel Peace Prize laureate Maria Ressa also stressed the need to “end Big Tech impunity through global accountability,” underscoring the initiative’s role in advancing global AI governance and accountability.

On Monday, more than 200 former heads of state, diplomats, Nobel laureates, AI leaders, scientists, and others all agreed on one thing: There should be an international agreement on “red lines” that AI should never cross — for instance, not allowing AI to impersonate a human being or self-replicate. 

They, along with more than 70 organizations that address AI, have all signed the Global Call for AI Red Lines initiative, a call for governments to reach an “international political agreement on ‘red lines’ for AI by the end of 2026.” Signatories include British Canadian computer scientist Geoffrey Hinton, OpenAI cofounder Wojciech Zaremba, Anthropic CISO Jason Clinton, Google DeepMind research scientist Ian Goodfellow, and others. 

“The goal is not to react after a major incident occurs… but to prevent large-scale, potentially irreversible risks before they happen,” Charbel-Raphaël Segerie, executive director of the French Center for AI Safety (CeSIA), said during a Monday briefing with reporters. 

He added, “If nations cannot yet agree on what they want to do with AI, they must at least agree on what AI must never do.” 

The announcement comes ahead of the 80th United Nations General Assembly high-level week in New York, and the initiative was led by CeSIA, the Future Society, and UC Berkeley’s Center for Human-Compatible Artificial Intelligence. 

Nobel Peace Prize laureate Maria Ressa mentioned the initiative during her opening remarks at the assembly when calling for efforts to “end Big Tech impunity through global accountability.” 

Some regional AI red lines do exist. For example, the European Union’s AI Act bans some uses of AI deemed “unacceptable” within the EU. There is also an agreement between the US and China that nuclear weapons should stay under human, not AI, control. But there is not yet a global consensus. 

In the long term, more is needed than “voluntary pledges,” Niki Iliadis, director for global governance of AI at The Future Society, said to reporters on Monday. Responsible scaling policies made within AI companies “fall short for real enforcement.” Eventually, an independent global institution “with teeth” is needed to define, monitor, and enforce the red lines, she said. 

“They can comply by not building AGI until they know how to make it safe,” Stuart Russell, a professor of computer science at UC Berkeley and a leading AI researcher, said during the briefing. “Just as nuclear power developers did not build nuclear plants until they had some idea how to stop them from exploding, the AI industry must choose a different technology path, one that builds in safety from the beginning, and we must know that they are doing it.” 

Red lines do not impede economic development or innovation, as some critics of AI regulation argue, Russell said. “You can have AI for economic development without having AGI that we don’t know how to control,” he said. “This supposed dichotomy, if you want medical diagnosis then you have to accept world-destroying AGI — I just think it’s nonsense.”
