Fortune | two days ago, 22:23
Microsoft Forms New Team Dedicated to "Humanist Superintelligence" Research

Microsoft has announced the formation of a new Humanist Superintelligence (HSI) team, led by Suleyman, aimed at developing advanced AI capabilities that serve humanity. The move marks the start of Microsoft's independent AGI research now that restrictions in its agreement with OpenAI have been lifted. The team will continue to collaborate with OpenAI and is investing in AI chips to support model training. Microsoft stresses that its "humanist superintelligence" goal differs from rivals' technology-driven ambitions: it aims to deliver tangible, safe benefits to billions of people over the long term while avoiding risks such as misinformation and social manipulation. The new team brings together researchers from Google DeepMind, Meta, OpenAI, and Anthropic, with Karén Simonyan serving as chief scientist.

🚀 **A New Chapter in Independent AGI Research:** Microsoft has formed the Humanist Superintelligence (HSI) team, marking that, with the restrictions of its OpenAI agreement lifted, the company is free to build advanced AI systems on its own. The team is dedicated to developing superintelligence that "works for and benefits humanity," distinguishing it from AGI research that pursues technical breakthroughs alone.

🤝 **Balancing Partnership and Autonomy:** The new HSI team will retain access to OpenAI's early models and intellectual property through the continued partnership while gaining the ability to develop superintelligence independently. This "best of both worlds" arrangement lets Microsoft draw on outside technology while pursuing its own AI vision, backed by heavy investment in AI chips and compute.

💡 **A "Humanist" Positioning and Risk Avoidance:** Microsoft emphasizes that its HSI goal is "humanist," deliberately distinguishing it from the purely technology-driven AGI narratives of OpenAI, Meta, and others. The team is committed to long-term, tangible, safe benefits and explicitly rejects the "AI race" narrative. It also takes potential risks seriously, including misinformation, social manipulation, and autonomous systems acting outside human control, pledging to accelerate AI development only under safety guarantees.

🌟 **Top Talent Assembled:** The new HSI team brings together leading AI researchers from Google DeepMind, Meta, OpenAI, Anthropic, and other prominent institutions. With Karén Simonyan joining as chief scientist and Suleyman himself at the helm, the team has the research firepower to pursue major breakthroughs at the AI frontier.

Because of Microsoft’s landmark deal with OpenAI, the company was barred from pursuing its own AGI research. The agreement even capped how large a model Microsoft could train, restricting the company from building systems beyond a certain computing threshold. (This limit was measured in FLOPs, or floating-point operations: the total number of mathematical calculations performed during training, a rough measure of the cumulative computing power used to build a model.)
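To give a sense of scale for such a threshold, training compute for dense transformer models is often approximated with the rule of thumb C ≈ 6·N·D, where N is the parameter count and D is the number of training tokens. The sketch below is not from the article, and the model size and token count are illustrative placeholders, not figures tied to Microsoft or OpenAI:

```python
# Rough training-compute estimate using the common C ≈ 6·N·D rule of thumb
# for dense transformer models: ~6 floating-point operations per parameter
# per training token. Figures below are hypothetical, for illustration only.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total floating-point operations to train a dense model."""
    return 6.0 * n_params * n_tokens

# Example: a hypothetical 70B-parameter model trained on 2T tokens.
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs")  # on the order of 8.4e23 total operations
```

A contractual cap expressed in total FLOPs would simply forbid training runs whose estimated C exceeds some agreed number.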

“For a company of our scale, that’s a big limitation,” Suleyman told Fortune.           

That’s all changing now: Suleyman announced the formation of the new MAI Superintelligence Team on Thursday. Led by Suleyman and part of the broader Microsoft AI business, the team will work towards “Humanist Superintelligence (HSI),” which Suleyman defined in a blog post as “incredibly advanced AI capabilities that always work for, in service of, people and humanity more generally.”

Microsoft is just the latest company to rebrand its advanced AI efforts as a drive toward “superintelligence”: the idea of artificial intelligence systems that would potentially be smarter than all of humanity combined. But for now, it’s better marketing than science. No such systems currently exist, and scientists debate whether superintelligence is even achievable with current AI methods.

That has not stopped companies, however, from announcing superintelligence as a goal and setting up teams branded as “superintelligence.” Most notably, Meta rebranded its AI efforts as Meta Superintelligence Labs in June 2025. OpenAI CEO Sam Altman has written that his company has already figured out how to build artificial general intelligence, or AGI (the idea of an AI system that is as capable as an individual human at most cognitive tasks), and that it has begun to look beyond AGI to superintelligence, even though it has yet to release an AI model that meets that initial goal.

Meanwhile, Ilya Sutskever, OpenAI’s former chief scientist, cofounded an AI startup called Safe Superintelligence that is also dedicated to creating this hypothetical superpowerful AI and making sure it remains controllable. He had previously led a similar effort within OpenAI. AI company Anthropic also has a team dedicated to researching how to control a hypothetical future superintelligence.

Microsoft’s framing of its own new superintelligence drive as “humanist superintelligence” is a deliberate effort to contrast it with the more technological goals of rivals like OpenAI and Meta. “We reject narratives about a race to AGI, and instead see it as part of a wider and deeply human endeavor to improve our lives and future prospects,” Suleyman wrote in the blog post. “We also reject binaries of boom and doom; we’re in this for the long haul to deliver tangible, specific, safe benefits for billions of people. We feel a deep responsibility to get this right.”

For the last year or so, Microsoft AI has been on a journey to establish an AI “self-sufficiency effort,” Suleyman told Fortune, while also seeking to extend its OpenAI partnership (https://fortune.com/2025/10/28/openai-for-profit-restructuring-microsoft-stake/) through 2030 so that it continues to get early access to OpenAI’s best models and IP.

Now, he explained, “we have a best-of-both environment, where we’re free to pursue our own superintelligence and also work closely with them.”

That new self-sufficiency has required significant investments in AI chips for the team to train its models, though Suleyman declined to comment on the size of the team’s GPU stash. But most of all, he said, the effort is about “making sure we have a culture in the team that is focused on developing the absolute frontier [of AI research].” It will take several years before the company is fully on that path, he acknowledged, but he said it’s a “key priority” for Microsoft.

Karén Simonyan will serve as the chief scientist of the new Humanist Superintelligence team. Simonyan joined Microsoft in the same March 2024 deal that brought Suleyman and a number of other key researchers from Inflection, the AI startup Suleyman founded, to the company. The team also includes several researchers Microsoft had already poached from Google DeepMind, Meta, OpenAI, and Anthropic.

The new superintelligence effort, with its focus on keeping humanity at the forefront, does not mean the company won’t be innovating quickly, Suleyman insisted, even as he admitted that developing a “humanist” superintelligence would always involve being cautious about capabilities that are “not ready for prime time.”

When asked how his views align with those of AI leaders in the Trump administration, such as AI and crypto “czar” David Sacks, who are pushing for no-holds-barred AI acceleration and less regulation, Suleyman said that, in many ways, Sacks is correct.

“David’s totally right, we should accelerate, it’s critical for America, it’s critical for the West in general,” he said. However, he added, AI developers can push the envelope while also understanding potential risks like misinformation, social manipulation, and autonomous systems that act outside of human intent.

“We should be going as fast as possible within the constraints of making sure it doesn’t harm us,” he said.
