TMTPost: Leading new insights into future business and life | October 10, 14:27
Ant Group Releases and Open-Sources Its Large Model Ling-1T

Ant Group has announced the release and open-sourcing of its latest large language model, Ling-1T, the flagship model of the Bailing (Ling) 2.0 series and the largest, most capable non-reasoning model the Bailing team has developed to date. Ling-1T posts leading results on benchmarks for complex reasoning tasks, outperforming other open-source models in demanding areas such as code generation, software development, competition mathematics, and logical reasoning. On the AIME 25 competition mathematics leaderboard, for example, it achieves higher accuracy with lower average token usage, underscoring its efficiency and cost-effectiveness. The release comes as AI development accelerates worldwide, signaling that high-performance AI is becoming more broadly accessible and opening new possibilities for researchers and developers everywhere.

💡 Ling-1T is the flagship model of Ant Group's Bailing (Ling) 2.0 series and the largest, most capable non-reasoning model the Bailing team has built to date. It performs strongly across benchmarks for complex reasoning tasks, outperforming other open-source models in demanding areas such as code generation, software development, competition mathematics, and logical reasoning, demonstrating its strength at the frontier of AI.

🚀 In competition mathematics, Ling-1T leads the AIME 25 leaderboard with 70.42% accuracy at an average of just over 4,000 tokens, edging out Google's Gemini-2.5-Pro, which scored 70.10% while using more than 5,000 tokens on average. Delivering higher accuracy with markedly lower token usage highlights Ling-1T's efficiency and cost-effectiveness, placing Ant Group's technology at the forefront of reasoning precision.

⚙️ Ling-1T adopts the Ling 2.0 architecture and was pre-trained on more than 20 trillion high-quality, reasoning-dense tokens. It supports a context window of up to 128K tokens and uses an evolutionary chain-of-thought (Evo-CoT) approach spanning mid-training and post-training, improving both reasoning efficiency and precision. Notably, Ling-1T was trained with FP8 mixed precision, making it the largest known base model to use the technique, which yields memory savings, flexible parallelization, and a training speedup of over 15%.

🤝 By open-sourcing Ling-1T, Ant Group aims to lower the barrier to high-performance AI so that a wider range of researchers, developers, and enterprises can use it, advancing the spread of AI technology and injecting fresh momentum into collaboration and innovation across the global AI community. Ant Group is also developing Ring-1T, another trillion-parameter deep reasoning model, and open-sourced a preview version on September 30, giving developers early access to its latest AI capabilities.

(Image source: Photo taken at the Ant Group booth during the 2025 World Artificial Intelligence Conference)

Ant Group on Thursday announced the release and open-sourcing of its latest large language model, Ling-1T, marking a major milestone in global AI development.

Ling-1T, the flagship model in Ant Group’s Bailing (Ling) 2.0 series, is touted as the largest and most powerful non-reasoning model developed by the Bailing team to date, demonstrating both unprecedented scale and performance.

According to benchmark tests, Ling-1T achieves state-of-the-art results across multiple complex reasoning tasks. It has outperformed other open-source models in high-difficulty areas including code generation, software development, competition mathematics, professional mathematics, and logical reasoning.

For instance, in the AIME 25 competition mathematics leaderboard, Ling-1T scored an accuracy rate of 70.42% with an average token usage of just over 4,000, surpassing Google’s Gemini-2.5-Pro, which used more than 5,000 tokens on average and scored 70.10%. By delivering higher accuracy with fewer tokens, Ling-1T demonstrates both superior efficiency and cost-effectiveness, positioning Ant Group’s technology at the forefront of reasoning precision.
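The efficiency claim can be made concrete with a derived "tokens per correct answer" measure. The accuracy and average token figures below come from the article; the metric itself is an illustrative calculation, not a published benchmark statistic.

```python
def tokens_per_correct(avg_tokens: float, accuracy: float) -> float:
    """Average tokens spent per correctly solved problem."""
    return avg_tokens / accuracy

# Reported AIME 25 figures: accuracy and average tokens per problem.
ling_1t = tokens_per_correct(4000, 0.7042)  # ~5680 tokens per correct answer
gemini = tokens_per_correct(5000, 0.7010)   # ~7133 tokens per correct answer

print(f"Ling-1T:        {ling_1t:.0f} tokens/correct")
print(f"Gemini-2.5-Pro: {gemini:.0f} tokens/correct")
print(f"Relative cost:  {gemini / ling_1t:.2f}x")  # ~1.26x
```

On these numbers, Gemini-2.5-Pro spends roughly a quarter more tokens per correct answer, which is the cost-effectiveness gap the leaderboard comparison points to.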

Ling-1T’s release comes amid a broader acceleration of AI development in China and the U.S., with companies racing to expand capabilities across large language models and multimodal AI systems. Around the National Day and Mid-Autumn Festival holidays, several major AI initiatives were unveiled.

OpenAI launched its AI video model Sora 2, announced GPT-5 Pro, and introduced the ChatGPT Apps SDK. Meanwhile, DeepSeek released DeepSeek-V3.2-Exp, which enhances training and inference efficiency and reduces API costs. Alibaba Tongyi unveiled the next-generation Qwen3-Omni multimodal model and its open-source Tongyi DeepResearch framework, while Zhipu released its flagship GLM-4.6, topping Hugging Face’s trending global chart and ranking fourth globally on LMArena.

As Kai-Fu Lee, founder and CEO of 01.AI, noted, foundational AI models represent a high-stakes “tech arms race,” with tens of billions of dollars at play. Nvidia CEO Jensen Huang highlighted surging AI computing demand over the past six months, emphasizing unprecedented interest in the company’s Blackwell architecture chips and calling this surge the beginning of a “new industrial revolution.” Nvidia recently announced plans to invest $100 billion over the next decade in OpenAI, supporting deployment of systems requiring 10 gigawatts of power — equivalent to millions of GPUs. OpenAI CEO Sam Altman added that the future of artificial general intelligence (AGI) depends on smarter models, longer context processing, and improved memory systems, with AI’s ability to generate new knowledge as the core metric.

Ant Group’s Ling team has been building large models steadily, beginning with the open-source Ling-Lite and Ling-Plus in March of this year. Ling-Lite contains 16.8 billion parameters, while Ling-Plus reaches 290 billion. These models laid the foundation for more advanced applications in life services, finance, and healthcare.

Ling-1T adopts the Ling 2.0 architecture and was pre-trained on more than 20 trillion high-quality, reasoning-dense tokens. It supports a context window of up to 128K tokens and employs an evolutionary chain-of-thought (Evo-CoT) approach that spans mid-training and post-training, improving both reasoning efficiency and precision. Notably, Ling-1T is trained with FP8 mixed precision, making it the largest known base model to use the technique, with memory savings, flexible parallelization, and more than 15% faster training.
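To make the FP8 claim tangible, the sketch below simulates rounding to the e4m3 format (4 exponent bits, 3 mantissa bits) that FP8 training typically uses for weights. It illustrates the number format only; it is not Ant's training code, and subnormal and NaN handling are omitted for brevity.

```python
import numpy as np

E4M3_MAX = 448.0            # largest normal e4m3 value
E4M3_MIN_NORMAL = 2.0 ** -6  # smallest normal e4m3 value

def quantize_e4m3(x: np.ndarray) -> np.ndarray:
    """Round float32 values to the nearest e4m3-representable number
    (normal range only)."""
    sign = np.sign(x)
    mag = np.clip(np.abs(x), E4M3_MIN_NORMAL, E4M3_MAX)
    exp = np.floor(np.log2(mag))  # power-of-two bucket of each value
    step = 2.0 ** (exp - 3)       # 3 mantissa bits -> 8 steps per octave
    return sign * np.round(mag / step) * step

w32 = np.array([0.1, -1.7, 3.14159, 120.0], dtype=np.float32)
w8 = quantize_e4m3(w32)
# 3.14159 rounds to 3.25, 120.0 is exactly representable; relative
# rounding error stays under one half mantissa step (~6%) for values
# in the normal range, while storage is halved versus FP16.
```

The coarse 8-steps-per-octave grid is why FP8 training needs the careful scaling and mixed-precision accumulation the article alludes to, in exchange for the memory and throughput gains.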

The Ant Bailing team also introduced the Linguistics-Unit Policy Optimization (LingPO) algorithm during reinforcement learning, a sentence-level policy optimization approach that stabilizes the training of trillion-parameter models. Additionally, a hybrid reward mechanism ensures code correctness, feature completeness, and continuous improvements in visual aesthetics understanding. On the ArtifactsBench frontend benchmark, Ling-1T achieved a score of 59.31, ranking first among open-source models for visualization and frontend development tasks.
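The hybrid reward described above can be pictured as a weighted blend of per-sample component scores. The component names and weights below are hypothetical illustrations of the general idea, not Ant's published mechanism.

```python
def hybrid_reward(correctness: float, completeness: float, aesthetics: float,
                  weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Blend component scores (each in [0, 1]) into one scalar reward.

    Components and weights are illustrative: correctness of generated
    code, feature completeness, and visual-aesthetics quality.
    """
    scores = (correctness, completeness, aesthetics)
    assert all(0.0 <= s <= 1.0 for s in scores), "scores must be in [0, 1]"
    return sum(w * s for w, s in zip(weights, scores))

# A sample that is fully correct, mostly complete, middling on aesthetics:
r = hybrid_reward(1.0, 0.8, 0.5)  # 0.5 + 0.24 + 0.10 ≈ 0.84
```

Blending signals this way lets a single reinforcement-learning objective trade off hard correctness checks against softer qualities like aesthetics, which matches the frontend-oriented results the ArtifactsBench score reflects.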

Alongside Ling-1T, Ant Group is developing Ring-1T, a trillion-parameter deep reasoning model. A preview version was open-sourced on September 30, providing developers early access to Ant’s latest AI capabilities.

Large language models often require high-performance AI chips such as H100 and H800, which can be scarce and costly. The Ant Ling team emphasizes a technical framework that allows seamless switching between heterogeneous computing units and distributed clusters, enabling performance optimization without reliance on cutting-edge GPUs. This approach aims to make high-quality AI accessible and cost-efficient even in resource-constrained environments.

The team, led by He Zhengyu, Ant Group’s Vice President and CTO, combines deep technical expertise with years of experience in large-scale systems. He holds a Ph.D. in Computer Science from Georgia Tech and previously led Google’s open-source gVisor project. Since joining Ant Group in 2018, He has overseen cloud-native transformation, green computing, confidential computing innovation, and the company’s open-source strategy. Under his leadership, the Bailing large model initiative now targets applications spanning life services, finance, and healthcare.

The release of Ling-1T coincides with record levels of AI venture capital investment. According to PitchBook, AI startups worldwide have attracted $192.7 billion since the start of 2025, with over half of all global venture capital expected to flow into AI this year. Mature startups dominate funding allocations, while smaller or non-AI companies face challenges securing capital amid tighter IPO and M&A conditions.

In the latest quarter, 62.7% of U.S. venture capital was invested in AI companies, compared to 53.2% globally. PitchBook’s research director, Kyle Sanford, observed that “the market is becoming increasingly polarized — either you’re working on artificial intelligence, or you’re not; either you’re a big company, or you’re not.” Analyst Dimitri Zabelin noted that most exits currently consist of frequent but low-value acquisitions, with a smaller number of high-value IPOs.

OpenAI exemplifies the rapid growth of AI startups. Since ChatGPT’s debut in 2022, its user base has grown to 800 million weekly active users. OpenAI’s $6.6 billion funding round recently boosted its valuation to $500 billion, surpassing SpaceX and making it the world’s most valuable startup. In the first seven months of 2025, OpenAI’s revenue roughly doubled, with annual revenue projected at $12 billion. Agreements for nearly $1 trillion in computing power position OpenAI as a potential leader in AI profitability.

Ling-1T’s release underscores the growing pace of competition in AI development, particularly between China and the United States. The model demonstrates advances not only in parameter scale but also in efficiency, accuracy, and reasoning capability. As foundational AI models become central to economic and technological strategy, companies like Ant Group are pushing the envelope on open-source accessibility, infrastructure optimization, and practical application scenarios.

With global venture capital surging and competition intensifying, the AI landscape is entering a critical phase. Ling-1T’s open-source debut signals that high-performance AI is no longer limited to a handful of tech giants, but increasingly accessible to researchers, developers, and enterprises worldwide — setting the stage for the next wave of innovation in large language models and artificial general intelligence.
