MarkTechPost@AI · September 17
TimesFM-2.5: A Smaller, Longer-Context Forecasting Model

Google Research has released TimesFM-2.5, a decoder-only time-series foundation model with 200M parameters, a 16K context length, and native support for probabilistic forecasting. Among zero-shot foundation models on the GIFT-Eval benchmark, it ranks first on both accuracy metrics (MASE and CRPS). Compared with v2.0, TimesFM-2.5 halves the parameter count while substantially extending the context length, letting it draw on much deeper history and improving forecast stability and accuracy across industries such as retail and energy. The model is now live on Hugging Face.

💡 **Model efficiency and performance**: TimesFM-2.5 halves the parameter count (from 500M to 200M) while significantly improving forecast accuracy, ranking first among zero-shot foundation models on GIFT-Eval's two key metrics, MASE (point-forecast accuracy) and CRPS (probabilistic-forecast accuracy). This marks a major step forward in both efficiency and capability.

📈 **Longer context window**: The new model accepts up to 16,384 historical data points, far beyond v2.0's 2,048. A single pass can now capture multi-seasonal structure, regime shifts, and low-frequency components without complex tiling or hierarchical stitching. In scenarios where history far exceeds the forecast horizon (e.g., energy load, retail demand), this reduces pre-processing heuristics and improves forecast stability.

📊 **Native probabilistic forecasting**: TimesFM-2.5 natively supports probabilistic forecasting, with an optional 30M-parameter quantile head that enables continuous quantile forecasts up to a 1K horizon. Beyond single point estimates, the model returns a probability distribution over future values, giving decision-makers a fuller view of risk and more flexibility in how forecasts are used.

🚀 **Easy deployment and broad availability**: The model's efficient design and probabilistic support make it well suited to production deployment. TimesFM-2.5 is live on Hugging Face, with BigQuery and Model Garden integrations to follow, aiming to accelerate the adoption of zero-shot time-series forecasting in real-world applications.

Google Research has released TimesFM-2.5, a 200M-parameter, decoder-only time-series foundation model with a 16K context length and native probabilistic forecasting support. The new checkpoint is live on Hugging Face. On GIFT-Eval, TimesFM-2.5 now tops the leaderboard across accuracy metrics (MASE, CRPS) among zero-shot foundation models.
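
As a quick-start sketch: the class, config, and repo names below follow the usage pattern shown in the timesfm GitHub repository, but treat them as assumptions and defer to the Hugging Face model card for the exact API.

```python
import numpy as np
import timesfm  # pip install timesfm  (names below are assumed, not verified)

# Load the 2.5 checkpoint from Hugging Face (repo id assumed).
model = timesfm.TimesFM_2p5_200M_torch.from_pretrained(
    "google/timesfm-2.5-200m-pytorch"
)

# Compile an inference configuration; 2.5 accepts contexts up to 16,384
# points, and the optional ~30M-parameter quantile head adds continuous
# quantile forecasts on top of the point forecast.
model.compile(
    timesfm.ForecastConfig(
        max_context=16_384,
        max_horizon=256,
        normalize_inputs=True,
        use_continuous_quantile_head=True,
    )
)

# Zero-shot forecast over a batch of toy series: returns point forecasts
# and quantile forecasts, no fine-tuning required.
histories = [np.sin(np.linspace(0, 40, 4096)), np.linspace(0, 1, 2048)]
point_fc, quantile_fc = model.forecast(horizon=128, inputs=histories)
print(point_fc.shape, quantile_fc.shape)
```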

What is Time-Series Forecasting?

Time-series forecasting is the practice of analyzing sequential data points collected over time to identify patterns and predict future values. It underpins critical applications across industries, including forecasting product demand in retail, monitoring weather and precipitation trends, and optimizing large-scale systems such as supply chains and energy grids. By capturing temporal dependencies and seasonal variations, time-series forecasting enables data-driven decision-making in dynamic environments.
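
To make the task concrete, here is a minimal, model-free baseline on synthetic data (plain NumPy, hypothetical numbers): a seasonal-naive forecaster simply repeats the last observed season, and it is the classic reference point that learned models such as TimesFM are measured against.

```python
import numpy as np

# Toy daily series with weekly seasonality plus noise (hypothetical data).
rng = np.random.default_rng(0)
t = np.arange(365)
series = 10 + 3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.5, t.size)

def seasonal_naive(history: np.ndarray, horizon: int, season: int) -> np.ndarray:
    """Forecast each future step with the value observed one season earlier."""
    last_season = history[-season:]
    reps = int(np.ceil(horizon / season))
    return np.tile(last_season, reps)[:horizon]

# Two-week forecast driven purely by the last observed week.
forecast = seasonal_naive(series, horizon=14, season=7)
print(forecast.round(2))
```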

What changed in TimesFM-2.5 vs v2.0?

- Parameters: 500M → 200M, roughly half the size at higher accuracy.
- Maximum context length: 2,048 → 16,384 historical points.
- Probabilistic forecasting: native, with an optional 30M-parameter quantile head for continuous quantile forecasts up to a 1K horizon.
- Zero-shot accuracy: first place on GIFT-Eval among foundation models on both MASE and CRPS.

Why does a longer context matter?

16K historical points allow a single forward pass to capture multi-seasonal structure, regime breaks, and low-frequency components without tiling or hierarchical stitching. In practice, that reduces pre-processing heuristics and improves stability for domains where context >> horizon (e.g., energy load, retail demand). The longer context is a core design change explicitly noted for 2.5.
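
A sketch of what "no tiling" means in practice (helper and variable names are hypothetical): with a 2,048-point limit, long histories must be chunked or stitched; with 16K, the recent history fits in one call.

```python
import numpy as np

MAX_CONTEXT_V20 = 2_048   # TimesFM-2.0 context limit
MAX_CONTEXT_V25 = 16_384  # TimesFM-2.5 context limit

def prepare_context(series: np.ndarray, max_context: int) -> np.ndarray:
    """Keep only the most recent `max_context` points as model input."""
    return series[-max_context:]

# Three years of hourly energy-load data (synthetic stand-in): ~26k points.
hourly_load = np.random.default_rng(1).normal(100, 10, 24 * 365 * 3)

ctx_v20 = prepare_context(hourly_load, MAX_CONTEXT_V20)  # ~85 days of history
ctx_v25 = prepare_context(hourly_load, MAX_CONTEXT_V25)  # ~680 days: spans
# multiple weekly/annual cycles in a single forward pass, no stitching needed.
print(ctx_v20.size, ctx_v25.size)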

What’s the research context?

TimesFM’s core thesis—a single, decoder-only foundation model for forecasting—was introduced in the ICML 2024 paper and Google’s research blog. GIFT-Eval (Salesforce) emerged to standardize evaluation across domains, frequencies, horizon lengths, and univariate/multivariate regimes, with a public leaderboard hosted on Hugging Face.
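
Since GIFT-Eval ranks models by MASE and CRPS, here is a sketch of both metrics under their standard definitions (a reference implementation of my own, not GIFT-Eval's code): MASE scales point error by an in-sample seasonal-naive baseline, and CRPS can be approximated from quantile forecasts as twice the mean pinball loss.

```python
import numpy as np

def mase(y_true, y_pred, y_train, season=1):
    """Mean Absolute Scaled Error: forecast MAE divided by the MAE of an
    in-sample seasonal-naive forecast (1.0 == matches the naive baseline)."""
    mae = np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
    diffs = np.abs(np.asarray(y_train)[season:] - np.asarray(y_train)[:-season])
    return mae / np.mean(diffs)

def crps_from_quantiles(y_true, q_pred, q_levels):
    """Approximate CRPS as 2x the mean pinball loss over a quantile grid;
    the approximation tightens as the grid gets denser.
    q_pred: (num_quantiles, horizon); q_levels: (num_quantiles,)."""
    y = np.asarray(y_true)[None, :]
    q = np.asarray(q_pred)
    tau = np.asarray(q_levels)[:, None]
    pinball = np.maximum(tau * (y - q), (tau - 1.0) * (y - q))
    return 2.0 * pinball.mean()

# Tiny smoke test with hypothetical numbers.
y_train = np.arange(50.0)
y_true = np.arange(50.0, 60.0)
print(mase(y_true, y_true + 1.0, y_train))  # 1.0: same error as naive step
levels = np.array([0.1, 0.5, 0.9])
q_pred = np.stack([y_true - 1.0, y_true, y_true + 1.0])
print(crps_from_quantiles(y_true, q_pred, levels))
```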

Key Takeaways

- TimesFM-2.5 cuts parameters from 500M to 200M while improving zero-shot accuracy over v2.0.
- Context length grows from 2,048 to 16,384 points, so a single forward pass can use far deeper history.
- The model leads GIFT-Eval among zero-shot foundation models on both MASE (point accuracy) and CRPS (probabilistic accuracy).
- An optional 30M-parameter quantile head provides continuous quantile forecasts up to a 1K horizon.
- The checkpoint is live on Hugging Face, with BigQuery and Model Garden integrations planned.

Summary

TimesFM-2.5 shows that foundation models for forecasting are moving past proof-of-concept into practical, production-ready tools. By cutting parameters in half while extending context length and leading GIFT-Eval across both point and probabilistic accuracy, it marks a step-change in efficiency and capability. With Hugging Face access already live and BigQuery/Model Garden integration on the way, the model is positioned to accelerate adoption of zero-shot time-series forecasting in real-world pipelines.


Check out the Model card (HF), Repo, Benchmark and Paper.

