cs.AI updates on arXiv.org, October 15, 13:01
A Theoretical Framework for SPI in Online Reinforcement Learning and the DeepSPI Algorithm

This paper studies safe policy improvement (SPI) in online reinforcement learning and proposes a theoretical framework in which restricting policy updates to a neighborhood of the current policy guarantees monotonic improvement and convergence. Building on this framework, the authors introduce the DeepSPI algorithm, which matches or exceeds baselines such as PPO and DeepMDPs on the ALE-57 benchmark while retaining the theoretical guarantees.
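
For context, the kind of classical SPI guarantee this framework generalises can be illustrated with the well-known conservative policy iteration / trust-region lower bound. This is a representative example of such a bound, not the paper's exact theorem: keeping the new policy pi' in a small KL neighborhood of the current policy pi bounds the possible performance loss.

```latex
% Representative SPI-style bound (Kakade & Langford 2002; Schulman et al. 2015),
% shown for illustration: a KL neighborhood around the current policy yields
% monotone updates.
J(\pi') \;\ge\; J(\pi)
  + \mathbb{E}_{s \sim d^{\pi},\, a \sim \pi'(\cdot\mid s)}\!\left[ A^{\pi}(s,a) \right]
  - C \,\max_{s} D_{\mathrm{KL}}\!\left( \pi(\cdot\mid s)\,\Vert\,\pi'(\cdot\mid s) \right),
\qquad C = \frac{4\epsilon\gamma}{(1-\gamma)^{2}},\quad
\epsilon = \max_{s,a}\left| A^{\pi}(s,a) \right|.
```

When the maximum KL term is small, the expected-advantage term dominates, so an update with positive surrogate advantage inside the neighborhood cannot decrease performance; per the abstract, the paper's contribution is an online, "deep" analogue that ties this neighborhood condition to transition and reward prediction losses of learned representations.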

arXiv:2510.12312v1 Announce Type: cross Abstract: Safe policy improvement (SPI) offers theoretical control over policy updates, yet existing guarantees largely concern offline, tabular reinforcement learning (RL). We study SPI in general online settings, in combination with world-model and representation learning. We develop a theoretical framework showing that restricting policy updates to a well-defined neighborhood of the current policy ensures monotonic improvement and convergence. This analysis links transition and reward prediction losses to representation quality, yielding online, "deep" analogues of classical SPI theorems from the offline RL literature. Building on these results, we introduce DeepSPI, a principled on-policy algorithm that couples local transition and reward losses with regularised policy updates. On the ALE-57 benchmark, DeepSPI matches or exceeds strong baselines, including PPO and DeepMDPs, while retaining theoretical guarantees.
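
To make the coupling concrete, here is a minimal PyTorch sketch of an objective in the spirit of what the abstract describes: latent transition and reward prediction losses trained alongside a policy update that is KL-regularised toward the current policy. The class and function names (LatentModel, deepspi_style_loss), network sizes, the kl_coef weight, and the use of precomputed advantages are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): a DeepSPI-style objective that
# couples latent transition / reward prediction losses with a KL-regularised
# policy update on a shared representation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentModel(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, latent_dim: int = 64):
        super().__init__()
        # Shared encoder producing the latent representation.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        # Latent transition model: predicts the next latent from (latent, action).
        self.transition = nn.Linear(latent_dim + n_actions, latent_dim)
        # Reward model: predicts the immediate reward from (latent, action).
        self.reward = nn.Linear(latent_dim + n_actions, 1)
        # Policy head on top of the shared representation.
        self.policy = nn.Linear(latent_dim, n_actions)

    def forward(self, obs):
        z = self.encoder(obs)
        return z, torch.log_softmax(self.policy(z), dim=-1)


def deepspi_style_loss(model, obs, actions, rewards, next_obs,
                       old_log_probs, advantages, n_actions, kl_coef=1.0):
    """Single-batch loss: advantage-weighted policy improvement regularised
    toward the current policy, plus local transition and reward losses."""
    z, log_probs = model(obs)
    with torch.no_grad():
        z_next, _ = model(next_obs)          # target latent for the transition loss
    a_onehot = F.one_hot(actions, n_actions).float()
    za = torch.cat([z, a_onehot], dim=-1)

    transition_loss = F.mse_loss(model.transition(za), z_next)
    reward_loss = F.mse_loss(model.reward(za).squeeze(-1), rewards)

    # KL(old || new) per state keeps the update inside a neighborhood of the
    # current (old) policy; the advantage term drives improvement.
    new_log_probs = log_probs.gather(1, actions.unsqueeze(-1)).squeeze(-1)
    kl = F.kl_div(log_probs, old_log_probs, log_target=True,
                  reduction="batchmean")
    policy_loss = -(advantages * new_log_probs).mean() + kl_coef * kl

    return policy_loss + transition_loss + reward_loss
```

In an on-policy loop of the kind the abstract describes, old_log_probs and advantages would be computed from rollouts of the current policy before each update, as in PPO.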

Related tags

Online reinforcement learning, SPI, DeepSPI algorithm, Policy improvement, Theoretical framework