cs.AI updates on arXiv.org, October 30, 12:18
Multi-Goal Reinforcement Learning: Dispersing Visits to Rewarding States While Maximizing Return

This article presents a new Multi-Goal Reinforcement Learning (Multi-Goal RL) algorithm for settings where a policy must spread its visits over rewarding states rather than, as in traditional RL, exploit a single return source. Existing techniques such as entropy regularization and intrinsic rewards encourage exploration but do not necessarily produce a broad distribution over rewarding states, while methods that match a target distribution require the goal states to be known in advance, which is infeasible in large systems. The proposed algorithm learns a high-return policy mixture whose state distribution is spread uniformly over the set of goal states: at each iteration it optimizes a custom reward function and uses an offline RL algorithm to update the policy mixture, with proven performance guarantees and convergence.

🎯 **The challenge of Multi-Goal RL**: Traditional reinforcement learning focuses on maximizing a single source of return, but in many real-world scenarios a policy should, while maximizing return, also visit multiple rewarding states broadly, yielding a dispersed marginal state distribution. Existing techniques such as entropy regularization and intrinsic rewards may not achieve this, and methods that match a target distribution depend on the goal states being known in advance, which is hard to satisfy in complex systems.

💡 **A novel algorithm design**: The paper proposes an algorithm that learns a mixture of policies whose state distribution is spread uniformly over the set of goal states while still maximizing expected return. Its core is a custom reward function, computed at each iteration from the current policy mixture and a batch of sampled trajectories, which an offline RL algorithm then uses to update the mixture, as sketched below.
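
A minimal sketch of that iteration, assuming hypothetical `sample_trajectory`, `is_goal`, and `offline_rl_update` callables (the summary leaves these components abstract) and a simple visit-count-based reward in place of the paper's exact mixture-based reward:

```python
from collections import defaultdict

def train_policy_mixture(sample_trajectory, is_goal, offline_rl_update,
                         n_iters=50, n_traj=100):
    """Illustrative loop: sample trajectories from the current mixture,
    relabel rewards to discourage re-visiting well-covered goal states,
    and fit a new mixture component with an offline RL learner.

    sample_trajectory(mixture) -> list of (state, action, env_reward) tuples
    is_goal(state)             -> oracle goal-state classifier
    offline_rl_update(dataset) -> policy trained on the relabeled transitions
    """
    mixture = []                    # mixture components, mixed uniformly here
    goal_visits = defaultdict(int)  # goal-state visit counts under the mixture

    for _ in range(n_iters):
        trajectories = [sample_trajectory(mixture) for _ in range(n_traj)]

        # Relabel rewards: keep the environment reward, but shrink it on goal
        # states the mixture already visits often, so the next component is
        # pushed toward under-visited goal states (one illustrative choice;
        # the paper derives its reward from the mixture's marginal distribution).
        dataset = []
        for traj in trajectories:
            for state, action, env_reward in traj:
                r = env_reward / (1 + goal_visits[state]) if is_goal(state) else env_reward
                dataset.append((state, action, r))

        mixture.append(offline_rl_update(dataset))

        # Update visit counts so the next iteration sees the new coverage.
        for traj in trajectories:
            for state, _, _ in traj:
                if is_goal(state):
                    goal_visits[state] += 1

    return mixture
```

Mixing the components uniformly is a simplification for illustration; the actual mixture weights and reward definition follow the paper's analysis.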

📈 **Theoretical guarantees and experimental validation**: The algorithm is shown to have performance guarantees, with efficient convergence bounds for an objective that combines the expected return with the dispersion of the marginal state distribution over the goal states. The authors run experiments on synthetic MDPs and standard RL environments to evaluate the algorithm, demonstrating both dispersed goal visitation and high return.
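
The summary does not spell this objective out. One plausible form, assuming the dispersion term is the entropy of the mixture's marginal state distribution restricted to the goal set (an illustrative choice, not the paper's stated definition), is:

```latex
% Illustrative objective: expected return of the mixture plus the entropy of
% its marginal state distribution restricted to the goal set G.
J(\mu) \;=\; \mathbb{E}_{\pi \sim \mu}\!\left[ R(\pi) \right]
       \;+\; \lambda\, H\!\left( d^{\mu}_{G} \right),
\qquad
d^{\mu}(s) \;=\; \sum_{\pi} \mu(\pi)\, d^{\pi}(s)
```

Here `\mu` is the policy mixture, `d^{\pi}` the marginal state distribution of policy `\pi`, `d^{\mu}_{G}` its renormalized restriction to the goal set `G`, and `\lambda > 0` trades return off against dispersion.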

arXiv:2510.25311v1 Announce Type: cross Abstract: Reinforcement Learning algorithms are primarily focused on learning a policy that maximizes expected return. As a result, the learned policy can exploit one or a few reward sources. However, in many natural situations, it is desirable to learn a policy that induces a dispersed marginal state distribution over rewarding states, while maximizing the expected return, which is typically tied to reaching a goal state. This aspect remains relatively unexplored. Existing techniques based on entropy regularization and intrinsic rewards use stochasticity to encourage exploration toward an optimal policy, which does not necessarily lead to a dispersed marginal state distribution over rewarding states. Other RL algorithms which match a target distribution assume the latter to be available a priori. This may be infeasible in large-scale systems where enumeration of all states is not possible and a state is determined to be a goal state only upon reaching it. We formalize the problem of maximizing the expected return while uniformly visiting the goal states as Multi-Goal RL, in which an oracle classifier over the state space determines the goal states. We propose a novel algorithm that learns a high-return policy mixture with marginal state distribution dispersed over the set of goal states. Our algorithm is based on optimizing a custom RL reward which is computed at each iteration, based on the current policy mixture, for a set of sampled trajectories. The latter are used via an offline RL algorithm to update the policy mixture. We prove performance guarantees for our algorithm, showing efficient convergence bounds for optimizing a natural objective which captures the expected return as well as the dispersion of the marginal state distribution over the goal states. We design and perform experiments on synthetic MDPs and standard RL environments to evaluate the effectiveness of our algorithm.
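
One simple way to quantify the dispersion the abstract refers to is the entropy of the empirical visit distribution over goal states, estimated from sampled trajectories. This is an illustrative metric (using the same hypothetical trajectory format and `is_goal` oracle as the sketch above), not necessarily the measure analyzed in the paper:

```python
import math
from collections import Counter

def goal_dispersion(trajectories, is_goal):
    """Entropy (in nats) of the empirical visit distribution over goal states.
    0 means all goal visits concentrate on one state; log(k) means visits are
    spread uniformly over k distinct goal states."""
    counts = Counter(s for traj in trajectories
                       for (s, _, _) in traj if is_goal(s))
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```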

Related tags: Reinforcement Learning, Multi-Goal Learning, Policy Optimization