cs.AI updates on arXiv.org, August 11
The Fair Game: Auditing & Debiasing AI Algorithms Over Time

Machine learning faces a persistent "fairness" problem: existing methods mostly quantify bias after the fact, their definitions often conflict, and they adapt poorly to a dynamic social environment. This paper proposes a new mechanism, "Fair Game", which builds a closed loop of an "Auditor" and a "Debiasing algorithm" around an ML algorithm and uses reinforcement learning (RL) to dynamically adapt the fairness of the algorithm's predictions. The mechanism can simulate the evolution of society's ethical and legal frameworks, allowing fairness goals to be adjusted over time based on auditor feedback, and offers a new approach to building flexible, adaptive fair ML systems applicable both pre- and post-deployment.

Existing Fair Machine Learning (ML) methods mostly rely on observational quantification over an already-trained algorithm; their bias definitions often conflict and can only be applied in retrospect, making them ill-suited to a dynamically changing social environment. This leaves a gap between what Fair ML aims to achieve and what it actually does.

"Fair Game" proposes a dynamic mechanism: an "Auditor" and a "Debiasing algorithm" are placed in a loop around the ML algorithm, and reinforcement learning (RL) is used to continually adapt the fairness of the ML predictions.

The core of the mechanism is its flexibility: fairness goals can be adjusted dynamically by modifying the auditor and the types of bias it quantifies, thereby simulating the evolution of society's ethical and legal frameworks.

The "Fair Game" framework supports fairness management of an ML system both before and after deployment; it continually improves the algorithm's fairness using feedback from the system's interaction with society, enabling the construction of more adaptive Fair ML systems.

arXiv:2508.06443v1 Announce Type: new Abstract: An emerging field of AI, namely Fair Machine Learning (ML), aims to quantify different types of bias (also known as unfairness) exhibited in the predictions of ML algorithms, and to design new algorithms to mitigate them. Often, the definitions of bias used in the literature are observational, i.e. they use the input and output of a pre-trained algorithm to quantify a bias under concern. In reality, these definitions are often conflicting in nature and can only be deployed if either the ground truth is known or only in retrospect after deploying the algorithm. Thus, there is a gap between what we want Fair ML to achieve and what it does in a dynamic social environment. Hence, we propose an alternative dynamic mechanism, "Fair Game", to assure fairness in the predictions of an ML algorithm and to adapt its predictions as the society interacts with the algorithm over time. "Fair Game" puts together an Auditor and a Debiasing algorithm in a loop around an ML algorithm, by leveraging Reinforcement Learning (RL). RL algorithms interact with an environment to take decisions, which yields new observations (also known as data/feedback) from the environment and in turn, adapts future decisions. RL is already used in algorithms with pre-fixed long-term fairness goals. "Fair Game" provides a unique framework where the fairness goals can be adapted over time by only modifying the auditor and the different biases it quantifies. Thus, "Fair Game" aims to simulate the evolution of ethical and legal frameworks in the society by creating an auditor which sends feedback to a debiasing algorithm deployed around an ML system. This allows us to develop a flexible and adaptive-over-time framework to build Fair ML systems pre- and post-deployment.
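The audit-then-debias loop described in the abstract can be sketched in a few lines. Everything below is an illustrative assumption rather than the paper's actual algorithm: the `Auditor`/`Debiaser` class names, the choice of demographic parity as the audited bias, and the simple threshold-shift update (a stand-in for an RL policy update) are all invented for this sketch.

```python
# Minimal sketch of a "Fair Game"-style loop (illustrative assumptions only).
import random

class Auditor:
    """Quantifies one chosen bias: the demographic parity gap
    (positive-prediction rate of group 0 minus that of group 1)."""
    def audit(self, predictions, groups):
        by_group = {0: [], 1: []}
        for p, g in zip(predictions, groups):
            by_group[g].append(p)
        rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
        return rate(by_group[0]) - rate(by_group[1])

class Debiaser:
    """Wraps the model's decision threshold and adapts it from the
    auditor's feedback, in the spirit of an RL-style update."""
    def __init__(self, lr=0.05):
        self.shift = 0.0   # threshold shift applied to group 1
        self.lr = lr
    def update(self, gap):
        # If group 1 is under-selected (gap > 0), lower its threshold.
        self.shift += self.lr * gap

def fair_game_round(data, auditor, debiaser, base_threshold=0.5):
    """One audit-then-debias iteration of the loop."""
    preds, groups = [], []
    for score, g in data:   # score stands in for a pre-trained model's output
        t = base_threshold - (debiaser.shift if g == 1 else 0.0)
        preds.append(1 if score >= t else 0)
        groups.append(g)
    gap = auditor.audit(preds, groups)
    debiaser.update(gap)
    return gap

random.seed(0)
# Synthetic scores: group 1's scores are systematically lower,
# so the initial parity gap is large.
data = ([(random.uniform(0.3, 1.0), 0) for _ in range(200)]
        + [(random.uniform(0.0, 0.7), 1) for _ in range(200)])

auditor, debiaser = Auditor(), Debiaser()
gaps = [abs(fair_game_round(data, auditor, debiaser)) for _ in range(60)]
print(f"parity gap: round 1 = {gaps[0]:.3f}, round 60 = {gaps[-1]:.3f}")
```

Because the auditor is a pluggable component, swapping in a different `audit` method (e.g. an equalized-odds gap) changes the fairness goal without touching the underlying model, which is the adaptability the abstract emphasizes.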
