Fortune | FORTUNE Oct. 10, 21:13
A New Era of Human-Machine Collaboration: Embracing AI Partners, Confronting the Accountability Challenge

 

The article examines how, as human-machine collaboration becomes mainstream, we can make effective use of AI as a partner rather than merely a tool. It identifies “automation bias” as the biggest current risk: over-reliance on AI can lead humans into serious errors. More importantly, when an AI system fails, existing legal and ethical frameworks struggle to assign responsibility, creating a “responsibility gap” that exposes organizations to legal, financial, and reputational risk. The article stresses that leaders must take proactive steps, by clarifying accountability, training AI like a new hire, and building feedback mechanisms, to ensure that human-machine collaboration is both successful and safe and to gain an advantage in this new reality.

🤖 **AI becomes a key partner, with opportunity and risk side by side**: The article notes that we are entering an era in which AI is treated as an important work partner, and this mode of human-machine collaboration can significantly boost a team’s overall effectiveness. The biggest risk, however, is “automation bias”: people may over-rely on AI suggestions and discount their own judgment, making wrong decisions at critical moments, a danger that calls for particular vigilance in high-stakes environments.

⚖️ **The “responsibility gap” challenges legal and ethical frameworks**: When an AI system causes serious harm, existing legal and ethical systems, built around human intent, struggle to assign responsibility. The resulting “responsibility gap” exposes organizations to enormous legal, financial, and reputational risk. The “black box” nature of AI makes root causes hard to trace, which hinders remediation and can invite harsh regulatory intervention.

🚀 **Leaders should take three key actions**: To meet the challenges of human-machine collaboration, leaders should take three practical actions: first, clarify accountability by appointing a senior executive responsible for the ethical implementation of AI and designating a clear human owner for each AI system; second, onboard AI like a new hire, helping employees understand how it thinks, its limitations, and its potential failure points in order to build “calibrated trust”; and finally, treat AI as a team member and establish effective feedback channels to drive its continuous improvement and optimize human-machine teamwork.

These scenarios represent the forefront of human-machine collaboration, a significant shift that is quickly moving from research labs into every critical sector of our society.

In short, we are on the verge of deploying AI not just as a tool, but as an active partner in our most important work. The potential is clear: If we effectively combine the computational power of AI with the intuition, creativity, and ethical judgment of a human, the team will achieve more than either could alone.

But we aren’t prepared to harness this potential. The biggest risk is what’s called “automation bias”: humans tend to over-rely on automated systems and, worse, to favor their suggestions even when correct contradictory information is available. Automation bias can lead to critical errors of commission (acting on flawed advice) and omission (failing to act when a system misses something), particularly in high-stakes environments.

Even improved proficiency with AI doesn’t reliably mitigate automation bias. For example, a study of the effectiveness of clinical decision support systems in health care found that individuals with moderate AI knowledge were the most over-reliant, while both novices and experts showed more calibrated trust. What did lead to lower rates of automation bias was making study participants accountable for either their overall performance or their decision accuracy.

This leads to the most pressing question for every leader: When the AI-human team fails, who will be held accountable? If an AI-managed power grid fails or a logistics algorithm creates a supply chain catastrophe, who is responsible? Today our legal and ethical frameworks are built around human intent, creating a “responsibility gap” when an AI system causes harm. 

The result is significant legal, financial, and reputational risk.

First, it produces a legal vacuum. Traditional liability models are designed to assign fault to a human agent with intent and control. But an AI is not a moral agent, and its human operators or programmers may lack sufficient control over its emergent, learned behaviors, making it nearly impossible to assign blame to any individual. This leaves the organization that deployed the technology as the primary target of lawsuits, potentially liable for damages it could neither predict nor directly control.

Second, this ambiguity around responsibility cripples an organization’s ability to respond effectively. The “black box” nature of many complex AI systems means that even after a catastrophic failure, it may be impossible to determine the root cause. That prevents the organization from fixing the underlying problem, leaves it vulnerable to repeated incidents, and undermines public trust by making it appear unaccountable.

Finally, it invites regulatory backlash. In the absence of a clear chain of command and accountability, industry regulators are more likely to impose broad, restrictive rules, stifling innovation and creating significant compliance burdens.

The gaps in liability frameworks were laid bare by a 2018 fatal accident involving an Uber self-driving car. Debate arose over whether Uber, the system manufacturer, or the human safety driver was at fault. The case ended five years later with “the person sitting behind the wheel” pleading guilty to an endangerment charge, even though the automated driving system itself had failed to identify the person with a bike and to brake.

Such ambiguities complicate the implementation of human-machine teams. Research reflects this tension, with one study finding that while most C-suite leaders believe the responsibility gap is a serious challenge, 72% admit they do not have an AI policy in place to guide responsible use.

This isn’t a problem that Washington or Silicon Valley alone can solve. Leaders in any organization, whether public or private, can take steps to de-risk and maximize their return on investment. Here are three practical actions every leader can take to prepare their teams for this new reality. 

Start with responsibility. Appoint a senior executive responsible for the ethical implementation of AI-enabled machines in your organization. Each AI system must have a documented human owner—not a committee—who is accountable for its performance and failures. This ensures clarity from the start. Require your teams to define the level of human oversight for each AI-driven task, deciding whether a human needs to be “in the loop” (approving decisions) or “on the loop” (supervising and able to intervene). Accountability should be the first step, not an afterthought.

Onboard AI like a new hire. Train your staff not only on how to use AI but also on how it thinks, its limitations, and its potential failure points. The aim is to build calibrated trust, not blind trust. Approach AI integration with the same thoroughness as onboarding a new employee. Begin with less critical tasks to help your team understand the AI’s strengths and weaknesses. Establish feedback channels so that human team members can help improve the AI. When AI is treated as a teammate, it is more likely to become one.

Integrating AI as a teammate in our work is inevitable, but ensuring success and safety requires proactive leadership. Leaders who establish clear accountability, invest in comprehensive training, and prioritize fairness will thrive. Those who treat AI as just another tool will face the consequences. Our new machine teammates are here; it’s time to lead them effectively.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

