Fortune | FORTUNE October 14, 21:23
How Do You Trust a Robot You’ve Never Met?

With the rapid development of artificial intelligence (AI), robots are steadily entering our lives, from voice assistants to embodied machines that can carry out complex tasks. But when a robot you have never seen before appears in front of you, displaying unexpected intelligence and personality, trust becomes the central question. The article examines the trust challenges created by current AI technology, particularly the combination of large language models (LLMs) with physical bodies. Unlike the mature reputation systems of human society, robots still lack credible vetting and reputation mechanisms. Once software capability is joined with the power to act in the physical world, potential risks extend from the digital realm to real-world harm. Transparency, open-source software, verifiable rules (such as blockchain-based “Laws of Robotics”), and explainable decision-making are therefore the keys to building human-robot trust. Only by establishing public rules, transparent decisions, and human-first standards can we ensure that robots become the trustworthy partners we hope for.

🤖 **AI-driven robots are rapidly entering daily life, bringing trust challenges:** As large language models (LLMs) are combined with physical bodies, robots are no longer science fiction; they can carry out complex tasks and even offer personalized interaction. Yet unlike the mature reputation systems of human society, we lack effective ways to vet these newly arrived machines and build trust in them, leaving us uncertain about the reliability and potential risks of a robot we are meeting for the first time.

🔒 **The lack of credible vetting and reputation mechanisms is the key challenge:** We traditionally come to trust human service providers (Uber drivers, doctors) through ratings, track records, and professional credentials. For robots, these mechanisms do not yet exist. The article points out one latent risk: when software that can already manipulate digital systems gains the ability to act in the physical world, the potential harms include real-world injury, such as a robot being remotely commanded to open your front door or being repurposed to hurt someone.

💡 **Transparency, open source, and verifiable rules are the cornerstones of trust:** The article argues that transparency is central to meeting the trust challenge. OpenMind uses open-source software, and its robots download immutable guardrails such as the “Three Laws of Robotics” from the Ethereum blockchain, ensuring that their rules of conduct are public, verifiable, and tamper-resistant. This is akin to knowing that all Uber drivers have agreed to the same code of conduct and the same rules of the road, which makes them more trustworthy.

🤝 **Explainable decision-making and trustworthy networks are the way forward:** To strengthen trust further, a robot’s decision process needs to be explainable: its AI modules should communicate with one another in plain language, and their reasoning should be logged for human audit. When a robot makes a mistake, users should be able to understand why. Moreover, as robot networks grow, the skill sharing and learning between machines must be effectively governed to ensure safety and alignment with human values. Ultimately, trustworthy robots must be built on public rules, explainable decisions, and human-first standards.

Imagine you’re walking through your neighborhood and a four-foot-tall robot walks up beside you. It greets you by name, remembers your favorite coffee order, and offers to carry your groceries. You’ve never seen it before. Should you trust it?

That question isn’t science fiction anymore. Machines are getting smart. Large language models (LLMs) already contain vast amounts of information. They know about the physical world, human behaviors, our history, the nature of human jobs, and the behaviors of our pets. This stored information allows LLMs and other AIs to write books, make us laugh, fix computer code, earn perfect scores on medical licensing exams, and file our taxes. Given a physical body, LLMs are starting to navigate cities and hospitals autonomously; they can open doors, get into robotic cars, hold conversations, and learn about the humans around them. Our generation is watching machines wake up. Robots are no longer inert piles of plastic and metal; they are growing into teachers, co-workers, and health companions. Some humans cry when familiar robots receive LLM or privacy upgrades that change their personality. Soldiers have tried to help robot team members to safety, despite it being (rationally) clear that machines can be fixed or replaced.

The main challenge is how fast AI is improving. People have spent thousands of years developing systems for vetting and reputation. You trust your Uber driver because you can see their rating and ride history. Your family doctor (hopefully) has performed hundreds of successful procedures over years of training. You might trust a teacher because your school district hired them, presumably after extensive vetting. None of this exists yet for robots. A robot in your home or office could be a marvel or a liability. 

The stakes are higher than a buggy app or a hacked email account. We all understand how catastrophic a major cyberattack can be: banks closed, infrastructure disabled, sensitive data stolen. A compromised household robot could be misused from anywhere in the world, for example commanded remotely to open your front door from the inside. An autonomous delivery bot could be repurposed to harm its recipient. When software that can already manipulate our digital systems gains the ability to act in the physical world, the potential for harm extends to real-world injury.

The importance of transparency

At OpenMind, we think that part of the answer is transparency. The robots we build and the software they run are open source. You don’t have to take my word for what’s inside: you can read the code. Beyond open software, when our robots boot, they download immutable guardrails, such as Asimov’s Laws of Robotics, from the Ethereum blockchain. That way, their rules aren’t hidden in a private database; they are public, verifiable, and tamper-resistant. It’s the robot equivalent of knowing that all Uber drivers have agreed to the same rules of conduct, and the same rules of the road.
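As a rough illustration, a boot sequence along these lines might fetch the rule text from a smart contract and compare it against a digest pinned in the robot’s firmware. The RPC endpoint, contract address, and getRules() accessor below are hypothetical placeholders, not OpenMind’s published interface:

```python
# A minimal sketch of a boot-time guardrail check, assuming a hypothetical
# contract and accessor; OpenMind's real on-chain layout may differ.
import hashlib

from web3 import Web3

RPC_URL = "https://eth.example.org"  # hypothetical Ethereum RPC endpoint
GUARDRAIL_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
GUARDRAIL_ABI = [{
    "name": "getRules",          # hypothetical read-only accessor
    "type": "function",
    "stateMutability": "view",
    "inputs": [],
    "outputs": [{"name": "", "type": "string"}],
}]

# Digest of the canonical rule text, pinned in the robot's firmware so a
# misbehaving RPC node cannot silently serve different rules.
EXPECTED_SHA256 = "0" * 64  # placeholder; shipped with the open-source release

def load_guardrails() -> str:
    """Fetch the rule text from the chain; refuse to boot if it was altered."""
    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    contract = w3.eth.contract(address=GUARDRAIL_ADDRESS, abi=GUARDRAIL_ABI)
    rules: str = contract.functions.getRules().call()
    digest = hashlib.sha256(rules.encode("utf-8")).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError("guardrail text does not match pinned hash; halting boot")
    return rules
```

The local digest check matters as much as the chain itself: the blockchain guarantees the rules can’t be rewritten in place, while the pinned hash guards against a compromised RPC node serving something else entirely.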

Why go to those lengths? Many of the environments where human-facing universal robots can provide benefits, such as homes, hospitals, and schools, are sensitive and personal. A tutoring robot helping your kids with math should have a track record of safe and productive sessions. An elder-care assistant needs a verifiable history of respectful, competent service. A delivery robot approaching your front door should be as predictable and trustworthy as your favorite mail carrier. Without trust, adoption will never take place, or will quickly stall.

Trust is built gradually and also reflects common understanding. We design our systems to be explainable: multiple AI modules talk to each other in plain language, and we log their thinking so humans can audit decisions. If a robot makes a mistake — drops the tomato instead of placing it on the counter — you should be able to ask why and get an answer you can understand.
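As one illustration of what auditable logging could look like, each module might append a plain-language record of what it saw, why it acted, and what it did. The schema, module name, and file format here are assumptions, not OpenMind’s actual logging system:

```python
# A sketch of an append-only, plain-language decision log; the schema and
# module names are illustrative, not OpenMind's actual logging format.
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class DecisionRecord:
    timestamp: float
    module: str       # which AI module produced this step
    observation: str  # what the module saw, in plain language
    reasoning: str    # why it chose the action it chose
    action: str       # what it actually did

def log_decision(record: DecisionRecord, path: str = "audit_log.jsonl") -> None:
    # JSON Lines: one human-readable decision per line, easy to audit later.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")

# Example: the dropped-tomato mistake becomes a record a human can query.
log_decision(DecisionRecord(
    timestamp=time.time(),
    module="manipulation-planner",
    observation="Tomato near the edge of the gripper's stable grasp range.",
    reasoning="Reduced grip force to avoid bruising; the grasp then slipped.",
    action="Dropped the tomato short of the counter; flagged for review.",
))
```

Because each record is plain JSON in plain English, the audit trail can be read by the robot’s owner, not just its engineers.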

Over time, as more robots connect and share skills, trust will depend on the network too. We learn from peers, and machines will learn from us and from other machines. That’s powerful, but just as parents worry about what their kids learn on the web, we need good ways to audit and align skill exchange between robots. Governance for human–machine societies isn’t optional; it’s fundamental infrastructure.
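One plausible building block for that governance, sketched here with a hypothetical publisher registry and bundle format rather than any existing protocol, is to require that a shared skill carry a signature from a publisher the robot’s operator already trusts:

```python
# A sketch of gated skill exchange: install a shared skill only if it is
# signed by a publisher the operator already trusts. The registry and bundle
# format are assumptions for illustration, not an existing protocol.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Operator-curated registry: publisher name -> raw 32-byte Ed25519 public key.
TRUSTED_PUBLISHERS: dict[str, bytes] = {}

def skill_is_trusted(publisher: str, bundle: bytes, signature: bytes) -> bool:
    """Return True only for bundles signed by a known, trusted publisher."""
    key_bytes = TRUSTED_PUBLISHERS.get(publisher)
    if key_bytes is None:
        return False  # unknown publisher: never install silently
    try:
        Ed25519PublicKey.from_public_bytes(key_bytes).verify(signature, bundle)
        return True
    except InvalidSignature:
        return False
```

A gate like this doesn’t judge whether a skill is good, only whether its provenance is known; reputation systems would still be needed on top of it.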

So, how do you trust a robot you’ve never met? With the verification and reputation systems we use for humans, adapted for machines: public rules, explainable decisions, and standards that are visible, enforceable, and human-first. Only then can we get to the future we actually want: one where robots are trusted teammates in the places that matter.

(For readers unfamiliar: Isaac Asimov’s Three Laws of Robotics — first introduced in 1942 — state that a robot may not harm a human or, through inaction, allow a human to come to harm; must obey human orders unless those orders conflict with the first law; and must protect its own existence so long as that protection does not conflict with the first or second laws.)

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
