AI News, September 23
AI's trust deficit is holding back growth: how can public trust be built?
A new report shows that while politicians tout AI's potential for growth and efficiency, the public harbours a trust deficit in the technology. Many people remain deeply sceptical, posing a major challenge to governments' AI rollout plans. The report finds that a lack of trust is the single biggest reason people avoid using generative AI. Frequency of use correlates positively with trust: people who have never used AI and weekly users differ markedly in how they perceive its risks. Age, profession, and other factors also shape public attitudes. People are relatively accepting of AI applications that improve daily life, such as easing traffic congestion and detecting cancer, but are wary of its use for workplace monitoring or political advertising. To address this, governments need to make AI's benefits concrete, demonstrate real results in public services, and establish effective regulation and training so that AI works for everyone.

📊 **The public trust deficit is the main obstacle to AI growth**: The report finds that insufficient public trust in artificial intelligence (AI) is the key factor holding back its adoption and development. While politicians are optimistic about AI's growth and efficiency gains, public scepticism runs deep; for generative AI in particular, lack of trust has become the single biggest reason people are unwilling to use it. This trust deficit directly slows the progress of the AI revolution.

🔄 **Trust rises with frequency of use**: The research finds that public trust in AI is closely tied to how often people use it. Among those who have never used AI, 56% see it as a risk to society, whereas among weekly users that figure drops sharply to 26%. This "familiarity breeds comfort" effect suggests that first-hand experience of AI's benefits helps ease fears of the technology and counters worries that AI will replace everyone.

🤔 **Acceptance depends on the use case**: Public attitudes toward AI are not fixed; they depend on what the technology is actually used for. When AI is applied to areas that directly improve daily life, such as easing traffic congestion or speeding up cancer detection, acceptance is high. But when AI is proposed for monitoring employee performance or targeting political advertising, attitudes shift quickly and acceptance plummets. This suggests people's concerns stem not from AI's development itself but from its purpose and potential impact.

🛡️ **A path to "justified trust"**: To close the trust gap, the report makes several recommendations. Governments should make AI's benefits concrete, shifting the conversation from abstract GDP growth to improvements that matter in people's lives, such as faster medical appointments and better public services. They also need to provide evidence that AI genuinely improves people's experience of public services, rather than focusing only on technical metrics. Finally, sound regulation and adequate training, ensuring AI is safe, effective, and accessible to the public, are the foundations of "justified trust".

While politicians tout AI’s promise of growth and efficiency, a new report reveals a public trust deficit in the technology. Many are deeply sceptical, creating a major headache for governments’ plans.

A deep dive by the Tony Blair Institute for Global Change (TBI) and Ipsos has put some hard numbers on this feeling of unease. It turns out that a lack of trust is the biggest single reason people are shying away from using generative AI. It’s not just a vague worry; it’s a genuine barrier holding back the AI revolution politicians are so excited about.

Public trust in AI increases with usage

The report shows an interesting split in how we see AI. On one hand, more than half of us have dabbled with generative AI tools in the last year. That’s pretty fast adoption for a technology that was barely on the public radar a few years ago.

However, nearly half the country has never used AI, either at home or for work. This creates a huge divide in how people feel about AI and its growth. The data suggests the more you use AI, the more you tend to trust it.

For people who have never used AI, 56 percent see it as a risk to society. But for the folks who use it every week, that number is cut by more than half, dropping to 26 percent. It’s a classic case of familiarity breeding comfort. If you’ve never had a positive experience with AI, it’s much easier to believe the scary headlines. Seeing its limitations first-hand also helps to counter fears that everyone is about to be replaced by AI.

This divide in public trust towards AI is also shaped by who you are. Younger people are generally more optimistic, while older generations are warier. Professionals in the tech world feel ready for what’s coming, but those in sectors like healthcare and education? They’re feeling far less confident, even though their jobs are likely to be more affected by AI growth.

It’s not what you do, it’s the way that you do it

Among the most revealing parts of the report is that our feelings about AI change depending on the job it’s doing.

We’re quite happy for AI to help sort out traffic jams or speed up cancer detection. Why? Because we can see the direct, positive benefit to our lives. It’s technology that’s clearly working for us.

But ask people how they feel about AI monitoring their performance at work or being used to target them with political ads, and the mood sours instantly. The acceptance plummets. This shows our concerns aren’t really about the growth of AI itself, but about its purpose.

We want to know that AI is being used for good and that rules are in place so that big tech companies aren’t left completely in the driver’s seat.

How do we increase public trust in AI to support growth?

The TBI report doesn’t just point out the problem; it offers a clear path forward to build what it calls “justified trust.”

First, the government needs to change the way it talks about AI. Forget abstract promises of boosting GDP. Instead, talk about what it means for people's lives: getting a hospital appointment faster, making public services easier to use, or cutting down the daily commute. When it comes to the benefits of AI growth, show, don't just tell.

Next, prove it works. When AI is used in public services, we need to see the evidence that it’s actually making things better for real people, not just more efficient for a spreadsheet. The measure of success should be our experience, not just a technical benchmark.

Of course, none of this works without proper rules and training. Regulators need the power and know-how to keep AI in check, and we all need access to training to feel confident using these new tools safely and effectively. The goal is to make AI something we can all work with, not something that feels like it’s being done to us.

Building public trust in AI to support its growth is about building trust in the people and institutions in charge of it. If the government can show that it’s committed to making AI work for everyone, it might just bring the public along for the ride.


The post Public trust deficit is a major hurdle for AI growth appeared first on AI News.
