AI News, August 7
Alan Turing Institute: Humanities are key to the future of AI

The "Doing AI Differently" initiative, jointly launched by the UK's Alan Turing Institute and partner organisations, calls for a human-centred approach to developing artificial intelligence. The initiative argues that AI outputs should be treated as cultural artefacts rather than the results of pure mathematical problems. Because current AI lacks an understanding of cultural context, it often falls short where nuance matters. The researchers point out that AI design is heavily homogenised, so the same biases and limitations are replicated across many applications. To meet this challenge, the team proposes "interpretive AI": systems designed from the outset to handle ambiguity, multiple perspectives, and deep context. The long-term vision is human-AI "interpretive technologies" that play a key role in areas such as healthcare and climate action, ensure AI remains safe and reliable, and amplify human strengths.

💡 **AI outputs as cultural artefacts, not mathematical results**: The researchers argue that what AI produces is closer to a novel or a painting than a spreadsheet. This shift in perspective highlights AI's cultural impact and the need to understand and evaluate its outputs beyond pure computational logic.

⚠️ **Lack of contextual understanding and the homogenisation problem**: Because current AI systems lack "interpretive depth", they often fail in settings where nuance and context matter. At the same time, the heavy homogenisation of AI design means the same blind spots, biases, and limitations are copied into countless applications, limiting their potential and diversity.

🚀 **A new "interpretive AI" paradigm**: To address these problems, the researchers advocate developing "interpretive AI": systems built from the very beginning to handle ambiguity, multiple viewpoints, and deep context. The aim is to break the limits of current AI design and achieve more adaptive, robust systems.

🤝 **A new model of human-AI collaboration**: The future of AI is not about replacing humans but about human-AI ensembles. Combining human creativity with AI's processing power can help tackle complex challenges and deliver more humane, effective solutions in areas such as healthcare and climate action.

🔒 **Safety and reliability as core concerns**: For partners such as Lloyd's Register Foundation, ensuring that future AI systems are deployed safely and reliably is the top priority. This is about more than technical progress; it is about whether AI can genuinely benefit society and amplify the best of humanity.

A powerhouse team has launched a new initiative called ‘Doing AI Differently,’ which calls for a human-centred approach to the technology’s future development.

For years, we’ve treated AI’s outputs like they’re the results of a giant math problem. But the researchers – from The Alan Turing Institute, the University of Edinburgh, AHRC-UKRI, and the Lloyd’s Register Foundation – behind this project say that’s the wrong way to look at it.

What AI creates are essentially cultural artifacts. They’re more like a novel or a painting than a spreadsheet. The problem is, AI is creating this “culture” without understanding any of it. It’s like someone who has memorised a dictionary but has no idea how to hold a real conversation.

This is why AI often fails when “nuance and context matter most,” says Professor Drew Hemment, Theme Lead for Interpretive Technologies for Sustainability at The Alan Turing Institute. The system just doesn’t have the “interpretive depth” to get what it’s really saying.

Compounding this, most of the AI in the world is built on just a handful of similar designs. The report calls this the “homogenisation problem” and argues that future AI development must overcome it.

Imagine if every baker in the world used the exact same recipe. You’d get a lot of identical, and frankly, boring cakes. With AI, this means the same blind spots, the same biases, and the same limitations get copied and pasted into thousands of tools we use every day.

We saw this happen with social media. It was rolled out with simple goals, and we’re now living with the unintended societal consequences. The ‘Doing AI Differently’ team is sounding the alarm to make sure we don’t make that same mistake with AI.

The team has a plan to build a new kind of AI, one they call Interpretive AI. It’s about designing systems from the very beginning to work the way people do: with ambiguity, multiple viewpoints, and a deep understanding of context.

The vision is to create interpretive technologies that can offer multiple valid perspectives instead of just one rigid answer. It also means exploring alternative AI architectures to break the mould of current designs. Most importantly, the future isn’t about AI replacing us; it’s about creating human-AI ensembles where we work together, combining our creativity with AI’s processing power to solve huge challenges.
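The “multiple valid perspectives” idea can be sketched in a few lines of Python. This is purely illustrative and not any system described by the project; the `Perspective` class and the lens functions are invented names. The point is the shape of the design: every lens contributes a reading, and nothing collapses them into a single rigid answer.

```python
from dataclasses import dataclass

@dataclass
class Perspective:
    viewpoint: str      # which interpretive lens produced this reading
    reading: str        # the interpretation itself
    confidence: float   # how strongly this lens applies

def interpret(text, lenses):
    """Return every lens's reading instead of collapsing to one answer."""
    return [lens(text) for lens in lenses]

# Two toy "lenses": a literal reading and a context-sensitive one.
def literal_lens(text):
    return Perspective("literal", f"Plain statement: {text!r}", 0.9)

def contextual_lens(text):
    return Perspective(
        "contextual",
        f"{text!r} may be idiomatic; its meaning depends on the audience",
        0.6,
    )

perspectives = interpret("The project is on fire", [literal_lens, contextual_lens])
for p in perspectives:
    print(f"[{p.viewpoint}] {p.reading} (confidence {p.confidence})")
```

A human in the ensemble would then weigh these readings rather than receiving one machine-chosen verdict, which is the collaborative pattern the initiative describes.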

This has the potential to touch our lives in very real ways. In healthcare, for example, your experience with a doctor is a story, not just a list of symptoms. An interpretive AI could help capture that full story, improving your care and your trust in the system.

For climate action, it could help bridge the gap between global climate data and the unique cultural and political realities of a local community, creating solutions that actually work on the ground.

A new international funding call is launching to bring researchers from the UK and Canada together on this mission. But we’re at a crossroads.

“We’re at a pivotal moment for AI,” warns Professor Hemment. “We have a narrowing window to build in interpretive capabilities from the ground up.”

For partners like Lloyd’s Register Foundation, it all comes down to one thing: safety.

“As a global safety charity, our priority is to ensure future AI systems, whatever shape they take, are deployed in a safe and reliable manner,” says their Director of Technologies, Jan Przydatek.

This isn’t just about building better technology. It’s about creating an AI that can help solve our biggest challenges and, in the process, amplify the best parts of our own humanity.

(Photo by Ben Sweet)

See also: AI obsession is costing us our human skills

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Alan Turing Institute: Humanities are key to the future of AI appeared first on AI News.
