AI Model Explainability and Future Development Trends

 

John L. Hennessy, Chairman of Google's parent company Alphabet, shared his views on AI model explainability and future development trends at the NEX-T Summit 2025. He stressed that in high-stakes fields such as medical diagnostics, the transparency and explainability of AI systems are essential. Hennessy predicted that future AI models will place greater emphasis on energy efficiency, with small, edge-deployable models playing a key role. Multimodal models will lead the next wave of AI development, and mixture-of-experts models will dominate thanks to their low inference cost. High-quality data, data ownership and fair compensation mechanisms, and hardware innovation (such as TPUs and brain-inspired architectures) are also important drivers of AI progress. The discussion also covered AI's enormous potential in healthcare and the role of academic institutions in cultivating innovative talent.

💡 AI model explainability is a key challenge, especially in high-impact domains such as medical diagnostics, where AI systems need to provide a transparent reasoning process. Research here is still at an early stage, but it is essential.

🚀 AI models will keep evolving, and the total compute spent on inference will far exceed that spent on training. Small, edge-deployable models are crucial for reducing energy consumption and improving accessibility, enabling real-time inference on distributed devices.

🧠 Multimodal models, able to process text, images, and video simultaneously, will define the next wave of AI development. Unlike early large language models, they aim to activate only the relevant nodes, lowering compute and energy costs. Mixture-of-experts models have already taken over many tasks thanks to their low inference cost.

📊 High-quality data is a key pillar of AI progress. The importance of sheer data quantity is giving way to data quality and accessibility. Data ownership is fragmented, especially in enterprise settings, which calls for fair compensation mechanisms. Federated learning is a promising approach to data isolation and privacy concerns.

💻 Hardware innovation is central to AI progress. Although GPUs still dominate generative AI training, TPUs and other specialized processors offer better cost efficiency. Techniques such as quantization improve efficiency, but physical limits, notably the communication cost between chips and memory, will shape future hardware design, which may need to draw on the energy efficiency of the human brain for new architectures.

On-site photo

TMTPOST — The explainability of AI models is a significant challenge, especially in high-impact fields such as medical diagnostics, where AI systems must provide transparent reasoning for their outputs, said John L. Hennessy, Chairman of Google’s parent company Alphabet.

Hennessy, also a Turing Award laureate, shared his insights at a dialogue named “Silicon Valley & AI Trends” with Lu Zhang, the Founding Partner of Fusion Fund, during the NEX-T Summit 2025 in Silicon Valley on September 27.

“If you're going to start doing medical diagnosis, for example, or other high-impact kinds of things, you're going to have to explain. The model is going to have to explain somehow. There's work going on in this area, but it's early on this fundamental research. People are trying to improve this kind of explicability problem,” he explained. “I think we're going to continue to see the models evolve, because we're now at the point where, as the number of people using these AI models goes way up, the total amount of computation involved in inference is going to blow past the amount in training,” he noted.

Innovations in smaller, edge-deployable models were highlighted as crucial for reducing energy consumption, enhancing accessibility, and enabling real-time inference on distributed devices, he added.
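The scale argument behind that shift can be seen with rough arithmetic. The sketch below (every figure is an assumption chosen for illustration, not a number from the talk) compares a one-time training budget against an inference cost that grows with daily usage; once a model is widely deployed, cumulative inference compute overtakes training compute.

```python
# Back-of-envelope comparison of training vs. cumulative inference compute.
# Every figure below is an assumption for illustration, not a quoted number.
train_flops = 1e24            # assumed one-time training budget, in FLOPs
flops_per_query = 1e12        # assumed cost of serving a single request
queries_per_day = 1e9         # assumed daily request volume at scale

days_to_match_training = train_flops / (flops_per_query * queries_per_day)
print(f"inference overtakes training after ~{days_to_match_training:.0f} days")
# With these assumptions, roughly 1,000 days; higher usage or larger models
# shift the crossover earlier, which is why small, efficient edge models matter.
```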

Hennessy, the former president of Stanford University, pointed out that multimodal models—capable of processing text, images, and video simultaneously—will define the next wave of AI development. Unlike early large language models, these models aim to activate only relevant nodes, reducing computational and energy costs.

He mentioned that with the combination of text, images, videos, and other media, multimodal models are expected to play a leading role. He also noted that mixture-of-experts models have already taken over many tasks because of their lower inference costs.
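As a rough illustration of why mixture-of-experts inference is cheap, the sketch below (hypothetical names and shapes, not code from any production system) routes each input through only the top-k experts selected by a small gating network, so most expert parameters are never touched for a given token.

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Minimal mixture-of-experts step: route the input to only the
    top-k experts, so most expert parameters stay inactive per token."""
    logits = x @ gate_w                      # gating scores, one per expert
    top = np.argsort(logits)[-top_k:]        # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only the chosen experts run; the rest cost no compute for this token.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [lambda v, W=rng.normal(size=(d, d)): v @ W for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
x = rng.normal(size=d)
print(moe_forward(x, gate_w, experts).shape)   # (8,)
```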

High-quality data emerged as another critical pillar for AI advancement. While data quantity has historically driven AI performance, the quality and accessibility of data are now paramount. Hennessy pointed out that data ownership is often fragmented, especially in enterprise contexts, necessitating fair compensation mechanisms for data creators.

Zhang pointed out that federated learning is a promising approach to address data isolation and privacy concerns, allowing AI models to train across distributed datasets without compromising sensitive information.
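As a concrete picture of the federated-learning idea Zhang mentions, here is a minimal federated-averaging sketch on a toy linear-regression problem (clients, data sizes, and hyperparameters are all invented for illustration): each client fits the model on its own private data, and only the resulting weights are averaged centrally, so raw records never leave the client.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=20):
    """One client's local training: plain gradient descent on its own
    (private) data for a linear model; raw data never leaves the client."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round of federated averaging: each client trains locally, then
    only the model weights are averaged, weighted by client data size."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                       # e.g. three hospitals, data stays local
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, clients)
print(w)                                 # approaches [2, -1] without pooling raw data
```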

Hardware innovation remains a central component of AI progress, Hennessy said. While GPUs continue to dominate generative AI training, TPUs and other specialized processors have demonstrated superior cost-performance ratios, he added.

Hennessy noted that techniques such as quantization using lower-precision arithmetic have enhanced efficiency, but inherent physical limitations, particularly communication costs between chips and memory, will shape future hardware design. Achieving further breakthroughs may require fundamentally new architectures inspired by the energy efficiency of the human brain.

“We have already used several of the big opportunities for improving performance. One was what's so-called quantization. So rather than do everything with 32- or 64-bit floating point, we now do things with four-bit floating point,” he elaborated.
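To make the quantization point concrete, here is a minimal sketch of uniform low-bit weight quantization (a simplified signed-integer variant rather than the 4-bit floating-point formats Hennessy refers to; all values are illustrative): weights are mapped onto a small set of integer levels plus one shared scale, cutting storage and arithmetic cost at a modest accuracy cost.

```python
import numpy as np

def quantize(weights, bits=4):
    """Uniform symmetric quantization: map float weights onto a few signed
    integer levels, storing a single float scale for the whole tensor."""
    levels = 2 ** (bits - 1) - 1                 # e.g. 7 levels each side for 4-bit
    scale = np.abs(weights).max() / levels
    q = np.clip(np.round(weights / scale), -levels, levels).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(2)
w = rng.normal(scale=0.02, size=1000).astype(np.float32)
q, scale = quantize(w, bits=4)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"mean abs error: {err:.5f}")              # small relative to the weight scale
# Storage drops from 32 bits to 4 bits per weight (plus one shared scale),
# and low-precision arithmetic is far cheaper than 32-bit floating point.
```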

Zhang described healthcare as a sector with enormous AI potential. Currently, AI applications utilize less than 5% of available health-related data in the United States, despite healthcare representing nearly 20% of U.S. GDP. Opportunities exist across diagnostics, digital therapeutics, and workflow optimization.

“I personally think this year is the prime time for AI in healthcare. Probably lots of you didn't know that, first, healthcare is almost 20% of U.S. GDP, the whole industry, and also, in human society, 30% of the data we have are healthcare related. Guess how much (is) being used for application right now? Less than 5%. So it's like a huge amount of value we haven't been able to discover with AI,” Zhang illustrated with numbers.

She stressed that AI should augment rather than replace physicians, improving efficiency and patient outcomes. Examples included AI-assisted radiology and automated medical coding, which allow healthcare professionals to spend more time with patients. Achieving large-scale adoption requires aligning incentives across diverse stakeholders, including insurers, pharmaceutical companies, regulators, and patients. Globally, AI could help address shortages of healthcare professionals, especially in low-resource regions.

They also discussed the role of academic institutions in nurturing entrepreneurship. Stanford University’s programs—such as the Stanford Technology Ventures Program and Lean LaunchPad—provide students with mentorship, funding access, and practical experience. Students are encouraged to become "π-shaped," combining deep technical expertise with cross-disciplinary knowledge to maximize innovative potential.

“With the depth of the research, the horizontal is actually exposure to different types of technology, and also there's a creative innovation mindset. I think that's kind of the foundation for lots of students (who) want to become entrepreneurs,” Zhang explained.

International talent is vital to sustaining U.S. leadership in technology. Restrictive policies on research funding or global recruitment could undermine innovation ecosystems. By maintaining openness and supporting global scholars, universities can continue to produce world-leading entrepreneurs and breakthrough technologies, they both said.

“My view is that U.S. research universities are one of the jewels in the crown of this country, and undermining them, whether it's by cutting research or inhibiting our ability to bring the best and brightest from around the world, is a major mistake,” Hennessy warned as he concluded the dialogue. 

During the conference under the theme of the New Era of X-Technology, a galaxy of luminaries, including John Hennessy, the Chair of Google’s parent company Alphabet; Gary Gensler, the former Chair of the U.S. Securities and Exchange Commission (SEC); and Michael Snyder, a pioneer in genomics, shared their insights on artificial intelligence, innovation, global cooperation and governance. The discussions at the event sparked ideas that are set to shape the future of the tech industry.
