MIT News - Computer Science and Artificial Intelligence Laboratory, September 25, 18:00
AI model learns to understand the world by syncing sight and sound

Researchers have developed a new AI model that learns from audio-visual information the way humans do, precisely matching the sounds in a video with the corresponding frames. The model, called CAV-MAE Sync, splits audio into smaller segments and adjusts the training procedure so that it learns a finer-grained correspondence between individual video frames and the audio that occurs with them. It also introduces new data representations to balance its contrastive and reconstructive learning objectives. Together, these improvements significantly boost accuracy on video retrieval and audiovisual scene classification, even outperforming some state-of-the-art methods that require far more data, and open new possibilities for robots that understand real-world environments and for multimodal content curation.

🧠 **Synchronized audio-visual learning**: The model builds its understanding of the world by learning from a video's visual frames together with the audio that occurs at the same time. It can precisely match a sound event (such as a door slamming) to the corresponding video frame (the door closing), mimicking the way humans learn from multiple senses at once, which is essential for understanding the real world.

🔍 **Fine-grained correspondence via audio splitting**: Unlike earlier models that associate an entire audio clip with a whole video, CAV-MAE Sync splits the audio into smaller windows, so the model learns a finer correspondence between each video frame and the sound occurring in that brief interval. This fine-grained matching significantly improves performance.

⚖️ **Architectural tweaks for dual learning objectives**: By introducing new data representations such as "global tokens" and "register tokens" and adjusting the architecture, the model better balances its two key learning objectives: contrastive learning (associating similar audio-visual data) and reconstruction learning (recovering data from a query), improving overall performance.

🚀 **Performance gains and future applications**: With these relatively simple improvements, CAV-MAE Sync achieves significantly higher accuracy on video retrieval and audiovisual scene classification, even outperforming complex models that require more training data. In the future, the technique could be integrated into large language models or used for robot perception, broadening the range of multimodal AI applications.

Humans naturally learn by making connections between sight and sound. For instance, we can watch someone playing the cello and recognize that the cellist’s movements are generating the music we hear.

A new approach developed by researchers from MIT and elsewhere improves an AI model’s ability to learn in this same fashion. This could be useful in applications such as journalism and film production, where the model could help with curating multimodal content through automatic video and audio retrieval.

In the longer term, this work could be used to improve a robot’s ability to understand real-world environments, where auditory and visual information are often closely connected.

Improving upon prior work from their group, the researchers created a method that helps machine-learning models align corresponding audio and visual data from video clips without the need for human labels.

They adjusted how their original model is trained so it learns a finer-grained correspondence between a particular video frame and the audio that occurs in that moment. The researchers also made some architectural tweaks that help the system balance two distinct learning objectives, which improves performance.

Taken together, these relatively simple improvements boost the accuracy of their approach in video retrieval tasks and in classifying the action in audiovisual scenes. For instance, the new method could automatically and precisely match the sound of a door slamming with the visual of it closing in a video clip.

“We are building AI systems that can process the world like humans do, in terms of having both audio and visual information coming in at once and being able to seamlessly process both modalities. Looking forward, if we can integrate this audio-visual technology into some of the tools we use on a daily basis, like large language models, it could open up a lot of new applications,” says Andrew Rouditchenko, an MIT graduate student and co-author of a paper on this research.

He is joined on the paper by lead author Edson Araujo, a graduate student at Goethe University in Germany; Yuan Gong, a former MIT postdoc; Saurabhchand Bhati, a current MIT postdoc; Samuel Thomas, Brian Kingsbury, and Leonid Karlinsky of IBM Research; Rogerio Feris, principal scientist and manager at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Hilde Kuehne, professor of computer science at Goethe University and an affiliated professor at the MIT-IBM Watson AI Lab. The work will be presented at the Conference on Computer Vision and Pattern Recognition.

Syncing up

This work builds upon a machine-learning method the researchers developed a few years ago, which provided an efficient way to train a multimodal model to simultaneously process audio and visual data without the need for human labels.

The researchers feed this model, called CAV-MAE, unlabeled video clips and it encodes the visual and audio data separately into representations called tokens. Using the natural audio from the recording, the model automatically learns to map corresponding pairs of audio and visual tokens close together within its internal representation space.
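
The alignment idea can be sketched as a symmetric contrastive (InfoNCE-style) loss that pulls matching audio and visual embeddings together in a shared space while pushing mismatched pairs apart. The function name, shapes, and temperature below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def info_nce_loss(audio_emb, visual_emb, temperature=0.07):
    """Symmetric contrastive loss: matching audio/visual rows (same clip)
    are pulled together, mismatched rows pushed apart. Shapes and the
    temperature value here are hypothetical."""
    # L2-normalize so dot products are cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature          # pairwise similarity matrix
    labels = np.arange(len(a))              # true pairs lie on the diagonal

    def xent(lg):
        # numerically stable softmax cross-entropy against the diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        p = np.exp(lg) / np.exp(lg).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average both directions: audio->visual and visual->audio
    return 0.5 * (xent(logits) + xent(logits.T))

# Correctly paired embeddings should score a lower loss than shuffled ones
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
aligned = info_nce_loss(emb, emb)
shuffled = info_nce_loss(emb, emb[::-1])
assert aligned < shuffled
```

Minimizing this loss is what maps corresponding audio and visual tokens close together in the model's internal representation space.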

They found that using two learning objectives balances the model’s learning process, which enables CAV-MAE to understand the corresponding audio and visual data while improving its ability to recover video clips that match user queries.

But CAV-MAE treats audio and visual samples as one unit, so a 10-second video clip and the sound of a door slamming are mapped together, even if that audio event happens in just one second of the video.

In their improved model, called CAV-MAE Sync, the researchers split the audio into smaller windows before the model computes its representations of the data, so it generates separate representations that correspond to each smaller window of audio.

During training, the model learns to associate one video frame with the audio that occurs during just that frame.

“By doing that, the model learns a finer-grained correspondence, which helps with performance later when we aggregate this information,” Araujo says.
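
A minimal sketch of that windowing step, assuming evenly sampled frames and a hypothetical helper name (the actual window sizes in CAV-MAE Sync may differ):

```python
import numpy as np

def split_audio_into_frame_windows(audio, num_frames):
    """Split a clip-level waveform into one window per sampled video frame,
    so each frame can be paired with only the audio from its own moment.
    Even splitting and truncation of any remainder are simplifying
    assumptions for illustration."""
    window_len = len(audio) // num_frames
    return [audio[i * window_len:(i + 1) * window_len]
            for i in range(num_frames)]

# A 10-second clip at 16 kHz paired with 10 sampled video frames:
sr = 16_000
clip = np.zeros(10 * sr)
windows = split_audio_into_frame_windows(clip, num_frames=10)
assert len(windows) == 10 and all(len(w) == sr for w in windows)
```

Under this scheme, a door slam in second 7 contributes only to window 7, so it is associated with the frame showing the door closing rather than with the entire clip.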

They also incorporated architectural improvements that help the model balance its two learning objectives.

Adding “wiggle room”

The model incorporates a contrastive objective, in which it learns to associate similar audio and visual data, and a reconstruction objective, which aims to recover specific audio and visual data based on user queries.

In CAV-MAE Sync, the researchers introduced two new types of data representations, or tokens, to improve the model’s learning ability.

They include dedicated “global tokens” that help with the contrastive learning objective and dedicated “register tokens” that help the model focus on important details for the reconstruction objective.

“Essentially, we add a bit more wiggle room to the model so it can perform each of these two tasks, contrastive and reconstructive, a bit more independently. That benefitted overall performance,” Araujo adds.
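
A rough sketch of how such auxiliary tokens might be prepended to a patch-token sequence, with each objective reading its own slice of the output. The token counts, dimensions, and initialization here are assumptions for illustration, not the paper's values:

```python
import numpy as np

def add_auxiliary_tokens(patch_tokens, num_global=1, num_register=4, rng=None):
    """Prepend 'global' and 'register' tokens (names from the article) to a
    patch-token sequence. Global tokens feed the contrastive objective;
    register tokens give the reconstruction objective extra capacity, so
    the two tasks interfere less. Counts/init are hypothetical."""
    if rng is None:
        rng = np.random.default_rng(0)
    dim = patch_tokens.shape[1]
    global_toks = rng.normal(size=(num_global, dim))      # stand-ins for
    register_toks = rng.normal(size=(num_register, dim))  # learned params
    sequence = np.concatenate([global_toks, register_toks, patch_tokens])
    # after the encoder runs, each objective reads a different slice:
    contrastive_slice = slice(0, num_global)
    reconstruction_slice = slice(num_global, num_global + num_register)
    return sequence, contrastive_slice, reconstruction_slice

patches = np.zeros((10, 16))                # 10 patch tokens, 16-dim each
seq, c_sl, r_sl = add_auxiliary_tokens(patches)
assert seq.shape == (15, 16)
assert seq[c_sl].shape == (1, 16) and seq[r_sl].shape == (4, 16)
```

Giving each objective its own dedicated tokens is what Araujo describes as "wiggle room": the contrastive and reconstructive tasks no longer compete for the same representations.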

While the researchers had some intuition these enhancements would improve the performance of CAV-MAE Sync, it took a careful combination of strategies to shift the model in the direction they wanted it to go.

“Because we have multiple modalities, we need a good model for both modalities by themselves, but we also need to get them to fuse together and collaborate,” Rouditchenko says.

In the end, their enhancements improved the model’s ability to retrieve videos based on an audio query and predict the class of an audio-visual scene, like a dog barking or an instrument playing.
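
Audio-based video retrieval then reduces to nearest-neighbor search in the shared embedding space; the ranking step below is a generic illustration, not the paper's evaluation pipeline:

```python
import numpy as np

def retrieve_by_audio(audio_query, video_embs, top_k=3):
    """Rank video embeddings by cosine similarity to an audio query
    embedding and return the indices of the top_k matches. Assumes both
    modalities already live in the model's shared space."""
    q = audio_query / np.linalg.norm(audio_query)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    return np.argsort(-(v @ q))[:top_k]

# If video 2's embedding points the same way as the query, it ranks first
rng = np.random.default_rng(1)
videos = rng.normal(size=(5, 8))
query = videos[2] * 3.0          # same direction, different magnitude
ranking = retrieve_by_audio(query, videos)
assert ranking[0] == 2
```

Because the similarity is cosine-based, only the direction of the embedding matters, which is why the scaled query still retrieves the right clip.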

Its results were more accurate than their prior work, and it also performed better than more complex, state-of-the-art methods that require larger amounts of training data.

“Sometimes, very simple ideas or little patterns you see in the data have big value when applied on top of a model you are working on,” Araujo says.

In the future, the researchers want to incorporate new models that generate better data representations into CAV-MAE Sync, which could improve performance. They also want to enable their system to handle text data, which would be an important step toward generating an audiovisual large language model.

This work is funded, in part, by the German Federal Ministry of Education and Research and the MIT-IBM Watson AI Lab.
