T-VSL: Text-Guided Visual Sound Source Localization in Mixtures
CVPR 2024 (2024)
Abstract
Visual sound source localization poses a significant challenge in identifying the semantic region of each sounding source within a video. Existing self-supervised and weakly supervised source localization methods struggle to accurately distinguish the semantic regions of each sounding object, particularly in multi-source mixtures. These methods often rely on audio-visual correspondence as guidance, which can lead to substantial performance drops in complex multi-source localization scenarios. The lack of access to individual source sounds in multi-source mixtures during training exacerbates the difficulty of learning effective audio-visual correspondence for localization. To address this limitation, in this paper, we propose incorporating the text modality as an intermediate feature guide using tri-modal joint embedding models (e.g., AudioCLIP) to disentangle the semantic audio-visual source correspondence in multi-source mixtures. Our framework, dubbed T-VSL, begins by predicting the class of sounding entities in mixtures. Subsequently, the textual representation of each sounding source is employed as guidance to disentangle fine-grained audio-visual source correspondence from multi-source mixtures, leveraging the tri-modal AudioCLIP embedding. This approach enables our framework to handle a flexible number of sources and exhibits promising zero-shot transferability to unseen classes during test time. Extensive experiments conducted on the MUSIC, VGGSound, and VGGSound-Instruments datasets demonstrate significant performance improvements over state-of-the-art methods.
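To make the described pipeline concrete, the following is a minimal, illustrative sketch of how a text embedding from a tri-modal model such as AudioCLIP could guide per-source localization: the embedding of a predicted sounding-class name and the mixture audio embedding are compared against per-patch visual embeddings to produce a localization map. This is not the authors' implementation; the function name, the additive fusion of the two similarity maps, and the temperature value are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def text_guided_localization(audio_emb, text_emb, patch_embs, temperature=0.07):
    """Illustrative sketch (not the paper's exact formulation).

    audio_emb:  (D,)      embedding of the mixture audio
    text_emb:   (D,)      embedding of one predicted sounding-class name
    patch_embs: (H, W, D) per-patch visual embeddings of a video frame

    Returns an (H, W) localization map for that single source.
    """
    # L2-normalize so dot products are cosine similarities in the shared space.
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    patch_embs = F.normalize(patch_embs, dim=-1)

    # Text-visual similarity: where does this class appear in the frame?
    tv_map = torch.einsum("d,hwd->hw", text_emb, patch_embs)

    # Audio-visual similarity: which regions match the mixture audio?
    av_map = torch.einsum("d,hwd->hw", audio_emb, patch_embs)

    # Combine the two cues; the text term disentangles the per-source region
    # from the mixture-level audio cue (additive fusion is an assumption here).
    loc_map = torch.softmax((tv_map + av_map).flatten() / temperature, dim=0)
    return loc_map.view(patch_embs.shape[:2])

# Usage with random tensors standing in for AudioCLIP-style encoder outputs.
if __name__ == "__main__":
    D, H, W = 512, 14, 14
    loc = text_guided_localization(torch.randn(D), torch.randn(D), torch.randn(H, W, D))
    print(loc.shape)  # torch.Size([14, 14])
```

Repeating the same computation with the text embedding of each predicted class yields one localization map per sounding source, which is how a text-guided approach can scale to a flexible number of sources in a mixture.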
Keywords
Audio-Visual Learning, Multi-modal Foundation Model, Sound Source Localization, CLIP