Language Repository for Long Video Understanding
CoRR (2024)
Abstract
Language has become a prominent modality in computer vision with the rise of
multi-modal LLMs. Despite supporting long context-lengths, their effectiveness
in handling long-term information gradually declines with input length. This
becomes critical, especially in applications such as long-form video
understanding. In this paper, we introduce a Language Repository (LangRepo) for
LLMs that maintains concise and structured information as an interpretable
(i.e., all-textual) representation. Our repository is updated iteratively based
on multi-scale video chunks. We introduce write and read operations that focus
on pruning redundancies in text, and extracting information at various temporal
scales. The proposed framework is evaluated on zero-shot visual
question-answering benchmarks including EgoSchema, NExT-QA, IntentQA and
NExT-GQA, showing state-of-the-art performance at its scale. Our code is
available at https://github.com/kkahatapitiya/LangRepo.
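The abstract describes a repository with write operations that prune textual redundancies and read operations that extract information over temporal windows. The following is a rough illustrative sketch of that idea, not the paper's implementation: the class name, the similarity threshold, and the use of `difflib.SequenceMatcher` as a redundancy measure are all assumptions.

```python
# Illustrative sketch (assumed design, not the paper's method): a minimal
# text repository in the spirit of LangRepo, with a write op that prunes
# near-duplicate sentences and a read op over a window of video chunks.
from difflib import SequenceMatcher


class LanguageRepository:
    """Stores textual entries per video chunk.

    write() keeps only sentences that are not near-duplicates of stored
    text; read() returns entries for a temporal window of chunks (a crude
    stand-in for the paper's multi-scale read operation).
    """

    def __init__(self, similarity_threshold=0.8):
        # Threshold is an arbitrary choice for this sketch.
        self.threshold = similarity_threshold
        self.entries = []  # list of (chunk_id, sentence)

    def _redundant(self, sentence):
        # Treat a sentence as redundant if it closely matches any stored one.
        return any(
            SequenceMatcher(None, sentence, s).ratio() >= self.threshold
            for _, s in self.entries
        )

    def write(self, chunk_id, sentences):
        # Prune redundancies: skip sentences already covered by the repo.
        for s in sentences:
            if not self._redundant(s):
                self.entries.append((chunk_id, s))

    def read(self, start_chunk, end_chunk):
        # Extract stored text for the given temporal window of chunks.
        return [s for c, s in self.entries if start_chunk <= c <= end_chunk]


repo = LanguageRepository()
repo.write(0, ["A person opens the fridge.", "A person opens the fridge door."])
repo.write(1, ["They pour milk into a glass."])
print(repo.read(0, 1))
```

In this toy run, the second sentence of chunk 0 is dropped as a near-duplicate, so the read over chunks 0–1 returns only the two distinct sentences.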