MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI
Computer Vision and Pattern Recognition (2024)
Abstract
We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning. MMMU includes 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering. These questions span 30 subjects and 183 subfields, comprising 30 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures. Unlike existing benchmarks, MMMU focuses on advanced perception and reasoning with domain-specific knowledge, challenging models to perform tasks akin to those faced by experts. The evaluation of 14 open-source LMMs as well as the proprietary GPT-4V(ision) and Gemini highlights the substantial challenges posed by MMMU. Even the advanced GPT-4V and Gemini Ultra only achieve accuracies of 56% and 59% respectively, indicating significant room for improvement. We believe MMMU will stimulate the community to build next-generation multimodal foundation models towards expert artificial general intelligence.
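The abstract describes MMMU as a pool of subject-organized multimodal multiple-choice questions on which models are scored by accuracy. The sketch below illustrates what such an evaluation loop might look like; it assumes, beyond what this page states, that the benchmark is published on the Hugging Face Hub as "MMMU/MMMU" with per-subject configurations and fields such as "question", "options", "answer", and numbered image columns, and `my_model_predict` is a hypothetical stand-in for the model under test.

```python
# Minimal sketch of a per-subject MMMU-style evaluation loop.
# Assumptions (not from this page): dataset hosted as "MMMU/MMMU" with one
# configuration per subject; each example has "question", "options", "answer",
# and image_* fields. my_model_predict is a hypothetical model wrapper.

from datasets import load_dataset


def my_model_predict(question: str, options, images) -> str:
    """Hypothetical model call: return the predicted option letter, e.g. 'A'."""
    raise NotImplementedError


def evaluate_subject(subject: str = "Art", split: str = "validation") -> float:
    ds = load_dataset("MMMU/MMMU", subject, split=split)
    correct = 0
    for example in ds:
        # Collect whatever image fields are present for this question.
        images = [example[k] for k in example
                  if k.startswith("image") and example[k] is not None]
        pred = my_model_predict(example["question"], example["options"], images)
        correct += int(pred == example["answer"])
    return correct / len(ds)


if __name__ == "__main__":
    acc = evaluate_subject("Art")
    print(f"Accuracy on MMMU Art ({'validation'}): {acc:.3f}")
```

An overall score would then be obtained by repeating this loop over all 30 subjects and averaging (or micro-averaging over questions), which is how single headline accuracies such as the 56% and 59% figures above are typically reported.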
Keywords
Large Multimodal Models, Evaluation, Multimodal Large Language Models, LMMs, Large Language Models, LLMs