DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation

arXiv (Cornell University), 2024

Abstract
World models have demonstrated superiority in autonomous driving, particularly in the generation of multi-view driving videos. However, significant challenges still exist in generating customized driving videos. In this paper, we propose DriveDreamer-2, which builds upon the framework of DriveDreamer and incorporates a Large Language Model (LLM) to generate user-defined driving videos. Specifically, an LLM interface is initially incorporated to convert a user's query into agent trajectories. Subsequently, an HDMap, adhering to traffic regulations, is generated based on the trajectories. Ultimately, we propose the Unified Multi-View Model to enhance temporal and spatial coherence in the generated driving videos. DriveDreamer-2 is the first world model to generate customized driving videos; it can generate uncommon driving videos (e.g., vehicles abruptly cutting in) in a user-friendly manner. Besides, experimental results demonstrate that the generated videos enhance the training of driving perception methods (e.g., 3D detection and tracking). Furthermore, the video generation quality of DriveDreamer-2 surpasses other state-of-the-art methods, showcasing FID and FVD scores of 11.2 and 55.7, representing relative improvements of ~30% and ~50%.
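The abstract describes a three-stage pipeline: an LLM interface maps a user query to agent trajectories, an HDMap is generated from those trajectories, and the Unified Multi-View Model renders the final multi-view video. The sketch below illustrates that data flow only; every function name, data structure, and placeholder body is an assumption for illustration, not the paper's actual API or model code.

```python
# Hypothetical sketch of the DriveDreamer-2 pipeline stages named in the
# abstract. All identifiers and placeholder logic are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Trajectory:
    agent_id: int
    waypoints: list  # [(x, y), ...] in ego-centric coordinates


def query_to_trajectories(user_query: str) -> list:
    """Stage 1 (stand-in for the LLM interface): convert a free-form user
    query into agent trajectories. Placeholder: a fixed cut-in maneuver."""
    return [Trajectory(agent_id=0,
                       waypoints=[(0.0, 3.5), (10.0, 1.0), (20.0, 0.0)])]


def trajectories_to_hdmap(trajectories: list) -> dict:
    """Stage 2: generate a traffic-rule-compliant HDMap conditioned on the
    trajectories. Placeholder: lane boundaries spanning the traversed region."""
    xs = [x for t in trajectories for (x, _) in t.waypoints]
    return {"lanes": [{"left": -1.75, "right": 1.75,
                       "x_range": (min(xs), max(xs))}]}


def unified_multiview_model(hdmap: dict, trajectories: list,
                            n_views: int = 6) -> list:
    """Stage 3: render temporally and spatially coherent multi-view video.
    Placeholder: one frame identifier per waypoint per camera view."""
    n_frames = max(len(t.waypoints) for t in trajectories)
    return [[f"view{v}_frame{f}" for f in range(n_frames)]
            for v in range(n_views)]


# End-to-end flow: user query -> trajectories -> HDMap -> multi-view video.
trajs = query_to_trajectories("a vehicle abruptly cuts in")
hdmap = trajectories_to_hdmap(trajs)
video = unified_multiview_model(hdmap, trajs)
```

The staged decomposition mirrors the abstract's claim that customization enters only at the query-to-trajectory step, so uncommon scenarios (e.g., a cut-in) are expressed as trajectories before any video synthesis occurs.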
Keywords
Volume Rendering, Rendering, Visualization
