STDAN: Deformable Attention Network for Space-Time Video Super-Resolution
IEEE Transactions on Neural Networks and Learning Systems (2024)
Abstract
The target of space-time video super-resolution (STVSR) is to increase the spatial-temporal resolution of low-resolution (LR) and low-frame-rate (LFR) videos. Recent approaches based on deep learning have made significant improvements, but most of them only use two adjacent frames, that is, short-term features, to synthesize the missing frame embedding, which cannot fully explore the information flow of consecutive input LR frames. In addition, existing STVSR models hardly exploit the temporal contexts explicitly to assist high-resolution (HR) frame reconstruction. To address these issues, in this article, we propose a deformable attention network called STDAN for STVSR. First, we devise a long short-term feature interpolation (LSTFI) module that is capable of excavating abundant content from more neighboring input frames for the interpolation process through a bidirectional recurrent neural network (RNN) structure. Second, we put forward a spatial-temporal deformable feature aggregation (STDFA) module, in which spatial and temporal contexts in dynamic video frames are adaptively captured and aggregated to enhance SR reconstruction. Experimental results on several datasets demonstrate that our approach outperforms state-of-the-art STVSR methods. The code is available at https://github.com/littlewhitesea/STDAN.
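To make the deformable aggregation idea concrete, below is a minimal PyTorch sketch of aligning a neighboring frame's features to a reference frame with learned sampling offsets and then fusing them. The module name, layer widths, and kernel sizes here are illustrative assumptions, not the paper's STDFA implementation; the authors' actual code is at the GitHub link above.

```python
# Hypothetical sketch of deformable feature aggregation between two frames.
# Not the authors' STDFA module; names and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableAggregation(nn.Module):
    """Aligns a neighboring frame's features to a reference frame using
    learned per-position offsets, then fuses the aligned features."""
    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Predict sampling offsets (2 coordinates per kernel tap) from the
        # concatenated reference/neighbor features.
        self.offset_pred = nn.Conv2d(
            2 * channels, 2 * kernel_size * kernel_size, 3, padding=1)
        # Deformable convolution samples the neighbor at the predicted
        # offsets, adaptively compensating for motion between frames.
        self.deform = DeformConv2d(
            channels, channels, kernel_size, padding=pad)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, ref: torch.Tensor, nbr: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(torch.cat([ref, nbr], dim=1))
        aligned = self.deform(nbr, offsets)
        return self.fuse(torch.cat([ref, aligned], dim=1))

if __name__ == "__main__":
    ref = torch.randn(1, 64, 32, 32)  # reference-frame features
    nbr = torch.randn(1, 64, 32, 32)  # neighboring-frame features
    out = DeformableAggregation()(ref, nbr)
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In the full STDFA module described in the abstract, this kind of offset-driven sampling is applied across both spatial and temporal neighbors, so that contexts from dynamic video frames are adaptively captured and aggregated before SR reconstruction.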
Keywords
Super-Resolution, Stereo Vision, Multi-View Stereo, Scene Reconstruction, Visual Servoing