Towards Deep Reference Frame in Versatile Video Coding NNVC
2023 IEEE International Conference on Visual Communications and Image Processing (VCIP)(2023)
Abstract
In this paper, we propose a deep reference frame generation method that aims to enhance bi-directional inter prediction under the random access configuration in the latest video coding standard, Versatile Video Coding. Specifically, a pair of neighboring reconstructed frames is selected from the decoded picture buffer and fed into an optical-flow-based interpolation network to synthesize a new frame that approximates the current to-be-coded frame. This synthesized frame is then inserted into both reference picture lists as an additional reference frame. The proposed method is applied in both the encoding and decoding processes, eliminating the need to signal supplementary information in the bitstream. The Small Ad-hoc Deep-Learning Library is used to implement the proposed method. Experimental results demonstrate 3.67%/7.34%/6.51% coding efficiency improvements for the Y/U/V components under the random access configuration compared with the Versatile Video Coding NNVC reference software VTM-11_NNVC-5.0.
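The pipeline described above (pick two neighboring reconstructions from the decoded picture buffer, interpolate a midpoint frame, append it to both reference lists) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the learned optical-flow network is replaced by a placeholder zero-motion estimator so the sketch stays self-contained, and all function and variable names are hypothetical.

```python
import numpy as np

def estimate_flow(frame_a, frame_b):
    # Placeholder for the paper's learned optical-flow network;
    # returns zero motion so the sketch remains runnable.
    h, w = frame_a.shape[:2]
    return np.zeros((h, w, 2), dtype=np.float32)

def warp(frame, flow):
    # Backward-warp `frame` by `flow` using nearest-neighbor sampling.
    h, w = frame.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def synthesize_reference(frame_prev, frame_next):
    # Interpolate the temporal midpoint of two neighboring reconstructed
    # frames, mirroring the bi-directional random access setting.
    flow = estimate_flow(frame_prev, frame_next)
    fwd = warp(frame_prev, 0.5 * flow)
    bwd = warp(frame_next, -0.5 * flow)
    return 0.5 * fwd + 0.5 * bwd

# Hypothetical decoded picture buffer keyed by picture order count (POC):
dpb = {0: np.full((4, 4), 10.0), 8: np.full((4, 4), 30.0)}
synthesized = synthesize_reference(dpb[0], dpb[8])

# The synthesized frame joins BOTH reference picture lists, so no extra
# side information needs to be signaled in the bitstream.
ref_list0 = [dpb[0], synthesized]
ref_list1 = [dpb[8], synthesized]
```

Because the same deterministic synthesis runs at the encoder and the decoder, both sides build identical reference lists without any new syntax elements.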
Keywords
inter prediction, neural network video coding (NNVC), Versatile Video Coding (VVC)