UPanGAN: Unsupervised Pansharpening Based on the Spectral and Spatial Loss Constrained Generative Adversarial Network

Information Fusion (2023)

Cited 13 | Viewed 48
Abstract
It is observed that, in most CNN-based pansharpening methods, the multispectral (MS) images are taken as the ground truth, while the downsampled panchromatic (Pan) and MS images serve as the training data. However, models trained on downsampled images are not well suited to pansharpening MS images at their original spatial resolution, where the spatial and spectral information is richest. To tackle this problem, a novel iterative network based on a spectral and textural loss constrained Generative Adversarial Network (GAN) is proposed for pansharpening. First, instead of directly outputting the fused imagery, the GAN generates the mean difference image; supplying a good initial difference image as input helps the network perform better. Second, a coarse-to-fine fusion framework is designed to generate the fused imagery: two optimized discriminators distinguish the generated images, and multi-level fusion of the Pan and MS images produces the best pansharpened image at full resolution. Finally, well-designed loss functions are embedded in both the generator and the discriminators to accurately preserve the fidelity of the fused imagery. We validated our method on images from the QuickBird, GaoFen-2, and WorldView-2 satellites. The experimental results demonstrate that the proposed method achieves better fusion performance than state-of-the-art methods in both visual comparison and quantitative evaluation.
Keywords
Pansharpening,Image fusion,Convolutional neural network,Generative Adversarial Network
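The abstract's central idea is that the network predicts a difference (detail) image that is added to the upsampled MS image, rather than predicting the fused image directly. The sketch below illustrates that decomposition with a fixed high-pass detail extractor standing in for the trained GAN generator; the function names, the nearest-neighbor upsampling, and the box-filter high-pass are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def upsample_nearest(ms, scale):
    """Upsample an MS image (bands, h, w) by integer scale (nearest neighbor)."""
    return ms.repeat(scale, axis=1).repeat(scale, axis=2)

def box_blur(img, k=3):
    """Mean filter with edge padding; used here to build a crude low-pass."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def pansharpen_sketch(ms, pan, scale=4):
    """Difference-image pansharpening sketch: fused = upsampled MS + detail.

    In UPanGAN the detail (difference) image is produced by the GAN generator;
    here a fixed Pan high-pass stands in for it, purely for illustration.
    """
    ms_up = upsample_nearest(ms.astype(float), scale)
    detail = pan.astype(float) - box_blur(pan)   # initial difference image
    return ms_up + detail[None, :, :]            # inject detail into each band
```

A constant Pan image carries no spatial detail, so the output reduces to the upsampled MS image, which makes the additive decomposition easy to verify.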