Abstract

Pansharpening aims to fuse a multispectral (MS) image with an associated panchromatic (PAN) image, producing a composite image with the spectral resolution of the former and the spatial resolution of the latter. Traditional pansharpening methods can be cast in a unified detail-injection framework, which views the injected MS details as band-wise injection gains applied to the PAN details. In this paper, we design a new detail injection-based convolutional neural network (DiCNN) framework for pansharpening in which the MS details are formulated directly in an end-to-end manner: the first detail injection-based CNN (DiCNN1) mines the MS details from both the PAN image and the MS image, whereas the second one (DiCNN2) uses only the PAN image. The main advantage of the proposed DiCNNs is that they offer an explicit physical interpretation and converge quickly while achieving high pansharpening quality. Furthermore, the effectiveness of the proposed approaches is also analyzed from a theoretical point of view. Our methods are evaluated on real MS image datasets and achieve excellent performance compared with other state-of-the-art methods.
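To make the detail-injection idea concrete, the sketch below shows one way such a network can be organized: the CNN predicts only the MS details from the concatenated upsampled MS and PAN inputs, and these details are added to the upsampled MS image through a skip connection. This is a minimal illustrative sketch, not the exact architecture reported in the paper; the class name, layer count, feature width, and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn


class DiCNN1Sketch(nn.Module):
    """Illustrative detail injection-based CNN (DiCNN1-style).

    Predicts the MS details from the upsampled MS and PAN images and
    injects them into the upsampled MS image via an additive skip
    connection. Layer count and widths are assumptions for illustration.
    """

    def __init__(self, ms_bands: int = 4, features: int = 64):
        super().__init__()
        self.detail_branch = nn.Sequential(
            nn.Conv2d(ms_bands + 1, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Final layer maps back to the MS band count: the learned details.
            nn.Conv2d(features, ms_bands, kernel_size=3, padding=1),
        )

    def forward(self, ms_up: torch.Tensor, pan: torch.Tensor) -> torch.Tensor:
        # ms_up: upsampled MS image, shape (N, ms_bands, H, W)
        # pan:   PAN image,          shape (N, 1, H, W)
        details = self.detail_branch(torch.cat([ms_up, pan], dim=1))
        # Detail injection: fused image = upsampled MS + learned MS details.
        return ms_up + details


if __name__ == "__main__":
    model = DiCNN1Sketch(ms_bands=4)
    ms_up = torch.randn(1, 4, 128, 128)  # toy upsampled MS patch
    pan = torch.randn(1, 1, 128, 128)    # toy PAN patch at the same resolution
    print(model(ms_up, pan).shape)       # torch.Size([1, 4, 128, 128])
```

A DiCNN2-style variant would differ mainly in its input: the detail branch would take only the PAN image (a single channel), while the additive skip connection from the upsampled MS image would remain the same.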
