Pansharpening fuses a high-spatial-resolution panchromatic image with a simultaneously acquired, lower-spatial-resolution multispectral image. In this paper, we propose a Laplacian pyramid pansharpening network (LPPN) architecture that accurately fuses these two inputs to produce a multispectral image at the higher (panchromatic) spatial resolution. The proposed architecture considers three aspects. First, we use a Laplacian pyramid, whose blur kernels are designed according to the sensors' modulation transfer functions, to separate the images into multiple scales, fully exploiting the crucial spatial information at each spatial scale. Second, we develop a fusion convolutional neural network (FCNN) for each scale and combine these sub-networks to form the final multi-scale architecture; recursive layers let the FCNN share parameters both across and within pyramid levels, significantly reducing the number of network parameters. Third, a total loss consisting of multiple across-scale loss terms is employed for training, yielding higher accuracy. Extensive quantitative and qualitative experiments on benchmark datasets demonstrate that the proposed architecture outperforms state-of-the-art pansharpening methods. Code is available at https://github.com/ChengJin-git/LPPN.
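The Laplacian pyramid decomposition behind the first point can be sketched as follows. This is a minimal numpy illustration with a generic Gaussian blur kernel standing in for the MTF-matched kernels the paper actually designs; the `gaussian_kernel` size and sigma are arbitrary assumptions for demonstration.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Separable Gaussian blur kernel. The paper instead derives the
    kernel from each sensor's modulation transfer function (MTF)."""
    ax = np.arange(size) - size // 2
    g = np.exp(-0.5 * (ax / sigma) ** 2)
    g /= g.sum()
    return np.outer(g, g)

def blur(img, kernel):
    """Plain 2-D convolution with reflect padding."""
    k = kernel.shape[0] // 2
    padded = np.pad(img, k, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def laplacian_pyramid(img, levels=3):
    """Each level stores the detail removed by blur + 2x downsampling;
    the last level is the coarse residual."""
    kernel = gaussian_kernel()
    pyramid, current = [], img.astype(float)
    for _ in range(levels - 1):
        low = blur(current, kernel)[::2, ::2]                  # blur, then downsample
        up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)   # nearest-neighbor upsample
        pyramid.append(current - up[:current.shape[0], :current.shape[1]])
        current = low
    pyramid.append(current)
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition: upsample and add the details back."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        up = np.repeat(np.repeat(current, 2, axis=0), 2, axis=1)
        current = up[:detail.shape[0], :detail.shape[1]] + detail
    return current
```

Because each detail layer is defined as the exact difference against the upsampled coarse level, the reconstruction is lossless regardless of which blur kernel is used.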

Schematic Diagram of the Proposed Method

The architecture of our model is presented below:

Sub-Network (FCNN, fusion convolutional neural network) design is presented below:
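The two remaining ideas from the abstract — recursive parameter sharing within the FCNN and the across-scale total loss — can be sketched schematically. This is not the paper's network; it is a toy single-channel numpy stand-in (hypothetical names `shared_conv_step`, `fcnn`, `multiscale_loss`) showing how one set of 3x3 weights is reused at every recursion and every pyramid level, and how per-level losses are summed.

```python
import numpy as np

def shared_conv_step(x, weights):
    """One recursive step: the SAME 3x3 weights are applied at every
    recursion (and could be reused across pyramid levels), so depth
    grows without growing the parameter count. Residual + ReLU."""
    padded = np.pad(x, 1, mode="reflect")
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += weights[i, j] * padded[i:i + x.shape[0], j:j + x.shape[1]]
    return np.maximum(out + x, 0.0)

def fcnn(x, weights, recursions=3):
    """Unroll the shared step `recursions` times with one weight tensor."""
    for _ in range(recursions):
        x = shared_conv_step(x, weights)
    return x

def multiscale_loss(predictions, targets):
    """Total training loss: a sum of per-scale MSE terms, one per
    pyramid level, mirroring the across-scale loss in the abstract."""
    return sum(np.mean((p - t) ** 2) for p, t in zip(predictions, targets))
```

In the actual architecture the shared weights are learned end-to-end and the FCNN operates on stacked panchromatic/multispectral features; the sketch only illustrates the sharing and loss-summation mechanics.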


Full paper: click here

TensorFlow code: click here


@article{JIN2022158,
  author  = {Cheng Jin and Liang-Jian Deng and Ting-Zhu Huang and Gemine Vivone},
  title   = {Laplacian pyramid networks: A new approach for multispectral pansharpening},
  journal = {Information Fusion},
  volume  = {78},
  pages   = {158-170},
  year    = {2022},
  issn    = {1566-2535},
  doi     = {10.1016/j.inffus.2021.09.002}
}