Single Remote Sensing Image Super-Resolution via a Generative Adversarial Network With Stratified Dense Sampling and Chain Training

Bibliographic Details
Title: Single Remote Sensing Image Super-Resolution via a Generative Adversarial Network With Stratified Dense Sampling and Chain Training
Authors: Meng, Fanen; Wu, Sensen; Li, Yadong; Zhang, Zhe; Feng, Tian; Liu, Renyi; Du, Zhenhong
Source: IEEE Transactions on Geoscience and Remote Sensing; 2024, Vol. 62, Issue 1, pp. 1-22 (22 pages)
Abstract: Super-resolution (SR) methods have significantly improved the spatial resolution of remote sensing (RS) images. The development of deep learning enables novel methods to learn informative feature representations from massive pairs of low-resolution (LR) and high-resolution (HR) images. Conventional RS image SR methods, however, may fail in large-scale ($\times 8$ and $\times 9$) SR tasks; a larger scale factor corresponds to less information in the LR images, which poses a considerable challenge to SR. To address this issue, we propose a novel method for single RS image SR (SRSISR) based on stratified dense sampling to extract image features effectively. Specifically, the proposed SR dense-sampling residual attention network (SRDSRAN) combines dense sampling and residual learning to improve multilevel feature fusion and gradient propagation, and employs local and global attention to learn important features and long-range interdependence in the channel and spatial dimensions. Meanwhile, we devise a discriminator that also uses local and global attention, with a loss function integrating $L_1$ pixel loss, $L_1$ perceptual loss, and relativistic adversarial loss to obtain perceptually realistic images. In addition, we introduce chain training to improve performance and expedite the training process for large-scale SR. Experimental results on the UC Merced dataset and other multispectral data demonstrate that SRDSRAN outperforms current state-of-the-art methods both quantitatively and in visual quality, and achieves higher accuracy in scene classification, indicating its potential for other downstream tasks. The code of SRADSGAN will be available at https://github.com/Meng-333/SRADSGAN.
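The generator objective described in the abstract combines an $L_1$ pixel loss, an $L_1$ perceptual loss, and a relativistic adversarial loss. Below is a minimal NumPy sketch of how such a combined loss could be computed; the weights `w_pix`, `w_per`, and `w_adv`, the function names, and the use of precomputed feature maps are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def l1_loss(a, b):
    # Mean absolute error between two arrays of the same shape.
    return np.mean(np.abs(a - b))

def bce_with_logits(logits, target):
    # Numerically stable binary cross-entropy on raw logits.
    return np.mean(np.maximum(logits, 0) - logits * target
                   + np.log1p(np.exp(-np.abs(logits))))

def relativistic_adv_loss_g(d_real, d_fake):
    # Relativistic average GAN generator loss (Jolicoeur-Martineau, 2018):
    # push fake logits above the average real logits, and vice versa.
    real_rel = d_real - d_fake.mean()
    fake_rel = d_fake - d_real.mean()
    return bce_with_logits(real_rel, 0.0) + bce_with_logits(fake_rel, 1.0)

def generator_loss(sr, hr, feat_sr, feat_hr, d_real, d_fake,
                   w_pix=1.0, w_per=1.0, w_adv=5e-3):
    # Weighted sum of the three terms; the weights here are assumed values.
    pixel = l1_loss(sr, hr)
    percep = l1_loss(feat_sr, feat_hr)  # L1 on feature maps, e.g. from a pretrained network
    adv = relativistic_adv_loss_g(d_real, d_fake)
    return w_pix * pixel + w_per * percep + w_adv * adv
```

In a real training loop, `feat_sr`/`feat_hr` would come from a fixed feature extractor (e.g. a pretrained VGG) and `d_real`/`d_fake` from the attention-equipped discriminator.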
Database: Supplemental Index
More Details
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2023.3344112
Published in: IEEE Transactions on Geoscience and Remote Sensing
Language: English