In recent years, lunar exploration has again become a focus of worldwide attention. High-resolution lunar surface images are of great significance to lunar research and are also crucial to the safe landing of lunar probes. Because orbital altitude and onboard hardware limit the resolution of lunar remote sensing images, super-resolution reconstruction of lunar surface imagery is particularly important. At present, most image super-resolution algorithms assume a single fixed degradation model, such as down-sampling by bicubic interpolation alone, or adding a specified blur or noise. However, real image degradation is extremely complex and difficult to express with a specific formula, so this paper introduces a more complex degradation model for lunar image super-resolution, simulating the complex degradation process found in reality by adding more randomness. In addition, this paper employs a deep learning network that combines a residual CNN with a transformer architecture for image super-resolution reconstruction, where the transformer is used for deep feature extraction. The proposed method is evaluated on 7-meter-resolution Chang'e-2 lunar surface remote sensing images; the experiments verify the effectiveness of the proposed super-resolution algorithm, which outperforms current popular methods in both visual quality and commonly used evaluation metrics. This work aims to improve the clarity of lunar surface images in order to enhance the environment-awareness capability of lunar probes and further improve their autonomy on the lunar surface.
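The randomized degradation idea described above can be illustrated with a minimal sketch: each training pair is produced by blurring, down-sampling, and adding noise, with the blur strength and noise level drawn at random per image. This is only an assumed toy pipeline for illustration (kernel size, sigma range, noise range, and nearest-neighbor decimation are all assumptions, not the paper's exact settings).

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def degrade(img: np.ndarray, scale: int = 2) -> np.ndarray:
    """Randomized degradation: random blur -> down-sample -> random noise.

    `img` is a 2-D float array in [0, 1]; returns the low-resolution image.
    """
    size, pad = 7, 3
    # Blur strength drawn at random for every image (assumed range).
    sigma = rng.uniform(0.2, 3.0)
    k = gaussian_kernel(size, sigma)
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    blurred = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    # Down-sample by simple decimation; a real pipeline would randomly
    # pick among bicubic/bilinear/area resampling.
    lr = blurred[::scale, ::scale]
    # Additive Gaussian noise with a randomly drawn level (assumed range).
    noise_level = rng.uniform(0.0, 10.0) / 255.0
    lr = lr + rng.normal(0.0, noise_level, lr.shape)
    return np.clip(lr, 0.0, 1.0)

hr = rng.random((32, 32))       # stand-in for a high-resolution lunar patch
lr = degrade(hr, scale=2)
print(lr.shape)                 # (16, 16)
```

Because the blur sigma and noise level change on every call, repeatedly degrading the same high-resolution patch yields a diverse set of low-resolution inputs, which is what pushes the network to generalize beyond a single fixed degradation.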