Cloud cover is a persistent challenge in remote sensing imagery, hindering accurate interpretation and analysis. Existing research focuses on supervised approaches to cloud removal, but paired images of the same area of interest with and without obstructions are very difficult to acquire. We therefore propose an unsupervised approach to thin cloud removal that combines a recent image-to-image translation generative adversarial network (GAN), the UNet vision transformer cycle-consistent GAN (UVCGAN), with variational mode decomposition (VMD). Thin cloud removal from satellite images is framed as an image-to-image translation task. VMD enhances the cloud-covered input image by decomposing it into modes and reconstructing it from only those modes that retain the most image-specific features, identified quantitatively as the modes with the highest entropy, contrast, and energy. The UVCGAN model then takes the enhanced image as input and generates an image free of thin clouds. The proposed methodology is compared against recent methods, and quantitative evaluations indicate superior performance on both full-reference and no-reference metrics, affirming the reliability and robustness of our approach.
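The mode-selection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact scoring formulas for entropy, contrast, and energy are assumptions, and the VMD decomposition itself is represented here by synthetic modes.

```python
import numpy as np

def mode_scores(mode):
    """Score one decomposition mode by entropy, contrast, and energy.
    The criteria follow the abstract; these formulas are illustrative."""
    hist, _ = np.histogram(mode, bins=64, density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))        # Shannon entropy of intensities
    contrast = np.std(mode)                        # spread of intensities
    energy = np.sum(mode.astype(np.float64) ** 2)  # total signal energy
    return entropy, contrast, energy

def reconstruct_from_best_modes(modes):
    """Keep modes whose combined normalized score is at least the mean,
    then sum them to form the enhanced image."""
    scores = np.array([mode_scores(m) for m in modes])
    # Normalize each criterion to [0, 1] so no single one dominates.
    rng = scores.max(axis=0) - scores.min(axis=0)
    rng[rng == 0] = 1.0
    norm = (scores - scores.min(axis=0)) / rng
    combined = norm.sum(axis=1)
    keep = combined >= combined.mean()
    return np.sum(modes[keep], axis=0), keep

# Toy demo: four synthetic "modes" of a 32x32 image (VMD not shown).
gen = np.random.default_rng(0)
modes = np.stack([gen.normal(scale=s, size=(32, 32)) for s in (0.1, 0.5, 1.0, 2.0)])
enhanced, kept = reconstruct_from_best_modes(modes)
print(enhanced.shape, int(kept.sum()))
```

In the proposed pipeline the `enhanced` image, rather than the raw cloud-covered input, would then be fed to the UVCGAN generator.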
Keywords: Clouds, Satellites, Earth observing sensors, Satellite imaging, Modal decomposition, Image enhancement