Clouds are an important factor in predicting future weather changes, and cloud image classification is one of the fundamental problems in ground-based cloud meteorological observation. Deep convolutional neural networks (CNNs) focus mainly on local receptive fields, so their handling of global information is relatively weak. For ground-based cloud images with complex backgrounds, globally capturing the relationships between different locations in the image helps to better model its long-range dependencies. A ground-based cloud image classification method based on the fusion of local and global features (LG_CloudNet) is proposed. The method integrates a global feature extraction module (GF_M) and a local feature extraction module (LF_M) and uses an attention mechanism to weight and merge their features, enabling a richer and more comprehensive feature representation at low computational complexity. To ensure the model's learning and generalization abilities during training, AdamW (Adam with weight decay) is combined with learning rate warm-up and stochastic gradient descent with warm restarts to adjust the learning rate. Experimental results demonstrate that the proposed method achieves favorable and robust ground-based cloud image classification performance, with accuracies of 94.94%, 95.77%, and 98.87% on the GCD, CCSN, and ZNCL datasets, respectively.
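The abstract describes weighting and merging the local and global branches with an attention mechanism. The following is a minimal sketch of that fusion idea, not the authors' implementation: a channel-attention gate scores the two branches and produces a weighted sum of their feature maps. The module name `AttentionFusion`, the squeeze-and-excitation style gate, and all hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Weights and merges local and global feature maps with a learned attention gate."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Pool the concatenated branches, then predict one weight per branch.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2, kernel_size=1),
            nn.Softmax(dim=1),  # branch weights sum to 1
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        weights = self.gate(torch.cat([local_feat, global_feat], dim=1))  # (B, 2, 1, 1)
        w_local, w_global = weights[:, 0:1], weights[:, 1:2]
        return w_local * local_feat + w_global * global_feat


# Example: fuse two 64-channel feature maps from the local and global branches.
local_feat = torch.randn(4, 64, 32, 32)
global_feat = torch.randn(4, 64, 32, 32)
fused = AttentionFusion(channels=64)(local_feat, global_feat)
print(fused.shape)  # torch.Size([4, 64, 32, 32])
```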
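The training recipe named in the abstract combines AdamW with a learning rate warm-up followed by warm restarts. A sketch of one way to assemble that schedule in PyTorch is shown below; the model, epoch counts, and hyperparameter values are placeholders rather than the paper's settings.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingWarmRestarts, SequentialLR

model = nn.Linear(128, 7)  # placeholder for the LG_CloudNet classifier head
optimizer = optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

warmup_epochs = 5
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        LinearLR(optimizer, start_factor=0.1, total_iters=warmup_epochs),  # linear warm-up
        CosineAnnealingWarmRestarts(optimizer, T_0=10, T_mult=2),          # SGDR-style restarts
    ],
    milestones=[warmup_epochs],
)

for epoch in range(50):
    # ... forward/backward passes over the training set would go here ...
    optimizer.step()
    scheduler.step()
```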
Keywords: Clouds, Image classification, Feature extraction, Transformers, Rain, Education and training, Convolution