In recent years, deep-learning-based hyperspectral image (HSI) processing and analysis have made significant progress. However, high-performing models require sufficient training samples, and scarce labeled samples limit their generalization ability. To address this problem, we adopt a self-supervised learning strategy and self-train a neural network by generating different views of the same sample (positive pairs), so that the network learns classification-relevant representations from unlabeled samples. In addition, to enlarge the spatial receptive field beyond that of conventional convolutions, we use a transformer to capture long-range dependencies for feature enhancement, combining the advantages of both. Experimental results on two publicly available HSI datasets demonstrate that the proposed method extracts robust features through self-training on unlabeled samples and adapts well to HSI classification under small-sample conditions.
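The self-supervised strategy described above, training on two views of the same sample as a positive pair, can be sketched with a SimCLR-style contrastive (NT-Xent) loss. This is an illustrative assumption: the abstract does not state the paper's exact objective or network, so the function name, NumPy implementation, and temperature value below are hypothetical.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over a batch of positive pairs (z1[i], z2[i]),
    where z1 and z2 are embeddings of two views of the same samples.
    Illustrative sketch; not the paper's exact formulation."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2n, d) stacked views
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # The positive partner of index i is i+n (and of i+n is i).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of each row with its positive partner as the target.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Usage: the loss is lower when the two views truly align (positive pairs)
# than when the second view is unrelated, which is what drives the network
# to learn view-invariant, representative features from unlabeled samples.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
loss_aligned = nt_xent_loss(z, z + 0.01 * rng.normal(size=(8, 16)))
loss_random = nt_xent_loss(z, rng.normal(size=(8, 16)))
```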