The increase in the spatial resolution of remote-sensing sensors helps to capture abundant details related to the semantics of surface objects. However, it is difficult for the popular object-oriented classification approaches to acquire higher-level semantics from high spatial resolution remote-sensing (HSR-RS) images, a problem often referred to as the “semantic gap.” Instead of relying on sophisticated hand-designed operators, convolutional neural networks (CNNs), a typical deep learning method, can automatically discover intrinsic feature descriptors from a large number of input images to bridge the semantic gap. Because the available HSR-RS scene datasets are far smaller than natural scene datasets, there have been few reports of CNN approaches for HSR-RS image scene classification. We propose a practical CNN architecture for HSR-RS scene classification, named the large patch convolutional neural network (LPCNN). Large patch sampling is used to generate hundreds of possible scene patches for feature learning, and a global average pooling layer replaces the fully connected network as the classifier, which greatly reduces the total number of parameters. The experiments confirm that the proposed LPCNN can learn effective local features that form a discriminative representation of different land-use scenes, and can achieve performance comparable to the state of the art on public HSR-RS scene datasets.
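The parameter-saving idea in the abstract above, replacing the fully connected classifier with global average pooling, can be sketched as follows. This is a minimal illustration, not the authors' LPCNN; the class count and feature-map sizes are hypothetical, chosen only to show the mechanism and the parameter comparison.

```python
import numpy as np

def global_average_pool(feature_maps):
    """Average each class-specific feature map over its spatial dimensions.

    feature_maps: array of shape (num_classes, H, W), one map per scene
    class, as when a GAP layer replaces the fully connected classifier.
    """
    return feature_maps.mean(axis=(1, 2))

# Hypothetical sizes for illustration only (not taken from the paper).
num_classes, h, w = 21, 8, 8
maps = np.random.rand(num_classes, h, w)
scores = global_average_pool(maps)   # shape (num_classes,)
pred = int(np.argmax(scores))        # predicted scene class

# A fully connected classifier over the flattened maps would need
# num_classes * (num_classes * h * w) trainable weights;
# global average pooling adds no trainable parameters at all.
fc_params = num_classes * (num_classes * h * w)
```

Besides shrinking the model, pooling each map to a single score ties every feature map directly to one scene category, which is the structural property the LPCNN classifier relies on.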
Scene classification mines high-level semantic scene categories from the low-level visual features of high spatial resolution remote sensing images (HSRIs). A multifeature probabilistic latent semantic analysis (MPLSA) algorithm is proposed to perform scene classification for HSRIs. Distinct from traditional probabilistic latent semantic analysis (PLSA) with a single feature, MPLSA combines multiple features with PLSA, including spectral and texture features and the scale-invariant feature transform (SIFT) feature, to exploit the spatial information of the HSRIs. The visual words are characterized by the multifeature descriptor, and an image set is represented by a discriminative word-image matrix. During the training phase, the MPLSA model mines the visual words’ latent semantics. For unknown images, the MPLSA model analyzes their latent semantic distributions by combining the words’ latent semantics obtained in the training step. A spectral angle mapper classifier then labels the scene class based on the image’s latent semantic distribution. The experimental results demonstrate that the proposed MPLSA method achieves better scene classification accuracy than the traditional single-feature PLSA method.
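The final labeling step described above, comparing an image's latent semantic distribution against class prototypes with a spectral angle mapper, can be sketched as follows. This is an illustrative sketch only: the class names, three-topic distributions, and prototype dictionary are hypothetical, and a real MPLSA pipeline would first derive these latent distributions from the word-image matrix via PLSA.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle mapper distance: the angle between two vectors.

    A smaller angle means the two distributions are more similar.
    """
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(latent_dist, class_prototypes):
    """Assign the class whose prototype makes the smallest angle
    with the image's latent semantic distribution."""
    angles = {c: spectral_angle(latent_dist, p)
              for c, p in class_prototypes.items()}
    return min(angles, key=angles.get)

# Toy latent semantic distributions (hypothetical, illustration only).
protos = {"forest": np.array([0.8, 0.1, 0.1]),
          "urban":  np.array([0.1, 0.8, 0.1])}
label = sam_classify(np.array([0.7, 0.2, 0.1]), protos)  # -> "forest"
```

The angle-based comparison is insensitive to the overall magnitude of the vectors, which is why SAM is a natural fit for comparing distributions rather than raw feature values.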
The radial basis function (RBF) neural network is a powerful method for remote sensing image classification. It has a
simple architecture and the learning algorithm corresponds to the solution of a linear regression problem, resulting in a
fast training process. The main drawback of this strategy is the need for an efficient algorithm to determine the
number, position, and dispersion of the RBFs. Traditional methods determine the centers either by randomly choosing
input vectors from the training data set or by using vectors obtained from unsupervised clustering algorithms, such as
k-means, applied to the input data. As a result, the traditional RBF neural network is sensitive to the center initialization. In this paper,
the artificial immune network (aiNet) model, a new computational intelligence based on artificial immune networks
(AIN), is applied to obtain appropriate centers for remote sensing image classification. In the aiNet-RBF algorithm, each
input pattern corresponds to an antigenic stimulus, while each RBF candidate center is considered to be an element, or cell,
of the immune network model. The steps are as follows: A set of candidate centers is initialized at random, where the
initial number of candidates and their positions are not crucial to the performance. Then, the clonal selection principle
controls which candidates will be selected and how they will be updated. Note that the clonal selection principle is
responsible for how the centers will represent the training data set. Finally, the immune network will identify and
eliminate or suppress self-recognizing individuals to control the number of candidate centers. After the above learning
phase, the aiNet network centers represent internal images of the input patterns presented to it. The algorithm output is
taken to be the matrix of memory cells' coordinates that represent the final centers to be adopted by the RBF network.
The stopping criterion of the proposed algorithm is given by a pre-defined number of iterations. The classification results
are evaluated by comparison with those of the k-means center selection procedure and other results from the literature
using remote sensing imagery. It is shown that the aiNet-RBF NN algorithm outperforms the other algorithms and provides an
effective option for remote sensing image classification.
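The center-learning steps listed above (random initialization, clonal selection with cloning and mutation, and network suppression of self-recognizing cells, stopped after a fixed number of iterations) can be sketched as follows. This is a deliberately simplified sketch under assumed hyperparameters, not the authors' aiNet implementation; `n_init`, `n_clones`, `sigma`, and `suppress` are illustrative values, and the two-blob data set is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def ainet_centers(data, n_init=10, n_iter=20, n_clones=3,
                  sigma=0.1, suppress=0.2):
    # Step 1: random initial candidate centers; their number and
    # positions are not crucial to the final result.
    centers = data[rng.choice(len(data), size=n_init, replace=False)]
    # Stopping criterion: a pre-defined number of iterations.
    for _ in range(n_iter):
        # Step 2: clonal selection. Each input pattern (antigen)
        # stimulates its best-matching center, which is cloned and
        # mutated; the best mutant is kept as a new candidate.
        for x in data:
            dists = np.linalg.norm(centers - x, axis=1)
            best = centers[np.argmin(dists)]
            mutants = best + sigma * rng.standard_normal((n_clones, data.shape[1]))
            winner = mutants[np.argmin(np.linalg.norm(mutants - x, axis=1))]
            centers = np.vstack([centers, winner])
        # Step 3: network suppression. Candidates that recognize each
        # other (closer than the suppression threshold) are eliminated,
        # which controls the number of candidate centers.
        kept = []
        for c in centers:
            if all(np.linalg.norm(c - k) > suppress for k in kept):
                kept.append(c)
        centers = np.array(kept)
    return centers

# Usage on two well-separated toy clusters (hypothetical data): the
# surviving memory cells should cover both clusters.
blob_a = rng.normal(0.0, 0.3, (15, 2))
blob_b = rng.normal(5.0, 0.3, (15, 2))
data = np.vstack([blob_a, blob_b])
centers = ainet_centers(data)
```

The returned matrix of memory-cell coordinates would then be adopted as the RBF centers, with the suppression threshold playing the role that the fixed cluster count plays in k-means.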