Natural ecosystems are dynamic and exhibit highly nonlinear behavior. We attempt to detect and analyze changes in a coastal mangrove ecosystem in order to predict the dynamics of the system and its biodiversity. Multitemporal hyperspectral data have been used to analyze the competition between mangrove and saline blank endmembers and their dominance over time. We aim to predict the ecodynamics of the area through subpixel analysis of multitemporal hyperspectral imagery. The biodiverse coastal zone of the Sunderban Biosphere Reserve, West Bengal (a World Heritage Site), is considered as a case study for predicting the mangrove ecodynamics of the area through Markov chain analysis. The mangrove species of the Sunderban vary in abundance over time due to dynamic weather conditions. The model is applied to hyperspectral data from 2011 and 2014, collected over Henry Island in the Sunderban, to predict species dynamics in 2017 and 2020. An endmember transition matrix is framed to determine the endmember dynamics in the area in terms of degradation and regeneration of mangrove and saline blank cover. Based on the transition probability matrix, abundance values have been predicted for 2017 and 2020. The predicted abundance values have been validated against ground truth values collected during field visits in 2017. It is observed that, in certain locations, the increase in saline blanks has led to an overall decrease in the proportion of Phoenix paludosa and Ceriops decandra (salt-intolerant mangrove species) over the years. However, there is an increase in Avicennia marina and Avicennia officinalis, which are salt tolerant and can survive extreme saline conditions.
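The core of such a Markov chain prediction can be sketched as follows. The transition matrix values and 2014 abundances below are purely illustrative assumptions (not taken from the study); the sketch only shows how a per-location abundance vector is projected forward in 3-year steps.

```python
import numpy as np

# Hypothetical endmember transition matrix over one 3-year step.
# States: [mangrove, saline blank]; each row sums to 1.
# Values are illustrative, not estimated from the study's data.
P = np.array([
    [0.85, 0.15],   # mangrove -> mangrove / saline blank
    [0.10, 0.90],   # saline blank -> mangrove / saline blank
])

a_2014 = np.array([0.60, 0.40])                  # assumed subpixel abundances in 2014
a_2017 = a_2014 @ P                              # one Markov step (2014 -> 2017)
a_2020 = a_2014 @ np.linalg.matrix_power(P, 2)   # two steps (2014 -> 2020)

print(a_2017)  # projected abundances for 2017
print(a_2020)  # projected abundances for 2020
```

Because each row of the transition matrix sums to one, the projected abundances stay a valid proportion vector at every step.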
Hyperspectral remote sensing data have been widely used in lithological identification and mapping. Existing studies require sufficient training samples. However, collecting sufficient labeled training samples for lithological classification in remote and inaccessible areas is generally time consuming, expensive, and sometimes infeasible, which leads to the problem of insufficient training samples in lithological classification using hyperspectral data. The semisupervised self-learning (SSL) method provides an alternative way of addressing this ill-posed problem by enlarging the training set with unlabeled samples, using the limited labeled samples as a priori information. This study evaluates and analyzes the SSL method for lithological mapping using an EO-1 Hyperion hyperspectral image over a remote area. The performance of SSL with limited training samples was validated by comparison with multinomial logistic regression (MLR) and random forest (RF) classification using the full training set. The experimental results indicate that the SSL method with limited training samples can produce results comparable with those of the MLR with all the training samples, and better results than those of the RF with all the training samples. Therefore, the SSL method provides a useful way to perform hyperspectral lithological mapping with limited training samples over remote and inaccessible areas.
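A minimal self-learning loop of this kind can be sketched with scikit-learn's `SelfTrainingClassifier` wrapped around a multinomial logistic regression. This is a generic stand-in on synthetic data, not the authors' pipeline; the label-masking rate and confidence threshold are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic stand-in for pixel spectra with class labels.
X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Keep only ~10% of the labels; -1 marks unlabeled samples.
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) > 0.1] = -1

# Self-learning: the base classifier iteratively pseudo-labels
# unlabeled samples it is confident about and retrains on them.
base = LogisticRegression(max_iter=1000)        # multinomial logistic regression
ssl = SelfTrainingClassifier(base, threshold=0.9).fit(X, y_partial)
print(ssl.score(X, y))
```

The `threshold` parameter controls how confident a prediction must be before an unlabeled sample is added to the training set, which is the key knob in this family of methods.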
Endmember extraction is a key step in hyperspectral unmixing. A new framework is proposed for hyperspectral endmember extraction. The proposed approach is based on swarm intelligence (SI) algorithms, where a discretization scheme is adopted because pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that often belong to the same classes. Three endmember extraction methods are proposed, based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.
Hyperspectral remote sensing allows for the detailed analysis of the surface of the Earth by providing high-dimensional images with hundreds of spectral bands. Hyperspectral image classification plays a significant role in hyperspectral image analysis and has been a very active research area in the last few years. In the context of hyperspectral image classification, supervised techniques (which have achieved wide acceptance) must address a difficult task due to the imbalance between the high dimensionality of the data and the limited availability of labeled training samples in real analysis scenarios. While the collection of labeled samples is generally difficult, expensive, and time-consuming, unlabeled samples can be generated much more easily. Semi-supervised learning offers an effective solution that can take advantage of both unlabeled samples and a small amount of labeled samples. Spectral unmixing is another widely used technique in hyperspectral image analysis, developed to retrieve pure spectral components and determine their abundance fractions in mixed pixels. In this work, we propose a method to perform semi-supervised hyperspectral image classification by combining the information retrieved with spectral unmixing and classification. Two kinds of highly mixed samples are automatically selected, aiming at finding the most informative unlabeled samples. One kind is given by the samples minimizing the distance between the first two most probable classes, obtained by calculating the difference between the two highest abundances. The other kind is given by the samples minimizing the distance between the most probable class and the least probable class, obtained by calculating the difference between the highest and lowest abundances. The effectiveness of the proposed method is evaluated using a real hyperspectral data set collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Indian Pines region in Northwestern Indiana. Techniques for efficient implementation of the proposed method on high-performance computing architectures are also discussed.
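The two sample-selection criteria described above can be sketched directly from an abundance matrix. This is an illustrative re-implementation of the selection rules, not the authors' code; `select_informative` and its inputs are hypothetical names.

```python
import numpy as np

def select_informative(abundances, k=10):
    """Pick the most 'mixed' pixels from an (n_pixels, n_endmembers)
    abundance matrix, following the two criteria described above."""
    srt = np.sort(abundances, axis=1)
    top_gap = srt[:, -1] - srt[:, -2]    # highest minus second-highest abundance
    full_gap = srt[:, -1] - srt[:, 0]    # highest minus lowest abundance
    kind1 = np.argsort(top_gap)[:k]      # ambiguous between the top two classes
    kind2 = np.argsort(full_gap)[:k]     # close to uniformly mixed
    return kind1, kind2

# Three example pixels: evenly split, nearly pure, uniformly mixed.
A = np.array([[0.50, 0.50, 0.00],
              [0.90, 0.05, 0.05],
              [0.34, 0.33, 0.33]])
k1, k2 = select_informative(A, k=1)
print(k1, k2)
```

Pixel 0 is selected by the first criterion (its top two abundances tie), while pixel 2 is selected by the second (all its abundances are nearly equal), matching the two kinds of informative samples described in the abstract.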
This paper describes a new web platform for the classification of satellite images, called Hypergim. The current implementation of this platform enables users to classify satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms such as Isodata and K-means. Here, we present an extension of the original platform in which we adapt Hypergim to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of the classes present in the images for use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the random forest classification algorithm. This implementation is a modification of the well-known CURFIL software package. The use of this type of algorithm for image classification is widespread today thanks to its accuracy and ease of training. The actual implementation of random forest was developed using the CUDA platform, which enables us to exploit the potential of several models of NVIDIA graphics processing units (GPUs), using them to execute general-purpose computing tasks such as image classification algorithms. Besides CUDA, we use other parallel libraries, such as Intel Boost, to take advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed on a cluster of commodity GPUs, so that multiple users can use the tool concurrently. The experimental results indicate that the new algorithm substantially outperforms the unsupervised algorithms previously implemented in Hypergim, in both runtime and classification precision.
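The supervised step can be illustrated with a generic random forest classifier. This scikit-learn sketch on synthetic data is only a stand-in for the platform's CUDA-based CURFIL fork; `n_jobs=-1` shows the CPU-side parallelism analogous to the multithreading mentioned above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for user-labeled image samples (features per pixel).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# n_jobs=-1 trains the trees in parallel across all available CPU cores;
# the platform itself offloads this work to GPUs via CUDA instead.
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(Xtr, ytr)
print(rf.score(Xte, yte))
```

Because each tree in the forest is trained independently, the algorithm parallelizes naturally, which is why it maps well onto both multicore CPUs and GPUs.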
Spectral unmixing focuses on the identification of spectrally pure signatures, called endmembers, and their corresponding abundances in each pixel of a hyperspectral image. Although mainly focused on the spectral information contained in hyperspectral images, endmember extraction techniques have recently begun to include spatial information to achieve more accurate results. Several algorithms have been developed for automatic or semi-automatic identification of endmembers using spatial and spectral information, including spectral-spatial endmember extraction (SSEE), in which, within a preprocessing step, both sources of information are extracted from the hyperspectral image and used equally for this purpose. Previous works have implemented the SSEE technique in four main steps: 1) local eigenvector calculation in each sub-region into which the original hyperspectral image is divided; 2) computation of the maximum and minimum projections of all eigenvectors over the entire hyperspectral image in order to obtain a set of candidate pixels; 3) expansion and averaging of the signatures of the candidate set; and 4) ranking based on the spectral angle distance (SAD). The result of this method is a list of candidate signatures from which the endmembers can be extracted using various spectral-based techniques, such as orthogonal subspace projection (OSP), vertex component analysis (VCA), or N-FINDR. Considering the large volume of data and the complexity of the calculations, there is a need for efficient implementations. Latest-generation hardware accelerators, such as commodity graphics processing units (GPUs), offer a good opportunity to improve computational performance in this context. In this paper, we develop two different implementations of the SSEE algorithm using GPUs. Both are based on the eigenvector computation within each sub-region of the first step, one using the singular value decomposition (SVD) and the other using principal component analysis (PCA). Based on our experiments with hyperspectral data sets, both implementations achieve high computational performance.
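The first SSEE step, computed per sub-region, can be sketched in numpy to show the difference between the two eigenvector variants. This is a CPU sketch of the mathematics only (the paper's implementations run on GPUs); function names are illustrative.

```python
import numpy as np

def subregion_eigenvectors_svd(block, n_vec=3):
    """SVD variant of SSEE step 1 for one sub-region: the top
    right-singular vectors of the (pixels x bands) block."""
    _, _, Vt = np.linalg.svd(block, full_matrices=False)
    return Vt[:n_vec]

def subregion_eigenvectors_pca(block, n_vec=3):
    """PCA variant: identical, except the mean spectrum is
    subtracted before the decomposition."""
    centered = block - block.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:n_vec]

# Example sub-region: 50 pixels with 20 bands each.
rng = np.random.default_rng(0)
block = rng.normal(size=(50, 20))
V = subregion_eigenvectors_svd(block)
```

The eigenvectors returned for each sub-region are then projected over the entire image in step 2 to collect candidate pixels.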
KEYWORDS: Image classification, Hyperspectral imaging, Remote sensing, Feature extraction, Imaging systems, Spectroscopy, Surface plasmons, Scene classification, Principal component analysis, Signal to noise ratio
Hyperspectral remote sensing offers a powerful tool in many different application contexts. The imbalance between the high dimensionality of the data and the limited availability of training samples makes dimensionality reduction necessary in practice. Among traditional dimensionality reduction techniques, feature extraction is one of the most widely used approaches due to its flexibility in transforming the original spectral information into a subspace. In turn, band selection is important when the application requires preserving the original spectral information (especially the physically meaningful information) for the interpretation of the hyperspectral scene. In the case of hyperspectral image classification, both techniques need to discard most of the original features/bands in order to perform the classification using a feature set with much lower dimensionality. However, the discriminative information that allows a classifier to provide good performance is usually class-dependent, and the relevant information may reside in weak features/bands that are typically discarded or lost through subspace transformation or band selection. As a result, in practice, it is challenging to use either feature extraction or band selection alone for classification purposes. Relevant lines of attack to address this problem have focused on multiple feature selection, aiming at a suitable fusion of diverse features in order to provide relevant information to the classifier. In this paper, we present a new dimensionality reduction technique, called multiple criteria-based spectral partitioning, which is embedded in an ensemble learning framework to perform advanced hyperspectral image classification.
Driven by multiple band-priority criteria derived from classic band selection techniques, we obtain multiple spectral partitions of the original hyperspectral data, corresponding to several band subgroups with much lower spectral dimensionality than the original band set. An ensemble learning technique is then used to fuse the information from the multiple features, taking advantage of the relevant information provided by each classifier. Our experimental results with two real hyperspectral images, collected by the Reflective Optics System Imaging Spectrometer (ROSIS) over the University of Pavia in Italy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Salinas scene, reveal that the presented method, driven by multiple band-priority criteria, obtains better classification results than classic band selection techniques. This paper also discusses several possibilities for computationally efficient implementation of the proposed technique on various high-performance computing architectures.
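The partition-then-fuse idea can be sketched as follows. The contiguous band split and soft-vote fusion below are simplifying assumptions; the paper derives its partitions from band-priority criteria rather than contiguous splits.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a hyperspectral scene: 300 pixels, 30 "bands".
X, y = make_classification(n_samples=300, n_features=30, n_informative=12,
                           random_state=0)

# Split the band set into 3 low-dimensional subgroups (illustrative split).
partitions = np.array_split(np.arange(X.shape[1]), 3)

# Train one base classifier per band subgroup and fuse by averaging
# the class-probability outputs (a simple soft-vote ensemble).
probas = []
for bands in partitions:
    clf = LogisticRegression(max_iter=1000).fit(X[:, bands], y)
    probas.append(clf.predict_proba(X[:, bands]))
fused = np.mean(probas, axis=0)
pred = fused.argmax(axis=1)
print((pred == y).mean())
```

Each base classifier sees only a low-dimensional band subgroup, which eases the sample/dimensionality imbalance, while the fusion step recovers discriminative information that any single subgroup would miss.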
Center-oriented hyperspectral image clustering methods have been widely applied to hyperspectral remote sensing image processing; however, their drawbacks are obvious, including overly simple computing models and underutilized spatial information. In recent years, some studies have tried to improve this situation. We introduce the artificial bee colony (ABC) and Markov random field (MRF) algorithms to propose an ABC–MRF-cluster model that solves the problems mentioned above. In this model, a typical ABC algorithm framework is adopted, in which cluster centers and the results of an iterated conditional modes algorithm are treated as feasible solutions and objective functions, respectively, and MRF is modified to be capable of dealing with the clustering problem. Finally, four datasets and two indices are used to show that the ABC-cluster and ABC–MRF-cluster methods can achieve better accuracy than conventional methods. Specifically, the ABC-cluster method is superior in spectral discrimination power, whereas the ABC–MRF-cluster method provides better results in terms of the adjusted Rand index. In experiments on simulated images with different signal-to-noise ratios, ABC-cluster and ABC–MRF-cluster showed good stability.
We propose a massively parallel, efficient computation of spectral-spatial classification of hyperspectral images based on commodity graphics processing units (GPUs). The spectral-spatial classification framework is based on the marginal probability distribution, which uses all of the information in the hyperspectral data. In this framework, first, the posterior class probability is modeled with a discriminative random field in which the association potential is linked with a multinomial logistic regression (MLR) classifier and the interaction potential, modeling the spatial information, is linked to a Markov random field multilevel logistic (MLL) prior. Second, the maximizers of the posterior marginals are computed via the loopy belief propagation (LBP) method. In addition, the regressors of the MLR classifier are inferred by the logistic regression via variable splitting and augmented Lagrangian (LORSAL) algorithm. Although the spectral-spatial classification framework exhibits state-of-the-art accuracy with regard to similar approaches, its computational complexity is very high. We take advantage of the massively parallel computing capability of the NVIDIA Tesla C2075 with the compute unified device architecture, including a set of GPU-accelerated linear algebra libraries (CULA), to dramatically improve the computation speed of this hyperspectral image classification framework. The shared memory and asynchronous transfer techniques are also used for further computational optimization. Real hyperspectral data sets collected by the National Aeronautics and Space Administration's Airborne Visible/Infrared Imaging Spectrometer and the Reflective Optics System Imaging Spectrometer are used for effectiveness evaluation. The results show speedups of 92-fold on LORSAL, 69-fold on MLR, 127-fold on MLL, 160-fold on LBP, and 73-fold on the whole spectral-spatial classification framework, compared with the single-core central processing unit counterpart.
Anomaly detection is an important technique for remotely sensed hyperspectral data exploitation. In recent decades, several algorithms have been developed for detecting anomalies in hyperspectral images. The Reed-Xiaoli detector (RXD) is one of the most widely used approaches for this purpose. Since the RXD assumes that the distribution of the background is Gaussian, it generally suffers from a high false alarm rate. In order to address this issue, we introduce an unsupervised probabilistic anomaly detector (PAD) based on estimating the difference between the probabilities of the anomalies and the background. The proposed PAD takes advantage of the results provided by the RXD to estimate statistical information for the targets and the background, respectively, and then uses an automatic strategy to find the most suitable threshold for separating targets from the background. The proposed technique is validated using a synthetic data set and two real hyperspectral data sets with ground-truth information. Our experimental results indicate that the proposed method achieves good detection rates with adequate computational complexity as compared with other widely used anomaly detectors.
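The RXD stage on which the PAD builds is well established and can be sketched in a few lines: each pixel's anomaly score is its Mahalanobis distance from the global background statistics. The synthetic cube below is an illustrative assumption.

```python
import numpy as np

def rx_detector(cube):
    """Global Reed-Xiaoli detector for a (rows, cols, bands) cube:
    Mahalanobis distance of each pixel's spectrum from the background
    mean and covariance (the standard global-RX formulation)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    # Per-pixel quadratic form d_i^T * cov_inv * d_i.
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(h, w)

# Illustrative test scene: Gaussian background with one implanted anomaly.
rng = np.random.default_rng(1)
cube = rng.normal(size=(10, 10, 5))
cube[3, 4, :] += 10.0
scores = rx_detector(cube)
print(np.unravel_index(scores.argmax(), scores.shape))
```

The PAD described above then uses these scores to estimate separate statistics for targets and background and to set the detection threshold automatically.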
This paper proposes an edge-constrained Markov random field (EC-MRF) method for accurate land cover classification over urban areas using hyperspectral imagery and LiDAR data. EC-MRF adopts a probabilistic support vector machine for pixel-wise classification of the hyperspectral and LiDAR data, while the MRF acts as a postprocessing regularizer for spatial smoothness. LiDAR data improve both the pixel-wise classification and the postprocessing results within the EC-MRF procedure. A variable weighting coefficient, constrained by a combined edge map extracted from both the hyperspectral and LiDAR data, is introduced into the MRF regularizer to avoid oversmoothing and to preserve class boundaries. The EC-MRF approach is evaluated using synthetic and real data, and the results indicate that it is more effective than four similar advanced methods for the classification of hyperspectral and LiDAR data.
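The edge-constrained regularization idea can be sketched with a toy iterated-conditional-modes smoother: each pixel's label balances its class probabilities against agreement with its neighbors, and the smoothness weight is reduced on edges. This is an illustrative simplification, not the paper's EC-MRF formulation; the energy form and `beta` value are assumptions.

```python
import numpy as np

def icm_smooth(prob, edges, beta=2.0, n_iter=5):
    """Toy MRF regularization by iterated conditional modes.
    prob:  (h, w, c) per-pixel class probabilities.
    edges: (h, w) edge strength in [0, 1]; smoothing is weakened on
           edges (the 'variable weighting coefficient' idea)."""
    h, w, c = prob.shape
    labels = prob.argmax(axis=2)
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                votes = np.zeros(c)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        votes[labels[ni, nj]] += 1
                weight = beta * (1.0 - edges[i, j])  # weaker smoothing on edges
                labels[i, j] = (np.log(prob[i, j] + 1e-9) + weight * votes).argmax()
    return labels

# Example: a field of class 0 with one noisy pixel; no edges present.
prob = np.zeros((5, 5, 2))
prob[..., 0], prob[..., 1] = 0.9, 0.1
prob[2, 2] = [0.4, 0.6]                  # isolated noisy pixel
labels = icm_smooth(prob, np.zeros((5, 5)))
```

With a zero edge map the isolated noisy pixel is smoothed away; near a strong edge (`edges` close to 1) the data term dominates and the boundary label is preserved.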
KEYWORDS: Hyperspectral imaging, Feature extraction, Data transmission, Principal component analysis, Signal to noise ratio, Interference (communication), 3D modeling, Graphics processing units, Image classification, Computer architecture
We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid in which each thread processes a sub-block of data, to facilitate the spatial and spectral neighborhood searches in noise estimation, one of the most important steps in OMNF. We then optimized the processing flow by computing the noise covariance matrix before the image covariance matrix, reducing the transmission of the original hyperspectral image data. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an NVIDIA Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. In experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup over the CPU implementation, especially for highly data-parallelizable and arithmetically intensive parts of the algorithm, such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation meets the requirements of on-board real-time feature extraction.
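The mathematical core of an MNF-style transform can be sketched on the CPU to show the two covariance matrices involved. The neighbor-difference noise estimator and the synthetic cube are illustrative assumptions; the paper's OMNF uses an optimized spatial/spectral neighborhood search rather than this simple horizontal difference.

```python
import numpy as np

def noise_covariance(cube):
    """Estimate noise covariance of a (rows, cols, bands) cube from
    horizontal neighbor differences (a common MNF-style estimator)."""
    diff = cube[:, 1:, :] - cube[:, :-1, :]   # neighbor residuals
    D = diff.reshape(-1, cube.shape[2])
    return (D.T @ D) / (2.0 * len(D))         # /2: variance of a difference

def mnf_components(cube, n_comp=3):
    """Noise-whitened principal components (illustrative MNF sketch)."""
    X = cube.reshape(-1, cube.shape[2])
    Sn = noise_covariance(cube)               # noise covariance first, as in
    S = np.cov(X, rowvar=False)               # the optimized processing flow
    nv, nV = np.linalg.eigh(Sn)
    W = nV @ np.diag(nv ** -0.5) @ nV.T       # Sn^(-1/2) whitening matrix
    ev, V = np.linalg.eigh(W @ S @ W)
    order = np.argsort(ev)[::-1]              # components sorted by SNR
    return X @ (W @ V[:, order[:n_comp]])

# Illustrative cube of pure unit-variance noise.
rng = np.random.default_rng(0)
cube = rng.normal(size=(50, 50, 4))
Sn = noise_covariance(cube)
Y = mnf_components(cube, n_comp=2)
```

Computing the (cheap) noise covariance before the full image covariance mirrors the data-transmission optimization described in the abstract.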
Occlusion is a persistent problem when counting vehicles in congested traffic. This paper presents an approach to address this problem. The proposed approach consists of three main procedures. First, a modified background subtraction is performed, with the aim of segmenting slowly moving objects from an illumination-variant background. Second, object tracking is performed using the CONDENSATION algorithm, which avoids the correspondence-matching problem. Third, an inspecting procedure is executed: when a bus first occludes a car and then moves away a few frames later, the car reappears in the scene; the inspecting procedure should detect the "new" car and add it as a tracked object.
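The first procedure can be sketched with a running-average background model, one common form of illumination-adaptive background subtraction. The adaptation rate and threshold are illustrative, not the paper's parameters.

```python
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Running-average background model: a small alpha lets the model
    track slow illumination changes without absorbing moving objects."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Binary mask of pixels that differ strongly from the background."""
    return np.abs(frame - bg) > thresh

# Example: a dark scene with one bright moving object.
bg = np.zeros((10, 10))
frame = np.zeros((10, 10))
frame[2:5, 2:5] = 100.0                   # moving object
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)
```

The resulting mask feeds the tracking stage, while the slow background update keeps gradual lighting changes from being misclassified as vehicles.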