Hyperspectral image unmixing is an important part of hyperspectral data analysis. Mixed pixel decomposition consists of two steps: endmember extraction (finding the unique signatures of pure ground components) and abundance estimation (estimating the proportion of each endmember in each pixel). Recently, a Discrete Particle Swarm Optimization (DPSO) algorithm was proposed for accurately extracting endmembers with good optimization performance. However, the DPSO algorithm has very high computational complexity, which makes the endmember extraction procedure very time consuming for hyperspectral image unmixing. In this paper, the DPSO endmember extraction algorithm was therefore parallelized, implemented on the CUDA platform (K20 GPU), and evaluated with real hyperspectral remote sensing data. The experimental results show that, as the number of particles increases, the parallelized version achieves much higher computing efficiency while maintaining the same endmember extraction accuracy.
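The step that benefits most from parallelization in such swarm-based endmember extraction is the fitness evaluation of each particle's candidate endmember set. The NumPy sketch below is only an illustration of that idea, not the authors' CUDA code: it assumes a particle encodes pixel indices and that fitness is the unconstrained least-squares reconstruction error, which may differ from the paper's exact formulation.

```python
import numpy as np

def particle_fitness(pixels, endmember_idx):
    """Reconstruction RMSE of one candidate endmember set (one DPSO particle).

    pixels        : (num_pixels, num_bands) hyperspectral image in matrix form
    endmember_idx : indices of the pixels this particle proposes as endmembers
    """
    E = pixels[endmember_idx].T                  # (bands, p) candidate endmembers
    # Unconstrained least-squares abundances; the paper's unmixing model
    # (e.g. fully constrained LS) may differ -- this is only an illustration.
    A, *_ = np.linalg.lstsq(E, pixels.T, rcond=None)
    residual = pixels.T - E @ A
    return np.sqrt(np.mean(residual ** 2))

# Toy usage: random "image" with 200 pixels, 50 bands, one 4-endmember particle
rng = np.random.default_rng(0)
X = rng.random((200, 50))
print(particle_fitness(X, rng.choice(200, size=4, replace=False)))
```

On a GPU, this per-particle evaluation is what would be distributed across thread blocks so that all particles are scored concurrently in each iteration.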
Anomaly detection is one of the most important techniques for remotely sensed hyperspectral data interpretation. Developing fast processing techniques for anomaly detection has received considerable attention in recent years, especially in analysis scenarios with real-time constraints. In this paper, we develop an embedded graphics processing unit (GPU)-based parallel implementation of the streaming background statistics anomaly detection algorithm. The streaming background statistics method simulates real-time anomaly detection, in which processing is performed at the same time as the data are collected. The algorithm is implemented on the NVIDIA Jetson TK1 development kit. The experiments, conducted with real hyperspectral data, indicate the effectiveness of the proposed implementation. This work shows that embedded GPUs provide a promising solution for high-performance, low-power hyperspectral image applications.
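As a rough illustration of what "streaming background statistics" means here, the sketch below keeps a running mean and covariance of the background and scores each pixel as it arrives, in the style of an RX detector. It is a minimal NumPy stand-in under that assumption, not the Jetson TK1 GPU code; all class and method names are illustrative.

```python
import numpy as np

class StreamingBackground:
    """Running mean and covariance of the background, updated pixel by pixel.

    Illustrative sketch of streaming background statistics for RX-style
    anomaly detection; the paper's GPU kernels and update rule may differ.
    """
    def __init__(self, num_bands):
        self.n = 0
        self.mean = np.zeros(num_bands)
        self.scatter = np.zeros((num_bands, num_bands))      # sum of outer products

    def update(self, pixel):
        self.n += 1
        delta = pixel - self.mean
        self.mean += delta / self.n
        self.scatter += np.outer(delta, pixel - self.mean)   # Welford-style update

    def rx_score(self, pixel):
        cov = self.scatter / max(self.n - 1, 1)
        diff = pixel - self.mean
        return float(diff @ np.linalg.solve(cov + 1e-6 * np.eye(len(diff)), diff))

# Pixels are scored as soon as they arrive, mimicking line-by-line acquisition.
bg = StreamingBackground(num_bands=8)
rng = np.random.default_rng(1)
for x in rng.normal(size=(500, 8)):
    bg.update(x)
print(bg.rx_score(rng.normal(size=8) + 5.0))   # offset pixel -> large score
```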
One of the most important tasks in analyzing hyperspectral image data is the classification process [1]. In general, to enhance the classification accuracy, a preprocessing step is adopted to remove noise from the data before classification. However, for time-sensitive applications such as risk prevention and response, it is desirable that the classifier still appears to execute correctly from the user's perspective even when the data contain noise. As the most popular classifier, the Support Vector Machine (SVM) has been widely used for hyperspectral image classification and has proved to be a very promising technique for supervised classification [2]. In this paper, two experiments are performed to demonstrate that, for hyperspectral data with noise, the SVM algorithm is still able to execute correctly from the user's perspective as long as the noise remains within a certain range.
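A minimal sketch of the kind of experiment described, using scikit-learn and synthetic pixels in place of the real hyperspectral data used in the paper (noise levels and data generation here are purely illustrative):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labelled hyperspectral pixels (the paper uses real data).
rng = np.random.default_rng(0)
n_per_class, n_bands = 200, 30
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_bands))
               for c in range(3)])
y = np.repeat(np.arange(3), n_per_class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

# Add zero-mean Gaussian noise of increasing strength to the test pixels and
# check whether the classification accuracy remains acceptable.
for sigma in (0.0, 0.1, 0.3, 0.5):
    noisy = X_te + rng.normal(scale=sigma, size=X_te.shape)
    acc = accuracy_score(y_te, clf.predict(noisy))
    print(f"noise sigma={sigma:.1f}  accuracy={acc:.3f}")
```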
We propose a commodity graphics processing unit (GPU)-based massively parallel and efficient computation for spectral-spatial classification of hyperspectral images. The spectral-spatial classification framework is based on the marginal probability distribution, which uses all of the information in the hyperspectral data. In this framework, first, the posterior class probability is modeled with a discriminative random field in which the association potential is linked to a multinomial logistic regression (MLR) classifier and the interaction potential modeling the spatial information is linked to a Markov random field multilevel logistic (MLL) prior. Second, the maximizers of the posterior marginals are computed via the loopy belief propagation (LBP) method. In addition, the regressors of the multinomial logistic regression classifier are inferred by the logistic regression via variable splitting and augmented Lagrangian (LORSAL) algorithm. Although the spectral-spatial classification framework exhibits state-of-the-art accuracy compared with similar approaches, its computational complexity is very high. We take advantage of the massively parallel computing capability of an NVIDIA Tesla C2075 with the compute unified device architecture, including a set of GPU-accelerated linear algebra libraries (CULA), to dramatically improve the computation speed of this hyperspectral image classification framework. Shared memory and asynchronous transfer techniques are also used for further computational optimization. Real hyperspectral data sets collected by the National Aeronautics and Space Administration's airborne visible infrared imaging spectrometer and the reflective optics system imaging spectrometer are used for effectiveness evaluation. The results show that we achieved speedups of 92-fold on LORSAL, 69-fold on MLR, 127-fold on MLL, 160-fold on LBP, and 73-fold on the whole spectral-spatial classification framework, as compared with the single-core central processing unit counterpart.
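To make the association potential concrete, the sketch below computes MLR class posteriors as a softmax over learned regressors. This is the standard MLR posterior only, written in NumPy for illustration; the LORSAL inference of the regressors, the MLL spatial prior, and the LBP marginal computation described above are not shown, and all variable names are assumptions.

```python
import numpy as np

def mlr_posterior(features, regressors):
    """Multinomial logistic regression class posteriors (association potential).

    features   : (num_pixels, d) pixel feature vectors (e.g. kernelized spectra)
    regressors : (num_classes, d) regressors, as inferred by LORSAL in the paper
    Returns a (num_pixels, num_classes) matrix of class probabilities.
    """
    scores = features @ regressors.T
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    expo = np.exp(scores)
    return expo / expo.sum(axis=1, keepdims=True)

# Toy usage with random values (illustrative only)
rng = np.random.default_rng(0)
print(mlr_posterior(rng.random((5, 10)), rng.random((4, 10))))
```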
KEYWORDS: Hyperspectral imaging, Feature extraction, Data transmission, Principal component analysis, Signal to noise ratio, Interference (communication), 3D modeling, Graphics processing units, Image classification, Computer architecture
We present a parallel implementation of the optimized maximum noise fraction transform algorithm (G-OMNF) for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes its computing flow. We first define a three-dimensional grid in which each thread processes a sub-block of data, which facilitates the spatial and spectral neighborhood searches used in noise estimation, one of the most important steps in OMNF. We then optimize the processing flow by computing the noise covariance matrix before the image covariance matrix, which reduces the amount of original hyperspectral image data that must be transferred. These optimization strategies greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an NVIDIA Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. In experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup over the CPU implementation, especially for highly data-parallel and arithmetically intensive parts of the algorithm, such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation meets the requirement for on-board real-time feature extraction.
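For reference, the sketch below shows a generic maximum noise fraction construction in NumPy/SciPy: estimate the noise (here with a simple horizontal shift difference, a stand-in for the OMNF neighborhood-based estimate), compute the noise covariance before the image covariance, and solve the generalized eigenproblem. It is an illustration under those assumptions, not the G-OMNF CUDA implementation.

```python
import numpy as np
from scipy.linalg import eigh

def mnf_transform(cube, n_components):
    """Maximum noise fraction transform of a (rows, cols, bands) cube.

    Noise is estimated with a simple horizontal shift difference; the OMNF
    noise estimate (spatial/spectral neighborhood search) is more elaborate,
    so treat this as an illustrative stand-in only.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)

    # Noise covariance is computed first (as in the optimized flow) from
    # differences between horizontally adjacent pixels.
    diff = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2.0)
    noise_cov = np.cov(diff, rowvar=False)
    image_cov = np.cov(X, rowvar=False)

    # Generalized eigenproblem: maximize the signal-to-noise ratio per component.
    _, vecs = eigh(image_cov, noise_cov)
    components = vecs[:, ::-1][:, :n_components]     # largest SNR first
    return (X @ components).reshape(rows, cols, n_components)

# Toy usage on a random cube
rng = np.random.default_rng(0)
print(mnf_transform(rng.random((32, 32, 20)), 5).shape)
```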
Optical remotely sensed data, especially hyperspectral data, have emerged as the most useful data source for regional crop classification. Hyperspectral data contain fine spectral detail, but their spatial coverage is narrow. Multispectral data may not allow unique identification of crop endmembers because of their coarse spectral resolution, but they do provide broad spatial coverage. This paper proposes a multisensor analysis method that makes full use of the strengths of both data sources and improves multispectral classification with multispectral signatures converted from hyperspectral signatures in the overlapping regions. Full-scene crop mapping with multispectral data was then implemented using the converted signatures and SVM classification. The accuracy assessment showed that the proposed classification method is promising.
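One common way to convert a hyperspectral signature into multispectral band values is to average the signature under each multispectral band's spectral response function. The sketch below illustrates that idea; the response functions, band centers, and function names are assumptions, and the paper's exact conversion procedure may differ.

```python
import numpy as np

def resample_to_multispectral(hs_wavelengths, hs_signature, band_responses):
    """Convert a hyperspectral signature into multispectral band values.

    Each multispectral band value is the response-weighted average of the
    hyperspectral signature over that band's spectral response function.
    """
    ms_values = []
    for response_fn in band_responses:            # one callable per MS band
        weights = response_fn(hs_wavelengths)
        ms_values.append(np.sum(weights * hs_signature) / np.sum(weights))
    return np.array(ms_values)

# Toy usage: Gaussian responses centred at 560 nm and 660 nm (illustrative only)
wl = np.linspace(400, 1000, 200)
sig = np.exp(-((wl - 700) / 150) ** 2)            # fake crop signature
gauss = lambda c, w: (lambda x: np.exp(-((x - c) / w) ** 2))
print(resample_to_multispectral(wl, sig, [gauss(560, 30), gauss(660, 30)]))
```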
This paper proposes a GPU-based implementation of radiometric normalization algorithms, used as a representative case study of on-board data processing techniques for hyperspectral imagery. Three radiometric normalization algorithms based on the column average and standard deviation statistics of the raw image were implemented and applied to real hyperspectral images to evaluate their performance. These algorithms were implemented using the compute unified device architecture (CUDA) and tested on the NVIDIA Tesla C2075 architecture. The airborne Pushbroom Hyperspectral Imager (PHI) was flown to acquire spectrally contiguous images as experimental datasets. The results show that MN worked best among the three methods and that the speedups achieved by the GPU implementations over their CPU counterparts are outstanding.
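As a rough idea of column-statistics-based radiometric normalization for a pushbroom sensor, the sketch below matches each detector column's mean and standard deviation to the band-wide values. This is a generic moment-matching formulation written in NumPy for illustration; it is not necessarily any of the three methods (or the "MN" variant) evaluated in the paper.

```python
import numpy as np

def column_moment_normalize(band):
    """Normalize each detector column to the band-wide mean and std.

    'band' is a (lines, samples) image from one spectral band of a pushbroom
    sensor; each column corresponds to one detector element.
    """
    col_mean = band.mean(axis=0)
    col_std = band.std(axis=0)
    target_mean = band.mean()
    target_std = band.std()
    gain = target_std / np.where(col_std == 0, 1.0, col_std)
    return (band - col_mean) * gain + target_mean

# Toy usage: add a striping offset to column 10 and remove it
rng = np.random.default_rng(0)
img = rng.normal(loc=100.0, scale=5.0, size=(64, 32))
img[:, 10] += 20.0
print(column_moment_normalize(img)[:, 10].mean())   # back near the global mean
```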
Military target detection is an important application of hyperspectral remote sensing, and it strongly demands real-time or near-real-time processing. However, the massive amount of hyperspectral image data seriously limits the processing speed. Real-time image processing on hardware platforms such as the digital signal processor (DSP) is one of the recent developments in hyperspectral target detection. In hyperspectral target detection algorithms, a correlation matrix or covariance matrix is usually computed to whiten the data, which is a very time-consuming process. In this paper, a strategy named spatial-spectral information extraction (SSIE) is presented to accelerate hyperspectral image processing. The strategy is composed of band selection and sample covariance matrix estimation: band selection exploits the high spectral correlation in the hyperspectral image, while sample covariance matrix estimation exploits the high spatial correlation in the remote sensing image. This strategy is implemented on a DSP hardware platform. The hardware implementation of the constrained energy minimization (CEM) algorithm consists of a hardware architecture and a software architecture: the hardware architecture contains the chips and peripheral interfaces, and the software architecture establishes a data transfer model to handle the communication between the DSP and the PC. In the experiments, the performance of the software implementation in ENVI is compared with that of the hardware implementation on the DSP. The results show that both the processing speed and the recognition results on the DSP are better than those obtained with ENVI. The detection results demonstrate that the strategy implemented on the DSP is sufficient to enable near-real-time supervised target detection.
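For reference, the standard CEM filter is w = R⁻¹d / (dᵀR⁻¹d), where R is the sample correlation matrix and d the target signature; the detector output for a pixel x is wᵀx. The NumPy sketch below illustrates that textbook form only; the SSIE band selection, the fast covariance estimation, and the DSP-specific implementation are not shown, and the toy data are purely illustrative.

```python
import numpy as np

def cem_detector(pixels, target):
    """Constrained energy minimization detector.

    pixels : (num_pixels, num_bands) image in matrix form
    target : (num_bands,) target signature d
    The filter w = R^-1 d / (d^T R^-1 d) minimizes output energy subject to
    w^T d = 1, where R is the sample correlation matrix.
    """
    R = pixels.T @ pixels / pixels.shape[0]          # sample correlation matrix
    Rinv_d = np.linalg.solve(R + 1e-6 * np.eye(len(target)), target)
    w = Rinv_d / (target @ Rinv_d)
    return pixels @ w                                # detection score per pixel

# Toy usage: the pixel containing the target signature scores highest
rng = np.random.default_rng(0)
X = rng.random((1000, 40))
d = rng.random(40)
X[123] = 0.8 * d + 0.2 * X[123]
print(int(np.argmax(cem_detector(X, d))))            # -> 123 in this toy case
```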
HJ1A HSI is an interferometric imaging spectrometer (Hyperspectral Imager, sensor ID: HSI) of the HJ-1 small satellite. Its hyperspectral image data are organized and stored in hierarchical data format version 5 (HDF5) files. This paper presents the data model, file structure, library, and programming model of the HDF5 file format. The adapter design pattern is used to translate the HDF5 interface into a compatible interface. We then give a detailed analysis of the HJ1A hyperspectral image data. The HJ1A hyperspectral image data model includes five groups under the root group: 'GlobalAttributes', 'ImageAttributes', 'ImageData', 'MapInformation', and 'ProductParameters'. The 'ImageData' group includes three datasets: 'BandData', 'CalibrationCoefficient', and 'WaveLength'. Based on the relationships between the models and the implementations, we give a flow chart for extracting HJ1A hyperspectral image data from HDF5 files. The Level-2 product of the HJ1A hyperspectral image data is used for the experiments, and we present the RGB color composite image and the 3D cube of the extracted data. Tests show that the data extraction with this approach is correct and fast. This work provides a solid foundation for quality evaluation and application of HJ1A hyperspectral images.
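A minimal h5py sketch of reading such a product, using only the group and dataset names listed above. The cube layout, the attribute handling, and the band indices in the commented usage are assumptions for illustration and may not match the real HJ1A products exactly.

```python
import h5py
import numpy as np

def read_hj1a_hsi(path):
    """Read an HJ1A HSI HDF5 product using the layout described above.

    Returns the image cube, calibration coefficients, wavelengths, and the
    global attributes. The cube shape is assumed to be (bands, lines, samples).
    """
    with h5py.File(path, "r") as f:
        band_data = f["ImageData/BandData"][()]
        coeffs = f["ImageData/CalibrationCoefficient"][()]
        wavelengths = f["ImageData/WaveLength"][()]
        global_attrs = dict(f["GlobalAttributes"].attrs)
    return band_data, coeffs, wavelengths, global_attrs

# Example: build an RGB composite from three bands of the extracted cube
# cube, coeffs, wl, meta = read_hj1a_hsi("HJ1A_HSI_example.h5")   # path illustrative
# rgb = np.stack([cube[b] for b in (28, 18, 8)], axis=-1)         # band indices illustrative
```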