Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623301 (2006) https://doi.org/10.1117/12.663591
In this paper, we perform an analytical comparison of two well-known detectors, the matched filter detector (MFD)
and the orthogonal subspace projection (OSP) detector, for subpixel target detection in hyperspectral images
under the assumption of the structured model. The OSP detector (equivalent to the least squares estimator) is a
popular detector utilizing background signature information. On the other hand, the MFD is intended for a model
without background information, and it is often used for its simplicity. The OSP detector seems to be more reliable
because it removes the interference of background signatures. However, it has been demonstrated in the literature
that sometimes the MFD can be more powerful. In this paper, we show analytical results explaining the relationship
between the two detectors beyond the anecdotal evidence from specific hyperspectral images or simulations. We
also give some guidelines on when the MFD may be more beneficial than the OSP, and when the OSP is better
because it is more robust across a wide range of conditions.
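As a concrete illustration of the two detectors compared above, the sketch below (Python with NumPy; the function names and toy model are illustrative, not from the paper) scores a single pixel under the structured linear-mixing model: the MFD simply correlates the pixel with the target signature, while the OSP detector first projects out the background subspace.

```python
import numpy as np

def mfd_score(x, s):
    # Matched filter detector: correlate the pixel x with the target
    # signature s; no background information is used.
    return float(s @ x)

def osp_score(x, s, B):
    # Orthogonal subspace projection: remove the subspace spanned by
    # the background signatures (columns of B), then correlate with s.
    P_perp = np.eye(len(x)) - B @ np.linalg.pinv(B)
    return float(s @ P_perp @ x)
```

On a pure-background pixel the OSP score is numerically zero while the MFD score generally is not, which is the sense in which OSP removes background interference.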
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623302 (2006) https://doi.org/10.1117/12.663592
In practical target detection, we often deal with situations where even a relatively small target is present in two or more adjacent pixels, due to its physical configuration with respect to the pixel grid. At the same time, a relatively large but narrow object (such as a wall or a narrow road) may be collectively present in many pixels but be only a small part of each single pixel. In such cases, critical information about the target is spread among many spectra and cannot be used efficiently by detectors that investigate each single pixel separately. We show that these difficulties can be overcome by using appropriate smoothing operators. We introduce a class of Locally Adaptive Smoothing detectors and evaluate them on three different images representing a broad range of blur that would interfere with the detection process in practical problems. The smoothing-based detectors prove to be very powerful in these cases, and they outperform the traditional detectors such as the constrained energy minimization (CEM) filter or the one-dimensional target-constrained interference-minimized filter (TCIMF).
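For reference, the constrained energy minimization (CEM) filter used above as a baseline has a compact closed form: minimize the filter's output energy over the image subject to a unit response on the target signature. A minimal NumPy sketch (names are illustrative, not from the paper):

```python
import numpy as np

def cem_filter(X, d):
    # X: (bands, pixels) image data; d: target signature, shape (bands,).
    R = X @ X.T / X.shape[1]            # sample correlation matrix
    Rinv = np.linalg.inv(R)
    # Minimize w @ R @ w subject to w @ d == 1 (closed-form solution).
    return Rinv @ d / (d @ Rinv @ d)
```

Applying the filter is `w @ pixel`; the constraint guarantees a full-strength response on the target spectrum while background energy is suppressed.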
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623303 (2006) https://doi.org/10.1117/12.664112
In this paper, we present a kernel-based nonlinear version of canonical correlation analysis (CCA),
the so-called kernel canonical correlation analysis (KCCA), for
hyperspectral anomaly detection applications. CCA measures only the linear dependency
between two sets of signal vectors (target and background), ignoring the higher-order correlations
crucial for distinguishing between man-made objects and background clutter.
In order to exploit nonlinear correlations we implicitly map the two
sets of data into a high dimensional feature space where correlations of nonlinear features
extracted from the original data are exploited by a kernel function.
A generalized eigenproblem is then formulated for KCCA. In this paper, both CCA and KCCA are applied
to real hyperspectral images, and the detection performance of CCA and KCCA is compared to that of the
well-known RX anomaly detection algorithm.
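As a point of reference for the linear baseline, the leading canonical correlation of CCA is the largest singular value of the whitened cross-covariance. A small NumPy sketch (the regularization `eps` and the function name are assumptions, not from the paper):

```python
import numpy as np

def cca_first_correlation(X, Y, eps=1e-6):
    # X: (n, p) and Y: (n, q) paired samples; returns the leading
    # canonical correlation between the two sets.
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    Lx = np.linalg.inv(np.linalg.cholesky(Sxx))   # whitening factor for X
    Ly = np.linalg.inv(np.linalg.cholesky(Syy))   # whitening factor for Y
    # Singular values of the whitened cross-covariance are the
    # canonical correlations.
    return float(np.linalg.svd(Lx @ Sxy @ Ly.T, compute_uv=False)[0])
```

KCCA replaces these inner products with kernel evaluations on the implicitly mapped data, which turns the problem into a generalized eigenproblem over Gram matrices, as the abstract describes.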
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623304 (2006) https://doi.org/10.1117/12.668224
The large amount of spectral information in hyperspectral imagery allows the accurate detection of subpixel objects. The use of subspace models for targets and backgrounds allows detection that is invariant to changing environmental conditions. The non-Gaussian behavior of target and background distribution residuals complicates the development of subspace-based detection methods. In this paper, we use discriminant analysis for feature extraction for separating subpixel 3D objects from cluttered backgrounds. The nonparametric estimation of distributions is used to establish the statistical models using the length and direction of residuals. Candidate subspaces are then evaluated to maximize their discriminatory power which is measured between estimated distributions of targets and backgrounds. In this context, a likelihood ratio test is used based on background and mixed statistics for subpixel detection. The detection algorithm is evaluated for HYDICE images and a number of images simulated using DIRSIG under a variety of conditions. The experimental results demonstrate accurate detection performance on these data sets.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623305 (2006) https://doi.org/10.1117/12.665754
Some of the earliest concepts for detecting targets with hyperspectral systems have not been realized in practice in real-time
systems. First we review some of the earliest approaches, most of which were based on simplistic models of hyperspectral
clutter. Next, the algorithms employed by the first operational sensors are reviewed, and their relationship to the
earlier naive models is discussed. Variants that are likely to be incorporated in next-generation systems are described.
Finally, we speculate on the prospects of some futuristic detection methods.
R. Mayer, J. Antoniades, M. Baumback, D. Chester, J. Edwards, A. Goldstein, D. Haas, S. Henderson
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623306 (2006) https://doi.org/10.1117/12.663945
This study adapts a variety of techniques derived from multispectral image classification to find objects amid cluttered backgrounds in hyperspectral imagery. This study quantitatively compares the algorithms against a standard object search, the matched filter (MF), and a recently developed object detector, the Adaptive Cosine Estimator (ACE). These object searches require calculating the Mahalanobis distance between the average object spectral signature and the test pixel spectrum, which in turn requires the computation of a covariance matrix. The covariance matrix is generated using the entire image (Whitened Euclidean Distance, WED) or using pixels associated with the object (Maximum Likelihood Classifier, MLC). The latter computation requires a relatively large number of pixels to generate a non-singular, accurate covariance matrix. Alternatively, the object covariance matrix can be estimated by optimally mixing (via likelihood maximization) the diagonal, object, and entire-image covariance matrices; this approximation is called the Regularized Maximum Likelihood Classifier (RMLC). The object searches MF, ACE, WED, MLC, and RMLC were applied to visible/near-IR data collected from forest and desert environments. This study searched for objects using object signatures and covariance matrices taken directly from the scene and from statistically transformed object signatures and covariance matrices from another time. This study found a substantial reduction in the number of false alarms (a factor of 10 to 1000) using WED, ACE, and RMLC relative to MF searches for the two independent data collects. The regularization of in-scene and transformed covariance matrices substantially reduced false alarms relative to using unprocessed covariance matrices. This study adds simple, high-performing algorithms to the object search arsenal.
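The covariance-regularization step described above can be sketched as a convex mix of the three candidate matrices, followed by a Mahalanobis-distance score. In the paper the mixing weights would come from likelihood maximization; here they are left as free parameters, and the function names are illustrative only:

```python
import numpy as np

def mixed_covariance(S_obj, S_img, a, b):
    # Convex mix of diagonal, object, and whole-image covariance
    # estimates (weights a, b, and 1-a-b). Regularizes an object
    # covariance that may be singular when built from few pixels.
    D = np.diag(np.diag(S_obj))
    return (1.0 - a - b) * D + a * S_obj + b * S_img

def mahalanobis(x, mean, S):
    # Squared Mahalanobis distance of pixel x from the object mean.
    d = x - mean
    return float(d @ np.linalg.solve(S, d))
```

Even when the object covariance alone is rank-deficient, the mixed estimate stays invertible, which is what makes the RMLC-style search usable with few object pixels.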
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623307 (2006) https://doi.org/10.1117/12.669136
Covariance equalization (CE) is a method by which one can predict the change in an object's hyperspectral signature
due to changes in sun position, atmospheric conditions, and viewing angle and range. Specifically, CE produces a linear
transformation that relates the object's signature as measured at the sensor at a particular time to that measured at
another time and under different conditions. The transformation is based on the background statistics of a scene imaged
at the two times. Although CE was derived under the assumption that the two images cover mostly the same geographic
area, it also has been found to work well for objects that have moved from one location to another. The CE technique
has been previously verified with data from a nadir-viewing visible hyperspectral camera. In this paper, however, we
show results from the application of CE to highly oblique hyperspectral SWIR data. We evaluate the utility of CE
primarily through its effectiveness in transforming signatures acquired under one set of conditions for application to
matched-filter object detection under a second set of conditions (e.g., view angle, slant range, altitude, atmospheric
conditions, and time of day). Object detection with highly oblique sensors (75 deg. to 80 deg. off-nadir) is far more
difficult than with nadir-viewing sensors for several reasons: increased atmospheric optical thickness, which results in
lower signal-to-noise and higher adjacency effects; fewer pixels on object; the effects of the nonuniformity of the
bidirectional reflectance function of most man-made objects; and the change in pixel size when measurements are taken at
different slant ranges.
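One common form of covariance equalization builds the linear transformation from the symmetric square roots of the two scenes' background covariances; the sketch below follows that general recipe (the exact form used in the paper may differ, and the function names are my own):

```python
import numpy as np

def sqrtm_sym(S):
    # Symmetric positive-semidefinite matrix square root via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def covariance_equalize(sig, m1, S1, m2, S2):
    # Predict how a signature measured under condition 1 appears under
    # condition 2, using only the background means (m1, m2) and
    # covariances (S1, S2) of the two scenes.
    T = sqrtm_sym(S2) @ np.linalg.inv(sqrtm_sym(S1))
    return T @ (sig - m1) + m2
```

When the two scenes' background statistics are identical the transformation reduces to the identity, so the prediction degrades gracefully as conditions converge.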
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623308 (2006) https://doi.org/10.1117/12.665142
Hyperspectral imaging spectrometers have proven to be both versatile and powerful
instruments with applications in diverse areas such as medical diagnosis, land usage, military target
detection, and art forgery. In many applications scanning systems cannot be effectively employed
and true "flash" operation is necessary. Multiplex systems have been developed which can gather
information in multispectral bands simultaneously, and then produce a datacube after mathematical
restoration. Such systems enjoy compact size, robust construction, low cost, and zero moving
parts, at the cost of highly complex mathematical restoration operations. Currently the limiting
feature of tomographic hyperspectral imagers such as the FMDIS [1,2] is the speed of
restoration. Due to the large sizes of the restoration kernel, restorations are typically recursive and
require many iterations to achieve satisfactory results. Little can be done to make the systems
smaller since the size is determined by the number of colors and pixel size of the focal plane arrays
(FPA) employed. Thus, techniques must be investigated to speed up the restoration either by
reducing the number of iterations or reducing the number of operations within an iteration. Since the
operations are already performed in sparse format, little can be done to reduce the number of operations
in an iteration; we therefore investigate reducing the number of iterations
through mathematical acceleration. We assume this acceleration will work to advantage regardless
of the mechanism (PC-based or dedicated processor such as a gate array) by which the restoration is
implemented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623309 (2006) https://doi.org/10.1117/12.664903
This research continues the development of the Model-Based Spectral Image Deconvolution (MBSID) algorithm first presented elsewhere. The deconvolution algorithm is based on statistical estimation and is used to spectrally deconvolve images collected from a spectral imaging sensor. The development of the algorithm requires only two key elements, 1) the statistics of the photon arrival and 2) an in-depth knowledge of the spectral imaging sensor. With these two elements, the MBSID algorithm can, through image post-processing, increase the spectral resolution of the images. While MBSID algorithms can be developed for any spectral imaging system, this research focuses on an algorithm developed for ASIS (AEOS Spectral Imaging Sensor), a new spectral imaging sensor installed with the 3.6m Advanced Electro-Optical System (AEOS) telescope at the Maui Space Surveillance Complex (MSSC). The primary purpose of ASIS is to take spatially resolved spectral images of space objects. The stringent requirements associated with imaging these objects, especially the low-light levels and object motion, required a sensor design with less spectral resolution than required for image analysis. However, by applying MBSID to the collected data, the sensor will be capable of achieving a much higher spectral resolution, allowing for better spectral analysis of the space object. Before the algorithm is used on data collected with ASIS, it is proven with data collected using a set-up similar to that of ASIS. The lab data successfully shows that the MBSID algorithm can improve both the spatial and spectral resolution for a collected spectral image.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330A (2006) https://doi.org/10.1117/12.667974
In hyperspectral imaging, the quality of the collected spectral signatures can be degraded by blurring due to the channel weighting function of the imaging spectrometer. In this work, we are investigating reconstruction
techniques to enhance salient features and remove degradation effects in measured spectra to assist in
subsequent machine analysis. Here, preliminary work is presented showing spectral restoration of simulated
data and of real data from the AVIRIS NW Indian Pines hyperspectral image, using several restoration
algorithms. The restored AVIRIS image was classified, and the classification accuracy was used to assess the
usefulness of the restoration process. All the methods gave comparable results with the Jansson method
giving slightly higher classification accuracy.
Joseph Dirbas, Paula Henderson, Robert Fries, Alexander R. Lovett
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330B (2006) https://doi.org/10.1117/12.665084
PAR Government Systems Corporation (PAR) has deployed their turret mounted Mission Adaptable Narrowband
Tunable Imaging System (MANTIS-3T) and collected nearly 300 GBytes of multispectral data over mine-like targets in
a desert environment in support of mine countermeasures (MCM), intelligence, surveillance, and reconnaissance study
applications. Multispectral processing algorithms such as RX and SEM have demonstrated success with hyperspectral
data when searching for large targets. As target size decreases relative to sensor resolution, false alarms increase and
performance declines. Detection of recently placed mine-like objects, however, can be enhanced by adding a temporal
dimension to the spectral processing. An automated color-to-color and frame-to-frame registration algorithm has been
developed as a first, and required, step to an automated multispectral change detection algorithm. The automated
registration algorithms are used to process multispectral desert data collected with MANTIS-3T. Performance results and
processing difficulties are reported.
John Kerekes, Michael Muldowney, Kristin Strackerjan, Lon Smith, Brian Leahy
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330C (2006) https://doi.org/10.1117/12.666121
Hyperspectral imagery has the capability of capturing spectral features of interest that can be used to differentiate
among similar materials. While hyperspectral imaging has been demonstrated to provide data that enable classification
of relatively broad categories, there remain open questions as to how fine a discrimination is possible. An application
of this fine discrimination question is the potential that spectral features exist in the surface reflectance of ordinary
civilian vehicles that would enable tracking of a particular vehicle across repeated hyperspectral images in a cluttered
urban area.
To begin to explore this question a vehicle tracking experiment was conducted in the summer of 2005 on the Rochester
Institute of Technology (RIT) campus in Rochester, New York. Several volunteer vehicles were moved around campus
at specific times coordinated with overflights of RIT's airborne Modular Imaging Spectrometer Instrument (MISI).
MISI collected sequential images of the campus in 70 spectral channels from 0.4 to 1.0 microns with a ground
resolution of approximately 2.5 meters. Ground truth spectra and photographs were collected for the vehicles.
These data are being analyzed to determine the ability to uniquely associate a vehicle in one image with its location in a
subsequent image. Initial results have demonstrated that the spectral measurement of a specific vehicle can be used to
find the same vehicle in a subsequent image, although this is not always possible and is very dependent upon the
specifics of the situation. Additionally, efforts are presented that explore predicted performance for variations in scene
and sensor parameters through an analytical performance prediction model.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330D (2006) https://doi.org/10.1117/12.666748
One of the key requirements of real-time processing systems for remote sensors is the ability to accurately and automatically geo-locate events. This capability often relies on the ability to find control points to feed into a registration-based geo-location algorithm. Clouds can make the choice of control points difficult. If each pixel in a given image can be identified as cloudy or clear, the geo-location algorithm can limit the control point selection to clear pixels, thereby improving registration accuracy. Most cloud masking algorithms rely on a large number of spectral bands for good results, e.g., MODIS, whereas with our sensor, we have only three simultaneous bands available. This paper discusses a promising new approach to generating cloud masks in real-time with a limited number of spectral bands. The effort investigated statistical methods, spatial and texture-based approaches and evaluated performance on real remote sensing data. Although the spatial and texture-based approaches did not exhibit good performance due to sensor limitations in spatial resolution and too much variation in spectral response of both surface features and clouds, the statistical classification approach applied to only two bands performed very well. Images from three daytime remote sensing collects were analyzed to determine features that best separate pixels into cloudy and clear classes. A Bayes classifier was then applied to feature vectors computed for each pixel to generate a binary cloud mask. Initial results are excellent and show very good accuracy over a variety of terrain types, including mountains, desert, and coastline.
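The per-pixel statistical classification described above amounts to a two-class Gaussian Bayes decision on a short feature vector. A minimal sketch, assuming equal priors and a two-feature setup for illustration (the names are mine, not from the paper):

```python
import numpy as np

def fit_gaussian(X):
    # Per-class statistics from training pixels: mean and covariance.
    return X.mean(axis=0), np.cov(X.T)

def log_likelihood(x, mean, cov):
    # Log of the multivariate normal density, up to a shared constant.
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + d @ np.linalg.solve(cov, d))

def classify_pixel(x, cloudy_stats, clear_stats):
    # Equal-prior Bayes rule: pick the class with the higher likelihood.
    if log_likelihood(x, *cloudy_stats) > log_likelihood(x, *clear_stats):
        return "cloudy"
    return "clear"
```

Thresholding the likelihood ratio instead of taking the argmax would let an operator trade cloud leakage against lost clear pixels for the registration step.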
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330E (2006) https://doi.org/10.1117/12.665935
In this paper, we present a set of numerical tools, namely principal component analysis, clustering methods, and a covariance propagation model, that, when appropriately assembled, form what we refer to as the mean-class propagation (MCP) method. The MCP method generates clusters of similar class materials in hyperspectral imaging (HSI) scenes while preserving scene spectral clutter information for radiometric transport modeling. We demonstrate how various implementations of the MCP method can be employed to generate unique HSI products with varying levels of statistical realism across regions in the scene. Compared with traditional pixel-based methods, such implementations of the MCP method may allow for faster generation of HSI scene data, give better insight into how environmental conditions alter the statistical properties of measured scene clutter, and lay a foundation for the formulation of more robust spectral matched filter operations. To quantify the differences between the MCP method and a pixel-based method, we present a comparison of computational processing time for each method.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330F (2006) https://doi.org/10.1117/12.665089
The Noise-Adjusted Principal Components (NAPC) transform, or Maximum Noise Fraction (MNF) transform, has received considerable interest in the remote sensing community. Its basic idea is to reorganize the data such that the principal components are ordered in terms of signal-to-noise ratio (SNR), instead of variance as used in ordinary principal components analysis (PCA). The NAPC transform is very useful in multi-dimensional image analysis because SNR is directly related to image quality. As a result, object information can be better compacted into the first several principal components. This paper reviews the fundamental concept of the NAPC transform and its key practical implementation issue: obtaining an accurate noise estimate, on which the success of the implementation depends. Three applications of the NAPC transform in hyperspectral image analysis are presented: image classification, image compression, and image visualization. AVIRIS data are used for demonstration, showing that with the NAPC transform the performance of subsequent data analysis can be significantly improved because the major principal components are more informative.
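The transform itself reduces to noise whitening followed by ordinary PCA. A compact sketch, assuming the noise covariance is estimated separately (e.g., from shifted-pixel differences), with names of my choosing:

```python
import numpy as np

def mnf_transform(X, noise):
    # X: (pixels, bands) data; noise: (pixels, bands) noise estimate.
    Sn = np.cov(noise.T)
    w, V = np.linalg.eigh(Sn)
    W = V @ np.diag(1.0 / np.sqrt(w)) @ V.T      # noise-whitening matrix
    Z = (X - X.mean(axis=0)) @ W                 # unit noise in every band
    w2, V2 = np.linalg.eigh(np.cov(Z.T))
    order = np.argsort(w2)[::-1]                 # descending variance
    return Z @ V2[:, order]
```

After whitening, every band carries unit noise, so ordering components by variance is equivalent to ordering them by SNR: the first components carry the most image information, which is what the applications above exploit.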
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330G (2006) https://doi.org/10.1117/12.668222
We use physical considerations to show that an affine transformation can be used to model the effect of environmental changes on hyperspectral image distributions. This allows the generation of a vector of moment invariants that describes an image distribution but does not depend on the environmental conditions. These vectors maintain the invariant property after each image band is spatially filtered which allows the representation to capture spatial properties. We use the distribution invariants and the Fisher discriminant to reduce the size of the representation by selecting optimized spectral bands. We apply the methods developed in this work to the illumination-invariant classification and recognition of regions in airborne images. We also show that the distribution transformation model can be used for change detection in regions viewed under unknown conditions.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330H (2006) https://doi.org/10.1117/12.665994
We investigate the use of convex optimization to identify sparse linear filters in hyperspectral imagery. A linear filter is sparse if a large fraction of its coefficients are zero. A sparse linear filter can be advantageous because it only needs to access a subset of the available spectral channels, and it can be applied to high-dimensional data more cheaply than a standard linear detector. Finding good sparse filters is nontrivial because there is a combinatorially large number of discrete possibilities from which to choose the optimal subset of nonzero coefficients. But, by converting the optimality criterion into a convex loss function, and by employing an L1 penalty, one can obtain sparse solutions that are globally optimal. We investigate the performance of these sparse filters as a function of their sparsity, and compare the convex optimization approach with more traditional alternatives for feature selection. The methodology is applied both to the adaptive matched filter for weak signal detection, and to the Fisher linear discriminant for terrain categorization.
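The L1-penalized convex program the paragraph describes can be solved with a simple proximal-gradient (ISTA) loop. This sketch uses a generic least-squares loss as a stand-in for the matched-filter or discriminant criteria in the paper:

```python
import numpy as np

def sparse_filter_ista(A, b, lam, iters=500):
    # Minimize 0.5*||A w - b||^2 + lam*||w||_1 by iterative
    # soft-thresholding; lam controls how many coefficients are zero.
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        z = w - step * (A.T @ (A @ w - b))       # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # prox of L1
    return w
```

Because the penalized objective is convex, the sparsity pattern found this way is globally optimal for the chosen `lam`, avoiding the combinatorial search over channel subsets; the zero coefficients identify spectral channels the filter never needs to read.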
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330I (2006) https://doi.org/10.1117/12.666041
Hyperspectral imaging sensors capture digital images in hundreds of contiguous spectral bands, allowing remote material identification. Most algorithms for identifying materials characterize the materials according to spectral
information only, ignoring potentially valuable spatial relationships. This paper investigates the use of integrated spatial
and spectral information for characterizing materials. It examines the specific situation where the pixel
resolution is such that a set of pixels contains spatial patterns of mixed pixels. An autoregressive Gauss-Markov random field (GMRF)
is used to model the predictability of a target pixel from neighboring pixels. At the resolution of interest, the GMRF
model can successfully classify spatial patterns of aircraft and a residential area from the HYDICE airborne sensor
Desert Radiance field collection at Davis Monthan Air Force Base, Arizona.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330J (2006) https://doi.org/10.1117/12.665285
A hyperspectral imaging sensor images a scene using hundreds of contiguous spectral channels to uncover many substances that cannot be resolved by multispectral sensors with tens of discrete spectral channels. Many spectral measures used for target discrimination and identification in hyperspectral imagery have been derived directly from multispectral imagery rather than from a hyperspectral imagery viewpoint. This paper demonstrates that such spectral measures are generally not effective when applied to real hyperspectral data for discrimination and identification, because they do not take into account the very high sample spectral correlation (SSC) provided by hyperspectral sensors. To address this issue, two approaches, referred to as a priori sample spectral correlation (PR-SSC) and a posteriori SSC (PS-SSC), are developed to account for spectral variability within real data and achieve better target discrimination and identification. While the former can be used to derive a family of a priori hyperspectral measures via orthogonal subspace projection (OSP) to eliminate interfering effects caused by undesired signatures, the latter results in a family of a posteriori hyperspectral measures that include the sample covariance/correlation matrix as a posteriori information to increase discrimination and identification ability. Interestingly, some well-known measures such as Euclidean distance (ED) and the spectral angle mapper (SAM) can be shown to be special cases of the proposed PR-SSC and PS-SSC hyperspectral measures.
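ED and SAM, which the abstract treats as special cases of its proposed measures, are easy to state; a minimal sketch with an arbitrary test spectrum is:

```python
import numpy as np

def euclidean_distance(s1, s2):
    """Euclidean distance (ED) between two pixel spectra."""
    return float(np.linalg.norm(s1 - s2))

def spectral_angle(s1, s2):
    """Spectral angle mapper (SAM): the angle between two spectra,
    invariant to an overall scaling such as illumination strength."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

s = np.array([0.2, 0.4, 0.6, 0.8])     # arbitrary 4-band test spectrum
print(round(spectral_angle(s, 3.0 * s), 6))   # ≈ 0: SAM ignores scaling
print(euclidean_distance(s, 3.0 * s))         # ED does not ignore scaling
```

The contrast between the two outputs is exactly why neither measure alone captures the sample spectral correlation that the proposed PR-SSC and PS-SSC measures exploit.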
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330K (2006) https://doi.org/10.1117/12.664993
Target detection for remotely sensed imagery has been researched intensively for decades, and many detection algorithms are designed and claimed to outperform others. To make an objective comparison, two issues need to be solved: the first is to have standardized data sets with accurate ground truth, and the second is to use objective performance analysis techniques. The Receiver Operating Characteristic (ROC) curve is one of the most recognized tools for detection performance analysis. It is based on the binary hypothesis testing approach: it first constructs the two hypothesis distributions (null and alternative) and then draws the ROC curve by calculating all possible pairs of detection probability and false-alarm probability. A larger area under the curve indicates better detection performance of the algorithm. But one issue is rarely discussed: in ROC analysis, the alternative hypothesis means a target exists, yet we seldom discuss how much of the target is present. In this paper, we include target abundance as a third dimension to form a three-dimensional ROC. The proposed technique can be used to analyze the performance of detection algorithms or sensor instruments from different points of view. It can perform the detection probability versus false-alarm probability test of the original ROC, and it can also be used to estimate the minimum target abundance the algorithm can detect.
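Sweeping target abundance as a third ROC axis can be sketched by computing the area under the ROC curve at several abundances; the Gaussian detector scores and the abundance grid below are illustrative assumptions:

```python
import numpy as np

def roc_auc(scores_bg, scores_tgt):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    all_s = np.concatenate([scores_bg, scores_tgt])
    ranks = all_s.argsort().argsort() + 1          # ranks of all scores
    r_tgt = ranks[len(scores_bg):].sum()           # rank sum of target scores
    n0, n1 = len(scores_bg), len(scores_tgt)
    return (r_tgt - n1 * (n1 + 1) / 2) / (n0 * n1)

rng = np.random.default_rng(1)
bg = rng.normal(0.0, 1.0, 5000)                    # background detector scores
aucs = []
for a in (0.1, 0.3, 0.5, 1.0):                     # target abundance: 3rd ROC axis
    tgt = rng.normal(0.0, 1.0, 500) + 3.0 * a      # subpixel target at abundance a
    aucs.append(roc_auc(bg, tgt))
print([round(v, 3) for v in aucs])                 # AUC grows with abundance
```

Reading off the smallest abundance whose AUC (or detection probability at a fixed false-alarm rate) clears a requirement gives the "minimum detectable abundance" use of the 3-D ROC.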
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330L (2006) https://doi.org/10.1117/12.673576
The wavelengths of the spectral bands of a pixel from a hyperspectral imaging camera are not automatically known. A simple linear regression model can be fitted to the known abscissas and the corresponding known wavelengths to obtain a calibration equation that determines the wavelength for any given abscissa. In our experiment the pixels show significant trend and serial correlation, mainly in one of the spatial dimensions. An algorithm to remove the trend and serial correlation from the pixels using a local linear regression model is presented, along with numerical results showing the improvement in the accuracy of the calibration equation computed from the corrected pixels.
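The calibration step can be sketched with an ordinary least-squares line fit; the abscissa/wavelength pairs below are simulated around an assumed true line, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical calibration pairs: detector column index (abscissa) at which a
# known spectral line lands, simulated around a true line 400 + 1.25*x (nm).
abscissa = np.arange(0.0, 250.0, 25.0)
wavelength = 400.0 + 1.25 * abscissa + rng.normal(0.0, 0.3, abscissa.size)

# Least-squares calibration line: wavelength = b0 + b1 * abscissa
b1, b0 = np.polyfit(abscissa, wavelength, 1)
residuals = wavelength - (b0 + b1 * abscissa)
print(round(b1, 3), round(b0, 1))   # slope and intercept near 1.25 and 400
```

Trend and serial correlation in the pixels inflate these residuals; removing them first, as the abstract proposes, is what tightens the fitted calibration equation.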
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330M (2006) https://doi.org/10.1117/12.672989
The detection, determination of location, and identification of unknown and uncued energetic events within a large field of view represent a common operational requirement for many staring sensors. The traditional imaging approach involves forming an image of an extended scene and then rejecting background clutter. However, some important targets can be limited to a class of energetic, transient, point-like events, such as explosions, that embed key discriminants within their emitted, temporally varying spectra; for such events it is possible to create an alternative sensor architecture tuned specifically to these objects of interest. The resulting sensor operation, called pseudo imaging, includes: optical components designed to encode the scene information such that the spectral-temporal signature from the event and its location are easily derived; and signal processing intrinsic to the sensor to declare the presence of an event, locate the event, extract the event spectral-temporal signature, and match the signature to a library in order to identify the event.
This treatise defines pseudo imaging, including formal specifications and requirements. Two examples of pseudo imaging sensors are presented: a sensor based on a spinning prism, and a sensor based on an optical element called a Crossed Dispersion Prism. The sensors are described, including how the sensors fulfill the definition of pseudo imaging, and measured data is presented to demonstrate functionality.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330O (2006) https://doi.org/10.1117/12.666494
A novel, compact visible multispectral, polarimetric camera is under development. The prototype is capable of megapixel imaging with sixteen wavebands and three polarimetric images. The entire system occupies a volume of less than 125 mm × 100 mm × 75 mm. The system is based on commercial megapixel-class CMOS sensors and incorporates real-time processing of hyperspectral cube data using a proprietary processor system based on state-of-the-art FPGA technology.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330P (2006) https://doi.org/10.1117/12.666063
As an offshoot of hyperspectral imaging, which typically acquires tens to slightly more than 100 spectral bands, ultraspectral imaging, with typically more than 1000 bands, provides the ability to use molecular or atomic lines to identify surface or airborne contaminants. Surface Optics Corporation has developed a very high-speed Fourier Transform Infrared (FTIR) imaging system. This system operates from 2 μm to 12 μm, collecting 128 × 128 images at up to 10,000 frames per second. The high-speed infrared imager is able to synchronize to almost any FTIR that provides at least mirror direction and laser clock signals. FTIRs rarely produce a constant scan speed, due to the need to physically move a mirror or other optical device to introduce an optical path difference between two beams. The imager is able to track scan-speed jitter, as well as changes in the zero path difference (ZPD) position, and perform real-time averaging if desired. Total acquisition time depends on the return stroke speed of the FTIR, but 16 cm⁻¹ (1024-point) spectral imagery can be generated in less than 1/5 second, with 2 cm⁻¹ (8192-point) spectral imagery taking proportionately longer. The imager is currently configured with X-Y position stages to investigate the surface chemistry of varied objects. Details of the optical design, focal plane array, and electronics that allow this high-speed FTIR imager to function are presented, along with results of using the imager for several applications.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330Q (2006) https://doi.org/10.1117/12.665906
Hyperspectral imaging in the 2-5 μm band has held interest for applications in the detection and discrimination of objects of interest. Real-time instrumentation is particularly powerful as a tool for characterization and field measurement. A compact, real-time, refractive MWIR hyperspectral imaging instrument has been designed and tested. The system has been designed for cryogenic operation to improve the signal-to-noise ratio, reduce background noise, and enable real-time hyperspectral video processing. The system is a 2-5 μm, 32-band hyperspectral imager capable of collecting and processing complete hyperspectral image cubes at 15 cubes per second. Details of the system and of object discrimination using this system are presented.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330R (2006) https://doi.org/10.1117/12.665889
Corning has developed a number of manufacturing and test techniques to meet the challenging requirements of imaging hyperspectral optical systems. These processes have been developed for applications from the short-wave visible through long-wave IR wavelengths. Optical designs for these imaging systems are typically Offner or Dyson configurations, in which the critical optical components are powered gratings and slits. Precision alignment, system athermalization, and harsh environmental requirements for these systems drive system-level performance and production viability.
This paper presents the results of these techniques, including all-aluminum gratings and slits, innovative grating profiles, snap-together self-aligning mechanical designs, and visible-light test techniques for IR systems.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330S (2006) https://doi.org/10.1117/12.665593
Fourier transform imaging spectroscopy (FTIS) can be performed with a Fizeau imaging interferometer by recording a series of images with various optical path differences (OPDs) between subapertures of the optical system and postprocessing. The quality of the spectral data is affected by misregistration of the raw image measurements. A Fizeau FTIS system possesses unique degrees of freedom that can be used to facilitate image registration without further complication of the system design. We describe a registration technique based on the fact that certain spatial frequencies of the raw imagery are independent of the OPDs introduced between subapertures. Operational and post-processing tradeoffs associated with this technique are described, and the technique is demonstrated using computer-simulated data with image shift misregistrations under realistic noise conditions.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330T (2006) https://doi.org/10.1117/12.665742
Most traditional spectral sensors have spectrally adjacent bands with little overlap. This overlap is usually ignored in image processing because band-to-band correlation due to oversampling of the scene is almost always dominant. A newly proposed class of adaptive spectral sensors based on bias-tunable quantum-dot infrared photodetectors (QDIPs) is different in that it has significant band-to-band overlaps, whose influence on image-processing results cannot be ignored. To facilitate the analysis of such sensors, a generalized geometry-based model is provided here for spectral sensors with arbitrary spectral responses. It starts from the mathematical description of the interaction between the sensor and the scene radiation reaching it. In this model, the spectral responses of a sensor are used to define a sensor space, and the spectral sensing process is shown to represent a projection of the scene spectrum onto that sensor space. The projected spectrum, which can be calculated from the output photocurrents and the sensor's spectral responses, is the least-square-error reconstruction of the scene spectrum. With this data interpretation, we can remove the influence of band overlap on the data. The band overlap also introduces correlation between the noise of different bands; this correlation is also analyzed.
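The projection interpretation can be checked numerically: with response columns R and photocurrents given by the band responses dotted with the scene spectrum, the least-squares reconstruction leaves a residual orthogonal to every response column. The overlapping Gaussian response shapes below are illustrative assumptions, not measured QDIP responses:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl, n_bands = 100, 5
# Hypothetical overlapping spectral responses: broad Gaussians form the
# columns of R and define the "sensor space"; their overlap makes the
# columns non-orthogonal, as for bias-tunable QDIP bands.
wl = np.linspace(0.0, 1.0, n_wl)
centers = np.linspace(0.1, 0.9, n_bands)
R = np.exp(-((wl[:, None] - centers[None, :]) ** 2) / (2 * 0.12 ** 2))

s = rng.random(n_wl)                          # scene spectrum
currents = R.T @ s                            # band photocurrents
s_hat = R @ np.linalg.solve(R.T @ R, currents)  # least-squares reconstruction

# s_hat is the orthogonal projection of s onto the sensor space:
# the residual is orthogonal to every response column.
print(np.allclose(R.T @ (s - s_hat), 0.0))
```

Because the reconstruction uses the Gram matrix RᵀR rather than treating bands as independent, it is exactly the step that removes the influence of band overlap from the data.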
Special Event: The Use of Civil Remote Sensing in Improving Hurricane Forecasting and Assisting Emergency Responders
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330U (2006) https://doi.org/10.1117/12.673221
The assimilation of remotely sensed data from aircraft and satellites has contributed substantially to the current accuracy of operational hurricane forecasting. In the 1960s, satellite imagery revolutionized hurricane detection and forecasting. Since that time, quantitative remotely sensed data (e.g., atmospheric motion winds; passive infrared and microwave radiances or retrievals of temperature, moisture, surface wind, and rain rate; active microwave measurements of surface wind and rain rate) and significant advances in modeling and data assimilation have increased the accuracy of hurricane track forecasts very significantly. The development of advanced next-generation models in combination with new types of remotely sensed observations (e.g., space-based lidar winds) should yield significant further improvements in the timing and location of landfall and in the predicted intensification of hurricanes.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330W (2006) https://doi.org/10.1117/12.666106
Quantitative methods to assess or predict the quality of a spectral image continue to be the subject of a number of current research activities. An accepted methodology would be highly desirable for use in data collection tasking or data archive searching, in ways analogous to the current prediction of panchromatic image quality through the National Imagery Interpretability Rating Scale (NIIRS) using the General Image Quality Equation (GIQE). A number of approaches to estimating the quality of a spectral image have been published, but most capture only the performance of automated algorithms applied to the spectral data. One recently introduced metric, the General Spectral Utility Metric (GSUM), provides a framework to combine the performance from the spectral aspects with the spatial aspects. In particular, this framework allows the metric to capture the utility of a spectral image when the human analyst is included in the process. This is important since nearly all hyperspectral imagery analysis procedures include an analyst.
To investigate the relationships between candidate spectral metrics and task performance from volunteer human analysts in conjunction with the automated results, simulated images are generated and processed in a blind test. The performance achieved by the analysts is then compared to predictions made from various spectral quality metrics to determine how well the metrics function.
The task selected is one of finding a specific vehicle in a cluttered environment using a detection map produced from the hyperspectral image along with a panchromatic rendition of the image. Various combinations of spatial resolution, number of spectral bands, and signal-to-noise ratio are investigated as part of the effort.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330X (2006) https://doi.org/10.1117/12.665696
This study investigated appropriate methodologies for displaying hyperspectral imagery based on knowledge of human color vision, as applied to Hyperion and AVIRIS data. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) were used to reduce the data dimensionality in order to make the data more amenable to visualization in three-dimensional color space. In addition, these two methods were chosen because of their underlying relationships to the opponent-color model of human color perception. PCA- and ICA-based visualization strategies were then explored by mapping the first three PCs or ICs to several opponent color spaces, including CIELAB, HSV, YCrCb, and YUV. The gray-world assumption, which states that given an image with a sufficient amount of color variation the average color should be gray, was used to set the mapping origins. The rendered images are well color balanced and can offer a first-look capability or initial classification for a wide variety of spectral scenes.
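The dimensionality-reduction half of the pipeline can be sketched by mapping the first three principal components of a cube into a [0, 1] color image; this toy example uses random data and a simple per-channel stretch, and omits the opponent-color-space mapping and gray-world origin step:

```python
import numpy as np

def pca_false_color(cube):
    """Map the first three principal components of a hyperspectral cube
    (rows, cols, bands) to a [0, 1] three-channel image: the PCA step of
    a PCA-based visualization, before any opponent-color mapping."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)                       # center the spectra
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:3].T                        # first three PC scores per pixel
    lo, hi = pcs.min(axis=0), pcs.max(axis=0)
    rgb = (pcs - lo) / (hi - lo)              # stretch each channel to [0, 1]
    return rgb.reshape(r, c, 3)

cube = np.random.default_rng(0).random((8, 8, 40))   # toy 40-band image
rgb = pca_false_color(cube)
print(rgb.shape)
```

In the paper's scheme the three component images would instead be assigned to the axes of an opponent color space such as CIELAB or YCrCb, with the origin set by the gray-world assumption.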
Michael A. Porter, Richard C. Olsen, Richard M. Harkins, Angela M. Puetz
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330Y (2006) https://doi.org/10.1117/12.668728
The Lineate Imaging Near Ultraviolet Spectrometer (LINUS)1,2 has been used to remotely detect and measure sulfur dioxide (SO2). The sensor was calibrated in the lab, with curves of growth created for the 0.29-0.31 μm spectral range of the LINUS sensor. Field observations were made of a coal-burning plant in St. Johns, Arizona at a range of 537 m. The Salt River Coronado plant stacks were emitting on average about 100 ppm and 200 ppm from the left and right stacks, respectively. Analysis of the LINUS data matched those values within twenty percent. Possible uses for this technology include remote verification of industry emissions and detection of unreported SO2 sources.
C. J. Wong, H. S. Lim, M. Z. MatJafri, K. Abdullah
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62330Z (2006) https://doi.org/10.1117/12.665425
Modern digital technology allows image data to be transferred over the internet protocol, providing real-time observation so that more frequent air quality studies can be carried out at multiple locations simultaneously. The objective of this study is to evaluate the suitability of using an internet protocol (IP) camera to transfer image data, which were then analysed using a developed algorithm to determine air quality information. Concentrations of particulate matter smaller than 10 microns (PM10) were collected simultaneously with the image data acquisitions. The atmospheric reflectance components were subtracted from the corresponding recorded radiance values for the algorithm regression analysis. The proposed algorithm produced high correlation coefficient (R) and low root-mean-square error (RMSE) values. The efficiency of the present algorithm, in comparison to other forms of the algorithm, was also investigated; based on the values of the correlation coefficient and root-mean-square deviation, the proposed algorithm is considered superior. The accuracy of the IP camera data was also compared with that of a conventional digital camera, a Kodak DC290. This preliminary study gave promising results for air quality monitoring over the USM campus using internet protocol data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623310 (2006) https://doi.org/10.1117/12.660301
Firstly, the drawbacks of infrared image histogram equalization and its improved algorithms are analyzed. A novel technique, called adaptive histogram subsection modification, is then presented that not only enhances the contrast but also preserves the detail information of an infrared image. The properties of the infrared image histogram are used to determine the subsection position adaptively: the second-order differential coefficient of the gray-level probability density curve is calculated from the top down, and the first inflexion is chosen as the subsection point between high-probability-density and low-probability-density gray levels in the histogram. The histograms of the low- and high-probability-density sections are then mapped and modified separately. Finally, the subsection images are combined and an output infrared image is reconstructed. The contrast is enhanced while the original gray levels are mostly preserved as the dynamic range of the gray levels is extended. Meanwhile, suitable spacing is kept between gray levels to avoid large isolated grains, defined as patchiness, in the image. Several infrared images are used to demonstrate the performance of this method. Experimental results show that infrared image quality is greatly improved by this approach; furthermore, the proposed algorithm is simple and easy to implement.
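One way to realize the "first inflexion of the second-order difference" rule is sketched below; the histogram is synthetic, and using the first sign change of the second-order difference as the inflexion is an assumption about the paper's exact criterion:

```python
import numpy as np

def subsection_point(hist):
    """Split gray levels into high- and low-probability-density sections:
    sort the density curve from highest to lowest and return the index of
    the first inflexion (sign change of the second-order difference).
    A sketch of the adaptive subsection idea, not the paper's exact rule."""
    p = np.sort(hist / hist.sum())[::-1]   # density curve, scanned top-down
    d2 = np.diff(p, 2)                     # second-order difference
    sign_change = np.nonzero(np.diff(np.sign(d2)))[0]
    return int(sign_change[0]) + 1 if sign_change.size else len(p) // 2

# Synthetic histogram: a few dominant gray levels, then a long low tail.
hist = np.array([500, 450, 300, 60, 50, 40, 30, 20, 10, 5], float)
k = subsection_point(hist)
print(k)   # index separating the high-density levels from the low-density tail
```

The two sections on either side of this point would then be mapped and modified separately before the subimages are recombined.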
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623311 (2006) https://doi.org/10.1117/12.665699
Robert M. Haralick et al. described a technique for computing texture features based on gray-level spatial dependencies using a Gray Level Co-occurrence Matrix (GLCM). The traditional GLCM process quantizes a gray-scale image into a small number of discrete gray-level bins. The number and arrangement of spatially co-occurring gray levels in an image is then statistically analyzed. The output of the traditional GLCM process is a gray-scale image with values corresponding to the intensity of the statistical measure. A method to calculate spectral texture is modeled on Haralick's texture features. This Spectral Texture Method uses spectral-similarity spatial dependencies rather than gray-level spatial dependencies: a spectral image is quantized based on discrete spectral-angle ranges. Each pixel in the image is compared to an exemplar spectrum, and a quantized image is created in which pixel values correspond to a spectral similarity value. Statistics are calculated on spatially co-occurring spectral-similarity values. Comparisons between the Haralick texture features and the Spectral Texture Method results are made, and possible uses of spectral texture features are discussed.
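The two stages of the method, spectral-angle quantization against an exemplar followed by co-occurrence counting, can be sketched as follows; the bin count, angle range, and the single horizontal offset are illustrative choices, not the paper's settings:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle between two spectra (the spectral-similarity measure)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def quantize_by_angle(cube, exemplar, n_bins=8, max_angle=np.pi / 2):
    """Quantize each pixel by its spectral angle to an exemplar spectrum."""
    r, c, _ = cube.shape
    q = np.empty((r, c), dtype=int)
    for i in range(r):
        for j in range(c):
            a = spectral_angle(cube[i, j], exemplar)
            q[i, j] = min(int(a / max_angle * n_bins), n_bins - 1)
    return q

def cooccurrence(q, n_bins=8):
    """Co-occurrence counts of horizontally adjacent quantized values,
    analogous to Haralick's GLCM but over spectral-similarity bins."""
    m = np.zeros((n_bins, n_bins), dtype=int)
    for i in range(q.shape[0]):
        for j in range(q.shape[1] - 1):
            m[q[i, j], q[i, j + 1]] += 1
    return m

rng = np.random.default_rng(0)
cube = rng.random((6, 6, 20))                 # toy 20-band spectral image
q = quantize_by_angle(cube, exemplar=cube[0, 0])
m = cooccurrence(q)
print(m.sum())                                # 6 rows * 5 horizontal pairs
```

Haralick statistics (contrast, energy, entropy, and so on) would then be computed from the normalized matrix m exactly as in the gray-level case.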
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623312 (2006) https://doi.org/10.1117/12.667961
Change detection is the process of automatically identifying and analyzing regions that have undergone spatial or spectral changes in multitemporal images. Detecting and representing change provides valuable information about the transformations a given scene has undergone over time. Change detection in sequences of hyperspectral images is complicated by the fact that change can occur in the temporal and/or spectral domains. This work studies the use of Temporal Principal Component Analysis (TPCA) for change detection in multi/hyperspectral images. Two additional methods, image differencing and conventional principal component analysis, were implemented for comparison with TPCA. Experimental results using phantom hyperspectral imagery taken with a Surface Optics SOC-700 hyperspectral camera are presented. The algorithms were implemented in Matlab, and their performance is compared in terms of false alarms, missed changes, and overall error. Results show that TPCA performed best, obtaining the smallest percentages of error, missed changes, and false alarms with either a global or a local threshold; TPCA with a local threshold gave the best overall performance.
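Conventional PCA change detection, one of the comparison methods, can be sketched on a pair of co-registered gray-level images: unchanged pixels fall near the no-change line, so the minor principal component isolates the changed region. The synthetic change and the 3-sigma threshold below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
im1 = rng.random((32, 32))
im1 = (im1 + np.roll(im1, 1, 0) + np.roll(im1, 1, 1)) / 3.0   # smooth scene
im2 = im1.copy()
im2[10:15, 10:15] += 0.8            # hypothetical changed region at time 2

# Stack the two dates as a 2-feature vector per pixel and run PCA.
X = np.stack([im1.ravel(), im2.ravel()], axis=1)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc2 = (Xc @ Vt[1]).reshape(32, 32)  # minor component: departure from no-change

change_mask = np.abs(pc2) > 3 * np.abs(pc2).std()   # simple global threshold
print(change_mask[12, 12], change_mask[0, 0])
```

TPCA generalizes this by treating the full temporal-spectral stack jointly, and the paper's local-threshold variant replaces the single global cutoff with per-neighborhood thresholds.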
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623313 (2006) https://doi.org/10.1117/12.665280
Virtual dimensionality (VD) is a new concept which was developed to estimate the number of spectrally distinct
signatures present in hyperspectral image data. Unlike intrinsic dimensionality which is mainly of theoretical interest, the
VD is a very useful and practical notion. It is derived from the Neyman-Pearson detection theory. Unfortunately, its
utility in hyperspectral data exploitation has yet to be explored. This paper presents several applications to which the VD
is applied successfully. Since the VD is derived from a binary hypothesis testing problem for each spectral band, it can
be used for band selection. When the test fails for a band, it indicates that there is a signal source in that particular band
which must be selected. By the same token, it can be further used for dimensionality reduction. For principal components analysis (PCA) or independent component analysis (ICA), the VD helps to determine how many principal components or independent components are required for exploitation tasks such as detection, classification, compression, etc. For unsupervised target detection and classification, the VD can be used to determine how many unwanted signal sources are present in the image data so that they can be eliminated prior to detection and classification. For endmember extraction, the VD provides a good estimate of the number of endmembers that need to be extracted. All these applications are justified by experiments.
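A common eigenvalue-based instance of this Neyman-Pearson construction is the Harsanyi-Farrand-Chang (HFC) estimator: a signal source is declared wherever a correlation eigenvalue exceeds the corresponding covariance eigenvalue. The sketch below substitutes a fixed threshold `tau` for the false-alarm-rate test, so it is only a rough illustration:

```python
import numpy as np

def vd_hfc(pixels, tau=1e-4):
    """Rough sketch of the HFC virtual-dimensionality estimator.

    pixels : (N, L) matrix of N pixel spectra with L bands.
    Since R = K + mean*mean^T, eigenvalues of the correlation matrix R
    exceed those of the covariance matrix K only along signal modes;
    `tau` stands in for the Neyman-Pearson threshold.
    """
    n = pixels.shape[0]
    R = pixels.T @ pixels / n                    # sample correlation matrix
    K = np.cov(pixels, rowvar=False, bias=True)  # sample covariance matrix
    lam_R = np.sort(np.linalg.eigvalsh(R))[::-1]
    lam_K = np.sort(np.linalg.eigvalsh(K))[::-1]
    return int(np.sum(lam_R - lam_K > tau))

# two distinct signatures plus weak noise -> VD estimate of 2
rng = np.random.default_rng(1)
s1, s2 = np.array([1.0, 0.2, 0.1]), np.array([0.1, 0.9, 0.8])
ab = rng.uniform(0, 1, size=(500, 2))
X = ab @ np.vstack([s1, s2]) + rng.normal(0, 0.001, size=(500, 3))
print(vd_hfc(X))                                 # 2
```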
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623314 (2006) https://doi.org/10.1117/12.665283
Independent component analysis (ICA) has shown success in many applications. This paper investigates a new
application of the ICA in endmember extraction and abundance quantification for hyperspectral imagery. An
endmember is generally referred to as an idealized pure signature for a class whose presence is considered to be rare. When it does occur, it may not appear in a large population. In this case, the commonly used principal components analysis (PCA) may not be effective, since endmembers usually contribute very little to the data variance. In order to substantiate our findings, an ICA-based approach, called the ICA-based abundance quantification algorithm (ICA-AQA), is developed. Three novelties result from the proposed ICA-AQA. First, unlike the commonly used least squares abundance-constrained linear spectral mixture analysis (ACLSMA), which is a second-order-statistics-based method, the ICA-AQA is a high-order-statistics-based technique. Second, because of its use of statistical independence, it is generally thought that ICA cannot be implemented as a constrained method; the ICA-AQA shows otherwise. Third, in order for the ACLSMA to perform abundance quantification, it requires one algorithm to find image endmembers first, followed by an abundance-constrained algorithm for quantification. As opposed to such a two-stage process, the ICA-AQA can accomplish endmember extraction and abundance quantification simultaneously in a single one-shot operation. Experimental results demonstrate that the ICA-AQA performs at least comparably to abundance-constrained methods.
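For contrast, the abundance-quantification step of the constrained least-squares family that the paper compares against has a simple closed form. The sketch below enforces only the sum-to-one constraint (nonnegativity is omitted), so it is a generic illustration rather than the paper's exact ACLSMA:

```python
import numpy as np

def scls_abundances(M, x):
    """Sum-to-one constrained least-squares abundance estimate.

    M : (L, p) endmember matrix, x : (L,) pixel spectrum.
    The Lagrange-multiplier correction shifts the unconstrained
    least-squares solution onto the sum(a) == 1 hyperplane.
    """
    G = np.linalg.inv(M.T @ M)
    a_ls = G @ M.T @ x                 # unconstrained least squares
    ones = np.ones(M.shape[1])
    lam = (ones @ a_ls - 1.0) / (ones @ G @ ones)
    return a_ls - lam * (G @ ones)     # enforce sum(a) == 1

M = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.1, 0.7]])
x = M @ np.array([0.3, 0.7])           # pixel that is a 30/70 mixture
a = scls_abundances(M, x)
print(np.round(a, 6))                  # [0.3 0.7]
```

Note this two-stage pattern (endmembers known first, abundances solved second) is exactly what the one-shot ICA-AQA is proposed to avoid.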
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623315 (2006) https://doi.org/10.1117/12.667964
For two decades, techniques based on Partial Differential Equations (PDEs) have been used in monochrome and color image processing for image segmentation, restoration, smoothing, and multiscale image representation. Among these techniques, parabolic PDEs have received considerable attention for image smoothing and restoration purposes. Image smoothing by parabolic PDEs can be seen as a continuous transformation of the original image into a space of progressively smoother images, identified by the "scale" or level of image smoothing. The semantically meaningful objects in an image can be of any size; that is, they can be located at different image scales in the continuum scale space generated by the PDE. Adequate selection of an image scale smoothes out undesirable variability that, at lower scales, constitutes a source of error in segmentation and classification algorithms. This paper proposes a framework for generating a scale-space representation for a hyperspectral image using PDE methods. We illustrate some of our ideas through hyperspectral image smoothing using nonlinear diffusion. The extension of scalar nonlinear diffusion to hyperspectral imagery and a discussion of how the spectral and spatial domains are transformed in the scale-space representation are presented.
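A minimal scalar example of such a parabolic PDE smoother is Perona-Malik nonlinear diffusion, shown here applied to a single band; the parameter values are arbitrary illustrative choices, and the paper's contribution is precisely the extension beyond this band-by-band view:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Scalar Perona-Malik nonlinear diffusion: smooths flat regions
    while the edge-stopping conductance suppresses diffusion across
    strong gradients. Larger n_iter corresponds to a coarser scale."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (replicated border)
        dn = np.vstack([u[:1], u[:-1]]) - u
        ds = np.vstack([u[1:], u[-1:]]) - u
        de = np.hstack([u[:, 1:], u[:, -1:]]) - u
        dw = np.hstack([u[:, :1], u[:, :-1]]) - u
        # conductance g(|grad|) = exp(-(|grad|/kappa)^2)
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(2)
band = np.zeros((16, 16)); band[:, 8:] = 1.0         # step edge
noisy = band + rng.normal(0, 0.05, band.shape)
smooth = perona_malik(noisy)
# noise is reduced in the flat region while the edge survives
print(smooth[:, :6].std() < noisy[:, :6].std())      # True
```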
Antonio Plaza, Chein-I Chang, Javier Plaza, David Valencia
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623316 (2006) https://doi.org/10.1117/12.665464
The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has quickly introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. In several applications, however, it is highly desirable that the desired information be calculated quickly enough for practical use. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques. The techniques span four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623317 (2006) https://doi.org/10.1117/12.669861
An algorithm to determine the abscissas of the partial pixels that correspond to the peaks of an absorbance spectrum from a hyperspectral imaging camera will be described. The algorithm is based on local linear regression models in variable-order and variable-sample-size mode. The sample size is determined by using the estimated critical points and inflection points. The order is determined by statistically comparing the sum-of-squares error of the regression models for different orders. Numerical results on spectra from a hyperspectral cube will be presented.
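A stripped-down version of the idea, fixing the order at 2 and the window size instead of selecting them adaptively as the paper does, might look like the following (the parabolic test signal is a toy stand-in for an absorbance peak):

```python
import numpy as np

def subpixel_peak(x, y, half_window=3):
    """Locate a peak at sub-sample precision by fitting a local
    quadratic regression around the discrete maximum, then returning
    the abscissa of the fitted vertex."""
    i = int(np.argmax(y))
    lo, hi = max(0, i - half_window), min(len(y), i + half_window + 1)
    c = np.polyfit(x[lo:hi], y[lo:hi], 2)   # y ~ c[0]*x^2 + c[1]*x + c[2]
    return -c[1] / (2.0 * c[0])             # vertex of the fitted parabola

x = np.linspace(0.0, 10.0, 101)             # sampling grid, step 0.1
y = 5.0 - (x - 4.63) ** 2                   # toy peak with apex at x = 4.63
print(round(subpixel_peak(x, y), 3))        # 4.63
```

The nearest grid sample sits at 4.6, so the regression recovers a peak position between samples, which is the "partial pixel" abscissa the abstract refers to.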
Atmospheric Instrumentation, Measurements, and Forecasting
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623318 (2006) https://doi.org/10.1117/12.666222
More than three years of data from the Atmospheric InfraRed Sounder (AIRS) have given many new insights into infrared hyperspectral sounding from space. Among other things, they have shown that: 1) Absolute accuracy at the 200 mK level and stability at the better-than-20 mK/year level can be deduced for the AIRS data by comparing the observed brightness temperatures with those predicted by NOAA/NCEP's RTGSST for surface channels and those predicted from the ECMWF temperature and moisture profiles for all channels except those sensitive to stratospheric temperatures or water vapor. While the 2616 cm-1 window channel used with AIRS for validating the calibration relative to the RTGSST is superior, the 1231 cm-1 window channel can be used for IASI and CrIS with slightly degraded accuracy. 2) Residual cloud contamination of about 240 mK for tightly cloud-filtered data establishes an effective noise floor for channels with weighting functions below the cloud tops. With the AIRS 13.5 km footprints and a mean NeDT of 0.2 K, the noise floor is reached by filtering out all but 2% of the spectra. 3) The absolute value of the difference between adjacent clear ocean footprints at night provides an excellent estimate of the NeDT in all spectral regions not affected by water vapor. Water vapor spatial scene inhomogeneity on a 13.5 km scale, even in clear footprints, acts as a source of noise up to a factor of six larger than the NeDT in water-sensitive channels.
The methodology established by AIRS for system performance evaluation and trend analysis, including the use of the RTGSST and the ECMWF profiles, lays the foundation and establishes the benchmark for validating the performance of future hyperspectral sounders. AIRS was launched into a polar, 705 km altitude orbit on the EOS Aqua spacecraft on May 4, 2002. It covers the 3.7 to 15.4 micron region of the thermal infrared spectrum with a spectral resolution of 1200. Since the start of routine data gathering in September 2002, AIRS has returned 2.9 million spectra of the upwelling radiance each day.
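The adjacent-footprint estimate in point 3) rests on a standard property of Gaussian noise: the difference of two independent measurements of a uniform scene is distributed N(0, 2σ²), so σ can be recovered from the mean absolute difference. A synthetic check with an assumed NeDT of 0.2 K:

```python
import numpy as np

# For independent Gaussian channel noise of standard deviation sigma,
# E|x1 - x2| = 2*sigma/sqrt(pi), hence sigma = mean(|diff|) * sqrt(pi)/2.
rng = np.random.default_rng(3)
sigma_true = 0.2                                   # assumed NeDT in kelvin
scene = 285.0                                      # uniform ocean BT
bt = scene + rng.normal(0.0, sigma_true, 200_000)  # simulated footprint BTs
d = np.abs(np.diff(bt))                            # adjacent differences
nedt_est = d.mean() * np.sqrt(np.pi) / 2.0
print(round(nedt_est, 2))                          # 0.2
```

Real footprints add scene variability on top of this, which is why the abstract restricts the estimate to spectral regions not affected by water vapor.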
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623319 (2006) https://doi.org/10.1117/12.665018
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice-daily
global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud
related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed from it only in minor details, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used
by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward
the AIRS Version 5.0 algorithm, which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud-cleared radiances used to derive the geophysical parameters from the AIRS/AMSU
observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear
column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described as well as results
comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331A (2006) https://doi.org/10.1117/12.665061
AIRS contains 2376 high spectral resolution channels between 650 cm-1 and 2665 cm-1, including channels in both the
15 micron (near 667 cm-1) and 4.2 micron (near 2400 cm-1) CO2 sounding bands. Use of temperature sounding channels
in the 15 micron CO2 band has considerable heritage in infra-red remote sensing. Channels in the 4.2 micron CO2 band
have potential advantages for temperature sounding purposes because they are essentially insensitive to absorption by
water vapor and ozone, and also have considerably sharper lower tropospheric temperature sounding weighting
functions than do the 15 micron temperature sounding channels. Potential drawbacks with regard to use of 4.2 micron
channels arise from effects on the observed radiances of solar radiation reflected by the surface and clouds, as well as
effects of non-local thermodynamic equilibrium on shortwave observations during the day. These are of no practical consequence, however, when properly accounted for. We show results of experiments performed utilizing different spectral regions of AIRS, conducted with the AIRS Science Team candidate Version 5 algorithm. Experiments were performed using temperature sounding channels within the entire AIRS spectral coverage, within only the spectral region 650 cm-1 to 1614 cm-1, and within only the spectral region 1000 cm-1 to 2665 cm-1. These show the relative importance, with regard to sounding accuracy, of utilizing only the 15 micron temperature sounding channels, only the 4.2 micron temperature sounding channels, and both. The spectral region 2380 cm-1 to 2400 cm-1 is shown to contribute significantly to improving sounding accuracy in the lower troposphere, both day and night.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331B (2006) https://doi.org/10.1117/12.665724
The hyperspectral-resolution measurements from the NASA Atmospheric Infrared Sounder (AIRS) are advancing climate research by mapping atmospheric temperature, moisture, and trace gases on a global basis with unprecedented accuracy. Using a sophisticated retrieval scheme, AIRS is capable of diagnosing tropospheric temperature with accuracies of better than 1 K over 1 km-thick layers, and relative humidity to within 10-20% over 2 km-thick layers, under both clear and cloudy conditions. A unique aspect of the retrieval procedure is the specification of a vertically varying error estimate for the temperature and moisture profile of each retrieval. The error specification allows for the more selective use of the profiles in subsequent processing. In this paper, we describe a procedure to assimilate AIRS data into the Weather Research and Forecasting (WRF) model to improve short-term weather forecasts. The ARPS Data Analysis System (ADAS), developed by the University of Oklahoma, is configured to optimally blend AIRS data with model background fields based on the AIRS error profiles. WRF short-term forecasts with selected AIRS data show improvement over the control forecast. The use of the AIRS error profiles maximizes the impact of high-quality AIRS data from portions of the profile in the assimilation/forecast process, without degradation from lower-quality data in the other portions of the profile.
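The error-weighted blending idea can be illustrated with a one-dimensional toy. ADAS performs a full spatial objective analysis; this sketch keeps only the inverse-variance weighting, and all numbers are invented for illustration:

```python
import numpy as np

def blend_profile(background, obs, bg_var, obs_var):
    """At each level, take the inverse-variance weighted mean of the
    model background and the retrieval, so levels where the retrieval's
    error estimate is large contribute little to the analysis."""
    w = (1.0 / obs_var) / (1.0 / obs_var + 1.0 / bg_var)
    return background + w * (obs - background)

levels_bg = np.array([288.0, 281.0, 274.0])      # background T (K)
levels_obs = np.array([287.0, 280.0, 270.0])     # retrieved T (K)
bg_var = np.array([1.0, 1.0, 1.0])               # background error variance
obs_var = np.array([0.25, 1.0, 100.0])           # retrieval error profile
analysis = blend_profile(levels_bg, levels_obs, bg_var, obs_var)
print(analysis)
```

The bottom level, where the observation error is large, stays within 0.04 K of the background, while the accurately observed top level moves most of the way toward the retrieval.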
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331C (2006) https://doi.org/10.1117/12.666572
In this paper, samples of AIRS data in the 1215 to 1615 cm-1 spectral region are analyzed to better understand the effects of water vapor in the mid to upper tropospheric region. Two days representing mid-latitude (20°-40° N) summer (warm and moist) and winter (cold and dry) maritime conditions are selected with cloud-free and 100% cloudy FOVs. The data, both in trend and differences, are well explained by the respective changes in atmospheric temperature and water vapor. These data are then compared with model simulation using MODTRAN. The results also compare favorably. Model simulation further illustrates the value of high spectral resolution for monitoring change in water vapor particularly in the upper troposphere. With the future GOES-R and NPOESS hyperspectral sensors expected to provide much improved atmospheric profile information, better monitoring of atmospheric water vapor will lead to improvements both in weather and climate applications.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331D (2006) https://doi.org/10.1117/12.665046
The global energy balance of the Earth-atmosphere system may change due to natural and man-made climate variations. For example, changes in the outgoing longwave radiation (OLR) can be regarded as a crucial indicator of climate variations. Clouds play an important role, still insufficiently assessed, in the global energy balance on all spatial and temporal scales, and satellites provide an ideal platform to measure cloud and large-scale atmospheric variables simultaneously. The TOVS series of satellites were the first to provide this type of information, starting in 1979. OLR [Mehta and Susskind1], cloud cover and cloud top pressure [Susskind et al.2] are among the key climatic parameters computed by the TOVS Pathfinder Path-A algorithm, using mainly the retrieved temperature and moisture profiles. AIRS, regarded as the 'new and improved TOVS', has a much higher spectral resolution and a greater S/N ratio, retrieving climatic parameters with higher accuracy.
First, we present encouraging agreement between the seasonal and interannual variabilities of MODIS and AIRS cloud top pressure (Ctp) and 'effective' cloud fraction (Aeff, the product of the infrared emissivity at 11 μm and the physical cloud cover, Ac) for selected months. Next, we present validation efforts and preliminary trend analyses of TOVS-retrieved Ctp and Aeff. For example, decadal global trends of the TOVS Path-A and ISCCP-D2 cloud top pressure and Aeff/Ac values are similar. Furthermore, the TOVS Path-A and ISCCP-AVHRR [available since 1983] cloud fractions correlate even more strongly, including regional trends.
We also present TOVS and AIRS OLR validation results and, for the longer-term TOVS Pathfinder Path-A dataset, trend analyses. OLR interannual spatial variabilities from the available state-of-the-art CERES measurements and from both the AIRS [Susskind et al.3,4] and TOVS OLR computations are in remarkably good agreement. Global monthly mean CERES and TOVS OLR time series show very good agreement in absolute values as well. Finally, we will assess correlations among long-term trends of selected parameters derived simultaneously from the TOVS Pathfinder Path-A dataset.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331E (2006) https://doi.org/10.1117/12.664712
A nonlinear stochastic method for the retrieval of atmospheric temperature and moisture profiles has been developed
and evaluated with sounding data from the Atmospheric InfraRed Sounder (AIRS) and the Advanced
Microwave Sounding Unit (AMSU), and is presently being adapted for use with the NPOESS Cross-track Infrared
Microwave Sounding Suite (CrIMSS) consisting of the hyperspectral Cross-track Infrared Sounder (CrIS) and the
Advanced Technology Microwave Sounder (ATMS). The algorithm is implemented in three stages, motivating
the name, SCENE (Stochastic Cloud clearing,1 followed by Eigenvector radiance compression and denoising,
followed by Neural network Estimation). First, the infrared radiance perturbations due to clouds are estimated
and corrected by combined processing of the infrared and microwave data. Second, a Projected Principal Components
(PPC) transform2 is used to reduce the dimensionality of and optimally extract geophysical profile
information from the cloud-cleared infrared radiance data. Third, an artificial feedforward neural network is
used to estimate the desired geophysical parameters from the projected principal components. This paper has
two major components. First, details of the SCENE algorithm are discussed, including both the architectural
implementation and parameter selection and optimization. Second, the performance of the SCENE algorithm is
compared with that of the AIRS Level 2 algorithm (version 4.0.9) 3 currently being used for the Aqua mission.
The stochastic cloud-clearing algorithm estimates infrared radiances that would be observed in the absence
of clouds. This algorithm examines 3×3 sets of nine AIRS fields of view, selects the clearest ones, and then
in a series of simple linear and non-linear operations on both the infrared and microwave channels estimates a
single cloud-cleared infrared spectrum for the 3×3 set. The algorithm is both trained and tested using global
numerical weather analyses within 60 degrees of the equator. The analyses were generated by the European Centre for Medium-Range Weather Forecasts (ECMWF) and were converted to radiances using the SARTA v1.04 radiative transfer package.
The PPC compression technique was used to reduce the infrared radiance dimensionality by a factor of 100,
while retaining over 99.99% of the radiance variance that is correlated to the geophysical profiles. A feedforward
neural network (NN) with a single hidden layer of approximately 3000 degrees of freedom was then used to
estimate the atmospheric moisture and temperature profiles at approximately 60 levels from the surface to 20
km.
The performance of the SCENE algorithm was evaluated using global, ascending EOS-Aqua orbits colocated
with ECMWF forecasts (generated every three hours on a 0.5-degree lat/lon grid) for a variety of days throughout
2002 and 2003. Over 300,000 fields of regard (3×3 arrays of footprints) over ocean were used in the study. The
RMS temperature and moisture profile retrieval errors for the SCENE algorithm were compared to those of
the AIRS Level 2 algorithm, and the performance of the SCENE algorithm exceeded that of the AIRS Level 2
algorithm throughout most of the troposphere. The SCENE algorithm requires significantly less computation than traditional variational retrieval methods while achieving comparable performance, and it is thus particularly suitable for quick-look retrieval generation for post-launch CrIMSS performance validation.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331F (2006) https://doi.org/10.1117/12.665077
The MODTRAN5 radiation transport (RT) model is a major advancement over earlier versions of the MODTRAN atmospheric transmittance and radiance model. New model features include (1) finer spectral resolution via the Spectrally Enhanced Resolution MODTRAN (SERTRAN) molecular band model, (2) a fully coupled treatment of auxiliary molecular species, and (3) a rapid, high fidelity multiple scattering (MS) option. The finer spectral resolution improves model accuracy especially in the mid- and long-wave infrared atmospheric windows; the auxiliary species option permits the addition of any or all of the suite of HITRAN molecular line species, along with default and user-defined profile specification; and the MS option makes feasible the calculation of Vis-NIR databases that include high-fidelity scattered radiances. Validations of the new band model algorithms against line-by-line (LBL) codes have proven successful.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331G (2006) https://doi.org/10.1117/12.666082
The Optimal Spectral Sampling (OSS) method models band-averaged radiances as weighted sums of monochromatic radiances. The method is fast and accurate, and has the advantage over other existing techniques that it is directly applicable to scattering atmospheres. Other advantages conferred by the method include flexible handling of trace species, the ability to select variable species at run time without having to retrain the model, and the possibility of large speed gains by specializing the model for a particular application. The OSS method is used in the CrIS and CMIS retrieval algorithms, and it is currently being implemented in the Joint Center for Satellite Data Assimilation (JCSDA) Community Radiative Transfer Model (CRTM). A version of OSS is currently under development for direct inclusion within MODTRAN™ as an alternative to the current band models. This paper discusses the OSS interface to MODTRAN™, presents model results, and identifies new developments applicable to narrowband and broadband radiative transfer modeling across the spectrum, as well as the training of OSS for scattering atmospheres.
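The core OSS idea, representing a band average by a few weighted monochromatic radiances with weights fitted over a training set, can be shown on synthetic data. The node positions and the synthetic "atmospheres" below are arbitrary assumptions, and real OSS training also searches for the best node set:

```python
import numpy as np

# Synthetic monochromatic radiances for 50 training "atmospheres":
# each atmosphere is a one-parameter perturbation of a base spectrum.
rng = np.random.default_rng(5)
n_mono, n_train = 200, 50
base = np.sin(np.linspace(0, 3, n_mono))
R = base + 0.1 * rng.normal(size=(n_train, 1)) * np.linspace(1, 2, n_mono)
band_avg = R.mean(axis=1)                      # "exact" band average

# Fit weights so a few monochromatic points reproduce the band average.
nodes = np.array([10, 60, 120, 190])           # assumed node locations
w, *_ = np.linalg.lstsq(R[:, nodes], band_avg, rcond=None)
approx = R[:, nodes] @ w
print(np.max(np.abs(approx - band_avg)) < 1e-6)   # True
```

Once trained, evaluating the band radiance costs only four monochromatic radiative-transfer calls instead of two hundred, which is where the speed gains come from.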
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331H (2006) https://doi.org/10.1117/12.668216
We present an algorithm to estimate the orientation of a ground material corresponding to a pixel in a hyperspectral
image acquired by an airborne sensor under unknown atmospheric conditions. A physics-based image
formation model is used in which the spectral reflectance of the ground material, orientation of the material surface,
and the atmospheric and illumination conditions determine the sensor radiance of a pixel. The algorithm uses a low-dimensional coupled subspace model for the solar radiance, sky radiance, and path-scattered radiance; the coupled model captures the common dependence of these spectra on the environmental conditions and viewing geometry. The image formation model is parameterized by two orientation parameters that determine the surface orientation. A constrained nonlinear optimization method is used to estimate the orientation and the coupled-subspace model parameters. We have
tested the utility of our algorithm using a large set of 0.42-1.74 micron sensor radiance spectra simulated for
varying surface orientations of different materials.
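The two-orientation-parameter idea can be illustrated with a crude Lambertian toy model solved by grid search rather than the paper's coupled-subspace constrained optimization; every model term and number below is an assumption made for the sake of the sketch:

```python
import numpy as np

def estimate_orientation(radiance, rho, E_sun, E_sky, sun_dir):
    """Grid-search the surface tilt and azimuth that best explain an
    observed pixel radiance under a toy Lambertian model
    L = rho * (E_sun * max(cos_i, 0) + E_sky * F) / pi,
    where cos_i is the solar incidence cosine on the tilted facet and
    F = (1 + cos(tilt)) / 2 is the sky-view fraction."""
    best, best_err = (0.0, 0.0), np.inf
    for t in np.radians(np.arange(0, 91, 1)):
        for a in np.radians(np.arange(0, 360, 2)):
            n = np.array([np.sin(t) * np.cos(a),
                          np.sin(t) * np.sin(a), np.cos(t)])
            cos_i = max(float(n @ sun_dir), 0.0)
            model = rho * (E_sun * cos_i + E_sky * (1.0 + np.cos(t)) / 2.0) / np.pi
            err = float(np.sum((model - radiance) ** 2))
            if err < best_err:
                best, best_err = (t, a), err
    return np.degrees(best[0]), np.degrees(best[1])

# simulate a facet tilted 30 deg toward azimuth 90 deg, then recover it
rho = np.array([0.2, 0.4, 0.6])                      # 3-band reflectance
E_sun = np.array([800.0, 700.0, 600.0])              # assumed irradiances
E_sky = np.array([120.0, 80.0, 40.0])
sun_dir = np.array([0.0, np.sin(np.radians(40.0)), np.cos(np.radians(40.0))])
t0, a0 = np.radians(30.0), np.radians(90.0)
n0 = np.array([np.sin(t0) * np.cos(a0), np.sin(t0) * np.sin(a0), np.cos(t0)])
L = rho * (E_sun * max(float(n0 @ sun_dir), 0.0)
           + E_sky * (1.0 + np.cos(t0)) / 2.0) / np.pi
tilt, azim = estimate_orientation(L, rho, E_sun, E_sky, sun_dir)
print(round(tilt), round(azim))                      # 30 90
```

The inversion works here because the sun and sky terms have different spectral shapes, which is the same leverage the coupled subspace model exploits with far richer atmospheric spectra.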
Spectral Phenomenology, Measurements, and Experiments
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331I (2006) https://doi.org/10.1117/12.665521
As hyperspectral sensors become more available, an understanding of the impact of "natural change" on exploitation of that imagery is required. During the summer of 2005, the RIT Digital Imaging and Remote Sensing Laboratory, in conjunction with the Laboratory for Imaging Algorithms and Systems, undertook a collection campaign of a common target scene with a Vis/NIR hyperspectral sensor. The Modular Imaging Spectrometer Instrument (MISI) has 70 channels from 0.4 μm to 1.0 μm and was flown over the RIT campus along a common flightline on six different dates between May and September. Flights were spaced by as little as 3 days and by as much as a month. Twice, multiple flightlines were collected on a single day, separated by minutes or hours. Several experiments were run during individual flights, but the goal here is to describe and understand the temporal aspects of the data. Results from classifying each image are presented to show how local weather history, slightly different collection geometries, and real scene change affect the results. Similarly, common regions of interest in the imagery are defined, and the statistical variations in those regions are compared across the season. Additionally, signature prediction forward in time using the method of Covariance Equalization is examined for its applicability to this dataset.
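Covariance Equalization predicts a time-1 target signature forward using the two scenes' second-order statistics. A minimal sketch of one common symmetric-root form (the square-root factor in CE is not unique, and the validation scene change below is an invented gain/offset toy that CE reproduces exactly):

```python
import numpy as np

def matrix_sqrt(C):
    """Symmetric square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.sqrt(w)) @ V.T

def covariance_equalization(s1, X1, X2):
    """Predict how signature s1 from scene X1 should appear in scene X2:
    s2 = m2 + C2^(1/2) C1^(-1/2) (s1 - m1), with scene means m and
    covariances C estimated from the two images."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    T = matrix_sqrt(np.cov(X2, rowvar=False)) @ np.linalg.inv(
        matrix_sqrt(np.cov(X1, rowvar=False)))
    return m2 + T @ (s1 - m1)

# toy check: a scene-wide gain/offset change between the two dates
rng = np.random.default_rng(6)
X1 = rng.normal(size=(2000, 3)) @ np.diag([1.0, 0.5, 0.2]) + 5.0
gain, offset = 0.8, np.array([0.3, -0.2, 0.1])
X2 = gain * X1 + offset                       # "time 2" scene
s1 = np.array([7.0, 5.5, 5.1])                # target spectrum at time 1
s2_pred = covariance_equalization(s1, X1, X2)
print(np.allclose(s2_pred, gain * s1 + offset, atol=1e-6))   # True
```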
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331J (2006) https://doi.org/10.1117/12.665821
Techniques to determine the proportions of constituent materials within a single pixel spectrum are well documented in the reflective (0.4-2.5 μm) domain. The same capability is also desirable for the thermal (7-14 μm) domain, but is complicated by the thermal contributions to the measured spectral radiance. Atmospheric compensation schemes for the thermal domain have been described, along with methods for estimating the spectral emissivity from a spectral radiance measurement, and hence the next stage to be tackled is the unmixing of thermal spectral signatures. To pursue this goal, it is necessary to collect data of well-calibrated targets which will expose the limits of the available techniques and enable more robust methods to be designed. This paper describes the design of a set of ground targets for an airborne hyperspectral imager, which will test the effectiveness of available methods. The set of targets includes panels to explore a number of difficult scenarios such as isothermal (different materials at identical temperature), isochromal (identical materials, but at differing temperatures), thermal adjacency, and thermal point sources. Practical fabrication issues for heated targets and selection of appropriate materials are described. Mathematical modelling of the experiments has enabled prediction of at-sensor measured radiances, which are used to assess the design parameters. Finally, a number of useful lessons learned during the fielding of these actual targets are presented to assist those planning future trials of thermal hyperspectral sensors.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331K (2006) https://doi.org/10.1117/12.665527
Tree canopy closure is often a desired metric in ecological applications of spectral remote sensing. There are numerous models and field protocols for estimating this variable, many of which are specialized or may have poor accuracies. Specialized instruments are also available but they may be cost prohibitive for small programs. An expedient alternative is the use of in-situ handheld digital photography to estimate canopy closure. This approach is cost effective while maintaining accuracy. The objective of this study was to develop and test an efficient field protocol for determining tree canopy closure from zenith-looking and oblique digital photographs.
Investigators created a custom software package that uses Euclidean distance to cluster pixels into sky and non-sky categories. The percentages of sky and tree canopy are calculated from the clusters. Acquisition protocols were tested using JPEG photographs taken at multiple upward viewing angles and along transects within an open stand of loblolly pine trees and a grove of broadleaf-deciduous trees. JPEG lossy compression introduced minimal error but provided an appropriate trade-off given limited camera storage capacity and the large number of photographs required to meet the field protocol. This contrasts with the relatively larger error introduced by other commonly employed measurement techniques, such as gridded template methods and canopy approximations calculated from tree diameter measurements.
Experiment results demonstrated the viability of combining image classification software with ground-level digital photographs to produce fast and economical tree canopy closure approximations.
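The sky/non-sky clustering described above can be sketched as a nearest-centroid rule in color space; the function name, reference colors, and sample pixels below are illustrative assumptions, not the investigators' actual implementation.

```python
import numpy as np

def canopy_closure(pixels, sky_ref, canopy_ref):
    """Label each RGB pixel 'sky' or 'canopy' by Euclidean distance
    to a reference color and return percent canopy closure.
    pixels: (N, 3) array-like; sky_ref / canopy_ref: length-3 colors."""
    pixels = np.asarray(pixels, dtype=float)
    d_sky = np.linalg.norm(pixels - np.asarray(sky_ref, float), axis=1)
    d_canopy = np.linalg.norm(pixels - np.asarray(canopy_ref, float), axis=1)
    return 100.0 * np.mean(d_canopy < d_sky)

# Two bright bluish (sky) and two dark greenish (canopy) pixels
pix = [[200, 210, 255], [40, 80, 30], [190, 200, 250], [35, 70, 25]]
closure = canopy_closure(pix, sky_ref=[210, 215, 255], canopy_ref=[40, 75, 30])
print(closure)  # 50.0
```

A production version would cluster rather than use fixed references, but the distance rule is the same.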
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331L (2006) https://doi.org/10.1117/12.665574
Water is a common, transient soil material that can be distributed as lattice water bound in crystalline particles, as water of adhesion on the soil particles, and as interstitial or capillary water. It can have important effects on soil reflectance spectra over the visible-near infrared-short wave infrared electromagnetic spectrum, 0.4-2.5 μm. This study's objective was to determine the changes in soil reflectance spectra relative to differences in soil water content. An initial small water application greatly reduced the soil reflectance, masked the spectral features of the air-dry soils, and enhanced water absorption features. As wavelength values increased, water absorptance increased and transmittance decreased, which created non-uniform change in the soil reflectance spectrum. These changes occurred as water filled the interstitial spaces within the soil's optical depth. Water filling the pore space below the sample's optical depth increased the substrate's moisture content but had no effect on substrate reflectance. These water absorption features were amplified over the 0.4-2.5 μm region and spectral sensitivities to water increased directly with wavelength. Soil reflectance maxima in five spectral bands, centered at 0.800, 1.080, 1.265, 1.695, and 2.220 μm, varied inversely with sample water content. The 0.800 and 1.080 μm bands varied more slowly with water content than did the 1.265, 1.695, and 2.220 μm bands. Multiple normalized difference indices (NDI) using these bands correlated strongly with sample water content.
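A normalized difference index of the kind correlated with water content here is a simple band ratio; the reflectance values below are made-up numbers for illustration, not measurements from the study.

```python
def ndi(r_a, r_b):
    """Normalized difference index of two band reflectances."""
    return (r_a - r_b) / (r_a + r_b)

# Hypothetical reflectance maxima at 0.800 um and 2.220 um for one sample
value = ndi(0.45, 0.30)  # ≈ 0.2
```

As the 2.220 μm band darkens faster with moisture than the 0.800 μm band, the index rises, which is why such band pairs track water content.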
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331M (2006) https://doi.org/10.1117/12.665586
Three liquid hydrocarbons of different volatilities were incrementally applied to a quartz sand substrate. These liquids were gasoline, diesel fuel, and motor oil. The reflectance spectra of the hydrocarbon-sand samples varied directly with the amount (weight) of liquid on the sand. Liquid-saturated sand samples were then left to age in ambient, outdoor, environmental conditions. At regular intervals, the samples were re-measured for the residual liquid and the associated change in sample reflectance. The results outlined temporal windows of opportunity for detecting these products on the sand substrate. The windows ranged from less than 24 hours to more than a week, depending on liquid volatility.
Each hydrocarbon darkened the sand and produced hydrocarbon absorption features near 1.70 and 2.31 μm and a hydrocarbon plateau at 2.28-2.45 μm. These features were used to differentiate the liquid-sand samples. A normalized difference index metric based on one of these features and a spectral continuum band described the reflectance-weight loss and reflectance-time relations. The normalized difference hydrocarbon index (NDHI) using the 1.60 and 2.31 μm bands best characterized the samples.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331N (2006) https://doi.org/10.1117/12.666200
Gaseous plume detection in the LWIR (thermal infrared) region of the spectrum (7-14 μm) with hyperspectral imaging sensors is a rapidly advancing technology [1,2]. Many industrial pollutants have unique or strong absorption/emission signatures in the mid-wave infrared (MWIR) region of the spectrum; the C-H vibrational frequency modes of hydrocarbons are clustered in the 3-4 μm region. Until recently, the use of the MWIR region has been hampered, in part, by a lack of detailed and quantitative characterization of the phenomenology influencing the at-sensor radiance [3]. The goal of this paper is to increase understanding of this phenomenology and thus the utility of the MWIR spectral region for industrial pollution monitoring applications.
H. S. Lim, M. Z. MatJafri, K. Abdullah, N. M. Saleh, C. J. Wong
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331O (2006) https://doi.org/10.1117/12.665429
Air quality is a major concern in many large cities of the world. This paper studies the relationship between particulate matter smaller than 10 μm (PM10) and satellite observations from OCTS. The objective of this study is to map the PM10 distribution in Peninsular Malaysia using visible and thermal band data. The in-situ PM10 measurements were collected simultaneously with the acquisition of the satellite image. The reflectance values in the visible bands and the digital numbers in the thermal bands were extracted at the ground-truth locations and later used for regression analysis of the air-quality algorithm. The developed algorithm was used to predict PM10 values from the satellite image. The novelty of this study is that the algorithm uses a combination of reflectance measurements from the visible bands and the corresponding apparent temperature values of the thermal band. The reflectance at 3.55-3.88 μm is computed after correction for the emission by the atmosphere, and the surface emissivity values are computed from the NDVI values. The developed algorithm produced a correlation of 0.97 between the measured and estimated PM10 values. Finally, a PM10 map was generated over Peninsular Malaysia using the proposed algorithm. This study indicates the potential of OCTS satellite data for air-quality studies.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331P (2006) https://doi.org/10.1117/12.668736
LIDAR data taken over the Elkhorn Slough in Central California were analyzed for terrain classification. The specific terrain element of interest is vegetation, in particular tree type. Data taken on April 12, 2005, cover a 10 km × 20 km region of mixed-use agriculture and wetlands; time return and intensity were obtained at ~2.5 m postings. Multispectral imagery from a 2002 QuickBird imaging pass was used to guide the analysis. Ground truth was combined with the orthorectified satellite imagery to determine regions of interest for areas with Eucalyptus, Scrub Oak, Live Oak, and Monterey Cypress trees. LIDAR temporal returns could be used to distinguish regions with trees from cultivated and bare soil areas. Some tree types could be distinguished on the basis of the relationship between first/last extracted feature returns. The otherwise similar Eucalyptus and Monterey Cypress could be distinguished by means of the intensity information from the imaging LIDAR. The combined intensity and temporal data allowed accurate distinction between the tree types, a task not otherwise practical with the satellite spectral imagery.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331Q (2006) https://doi.org/10.1117/12.666133
The Hyperspectral Polarimetric Imaging Testbed contains a VNIR, a SWIR, and a three-axis imaging polarimeter, each operating simultaneously through a common fore-optic. The system was designed for the detection of man-made objects in natural scenes. The imagery produced by the various imaging legs of the system is readily fused, due to the identical image format, FOV, and IFOV of each optical leg. The fused imagery is shown to be useful for the detection of a variety of man-made surfaces. This paper describes the general design and function of the mature system, the Stochastic Gaussian Classifier processing method used for hyperspectral anomaly detection, the polarimetric image processing methods, and a logical decision structure for the identification of various surface types. The paper also describes in detail the detection results for a variety of targets obtained in field testing conducted with the system.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331R (2006) https://doi.org/10.1117/12.668622
Hyperspectral imagers tend to have lower spatial resolution than multispectral ones, which often forces a difficult trade-off between spectral and spatial resolution. One means of addressing this trade-off is to acquire both multispectral and hyperspectral data simultaneously, and then combine the two to produce a hyperspectral image with the high spatial resolution of the multispectral image. This process, called 'sharpening', results in a product that fuses the rich spectral content of a hyperspectral image with the high spatial content of a multispectral image. The approach we have been investigating compares the spectral information present in the multispectral image to the spectral content in the hyperspectral image and derives a set of equations to approximately transform the multispectral image into a synthetic hyperspectral image. This synthetic hyperspectral image is then recombined with the original low-spatial-resolution hyperspectral image to produce a sharpened product. We have evaluated this technique against several types of data for terrain classification, and it has demonstrated good performance across all data sets. The spectra predicted by the sharpening algorithm match truth spectra in synthetic image tests, and performance with detection algorithms shows little, if any, degradation.
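The multispectral-to-hyperspectral transform described above can be illustrated with a toy least-squares fit; the dimensions, random data, and variable names are assumptions for the sketch, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_ms, n_hs = 500, 4, 60

# Toy scene: true hyperspectral pixels and a simulated multispectral
# view of them (each MS band a weighted sum of HS bands)
hs = rng.random((n_pix, n_hs))
band_weights = rng.random((n_hs, n_ms))
ms = hs @ band_weights

# Derive a least-squares transform from MS space to HS space,
# then apply it to produce a synthetic hyperspectral image
T, *_ = np.linalg.lstsq(ms, hs, rcond=None)
hs_synth = ms @ T
print(hs_synth.shape)  # (500, 60)
```

The synthetic image would then be recombined with the low-resolution hyperspectral cube; this sketch only shows the transform-derivation step.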
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331S (2006) https://doi.org/10.1117/12.666672
A persistent problem with new unregistered geospatial data is geometric image distortion caused by differing sensor/camera locations. Often this distortion is modeled by means of arbitrary affine transformations. However, in most real cases such geometric distortion is combined with other distortions caused by different image resolutions, different feature extraction techniques, and other factors. Often images overlap only partially; thus, the same objects in two images can differ significantly. A simple geometric distortion preserves a one-to-one match between all points of the same object in the two images. In contrast, when images only partially overlap or have different resolutions, there is no one-to-one point match. This paper explores theoretical and practical limits of building algorithms that are simultaneously robust and invariant to geometric distortions and changes of image resolution. We provide two theorems, which state that such ideal algorithms are impossible in the proposed formalized framework. On the practical side, we experimentally explored ways to mitigate these theoretical limitations. Effective point placement, feature interpolation, and super-feature construction methods are developed that provide good registration/conflation results for images of very different resolutions.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331T (2006) https://doi.org/10.1117/12.662373
Image fusion is 'the combination of two or more different images to form a fused image by using a fusion algorithm'. In this paper, an algorithm is designed that extracts the pixels from the stacked images. Principal component analysis is carried out, which aims at reducing a large set of variables to a small set that still contains most of the information available in the large set. The technique of principal component analysis enables us to create and use a reduced set of variables, called principal factors; a reduced set is much easier to analyze and interpret. In this paper, fusion of images obtained from a visible camera and an infrared camera is performed.
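The PCA reduction step described above can be sketched as an eigendecomposition of the band covariance; the image shapes, random data, and function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pca_reduce(stack, k):
    """Project stacked, co-registered image bands (pixels x bands)
    onto their top-k principal components (the 'principal factors')."""
    X = stack - stack.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    top = np.argsort(vals)[::-1][:k]   # highest-variance components first
    return X @ vecs[:, top]

# Visible (3-band) and infrared (1-band) images, flattened and stacked
rng = np.random.default_rng(1)
vis = rng.random((1000, 3))
ir = rng.random((1000, 1))
fused = pca_reduce(np.hstack([vis, ir]), k=2)
print(fused.shape)  # (1000, 2)
```

The fused product keeps most of the joint variance of the visible and infrared bands in far fewer channels.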
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331U (2006) https://doi.org/10.1117/12.665608
When a matched filter is used for detecting a weak target in a cluttered background (such as a gaseous plume in a hyperspectral image), it is important that the background clutter be well-characterized. A statistical characterization can be obtained from the off-plume pixels of a hyperspectral image, but if on-plume pixels are inadvertently included, then that background characterization will be contaminated. In broad area search scenarios, where detection is the central aim, it is by definition unknown which pixels in the scene are off-plume, so some contamination is inevitable. In general, the contaminated background degrades the ability of the matched filter to detect that signal. This could be a practical problem in plume detection. A linear analysis suggests that the effect is limited, and actually vanishes in some cases. In this study, we take into account the Beer's Law nonlinearity of plume absorption, and we investigate the effect of that nonlinearity on the signal contamination.
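The classical matched filter the paper builds on scores each pixel against a background mean and covariance; the two-band numbers below are toy values. Contamination enters when the mean and covariance are estimated from pixels that include the plume, biasing this score.

```python
import numpy as np

def matched_filter(x, target, bg_mean, bg_cov):
    """Matched-filter score of pixel x for a known target signature,
    normalized by background covariance (higher = more target-like)."""
    ci = np.linalg.inv(bg_cov)
    d = target - bg_mean
    return float(d @ ci @ (x - bg_mean)) / np.sqrt(float(d @ ci @ d))

# Toy two-band example with an identity background covariance
mean = np.zeros(2)
cov = np.eye(2)
target = np.array([1.0, 0.0])
score_on = matched_filter(target, target, mean, cov)   # ≈ 1.0
score_off = matched_filter(mean, target, mean, cov)    # ≈ 0.0
```

Replacing `mean` and `cov` with estimates from a pixel set that includes on-plume spectra shifts both quantities toward the target, which is the contamination effect the paper analyzes under Beer's Law.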
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331V (2006) https://doi.org/10.1117/12.665899
A new algorithm, optimized land surface temperature and emissivity retrieval (OLSTER), was developed to compensate for atmospheric effects and retrieve land surface temperature (LST) and emissivity from airborne thermal infrared hyperspectral data. The OLSTER algorithm is designed to handle both natural and man-made materials. Multi-directional or multi-temporal observations are not required, and the scenes do not have to be dominated by blackbody features. The OLSTER algorithm consists of a preprocessing step, an iterative near-blackbody pixel search, and an iterative constrained optimization loop. The preprocessing step provides initial estimates of the LST per pixel and of the atmospheric transmittance and upwelling radiance for the entire image. Pixels that are under- or over-compensated for the atmospheric parameters are classified as near-blackbody and lower-emissivity pixels, respectively. A constrained optimization of the atmospheric parameters using generalized reduced gradients on the near-blackbody pixels ensures physical results. The downwelling radiance is estimated from the upwelling radiance by applying a look-up table of coefficients based on a polynomial regression of radiative-transfer model runs for the same sensor altitude. The LST and emissivity per pixel are retrieved simultaneously using the well-established ISSTES algorithm. Based on numerical simulation, the OLSTER algorithm can retrieve LST to within about ±2.0 K and emissivities to within about ±0.01.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331W (2006) https://doi.org/10.1117/12.665233
A new approach to temperature/emissivity separation (TES) has been developed and is described in this paper. The Planck-Modeled Temperature Emissivity Separation (PM-TES) technique provides a unique approach to deriving surface emissivity parameters from longwave infrared (LWIR) hyperspectral imagery (HSI) without incorporating the direct assumptions required in most current TES techniques. Accurate calculation of emissivity from at-sensor radiance values is complicated by the impact of temperature on the detected signal. The ground-leaving radiance (GLR) values observed by the sensor (after atmospheric compensation) are known, while the emissivity values and the temperature of the surface material are unknown. Emissivity is difficult to determine at this juncture because there are N+1 unknown terms (N being the number of spectral bands in the HSI dataset) but only N known terms, resulting in an indeterminate problem. Prior methods have incorporated assumptions about one of the unknown terms, which allows for the calculation of all remaining unknowns (e.g., Kahle and Alley, 1992) [1]. While these techniques have shown success in a variety of circumstances, the accuracy of their results is most often dependent upon a single assumption, and minor errors in that assumption propagate through the resultant HSI dataset for the remainder of spectral processing. The PM-TES technique proposed in this paper makes no specific assumptions about either the temperature or the emissivity of the in-scene materials, yet calculates emissivity parameters comparable in accuracy to current standard industry techniques. The technique requires the input of temperature and emissivity ranges over which the Planck function is modeled to solve for the relationship between GLR and emissivity terms.
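The N-versus-N+1 indeterminacy follows from the standard ground-leaving radiance model, sketched below with a Planck function; the physical constants and the 10 μm / 300 K spot check are standard radiometry, not values from the paper.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(wav_um, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr um)."""
    lam = wav_um * 1e-6
    b = (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * temp_k))
    return b * 1e-6  # convert from per-meter to per-micron

# GLR model: GLR_i = eps_i * B(lam_i, T) + (1 - eps_i) * Ld_i.
# With N bands this is N equations in N + 1 unknowns (N emissivities
# plus one temperature), hence the indeterminacy discussed above.
radiance = planck(10.0, 300.0)  # ≈ 9.9 W / (m^2 sr um)
```

Any TES method must therefore add one constraint, whether a fixed-band emissivity assumption or, as in PM-TES, a modeled Planck-function relationship over input temperature and emissivity ranges.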
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331X (2006) https://doi.org/10.1117/12.669802
Infrared airborne spectral measurements were collected over the Gulf Coast area during the aftermath of Hurricanes Katrina and Rita. These measurements allowed surveillance for potentially hazardous chemical vapor releases from industrial facilities caused by storm damage. Data was collected with a mid-longwave infrared multispectral imager and a hyperspectral Fourier transform infrared spectrometer operating in a low altitude aircraft. Signal processing allowed detection and identification of targeted spectral signatures in the presence of interferents, atmospheric contributions, and thermal clutter. Results confirmed the presence of a number of chemical vapors. All detection results were immediately passed along to emergency first responders on the ground. The chemical identification, location, and vapor species concentration information were used by the emergency response ground teams for identification of critical plume releases and subsequent mitigation.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331Y (2006) https://doi.org/10.1117/12.665139
The ability to remotely detect explosive devices and explosive materials has generated considerable interest over the last several years. The study of remote sensing of explosives dates back many decades, but recent world events have forced the technology to respond to changing events and bring new technologies to the field in shorter development times than previously thought possible. Several applications have proven both desirable and elusive to date. Most notable is the desire to sense explosives within containers, sealed or otherwise. This requires a sensing device to penetrate the walls of the container, a difficult task when the container is steel or thick cement. Another is the desire to detect explosive vapors which escape from a container into the ambient air. This has been made difficult because explosives are generally formulated to have extremely low vapor pressure, which makes many gas detection techniques poor candidates for explosive vapor detection [1]. Because of the many difficulties in general remote detection of explosives, we have attempted to bound the problem into a series of achievable steps, with the first step a simple remote detection of TNT-bearing compounds. Prior to discussing our technology, we will first discuss our choice for attacking the problem in this manner.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62331Z (2006) https://doi.org/10.1117/12.661273
This paper discusses the formulation and implementation of an acceleration approach for the MCScene code, a high-fidelity model for full optical spectrum (UV to LWIR) hyperspectral image (HSI) simulation. The MCScene simulation is based on a Direct Simulation Monte Carlo approach for modeling 3D atmospheric radiative transport, as well as spatially inhomogeneous surfaces including surface BRDF effects. The model includes treatment of land and ocean surfaces, 3D terrain, 3D surface objects, and effects of finite clouds with surface shadowing. This paper reviews an acceleration algorithm that exploits spectral redundancies in hyperspectral images. In this algorithm, the full scene is determined for a subset of spectral channels, and this multispectral scene is unmixed into spectral end members and end-member abundance maps. Next, pure end-member pixels are determined at their full hyperspectral resolution, and the full hyperspectral scene is reconstructed from the hyperspectral end-member spectra and the multispectral abundance maps. This algorithm effectively performs a hyperspectral simulation while requiring only the computational time of a multispectral simulation. The acceleration algorithm is demonstrated, and errors associated with the algorithm are analyzed.
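The reconstruction step of the acceleration algorithm is a matrix product of full-resolution end-member spectra with the multispectral abundance maps; the shapes and random data below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
k, n_pix, bands_hs = 3, 200, 120

# Full-hyperspectral-resolution end-member spectra and the per-pixel
# abundances recovered from the multispectral unmixing step
endmembers = rng.random((k, bands_hs))
abundances = rng.dirichlet(np.ones(k), size=n_pix)  # each row sums to 1

# Reconstruct the hyperspectral scene (flattened to pixels x bands)
scene_hs = abundances @ endmembers
print(scene_hs.shape)  # (200, 120)
```

Only k end-member spectra need full hyperspectral simulation, so the cost scales with the multispectral channel subset rather than the full band count.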
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623321 (2006) https://doi.org/10.1117/12.665652
Detection of anomalies in hyperspectral clutter is an important task in military surveillance. Most algorithms for unsupervised anomaly detection make either explicit or implicit assumptions about hyperspectral clutter statistics: for instance that the abundance is either normally distributed or elliptically contoured. In this paper we investigate the validity of such claims. We show that while non-elliptical contouring is not necessarily a barrier to anomaly detection, it may be possible to do better. In this paper we show how various generative models which replicate the competitive behaviour of vegetation at a mathematically tractable level lead to hyperspectral clutter statistics which do not have Elliptically Contoured (EC) distributions. We develop a statistical test and a method for visualizing the degree of elliptical contouring of real data. Having observed that in common with the generative models much real data fails to be elliptically contoured, we develop a new method for anomaly detection that has good performance on non-EC data.
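One simple way to probe elliptical contouring, in the spirit of the diagnostic described, uses squared Mahalanobis distances, since an EC density depends on the data only through that quadratic form; the Gaussian sample and dimensions below are illustrative, not the authors' test.

```python
import numpy as np

def mahalanobis_sq(X):
    """Squared Mahalanobis distance of each row from the sample mean."""
    mu = X.mean(axis=0)
    ci = np.linalg.inv(np.cov(X, rowvar=False))
    D = X - mu
    return np.einsum('ij,jk,ik->i', D, ci, D)

# Under Gaussianity the distances approximately follow a chi-square
# law with d degrees of freedom; strong departures of their empirical
# distribution from that law hint at non-EC clutter.
rng = np.random.default_rng(3)
X = rng.standard_normal((5000, 4))
d2 = mahalanobis_sq(X)
print(d2.mean())  # ≈ 4, the number of dimensions
```

Real hyperspectral clutter failing such a check is what motivates the paper's anomaly detector for non-EC data.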
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623322 (2006) https://doi.org/10.1117/12.663965
The Johnson System for characterizing an empirical distribution is used to model the non-normal behavior of
Mahalanobis distances in material clusters extracted from hyperspectral imagery data. An automated method for
determining Johnson distribution parameters is used to model Mahalanobis distance distributions and is compared to an
existing method which uses mixtures of F distributions. The results lead to a method for determining outliers and
mitigating their effects.
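Fitting the Johnson system itself is beyond a short sketch, but the outlier-screening step it feeds can be illustrated with plain Mahalanobis distances and an empirical quantile threshold (a simplification of the paper's method; the function name is ours):

```python
import numpy as np

def mahalanobis_outliers(X, q=0.97):
    """Flag rows of X whose squared Mahalanobis distance exceeds the
    empirical q-quantile of the cluster's own distances."""
    mu = X.mean(axis=0)
    Ci = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', X - mu, Ci, X - mu)
    return d2 > np.quantile(d2, q)
```

A Johnson (or F-mixture) fit to the distances would replace the empirical quantile with a parametric tail probability, which is what allows the distribution-aware outlier determination the abstract describes.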
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623323 (2006) https://doi.org/10.1117/12.666081
The Army has gained a renewed interest in hyperspectral (HS) imagery for military surveillance. As a result, an HS research team has been established at the Army Research Lab (ARL) to focus exclusively on the design of innovative algorithms for target detection in natural clutter. In 2005 at this symposium, we presented performance comparisons between a proposed anomaly detector and existing ones on real HS data. Herein, we present insightful results on our general approach, using statistical performance analyses of an additional ARL anomaly detector on 1500 simulated realizations of model-specific data, to shed some light on its effectiveness. Simulated data of increasing background complexity are used for the analysis: highly correlated multivariate Gaussian random samples model homogeneous backgrounds, and mixtures of Gaussians model non-homogeneous backgrounds. Distinct multivariate random samples model targets, which are added to the backgrounds. The principle that led to the design of our detectors employs an indirect sample comparison to test the likelihood that local HS random samples belong to the same population. Let X and Y denote two random samples, and let Z = X ∪ Y. We showed that X can be indirectly compared to Y by comparing, instead, Z to Y (or to X). Mathematical implementations of this simple idea have shown a remarkable ability to preserve meaningful detections (e.g., full-pixel targets) while significantly reducing meaningless detections (e.g., transitions between background regions in the scene).
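One minimal instance of the indirect-comparison principle, with the discrepancy between samples measured by a Mahalanobis distance between sample means (our choice of statistic, not necessarily ARL's):

```python
import numpy as np

def mean_mahalanobis_shift(A, B):
    """Mahalanobis distance between the means of samples A and B,
    using B's covariance: a simple two-sample discrepancy score."""
    Ci = np.linalg.inv(np.cov(B, rowvar=False))
    d = A.mean(axis=0) - B.mean(axis=0)
    return float(d @ Ci @ d)

def indirect_score(X, Y):
    """Compare X to Y indirectly: pool Z = X u Y and score Z against Y.
    If X and Y come from the same population, Z resembles Y and the
    score stays near zero; a target in X pulls Z away from Y."""
    Z = np.vstack([X, Y])
    return mean_mahalanobis_shift(Z, Y)
```

The pooled sample dilutes the influence of X, which is one intuition for why this indirect form suppresses weak, meaningless differences (background transitions) while preserving strong ones (full-pixel targets).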
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623324 (2006) https://doi.org/10.1117/12.673224
The vision of the National Geospatial-Intelligence Agency (NGA) is to "Know the Earth...Show the Way." To achieve this vision, the NGA provides geospatial intelligence in all its forms and from whatever source (imagery, imagery intelligence, and geospatial data and information) to ensure the knowledge foundation for planning, decision, and action. Academia plays a key role in the NGA research and development program through the NGA Academic Research Program. This multi-disciplinary program of basic research in geospatial intelligence topics provides grants and fellowships to the leading investigators, research universities, and colleges of the nation. This research provides the fundamental science support to NGA's applied and advanced research programs. The major components of the NGA Academic Research Program are:
*NGA University Research Initiatives (NURI): Three-year basic research grants awarded competitively to the best investigators across the US academic community. Topics are selected to provide the scientific basis for advanced and applied research in NGA core disciplines.
*Historically Black College and University - Minority Institution Research Initiatives (HBCU-MI): Two-year basic research grants awarded competitively to the best investigators at Historically Black Colleges and Universities, and Minority Institutions across the US academic community.
*Intelligence Community Post-Doctoral Research Fellowships: Fellowships providing access to advanced research in science and technology applicable to the intelligence community's mission. The program provides a pool of researchers to support future intelligence community needs and develops long-term relationships with researchers as they move into career positions.
This paper provides information about the NGA Academic Research Program, the projects it supports and how researchers and institutions can apply for grants under the program. In addition, other opportunities for academia to engage with NGA through training programs and recruitment are discussed.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623325 (2006) https://doi.org/10.1117/12.666273
The hyperspectral subpixel detection and classification problem has been intensely studied in the downward-looking case, typically satellite imagery of agricultural and urban areas. In contrast, the hyperspectral imaging case when "looking up" at small or distant satellites creates new and unforeseen problems. Usually one pixel or one fraction of a pixel contains the imaging target, and spectra tend to be time-series data of a single object collected over some time period under possibly varying weather conditions; there is little spatial information available. Often, the number of collected traces is less than the number of wavelength bins, and a materials database with imperfect representative spectra must be used in the subpixel classification and unmixing process. A procedure is formulated for generating a "good" set of classes from experimentally collected spectra by assuming a Gaussian distribution in the angle-space of the spectra. Specifically, Kernel K-means, a suboptimal ML-estimator, is used to generate a set of classes. Covariance information from the resulting classes and weighted least squares methods are then applied to solve the linear unmixing problem. We show with cross-validation that Kernel K-means separation of laboratory material classes into "smaller" virtual classes before unmixing improves the performance of weighted least squares methods.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623326 (2006) https://doi.org/10.1117/12.667977
This paper presents a comparison of different algorithms to compute the constrained positive matrix factorization and their application to the unsupervised unmixing problem. We study numerical methods based on the Gauss-Newton algorithm, the Seung-Lee approach, the Gauss-Seidel algorithm, and penalty methods. Preliminary results using a Hyperion image from southwestern Puerto Rico are presented. The algorithms are compared in terms of their convergence performance and the quality of their results.
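Of the methods listed, the Seung-Lee approach is the easiest to sketch: multiplicative updates that keep both factors non-negative while driving down the Frobenius error. A minimal illustration (not the authors' implementation):

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=1000, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0,
    minimizing the Frobenius norm of the residual."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        # each update rescales entries by a ratio of non-negative terms,
        # so non-negativity is preserved automatically
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

In the unmixing setting, the columns of W play the role of endmember spectra and H the abundances; the constrained variants the paper studies add, e.g., sum-to-one conditions on H.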
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623327 (2006) https://doi.org/10.1117/12.668624
When analyzing a hyperspectral image using the linear mixture model, one makes a variety of assumptions relating to the distribution of error and the underlying mixture model. In order to test the validity of these assumptions, a simple model of hyperspectral data is examined. Generally, simple linear unmixing is performed assuming that sensor error rates are the same for each band. This assumption is violated quite easily when unmixing reflectance data. Assuming a perfect sensor, image data that perfectly obeys the linear mixture model, and perfectly known end-member spectra, the error rate for least squares linear unmixing is determinable using a simple formula. When data are transformed into reflectance, the error rates for the unmixed image increase by a significant factor due to the poor statistical normalization of the resulting data. As a means of mitigating error in unmixed imagery, two alternative unmixing methods are examined: non-negative least squares and total least squares. Non-negative least squares can be shown to significantly outperform simple least squares, while total least squares behaves pathologically. Unmixing hyperspectral images inherently transfers error from the original hyperspectral image to the unmixed fraction-plane image. Care should be taken when unmixing so that this error is known and minimized.
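Non-negative least squares can be illustrated with a simple projected-gradient iteration (a stand-in for a full active-set NNLS solver; the function name is ours):

```python
import numpy as np

def nnls_pg(A, b, n_iter=2000):
    """Projected-gradient non-negative least squares:
    minimize ||A x - b||^2 subject to x >= 0."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the least-squares objective, then project
        # onto the non-negative orthant
        x = np.maximum(0.0, x - (A.T @ (A @ x - b)) / L)
    return x
```

Compared with `np.linalg.lstsq`, which can return small negative abundances under noise, the projection enforces the physical constraint that abundances are non-negative, which is one source of the improvement the abstract reports.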
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623328 (2006) https://doi.org/10.1117/12.664954
Abundances of material components in objects are usually computed using techniques such as linear spectral unmixing on individual pixels captured by hyperspectral imaging devices. However, algorithms such as unmixing have many flaws, some due to implementation and others due to improper choices of the spectral library used in the unmixing (as well as in classification). Other methods may exist for extracting this hyperspectral abundance information. We propose the development of spatial ground truth data against which various unmixing algorithm analyses can be evaluated. This may be done by implementing a three-dimensional hyperspectral discrete wavelet transform (HSDWT) with a low-complexity lifting method using the Haar basis. Spectral unmixing, or similar algorithms, can then be evaluated, and their effectiveness can be measured by how well or poorly the spatial and spectral characteristics of the target are reproduced at full resolution (which becomes single-object classification by pixel).
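In one dimension and at one level, the Haar lifting scheme underlying such an HSDWT reduces to a predict step followed by an update step; a 3-D transform applies this along each axis of the cube in turn. A minimal sketch (our function names):

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting.
    x must have even length; returns (approximation, detail)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict: odd samples from even ones
    approx = even + detail / 2.0   # update: approx becomes pairwise means
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Exact inverse: undo the update, then the predict step."""
    even = approx - detail / 2.0
    odd = detail + even
    x = np.empty(2 * len(approx))
    x[0::2], x[1::2] = even, odd
    return x
```

The lifting form is attractive here precisely because it is low-complexity (two in-place passes, no convolution buffers) and perfectly invertible, so the full-resolution reconstruction quality measures only the algorithm under test.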
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 623329 (2006) https://doi.org/10.1117/12.666773
Quantum-dot infrared photodetectors (QDIPs) are emerging as a promising technology for midwave- and longwave-infrared remote sensing and spectral imaging. One of the key advantages that QDIPs offer is their bias-dependent spectral response, which is brought about by the asymmetric bandstructure of the dot-in-a-well (DWELL) configuration. Photocurrents of a single QDIP, driven by different operational biases can, therefore, be viewed as outputs of different bands. It has been shown that this property, combined with post-processing strategies (applied to the outputs of a single sensor operated at different biases), can be used to perform adaptive spectral tuning and matched filtering. However, unlike traditional sensors, bands of a QDIP exhibit significant spectral overlap, an attribute that calls for the development of novel methods for feature selection. Additionally, the presence of detector noise further complicates such feature selection. In this paper, the theoretical foundations for discriminant analysis, based on spectrally adaptive feature selection, are developed and applied to data obtained from QDIP sensors in the presence of noise. The approach is based on a generalized canonical-correlation-analysis framework that is used in conjunction with an optimization criterion for the selection of feature subspaces. The criterion ranks the best linear combinations of the overlapping bands, providing minimal energy norm (a generalized Euclidean norm) between the centers of classes and their respective reconstructions in the space spanned by sensor bands. Experiments using ASTER-based synthetic QDIP data are used to illustrate the performance of rock-type Bayesian classification according to the proposed feature-selection method.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62332A (2006) https://doi.org/10.1117/12.666290
The Fukunaga-Koontz transform (FKT) offers some attractive properties for desired-class-oriented dimensionality reduction in hyperspectral imagery. In the FKT, feature selection is performed by transforming the data into a new space in which the two classes, desired class and background clutter, share a common set of eigenvectors with complementary eigenvalues, such that each basis function best represents one class while carrying the least amount of information about the other. By selecting the few eigenvectors most relevant to the desired class, one can reduce the dimension of the hyperspectral cube. Since the FKT-based technique reduces the data size, it provides significant advantages for near-real-time detection applications in hyperspectral imagery. Furthermore, the eigenvector selection approach significantly reduces the computational burden of the dimensionality reduction process. The performance of the proposed dimensionality reduction algorithm has been tested on a real-world hyperspectral dataset.
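The complementary-eigenvalue property can be shown compactly for two classes: whiten the summed covariance, then eigendecompose one class's whitened covariance; eigenvalues near 1 mark directions that best represent that class, eigenvalues near 0 mark directions that best represent the other. A generic sketch, not the authors' code:

```python
import numpy as np

def fukunaga_koontz(X1, X2):
    """Fukunaga-Koontz transform for two sample sets (rows = samples).
    Returns the FKT basis and the class-1 eigenvalues, each in [0, 1];
    class 2's eigenvalues in the same basis are their complements."""
    C1 = np.cov(X1, rowvar=False)
    C2 = np.cov(X2, rowvar=False)
    vals, vecs = np.linalg.eigh(C1 + C2)
    P = vecs @ np.diag(vals ** -0.5)   # whitens C1 + C2 to the identity
    lam, U = np.linalg.eigh(P.T @ C1 @ P)
    return P @ U, lam
```

Dimensionality reduction then amounts to keeping only the columns of the returned basis whose eigenvalues are closest to 1 (desired class) or 0 (clutter), which is the selection step the abstract describes.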
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62332B (2006) https://doi.org/10.1117/12.665286
Linearly constrained adaptive beamforming has been used to design hyperspectral target detection algorithms such as constrained energy minimization (CEM) and linearly constrained minimum variance (LCMV). It linearly constrains a desired target signature while minimizing interfering effects caused by other, unknown signatures. This paper investigates this idea and further uses it to develop a new approach to band selection, referred to as linear constrained band selection (LCBS), for hyperspectral imagery. It interprets one band image as a desired target signature while considering the other band images as unknown signatures. With this interpretation, the proposed LCBS linearly constrains a band image while minimizing the band correlation or dependence caused by the other band images. As a result, two different methods, referred to as Band Correlation Minimization (BCM) and Band Correlation Constraint (BCC), can be developed for band selection. LCBS thus allows one to select desired bands for data analysis. In order to determine the number of bands to select, p, a recently developed concept called virtual dimensionality (VD) is used to estimate p. Once p is determined, a set of p desired bands can be selected by LCBS. Finally, experiments are conducted to substantiate the proposed LCBS.
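The CEM filter that underlies this construction has a closed form, w = R^{-1} d / (d^T R^{-1} d), where R is the sample correlation matrix of the data and d the desired signature; the unit-response constraint w^T d = 1 holds by construction. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def cem_filter(X, d):
    """Constrained energy minimization filter.
    X : (pixels, bands) data matrix; d : (bands,) desired signature.
    Returns weights w minimizing output energy subject to w @ d == 1."""
    R = (X.T @ X) / X.shape[0]        # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d @ Rinv_d)

# detector output for each pixel is then X @ w; a pixel exactly
# matching d produces a response of exactly 1
```

LCBS reuses this machinery with a band image standing in for d and the remaining band images forming X, which is how band correlation takes the place of interference in the original beamforming formulation.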
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62332C (2006) https://doi.org/10.1117/12.665281
Spectral signature coding has been used to characterize spectral features: a binary code book is designed to encode an individual spectral signature, and the Hamming distance is then used to perform signature discrimination. The effectiveness of such binary signature coding largely relies on how well the Hamming distance can capture the spectral variations that characterize a signature. Unfortunately, in most cases such coding does not provide sufficient information for signature analysis, and it has therefore received little interest in the past. This paper revisits signature coding by introducing a new concept, referred to as spectral feature probabilistic coding (SFPC). Since the Hamming distance does not take into account band-to-band variation, it can be considered a memoryless distance. Therefore, one approach is to extend the Hamming distance to a distance with memory. One such coding technique is the well-known arithmetic coding (AC), which encodes a signature in a probabilistic manner. The values resulting from the AC are then used to measure the distance between two signatures. This paper investigates AC-based signature coding for signature analysis and conducts a comparative analysis with spectral binary coding.
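The classical binary-coding baseline the paper compares against can be sketched in a few lines; thresholding each band at the signature's mean is one common convention (our assumption here, not necessarily the paper's code book):

```python
import numpy as np

def binary_code(s, thresh=None):
    """Spectral binary coding: 1 where a band value exceeds the
    signature's mean (or a supplied threshold), else 0."""
    t = s.mean() if thresh is None else thresh
    return (s > t).astype(np.uint8)

def hamming(a, b):
    """Number of band positions where two binary codes differ.
    Position-wise only: it has no memory of band-to-band structure."""
    return int(np.sum(a != b))
```

The memoryless nature of this distance is visible directly in `hamming`: each band contributes independently, which is the limitation the proposed AC-based probabilistic coding is designed to overcome.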
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62332D (2006) https://doi.org/10.1117/12.666799
The ultraspectral sounder data features strong correlations in disjoint spectral regions due to the same types of absorbing gases. This paper compares the compression performance of two robust data preprocessing schemes, namely bias-adjusted reordering (BAR) and minimum spanning tree (MST) reordering, in the context of entropy coding. Both schemes can take advantage of the strong correlations to achieve higher compression gains. The compression methods consist of the BAR or MST preprocessing schemes followed by linear prediction with context-free or context-based arithmetic coding (AC). Compression experiments on the NASA AIRS ultraspectral sounder data set show that MST without bias adjustment produces lower compression ratios than BAR and bias-adjusted MST for both context-free and context-based AC. Bias-adjusted MST outperforms BAR for context-free arithmetic coding, whereas BAR outperforms MST for context-based arithmetic coding. BAR with context-based AC yields the highest average compression ratios in comparison to MST with context-free or context-based AC.
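The reordering idea can be caricatured as a greedy pass that, at each step, appends the channel whose residual against the previously chosen channel, after removing a constant bias, is smallest, so that the subsequent linear predictor sees highly similar neighbours. This is a simplification of the published BAR algorithm, with our own function name:

```python
import numpy as np

def bar_order(channels):
    """Greedy bias-adjusted reordering of a list of 1-D channel arrays.
    Starts from channel 0 and repeatedly appends the remaining channel
    with the smallest bias-adjusted residual against the last choice."""
    def residual(a, b):
        diff = a - b
        return float(np.sum((diff - diff.mean()) ** 2))  # bias removed

    order, remaining = [0], list(range(1, len(channels)))
    while remaining:
        last = channels[order[-1]]
        nxt = min(remaining, key=lambda i: residual(channels[i], last))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

After reordering, each channel is predicted from its new neighbour and only the (small) prediction residuals are passed to the arithmetic coder, which is where the compression gain comes from.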
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, 62332E (2006) https://doi.org/10.1117/12.666175
We propose a new adaptive branch and bound (ABB) algorithm for selecting the optimal subset of features in hyperspectral applications. The algorithm improves the search speed by avoiding unnecessary criterion function calculations at nodes in the solution tree. Our algorithm includes the following new properties: (i) ordering the tree nodes by the significance of features during construction of the tree, (ii) obtaining a large "good" initial bound by a floating search method, (iii) a new method to select an initial starting search level in the tree, and (iv) a new adaptive jump search strategy to select subsequent search levels to avoid redundant criterion function calculations. Our experimental results for two databases demonstrate that our method is significantly faster than other versions of the branch and bound algorithm.
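A minimal branch and bound for feature selection shows the pruning rule that the four proposed improvements accelerate: with a monotone criterion (removing a feature never increases it), any branch whose current score already fails the best bound can be discarded without evaluating its descendants. This sketch omits all four improvements, and the names are ours:

```python
def branch_and_bound_select(J, n, k):
    """Optimal k-of-n feature selection by branch and bound.
    J maps a tuple of feature indices to a score and must be monotone:
    J(subset of S) <= J(S). Returns (best subset, best score)."""
    best = {"score": float("-inf"), "subset": None}

    def recurse(subset, start):
        score = J(subset)
        if score <= best["score"]:
            return  # bound: no k-subset of this node can beat the best
        if len(subset) == k:
            best["score"], best["subset"] = score, subset
            return
        # branch on which remaining feature to drop; the start index
        # ensures each subset is enumerated exactly once
        for i in range(start, len(subset)):
            recurse(subset[:i] + subset[i + 1:], i)

    recurse(tuple(range(n)), 0)
    return best["subset"], best["score"]
```

The paper's adaptations (node ordering, a good initial bound from a floating search, an initial search level, and adaptive jumps) all aim to make the `score <= best` test fire as early and as often as possible.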