Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478736
Image pixels represent either distinct materials (end members) that are present in the image, or mixtures of two or more of these pure materials. Estimates of pure end member spectra are needed for spectral libraries and for input into pixel unmixing codes. We investigate three algorithms for estimating end member spectra: (1) the convex hull method, in which an n-dimensional surface is shrink-wrapped around the data cloud; (2) a pixel-by-pixel search method in which pixels that have sufficiently different spectral angles are declared end members; (3) a pixel-by-pixel search method using Euclidean distance as a measure, followed by clustering to improve the estimate of the spectra. The convex hull technique should provide an estimate of pure end member spectra, while the pixel-by-pixel search methods should find both distinct end members and distinct mixtures. Each method requires user-set thresholds to find distinct spectra, which can be expressed in degrees of spectral angle or in image-dependent units for Euclidean distance. Estimates for the lower threshold (below which two spectra are considered to be the same material) and the upper threshold (above which two spectra are definitely different materials) are derived empirically. Low-altitude AVIRIS data will be used to demonstrate the utility of these end member extraction methods. We will illustrate how well each technique compares to the others, and compare how well individual algorithms work across adjacent scenes.
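The pixel-by-pixel spectral-angle search in method (2) can be sketched as a greedy scan: a pixel is declared a new end member when its spectral angle to every end member found so far exceeds a user-set threshold. A minimal sketch (function names and the single-threshold simplification are ours; the paper uses both a lower and an upper threshold):

```python
import numpy as np

def spectral_angle_deg(a, b):
    """Angle in degrees between two pixel spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def greedy_endmember_search(pixels, threshold_deg=5.0):
    """Greedy pixel-by-pixel search: a pixel becomes a new end member if
    its spectral angle to every end member found so far exceeds the
    threshold (single-threshold simplification of the paper's scheme)."""
    endmembers = [pixels[0]]
    for p in pixels[1:]:
        if all(spectral_angle_deg(p, e) > threshold_deg for e in endmembers):
            endmembers.append(p)
    return np.array(endmembers)
```

The Euclidean-distance variant in method (3) differs only in replacing the angle with `np.linalg.norm(p - e)` and following the scan with a clustering pass.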
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478745
We develop a method for automatic end-member selection in hyperspectral images. The method models a hyperspectral pixel as a linear mixture of an unknown number of materials. In contrast to many end-member selection methods, the new method selects end-members based on the statistics of large numbers of pixels rather than attempting to identify a small number of the purest pixels. The method is based on maximizing the independence of material abundances at each pixel. We show how independent component analysis algorithms can be adapted for use with this problem. We show properties of the method by application to synthetic hyperspectral data.
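The abstract's core idea — estimate sources (material abundances) by maximizing their statistical independence — is what independent component analysis does. The paper's specific adaptation is not given here; as background, a minimal symmetric FastICA with the cube nonlinearity (a generic ICA, not the authors' algorithm) looks like:

```python
import numpy as np

def fastica(X, n_components, n_iter=200, seed=0):
    """Minimal symmetric FastICA with the cube nonlinearity.
    X: (n_samples, n_features) observed mixtures; returns estimated
    independent sources, shape (n_samples, n_components)."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)
    # whiten: project onto leading eigenvectors, rescale to unit variance
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    idx = np.argsort(d)[::-1][:n_components]
    Z = X @ (E[:, idx] / np.sqrt(d[idx]))
    W = rng.standard_normal((n_components, n_components))
    for _ in range(n_iter):
        G = Z @ W.T                                # current source estimates
        W = (G ** 3).T @ Z / len(Z) - 3.0 * W      # fixed-point update, g(u) = u^3
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt                                 # symmetric decorrelation
    return Z @ W.T
```

In the hyperspectral setting the rows of `X` are pixel spectra and the recovered sources play the role of per-pixel material abundances.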
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478755
We have developed a methodology for wavelength band selection. This methodology can be used in system design studies to provide an optimal sensor cost, data reduction, and data utility trade-off relative to a specific application. The methodology combines an information-theory-based criterion for band selection with a genetic algorithm to search for a near-optimal solution. We have applied this methodology to 612 material spectra from a combined database to determine the band locations for 6, 9, 15, 30, and 60-band sets in the 0.42 to 2.5 micron spectral region that permit the best material separation. These optimal band sets were then evaluated in terms of their utility for anomaly detection and material identification using multi-band data cubes generated from two HYDICE cubes. The optimal band locations and their corresponding entropies are given in this paper. Our optimal band locations for the 6, 9, and 15-band sets are compared to the bands of existing multi-band systems such as Landsat 7, Multispectral Thermal Imager, Advanced Land Imager, Daedalus, and M7. Also presented are the anomaly detection and material identification results obtained from our generated multi-band data cubes. Comparisons are made between these exploitation results and those obtained from the original 210-band HYDICE data cubes.
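The criterion-plus-genetic-algorithm pattern described above can be illustrated with a toy sketch. The paper's actual entropy criterion and GA operators are not reproduced; here the fitness is a stand-in (log-determinant of the selected-band covariance, a simple information proxy), and the GA uses elitist selection, union-based crossover, and single-band mutation:

```python
import numpy as np

rng = np.random.default_rng(1)

def band_fitness(spectra, bands):
    """Toy information criterion: log-determinant of the covariance of
    the selected bands (the paper's entropy criterion differs)."""
    cov = np.cov(spectra[:, bands], rowvar=False)
    return np.linalg.slogdet(cov + 1e-9 * np.eye(len(bands)))[1]

def ga_band_select(spectra, k, pop_size=30, generations=40):
    """Tiny genetic algorithm over k-band subsets."""
    n_bands = spectra.shape[1]
    pop = [rng.choice(n_bands, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        order = np.argsort([band_fitness(spectra, ind) for ind in pop])[::-1]
        elite = [pop[i] for i in order[: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.choice(len(elite), size=2, replace=False)
            # crossover: draw k distinct bands from the parents' union
            child = rng.choice(np.union1d(elite[a], elite[b]), size=k, replace=False)
            if rng.random() < 0.2:                 # mutation: swap in a new band
                m = rng.integers(n_bands)
                if m not in child:
                    child[rng.integers(k)] = m
            children.append(child)
        pop = elite + children
    return np.sort(max(pop, key=lambda ind: band_fitness(spectra, ind)))
```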
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478766
This paper presents a two-stage optimal band selection algorithm for hyperspectral imagery. The algorithm tries to compute the subset of bands closest, in the canonical-correlation sense, to the principal components. The first stage of the algorithm computes an initial guess for the closest bands using matrix-factorization-based band subset selection. The second stage refines the subset of bands using a steepest-ascent algorithm. Experimental results using AVIRIS imagery from the Cuprite Mining District are presented.
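The first-stage idea — a matrix-factorization-based initial subset — can be sketched with greedy column selection in the spirit of QR with column pivoting (a standard subset-selection device; the paper's exact factorization and the canonical-correlation refinement stage are not reproduced here):

```python
import numpy as np

def pivoted_qr_band_subset(X, k):
    """Greedy column selection in the spirit of QR with column pivoting:
    repeatedly pick the band whose residual, after projecting out the
    bands already chosen, has the largest norm. X: (pixels, bands)."""
    R = X - X.mean(axis=0)
    chosen = []
    for _ in range(k):
        j = int(np.argmax((R ** 2).sum(axis=0)))
        chosen.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)        # deflate the chosen direction
    return chosen
```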
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478775
We introduce a representation for hyperspectral textures using unichrome and opponent features computed from Gabor filter outputs. The unichrome features are computed from the spectral bands independently while the opponent features combine information across different bands at different scales. Using a database of AVIRIS image regions, we evaluate the performance of the multiscale approach using opponent features for recognizing hyperspectral textures.
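A unichrome feature of the kind described — a per-band Gabor filter response statistic — can be sketched as follows. Kernel parameters and the mean-squared-energy statistic are illustrative assumptions; the cross-band opponent features are omitted:

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real (cosine-phase) Gabor kernel at spatial frequency `freq`
    (cycles/pixel) and orientation `theta` (radians)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def unichrome_energy(band, freq, theta):
    """One unichrome texture feature: mean squared Gabor response of a
    single spectral band (full convolution via FFT)."""
    k = gabor_kernel(freq, theta)
    s = (band.shape[0] + k.shape[0] - 1, band.shape[1] + k.shape[1] - 1)
    resp = np.fft.irfft2(np.fft.rfft2(band, s) * np.fft.rfft2(k, s), s)
    return float(np.mean(resp ** 2))
```

Repeating this over a grid of frequencies (scales) and orientations, for each band, yields the multiscale unichrome feature vector.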
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478784
An airborne infrared (IR) line scanner and a Fourier transform infrared (FT-IR) spectrometer operating in the 3-5 micrometer and 8-12 micrometer spectral regions provide a rapid wide-area surveillance capability. The IR scene containing target vapors is mapped remotely with the wide field-of-view (FOV) multi-spectral IR line scanner using 14 bands. The narrow-FOV FT-IR spectrometer permits remote verification of target vapor plume contents within the IR scene. The IR image and FT-IR interferogram analysis supply near real-time detection that provides visual monitoring of potential downwind vapor hazards. This capability is demonstrated using the target vapor methanol. An active mono-static FT-IR configuration furnishes ground-truth monitoring for methanol released from an industrial stack and a nearby ground-level area. The airborne and ground-truth results demonstrate the usefulness of this approach in alerting first responders to potential downwind vapor hazards from an accidental release.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478737
Terrain categorization and target detection algorithms applied to Hyperspectral Imagery (HSI) typically operate on the measured reflectance (of sun and sky illumination) by an object or scene. Since the reflectance is a non-dimensional ratio, the reflectance of an object is nominally not affected by variations in lighting conditions. Atmospheric Correction (also referred to as Atmospheric Compensation, Characterization, etc.) Algorithms (ACAs) are used in applications of remotely sensed HSI data to correct for the effects of atmospheric propagation on measurements acquired by airborne and spaceborne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is an ACA created for HSI applications in the visible through shortwave infrared (Vis-SWIR) spectral regime. FLAASH derives its physics-based mathematics from MODTRAN4.
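For context (background, not quoted from the abstract): the first-order radiative transfer model that FLAASH inverts is commonly written as

```latex
L \;=\; \frac{A\,\rho}{1 - \rho_e S} \;+\; \frac{B\,\rho_e}{1 - \rho_e S} \;+\; L_a ,
```

where \(L\) is the at-sensor spectral radiance, \(\rho\) the pixel surface reflectance, \(\rho_e\) the spatially averaged background reflectance, \(S\) the spherical albedo of the atmosphere, \(L_a\) the path radiance, and \(A\), \(B\) coefficients depending on atmospheric and geometric conditions, computed from MODTRAN4 runs.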
Michael D. Abel, Jill M. Zenner, Gary A. Petrick, Alan T. Buswell, Martin L. Pilati, William R. Czyzewski, Lawrence P. Alessandro, Sandra K. Weaver
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478738
A combined radiative transfer model and statistical approach to atmospheric correction of hyperspectral data, called MODFULL, has been developed and tested. MODFULL retrieves in-scene water vapor and aerosol (without the aid of dark pixels) which it uses for atmospheric correction. It also provides the user with checks on spectral registration of the hyperspectral data being processed. A description of this model will be given along with an analysis of its strengths and weaknesses. A summary of test results using comparisons with an industry standard Empirical Line Method (ELM) correction will also be presented.
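The Empirical Line Method used as the comparison baseline fits, per band, a linear gain and offset mapping at-sensor radiance to reflectance, using in-scene calibration targets of known reflectance. A minimal two-panel sketch (function and variable names are ours):

```python
import numpy as np

def empirical_line(rad_dark, rad_bright, refl_dark, refl_bright):
    """Per-band gain/offset from two calibration panels of known
    reflectance (per-band vectors)."""
    gain = (refl_bright - refl_dark) / (rad_bright - rad_dark)
    offset = refl_dark - gain * rad_dark
    return gain, offset

def apply_elm(radiance_cube, gain, offset):
    """Convert a (rows, cols, bands) radiance cube to reflectance."""
    return radiance_cube * gain + offset
```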
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478739
The High-accuracy Atmosphere Correction for Hyperspectral Data (HATCH) algorithm retrieves surface reflectance from imaging spectrometer data in the 350-2500nm wavelength region with resolutions of 5nm or greater. A key feature of HATCH is that it derives calibration for the sensor spectral response functions (band centers and FWHMs) from the data themselves. This is approached by utilizing known atmospheric absorption features (e.g. water vapor and oxygen bands) and the application of a new technique dubbed the
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478740
Space Computer Corporation has developed an innovative atmospheric retrieval algorithm called OPRA (Oblique Projection Retrieval of the Atmosphere). This algorithm is designed to retrieve both path radiance and atmospheric transmissivity directly from calibrated LWIR radiance spectra through a two-stage application of oblique projection operators. The OPRA method assumes the surface in the pixel field of view has an emissivity close to unity. Under this condition, the sensed radiance can be accurately modeled as the blackbody ground radiance attenuated by a multiplicative transmissivity and enhanced by an additive path radiance. The oblique projection operator is defined in terms of a range space H and a null space S. The subspaces H and S are independent, although not necessarily orthogonal. The properties of the operator are such that when it is applied to a measured signal all components spanned by the null space S are eliminated, while those spanned by the range space H are preserved. Stage 1 of OPRA nullifies the surface radiance multiplied by the transmissivity and retrieves the path radiance. Stage 2 is applied to the logarithm of the measured signal minus the retrieved path radiance to nullify the log of the Planck function and thereby retrieve the log of the transmissivity. The OPRA algorithm has been applied to both model data and SEBASS LWIR data and initial results indicate that atmospheric retrieval errors are sensitive to instrument artifacts not included in the various subspace definitions.
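The two defining properties stated above — components spanned by the null space S are eliminated while those spanned by the range space H are preserved — pin down the standard oblique projector construction. A minimal numpy sketch, assuming (as the abstract does) that the subspaces are independent:

```python
import numpy as np

def oblique_projector(H, S):
    """Oblique projection E_{H|S}: E @ h = h for h in range(H) and
    E @ s = 0 for s in range(S). Assumes range(H) and range(S) are
    independent subspaces (they need not be orthogonal).
    H: (n, p) and S: (n, q) matrices of basis columns."""
    # orthogonal projector onto the complement of S
    P_S_perp = np.eye(S.shape[0]) - S @ np.linalg.pinv(S)
    return H @ np.linalg.pinv(P_S_perp @ H) @ P_S_perp
```

In OPRA's Stage 1, H would span the path-radiance subspace and S the (transmissivity-attenuated) surface-radiance subspace, so applying E to the sensed spectrum retrieves the path radiance.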
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478741
We propose an empirical radiometric correction method for effects such as atmospheric distortion and anisotropic surface reflection in optical remote sensing data. These distortions depend on the sensor viewing (scanning) angle, so they can be significant for data received from airborne sensors due to their wide field of view. The procedure is based solely on the digital image data and consists of several steps. First, the initial image region near nadir (minimal distortions) is clustered by an extended k-means algorithm, which automatically detects the clusters (surface types) in an image. Then, for each cluster an average line profile is calculated. These profiles (initially defined in the middle part of an image line) are extrapolated to the whole line of the image by a polynomial approximation. Finally, from these polynomial functions a linear regression over all clusters is built using the radiative transfer equation, which allows radiometric correction for each viewing angle in an image relative to the reference angle, usually nadir. The procedure is iterative: the correction is first performed for a narrow part around the initial region; then the procedure is initialized with this newly corrected image region and repeated until the whole image is corrected. Experiments with data acquired by the airborne multispectral scanner DAEDALUS AADS 1268 ATM show the effectiveness of the proposed method, especially for mosaicking and classification applications.
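The core of the procedure — fit a polynomial cross-track profile and normalize each viewing angle back to the nadir value — can be illustrated with a deliberately simplified, single-cluster, whole-image sketch (the paper works per cluster and iteratively; this collapses that to one polynomial fit):

```python
import numpy as np

def cross_track_correction(img, deg=2):
    """Single-cluster simplification of the described procedure: fit a
    polynomial to the per-column (scan-angle) means and rescale each
    column to the value at the reference angle (here the center column,
    standing in for nadir). img: (rows, cols)."""
    cols = img.mean(axis=0)
    x = np.arange(img.shape[1])
    profile = np.polyval(np.polyfit(x, cols, deg), x)
    return img * (profile[len(profile) // 2] / profile)
```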
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478742
Research is being conducted into the usefulness of hyperspectral data for detailed geologic mapping applications. The data being analyzed were collected by the HYDICE (VIS-SWIR) and SEBASS (LWIR) airborne imaging spectrometers. Hyperspectral data provide a means of identifying surface mineralogy, which indicates lithology. In addition, because the data are collected in image format, photo-geologic observations can be made, such as the presence and orientation of stratification and faulting. The results of hyperspectral-based geologic mapping are summarized for an area of volcanic and sedimentary rocks in southwest Nevada. Analysis of the data revealed 11 mineral endmembers representing eight lithologic units. The hyperspectral-derived maps were directly compared to the best ground-based geologic maps available. Results indicate the ability to produce general geologic maps at scales better than 1:24,000 using 1-meter resolution airborne spectroscopy. Also, a more thorough mapping was achieved because of the increased compositional information gained by using both the SWIR and LWIR.
Fred A. Kruse, Joseph W. Boardman, Jonathan F. Huntington
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478743
Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well established. The launch of NASA's EO-1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4 to 2.5 micrometer range with 242 spectral bands at approximately 10nm spectral resolution and 30m spatial resolution. Initial Hyperion analysis results for a site at Cuprite, Nevada, with established ground truth and years of airborne hyperspectral data show that Hyperion is performing to specifications and that data from the SWIR spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped at Cuprite include kaolinite, alunite, buddingtonite, calcite, muscovite, and hydrothermal silica. Hyperion data collected at other sites under optimum conditions (summer season, bright targets) allow subtle distinctions such as determining the difference between calcite and dolomite and mapping spectral differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data (AVIRIS) to the Hyperion data establishes that Hyperion provides similar information content, with the principal limitations being reduced spatial distinctions caused by the 30m spatial resolution and limited mapping of fine spectral detail based on lower signal-to-noise ratios.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478744
The University of Hawaii's Efficient Materials Mapping program aims to automatically and rapidly produce material maps from hyperspectral scenes. The program combines an end-member determination algorithm and a material identification algorithm to produce context maps in real time without user intervention. The material identification algorithm is a combination of a spectral database and analytic code; each spectrum in the library is augmented with computer-readable diagnostic instructions. At present, the material library consists of over three hundred different spectra, generally geological materials from the USGS digital spectral library; however, selected spectra from other libraries have been incorporated. Our method has been applied to an AVIRIS scene taken over Kaneohe Bay, Hawaii. This scene contains large expanses of ocean and developed and undeveloped land, thus providing a good test bed for the program. The results of applying this methodology were verified by ground truth where possible by a team equipped with a hand-held spectrometer. Algorithm-derived archetypal end-member locations were matched well by the material identification database; however, the end-member determination itself operated sub-optimally on this scene. These results will guide progress with respect to the continued development of this program.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478746
This paper describes work in progress to develop an updated road mapping system. The system is designed to generate map products working directly from multispectral imagery. The updated system uses a resolution hierarchy to match the size of the roads, measured in image pixels, to the optimal processing configurations. The original system was designed to work with low-resolution Landsat TM imagery, while the updated system is designed to be more versatile, with the ability to generate products from the new systems now available such as Landsat 7, IKONOS, and Digital Globe. The majority of the map production is performed in an automated mode requiring no user interaction. A Java interface to support final editing of the automated results has been built to support application on multiple platforms. This paper describes the mapping algorithms, the special editing interface designed for road vector maps, and results of some processing experiments.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478747
This paper presents the latest results of a series of experiments designed to identify situations in which linear and nonlinear mixing models are expected to occur. It continues to investigate the possibility that there may be naturally occurring situations in which the typically used linear mixture model may not provide the most accurate spectrum. It has already been shown that for specific situations, in binary mixing cases, the nonlinear mixing model can produce more accurate endmember abundance estimates. These results are extended to include ternary and quaternary mixtures, as well as hyperspectral imagery collected over Cuprite, Nevada. In order to test these hypotheses, laboratory endmember and mixture data are collected in various scenarios for analysis. As shown in the experiments, ternary and quaternary mixtures are more complicated than binary mixtures, and nonlinear mixing is more likely to occur in ternary and quaternary mixtures than in binary mixtures.
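The linear mixture model being tested writes each pixel spectrum as a weighted sum of endmember spectra. A minimal unconstrained least-squares abundance estimate looks like the following (practical unmixing codes add nonnegativity and sum-to-one constraints on the abundances, and the nonlinear model also considered in the paper is not shown):

```python
import numpy as np

def unmix_linear(pixel, E):
    """Least-squares abundance estimate under the linear mixing model
    pixel ~= E @ a, with E of shape (bands, endmembers)."""
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    return a
```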
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478748
We examine the utility of using near-infrared hyperspectral images for the recognition of human subjects over a database of 137 subjects. Hyperspectral images were collected using a CCD camera equipped with a liquid crystal tunable filter and calibrated to spectral reflectance. The face recognition algorithm exploits spectral measurements for individual facial tissue types and combinations of facial tissue types. We demonstrate experimentally that hyperspectral images provide the opportunity to recognize faces independent of facial expression and face orientation.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478749
The Pronghorn Field Tests were held at the Nevada Test Site for a two-week period in June 2001. Two passive infrared sensors were tested for inclusion into the Joint Service Wide Area Detection Program. The Adaptive InfraRed Imaging Spectroradiometer (AIRIS) and Compact Atmospheric Sounding Interferometer (CATSI) systems were tested with good results. This field test was a joint effort between the US (SBCCOM) and Canada (DREV). Various chemicals were detected and quantified from a distance of 1.5 kilometers. Passive ranging of chemical plumes was demonstrated.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478750
A method for the extraction of spectral and spatial scene statistics from hyperspectral data is discussed. The method is designed to work on atmospherically compensated data in the visible/SWIR or the Thermal IR (TIR). The statistics are determined from the fractional abundance images obtained from spectral un-mixing of the scene. The statistical quantities that are extracted include endmember abundance means, variances, and correlation lengths. These quantities are used to construct a high spatial resolution reflectance or emissivity/temperature surface using a fast autoregressive texture generation tool. The spectral complexity of the synthetic surfaces has been evaluated by inserting objects for detection and calculating ROC curves. Preliminary results indicate that synthetic scenes with realistic levels of spectral clutter can be generated using spectral and spatial statistics determined from endmember fractional abundance maps. This work is motivated by the need for realistic hyperspectral scene generation capabilities to test future hyperspectral sensor concepts.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478751
In this paper, a new group of noise reduction methods for multispectral images is presented. First, a 1-dimensional Self-Organizing Map (SOM) is taught using the pixel vectors of the noisy multispectral image. Then, a gray-level index image is formed containing the indexes of the SOM vectors. Several gray-level noise reduction methods are applied to the index image for three noise types: impulse, Gaussian, and coherent noise. Tests are made for three kinds of noise distributions: for all channels, for channels 30-50, and for 9 selected channels. Error measures imply that the obtained results are very good for coherent noise images, but rather poor for other noise categories, compared to the bandwise coherent filter.
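Once the multispectral cube has been reduced to a gray-level SOM index image, any scalar filter applies directly; the filtered indexes are then mapped back through the SOM codebook to reconstruct spectra. For the impulse-noise case, a plain median filter is the natural choice (which specific gray-level filters the paper compares is not stated here; this is a generic sketch):

```python
import numpy as np

def median_filter(idx_img, k=3):
    """k x k median filter on the gray-level SOM index image
    (edge-padded), suited to impulse noise."""
    pad = k // 2
    P = np.pad(idx_img, pad, mode='edge')
    out = np.empty_like(idx_img, dtype=float)
    for i in range(idx_img.shape[0]):
        for j in range(idx_img.shape[1]):
            out[i, j] = np.median(P[i:i + k, j:j + k])
    return out
```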
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478752
The purpose of this paper is to provide an overview of the most useful practical algorithms for the detection of targets with known spectral signatures and anomaly detection. First, we provide an overview of adaptive matched filter and anomaly detectors, including their key theoretical assumptions, design parameters, and computational complexity. The emphasis is on the basic ideas that underlie the operation of the different algorithms and the geometrical or statistical concepts explaining their performance limitations. Second, we investigate how effectively the signal models used for the development of detection algorithms characterize the HYDICE data. The accurate modeling of the background is crucial for the development of constant false alarm rate (CFAR) detectors. Third, we look at some practical considerations and how they affect the performance of the various algorithms. Finally, we compare the different algorithms with regard to the following two desirable performance properties: capacity to operate in CFAR mode and target visibility enhancement. Since most of these issues are covered more comprehensively in a special issue of IEEE Signal Processing Magazine on Exploiting Hyperspectral Imagery (January 2002), we limit the coverage of this paper to a conceptual framework and a highlight of some experimental results.
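The two workhorse detectors surveyed here can be stated compactly: the RX anomaly detector scores each pixel by its Mahalanobis distance from the background, and the matched filter projects each pixel onto the whitened target signature. Minimal global-background sketches (real systems estimate the background locally and more robustly):

```python
import numpy as np

def rx_scores(X):
    """RX anomaly detector: Mahalanobis distance of each pixel from the
    global background mean. X: (pixels, bands)."""
    D = X - X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum('ij,jk,ik->i', D, Sinv, D)

def amf_scores(X, s):
    """Adaptive matched filter for a known target signature s,
    normalized so background scores have unit variance (CFAR form)."""
    mu = X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X, rowvar=False))
    d = s - mu
    w = Sinv @ d / np.sqrt(d @ Sinv @ d)
    return (X - mu) @ w
```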
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478753
The normal compositional model (NCM) is a descriptive model that explicitly accounts for sub-pixel mixing and random variation of the spectrum of a material. In this paper the normal compositional model, defined in an earlier work, is extended to include an additive term that may represent path radiance and additive sensor noise. If the covariance matrix of the additive term is non-singular, as may be assumed since it includes the covariance matrix of the additive noise, the covariance matrix of the other classes need not be non-singular. Thus the current model synthesizes the linear unmixing and Gaussian clustering algorithms. Anomaly and matched target detection algorithms based on these three models are compared using ocean hyperspectral imagery, and for these data the NCM approach reduces the false alarm probability by more than an order of magnitude. The linear mixture and normal compositional models separate surface reflections and upwelling light more effectively than the Gaussian clustering algorithm. Furthermore, greater inter-band correlation is estimated using the subpixel covariance estimation methodology than using the pure pixel modeling approach.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478754
The standard approach to solving detection problems in which clutter and/or target distributions are modeled with unknown parameters is to apply the generalized likelihood ratio (GLR) test. This procedure automatically generates new estimates of the unknown model parameters for each new feature test value. An alternative approach is to assign prior distributions to the unknown parameters. The associated Bayesian Likelihood Ratio (BLR) test can be used to generate many standard detectors, for example matched filtering or the GLR, as special cases. For the particular problem of Joint Subspace Detection (JSD), several such Bayesian problems often lead to the same test as some GLR problem. Formulating such problems can lend insight into what types of background and target distributions are appropriate for a given GLR test. In addition, the added generality afforded by the new approach, in the form of selectable prior distributions, defines a wider exploratory space for target detection. JSD can, for example, permit the incorporation of general types of experience gleaned from measurement programs. This paper explores these possibilities by applying several Bayesian formulations of the detection problem to a hyperspectral data set.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478756
With the improvement of remote sensing sensor techniques, hyperspectral imagery is widely used today. Hundreds of frequency channels are used to collect radiance from the ground, which results in hundreds of co-registered images. How to process this huge amount of data is a great challenge, especially when no information about the image scene is available. Under this circumstance, anomaly detection becomes more difficult. Several methods are devoted to this problem, such as the well-known RX algorithm, which takes advantage of second-order statistics. In this paper we propose an effective algorithm for anomaly detection and discrimination based on high-order statistics. These include the normalized third central moment, referred to as skewness, and the normalized fourth central moment, referred to as kurtosis, which measure the asymmetry and the flatness of a distribution, respectively. The Gaussian distribution is completely determined by its first two moments and has zero skewness and zero excess kurtosis, so these two indices quantify the deviation of a distribution from the Gaussian and are suitable for anomaly detection. The proposed algorithm can be generalized to use any high-order moment. The experimental results with AVIRIS data demonstrate that it can provide comparable detection results with low computational complexity.
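A minimal sketch of the moment-based indices, together with a projection-pursuit style search for a maximally non-Gaussian direction. The random-projection search and all function names are illustrative assumptions, not necessarily the authors' exact procedure:

```python
import numpy as np

def skewness(x):
    """Normalized third central moment."""
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 3).mean() / s ** 3

def excess_kurtosis(x):
    """Normalized fourth central moment minus 3 (zero for a Gaussian)."""
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    return ((x - m) ** 4).mean() / s ** 4 - 3.0

def most_non_gaussian_projection(X, n_trials=200, seed=0):
    """Score random unit projections of the pixel cloud and keep the
    one with the largest |skewness|; extreme pixels along that
    direction are candidate anomalies."""
    rng = np.random.default_rng(seed)
    best, best_v = None, -1.0
    for _ in range(n_trials):
        v = rng.normal(size=X.shape[1])
        v /= np.linalg.norm(v)
        sc = abs(skewness(X @ v))
        if sc > best_v:
            best, best_v = v, sc
    return best, best_v
```

A pure Gaussian background drives both indices toward zero, so a large value flags a direction contaminated by anomalous pixels.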
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478757
A simulated annealing method of partitioning hyperspectral imagery, initialized by a supervised classification method, is investigated to provide spatially smooth class labeling for terrain mapping applications. The method is used to obtain an estimate of the mode of a Gibbs distribution defined over a symmetric spatial neighborhood system, based on an energy function characterizing spectral disparities in Euclidean distance and spectral angle. Experiments are conducted on a 210-band HYDICE scene that contains a diverse range of terrain features and that is supported with ground truth. Both visual and quantitative results demonstrate a clear benefit of this method as compared to spectral-only supervised classification or unsupervised annealing that has been initialized randomly.
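The annealing idea can be sketched as below, assuming a squared-Euclidean data term plus a Potts smoothness term and a geometric cooling schedule; the paper's actual energy function also incorporates spectral angle, and the function names here are illustrative:

```python
import numpy as np

def anneal_labels(cube, means, init_labels, beta=1.0, t0=2.0, n_sweeps=20, seed=0):
    """Metropolis annealing of class labels. Per-pixel energy =
    squared Euclidean distance to the class mean plus a Potts term
    (beta per disagreeing 4-neighbor) encouraging smooth labeling."""
    rng = np.random.default_rng(seed)
    h, w, _ = cube.shape
    labels = init_labels.copy()
    k = len(means)

    def local_energy(i, j, lab):
        e = np.sum((cube[i, j] - means[lab]) ** 2)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] != lab:
                e += beta
        return e

    for sweep in range(n_sweeps):
        T = t0 * (0.9 ** sweep)  # geometric cooling schedule
        for i in range(h):
            for j in range(w):
                new = rng.integers(k)
                dE = local_energy(i, j, new) - local_energy(i, j, labels[i, j])
                if dE < 0 or rng.random() < np.exp(-dE / T):
                    labels[i, j] = new  # accept the proposed label
    return labels
```

The Potts term is what distinguishes this from spectral-only classification: isolated label flips cost energy, so the annealed result is spatially smooth.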
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478758
Unsupervised classification of multispectral and hyperspectral data is useful for a range of military and commercial remote sensing applications. These include terrain categorization, material detection and identification, and land use quantification. Here we show the development and application of an adaptive Gaussian Spectral Clustering approach to unsupervised classification of hyperspectral data. The method is built on adaptively estimating the parameters of a Gaussian mixture model over local regions, and includes methods for adjusting to the inevitable non-stationarity of hyperspectral image data. The algorithm is suitable for application to streaming hyperspectral data as would be required for real-time applications. In this paper we outline the model used, estimation techniques, and methods for adaptively estimating key model parameters required to characterize hyperspectral imagery. The key elements of the approach are demonstrated on reflective band hyperspectral data from NRL WarHORSE and NASA AVIRIS hyperspectral imagery.
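A minimal sketch of the core estimation step: EM for a full-covariance Gaussian mixture. The farthest-point initialization and fixed iteration count are illustrative simplifications of the adaptive, locally estimated version described above:

```python
import numpy as np

def gmm_em(X, k, n_iter=100):
    """EM for a full-covariance Gaussian mixture model on X (n x d)."""
    n, d = X.shape
    # Simple farthest-point initialization of the means
    mu = [X[0].astype(float)]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - m) ** 2, axis=1) for m in mu], axis=0)
        mu.append(X[np.argmax(d2)].astype(float))
    mu = np.array(mu)
    cov = np.stack([np.cov(X, rowvar=False) + 1e-6 * np.eye(d)] * k)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: per-class log densities, then normalized responsibilities
        logp = np.empty((n, k))
        for j in range(k):
            diff = X - mu[j]
            _, logdet = np.linalg.slogdet(cov[j])
            maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov[j]), diff)
            logp[:, j] = np.log(pi[j]) - 0.5 * (logdet + maha + d * np.log(2 * np.pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, covariances
        nk = r.sum(axis=0)
        pi = nk / n
        for j in range(k):
            mu[j] = (r[:, j] @ X) / nk[j]
            diff = X - mu[j]
            cov[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return pi, mu, cov
```

An adaptive/streaming variant would re-run or incrementally update these estimates over local image regions rather than fitting the whole scene at once.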
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478759
The spatial resolution of spaceborne instruments has increased substantially in the three decades since Landsat-1 was launched. Higher spatial resolution has made some applications possible, but it has also brought new challenges in ground cover classification. At a resolution of around 1 meter, vegetation often displays distinct textures, so texture may make differentiation among some cover types possible. Ikonos panchromatic and multispectral data are used to examine how spatial features improve classification accuracy. In this study, textural features are extracted from co-occurrence matrices, contextual features are derived from neighborhood properties, and the maximum likelihood method is used for classification. It is shown that for the test data both types of spatial features, and especially the contextual measures, can significantly improve the classification accuracies. The discrete wavelet transform is used to extract textural features for two types of vegetation. Transformed divergence, a measure of separability, is shown to be much enhanced when textural features are included.
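The co-occurrence texture features can be sketched as follows, assuming a single pixel offset and two common Haralick-style measures (contrast and homogeneity); the quantization level and function names are illustrative:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for offset (dx, dy),
    with the image quantized to `levels` gray levels."""
    q = np.floor(img / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    h, w = q.shape
    P = np.zeros((levels, levels))
    for i in range(max(0, -dy), min(h, h - dy)):
        for j in range(max(0, -dx), min(w, w - dx)):
            P[q[i, j], q[i + dy, j + dx]] += 1  # count gray-level pair
    return P / P.sum()

def glcm_features(P):
    """Contrast and homogeneity computed from a normalized GLCM."""
    n = P.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    contrast = np.sum(P * (i - j) ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    return contrast, homogeneity
```

A smooth texture concentrates mass near the GLCM diagonal (low contrast, high homogeneity); a fine, busy texture spreads it off-diagonal.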
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478760
Many imaging techniques involve the extraction of mixed signal information from a pixel. In most mixed pixel cases, the mixture is assumed to be linear, and signal separation routines have been developed with this mixing scheme in mind. One such signal separation routine incorporates the Expectation Maximization Maximum Likelihood (EMML) algorithm for the determination of signal mixtures in a pixel. This routine, however, is inefficient in that it requires many iterations to converge to a solution. This report describes the implementation of a rescaled block-iterative EMML approach, commonly used in medical emission tomography image processing, to perform signal separation while greatly improving computational efficiency and the rate of convergence.
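The plain EMML update for nonnegative unmixing can be sketched as below (a hypothetical minimal form, not the authors' implementation); the rescaled block-iterative variant applies the same multiplicative update to one block of bands at a time, cycling through the blocks, which is what accelerates convergence:

```python
import numpy as np

def emml_unmix(A, b, n_iter=2000):
    """EMML multiplicative updates for nonnegative abundances x,
    driving Ax toward b in the Kullback-Leibler sense.
    A: (bands x endmembers), b: (bands,), both nonnegative."""
    x = np.full(A.shape[1], b.sum() / A.shape[1])  # flat nonnegative start
    colsum = A.sum(axis=0)
    for _ in range(n_iter):
        ratio = b / np.maximum(A @ x, 1e-12)       # guard against divide-by-zero
        x = x * (A.T @ ratio) / colsum             # multiplicative EM update
    return x
```

Because the update is multiplicative, abundances initialized positive stay nonnegative without any explicit projection step.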
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478761
The watershed-clustering algorithm was adapted for use in multi-dimensional spectral space and was used to define clusters in Hyperspectral Digital Imagery Collection Experiment (HYDICE) data. This algorithm identifies clusters as peaks in a B-dimensional topographic relief, where B is the number of wavelength bands. Image pixel spectra are represented as points in this multi-dimensional space. Analysis is done at increasing values of radiometric resolution, defined by the number of segments into which each wavelength axis is divided. Segmentation of the axes divides the multi-dimensional space into bins, and the number of pixels in each bin is determined. The histogram of the bin populations defines the topography for the watershed analysis. Spectral clusters correspond to mountains or islands on this multi-dimensional surface. The algorithm is analogous to submerging this topography under water and revealing clusters by determining when mountain peaks appear as the water surface is lowered. Testing of this algorithm reveals some surprising features. Although increasing the radiometric resolution (bins per axis) generally results in large clusters breaking up into greater numbers of small clusters, this is not always the case. Under some circumstances, separate clusters can recombine into one large cluster when the radiometric resolution is increased. This behavior is caused by the existence of single-pixel voxels, which smooth out the topography, and by the fact that the voxels retain a surprising degree of connectivity, even at high radiometric resolutions. These characteristics of the high-dimensional spectral data provide the basis for further development of the watershed algorithm.
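The binning and flooding steps can be sketched as follows, with axis-aligned bin adjacency standing in for the full connectivity analysis; the function names and union-find merge are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def spectral_bins(pixels, segments=4):
    """Quantize each wavelength axis into `segments` bins and count
    the pixel population of each occupied multi-dimensional bin."""
    lo, hi = pixels.min(axis=0), pixels.max(axis=0)
    q = ((pixels - lo) / (hi - lo + 1e-12) * segments).astype(int)
    q = np.clip(q, 0, segments - 1)
    return Counter(map(tuple, q))

def clusters_at_level(pop, level):
    """Count bins with population > level, merged when they are
    axis-aligned neighbors -- the 'islands' exposed at one water level."""
    above = {b for b, n in pop.items() if n > level}
    parent = {b: b for b in above}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for b in above:
        for axis in range(len(b)):
            for step in (-1, 1):
                nb = b[:axis] + (b[axis] + step,) + b[axis + 1:]
                if nb in above:
                    ra, rb = find(b), find(nb)
                    if ra != rb:
                        parent[ra] = rb
    return len({find(b) for b in above})
```

Lowering `level` progressively exposes smaller peaks, which is the discrete analogue of lowering the water surface in the abstract's description.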
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478762
In many applications of remotely-sensed imagery, one of the first steps is partitioning the image into a tractable number of regions. In spectral remote sensing, the goal is often to find regions that are spectrally similar within the region but spectrally distinct from other regions. There is often no requirement that these regions be spatially connected. Two goals of this study are to partition a hyperspectral image into groups of spectrally distinct materials, and to do so without human intervention. To this end, this study investigates the use of multi-resolution, multi-dimensional variants of the watershed-clustering algorithm on Hyperspectral Digital Imagery Collection Experiment (HYDICE) data. The watershed algorithm looks for clusters in a histogram: a B-dimensional surface where B is the number of bands used (up to 210 for HYDICE). The algorithm is applied to HYDICE data of the Purdue Agronomy Farm, for which ground truth is available. Watershed results are compared to those obtained by using the commonly-available Iterative Self-Organizing Data Analysis Technique (ISODATA) algorithm.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478763
In the focal plane of a pushbroom imager, a linear array of pixels is scanned across the scene, building up the image one row at a time. For the Multispectral Thermal Imager (MTI), each of fifteen different spectral bands has its own linear array. These arrays are pushed across the scene together, but since each band's array is at a different position on the focal plane, a separate image is produced for each band. The standard MTI data products (LEVEL1B_R_COREG and LEVEL1B_R_GEO) resample these separate images to a common grid and produce coregistered multispectral image cubes. The coregistration software employs a direct "dead reckoning" approach. Every pixel in the calibrated image is mapped to an absolute position on the surface of the earth, and these are resampled to produce an undistorted coregistered image of the scene. To do this requires extensive information regarding the satellite position and pointing as a function of time, the precise configuration of the focal plane, and the distortion due to the optics. These must be combined with knowledge about the position and altitude of the target on the rotating ellipsoidal earth. We will discuss the direct approach to MTI coregistration, as well as more recent attempts to tweak the precision of the band-to-band registration using correlations in the imagery itself.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478764
Surface temperatures and emissivities can be estimated using multispectral thermal infrared (TIR) data from various instruments. In this paper the temperature-emissivity separation algorithm (TES) is modified to recover surface temperatures and emissivities using Multispectral Thermal Imager (MTI) data from two mid infrared (MIR) and three TIR bands. As TES was originally designed for use with the five TIR bands from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument, broadening its application to MIR wavelengths requires careful evaluation of possible atmospheric and reflected daytime solar illumination effects. Numerical simulations show that TES results for MTI data, assuming error-free atmospheric corrections, are statistically similar to TES results for ASTER data, with surface temperature recovery within +/- 1.5K and emissivity recovery within +/- 0.02. However, strong atmospheric absorption (as high as 61%), and expected daytime reflected solar illumination (as high as 50% of measured radiance) in the MIR bands suggest that TES results for MTI data are more sensitive to errors in atmospheric compensation. Furthermore, the relatively steep slope of Planck's radiation curve for typical terrestrial temperatures in the MIR wavelengths suggests that inverting temperatures from measured MIR radiance using Planck's law will be more sensitive to error. Numerical simulations and preliminary image analysis suggest that the three TIR MTI bands are not sufficient to obtain the desired TES results. However, omitting one of the MIR bands and using a four-band configuration decreases sensitivity to atmospheric effects, while still maintaining acceptable theoretical TES performance.
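The sensitivity argument rests on inverting Planck's law for temperature from a single-band radiance, which can be sketched as below (function names are illustrative; the radiation constants are the standard SI values):

```python
import numpy as np

# First and second radiation constants (SI units)
C1 = 1.191042e-16   # 2 h c^2, in W m^2 / sr
C2 = 1.4387769e-2   # h c / k_B, in m K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    return C1 / (wavelength_m ** 5 * (np.exp(C2 / (wavelength_m * temp_k)) - 1.0))

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law for temperature at a single wavelength."""
    return C2 / (wavelength_m * np.log(1.0 + C1 / (wavelength_m ** 5 * radiance)))
```

Because the exponential term dominates at MIR wavelengths for terrestrial temperatures, errors in the measured radiance (for example, uncorrected reflected solar contributions) propagate directly through this inversion.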
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478765
Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land cover features including towns, wildfire burnscars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478767
Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478768
We present a comparison of images from the ETM+ sensor on Landsat-7 and the ALI instrument on EO-1 over a test site in Rochester, NY. The site contains a variety of features, ranging from water of varying depths to deciduous/coniferous forest, grass fields, and urban areas. The nearly coincident cloud-free images were collected just one minute apart on 25 August 2001. We atmospherically corrected each image with the 6S atmosphere model, using aerosol optical thickness and water vapor column density measured by a Cimel sun photometer within the Aerosol Robotic Network (AERONET), along with ozone density derived from NCEP data. We present three-color composites from each instrument that show excellent qualitative agreement. We present ETM+ and ALI reflectance spectra for water, grass, and urban targets. We make a more detailed comparison for our forest site, where we use measured geometric and optical properties as input to the SAIL canopy reflectance model, which we compare to the ETM+, ALI, and EO-1 Hyperion reflectance spectra.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478769
There has been considerable interest in the application of real-time processing techniques to the problem of hyperspectral scene analysis. Recent satellite and aircraft systems can produce data at a rate far faster than the data can be analyzed by interactive computer procedures. Automated and fast procedures for preparing the data for analyst inspection are required for even laboratory use of the large quantities of data. In addition, there are several real-time applications where the data must be processed as it is being acquired. A typical application is a computing system on board an airplane for operator analysis of the scene as the hyperspectral sensor collects data. In this paper the possible tradeoffs for rapid analysis are discussed, including choice of algorithm, possible dimensionality reduction, and reduced display level. A real-time anomaly detection processing system based on the N-FINDR algorithm has been designed and implemented for the Night Vision Imaging Spectrometer (NVIS). The N-FINDR algorithm is a linear-unmixing-based algorithm that automatically finds spectral endmembers. The algorithm works by inflating a simplex inside the data, beginning with a random set of pixels. Once these endmember spectra have been found, the image cube can be unmixed using a least-squares approach into a map of fractional abundances of each endmember material in each pixel. In addition to the N-FINDR algorithm, the real-time processing system performs calibration, bad pixel removal, and display of selected fraction planes. The real-time processor is implemented on a commercial Pentium IV computer.
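The simplex-inflation step of N-FINDR can be sketched as a coordinate-ascent search over pixel indices, assuming the data have already been reduced to one fewer dimension than the number of endmembers (function names and the fixed sweep count are illustrative):

```python
import numpy as np

def nfindr(pixels, n_end, n_sweeps=3, seed=0):
    """N-FINDR endmember search: grow the simplex of n_end vertices
    with maximum volume inside the (dimension-reduced) data cloud.
    Volume is proportional to |det| of the augmented vertex matrix."""
    rng = np.random.default_rng(seed)
    # pixels assumed already reduced to n_end - 1 dimensions (e.g. by PCA)
    idx = rng.choice(len(pixels), n_end, replace=False)

    def volume(ind):
        E = np.vstack([np.ones(n_end), pixels[ind].T])  # (n_end x n_end)
        return abs(np.linalg.det(E))

    best = volume(idx)
    for _ in range(n_sweeps):
        for j in range(n_end):          # try replacing each vertex in turn
            for p in range(len(pixels)):
                trial = idx.copy()
                trial[j] = p
                v = volume(trial)
                if v > best:            # keep any volume-increasing swap
                    idx, best = trial, v
    return idx
```

Once the endmember indices are found, unmixing each pixel reduces to the least-squares abundance solve the abstract describes.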
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478770
This paper describes a means of achieving fault tolerance and architecture extensibility for parallel/distributed systems that support spectral analysis. These attributes are essential to critical 24/7/366 operations, and they improve upon systems that only enhance throughput. They also address the single-point-of-failure issues attendant upon architectures that commit critical operations to single machines. Graceful throughput degradation is achieved to mitigate all-or-nothing approaches. Parallel/distributed processing has three important goals. The first is for the subject application to provide faster throughput than it would while running on a single CPU or computer. The second goal is to make best use of existing capital equipment. For critical systems, the third goal is fault tolerance via redundancy. This project addresses the third goal. It seeks to demonstrate a means to make parallel/distributed processing systems fault tolerant so that crashes of individual machines in a cluster do not bring the entire system down. In spite of individual machine failures, it also seeks to ensure the completion of all tasks so that system throughput degrades gracefully. These goals can be met by a system composed of a generic TCP/IP LAN connecting some number of ordinary office computers and laboratory workstations that are heterogeneous and of unknown reliability. Described here are the concept formulation and design. Other projects in this arena are referenced; these provide essential technology to this present effort. Particular application is made to detecting unspecified anomalies in unspecified data streams drawn from staring continuous-dwell sensors. This application enables the reliable non-stop detection of unexpected events, with the results immediately made available to human analysts or additional automated processing.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478771
Because of its fine wavelength resolution, hyperspectral imaging (HSI) offers the possibility of detecting and identifying objects of interest by their spectral characteristics. The Automatic Target Cueing, Detection and Recognition (ATC/D/R) community is developing new methods to predict and measure HSI ATC/D/R system performance. The variation of spectral signatures due to target characteristics, atmospheric effects, and other environmental factors contributes to the challenge of developing and evaluating robust algorithms for HSI ATC/D/R systems. A rigorous method for test and evaluation is necessary to determine system performance and define the most efficient and effective sensor/algorithm solutions for a proposed mission. The AFRL Sensors Directorate Comprehensive Performance Assessment of Sensor Exploitation (COMPASE) Center has developed standardized tools and methods that permit performance comparison of candidate ATC algorithms [1,2]. This paper defines the methodology employed for an independent evaluation of HSI ATC algorithms. The performance metrics, truthing and scoring techniques, and the importance of understanding the operating conditions (OCs) represented by each data set are discussed. The OC definitions for spectral systems are different from the OCs as defined by radar systems. The environmental considerations drive data collection planning and truthing requirements. Knowledge of the performance of an algorithm in different OCs is essential information when considering the transition of an HSI sensor/algorithm system or the design of future HSI algorithms.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478772
In operational remote sensing, the implicit model for cloud geometry is a homogeneous plane-parallel slab of infinite horizontal extent. Each pixel is indeed processed as if it exchanged no radiant energy whatsoever with its neighbors. The shortcomings of this conceptual model have been well documented in the specialized literature but rarely mitigated. The worst-case scenario is probably high-resolution imagery where dense isolated clouds are visible, often with both the bright (reflective) and dark (transmissive) sides apparent from the same satellite viewing angle: the low transmitted radiance could conceivably be interpreted in plane-parallel theory as no cloud at all. An alternative to the plane-parallel cloud model is introduced here that has the same appeal of being analytically tractable, at least in the diffusion limit: the spherical cloud. This new geometrical paradigm is applied to radiances from cumulus clouds captured by DOE's Multispectral Thermal Imager (MTI). Estimates of isolated cloud opacities are a necessary first step in correcting radiances from surface targets that are visible in the midst of a broken-cloud field. This type of advanced atmospheric correction is badly needed in remote sensing applications such as nonproliferation detection, where waiting for a cloud-free look in the indefinite future is not a viable option.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478773
VNIR-SWIR data from the DOE MTI satellite are used to demonstrate the retrieval of aerosol and cloud properties. MTI data offer high spatial resolution and high SNR. Furthermore, collection from both nadir and off-nadir views offers a unique opportunity to assess atmospheric path-length effects through both clear and cloudy conditions. Data sets were acquired to investigate cloud and aerosol properties on 29 July and 22 August 2000 over the coastal region of Massachusetts near Plymouth. Two topics are investigated: (1) retrieval of aerosol optical properties, and (2) characterization of water and ice clouds at nadir and off-nadir views. The data collected on 22 August 2000 represent a relatively clear atmospheric condition in the vicinity of the Pilgrim Power Plant, Plymouth. Data over both vegetated land and ocean are analyzed. Two algorithms for aerosol retrieval over land are compared: the conventional dense-dark-vegetation (DDV) algorithm and a generalized VIS-SWIR reflectance correlation and scatter-plot analysis (VSP) algorithm. Optical depths at multiple wavelengths and the aerosol type were derived and compared with ground-based AERONET data. It is demonstrated that the VSP algorithm captures the spectral variability in aerosol extinction and thus performs better. The data collected on 29 July 2000 over the same area were investigated for cloud characteristics at different viewing geometries. Top-of-the-Atmosphere (TOA) reflectance statistics are computed for a common cloudy region. It is observed that in cloud-free regions, the nadir TOA reflectance is lower than that from off-nadir observations; this is due to the increased atmospheric scattering along the longer paths. On the other hand, TOA reflectance over a cloud depends on the scattering phase function and the look angle. Here we use simple expressions to illustrate that the effects for water and ice particles can be quite different, resulting in very different viewing-geometry effects between cumulus and cirrus clouds.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478774
Although playas are specifically mentioned as protected in Federal regulations regarding the Clean Water Act, there is a lack of supporting evidence describing basic concepts relevant to vegetation, soils, and hydrology for the purposes of establishing a scientifically sound method of playa delineation. To date no hydrological studies have attempted to describe the frequency and duration associated with ponded water on desert playa systems. This study analyzed the duration and frequency of inundation on hard playas in the western Mojave Desert on Edwards Air Force Base, CA during the wet season (Oct.-Mar.) using satellite imagery over the last 19 years and precipitation records over the last 59 years. Our results indicate that ponding on these playas, lasting at least 16 days, occurs with a frequency of 0.51 or every other year. Average precipitation needed to initiate ponding is 8.29 cm. Years in which ponding was verified for 16 days between October and March could also be verified for ponding for at least 16 days during the growing season (Mar.-Nov.). Total duration of ponding through the wet season was found to range between 1 and 32 weeks with a second order non-linear relationship between length of inundation and total rainfall during the wet season.
Brian Cairns, Barbara E. Carlson, Ruoxian Ying, J. Laveigne
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478776
In this paper, tradeoffs between speed and accuracy in the atmospheric correction of hyperspectral imagery are examined. Among the issues addressed are: the use of scattering calculations on a sparse spectral grid and the consequent accuracy and speed tradeoffs; methods of minimizing the required number of quadrature points in multiple scattering calculations; the effects of the vertical profiles of aerosols and absorbing gases on atmospheric correction; and efficient approaches for including the effects of sensor variability, or imperfections, on atmospheric correction.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478777
The ACORN atmospheric correction routine was evaluated using criteria established in a previous performance assessment effort. The data used in this analysis represented a variety of background and atmospheric conditions and were collected by the HYDICE imaging spectrometer. The baseline technique used to match retrieved reflectance spectra with ground truth was the matched filter with the bad bands deleted. Additional investigations examined the effects on performance when the spectral angle mapper and the mixture-tuned matched filter algorithms were used in place of the matched filter, and when different numbers of bands were employed during spectral matching. Results substantiated the conclusion drawn from the previous study that the empirical line method, a ground truth-based atmospheric correction technique, generally outperforms existing model-based techniques such as ACORN.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478778
We present apparently new points of view for radiative transfer problems in both the frequency and time domains. Apparently for the first time, a simple physical picture has emerged of the underlying essence of scattered radiance for isotropic, axially-symmetric scattering: a single set of cutoff traveling waves in nonconservative linear media. The approach is first shown in the frequency domain. Its accuracy is demonstrated by applying it to a problem in the frequency domain whose solution was published by Chandrasekhar some five decades ago. He determined it from the conventional, 95-year-old, usually difficult-to-solve integro-differential equation; we bolster confidence in our method by showing that the new method produces the same analytical answer. The new technique converts the integro-differential equation formulation of radiative transfer into a purely differential equation formulation, consisting here of a mixture of ordinary and partial derivatives, and solves that. The paper also analyzes the time domain in a different manner, and the same type of result is obtained.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478779
The Optics Department of ONERA has developed and implemented an inverse algorithm, COCHISE, to correct hyperspectral images for atmospheric effects in the visible-NIR-SWIR domain (0.4-2.5 micrometers). The algorithm automatically determines the integrated water-vapor content for each pixel from the radiance at sensor level, using a LIRR-type (Linear Regression Ratio) technique. It then retrieves the spectral reflectance at ground level using atmospheric parameters computed with Modtran4, including the water-vapor spatial dependence obtained in the first step. Adjacency effects are taken into account using spectral kernels obtained from two Monte-Carlo codes. Results obtained with the COCHISE code on real hyperspectral data are first compared to ground-based reflectance measurements. AVIRIS images of Railroad Valley Playa, CA, and HyMap images of Hartheim, France, are used. The inverted reflectance agrees very well with the measurement at ground level for the AVIRIS data set, which validates the COCHISE algorithm; for the HyMap data set, the results are still good but cannot be considered as validating the code. The robustness of the COCHISE code is also evaluated. For this purpose, spectral radiance images are modeled at the sensor level with the direct algorithm COMANCHE, which is the reciprocal code of COCHISE. The COCHISE algorithm is then used to compute the reflectance at ground level from the simulated at-sensor radiance. A sensitivity analysis has been performed, as a function of errors in several atmospheric parameters and of instrument defects, by comparing the retrieved reflectance with the original one. The COCHISE code shows quite good robustness to errors in the input parameters, except for aerosol type.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478780
The distorting effect of the atmosphere on the monitoring of small-sized temperature anomalies (hot objects) using AVHRR/NOAA satellite measurements (third channel, 3.75 micrometers) is studied through numerical simulation with actual data on the meteorological parameters of the atmosphere and the geometry of satellite observations for Tomsk in May-September 1999. The following results were obtained for the third AVHRR channel: (a) statistical data on variations of the characteristics of upward radiation for the Tomsk Region, (b) estimates of the distorting effect of the atmosphere on the reconstruction of hot-object radiance, and (c) the dependence of the solar radiation scattered by the atmosphere on the geometry of observations and the aerosol characteristics.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478781
This paper discusses a method for searching a database of known material signatures to find the closest match to an unknown signature. The database search method combines fuzzy logic and voting methods to achieve a high level of classification accuracy with the signatures and data cubes tested. The paper discusses the method in detail, including background and test results, and references public literature on components used by the method but developed elsewhere. The paper results from a project whose main objective is to produce an easily integrated software tool that makes an accurate best guess as to the material(s) indicated by the signature of a pixel found to be interesting according to some analysis method, such as anomaly detection or scene characterization. Anomaly detection examines a spectral cube and determines which pixels are unusual relative to the majority background. Scene characterization finds pixels whose signatures are representative of the unique pixel groups. The current project fully automates the process of determining unknown pixels of interest, taking the signatures from the flagged pixels, searching a database of known signatures, and making a best guess as to the material(s) represented by each pixel's signature. The method ranks the possible materials in order of likelihood, with the purpose of accounting for multiple materials existing in the same pixel. In this way it is possible to deliver multiple reportings when more than one material is closely matched within some threshold, which facilitates human analysis and decision-making for production purposes. The implementation supports rapid response to interactive analysis needs in support of strategic and tactical operational requirements in both the civil and defense sectors.
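The ranked library search described above can be sketched with a simple spectral-angle match. This is a minimal illustration, not the paper's method (which additionally uses fuzzy logic and voting); the material names, signatures, and the 0.2 rad threshold below are invented for the example.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two signatures."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def rank_library(pixel, library, threshold=0.2):
    """Rank library materials by spectral angle; keep those under threshold."""
    scores = {name: spectral_angle(pixel, sig) for name, sig in library.items()}
    return sorted((n for n in scores if scores[n] <= threshold), key=scores.get)

# Hypothetical 4-band library of "known" signatures
library = {
    "grass": [0.05, 0.08, 0.45, 0.50],
    "soil":  [0.20, 0.25, 0.30, 0.35],
    "water": [0.08, 0.06, 0.02, 0.01],
}
print(rank_library([0.06, 0.09, 0.44, 0.49], library))  # → ['grass']
```

Returning every material under the threshold, in likelihood order, is what allows multiple reportings when a pixel plausibly contains more than one material.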
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478782
Orthogonal Subspace Projection (OSP) has been shown to be a successful technique for hyperspectral image analysis. It requires a linear mixture model with complete target knowledge to perform sub-pixel detection and mixed-pixel classification. Constrained Energy Minimization (CEM) has also been shown to be effective in sub-pixel detection and mixed-pixel classification, and it needs only knowledge of the targets of interest. The RX algorithm, which has been widely used for anomaly detection in signal processing, does not require any prior target information. Interestingly, these three techniques are closely related in terms of the information each uses: all perform some form of matched filtering, with different levels of information built into the filter. This paper investigates and explores this relationship, which sheds light on their algorithm design.
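Of the three filters compared here, CEM has the most compact closed form: the filter w = R⁻¹d / (dᵀR⁻¹d) minimizes the mean output energy subject to the unit-gain constraint wᵀd = 1, where R is the sample correlation matrix and d the target signature. A minimal numpy sketch of that standard formulation (the data below are synthetic, not from the paper):

```python
import numpy as np

def cem_filter(X, d):
    """Constrained energy minimization filter.

    X : (num_pixels, num_bands) data matrix
    d : (num_bands,) target signature
    Returns w minimizing mean output energy subject to w @ d == 1.
    """
    R = X.T @ X / X.shape[0]        # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d @ Rinv_d)

rng = np.random.default_rng(0)
d = np.array([1.0, 0.2, 0.5])
background = rng.normal(size=(500, 3))
X = np.vstack([background, d * 5.0])   # embed one strong target pixel
w = cem_filter(X, d)
print(w @ d)        # the unit-gain constraint holds
print(X[-1] @ w)    # strong response at the embedded target pixel
```

OSP adds explicit knowledge of the undesired signatures, and RX drops the target signature entirely; all three end up applying an inverse-correlation-weighted inner product of this general shape.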
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478783
We investigate two well-known techniques, Constrained Energy Minimization (CEM) and Orthogonal Subspace Projection (OSP), and find that they can be treated as theoretically equivalent when the noise is independent and identically distributed (i.i.d.) and its variance is small compared to the signals, i.e., when the SNR is large enough. To make this condition hold, two approaches are applied to modify the OSP technique. One is to estimate the noise covariance matrix and whiten the noise so that it is i.i.d. with unit variance. The other is to estimate sufficient undesired signatures such that the noise approaches i.i.d. when the data are projected onto the orthogonal complement of the undesired-signature subspace. Results using simulated and real data demonstrate that our conclusion on the relationship between OSP and CEM is correct. This result is instructive for algorithm selection in practical applications.
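The first modification, noise whitening, is a standard linear-algebra step: given a noise covariance K, transforming the data by K^(-1/2) makes the noise i.i.d. with unit variance. A self-contained sketch on synthetic correlated noise (the covariance values are invented for the demonstration):

```python
import numpy as np

def whiten(X, K):
    """Whiten data so noise with covariance K becomes i.i.d., unit variance.

    Uses the inverse symmetric square root of K, built from its eigendecomposition.
    X : (num_pixels, num_bands); K : (num_bands, num_bands) noise covariance.
    """
    vals, vecs = np.linalg.eigh(K)
    K_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return X @ K_inv_sqrt, K_inv_sqrt

rng = np.random.default_rng(1)
K = np.array([[2.0, 0.5],      # hypothetical correlated noise covariance
              [0.5, 1.0]])
L = np.linalg.cholesky(K)
noise = rng.normal(size=(20000, 2)) @ L.T   # draw correlated noise samples
white, _ = whiten(noise, K)
print(np.cov(white.T))   # close to the 2x2 identity
```

After this transform the whitened-noise condition under which OSP and CEM coincide is (approximately) satisfied, which is the point of the paper's first approach.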
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478785
The following paper describes a recent data collection exercise in which the WAR HORSE visible-near-infrared hyperspectral sensor was employed to collect wide-area hyperspectral data sets. Two anomaly detection algorithms, Subspace RX (SSRX) and Gaussian Spectral Clustering (GSC), were applied to the data, and their performance is discussed.
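For context, both SSRX and GSC are variants of the basic RX anomaly detector, which scores each pixel by its Mahalanobis distance from the background. The sketch below is plain global RX on synthetic data, not the subspace or clustering variants evaluated in the paper:

```python
import numpy as np

def rx_scores(X):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    from the scene mean under the scene covariance."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    centered = X - mu
    # per-pixel quadratic form (x - mu)^T C^{-1} (x - mu)
    return np.einsum("ij,jk,ik->i", centered, cov_inv, centered)

rng = np.random.default_rng(4)
background = rng.normal(size=(2000, 5))       # 5-band background clutter
anomaly = np.full((1, 5), 6.0)                # one pixel far from the background
X = np.vstack([background, anomaly])
scores = rx_scores(X)
print(scores.argmax() == 2000)   # → True: the anomaly gets the top score
```

SSRX applies this after projecting out the leading background subspace, and GSC replaces the single Gaussian background with a cluster mixture; both refine the same distance-from-background idea.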
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478786
This paper describes our hyperspectral reflectance modeling of a forest canopy based on measured input parameters and comparison with Earth Observing - 1 (EO-1) Advanced Land Imager (ALI) and Hyperion data. The model uses a high resolution, three-dimensional (3D) ray-tracing approach to estimate the intercepted and scattered solar radiation at multiple narrow wavelength bands. We present the comparisons of the effects of woody biomass, leaf litter, and clumping on reflectance signatures. The experimental data used for the model were collected in a hardwood forest canopy in Rochester, New York. Model calculations also are compared to a more simplified, low-resolution 3D model and a simple, multi-layer differential equation model.
Harvey C. Schau, Ross E. Soulon, Garrett L. Hall, Dennis J. Garrood
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478787
Computed Tomographic Imaging Spectrometers are described and shown to be capable of providing real-time flash multispectral imagery. Restoration equations and techniques for optimizing performance are described. Experimental results are shown illustrating system optimization and practical usage of CTIS systems.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478788
Advances in imaging technology and sensors have made airborne remote sensing systems viable for many applications that require reasonably good resolution at low cost. Digital cameras are making their mark on the market by providing high resolution at very high rates. This paper describes an aircraft-mounted imaging system (AMIS) being designed and developed at Texas A&M University-Corpus Christi (A&M-CC) with the support of a grant from NASA. The approach is to first develop and test a one-camera system that will later be upgraded to a five-camera system offering multispectral capabilities. AMIS will be low-cost, rugged, and portable, and will have its own battery power source. Its immediate use will be to acquire images of the coastal area of the Gulf of Mexico for a variety of studies covering a broad spectral range from the near-ultraviolet to the near-infrared. This paper describes AMIS and its characteristics, discusses the process for selecting the major components, and presents the progress to date.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478789
The fundamental component of a stationary interferometer is a semi-transparent beam-splitter plate that introduces a phase delay between the two interfering rays. The phase delay changes with the incident angle of the entering ray, so the entire interference pattern is produced as the device moves over the surface of the observed target. Owing to their technical characteristics, these interferometers can reliably be adapted for aerospace remote sensing applications. Their ability to produce interference of the incoming radiation over a broad range of optical wavelengths, together with the possibility of adjusting the spectral resolution by changing the optical aperture and the sampling step, makes these instruments interesting for Earth remote sensing. This paper describes results from laboratory experiments and numerical simulations carried out to investigate the use of static interferometers for remote sensing purposes.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478790
Numerous researchers have demonstrated the accuracy and utility of improved spatial resolution multispectral imagery by sharpening it with higher spatial resolution panchromatic imagery. A much more limited number of researchers have sharpened hyperspectral imagery with panchromatic imagery. In this research we have developed an algorithm that spatially sharpens specific ranges of hyperspectral bands with spectrally correlated multispectral bands of higher spatial resolution, to improve the spatial resolution of the hyperspectral imagery while maintaining or improving its spectral fidelity. Preliminary validation of the algorithm has been conducted using a 7m AVIRIS scene of the Maryland Eastern Shore containing corn, soybean, and wheat fields. These data were used to simulate 28m HSI and 7m MSI, which were used in the sharpening process. Initial analysis has verified the spectral accuracy of the sharpened data. In the next phase of the study, airborne spectral data from two different sensors will be used in the sharpening process, with the results used as input to USDA/ARS crop yield and stress models.
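The abstract does not specify the sharpening algorithm itself, so the following is only a generic sketch of one common band-to-band sharpening idea (ratio/intensity modulation): upsample the low-resolution HSI band, then modulate it by the high-frequency detail of a spectrally correlated high-resolution band. The 8x8 scene and 4x resolution ratio are invented for the demonstration.

```python
import numpy as np

def sharpen_band(hsi_low, msi_high, factor):
    """Ratio-based sharpening sketch: modulate an upsampled HSI band by the
    high-resolution detail of a spectrally correlated MSI band."""
    hsi_up = np.kron(hsi_low, np.ones((factor, factor)))   # nearest-neighbor upsample
    # low-pass version of the MSI band at the HSI resolution (block means)
    msi_low = msi_high.reshape(hsi_low.shape[0], factor,
                               hsi_low.shape[1], factor).mean(axis=(1, 3))
    msi_low_up = np.kron(msi_low, np.ones((factor, factor)))
    return hsi_up * msi_high / msi_low_up

rng = np.random.default_rng(5)
scene = rng.uniform(0.1, 0.9, size=(8, 8))         # "true" high-res reflectance
msi = scene                                         # perfectly correlated MSI band
hsi = scene.reshape(2, 4, 2, 4).mean(axis=(1, 3))   # 4x-degraded HSI band
sharp = sharpen_band(hsi, msi, factor=4)
print(np.allclose(sharp, scene))   # → True in this idealized case
```

In this idealized case, where the MSI band is perfectly correlated with the HSI band, the scene is recovered exactly; with real data the degree of spectral correlation governs how much spatial detail can be transferred without distorting the spectra.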
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478791
This paper describes a study effort to investigate the impact of SHARP (Sharpening Hyperspectral and Reprojection), a hyperspectral sharpening algorithm, on the utility of the sharpened data for nonliteral exploitation in the 0.4 to 0.9 micron wavelength region. The goals are to investigate whether SHARP maintains radiometric integrity and to quantify its effects on various exploitation applications. Reduced spatial resolution hyperspectral data cubes and panchromatic images were generated from HYDICE (Hyperspectral Digital Imagery Collection Experiment) data cubes. SHARP was applied to each panchromatic image and reduced spatial resolution hyperspectral cube pair to produce a sharpened hyperspectral cube of the same spatial resolution as the panchromatic image. Two nonliteral exploitation functions (anomaly detection and material identification) were performed on the reduced spatial resolution hyperspectral cubes, the sharpened hyperspectral cubes, and the original HYDICE cubes to facilitate performance comparison. Results showed that sharpening a lower spatial resolution hyperspectral cube with a higher spatial resolution panchromatic image cannot fully recover the capability provided by the original higher spatial resolution cube to detect anomalies or identify materials. Results also showed that such fusion can outperform the use of the lower spatial resolution hyperspectral cube by itself, but only for anomaly detection in a desert scene. Sharpening did not improve anomaly detection in a forest scene, nor did it improve material identification in either setting. Results also indicated that although SHARP maintained the visual interpretability of the data, it did not preserve the radiometric fidelity crucial to nonliteral exploitation.
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478792
Hyperspectral imaging systems are assuming greater importance for a wide variety of commercial and military systems. The reason for this increased interest is that a hyperspectral sensor of a given spatial resolution or pixel size will reveal information about the scene that is not obtainable with single-band or multispectral sensors. There have been several approaches to using a single higher spatial resolution band to improve the spatial resolution of hyperspectral data. In this paper, a new technique for improving the spatial resolution of hyperspectral image data is presented. This technique, called Joint End-member Determination and Unmixing, combines a high-resolution image with a lower spatial resolution hyperspectral image to produce a product that has the spectral properties of the hyperspectral image at a spatial resolution approaching that of the panchromatic image. Instead of using statistical methods to sharpen hyperspectral imagery, a physical model is used in which the data present in both the hyperspectral and high-resolution images are assumed to follow a linear mixing model. The new mixture-model-based resolution enhancement approach is compared to the statistical approach using data from the NASA/JPL AVIRIS hyperspectral sensor.
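The linear mixing model underlying this approach writes each observed spectrum as a weighted sum of endmember spectra, x = E·a, where the abundances a are recovered by inversion. A minimal unmixing sketch under that model (the two endmember spectra and abundances are invented, and this shows only the unmixing step, not the paper's joint endmember-determination scheme):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares abundance estimate under the linear mixing model.

    endmembers : (num_bands, num_endmembers) matrix of pure spectra
    pixel      : (num_bands,) observed mixed spectrum
    """
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return a

E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.2, 0.7],
              [0.1, 0.9]])            # two hypothetical endmembers, four bands
true_abund = np.array([0.3, 0.7])
mixed = E @ true_abund                 # noiseless mixed pixel
print(unmix(mixed, E))                 # recovers approximately [0.3, 0.7]
```

In the sharpening context, abundances estimated at high spatial resolution are recombined with the hyperspectral endmember spectra, which is what lets the product approach panchromatic spatial resolution while keeping hyperspectral spectral properties.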
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478793
We have compared several lossy compression methods for multispectral images. These methods include the Self-Organizing Map (SOM), Principal Component Analysis (PCA), and the three-dimensional wavelet transform, combined with traditional lossless coding methods. The two-dimensional DCT/JPEG, JPEG2000, and SPIHT compression methods were applied to eigenimages produced by the PCA. The information loss from the compression was measured with the Signal-to-Noise Ratio (SNR) and the Peak Signal-to-Noise Ratio (PSNR). To obtain more illustrative error measures, C-means clustering and the Euclidean distance for spectral matching were used. The test image was an AVIRIS image with 224 bands, 512 lines, and 614 columns. The PCA in the spectral dimension was the best method in terms of image quality and compression speed. If required, JPEG2000 or SPIHT can be applied in the spatial dimensions to obtain better compression ratios.
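The spectral-PCA idea can be sketched as follows: project the (pixels × bands) cube onto its top-k spectral principal components, keep only the k eigenimages, and measure the reconstruction loss with PSNR. This is a generic illustration on a synthetic cube, not the paper's implementation or its AVIRIS results; the cube dimensions and component count are invented.

```python
import numpy as np

def pca_compress(cube, k):
    """Lossy compression of a (pixels, bands) cube: keep k principal components."""
    mean = cube.mean(axis=0)
    centered = cube - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    V = Vt[:k].T                 # (bands, k) spectral component basis
    scores = centered @ V        # the k "eigenimages" (compressed representation)
    return scores @ V.T + mean   # reconstruction from the retained components

def psnr(orig, recon):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((orig - recon) ** 2)
    return 10 * np.log10(orig.max() ** 2 / mse)

rng = np.random.default_rng(2)
# synthetic cube: 3 latent spectral patterns + small noise, 10 bands
latent = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 10))
cube = latent + 0.01 * rng.normal(size=(1000, 10)) + 5.0
recon = pca_compress(cube, k=3)
print(psnr(cube, recon) > 40.0)   # → True: nearly lossless at 3 components
```

Because most of a hyperspectral cube's variance lives in a few spectral components, the eigenimages compress well, and the retained eigenimages can then be further coded spatially (e.g., with JPEG2000 or SPIHT) as the abstract notes.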
Proceedings Volume Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, (2002) https://doi.org/10.1117/12.478794
This paper proposes an interband version of the linear prediction approach for hyperspectral images. Linear prediction is one of the best-performing, most practical, and most general-purpose lossless image compression techniques known today. The interband linear prediction method consists of two stages: predictive decorrelation, which produces residuals, and entropy coding of the residuals. Our method achieved compression ratios in the range of 3.02 to 3.14 on 13 Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images.
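The two-stage structure can be illustrated with the simplest interband predictor: predict each band from the previous band (unit gain) and keep the residuals, which an entropy coder would then compress. The paper's actual predictor is more elaborate; this sketch only shows why interband decorrelation helps, on an invented synthetic cube with strong band-to-band correlation.

```python
import numpy as np

def band_differences(cube):
    """Stage 1 of interband prediction: predict each band by the previous
    band and keep the integer residuals (entropy coding would follow)."""
    cube = cube.astype(np.int64)
    return np.concatenate([cube[:1], np.diff(cube, axis=0)])

def reconstruct(residuals):
    """Invert the prediction: cumulative sum along the band axis."""
    return np.cumsum(residuals, axis=0)

rng = np.random.default_rng(3)
# synthetic 8-band cube of 64 pixels, highly correlated across bands
base = rng.integers(100, 200, size=(1, 64))
cube = base + np.arange(8)[:, None] * 2 + rng.integers(-1, 2, size=(8, 64))
res = band_differences(cube)
print(np.array_equal(reconstruct(res), cube))  # → True: lossless round trip
```

The residuals have far smaller magnitude (and lower entropy) than the raw band values, which is exactly what makes the subsequent entropy-coding stage effective.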