A number of hyperspectral (x, y, λ) imaging systems work on the principle of limited-angle tomography. In such systems there exists a region of spatial and spectral frequencies, called the "missing cone," that the imaging system cannot recover from the data with any direct reconstruction algorithm. Wavelets are well suited to imaging objects that are spatially, and in many cases also spectrally, compact. However, wavelet expansion functions have three-dimensional frequency content that intersects the missing cone region; the wavelets themselves are therefore altered by the imaging process, which compromises the corresponding datacube reconstructions. Because the missing cone of frequencies is fixed for a given imaging system, it is reasonable to adjust parameters of the wavelets themselves to reduce the intersection between the wavelets' frequency content and the missing cone. One wavelet system is then preferable to another when its frequency content has less overlap with the missing cone. We will carry out this analysis for two classic wavelet families, the Morlet and the Difference of Gaussians (DOG), applied to an existing hyperspectral tomographic imaging system to show the feasibility of the procedure.
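As a rough sketch of how such an overlap comparison could be set up (the cone half-angle, wavelet parameters, and frequency grid below are illustrative assumptions, not values from the study), one can estimate the fraction of a separable wavelet's three-dimensional Fourier energy that falls inside a missing cone aligned with the spectral-frequency axis:

```python
import numpy as np

def morlet_ft(w, w0=5.0):
    """1D Fourier magnitude of a Morlet wavelet (admissibility correction omitted)."""
    return np.exp(-0.5 * (w - w0) ** 2)

def dog_ft(w, m=2):
    """1D Fourier magnitude of an m-th order Difference-of-Gaussian wavelet."""
    return np.abs(w) ** m * np.exp(-0.5 * w ** 2)

def missing_cone_fraction(ft1d, half_angle_deg=20.0, n=64, wmax=10.0):
    """Fraction of a separable 3D wavelet's Fourier energy inside a cone
    about the spectral-frequency axis (cone geometry is an assumption)."""
    w = np.linspace(-wmax, wmax, n)
    wx, wy, wl = np.meshgrid(w, w, w, indexing="ij")
    energy = (ft1d(wx) * ft1d(wy) * ft1d(wl)) ** 2
    rho = np.sqrt(wx ** 2 + wy ** 2)            # transverse (spatial) frequency
    in_cone = rho < np.abs(wl) * np.tan(np.radians(half_angle_deg))
    return energy[in_cone].sum() / energy.sum()

print("Morlet overlap:", missing_cone_fraction(morlet_ft))
print("DOG overlap:   ", missing_cone_fraction(dog_ft))
```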
Using mathematical techniques recently adapted for the analysis of hyperspectral imaging systems such as the CTIS, we have performed datacube reconstructions for a number of binary star systems. The CTIS images in the visible (420 nm to 720 nm) wavelength range were obtained in 2001 using the 3.67 m Advanced Electro-Optical System (AEOS) of the Maui Space Surveillance System (MSSS). These methods used an analytical model of the CTIS to construct an imaging system operator from the optical, focal plane array, and Computer Generated Holographic (CGH) disperser parameters of the CTIS. We used the adjoint of this operator to construct matched-filter estimates of the datacubes from the image data. In these reconstructions we are able to simultaneously obtain information on the geometry and relative photometry of the binary systems as well as the spectrum of each component of the system.
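A minimal sketch of the matched-filter (adjoint) step, assuming the system operator is available as a sparse matrix H with one column per expansion function; the toy dimensions and the column-energy normalization are illustrative choices, not the paper's exact procedure:

```python
import numpy as np
from scipy.sparse import random as sprandom

# Toy dimensions: n_pix FPA pixels, n_coef expansion coefficients (both tiny here).
n_pix, n_coef = 5000, 800
H = sprandom(n_pix, n_coef, density=0.01, format="csr")    # stand-in system matrix
theta_true = np.random.rand(n_coef)
g = H @ theta_true + 0.01 * np.random.randn(n_pix)          # image data with noise

# Matched-filter (adjoint) estimate of the expansion coefficients.
theta_hat = H.T @ g
# Optional column-energy normalization so coefficients are on a comparable scale.
col_energy = np.asarray(H.multiply(H).sum(axis=0)).ravel()
theta_hat_norm = theta_hat / np.maximum(col_energy, 1e-12)
```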
There are certain classes of astronomical objects that have rather involved spectra, which can also be a composite of a number of different spectral signatures, as well as spatial characteristics that can be used for identification and analysis. Such objects include galaxies and quasars with active nuclei, colliding and interacting galaxies, and globular cluster systems around our own Milky Way and other galaxies. Flash hyperspectral imaging adds coherence-time limited functionality so that Earth-orbiting spacecraft and solar system objects such as planets, asteroids, and comets can be spectrally imaged as well, since these also have spatial and spectral structure that rotates and moves within much shorter time spans. Flash hyperspectral imaging systems are therefore also useful for faster simultaneous spatial and spectral feature analysis. Previous work has explored spectral unmixing and other types of feature extraction for these general types of objects, but without consideration of the hyperspectral imaging system involved, either in how the data are collected or in how the datacube is reconstructed. We will present a proof-of-concept simulation of a resolved object as it is imaged through such a physically modeled imaging system and its datacube is reconstructed. Finally, we provide a demonstration of the capability with astronomical data, Venus and a binary star, when constrained by our physical model of the instrumental transfer function.
In hyperspectral imaging systems with a continuous-to-discrete (CD) model, the goal is to solve the matrix equation g = Hθ + n for θ. Here g is the data vector obtained from the pixels of a focal plane array (FPA), and n is the additive pixel noise vector. The hyperspectral object cube f(x, y, λ) to be recovered is represented by θ, the vectorized set of expansion coefficients of f with respect to a family of functions. The imaging operator is the system matrix H, whose columns represent the projection of each expansion function onto the FPA. Hence an estimate of the object cube f(x, y, λ) is reconstructed from these recovered expansion coefficients. Furthermore, H is equivalently a calibration matrix and is amenable to an analytic description. Since both the number of expansion functions and the number of pixels on an FPA are large, H becomes huge and very unwieldy to store. We describe a means by which we can reduce the effective size of H by taking advantage of the analytic model of the imaging system and converting H into a series of look-up tables. By this method we have been able to drastically reduce the storage requirements for H from terabytes to sub-megabyte sizes. We provide an example of this technique in the isoplanatic and polychromatic calibration of a flash hyperspectral imaging system. These lookup tables are independent of the expansion functions and also independent of the object cube sampling.
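The sketch below conveys the general idea of replacing an explicit H with lookup tables; the table layout, offsets, and weights are hypothetical stand-ins for illustration and do not reproduce the actual CTIS/flash-system tables:

```python
import numpy as np

# Hypothetical lookup table: for each wavelength band, the optics send a point
# source to a small set of FPA pixel offsets with fixed weights (isoplanatic case).
# Entries are (d_row, d_col, weight); the numbers are purely illustrative.
lookup = {
    0: [(0, 0, 0.7), (0, 10, 0.3)],
    1: [(0, 0, 0.6), (0, 12, 0.4)],
    2: [(0, 0, 0.5), (0, 14, 0.5)],
}

def forward_project(cube, fpa_shape):
    """Apply the imaging operator H to an object cube (rows, cols, bands)
    using the lookup tables instead of an explicit matrix."""
    g = np.zeros(fpa_shape)
    rows, cols, bands = cube.shape
    for lam in range(bands):
        for d_row, d_col, w in lookup[lam]:
            # Shift the whole band by the tabulated offset and accumulate.
            g[d_row:d_row + rows, d_col:d_col + cols] += w * cube[:, :, lam]
    return g

cube = np.random.rand(8, 8, 3)
image = forward_project(cube, fpa_shape=(16, 32))
```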
The effectiveness of many hyperspectral feature extraction algorithms involving classification (and linear spectral unmixing) is dependent on the use of spectral signature libraries. If two or more signatures are roughly similar to each other, methods that use algorithms such as singular value decomposition (SVD) or least squares to identify the object will not work well. This is especially true for procedures that combine these methods with three-dimensional discrete wavelet transforms, which replace the signature libraries with their corresponding lowpass wavelet transform coefficients. To address this issue, alternate ways of transforming these signature libraries, using bandpass or highpass wavelet transform coefficients from either wavelet or Walsh (Haar wavelet packet) transforms in the spectral direction, will be described. These alternate representations of the data emphasize differences between the signatures, which leads to improved classification performance compared with existing procedures.
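As a hedged illustration of how a highpass-transformed library can be compared against the raw one (the single-level Haar detail band and the condition-number figure of merit below are illustrative choices):

```python
import numpy as np

def haar_detail(spectra):
    """Single-level Haar highpass (detail) coefficients along the spectral axis.
    `spectra` has shape (n_signatures, n_bands) with an even number of bands."""
    return (spectra[:, 0::2] - spectra[:, 1::2]) / np.sqrt(2.0)

# Toy library: two nearly identical signatures plus one distinct one.
bands = np.linspace(0, 1, 64)
library = np.vstack([
    np.exp(-((bands - 0.5) / 0.1) ** 2),
    np.exp(-((bands - 0.52) / 0.1) ** 2),
    np.sin(2 * np.pi * bands) + 1.5,
])

print("condition number, raw library:      ", np.linalg.cond(library.T))
print("condition number, Haar detail bands:", np.linalg.cond(haar_detail(library).T))
```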
Abundances of material components in objects are usually computed using techniques such as linear spectral unmixing on individual pixels captured by hyperspectral imaging devices. The effectiveness of these algorithms usually depends on how distinct the spectral signatures in the libraries used in them are. This can be measured by SVD- or least-squares-based figures of merit such as the condition number of the matrix consisting of the library signatures. However, it must be noted that each library signature is usually the mean of a number of signatures representing that material or class of objects. This aspect of how individual library spectral signatures vary in real-world situations needs to be addressed in order to more accurately assess linear unmixing techniques. These same considerations also apply to signature libraries transformed into new ones by wavelet or other transforms. Figures of merit incorporating variations within each library signature (which more accurately reflect real measurements) will be implemented and compared with figures of merit that do not take these variations into account.
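One possible form of such a variation-aware figure of merit is sketched below; this particular separation-to-spread ratio is an assumption made for illustration and may differ from the figures of merit ultimately implemented:

```python
import numpy as np

def condition_number(mean_signatures):
    """Conventional figure of merit: condition number of the mean-signature matrix."""
    return np.linalg.cond(np.column_stack(mean_signatures))

def variation_aware_fom(signature_samples):
    """Illustrative figure of merit: smallest between-class mean separation divided
    by the largest within-class spread. `signature_samples` is a list of arrays,
    one per material, each of shape (n_samples, n_bands)."""
    means = [s.mean(axis=0) for s in signature_samples]
    spreads = [np.linalg.norm(s - m, axis=1).max() for s, m in zip(signature_samples, means)]
    separations = [np.linalg.norm(means[i] - means[j])
                   for i in range(len(means)) for j in range(i + 1, len(means))]
    return min(separations) / max(spreads)

# Toy example: each material is represented by noisy repeats of a base spectrum.
rng = np.random.default_rng(0)
base = [np.sin(np.linspace(0, k, 50)) for k in (3, 4, 8)]
samples = [b + 0.05 * rng.standard_normal((20, 50)) for b in base]
print("condition number: ", condition_number([s.mean(axis=0) for s in samples]))
print("separation/spread:", variation_aware_fom(samples))
```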
Abundances of material components in objects are usually computed using techniques such as linear spectral unmixing on individual pixels captured by hyperspectral imaging devices. However, algorithms such as unmixing have many flaws, some due to implementation and others due to improper choices of the spectral library used in the unmixing (as well as in classification). There may exist other methods for extracting this hyperspectral abundance information. We propose the development of spatial ground truth data against which various unmixing algorithms can be evaluated. This may be done by implementing a three-dimensional hyperspectral discrete wavelet transform (HSDWT) with a low-complexity lifting method using the Haar basis. Spectral unmixing, or similar algorithms, can then be evaluated, and their effectiveness can be measured by how well or poorly the spatial and spectral characteristics of the target are reproduced at full resolution (which becomes single-object classification by pixel).
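A single Haar lifting step along the spectral axis reduces to pairwise differences and averages, as in the following sketch (the normalization and the full three-dimensional HSDWT bookkeeping are simplified; an integer-to-integer implementation would round the update step):

```python
import numpy as np

def haar_lifting_step(cube):
    """One Haar lifting step along the spectral (last) axis of an (x, y, lambda) cube.
    Returns lowpass (pairwise averages) and highpass (pairwise differences) halves."""
    even, odd = cube[..., 0::2], cube[..., 1::2]
    detail = odd - even                 # predict step
    approx = even + 0.5 * detail        # update step: pairwise mean
    return approx, detail

cube = np.random.rand(4, 4, 8)          # tiny (x, y, lambda) cube with even band count
approx, detail = haar_lifting_step(cube)

# Perfect reconstruction from the lifting steps, run in reverse:
even = approx - 0.5 * detail
odd = even + detail
recovered = np.stack([even, odd], axis=-1).reshape(cube.shape)
assert np.allclose(recovered, cube)
```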
Feature extraction from hyperspectral imagery consumes large amounts of memory; the algorithms that perform it consequently have high computational complexity and require large amounts of additional computer memory. To address these issues, previous work has concentrated on algorithms that combine a fast integer-based hyperspectral discrete wavelet transform (HSDWT), using a specialized implementation of the Haar basis, with improved implementations of linear spectral unmixing. Extensions of that previous work are presented here that modify and extend these algorithms to investigate feature extraction of arbitrarily shaped spatial regions and to incorporate more general biorthogonal bases for processing of spectral signatures. Finally, these wavelet transform implementations have also been used to simulate linear spectral unmixing techniques on spatially unresolved objects such as binary stars and globular star clusters.
An ongoing problem in remote sensing is that imagery generally consumes considerable amounts of memory and transmission bandwidth, thus limiting the amount of data acquired. The use of high-quality image compression algorithms, such as the wavelet-based JPEG2000, has been proposed to reduce much of the memory and bandwidth overhead; however, these compression algorithms are often lossy, and the remote sensing community has been wary of implementing them for fear of degrading the data. We explore this issue for the JPEG2000 compression algorithm applied to Landsat-7 Enhanced Thematic Mapper Plus (ETM+) imagery. The work examines the effect that lossy compression can have on the retrieval of the normalized difference vegetation index (NDVI). We have computed the NDVI from JPEG2000-compressed red and NIR Landsat-7 ETM+ images and compared it with the uncompressed values at each pixel. In addition, we examine the effects of compression on the NDVI product itself. We show that both the spatial distribution of NDVI and the overall NDVI pixel statistics in the image change very little after the images have been compressed and then reconstructed over a wide range of bitrates.
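The per-pixel comparison reduces to the standard NDVI formula; a minimal sketch, assuming the original and compressed red/NIR bands are already loaded as floating-point arrays (the toy data below stand in for actual ETM+ imagery):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index, computed per pixel."""
    return (nir - red) / (nir + red + eps)

def compare_ndvi(nir, red, nir_c, red_c):
    """Difference statistics between NDVI from original and compressed bands."""
    d = ndvi(nir, red) - ndvi(nir_c, red_c)
    return {"mean_diff": d.mean(),
            "rms_diff": np.sqrt((d ** 2).mean()),
            "max_abs": np.abs(d).max()}

# Toy stand-ins for Landsat-7 ETM+ red/NIR bands and their JPEG2000-compressed versions.
rng = np.random.default_rng(1)
red, nir = rng.uniform(0.05, 0.3, (256, 256)), rng.uniform(0.2, 0.6, (256, 256))
red_c = red + 0.002 * rng.standard_normal(red.shape)
nir_c = nir + 0.002 * rng.standard_normal(nir.shape)
print(compare_ndvi(nir, red, nir_c, red_c))
```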
An ongoing problem for feature extraction in hyperspectral imagery is that such data consume large amounts of memory and transmission bandwidth. In many applications, especially on space-based platforms, fast, low-power feature extraction algorithms are necessary but have not been feasible. To overcome many of the problems due to the large volume of hyperspectral data, we have developed a fast, low-complexity feature extraction algorithm that combines a fast integer-valued hyperspectral discrete wavelet transform (HSDWT), using a specialized implementation of the Haar basis, with an improved implementation of linear spectral unmixing. The Haar wavelet transform implementation involves a simple weighted sum and a weighted difference between pairs of numbers. Features are found by using a small subset of the transform coefficients. More refined spatial and/or spectral identifications can then be made by localized fast inverse Haar transforms using very small numbers of additional coefficients in the spatial or spectral directions. The computational overhead is reduced further since much of the information used for linear spectral unmixing is precomputed and can be stored using a very small amount of additional memory.
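A sketch of how the Haar lowpass coefficients can feed the unmixing step, assuming pairwise averaging and ordinary least squares (the integer-valued implementation and the precomputation described above are not reproduced):

```python
import numpy as np

def haar_lowpass(x, levels):
    """Repeated pairwise averaging along the last axis (decimated Haar lowpass)."""
    for _ in range(levels):
        x = 0.5 * (x[..., 0::2] + x[..., 1::2])
    return x

# Hypothetical endmember library (n_bands x n_endmembers) and a mixed pixel spectrum.
rng = np.random.default_rng(2)
E = rng.random((64, 3))
abund_true = np.array([0.5, 0.3, 0.2])
pixel = E @ abund_true + 0.01 * rng.standard_normal(64)

# Unmix using only the level-3 lowpass coefficients of the library and the pixel.
E_low = haar_lowpass(E.T, levels=3).T        # transform each endmember spectrum
pixel_low = haar_lowpass(pixel, levels=3)
abund_hat, *_ = np.linalg.lstsq(E_low, pixel_low, rcond=None)
print("estimated abundances:", abund_hat)
```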
The dominant image processing tasks for hyperspectral data are compression and feature recognition, and these tasks go hand in hand. Hyperspectral data contain a huge amount of information that needs to be processed, often very quickly, depending on the application. The discrete wavelet transform is the ideal tool for this type of data structure. There are applications that require such processing (especially feature recognition or identification) to be done extremely fast and efficiently. Furthermore, the higher number of dimensions implies a number of different ways to perform these transforms. Much of the work in this area to date has focused on JPEG2000-type compression of each component image involving fairly sophisticated coding techniques; relatively little attention has been paid to other configurations of wavelet transforms of such data, or to rapid feature identification where compression may not be necessary at all. This paper describes other versions of the 3D wavelet transform that allow the resolution in the spatial domain and the spectral domain to be adjusted separately. Other issues associated with low-complexity feature recognition, with and without compression, using versions of the 3D hyperspectral wavelet transforms will be discussed along with some illustrative calculations.
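One way to realize separately adjustable spatial and spectral resolution is sketched below with PyWavelets; the Haar filters and the particular decomposition depths are illustrative choices rather than the configurations studied in the paper:

```python
import numpy as np
import pywt

cube = np.random.rand(32, 64, 64)          # (bands, rows, cols) hyperspectral cube

# Spatial transform: 2D DWT over (rows, cols) of each band, to a chosen spatial depth.
spatial_levels = 2
spatial_approx = np.stack(
    [pywt.wavedec2(band, "haar", level=spatial_levels)[0] for band in cube]
)                                           # shape (32, 16, 16)

# Spectral transform: 1D DWT along the band axis, to an independently chosen depth.
spectral_levels = 3
approx_3d = pywt.wavedec(spatial_approx, "haar", level=spectral_levels, axis=0)[0]
print(spatial_approx.shape, approx_3d.shape)   # (32, 16, 16) and (4, 16, 16)
```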
The recent development of channelled spectropolarimetry presents opportunities for spectropolarimetric measurements of dynamic phenomena in a very compact instrument. We present measurements of stress-induced birefringence in an ordinary plastic by both a reference rotating-compensator fixed-analyzer polarimeter and a channelled spectropolarimeter. The agreement between the two instruments shows the promise of the channelled technique and provides a proof-of-principle that the method can be used for a very simple conversion of imaging spectrometers into imaging spectropolarimeters.
Imaging systems such as the Computed Tomographic Imaging Spectrometer (CTIS) are modeled by the matrix equation g = Hf, which is the discretized form of the general imaging integral equation. The matrix H describes the contribution to each element of the image g from each element of the hyperspectral object cube f. The vector g is the image of the spatial/spectral projections of f on a focal plane array (FPA). The matrix H is enormous, sparse, and rectangular, and it is extremely difficult to discretize the integral operator to obtain it. Normally H is constructed empirically from a series of monochromatic calibration images, which is a time-consuming process. However, we have been able to construct H synthetically by numerically modeling how the optical and diffractive elements in the CTIS project monochromatic point-source data onto the FPA. We can evaluate a CTIS system by solving the imaging equation for f using both the empirical and the synthetic H with some test data g. Comparison between the two results provides a means to evaluate and improve CTIS system calibration procedures, noting that the synthetic system matrix H represents a baseline ideal system.
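A hedged sketch of the comparison procedure, using toy stand-ins for the empirical and synthetic matrices and a generic sparse least-squares solver in place of whatever reconstruction algorithm an actual evaluation would use:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

n_pix, n_vox = 4000, 500
H_synthetic = sprandom(n_pix, n_vox, density=0.02, format="csr", random_state=3)
# Stand-in for the empirical matrix: the synthetic one plus small calibration errors.
H_empirical = H_synthetic + 0.01 * sprandom(n_pix, n_vox, density=0.02,
                                            format="csr", random_state=4)

f_true = np.random.rand(n_vox)
g = H_empirical @ f_true                      # test data from the "real" system

f_syn = lsqr(H_synthetic, g, iter_lim=200)[0]
f_emp = lsqr(H_empirical, g, iter_lim=200)[0]

# Discrepancy between the two reconstructions flags calibration error.
print("relative difference:", np.linalg.norm(f_syn - f_emp) / np.linalg.norm(f_emp))
```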
Continuous wavelet transforms (CWT) and frames have long been useful for noise suppression, edge detection, and medical signal processing. These transforms are generally avoided, however, since their computational complexity prevents widespread use. Recently developed processor technology that uses analog rather than digital signal processing hardware may be the ideal means to implement and apply these algorithms. It is therefore appropriate to consider new types of frames and continuous wavelet systems. We propose two families of tunable continuous wavelet systems with widely varying frame bounds and scaling behavior, and illustrate examples of computations involving these systems.
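For orientation, a generic FFT-based CWT can be computed as below; the Morlet-style analyzing function and scale grid are stand-ins for illustration and are not the two tunable families proposed here:

```python
import numpy as np

def cwt_fft(signal, scales, w0=6.0):
    """Continuous wavelet transform via FFT, using a Morlet-style analyzing function.
    Returns an array of shape (len(scales), len(signal))."""
    n = len(signal)
    omega = 2.0 * np.pi * np.fft.fftfreq(n)
    sig_hat = np.fft.fft(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi_hat = np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)   # analytic wavelet
        out[i] = np.fft.ifft(sig_hat * np.conj(psi_hat)) * np.sqrt(s)
    return out

t = np.linspace(0, 1, 512, endpoint=False)
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 90 * t)
W = cwt_fft(x, scales=np.geomspace(2, 64, 24))
```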
Detection and classification of footsteps and other impulsive signals are a critical function of urban surveillance systems. An example is the wireless integrated network sensor (WINS) system, which is designed to meet this requirement for both law enforcement and military agencies. The detection and classification algorithms should be sufficiently robust to handle a wide variety of environments, yet remain of low complexity to allow low-power implementation. We present a modified time-domain method for impulse signal classification based on the Haar wavelet transform. The Haar wavelet basis is ideal for short-duration signals, as it provides the best localization in the time domain. Further, the Haar transform has the shortest and simplest filter/basis system: the scaling function filter is the average of two points and the wavelet filter is the difference between two points. Our classification scheme uses the Haar transform of the input signal to obtain the signal envelope, which is described by the decimated lowpass filter coefficients. When implemented on many WINS nodes, this simple procedure has the further advantage of enabling signal source detection in both location and time.
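A minimal sketch of the envelope computation, assuming repeated pairwise averaging of the rectified signal and a simple threshold detector (the actual WINS classification stage is not reproduced):

```python
import numpy as np

def haar_envelope(signal, levels):
    """Signal envelope from decimated Haar lowpass coefficients:
    repeated pairwise averaging, keeping only the scaling coefficients."""
    env = np.asarray(signal, dtype=float)
    for _ in range(levels):
        if len(env) % 2:                       # pad to an even length if needed
            env = np.append(env, env[-1])
        env = 0.5 * (env[0::2] + env[1::2])
    return env

# Toy impulsive signal: a decaying burst (footstep-like) in background noise.
rng = np.random.default_rng(5)
n = 1024
sig = 0.05 * rng.standard_normal(n)
sig[300:380] += np.exp(-np.arange(80) / 20.0) * rng.standard_normal(80) * 2.0

env = haar_envelope(np.abs(sig), levels=4)     # coarse envelope, 64 samples long
detected = env > (env.mean() + 3 * env.std())  # simple threshold detector (illustrative)
```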
KEYWORDS: Wavelets, Fast wavelet transforms, Electronic filtering, Digital filtering, Filtering (signal processing), Convolution, Reconstruction algorithms, Wavelet transforms, Signal to noise ratio, Signal processing
We have successfully compressed audio signals using wavelet packets based on a recently developed fast wavelet transform (FWT) scheme using circular convolution with an adaptive hybrid filter/basis system. This algorithm gives perfect reconstruction of the data; edge effects are removed entirely. As a result, the quality of audio signal compression is much improved. To illustrate this, we present results from our comparison study where we compressed a test signal using these "circular wavelet packets" and wavelet packets based on the standard FWT.
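For comparison, a periodized (circular) wavelet packet decomposition with off-the-shelf filters behaves similarly at the edges; the sketch below keeps only the largest coefficients and reconstructs, but does not reproduce the adaptive hybrid filter/basis system described above:

```python
import numpy as np
import pywt

x = np.random.rand(1024)                       # stand-in for an audio frame

# Wavelet packet decomposition with periodized (circular) boundary handling.
wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="periodization", maxlevel=4)
leaves = wp.get_level(4, order="freq")

# Crude compression: zero all but the largest 10% of packet coefficients.
coeffs = np.concatenate([node.data for node in leaves])
threshold = np.quantile(np.abs(coeffs), 0.9)
for node in leaves:
    node.data = np.where(np.abs(node.data) >= threshold, node.data, 0.0)

x_rec = wp.reconstruct(update=False)[: len(x)]
print("relative error:", np.linalg.norm(x - x_rec) / np.linalg.norm(x))
```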
The fast wavelet transform for images developed by Mallat is a useful and powerful tool for image processing, with its main applications being compression, feature extraction, and image enhancement. In particular, since this algorithm segments an image with respect to both spatial frequency and orientation, image enhancement becomes more precise and efficient. This idea is demonstrated by the removal of background noise and flaws from two digitized images of the faint galaxy VV371c. This object was first studied in the early 1980s using older and less sophisticated computer technologies and image processing techniques. We present results of the wavelet-based image analysis of VV371c which yield new conclusions as to this galaxy's structure and morphological classification.
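A generic sketch of this kind of wavelet-domain cleaning, using soft thresholding of the detail subbands; the wavelet, decomposition level, and threshold rule are illustrative choices rather than those applied to the VV371c images:

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)               # stand-in for a digitized galaxy image

# Multilevel 2D DWT: separates the image by spatial frequency and orientation.
coeffs = pywt.wavedec2(image, "db2", level=3)

# Soft-threshold the detail subbands (horizontal, vertical, diagonal) at every level.
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # noise estimate from finest diagonal
thr = 3.0 * sigma
cleaned = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="soft") for d in detail) for detail in coeffs[1:]
]

denoised = pywt.waverec2(cleaned, "db2")
```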
Polarization-resolved coherent beam combination via nondegenerate two-wave mixing has been observed in water-glycerol suspensions of shaped polytetrafluoroethylene microparticles. Experiments detect coherent energy transfer arising from two different types of moving index gratings: translational and orientational. Additionally, the dependence of the two-wave mixing gain coefficient on the frequency difference of the beams, the pump intensity, and the microparticle volume fraction was measured and found to be in accord with theory. Finally, beam combination was also achieved using degenerate laser beams and moving the suspension relative to the laser interference pattern.
Recently, energy transfer has been observed between two pulsed, degenerate laser beams copropagating almost collinearly in isotropic Kerr media with temporal relaxation. Theoretical calculations have been made that agree with the experimental data. This paper will briefly describe the experiment and develop the theory that simulates the laboratory situation. The physical mechanism regulating this beam combination will also be discussed, and comparisons between the numerical simulations and the experimental data will be presented.