Some of the proto-history of the theory of wavelets was written as Quantum Mechanics was first developed; but a long consolidation period had to occur before the magic clouds dissipated to let the more general structures emerge, ready for applications which the Founders were not anticipating. This lecture will recount some of the salient episodes belonging to the earlier part of this saga, and bow out at the threshold of the history of wavelets proper, thus focusing on the hatching grounds provided by coherent states.
The area of applicability of coherent states is generally quantum theory, while that of wavelets is generally signal analysis. Nevertheless, separated from their respective applications, coherent states and (continuous) wavelets have a great deal in common. To illustrate this connection we shall outline three general types of coherent states and a few of their applications. We address our presentation to interested wavelet practitioners.
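To make the parallel concrete: continuous wavelet coefficients are inner products with an overcomplete family of translated and dilated copies of a single analyzing function, together with a resolution of the identity, in direct analogy with canonical coherent states. The formulas below are the standard ones, stated here only for orientation and not specific to this lecture.

```latex
\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t-b}{a}\right), \qquad a>0,\ b\in\mathbb{R},
\qquad
W_\psi f(a,b) = \langle f,\psi_{a,b}\rangle ,
\qquad
f = \frac{1}{C_\psi}\int_0^{\infty}\!\!\int_{-\infty}^{\infty} W_\psi f(a,b)\,\psi_{a,b}\,\frac{db\,da}{a^{2}} .
```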
Pulsed-Beam Wavelets are exact, causal solutions of the inhomogeneous wave equation or Maxwell's equations whose `wavelet parameters,' instead of giving a time and scale, specify the point of emission, launch time, the radius of the emitting aperture, direction of propagation, duration, and width of the pulse. We derive their far fields and time-domain radiation patterns and confirm that the beams can be made arbitrarily well focused by choosing the wavelet parameters accordingly. We also find the source distribution as a generalized function supported on the compact source region determined by the wavelet parameters. As the radius of the disk shrinks to zero, the distribution reduces to the usual point source represented by the causal Green function. It is suggested that such `physical wavelets' may be synthesized in practice.
In this paper, we briefly review the connection between subband coding, wavelet approximation and general compression problems. Wavelet or subband coding is successful in compression applications partly because of the good approximation properties of wavelets. First, we revisit some rate-distortion bounds for wavelet approximation of piecewise smooth functions. We contrast these results with rate-distortion bounds achievable using oracle based methods. We indicate that such bounds are achievable in practice using dynamic programming. Finally, we conclude with an outlook on open questions in the area of compression and representations.
ICA has mostly been demonstrated by computer simulation and is not yet mature enough for general application in the real world. The main assumptions are (1) an instantaneous mixture, (2) a linear mixture, and (3) discrete and finite sources. These assumptions are satisfied in applications such as remote-sensing subpixel composition, speech segmentation by means of ICA, and digital TV bandwidth enhancement by simultaneously mixing sports and movie entertainment events.
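As a minimal illustration of the instantaneous, linear mixing assumption, the sketch below mixes two synthetic sources with a fixed matrix and recovers them with FastICA from scikit-learn. The sources, mixing matrix, and parameters are illustrative choices, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0.0, 1.0, 4000)
s1 = np.sign(np.sin(2 * np.pi * 5 * t))        # square-wave source
s2 = np.sin(2 * np.pi * 13 * t)                # sinusoidal source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                     # instantaneous, linear mixing matrix
X = S @ A.T                                    # observed mixtures (assumptions 1 and 2)

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                   # recovered sources, up to order and scale
```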
The paper discusses the application of the new concept of spatial time-frequency distribution (STFD), and more generally the spatial arbitrary joint-variable distribution, to key array signal processing problems including blind source separation and high-resolution direction finding of narrowband and broadband sources with stationary and nonstationary temporal characteristics. The STFD can be formulated based on the widely used class of time-frequency distributions, namely Cohen's class, or it can be devised by incorporating other classes of quadratic distributions, such as the Hyperbolic class and the Affine class. The paper delineates the fundamental offerings of STFDs, presents three examples of array signal processing using the localization properties of time-frequency distributions of the impinging signals, and summarizes recent contributions in this area.
Time-frequency analysis is particularly useful for non-stationary signals, such as radar, seismic, speech, and biomedical signals, and for machine condition monitoring. However, most time-frequency analyses are non-parametric, and their resolution is limited by the number of samples. We introduce model-based time-frequency analysis. Its frequency resolution is not limited by the number of samples in the data set; thus, super-high frequency resolution can be achieved. It also has unlimited zooming capability, which will benefit the extraction of detailed time-frequency features of signals for automatic target recognition.
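The abstract does not specify its model, but a common way to realize model-based time-frequency analysis is to fit a parametric (e.g., autoregressive) model on sliding windows and evaluate its spectrum on an arbitrarily fine frequency grid. The sketch below, with illustrative window, order, and grid parameters, shows that idea using Yule-Walker AR estimation; it is not the authors' algorithm.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_spectrogram(x, fs, order=8, win=128, hop=32, nfreq=2048):
    """Sliding-window AR (Yule-Walker) spectra on an arbitrarily fine frequency grid."""
    freqs = np.linspace(0.0, fs / 2.0, nfreq)
    e = np.exp(-2j * np.pi * freqs / fs)                  # unit-circle evaluation points
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win] * np.hanning(win)
        r = np.correlate(seg, seg, mode='full')[win - 1:win + order] / win  # lags 0..order
        a = solve_toeplitz(r[:-1], -r[1:])                # Yule-Walker AR coefficients
        denom = 1.0 + sum(a[k] * e ** (k + 1) for k in range(order))
        frames.append(1.0 / np.abs(denom) ** 2)           # (unscaled) AR power spectrum
    return np.array(frames).T, freqs                      # shape (nfreq, n_frames)
```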
The overlapping subjects of target identification, inverse scattering and active classification have many applications that differ depending on specific sensors. Many useful techniques for these relevant subjects have been developed in the frequency and the time domains. A more recent approach views the target signatures in the combined or coupled time-frequency domain. For either ultra-wideband (UWB) projectors or UWB processing, these joint time-frequency techniques are particularly advantageous. Such analysis requires the use of some of the scores of non-linear distributions that have been proposed and studied over the years. Basic ones, such as the Wigner distribution and its many relatives, have been shown to belong to the well-studied `Cohen Class.' We will select half-a-dozen of these distributions to study applications that we have addressed and solved in several areas such as: (1) active sonar, (2) underwater mine classification using pulses from explosive sources, (3) identification of submerged shells having different fillers using dolphin bio-sonar `clicks,' and (4) broadband radar pulses to identify aircraft, other targets covered with dielectric absorbing layers, and also (land) mine-like objects buried underground, using a ground penetrating radar. These examples illustrate how the informative identifying features required for accurate target identification are extracted and displayed in this general time-frequency domain.
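For reference, the Wigner distribution mentioned above can be computed discretely as the FFT of the instantaneous autocorrelation of the analytic signal. The sketch below is a plain pseudo Wigner-Ville implementation for illustration only; it is not tied to the particular distributions selected in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x, nfft=None):
    """Discrete pseudo Wigner-Ville distribution of a real signal (sketch)."""
    z = hilbert(np.asarray(x, float))          # analytic signal
    n = len(z)
    nfft = nfft or n
    tfd = np.zeros((nfft, n))
    for t in range(n):
        lag = min(t, n - 1 - t, nfft // 2 - 1)
        m = np.arange(-lag, lag + 1)
        r = z[t + m] * np.conj(z[t - m])       # instantaneous autocorrelation (Hermitian in m)
        acf = np.zeros(nfft, dtype=complex)
        acf[m % nfft] = r                      # negative lags wrap to the end of the array
        tfd[:, t] = np.fft.fft(acf).real       # real-valued because of the Hermitian symmetry
    return tfd                                 # rows: frequency bins k/(2*nfft) in cycles/sample
```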
Non-ideal motion of the hydrophone usually induces aperture error in synthetic aperture sonar (SAS), which is one of the most important factors degrading SAS imaging quality. In SAS imaging, the return signals are usually nonstationary due to the non-ideal hydrophone motion. In this paper, joint time-frequency analysis (JTFA), a good technique for analyzing nonstationary signals, is applied to SAS imaging. Based on the JTFA of the sonar return signals, a novel SAS imaging algorithm is proposed. The algorithm is verified by simulation examples.
This paper describes an efficient system for the registration of range maps in the compressed wavelet domain. A feature-based approach is taken to reduce the computational burden involved in the registration search process. Efficient and accurate registration of range or depth maps is an important problem in military, medical, and manufacturing applications. The techniques described can be applied not only to 3D data but also to 2D images. We illustrate registration results on a range of real scenes.
Landsat image data were produced by a multispectral scanner on the Landsat satellites. Vegetation indices are based on the distinctive rise in the reflectance of green vegetation as wavelength increases from visible red to reflective infrared, caused by the selective absorption of red light by chlorophyll for photosynthesis. The spectral responses of different crops in Landsat data are distinct and provide a basis for High Scale Discontinuity Detection. Applied to Landsat cross sections (1D signals), High Scale Discontinuity Detection finds boundaries between urban and agricultural areas and between different crops. These boundaries will be used to reconstruct an image based on boundaries. This approach might be usefully applied to IR images, laser remote sensing, or any image where vegetation changes abruptly because of altitude or moisture.
This paper presents a new method for the estimation of oceanic surface velocity fields using wavelet decomposition of Sea Surface Temperature (SST) images. This method requires two SST images of the same region taken within a known time interval. Wavelet decompositions are performed on both images to obtain approximations that are compared for local displacements. These local displacements are smoothed to eliminate noise and to generate a rough initial estimate of the velocity field. This estimate is then iteratively refined by a similar analysis over more detailed images produced by inverse wavelet transforms. A description of the method and its assumptions is presented, together with initial results.
A wavelet-based image registration approach has previously been proposed by the authors. In that work, wavelet coefficient maxima obtained from an orthogonal wavelet decomposition using Daubechies filters were utilized to register images in a multi-resolution fashion. Tested on several remote sensing datasets, this method gave very encouraging results. Despite the lack of translation-invariance of these filters, we showed that when using cross-correlation as a feature matching technique, features of size larger than twice the size of the filters are correctly registered by using the low-frequency subbands of the Daubechies wavelet decomposition. Nevertheless, high-frequency subbands are still sensitive to translation effects. In this work, we consider a rotation- and translation-invariant representation developed by E. Simoncelli and integrate it in our image registration scheme. The two types of filters, Daubechies and Simoncelli filters, are then compared from a registration point of view, utilizing synthetic data as well as data from the Landsat/Thematic Mapper and from the NOAA Advanced Very High Resolution Radiometer.
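A minimal sketch of the low-frequency-subband, cross-correlation matching step described above, assuming PyWavelets and SciPy are available; the wavelet name, decomposition depth, and the sign convention of the recovered offset are illustrative choices rather than the paper's.

```python
import numpy as np
import pywt
from scipy.signal import fftconvolve

def coarse_translation(ref, target, wavelet='db4', level=3):
    """Estimate a coarse integer translation by cross-correlating low-frequency subbands."""
    a_ref = pywt.wavedec2(ref.astype(float), wavelet, level=level)[0]
    a_tgt = pywt.wavedec2(target.astype(float), wavelet, level=level)[0]
    a_ref -= a_ref.mean()
    a_tgt -= a_tgt.mean()
    corr = fftconvolve(a_ref, a_tgt[::-1, ::-1], mode='full')   # cross-correlation via FFT
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    shift = (np.array(a_tgt.shape) - 1) - peak                  # lag of the correlation peak
    return shift * 2 ** level                                   # back to full-resolution pixels
```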
Macrostructure/wavelet texture label per pixel: in this paper, we concentrate on neural network classifiers applied to sub-regions of the image, and we show how texture information obtained with a wavelet transform can be integrated to improve such a single-label classifier. We apply a local spatial frequency analysis, a wavelet transform, to account for statistical texture information in Landsat/TM imagery. Statistical texture is extracted with a continuous edge-texture composite wavelet transform. We show how this approach relates to texture information computed from a co-occurrence matrix. The network is then trained with both the texture information and the additional pixel labels provided by the ground truth data. Theory and regional results are described in this paper.
The goal of this study is to analyze the use of wavelet-based shape features for automated recognition of mammographic mass shapes. Two sets of shape features are used. The first set includes wavelet-based scalar energy features. The mass boundary radial distance signal is decomposed using a discrete wavelet transform. The energy of the coefficients at each scale is computed, and these energy values are then used to form a feature vector. Several mother wavelets are used for the wavelet-based shape features: Daubechies-3 (DB3), DB5, DB7, DB9, DB11, DB13, DB15, Coiflets-3 (C3), C5, Symlets-2 (S2), S4, S6, and S8. The second set includes the following traditional features: radial distance mean, standard deviation, zero-crossing count, roughness index, entropy, and compactness. For each set of shape features, linear discriminant analysis is used to appropriately weight the features, and a minimum Euclidean distance classifier is used to separate the shapes into three classes: round, lobular, and irregular. The classification results, as well as false positive and false negative rates, are compared for each set of shape features.
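A minimal sketch of the scalar-energy feature computation described above, assuming PyWavelets; the wavelet and decomposition depth are placeholders for the families listed in the abstract.

```python
import numpy as np
import pywt

def wavelet_shape_features(radial_distance, wavelet='db3', level=4):
    """Scalar energy per scale of the DWT of a mass-boundary radial-distance signal."""
    r = np.asarray(radial_distance, float)
    r = r - r.mean()                                        # remove overall size, keep shape
    coeffs = pywt.wavedec(r, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])   # detail energies, coarse -> fine
```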
A major problem associated with a `film-less hospital' is the amount of digital image data that is generated and stored. Image compression must be used to reduce the storage size. Most current image compression methods were developed for the compression of single images. A new compression method that uses similar-image-set redundancy and a minimal number of orthogonal features can be used to efficiently compress medical images. As presented in this paper, wavelet analysis, principal component analysis, and statistical discrimination can be successfully used to optimally represent image differences and achieve efficient similar-set image compression for medical images.
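The principal-component step can be pictured as follows: a stack of similar images is reduced to a mean image, a few orthogonal eigen-images, and per-image coefficients. This numpy-only sketch is a generic illustration of that idea, not the paper's full pipeline (which also involves wavelet analysis and statistical discrimination).

```python
import numpy as np

def similar_set_pca(images, n_components=8):
    """PCA of a stack of similar images: mean + eigen-images + per-image coefficients."""
    x = np.stack([im.ravel().astype(float) for im in images])   # one row per image
    mean = x.mean(axis=0)
    u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    basis = vt[:n_components]                                   # orthogonal eigen-images
    codes = (x - mean) @ basis.T                                 # compact per-image representation
    recon = mean + codes @ basis
    return basis, codes, recon.reshape((len(images),) + images[0].shape)
```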
In this paper, a 3D compression method based on a separable wavelet transform is discussed in detail. The most commonly used digital modalities generate multiple slices in a single examination, which are normally anatomically or physiologically correlated with each other. 3D wavelet compression methods can achieve more efficient compression by exploiting the correlation between slices. The first step is based on a separable 3D wavelet transform. Considering the difference between pixel distances within a slice and those between slices, a biorthogonal Antonini filter bank is applied within 2D slices and a biorthogonal Villa4 filter bank in the slice direction. Then, the S+P transform is applied to the low-resolution wavelet components, and an optimal quantizer is presented after analysis of the quantization noise. We use an optimal bit allocation algorithm which, instead of eliminating the coefficients of high-resolution components in smooth areas, minimizes the system reconstruction distortion at a given bit-rate. Finally, to retain high coding efficiency and adapt to the different properties of each component, a comprehensive entropy coding method is proposed, in which arithmetic coding is applied to the high-resolution components and adaptive Huffman coding to the low-resolution components. Our experimental results are evaluated by several image measures, and our 3D wavelet compression scheme is shown to be more efficient than 2D wavelet compression.
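A minimal sketch of the separable-transform step, assuming a recent PyWavelets: one decomposition level applied axis by axis, with a different filter bank along the slice axis than within slices. The 'bior4.4' and 'bior2.2' names are stand-ins for the Antonini and Villa4 filter banks named in the abstract, not a claim about which PyWavelets filters match them.

```python
import pywt

def separable_3d_dwt(volume, in_plane='bior4.4', inter_slice='bior2.2'):
    """One level of a separable 3D DWT; axis 0 is the slice direction."""
    lo, hi = pywt.dwt(volume, inter_slice, axis=0)       # across slices
    bands = []
    for sub in (lo, hi):
        a, d = pywt.dwt(sub, in_plane, axis=1)           # rows within each slice
        for sub2 in (a, d):
            aa, dd = pywt.dwt(sub2, in_plane, axis=2)    # columns within each slice
            bands.extend([aa, dd])
    return bands                                         # 8 subbands: LLL ... HHH
```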
An exploratory investigation of the applicability of wavelets and wavelet transforms to signals encountered in high-speed radar has been performed. The wideband cross-ambiguity function (WBCAF) is calculated for two signals: the original signal, and scaled and translated versions of a second signal. In an active sensing problem such as radar, the first signal is the known transmission and the second is a reflected, refracted, and/or reradiated version of the transmitted signal. When the two signals are correlated, a single correlation coefficient is calculated at each scale and translation value. These values can be interpreted as a representation of the actual scattering process. The approach used to calculate the WBCAF utilizes wavelet transforms, in this case splines, and can be expressed as an integral operator acting on the two signals' wavelet transforms. The WBCAF is thus calculated entirely in the wavelet transform domain. A significant point of this formulation is that the wavelet transforms of each signal are with respect to an arbitrary mother wavelet which can be chosen a priori, with the mathematical model of its time scaling and translation also known. For the work described herein, a linear chirp signal (i.e., a sinusoidal wave with increasing frequency) served as the test transmission. Provision is made for the target or return-signal generator to have significant velocities and accelerations as well.
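For orientation, the WBCAF itself can be written as a correlation of one signal against scaled, energy-normalized copies of the other. The time-domain sketch below (not the paper's wavelet-domain operator formulation) computes it directly for a linear-chirp test case; all parameters are illustrative.

```python
import numpy as np

def wbcaf(x, y, t, scales):
    """Wideband cross-ambiguity: correlate x against scaled, energy-normalized copies of y."""
    dt = t[1] - t[0]
    rows = []
    for a in scales:
        y_a = np.interp(t / a, t, y, left=0.0, right=0.0) / np.sqrt(a)   # y(t/a)/sqrt(a)
        rows.append(np.correlate(x, y_a, mode='same') * dt)              # correlation over delay
    return np.array(rows)                                                # (n_scales, n_delays)

# Linear-chirp test case (illustrative parameters)
t = np.linspace(0.0, 1.0, 4096)
tx = np.cos(2 * np.pi * (50.0 * t + 150.0 * t ** 2))                     # transmitted chirp
rx = 0.8 * np.interp(t / 1.02, t, tx, left=0.0, right=0.0)               # echo, slightly dilated
amb = wbcaf(rx, tx, t, scales=np.linspace(0.95, 1.05, 41))
```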
Taking a wavelet standpoint, we survey on the one hand various approaches to multifractal analysis, as a means of characterizing long-range correlations in data, and on the other hand various ways of statistically measuring anisotropy in 2D fields. In both instances, we present new and related techniques: (1) a simple multifractal analysis methodology based on Discrete Wavelet Transforms (DWTs), and (2) a specific DWT adapted to strongly anisotropic fields sampled on rectangular grids with large aspect ratios. This DWT uses a tensor product of the standard dyadic Haar basis (dividing ratio 2) and a non-standard triadic counterpart (dividing ratio 3) which includes the famous `French top-hat' wavelet. The new DWT is amenable to an anisotropic version of Multi-Resolution Analysis (MRA) in image processing where the natural support of the field is 2^n pixels (vertically) by 3^n pixels (horizontally), n being the number of levels in the MRA. The complete 2D basis has one scaling function and five wavelets. The new MRA is used in synthesis mode to generate random multifractal fields that mimic quite realistically the structure and distribution of boundary-layer clouds, even though only a few parameters are used to control statistically the wavelet coefficients of the liquid water density field.
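A simple DWT-based multifractal analysis, in the spirit of item (1), reduces to computing moments of detail coefficients across scales and reading scaling exponents off log-log slopes. The sketch below, using a Haar DWT via PyWavelets, is a generic structure-function estimate and is not claimed to reproduce the authors' methodology.

```python
import numpy as np
import pywt

def wavelet_scaling_exponents(x, q_values=(1, 2, 3), wavelet='haar', levels=8):
    """Structure-function style scaling exponents from DWT detail coefficients."""
    coeffs = pywt.wavedec(np.asarray(x, float), wavelet, level=levels)
    details = coeffs[1:][::-1]                     # reorder to finest (j=1) ... coarsest
    j = np.arange(1, len(details) + 1)
    exponents = {}
    for q in q_values:
        s_q = np.array([np.mean(np.abs(d) ** q) for d in details])   # q-th order moment per scale
        exponents[q] = np.polyfit(j, np.log2(s_q), 1)[0]             # log-log slope across scales
    return exponents
```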
SAR interferometry is a technique for extracting height information from phase-difference images referred to as interferograms. Phase measurements from interferograms, which are given modulo 2π, need to be unwrapped before useful information can be extracted. Phase unwrapping is difficult because phase measurements tend to be corrupted by additive random phase noise caused by low coherence. The current approach to phase noise reduction is to smooth interferograms by a coherent averaging method, which is simple but leads to a degradation of spatial resolution and edge blur. In this paper, an approach to phase noise reduction is proposed based on the wavelet transform. It is implemented by decomposing the original interferogram into a smoothed approximation and a series of wavelet planes at different resolution levels, followed by morphological filtering of the wavelet planes that contain noise. The noise-reduced interferogram is reconstructed from the approximation and the smoothed wavelet planes. The results show that the proposed approach can reduce phase noise while retaining the spatial resolution of the original interferograms and preserving edges.
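The decompose / filter-the-planes / reconstruct pipeline can be sketched with an undecimated (à trous) B3-spline decomposition and a grey-level morphological opening applied to each detail plane. The kernel, number of levels, and choice of opening are assumptions for illustration, not the paper's exact filters, and the input is treated here as a plain real-valued 2D array.

```python
import numpy as np
from scipy import ndimage

def atrous_planes(img, levels=3):
    """Undecimated B3-spline 'a trous' decomposition into wavelet planes."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c = np.asarray(img, float)
    planes = []
    for j in range(levels):
        k = np.zeros(4 * 2 ** j + 1)
        k[:: 2 ** j] = kernel                               # dilate the kernel with zeros
        smooth = ndimage.convolve1d(c, k, axis=0, mode='reflect')
        smooth = ndimage.convolve1d(smooth, k, axis=1, mode='reflect')
        planes.append(c - smooth)                           # wavelet plane at scale j
        c = smooth
    return planes, c                                        # detail planes + final approximation

def denoise_interferogram(phase, levels=3, size=3):
    planes, approx = atrous_planes(phase, levels)
    # morphological opening suppresses isolated noise spikes in each detail plane
    cleaned = [ndimage.grey_opening(p, size=(size, size)) for p in planes]
    return approx + sum(cleaned)                            # reconstruction: approximation + planes
```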
The analytic signal representation introduced by Ville has found extensive use in matched filter processing and in time-frequency analysis. We discuss the role of the analytic signal representation in radar signal processing. In particular we analyze errors associated with this representation using quantized calculus and wavelets. We also discuss the classical problem of ambiguity synthesis, both narrowband and wideband, in the context of the analytic signal representation and highlight the importance of the upper half plane and its group of conformal transformations for wideband synthesis.
The wavelet transform is known to provide one of the most effective and computationally efficient techniques for image compression. The optimum space-spatial-frequency localization property of this transform is utilized in Embedded Zerotree Wavelet coding, which has been refined to produce the best performance in the SPIHT (set partitioning in hierarchical trees) algorithm for lossy compression and S+P (S-transform and prediction) for lossless compression. Using the multi-resolution property of the wavelet transform, one can also have progressive transmission for preliminary inspection, where the criterion for progressiveness can be either fidelity or resolution. The three important points of wavelet-based compression algorithms are: (1) partial ordering of transformed magnitudes with order transmission using subset partitioning, (2) refinement bit transmission using ordered bit planes, and (3) use of the self-similarity of the transform coefficients across scales.
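As a concrete piece of the lossless (S+P) side, the reversible integer S-transform replaces each pair of samples by a truncated average and a difference; the prediction step of S+P is omitted. The sketch below is a generic illustration, not the exact implementation discussed in the paper.

```python
import numpy as np

def s_transform(x):
    """Integer S-transform of an even-length integer sequence."""
    x = np.asarray(x)
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    low = (a + b) >> 1            # truncated average, stays integer
    high = a - b                  # difference
    return low, high

def inverse_s_transform(low, high):
    a = low + ((high + 1) >> 1)   # exact inversion of the truncated average
    b = a - high
    out = np.empty(2 * len(low), dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out
```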
Recent advances both in discrete wavelet transforms and in neural-network models of the human visual system have resulted in improved video compression, restoration, and filtering techniques. Although these software techniques are capable of achieving quality video, their computational complexity requires special-purpose hardware, called WaveNet, to run real-time live video over a radio link. The brassboard, integrated with computers, can potentially provide many applications including remote sensors, security systems, and commercial and home video teleconferencing. This paper describes a low-cost board to support video compression (H.263), restoration, and filtering in real time. The WaveNet board has been optimized for wavelet-based image and video compression and enhancement techniques.
Field-Programmable Logic (FPL) is on the verge of revolutionizing digital signal processing (DSP) in the manner that programmable DSP microprocessors did nearly two decades ago. While FPL densities and performance have steadily improved to the point where some DSP solutions can be integrated into a single FPL chip, their use is still limited in high-precision, high-bandwidth applications. In this paper it is shown that alternative implementation strategies can be found which overcome the precision/bandwidth barrier. The design of Daubechies length-4 and length-8 filters is presented to compare FPL and programmable DSP solutions.
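To make the precision question concrete, the sketch below builds the standard Daubechies length-4 lowpass filter and rounds its taps to fixed-point grids of various fractional widths, reporting tap error and DC-gain drift. The bit widths are arbitrary examples, not the paper's design points.

```python
import numpy as np

s3 = np.sqrt(3.0)
h_d4 = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))  # Daubechies-4 lowpass taps

def quantize(h, frac_bits):
    """Round filter taps to a fixed-point grid with the given number of fractional bits."""
    scale = 2 ** frac_bits
    return np.round(h * scale) / scale

for bits in (8, 12, 16):
    hq = quantize(h_d4, bits)
    tap_err = np.max(np.abs(h_d4 - hq))        # worst-case coefficient error
    dc_drift = np.sum(hq) - np.sqrt(2.0)       # deviation of the DC gain from sqrt(2)
    print(f"{bits:2d} fractional bits: tap error {tap_err:.2e}, DC drift {dc_drift:+.2e}")
```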
This paper describes a detailed study of the effect of finite-precision computation on wavelet-based image compression. Specifically, we examine how the quality of the final decoded image is affected by various choices that a hardware designer will have to make, such as the choice of wavelet (integer or real), fixed-point attributes (number of integer and fractional bits), and the degree of compression. The algorithm studied here is that adopted by the JPEG 2000 committee, and it uses trellis-coded quantization on the wavelet coefficients, followed by bit-plane coding.
We wavelet transform a non-diffracting field into a linear combination of non-diffracting electromagnetic wavelets, which are spatially localized in the lateral direction, translated in the propagation direction, and scaled in time and temporal frequency. The non-diffracting wavelet is the windowed Fourier transform of the Bessel beam with a dilated temporal window. The proposed transform will be useful for analyzing non-diffracting pulses and polychromatic beams.
Investigating a number of different integral transforms uncovers distinct patterns in the type of translation-convolution theorems afforded by each. It is shown that transforms based on separable kernels (e.g., Fourier, Laplace, and their relatives) have a form of the convolution theorem providing for a transform-domain product of the convolved functions. However, transforms based on kernels not separable in the function and transform variables mandate a convolution theorem of a different type; namely, in the transform domain the convolution becomes another convolution: one function convolved with the transform of the other.
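The two patterns can be written side by side; the wavelet case below is a standard example of a non-separable kernel, included here for illustration rather than taken from the paper.

```latex
% Separable kernel (Fourier): convolution becomes a product in the transform domain.
\mathcal{F}\{f * g\}(\omega) = \hat{f}(\omega)\,\hat{g}(\omega).
% Non-separable kernel (continuous wavelet transform): since
% W_\psi h(a,b) = \int h(t)\,\tfrac{1}{\sqrt{a}}\,\overline{\psi\!\left(\tfrac{t-b}{a}\right)}\,dt
% is itself a convolution in b, the transform of a convolution is again a convolution:
W_\psi\{f * g\}(a,b) = \bigl(f * W_\psi g(a,\cdot)\bigr)(b).
```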
The wavelet transform can be computed in the time domain by direct, polyphase, or run-length methods. Alternatively, FFT methods can be used. Typical comparisons of the computational effort in the literature are based on the number of floating-point multiplications, or multiplications and additions. The break-even point between the time-domain and FFT methods is computed based on this counting. But today's computer architectures are too complex (e.g., pipelining, cache, and CPU register utilization) to achieve precise results by counting. Benchmarks are reported in this paper to compare the different methods. In addition, MMX code is shown to give substantial speed-ups. The benchmarks are conducted in connection with the ESA-MAE image compression algorithm.
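The point that operation counts alone mislead is easy to reproduce: the sketch below times direct versus FFT-based convolution on the same data using wall-clock measurements. Signal and filter lengths are arbitrary, and the outcome depends on the machine, which is exactly the paper's argument.

```python
import time
import numpy as np
from scipy.signal import fftconvolve

x = np.random.randn(2 ** 18)          # long test signal
h = np.random.randn(30)               # a longish analysis filter (hypothetical length)

def bench(fn, repeats=20):
    """Average wall-clock time per call."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - t0) / repeats

direct = bench(lambda: np.convolve(x, h, mode='full'))
viafft = bench(lambda: fftconvolve(x, h, mode='full'))
print(f"direct {direct * 1e3:.2f} ms   fft {viafft * 1e3:.2f} ms")
```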
This paper addresses the issue of extracting fine frequency-modulation laws from narrow-band pulsed signals and provides a fast and efficient wavelet-based scheme for their extraction. Two distinct methods of frequency demodulation are analyzed: the first is standard direct extraction using the arctangent function, and the second is estimation via wavelet-transform ridges. The first method is predominant in applications because implementation is straightforward and computational complexity is small; however, such direct estimates have leading-edge and trailing-edge variability and are very sensitive to noise. The second, wavelet-based method is shown to be robust to noise and to provide crisp estimates of all modulation components in a signal. Both methods are numerically implemented in MATLAB and the results are compared on a set of signals of interest. Compared to conventional direct estimation, the wavelet-based approach is shown to provide many benefits including noise robustness, consistent leading and trailing edges, and low pulse-to-pulse variation.
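For reference, the direct (arctangent) estimate amounts to differentiating the unwrapped phase of the analytic signal. The paper implements this in MATLAB; the Python sketch below is a generic equivalent for illustration, not the authors' code.

```python
import numpy as np
from scipy.signal import hilbert

def arctan_instantaneous_frequency(x, fs):
    """Direct (arctangent) frequency demodulation of a narrow-band pulse."""
    z = hilbert(np.asarray(x, float))               # analytic signal
    phase = np.unwrap(np.angle(z))                  # unwrapped instantaneous phase
    return np.gradient(phase) * fs / (2 * np.pi)    # instantaneous frequency in Hz
```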
In this paper, we propose the use of a geometry-adapted Discrete Wavelet Transform as a simple tool for classifying geometric errors that occur due to variation in manufacturing processes. It might be used, for example, in sheet metal forming, where a clay model of an automobile body needs to conform to the specifications contained in a CAD model. Likewise, it might also be used in numerically controlled machining to classify the source of machining error. We demonstrate that constant, oscillatory and random variations can be easily distinguished through an examination of the wavelet coefficients.
Many autonomous vehicle navigation systems have adopted area-based stereo image processing techniques that use correlation measures to construct disparity maps as a basic obstacle detection and avoidance mechanism. Although the intra-scale area-based techniques perform well in pyramid processing frameworks, significant performance enhancement and reliability improvement may be achievable using wavelet-based inter-scale correlation measures. This paper presents a novel framework, which can be used in unmanned ground vehicles, to recover 3D depth information (disparity maps) from binocular stereo images. We propose a wavelet-based coarse-to-fine incremental scheme to build up refined disparity maps from coarse ones, and demonstrate that usable disparity maps can be generated from sparse (compressed) wavelet coefficients. Our approach is motivated by a biological mechanism of the human visual system, where multiresolution is a known feature of perceptual visual processing. Among traditional multiresolution approaches, wavelet analysis provides a mathematically coherent and precise definition of the concept of multiresolution. The variation of resolution enables the transform to identify image signatures of objects in scale space. We use these signatures, embedded in the wavelet transform domain, to construct more detailed disparity maps at finer levels. Inter-scale correlation measures within the framework are used to identify the signature at the next finer level, since wavelet coefficients contain well-characterized evolutionary information.
This paper applies the dyadic wavelet transform and a structured neural network approach to recognize 2D objects under translation, rotation, and scale transformation. Experimental results are presented and compared with traditional methods; they show that this refined technique successfully classified the objects and outperformed some traditional methods, especially in the presence of noise.
This work presents a methodology for multiple object tracking in image sequences at high video rate using a multi-resolution technique based on wavelet transforms. The main objective of visual tracking is to closely follow objects in each frame of a video stream such that the object position as well as other geometric information are always known. The idea is to locate and accompany an object based on previously identified features such as salient geometric properties. Important applications include real time tracking of single or multiple targets.
In this paper, we present a method for performing multi-scale analysis on range images using the wavelet transform, capable of revealing multi-resolution information. An accurate non-contact optical system based upon laser triangulation is used to determine the depth information of the object being scanned. The resulting range image is treated as a gray-level image using a multi-resolution approach based on a generalization of the cascade algorithm using the quincunx wavelet transform. The quincunx wavelet provides a very fine scale progression. This method allows reconstruction of non-subsampled images that correspond to decompositions previously done at chosen scales. Multi-resolution details and an approximation image are computed, and the emphasis is on their 3D representation. A 3D mesh is created from the object envelope. Vertices related to detail information at specific scales are color-coded so that a mesh reduction algorithm only reduces the amount of data corresponding to the object envelope. Thus, this paper proposes a method for extracting the multi-resolution information contained in range images using quincunx wavelet analysis for the purpose of data reduction.
The classical problem of motion (or velocity field, or optical flow) estimation from a pair of consecutive frames of a video sequence is approached here in the context of Circular Harmonic Wavelet (CHW) theory. This contribution extends previous work of the authors on wavelet-based Optimum Scale-Orientation Independent Pattern Recognition. In particular, here we make use of an orthogonal system of Laguerre-Gauss wavelets. Each wavelet represents the image by translated, dilated, and rotated versions of a complex waveform and, for a fixed resolution, this expansion provides a local representation of the image around any point. In addition, each waveform is self-steerable, i.e., it rotates by simple multiplication with a complex factor. These properties allow us to derive an iterative joint translation and rotation field Maximum Likelihood estimation procedure based on a bank of CHWs.
We consider the problem of real-time video compression for high-performance wireless information systems, particularly for military applications. Our approach is based on embedded coding of wavelet coefficients. Since this compresses data in order of visual importance, the bitstream can be truncated at any point, allowing a straightforward tradeoff between bitrate and picture quality. For lower bitrates, we propose the inclusion of singularity maps, which are a generalization of edge maps. These singularity maps help identify regions of interest in the video, so that they can be protected while the background is aggressively compressed. To lower costs and increase flexibility, we investigate purely software implementations running on general-purpose processors. Because of the availability of today's fast processors, designed for multimedia and communications applications, such an approach becomes more feasible. To this end we apply integer-to-integer wavelet transforms in order to avoid expensive floating-point operations. We also avoid motion compensation, which requires the expensive computation of motion vectors.
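The integer-to-integer transform idea can be illustrated with the reversible 5/3 lifting scheme, which uses only integer additions and shifts. The sketch below uses periodic boundary handling for brevity (standard codecs use symmetric extension) and is a generic example rather than the codec described above.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the reversible 5/3 integer wavelet via lifting (even-length input)."""
    x = np.asarray(x)
    s, d = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = d - ((s + np.roll(s, -1)) >> 1)             # predict: odd samples from even neighbours
    s = s + ((np.roll(d, 1) + d + 2) >> 2)          # update: preserve the running average
    return s, d

def lifting_53_inverse(s, d):
    s = s - ((np.roll(d, 1) + d + 2) >> 2)          # undo the update exactly
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2] = s
    x[1::2] = d + ((s + np.roll(s, -1)) >> 1)       # undo the prediction exactly
    return x
```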
A multi-resolution approach to problems of the identification of classes of ballistic missile objects is outlined. This approach is based on the utilization of features estimated from time-varying infrared signatures and the subsequent discrimination of different objects using unique time-frequency patterns obtained from a multi-resolution decomposition of the training and observation (performance evaluation) data. For example, we have identified four features that show some promise for discrimination: the intensity in the second lowest sub-band, the temporal profile in the lowest frequency sub-band, the modulation intensity, and the DC level of each observed object. The multi-resolution discrimination algorithm's performance can be evaluated by comparing with more traditional Fourier based approaches. The multi-resolution discrimination algorithms were applied to simulated data and were shown, by using L1 or L2 norms as distance metrics, to provide good classification performance and to reduce the temporal data length by half. The features extracted using the discrete wavelet packet transform can help to further improve classification performance. The robustness of the algorithm in the presence of noise is also studied. All data sets were generated with Raytheon Missile Systems Company's high fidelity simulation.
Wavelet and Gabor theories are applied to active imaging by sensors such as radar. This is achieved by estimating the reflectivity of the target area (target density function). New results are also obtained on beamforming, utilizing time varying shading (weighting) of the sensor output, and on multiresolution analysis of the target density functions which is useful for feature analysis of targets.
Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.
Scaling or dilation is an integral part of the wavelet transform. The wavelet transform possesses certain scale- invariance properties. This paper explores scale invariance further in the construction of linear, 2D, discrete-space, scale-invariant systems. Through a new definition of discrete-space, continuous-parameter dilation operation, deterministic and stochastic self-similarity in images is studied. It is shown that the dilation operation leads to the construction of linear scale-invariant systems for digital images. The paper provides methods for constructing such systems and shows application to the modeling of texture images.
In this paper, we present closed-form expressions for filters in multidimensional interpolation and approximation sampling systems matched to the input random field or image class in the mean-squared sense. We then present an expression for the mean-squared error between the reconstructed and the input field. For the approximation sampling system, we use this expression to show that the optimal antialiasing and reconstruction filters are spectral factors of an ideal brickwall-type filter. Finally, we give examples of filters matched to an image class generated using a separable AR model and a quincunx sampling lattice, and compare their performance with that of some standard interpolators.
In this article, we present the discrete wavelet transform as an alternative technique for calculating the spacing of moire pitches. Moire fringe patterns at various resolutions can be processed with the discrete wavelet transform to obtain a periodic waveform. The results we obtained are satisfactory, and the error is small.
This paper provides two general methods for exact reconstruction of finite-length digital signals in the wavelet transform.
Digital watermarking of digital images allows authentication of ownership and protection of intellectual property. We consider an algorithm wherein a luminance-channel-modulated watermark is embedded in the blue channel of a color image. The blue-channel image is decomposed via a wavelet transform, and the embedding strength is varied depending on the resolution band. The resolution-band weighting improves the algorithm's watermark recovery performance when distortion is introduced due to lossy compression.
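A minimal sketch of band-dependent embedding in the blue channel's wavelet domain, assuming PyWavelets. The wavelet, the per-band strengths, and the key-seeded pseudo-random mark are illustrative assumptions, and the luminance modulation described in the abstract is omitted.

```python
import numpy as np
import pywt

def embed_watermark(blue, key=0, wavelet='haar', alphas=(2.0, 4.0, 8.0)):
    """Add a pseudo-random +/-1 mark to detail bands, with band-dependent strength."""
    coeffs = pywt.wavedec2(blue.astype(float), wavelet, level=len(alphas))
    cA, details = coeffs[0], list(coeffs[1:])            # details ordered coarse -> fine
    rng = np.random.default_rng(key)
    for i, (cH, cV, cD) in enumerate(details):
        mark = rng.choice([-1.0, 1.0], size=cH.shape)    # key-dependent spread-spectrum mark
        details[i] = (cH + alphas[i] * mark, cV, cD)     # stronger embedding at finer bands
    return pywt.waverec2([cA] + details, wavelet)
```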
The main goal of this work is to denoise 3D confocal microscope scans of neuronal cells taken at high resolution, such that neuronal structures smaller than 1 micrometer become visible. Although scanning confocal microscopes give much clearer images than ordinary light microscopes do, the images are still noisy and blurred. Our goal is to filter out the noise in these images without disturbing the smallest neuronal structures, which have the same signal amplitude and geometric size as the noise. In order to obtain a good scale-space representation of the analyzed image, we use the 3D wavelet transform. We extend the denoising method of Donoho to 3D data and obtain several ways of computing thresholds and noise variances. Finally, we develop a quality measure for images with tree-like structures to determine the denoising method and wavelet best suited for a particular confocal scan.
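A minimal sketch of Donoho-style shrinkage extended to a 3D volume, assuming PyWavelets: a median-based noise estimate from the finest detail coefficients and a single universal soft threshold. This is one of many possible threshold/variance choices and is not the particular variant tuned in the paper.

```python
import numpy as np
import pywt

def denoise_confocal_3d(volume, wavelet='sym4', level=3):
    """3D wavelet shrinkage with a universal soft threshold."""
    coeffs = pywt.wavedecn(volume.astype(float), wavelet, level=level)
    # robust noise estimate from the finest-scale detail coefficients
    finest = np.concatenate([np.abs(v).ravel() for v in coeffs[-1].values()])
    sigma = np.median(finest) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(volume.size))          # universal threshold
    new_coeffs = [coeffs[0]]
    for d in coeffs[1:]:
        new_coeffs.append({k: pywt.threshold(v, thresh, mode='soft') for k, v in d.items()})
    return pywt.waverecn(new_coeffs, wavelet)
```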