This PDF file contains the front matter associated with SPIE
Proceedings Volume 7535, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
We propose a new compression approach based on the decomposition of images into continuous monovariate functions, which provides adaptability in the quantity of information used to define the monovariate functions: only a fraction of the pixels of the original image needs to be contained in the network used to build the correspondence between monovariate functions. The Kolmogorov Superposition Theorem (KST) states that any multivariate function can be decomposed into sums and compositions of monovariate functions. Our implementation of the decomposition proposed by Igelnik, modified for image processing, is combined with a wavelet decomposition: the low frequencies are represented with the highest accuracy, while the representation of the high frequencies benefits from the adaptive aspect of our method to achieve image compression. Our main contribution is a new compression scheme in which we combine KST with a multiresolution approach. Taking advantage of the KST decomposition scheme, we use a decomposition into simplified monovariate functions to compress the high frequencies. We detail our approach and the different methods used to simplify the monovariate functions. We present the reconstruction quality as a function of the number of pixels contained in the monovariate functions, as well as the image reconstructions obtained with each simplification approach.
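As a rough illustration of the multiresolution side of this scheme only (the KST/Igelnik monovariate construction is not reproduced; the aggressive treatment of the detail sub-bands below is a crude stand-in for the simplified monovariate functions, and the wavelet, level, and fractions are assumptions), a minimal Python sketch using PyWavelets:

```python
import numpy as np
import pywt

def compress_sketch(image, keep_fraction=0.1, wavelet="db2", level=2):
    """Keep the approximation sub-band exactly; in each detail sub-band,
    keep only the keep_fraction largest-magnitude coefficients."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    sparse_details = []
    for (cH, cV, cD) in details:
        bands = []
        for band in (cH, cV, cD):
            flat = np.abs(band).ravel()
            k = max(1, int(keep_fraction * flat.size))
            thresh = np.partition(flat, -k)[-k]          # magnitude of the k-th largest coefficient
            bands.append(np.where(np.abs(band) >= thresh, band, 0.0))
        sparse_details.append(tuple(bands))
    return pywt.waverec2([approx] + sparse_details, wavelet)

# Reconstruction quality as a function of the fraction of retained detail coefficients.
img = np.random.rand(256, 256)
for frac in (0.02, 0.05, 0.1, 0.2):
    rec = compress_sketch(img, keep_fraction=frac)
    print(frac, 10 * np.log10(1.0 / np.mean((img - rec) ** 2)))
```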
In this paper we present a method to optimize the computation of the wavelet transform for 3D seismic data while reducing the energy of the coefficients to a minimum. This allows us to reduce the entropy of the signal and thus to increase the compression ratio. The proposed method exploits the geometrical information contained in the 3D seismic data to optimize the computation of the wavelet transform: the classical filtering is replaced by a filtering that follows the horizons contained in the 3D seismic images. Applying this approach in two dimensions yields wavelet coefficients with lower energy. The experiments show that our method saves an extra 8% of the size of the object compared to the classical wavelet transform.
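A loose 2D stand-in for the horizon-following idea, assuming the horizon is supplied as one integer depth offset per trace (a hypothetical input format): each trace is aligned to the horizon before the transform, so the filtering along the trace axis follows the horizon rather than the sampling grid.

```python
import numpy as np
import pywt

def horizon_aligned_dwt(section, horizon, wavelet="db4", level=3):
    """section: 2D array (depth x traces); horizon: integer depth offset per trace."""
    aligned = np.empty_like(section)
    for j in range(section.shape[1]):
        aligned[:, j] = np.roll(section[:, j], -int(horizon[j]))   # flatten the horizon
    return pywt.wavedec2(aligned, wavelet, level=level)

def horizon_aligned_idwt(coeffs, horizon, wavelet="db4"):
    aligned = pywt.waverec2(coeffs, wavelet)
    restored = np.empty_like(aligned)
    for j in range(aligned.shape[1]):
        restored[:, j] = np.roll(aligned[:, j], int(horizon[j]))   # undo the alignment
    return restored
```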
In the civilian aviation field, the radar detection of hazardous weather phenomena (winds) is very important: it allows these phenomena to be avoided and consequently enhances flight safety. In this work, we use a wavelet-based method to estimate the mean velocity of winds. The results show that this method is promising compared with the classical estimators (pulse pair, Fourier).
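The abstract does not give enough detail to reproduce the wavelet estimator, but the classical baseline it is compared against is standard. A minimal sketch of the pulse-pair mean-velocity estimator (inputs assumed: complex I/Q samples, pulse repetition time, radar wavelength; the usual sign convention is assumed):

```python
import numpy as np

def pulse_pair_velocity(z, Ts, lam):
    """Mean radial velocity from the lag-1 autocorrelation of complex I/Q samples z,
    with pulse repetition time Ts (s) and radar wavelength lam (m)."""
    r1 = np.mean(np.conj(z[:-1]) * z[1:])            # lag-1 autocorrelation estimate
    return -lam / (4.0 * np.pi * Ts) * np.angle(r1)

# Consistency check on a pure Doppler tone at 10 m/s.
Ts, lam, v_true = 1e-3, 0.05, 10.0
n = np.arange(64)
z = np.exp(-1j * 4.0 * np.pi * v_true * Ts / lam * n)
print(pulse_pair_velocity(z, Ts, lam))               # ~10.0
```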
This paper presents a method that detects edge orientations in still images. Edge orientation is crucial information when one wants to optimize the quality of edges after various processing steps. The detection is carried out in the wavelet domain to take advantage of the multi-resolution features of wavelet spaces, and it locally adapts the resolution to the characteristics of the edges. Our orientation detection method consists of finding the local direction along which the wavelet coefficients are the most regular. To do so, the image is divided into square blocks of varying size, in which Bresenham lines are drawn to represent different directions. The direction of the Bresenham line that contains the most regular wavelet coefficients, according to a criterion defined in the paper, is considered to be the direction of the edge inside the block. The choice of the Bresenham line-drawing algorithm is justified in this paper, and we show that it considerably increases the angular precision compared to other methods, such as the one used for the construction of bandlet bases. An optimal segmentation is then computed in order to adapt the size of the blocks to the edge localization and to isolate at most one contour orientation in each block. Examples and applications to image interpolation are shown on real images.
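A minimal sketch of the per-block direction search described above; the regularity criterion used here (total variation of the coefficients along the line) is an assumption, not necessarily the criterion defined in the paper:

```python
import numpy as np

def bresenham(x0, y0, x1, y1):
    """Integer line from (x0, y0) to (x1, y1) (classic Bresenham algorithm)."""
    points, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            return points
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

def block_orientation(block, n_angles=32):
    """Angle (radians) along which the coefficients of a square block vary the least."""
    h, w = block.shape
    cx, cy = w // 2, h // 2
    r = min(cx, cy) - 1
    best_angle, best_cost = 0.0, np.inf
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        x1, y1 = int(round(cx + r * np.cos(theta))), int(round(cy + r * np.sin(theta)))
        x0, y0 = int(round(cx - r * np.cos(theta))), int(round(cy - r * np.sin(theta)))
        vals = np.array([block[y, x] for x, y in bresenham(x0, y0, x1, y1)])
        cost = np.sum(np.abs(np.diff(vals)))            # total variation along the line
        if cost < best_cost:
            best_cost, best_angle = cost, theta
    return best_angle
```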
In signal processing, regions of abrupt change contain most of the useful information about the nature of a signal. The regions or points where these changes occur are often termed singular points or singular regions. Singularity is considered an important characteristic of a signal, as it refers to the discontinuities and interruptions present in the signal, and the main purpose of detecting singular points is to identify the existence, location, and size of those singularities. The electrocardiogram (ECG) signal is used to analyze cardiovascular activity in the human body. However, the presence of noise from several sources limits the doctor's decision and prevents accurate identification of different pathologies. In this work we analyze the ECG signal with an energy-based approach and some heuristic methods to segment the signal and identify the different signatures inside it. The ECG signal is first denoised by an empirical wavelet shrinkage approach based on Stein's Unbiased Risk Estimate (SURE). In a second stage, the ECG signal is analyzed with Mallat's approach based on modulus maxima and the computation of Lipschitz exponents. The results from both approaches are discussed and the important aspects are highlighted. In order to evaluate the algorithm, the analysis has been carried out on the MIT-BIH Arrhythmia database, a set of ECG records sampled at a rate of 360 Hz with 11-bit resolution over a 10 mV range. The results have been examined and approved by medical doctors.
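A minimal sketch of the denoising stage, following the standard SureShrink recipe (soft thresholding with a SURE-selected threshold per detail sub-band) rather than the authors' exact implementation; the wavelet choice and decomposition depth are assumptions:

```python
import numpy as np
import pywt

def sure_threshold(d, sigma):
    """Threshold minimizing Stein's Unbiased Risk Estimate for soft thresholding of d."""
    x = np.sort(np.abs(d / sigma))
    n = x.size
    # SURE(t) = n - 2*#{|x_i| <= t} + sum(min(|x_i|, t)^2), evaluated at t = x[k]
    cum2 = np.cumsum(x ** 2)
    k = np.arange(1, n + 1)
    risks = n - 2 * k + cum2 + (n - k) * x ** 2
    return sigma * x[np.argmin(risks)]

def denoise_ecg(signal, wavelet="sym8", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate from finest scale
    for i in range(1, len(coeffs)):
        t = sure_threshold(coeffs[i], sigma)
        coeffs[i] = pywt.threshold(coeffs[i], t, mode="soft")
    return pywt.waverec(coeffs, wavelet)
```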
This paper describes an extension of the Minimum Sobolev Norm (MSN) interpolation scheme to an approximation scheme. A fast implementation of the MSN interpolation method using techniques for Hierarchically Semiseparable (HSS) matrices is described and experimental results are provided. The approximation scheme is introduced along with a numerically stable solver. Several numerical results are provided comparing the interpolation scheme, the approximation scheme, and Thin Plate Splines. A method to decompose images into smooth and rough components is presented. A metric that could be used to distinguish edges from textures in the rough component is also introduced. Suitable examples are provided for both of the above.
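A loose 1D sketch of the minimum-Sobolev-norm idea: among trigonometric interpolants of scattered samples, pick the one whose Sobolev-weighted coefficient norm is smallest. A dense minimum-norm least-squares solve stands in for the fast HSS machinery described in the paper, and the basis size and Sobolev order are assumptions:

```python
import numpy as np

def msn_interpolate(x_nodes, f_nodes, n_modes=64, s=2.0):
    """Return a callable interpolant on [0, 2*pi) built from complex exponentials."""
    k = np.arange(-n_modes, n_modes + 1)
    V = np.exp(1j * np.outer(x_nodes, k))              # Vandermonde of complex exponentials
    w = (1.0 + k ** 2) ** (s / 2.0)                    # Sobolev weights per frequency
    # minimize ||w * c|| subject to V c = f  <=>  minimum-norm solution of (V / w) a = f, c = a / w
    a, *_ = np.linalg.lstsq(V / w, f_nodes.astype(complex), rcond=None)
    c = a / w
    return lambda x: np.real(np.exp(1j * np.outer(np.atleast_1d(x), k)) @ c)
```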
State-of-the-art dimension reduction and classification schemes in multi- and hyper-spectral imaging rely primarily on the information contained in the spectral component. To better capture the joint spatial and spectral data distribution, we combine the Wavelet Packet Transform with the linear dimension reduction method of Principal Component Analysis. Each spectral band is decomposed by means of the Wavelet Packet Transform, and we consider a joint entropy across all the spectral bands as a tool to exploit the spatial information. Dimension reduction is then applied to the Wavelet Packet coefficients. We present examples of this technique for hyper-spectral satellite imaging. We also investigate the role of various shrinkage techniques to model non-linearity in our approach.
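A compact sketch of the pipeline (wavelet-packet decomposition of each spectral band, packet coefficients stacked as features, PCA via SVD); the joint-entropy best-basis selection and the shrinkage steps mentioned above are not reproduced, and a full fixed-level decomposition is used instead:

```python
import numpy as np
import pywt

def wpt_pca_features(cube, wavelet="db2", level=2, n_components=10):
    """cube: hyperspectral image of shape (rows, cols, bands)."""
    feats = []
    for b in range(cube.shape[2]):
        wp = pywt.WaveletPacket2D(cube[:, :, b], wavelet=wavelet, maxlevel=level)
        for node in wp.get_level(level, order="natural"):
            feats.append(node.data.ravel())            # one packet sub-band per feature column
    X = np.stack(feats, axis=1).astype(float)          # (coarse pixels) x (bands * packets)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T                     # projection on the leading principal axes
```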
This paper presents an evidential segmentation scheme for respiratory sounds aimed at the detection of wheezes. The segmentation is based on modeling the data with evidence theory, which is well suited to representing such uncertain and imprecise data. Moreover, this paper studies the efficiency of fuzzy set theory for modeling data imprecision. The segmentation results are improved by adding a priori information to the segmentation scheme. The effectiveness of the method is demonstrated on synthetic and real signals.
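The segmentation scheme itself is not reproduced here, but the evidence-theory building block it relies on can be sketched: Dempster's rule of combination for two mass functions over a frame such as {wheeze, normal} (the mass values below are purely illustrative):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic belief assignments (dicts mapping frozensets of hypotheses to mass)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                       # mass assigned to the empty set
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

W, N = frozenset({"wheeze"}), frozenset({"normal"})
m_time = {W: 0.6, N: 0.2, W | N: 0.2}                 # evidence from one feature
m_freq = {W: 0.5, N: 0.3, W | N: 0.2}                 # evidence from another feature
print(dempster_combine(m_time, m_freq))
```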
Old movies suffer from several degradations, mainly due to archiving conditions. Since most old films represent a large amount of valuable data for scientific, cultural, social, and economic purposes, it is essential to preserve them by resorting to fully digital and automatic restoration techniques. Among the existing degradations, blotches are visually very unpleasant artifacts, and many efforts have therefore been devoted to designing blotch correction procedures. Generally, two-step restoration procedures have been investigated: blotch detection is performed prior to the correction of the degradation. The contribution of our approach is twofold. Firstly, the blotch detection is carried out on a multiscale representation of the degraded frames. Secondly, statistical tests are employed to locate the underlying artifacts. In this paper, we aim at achieving two objectives. On the one hand, we improve the detection performance by exploiting, in the statistical test, the interscale dependencies that exist between the coefficients of the considered multiscale representation of the frames. On the other hand, an efficient spatio-temporal inpainting-based technique for filling in missing areas is used in order to estimate the information masked by the blotches. Experimental results indicate the efficiency of our approach compared to conventional blotch correction methods.
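A much simplified single-frame stand-in for the two-step procedure (detection by a robust statistical test on the deviation from the temporal median of neighbouring frames, then spatial inpainting with OpenCV); the interscale statistical test and the spatio-temporal inpainting of the paper are not reproduced:

```python
import numpy as np
import cv2

def remove_blotches(prev, cur, nxt, k=4.0):
    """prev, cur, nxt: consecutive grayscale frames as uint8 arrays."""
    ref = np.median(np.stack([prev, cur, nxt]).astype(np.float32), axis=0)
    resid = cur.astype(np.float32) - ref
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))   # robust scale (MAD)
    mask = (np.abs(resid) > k * sigma).astype(np.uint8)            # candidate blotch pixels
    return cv2.inpaint(cur, mask, 3, cv2.INPAINT_TELEA)
```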
In this article, we present the method of Empirical Mode Decomposition (EMD) applied to the analysis and denoising of electrocardiogram and phonocardiogram signals. The objective of this work is to detect cardiac anomalies of a patient automatically. As these anomalies are localized in time, the localization of all the events must be preserved precisely. Methods based on the Fourier transform lose this localization property [13]; the wavelet transform (WT) makes it possible to overcome the localization problem, but the interpretation of the coefficients remains too difficult to characterize the signal precisely.
In this work we propose to apply EMD, which has very useful properties for pseudo-periodic signals. The second section describes the EMD algorithm. In the third part we present the results obtained on phonocardiogram (PCG) and electrocardiogram (ECG) test signals. The analysis and interpretation of these signals are given in the same section. Finally, we introduce an adaptation of the EMD algorithm which appears to be very efficient for denoising.
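A small sketch of EMD-based denoising, here with the PyEMD package (an assumption; the adapted algorithm introduced in the paper is not reproduced): decompose the signal into intrinsic mode functions, drop the first ones, where high-frequency noise concentrates, and recombine the rest:

```python
import numpy as np
from PyEMD import EMD

def emd_denoise(signal, n_noisy_imfs=2):
    imfs = EMD()(signal)                    # intrinsic mode functions, finest oscillations first
    return np.sum(imfs[n_noisy_imfs:], axis=0)

# Example on a noisy pseudo-periodic test signal.
t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
x_denoised = emd_denoise(x)
```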
We propose a real-time system for blur estimation using wavelet decomposition. The system is based on an emerging multi-core microprocessor architecture (Cell Broadband Engine, Cell BE) known to outperform available general-purpose or DSP processors in the domain of real-time advanced video processing. We start from a recent wavelet-domain blur estimation algorithm which uses histograms of a local regularity measure called the average cone ratio (ACR). This approach has shown very good potential for assessing the level of blur in an image, yet some important aspects remain to be addressed for the method to become practically usable. Some of these aspects are explored in our work. Furthermore, we develop an efficient real-time implementation of the new metric and integrate it into a system that captures live video. The proposed system estimates the blur extent and renders the results to the remote user in real time.
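Neither the exact ACR measure nor the Cell BE implementation is reproduced here; as a rough stand-in for the underlying cue, the sketch below uses the undecimated wavelet transform (so scales stay aligned) and histograms the per-pixel ratio of detail magnitudes at successive scales, since blurred images lose fine-scale energy:

```python
import numpy as np
import pywt

def interscale_ratio_histogram(image, wavelet="db2", bins=32):
    """image: 2D array with both sides divisible by 4 (required by a 2-level SWT)."""
    coeffs = pywt.swt2(image, wavelet, level=2)          # coarsest level first, like wavedec2
    _, (h2, v2, d2) = coeffs[0]                          # level-2 (coarse) details
    _, (h1, v1, d1) = coeffs[-1]                         # level-1 (fine) details
    fine = np.abs(h1) + np.abs(v1) + np.abs(d1) + 1e-8
    coarse = np.abs(h2) + np.abs(v2) + np.abs(d2) + 1e-8
    hist, _ = np.histogram(fine / coarse, bins=bins, range=(0.0, 4.0), density=True)
    return hist                                          # blur shifts mass toward small ratios
```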
Watermarking, Image Retrieval, and 3D Meshes Analysis
The watermarking state of the art features hybrid methods combining spread spectrum and side information principles. The present study is focused on speeding up such an algorithm (jointly patented by SFR - Vodafone Group and Institut Telecom). The bottleneck of the reference method is first identified: the embedding module accounts for 90% of the runtime of the whole watermarking chain, and more than 99% of this time is spent applying an attack procedure (required in order to grant good robustness to the method). The main objective of the present study is to deploy Monte Carlo generators that accurately represent the watermarking attacks. In this respect, two difficulties must be overcome. First, accurate statistical models of the watermarking attacks must be obtained. Secondly, efficient Monte Carlo simulators must be deployed for these models. The last part of the study is devoted to experimental validation. The mark is inserted in the (9,7) DWT representation of the video sequence. Several types of attacks have been considered (linear and non-linear filters, geometrical transformations, ...). The quantitative results prove that the data payload, transparency, and robustness properties are inherited from the reference method, while the watermarking speed is increased by a factor of 80.
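A toy sketch of the Monte Carlo idea only: embed a spread-spectrum mark in the detail coefficients of a 9/7 DWT, draw random parametric attacks from simple models (additive Gaussian noise, Gaussian smoothing), and estimate the detection rate by correlation. The side-information embedding, the patented video chain, and the actual attack models are not reproduced; all parameters below are assumptions for an even-sized 8-bit grayscale image stored as a float array:

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def key_mark(key, shape):
    """Pseudo-random +/-1 pattern derived from a secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed(image, key, strength=2.0, wavelet="bior4.4"):
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)         # bior4.4 is the 9/7 wavelet in PyWavelets
    return pywt.idwt2((cA, (cH + strength * key_mark(key, cH.shape), cV, cD)), wavelet)

def detect(image, key, wavelet="bior4.4"):
    _, (cH, _, _) = pywt.dwt2(image, wavelet)
    mark = key_mark(key, cH.shape)
    return float(np.sum(cH * mark) / np.sum(mark ** 2))  # correlation score, ~strength if marked

def monte_carlo_robustness(image, key, n_trials=200, threshold=1.0):
    rng = np.random.default_rng(0)
    marked, hits = embed(image, key), 0
    for _ in range(n_trials):
        attacked = marked + rng.normal(0.0, 5.0, marked.shape)             # random additive noise
        attacked = gaussian_filter(attacked, sigma=rng.uniform(0.0, 1.5))  # random smoothing
        hits += detect(attacked, key) > threshold
    return hits / n_trials
```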
In this work a novel technique for color texture representation and classification is presented. We assume that a color texture can be mainly characterized by two components: structure and color. The structure is analyzed using the Laguerre-Gauss circular harmonic wavelet decomposition of the luminance channel. To this aim, the marginal density of the wavelet coefficients is modeled by a Generalized Gaussian Density (GGD), and the similarity is based on the Kullback-Leibler divergence (KLD) between two GGDs. The color is characterized by the moments computed on the chromatic channels, and the similarity is evaluated using the Euclidean distance. The overall similarity is obtained by linearly combining the two individual measures. Experimental results on a data set of 640 color texture images, extracted from the "Vision Texture" database, show that the retrieval rate is about 81% when only the structural component is employed, and rises to 87% when both structural and color components are used.
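The structural similarity measure rests on fitting a GGD to each sub-band and comparing fits with the closed-form KLD between GGDs (Do and Vetterli, 2002). A sketch using a simple moment-matching fit (a stand-in for maximum-likelihood fitting):

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd(coeffs):
    """Return (alpha, beta) for p(x) proportional to exp(-(|x|/alpha)^beta)."""
    m1, m2 = np.mean(np.abs(coeffs)), np.mean(coeffs ** 2)
    ratio = m1 ** 2 / m2
    f = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b)) - ratio
    beta = brentq(f, 0.1, 10.0)                  # solve the moment-matching equation for beta
    alpha = m1 * gamma(1.0 / beta) / gamma(2.0 / beta)
    return alpha, beta

def kld_ggd(a1, b1, a2, b2):
    """Closed-form Kullback-Leibler divergence between two GGDs (Do and Vetterli, 2002)."""
    return (np.log(b1 * a2 * gamma(1.0 / b2) / (b2 * a1 * gamma(1.0 / b1)))
            + (a1 / a2) ** b2 * gamma((b2 + 1.0) / b1) / gamma(1.0 / b1)
            - 1.0 / b1)
```

As a sanity check, kld_ggd(a, b, a, b) evaluates to zero for any valid (a, b).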
3D meshes are widely used in computer graphics applications for approximating 3D models. When representing complex shapes in a raw data format, meshes consume a large amount of space. Applications calling for compact storage and fast processing of large 3D meshes have motivated a multitude of algorithms developed to process these datasets efficiently. The concept of multiresolution analysis provides an efficient and versatile tool for digital geometric processing, allowing for numerous applications. In this paper, we survey recent developments in multiresolution methods for 3D triangle meshes. We also illustrate these methods through various applications.