The present paper proposes a blind multi-bit watermarking method for High Dynamic Range (HDR) images. The proposed
approach is designed to guarantee watermark imperceptibility in both the marked HDR image and its Low Dynamic Range (LDR) counterpart, thus being robust against significant non-linear distortions such as those introduced by tone-mapping operators (TMOs). To this end, the wavelet transform of the Just Noticeable Difference (JND)-scaled space of the original HDR image is employed as the embedding domain. Moreover, a visual mask taking into account
specific aspects of the Human Visual System (HVS) is exploited to improve the quality of the resulting watermarked image.
Specifically, bilateral filtering is used to locate information on the detail part of the HDR image, where the watermark
should be preferably embedded. A contrast sensitivity function is also employed to modulate the watermark intensity
in each wavelet decomposition subband according to its scale and orientation. An extensive set of experimental results
testifies to the effectiveness of the proposed scheme in embedding multi-bit watermarks into HDR images without affecting the visual quality of the original image, while remaining robust against TMOs.
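As a rough illustration of the kind of embedding pipeline described above, the sketch below adds a spread-spectrum multi-bit mark to the detail subbands of a wavelet decomposition of a perceptually scaled HDR image, assuming PyWavelets is available. The log-luminance mapping stands in for the paper's JND scaling, and the per-level weights are illustrative placeholders for a CSF profile, not the values used in the paper.

```python
import numpy as np
import pywt

def embed_hdr_watermark(hdr, bits, strength=0.05, seed=0):
    # Log-luminance is a crude stand-in for the JND-scaled space.
    jnd = np.log1p(hdr)
    rng = np.random.default_rng(seed)
    coeffs = pywt.wavedec2(jnd, 'db4', level=2)
    level_weight = {0: 0.6, 1: 1.0}   # illustrative CSF-like per-level weights
    new_details = []
    for i, (cH, cV, cD) in enumerate(coeffs[1:]):
        w = level_weight[i]
        bands = []
        for band in (cH, cV, cD):
            pattern = np.zeros_like(band)
            for b in bits:            # one bipolar spread-spectrum carrier per message bit
                carrier = rng.choice([-1.0, 1.0], size=band.shape)
                pattern += (1.0 if b else -1.0) * carrier
            bands.append(band + strength * w * pattern)
        new_details.append(tuple(bands))
    marked_jnd = pywt.waverec2([coeffs[0]] + new_details, 'db4')
    return np.expm1(marked_jnd)       # back to linear HDR values

hdr = np.abs(np.random.randn(256, 256)) * 100.0   # synthetic HDR luminance
marked = embed_hdr_watermark(hdr, bits=[1, 0, 1, 1])
```

A detector would correlate the wavelet coefficients of the (possibly tone-mapped) image with the same key-generated carriers; that step is omitted here.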
This paper presents a hybrid watermarking technique which mixes additive and multiplicative watermark embedding
with emphasis on its robustness versus the imperceptibility of the watermark. The embedding is performed
in six wavelet sub-bands, independently using three embedding equations and two parameters that modulate the embedding strength for multiplicative and additive embedding. The watermark strength is modulated independently in distinct image areas. Specifically, when multiplicative embedding is used, the visibility threshold is first reached near the image edges, whereas with an additive embedding technique the visibility threshold is first reached in the smooth areas. A subjective experiment was used to determine the optimal watermark
strength for three distinct embedding equations. Observers were asked to tune the watermark amplitude and to
set the strength at the visibility threshold. The experimental results showed that using a hybrid watermarking
technique significantly improves the robustness performance. This work is a preliminary study for the design of
an optimal wavelet domain Just Noticeable Difference (JND) mask.
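A minimal sketch of the hybrid idea described above is given below, assuming PyWavelets: large wavelet coefficients (taken here as a proxy for edge areas) receive a multiplicative mark, smooth areas an additive one, with two separate strength parameters. The three embedding equations, the six-subband layout, and the subjectively tuned strengths of the paper are not reproduced; the rules and values below are illustrative only.

```python
import numpy as np
import pywt

def hybrid_embed(image, alpha=0.8, beta=2.0, seed=0):
    # alpha: multiplicative strength (edge areas); beta: additive strength (smooth areas).
    rng = np.random.default_rng(seed)
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'db2')
    marked = []
    for band in (cH, cV, cD):
        w = rng.choice([-1.0, 1.0], size=band.shape)          # bipolar watermark
        edge = np.abs(band) > np.percentile(np.abs(band), 75)  # crude edge/smooth split
        out = band.copy()
        out[edge] = band[edge] * (1.0 + alpha * w[edge])        # multiplicative rule
        out[~edge] = band[~edge] + beta * w[~edge]              # additive rule
        marked.append(out)
    return pywt.idwt2((cA, tuple(marked)), 'db2')
```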
In this paper we address the problem of crosstalk reduction for autostereoscopic displays. Crosstalk refers to the
perception of one or more unwanted views in addition to the desired one. Specifically, the proposed approach consists of
three different stages: a crosstalk measurement stage, where the crosstalk is modeled; a filter design stage, based on the measurement results, to mitigate the crosstalk effect; and a validation test carried out by means of subjective assessments performed in a controlled environment, as recommended in ITU-R BT.500-11. Our analysis,
synthesis, and subjective experiments are performed on the Alioscopy® display, which is a lenticular multiview display.
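The snippet below is only a generic sketch of crosstalk pre-compensation under a linear mixing model, not the measured filter design of the paper: the views intended for each zone are pre-multiplied by the inverse of a hypothetical crosstalk matrix C so that, after the display's leakage, each zone approximately receives its intended view.

```python
import numpy as np

def compensate_crosstalk(views, C):
    # C[i, j] = fraction of view j leaking into viewing zone i (hypothetical model).
    views = np.asarray(views, dtype=float)          # shape (N, H, W)
    n, h, w = views.shape
    inv = np.linalg.inv(C)
    comp = np.tensordot(inv, views.reshape(n, -1), axes=1).reshape(n, h, w)
    return np.clip(comp, 0.0, 255.0)                # keep displayable values

# toy example: 3 views, 10% leakage from each neighbouring view
C = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.8]])
views = np.random.randint(0, 256, size=(3, 120, 160)).astype(float)
compensated = compensate_crosstalk(views, C)
```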
We present an object-oriented method for watermarking stereo images. Since stereo images are characterized by the perception of depth, the proposed watermarking scheme relies on the extraction of a depth map from the stereo pair to embed the mark. The watermark embedding is performed in the wavelet domain using the quantization index modulation method. Experimental results show that the proposed method is semi-fragile: it is robust to JPEG and JPEG2000 compression and fragile with respect to other signal manipulations.
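Quantization index modulation (QIM) on wavelet coefficients can be sketched as follows, assuming PyWavelets; the subband choice, quantization step, and use of the depth map as host are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
import pywt

def qim_embed(depth_map, bits, step=8.0):
    # Each selected coefficient is quantized onto one of two interleaved
    # lattices depending on the bit value (basic QIM).
    cA, (cH, cV, cD) = pywt.dwt2(depth_map.astype(float), 'haar')
    flat = cH.copy().ravel()
    for i, b in enumerate(bits):
        offset = 0.0 if b == 0 else step / 2.0
        flat[i] = np.round((flat[i] - offset) / step) * step + offset
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), 'haar')

def qim_extract(marked_depth, n_bits, step=8.0):
    # Recover each bit by checking which lattice the coefficient is closer to.
    _, (cH, _, _) = pywt.dwt2(marked_depth, 'haar')
    flat = cH.ravel()
    bits = []
    for i in range(n_bits):
        d0 = abs(flat[i] - np.round(flat[i] / step) * step)
        d1 = abs(flat[i] - (np.round((flat[i] - step / 2) / step) * step + step / 2))
        bits.append(0 if d0 <= d1 else 1)
    return bits

depth = np.random.rand(64, 64) * 255.0
marked = qim_embed(depth, [1, 0, 1, 1, 0])
print(qim_extract(marked, 5))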
In this paper an adaptive feature-based approach to on-line signature verification is presented. Cryptographic
techniques are employed to protect the extracted templates, making it impossible to derive the original biometric data from the stored information while allowing multiple templates to be generated from the same original biometrics. Our approach thus provides, together with protection, template cancelability, thereby guaranteeing the user's privacy. The proposed authentication scheme is able to automatically adjust its parameters to the variability of each user's signature, yielding a user-adaptive system with enhanced performance with respect to a non-adaptive one. Experimental results show the effectiveness of our approach. The effects of pen inclination features on the recognition performance are also investigated.
In this contribution, we propose an adaptive multiresolution denoising technique operating in the wavelet domain that
selectively enhances object contours, extending a restoration scheme based on an edge-oriented wavelet representation by means of adaptive surround inhibition inspired by characteristics of the human visual system. The use of the complex edge-oriented wavelet representation is motivated by the fact that it is tuned to the most relevant visual image features. In this
domain, an edge is represented by a complex number whose magnitude is proportional to its "strength" and whose phase equals the orientation angle. The complex edge wavelet is the first-order dyadic Laguerre-Gauss Circular Harmonic Wavelet, which acts as a band-limited gradient operator. The anisotropic sharpening function enhances large edges and attenuates small ones to varying degrees, accounting for masking effects induced by the textured background. The sharpening is adapted to the local image content by identifying the local statistics of the natural and artificial textures, such as grass, foliage, and water, that compose the background. In the paper, the whole mathematical model is derived and its performance is validated through simulations on a wide data set.
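The sketch below illustrates the flavour of a surround-inhibited complex edge map, assuming SciPy: a Gaussian-derivative gradient stands in for the first-order Laguerre-Gauss circular harmonic wavelet, the complex response carries strength (magnitude) and orientation (phase), and edge strength is attenuated where surrounding texture energy is high. The filters and parameters are assumptions, not the paper's operator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def surround_inhibited_edges(image, sigma=1.5, surround_sigma=6.0, inhibition=1.0):
    # Band-limited gradient via Gaussian derivatives (stand-in for the CHW).
    gx = gaussian_filter(image, sigma, order=(0, 1))
    gy = gaussian_filter(image, sigma, order=(1, 0))
    edge = gx + 1j * gy                        # complex edge representation
    strength = np.abs(edge)
    orientation = np.angle(edge)
    # Local texture energy in an annular surround (difference of Gaussians).
    surround = gaussian_filter(strength, surround_sigma) - gaussian_filter(strength, sigma)
    inhibited = np.clip(strength - inhibition * np.maximum(surround, 0.0), 0.0, None)
    return inhibited, orientation
```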
Biometrics is rapidly becoming the principal technology
for automatic people authentication. The main advantage in using
biometrics over traditional recognition approaches lies in the difficulty of losing, stealing, or copying individual behavioral or physical traits. The major weakness of biometrics-based systems lies in their security: to avoid data theft or corruption, storing
raw biometric data is not advised. The same problem occurs when
biometric templates are employed, since they can be used to recover
the original biometric data. We employ cryptographic techniques
to protect dynamic signature features, making it impossible
to derive the original biometrics from the stored templates, while
maintaining good recognition performance. Together with protection, we also guarantee template cancelability and renewability.
Moreover, the proposed authentication scheme is tailored to the signature
variability of each user, thus obtaining a user-adaptive system with enhanced performance with respect to a non-adaptive one. Experimental results show the effectiveness of our approach when compared to both traditional non-secure classifiers and other previously proposed protection schemes.
In this paper we propose an OFDM (Orthogonal Frequency Division Multiplexing) wireless communication system that
introduces mutual authentication and encryption at the physical layer, without impairing spectral efficiency, by exploiting some degrees of freedom of the base-band signal and using encrypted-hash algorithms. FEC (Forward Error Correction) is
instead performed through variable-rate Turbo Codes. To avoid false rejections, i.e. rejections of enrolled (authorized)
users, we designed and tested a robust hash algorithm. This robustness is obtained both by a segmentation of the hash
domain (based on BCH codes) and by the FEC capabilities of Turbo Codes.
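The idea of a hash that tolerates a few residual errors can be sketched as follows, with a simple repetition code standing in for the BCH-based segmentation of the hash domain; the key, code, and parameters are assumptions for illustration only.

```python
import hashlib
import hmac
import numpy as np

def robust_tag(bits, key, rep=3):
    # Map the received bits to the nearest codeword (majority vote per group,
    # a stand-in for BCH decoding) before hashing, so that a few channel
    # errors do not change the authentication tag.
    groups = np.asarray(bits).reshape(-1, rep)
    corrected = (groups.sum(axis=1) > rep // 2).astype(np.uint8)
    return hmac.new(key, corrected.tobytes(), hashlib.sha256).digest()

key = b'shared-secret'
info = np.random.randint(0, 2, size=10)
encoded = np.repeat(info, 3)            # repetition-coded bits on the channel
noisy = encoded.copy()
noisy[4] ^= 1                           # a single channel error
assert robust_tag(encoded, key) == robust_tag(noisy, key)   # tag survives the error
```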
In this paper we propose a signature-based biometric system, where watermarking is applied to signature images in order to hide and keep secret some signature features in a static representation of the signature itself. Being a behavioral biometric, signatures are intrinsically different from other commonly used biometric data, possessing dynamic properties that cannot be extracted from a single signature image. The marked images can be used for user authentication, allowing their static characteristics to be analyzed by automatic algorithms or security attendants. When higher security is needed, the embedded features can be extracted and used, thus realizing a multi-level decision procedure. The proposed watermarking techniques are tailored to images with sharp edges, such as a signature picture. In order to obtain a robust method, able to hide relevant data while keeping the original structure of the host intact, the mark is embedded as close as possible to the lines that constitute the signature, exploiting the properties of the Radon transform. An extensive set of experimental results, obtained by varying the system's parameters and concerning both mark extraction and verification performance, shows the effectiveness of our approach.
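The line-localisation step that a Radon-based embedding relies on can be sketched as follows, assuming scikit-image; only the localisation of dominant strokes is shown, not the paper's embedding rule, and the peak-picking strategy is an assumption.

```python
import numpy as np
from skimage.transform import radon

def dominant_strokes(signature_img, n_lines=5):
    # Assumes strokes are bright on a dark background (invert the image if needed).
    # Peaks of the sinogram give (angle, radial bin) parameters of the dominant
    # straight strokes, near which a mark could be embedded.
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(signature_img, theta=theta, circle=False)
    peaks = np.argsort(sinogram.ravel())[-n_lines:]
    rho_idx, theta_idx = np.unravel_index(peaks, sinogram.shape)
    # Return angles in degrees and radial offsets relative to the sinogram centre.
    return list(zip(theta[theta_idx], rho_idx - sinogram.shape[0] // 2))
```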
Biometrics is the fastest-emerging technology for automatic people authentication; nevertheless, severe concerns have been raised about the security of such systems and users' privacy. In the case of malicious attacks on one or more components of the authentication system, stolen biometric features cannot be replaced. This paper focuses on securing the enrollment database and the communication channel between the database and the matcher. In particular, a method is developed to protect the stored biometric templates, adapting the fuzzy commitment scheme to iris biometrics by exploiting error correction codes tailored to template discriminability. The aforementioned method provides template renewability for iris-based authentication and guarantees high security by performing the matching in the encrypted domain.
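A minimal fuzzy commitment sketch is given below; a repetition code stands in for the error correction code tailored to iris discriminability, and all sizes and noise levels are illustrative assumptions. The stored data (helper and hash) reveal neither the secret nor the iris code, and matching only checks a hash.

```python
import hashlib
import numpy as np

def fuzzy_commit(iris_code, secret_bits, rep=5):
    # Bind a secret to the iris code: helper = codeword XOR iris code.
    codeword = np.repeat(secret_bits, rep)
    helper = codeword ^ iris_code
    return helper, hashlib.sha256(secret_bits.tobytes()).digest()

def fuzzy_open(helper, secret_hash, iris_code_query, rep=5):
    # Decommit with a fresh (noisy) iris code; succeeds if residual errors
    # are within the correction capability of the code.
    noisy_codeword = helper ^ iris_code_query
    decoded = (noisy_codeword.reshape(-1, rep).sum(axis=1) > rep // 2).astype(np.uint8)
    return hashlib.sha256(decoded.tobytes()).digest() == secret_hash

secret = np.random.randint(0, 2, 16).astype(np.uint8)
iris_enroll = np.random.randint(0, 2, 16 * 5).astype(np.uint8)
helper, tag = fuzzy_commit(iris_enroll, secret)
iris_query = iris_enroll.copy()
iris_query[::7] ^= 1                       # simulate intra-class noise
print(fuzzy_open(helper, tag, iris_query)) # True while errors stay correctable
```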
In this paper we propose an Orthogonal Frequency Division Multiplexing Ultra Wide Band (OFDM-UWB) system that
introduces encryption, mutual authentication, and data integrity functions at the physical layer, without impairing spectral efficiency. Encryption is performed by rotating the constellation employed in each band by means of a pseudo-random phase-hopping sequence. Authentication and data integrity, based on an encrypted hash, are directly coupled with Forward Error Correction (FEC). The dependence of the phase-hopping sequence on the transmitted message prevents phase-hopping values obtained through known- and chosen-plaintext attacks from being used to decrypt further messages. Moreover, since the phase-hopping generation keys change very rapidly, they are also difficult for a hypothetical man in the middle to detect. Computer simulations confirm the performance, superior even in terms of BER to that of a standard PSK-OFDM system, thanks to the FEC capabilities of the encrypted hash.
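The constellation-rotation idea can be illustrated as follows; a NumPy PRNG seeded by a shared key stands in for the message-dependent phase-hopping generator of the paper, which is not reproduced here.

```python
import numpy as np

def phase_hop_encrypt(symbols, key_seed):
    # Rotate each subcarrier symbol by a pseudo-random phase derived from the key.
    rng = np.random.default_rng(key_seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=symbols.shape)
    return symbols * np.exp(1j * phases)

def phase_hop_decrypt(rx_symbols, key_seed):
    # Regenerate the same phases from the shared key and undo the rotation.
    rng = np.random.default_rng(key_seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=rx_symbols.shape)
    return rx_symbols * np.exp(-1j * phases)

bits = np.random.randint(0, 4, size=64)
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # QPSK constellation
cipher = phase_hop_encrypt(tx, key_seed=1234)
recovered = phase_hop_decrypt(cipher, key_seed=1234)
assert np.allclose(recovered, tx)
```

Without the key, the received constellation looks uniformly rotated at random on each subcarrier, which is what makes the ciphertext uninformative at the physical layer.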
In the last decade, a great deal of effort has been devoted to the development of biometrics-based authentication
systems. In this paper we propose a signature-based biometric authentication system, where watermarking
techniques are used to embed some dynamic signature features in a static representation of the signature itself,
stored either in a centralized database or in a smartcard. The user authentication can be performed either by using
some static features extracted from the acquired signature or by using the aforementioned static features together with the dynamic features embedded in the enrollment stage. A multi-level authentication system, capable of providing various degrees of security, is thus obtained. The proposed watermarking techniques are tailored to images with sharp edges, such as a signature picture, in order to obtain a robust embedding method while keeping the original structure of the host signal intact. Experimental results show the two different levels
of security which can be reached when either static features or both static and dynamic features are employed in the authentication process.
In the last decade digital watermarking techniques have been devised to answer the ever-growing need to protect the
intellectual property of digital still images, video sequences or audio from piracy attacks. Because of the proliferation of
watermarking algorithms and their applications, some benchmarks (e.g., Stirmark, Checkmark) have been created to help watermarkers compare their algorithms in terms of robustness against various attacks. However, equal attention has not been devoted to proposing benchmarks tailored to assessing watermark perceptual transparency. In this work, we study several watermarking techniques in terms of mark invisibility through subjective experiments. Moreover, we test the ability of several objective metrics, used in the literature mainly to evaluate distortions due to the coding process, to correlate with subjective scores. The conclusions drawn in the paper are supported by extensive experiments using several watermarking techniques and objective metrics.
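The kind of correlation analysis mentioned above is typically computed as below with SciPy; the MOS and metric values here are placeholders, not results from the paper, and PSNR is just one example of an objective metric.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical data: mean opinion scores from a subjective test and the values
# of an objective metric (e.g. PSNR) for the same set of watermarked images.
mos = np.array([4.2, 3.8, 2.5, 1.9, 3.1, 4.6, 2.2])
psnr = np.array([42.0, 39.5, 33.1, 30.2, 36.4, 44.8, 31.0])

plcc, _ = pearsonr(psnr, mos)      # linear correlation with subjective scores
srocc, _ = spearmanr(psnr, mos)    # rank-order (monotonicity) correlation
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}")
```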
What visually distinguishes a painting from a photograph is often the absence of texture and the sharp edges: in many
paintings, edges are sharper than in photographic images while textured areas contain less detail. Such artistic effects can
be achieved by filters that smooth textured areas while preserving, or enhancing, edges and corners. However, not all
edge-preserving smoothers are suitable for artistic imaging. This study presents a generalization of the well-known Kuwahara filter aimed at obtaining an artistic effect. Theoretical limitations of the Kuwahara filter are discussed and
solved by the new nonlinear operator proposed here. Experimental results show that the proposed operator produces
painting-like output images and is robust to corruption of the input image such as blurring. Comparison with existing
techniques shows situations where traditional edge-preserving smoothers commonly used for artistic imaging fail, while our approach produces good results.
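For reference, the classical Kuwahara filter that the paper generalizes can be sketched as follows (a direct, unoptimized implementation); the proposed nonlinear generalization itself is not reproduced here.

```python
import numpy as np

def kuwahara(image, radius=2):
    # Each pixel takes the mean of the quadrant of its neighbourhood having the
    # smallest variance, which smooths texture while keeping edges.
    h, w = image.shape
    padded = np.pad(image.astype(float), radius, mode='reflect')
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            py, px = y + radius, x + radius
            quadrants = [
                padded[py - radius:py + 1, px - radius:px + 1],
                padded[py - radius:py + 1, px:px + radius + 1],
                padded[py:py + radius + 1, px - radius:px + 1],
                padded[py:py + radius + 1, px:px + radius + 1],
            ]
            variances = [q.var() for q in quadrants]
            out[y, x] = quadrants[int(np.argmin(variances))].mean()
    return out
```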
Biometrics is the fastest-emerging technology for people identification and authentication. In contrast with traditional recognition approaches, biometric authentication relies on who a person is or what a person does, being based on strictly personal traits that are much more difficult to forget, lose, steal, copy, or forge than traditional data. In this paper, we focus on two vulnerable points of biometric systems: the database where the
templates are stored and the communication channel between the stored templates and the matcher. Specifically,
we propose a method, based on user-adaptive error correction codes, to achieve security and cancelability of the stored templates, applied to dynamic signature features. In more detail, the employed error correction code is
tailored to the intra-class variability of each user's signature features. This leads to an enhancement of the system
performance expressed in terms of false acceptance rate. Moreover, in order to avoid corruption or interception
of the stored templates in the transmission channels, we propose a scheme based on threshold cryptography:
the distribution of the certificate authority functionality among a number of nodes provides distributed, fault-tolerant,
and hierarchical key management services. Experimental results show the effectiveness of our approach,
when compared to traditional non-secure correlation-based classifiers.
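The user-adaptive part of the scheme, i.e. choosing the error-correction capability from each user's intra-class variability, can be sketched as follows; the mapping from variability to correction capability is an arbitrary placeholder, not the BCH parametrization of the paper, and the binarized feature representation is assumed.

```python
import numpy as np

def adaptive_correction_capability(enrollment_features, max_t=15):
    # enrollment_features: (n_signatures, n_bits) binarized feature vectors for one user.
    samples = np.asarray(enrollment_features, dtype=np.uint8)
    reference = (samples.mean(axis=0) >= 0.5).astype(np.uint8)   # majority reference template
    # Average Hamming distance of the enrollment samples from the reference.
    intra_class = np.mean([(s != reference).sum() for s in samples])
    # Users with less stable signatures get a code that corrects more errors.
    return reference, min(max_t, int(np.ceil(intra_class)))

enroll_set = np.random.randint(0, 2, size=(5, 64))   # 5 hypothetical enrollment signatures
reference, t = adaptive_correction_capability(enroll_set)
print(t)   # correction capability selected for this user
```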
The Canny edge detector is based on both local and global image analysis, embodied in the gradient computation and in the connectivity-based hysteresis thresholding, respectively. This contribution proposes a generalization of these ideas. Instead of the gradient magnitude alone, we consider several local statistics that take into account how much texture is
present around each pixel. This information is used in biologically inspired surround inhibition of texture. Global
analysis is generalized by introducing a long range connectivity analysis. We demonstrate the effectiveness of our
approach by extensive experimentation.
In this paper we propose a multiscale biologically motivated technique for contour detection by texture suppression.
Standard edge detectors react to all local luminance changes, irrespective of whether they are due to the contours of the objects represented in the scene or to natural texture such as grass, foliage, water, etc. Moreover, edges due to texture are often stronger than edges due to true contours. This implies that further processing is needed to discriminate true contours from texture edges. In this contribution we exploit the fact that, in a multiresolution analysis, only the edges due to object contours remain at coarser scales, while texture edges disappear. This is used in combination with
surround inhibition, a biologically motivated technique for texture suppression, in order to build a contour detector which
is insensitive to texture. The experimental results show that our approach is also robust to additive noise.
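The scale-gating intuition described above can be illustrated with the sketch below, assuming SciPy; the gating rule and thresholds are illustrative assumptions, and the surround-inhibition stage of the paper is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def multiscale_contours(image, fine_sigma=1.0, coarse_sigma=6.0, gate=0.2):
    # Keep fine-scale edges only where a coarse-scale edge response is also
    # present: texture edges tend to vanish at coarse scales, while object
    # contours survive.
    fine = gaussian_gradient_magnitude(image.astype(float), fine_sigma)
    coarse = gaussian_gradient_magnitude(image.astype(float), coarse_sigma)
    mask = coarse > gate * coarse.max()
    return fine * mask
```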
In this paper we propose a new approach for the synthesis of natural video textures. After generalizing the two-dimensional extended self-similar (ESS) model to the three-dimensional (3D) case, we generate samples of 3D-ESS fields on a discrete grid. The video texture is modeled according to the 3D-ESS model, and the autocorrelation functions (ACFs) of the increments of the original video texture, at different spatial and temporal scales, are estimated accordingly. A synthetic 3D-ESS field, whose increments have the same ACFs as the corresponding increments of the given prototype, is then generated using the incremental Fourier synthesis algorithm.
This paper considers the use of data hiding strategies for improved color image compression. Specifically, color information is piggybacked on the luminance component of the image in order to reduce the overall signal storage requirements. A practical wavelet-based data hiding scheme is proposed in which selected perceptually irrelevant luminance bands are replaced with perceptually salient chrominance components. Simulation results demonstrate the improvement in compression quality of the proposed scheme over SPIHT and JPEG at low bit rates. The novel technique also has the advantage that it can be used to further reduce the storage requirements of algorithms, such as SPIHT, that are optimized for grayscale image compression.
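The band-replacement idea can be sketched as below, assuming PyWavelets and even-sized, equally sized Y/Cb/Cr planes; the choice of which subbands to overwrite, the wavelet, and the scaling are illustrative assumptions, not the scheme's actual selection of perceptually irrelevant bands.

```python
import numpy as np
import pywt

def hide_chroma_in_luma(y, cb, cr, wavelet='haar'):
    # Replace two detail subbands of the luminance DWT with downsampled,
    # centred chrominance planes, so a single grayscale image carries colour.
    cA, (cH, cV, cD) = pywt.dwt2(y.astype(float), wavelet)
    cb_small = cb[::2, ::2].astype(float)[:cH.shape[0], :cH.shape[1]] - 128.0
    cr_small = cr[::2, ::2].astype(float)[:cV.shape[0], :cV.shape[1]] - 128.0
    return pywt.idwt2((cA, (cb_small, cr_small, cD)), wavelet)
```

The decoder would run the forward DWT on the decompressed grayscale image, read the chrominance back out of those subbands, and zero them before reconstructing the luminance.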
In this paper a novel wavelet-based robust blind watermarking scheme is presented. Taking into account the properties of the human visual system, the proposed scheme embeds the watermark into the wavelet coefficients representative of image areas with a high density of details. More specifically, an unconventional two-stage wavelet decomposition is used: after performing a 2D one-level wavelet decomposition, the high-frequency wavelet subbands are further wavelet decomposed, thus obtaining the coefficients where the mark is finally embedded. Experimental results demonstrate that this embedding strategy makes it possible to greatly increase the local ratio between the mark and the original image energies, thus leading to robustness even against severe image degradations without loss of transparency.
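The two-stage decomposition can be sketched as follows, assuming PyWavelets and image dimensions that are a multiple of four; the wavelet, strength, and subband choices are illustrative, not the scheme's exact parameters.

```python
import numpy as np
import pywt

def two_stage_embed(image, strength=4.0, seed=0):
    # First stage: one-level 2D DWT; second stage: DWT of each high-frequency
    # subband, where a bipolar pseudo-random mark is added.
    rng = np.random.default_rng(seed)
    cA, details = pywt.dwt2(image.astype(float), 'haar')
    marked_details = []
    for band in details:                                  # cH, cV, cD of stage one
        cA2, (cH2, cV2, cD2) = pywt.dwt2(band, 'haar')    # stage-two decomposition
        cH2 = cH2 + strength * rng.choice([-1.0, 1.0], size=cH2.shape)
        cV2 = cV2 + strength * rng.choice([-1.0, 1.0], size=cV2.shape)
        cD2 = cD2 + strength * rng.choice([-1.0, 1.0], size=cD2.shape)
        marked_details.append(pywt.idwt2((cA2, (cH2, cV2, cD2)), 'haar'))
    return pywt.idwt2((cA, tuple(marked_details)), 'haar')

img = np.random.rand(256, 256) * 255.0
marked = two_stage_embed(img)
```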
In this paper, the design of a video conference system is outlined. The proposed scheme allows the various attendees to access the multipoint video distribution center via channels of different nature and capacity without resorting to parallel banks of coders or multiple decoding-coding conversions. This approach makes it possible to obtain a scalability of the user profile that is not available in DCT-based video codecs, where the distribution of the encoded video to different attendees is performed using a single bit rate adjusted according to the capacity of the worst connection. In this contribution, a video coder based on a spatio-temporal multiresolution pyramid generated by a 3D separable wavelet transform is proposed. Experimental results show the capability of the proposed method.
In a multimedia framework, digital image sequences (videos) are by far the most demanding in terms of storage, search, browsing, and retrieval requirements. In order to reduce the computational burden associated with video browsing and retrieval, a video sequence is usually decomposed into several scenes (shots), each of which is characterized by means of some key frames. The proper selection of these key frames, i.e., the most representative frames in the scene, is of paramount importance for computational efficiency. In this contribution, a novel key frame extraction technique based on wavelet analysis is presented. Experimental results show the capability of the proposed algorithm to select key frames that properly summarize the shot.
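A minimal sketch of a wavelet-based key frame selector is given below, assuming PyWavelets; the frame signature (mean detail magnitudes) and the relative-change threshold are placeholders for the paper's actual criterion.

```python
import numpy as np
import pywt

def key_frames(frames, threshold=0.15):
    # Declare a new key frame whenever the wavelet-detail signature changes by
    # more than a relative threshold with respect to the last key frame.
    def signature(frame):
        _, (cH, cV, cD) = pywt.dwt2(frame.astype(float), 'haar')
        return np.array([np.abs(cH).mean(), np.abs(cV).mean(), np.abs(cD).mean()])

    keys = [0]
    last = signature(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        sig = signature(frame)
        if np.linalg.norm(sig - last) > threshold * (np.linalg.norm(last) + 1e-9):
            keys.append(i)
            last = sig
    return keys
```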