We revisit the well-known watermark detection problem, also known as one-bit watermarking, in the presence of an oracle attack. In the absence of an adversary, the design of the detector generally relies on probabilistic formulations (e.g., the Neyman-Pearson lemma) or on ad-hoc solutions. When there is an adversary trying to minimize the probability of correct detection, game-theoretic approaches are possible. However, they usually assume that the attacker cannot learn the secret parameters used in detection. This is no longer the case when the adversary launches an oracle-based attack, which turns out to be extremely effective. In this paper, we discuss how the detector can learn whether it is being subjected to such an attack and take proper measures. We present two approaches based on different attacker models. The first model is very general and makes minimal assumptions about the attacker's behavior. The second model is more specific, since it assumes that the oracle attack follows a well-defined path. In all cases, a few observations are sufficient for the watermark detector to understand whether an oracle attack is ongoing.
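A minimal sketch, not the paper's method, of how a detector might flag an oracle attack under the general attacker model: an attacker probing the decision boundary tends to submit many inputs whose detection statistic hovers near the threshold, so counting near-threshold queries in a sliding window gives a simple alarm. The linear-correlation statistic and all parameters below are illustrative assumptions.

import numpy as np
from collections import deque

class OracleAttackMonitor:
    """Wraps a linear-correlation detector and raises an alarm when too many
    recent queries fall suspiciously close to the decision threshold."""

    def __init__(self, carrier, threshold, margin=0.1, window=50, alarm_frac=0.8):
        self.carrier = carrier / np.linalg.norm(carrier)
        self.threshold = threshold
        self.margin = margin                   # distance to threshold deemed suspicious
        self.recent = deque(maxlen=window)     # sliding window of suspicion flags
        self.alarm_frac = alarm_frac           # fraction of flags that triggers the alarm

    def detect(self, x):
        stat = float(np.dot(x, self.carrier))
        self.recent.append(abs(stat - self.threshold) < self.margin)
        under_attack = (len(self.recent) == self.recent.maxlen
                        and np.mean(self.recent) > self.alarm_frac)
        return stat > self.threshold, under_attack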
In this paper we propose a method for perspective distortion correction of rectangular documents. This scheme
exploits the orthogonality of the document edges, allowing the aspect ratio of the original document to be recovered.
The results obtained after correcting the perspective of several document images captured with a mobile phone
are compared with those achieved by digitizing the same documents with several scanner models.
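A minimal sketch of the rectification step, assuming the four document corners have already been located and the aspect ratio already estimated (the paper's contribution, recovering the aspect ratio from edge orthogonality, is not reproduced here); the homography is solved with a standard direct linear transform and the warp uses nearest-neighbor sampling on a grayscale image.

import numpy as np

def homography(src, dst):
    """Direct linear transform: H maps each src point to its dst point."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)                      # null-space vector, up to scale

def rectify(img, corners, width, height):
    """Warp the quadrilateral `corners` (TL, TR, BR, BL) of a grayscale
    image onto an upright width x height rectangle."""
    dst = [(0, 0), (width - 1, 0), (width - 1, height - 1), (0, height - 1)]
    Hinv = np.linalg.inv(homography(corners, dst))   # map output pixels back to input
    out = np.zeros((height, width), dtype=img.dtype)
    for v in range(height):
        for u in range(width):
            x, y, w = Hinv @ np.array([u, v, 1.0])
            xi, yi = int(round(x / w)), int(round(y / w))
            if 0 <= yi < img.shape[0] and 0 <= xi < img.shape[1]:
                out[v, u] = img[yi, xi]
    return out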
One of the biggest challenges in universal steganalysis is the identification of reliable features that can be used
to detect stego images. In this paper, we present a steganalysis method using features calculated from a measure
that is invariant for cover images and is altered for stego images. We derive this measure, which is the ratio
of any two Fourier coefficients of the distribution of the DCT coefficients, by modeling the distribution of the
DCT coefficients as a Laplacian. We evaluate our steganalysis detector against three different pixel-domain
steganography techniques.
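A minimal sketch of the type of feature described, under the stated Laplacian model: the empirical characteristic function (the Fourier transform of the sample distribution) of the DCT coefficients is evaluated at two frequencies and their ratio is taken. The evaluation frequencies t1 and t2 are illustrative, not the paper's values.

import numpy as np

def ecf(samples, t):
    """Empirical characteristic function E[exp(j*t*X)] of the samples."""
    return np.mean(np.exp(1j * t * samples))

def cf_ratio_feature(dct_coeffs, t1=0.5, t2=1.0):
    # For a Laplacian(0, b) source the characteristic function is
    # 1 / (1 + (b*t)^2), so this ratio is a fixed function of b;
    # embedding perturbs the coefficients away from that prediction.
    return abs(ecf(dct_coeffs, t1)) / abs(ecf(dct_coeffs, t2))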
KEYWORDS: Distortion, Stochastic processes, Digital watermarking, Fourier transforms, Data hiding, Process modeling, Image quality, Image enhancement, Signal to noise ratio, Steganography
Desynchronization attacks based on fine resampling of a watermarked signal can be very effective from the point of view of degrading decoding performance. Nevertheless, the actual perceptual impact brought about by these attacks has not been considered in enough depth in previous research. In this work, we investigate geometric distortion measures which aim at being simultaneously general, related to human perception, and easy to compute in stochastic contexts. Our approach is based on combining the stochastic characterization of the sampling grid jitter applied by the attacker with empirically relevant perceptual measures. Using this procedure, we show that the variance of the sampling grid, which is a customary geometric distortion measure, has to be weighted in order to carry more accurate perceptual meaning. Indeed, the spectral characteristics of the geometric jitter signal have to be relevant from a perceptual point of view, as intuitively seen when comparing constant shift resampling and white jitter resampling. Finally, as the geometric jitter signal does not describe in full the resampled signal, we investigate more accurate approaches to producing a geometric distortion measure that takes into account the amplitude modifications due to resampling.
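A minimal sketch of the weighting idea, with an illustrative (not the paper's) perceptual weighting: the jitter's power spectrum is weighted before integration, so a constant shift, whose energy sits at DC, scores much lower than white jitter of the same variance.

import numpy as np

def weighted_jitter_distortion(jitter, weight):
    n = len(jitter)
    spec = np.abs(np.fft.fft(jitter)) ** 2 / n ** 2    # by Parseval, sums to the mean square
    freqs = np.abs(np.fft.fftfreq(n))                  # cycles per sample, in [0, 0.5]
    return float(np.sum(weight(freqs) * spec))

w = lambda f: f / (f + 0.05)          # toy weighting: DC shifts cost nothing, fast jitter costs most
n = 4096
shift = np.full(n, 0.5)               # constant half-sample shift, perceptually mild
white = 0.5 * np.random.randn(n)      # white jitter of comparable power, perceptually harsh
print(weighted_jitter_distortion(shift, w), weighted_jitter_distortion(white, w))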
KEYWORDS: Distortion, Multimedia, Digital watermarking, Computer programming, Data hiding, Signal detection, Computer security, Signal processing, Quantization, Information security
This work deals with practical and theoretical issues raised by the information-theoretic framework for authentication with distortion constraints proposed by Martinian et al.1 The optimal schemes proposed by these authors rely on random codes which bear a close resemblance to the dirty-paper random codes that show up in data hiding problems. On the one hand, this suggests implementing practical authentication methods with lattice codes, but in authentication scenarios plain lattice codes are too easy to tamper with: they must be randomized in order to hide their structure. One particular multimedia authentication method based on randomizing the scalar lattice was recently proposed by Fei et al.2 We reexamine this method here in the light of the aforementioned information-theoretic study, and we extend it to general lattices, thus providing a more general performance analysis for lattice-based authentication. We also propose improvements to Fei et al.'s method based on the analysis by Martinian et al., and we discuss some weaknesses of these methods and their solutions.
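A minimal sketch of the randomized scalar-lattice idea in this spirit, not Fei et al.'s actual scheme: the authenticator projects the signal onto a lattice shifted by a keyed pseudorandom dither, and the verifier checks that the received signal still lies close to that keyed codebook. The step and tolerance are illustrative.

import numpy as np

def keyed_dither(key, n, step):
    rng = np.random.default_rng(key)               # secret key seeds the dither
    return rng.uniform(-step / 2, step / 2, n)

def authenticate(x, key, step=4.0):
    d = keyed_dither(key, len(x), step)
    return np.round((x - d) / step) * step + d     # project onto the dithered lattice

def verify(y, key, step=4.0, tol=1.0):
    d = keyed_dither(key, len(y), step)
    nearest = np.round((y - d) / step) * step + d
    return float(np.sqrt(np.mean((y - nearest) ** 2))) < tol   # small residual => authentic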
KEYWORDS: Information security, Digital watermarking, Computer security, Distortion, Data hiding, Binary data, Numerical integration, Statistical analysis, Detection theory, Communication theory
This paper presents an information-theoretic analysis of security for data hiding methods based on spread
spectrum. The security is quantified by means of the mutual information between the observed watermarked
signals and the secret carrier (a.k.a. spreading vector) that conveys the watermark, a measure that can be used
to bound the number of observations needed to estimate the carrier up to a certain accuracy. The main results of
this paper make it possible to establish fundamental security limits for this kind of method and to draw conclusions about
the tradeoffs between robustness and security. Specifically, the impact of the dimensionality of the embedding
function, the host rejection, and the embedding distortion on the security level is investigated, and in some cases
explicitly quantified.
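An illustrative toy experiment, not the paper's analysis, showing the kind of leakage being quantified: in additive spread spectrum, averaging watermarked observations that embed the same symbol makes the host average out and the secret carrier emerge, with accuracy growing with the number of observations. The sizes and powers below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n = 64
s = rng.standard_normal(n)
s /= np.linalg.norm(s)                         # secret unit-norm carrier
for num_obs in (10, 100, 1000):
    hosts = rng.standard_normal((num_obs, n))  # toy host power (unrealistically low DWR)
    obs = hosts + s                            # y_i = x_i + s, same symbol each time
    est = obs.mean(axis=0)
    est /= np.linalg.norm(est)
    print(num_obs, float(np.dot(est, s)))      # correlation with the true carrier grows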
KEYWORDS: Digital watermarking, Sensors, Distortion, Signal detection, Signal to noise ratio, Binary data, Detection and tracking algorithms, Algorithm development, Information security, Signal processing
From December 15, 2005 to June 15, 2006 the watermarking community was challenged to remove the watermark
from 3 different 512×512 watermarked images while maximizing the Peak Signal to Noise Ratio (PSNR) measured
by comparing the watermarked signals with their attacked counterparts. This challenge, which bore the inviting
name of Break Our Watermarking System (BOWS),1 and was part of the activities of the European Network
of Excellence ECRYPT, had as its main objective to enlarge the current knowledge of attacks on watermarking systems; in this sense, BOWS was not aimed at checking the vulnerability of the specific chosen watermarking scheme against attacks, but at inquiring into the different strategies attackers would follow to achieve their target. In this paper the main results obtained by the authors when attacking the BOWS system are presented. The strategies followed can be divided into two different approaches: blind sensitivity attacks and exhaustive search of the secret key.
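A minimal sketch of the second strategy, exhaustive key search, against a stand-in detector (the BOWS detector and key space are not reproduced here): every candidate seed regenerates its watermark and is tested through the binary detection interface; a hit both reveals the key and allows the watermark to be subtracted.

import numpy as np

def oracle_detect(y, key, threshold=3.0):
    # Stand-in correlation detector: the real BOWS detector was a black box.
    w = np.random.default_rng(key).standard_normal(len(y))
    return np.dot(y, w) / np.sqrt(len(y)) > threshold

def search_key(y, key_space):
    """Return the candidate keys whose regenerated watermark the detector accepts."""
    return [k for k in key_space if oracle_detect(y, k)]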
KEYWORDS: Digital watermarking, Sensors, Binary data, Information security, Receivers, Signal detection, Modulation, Distortion, Detection and tracking algorithms, Statistical analysis
Zero-knowledge watermark detectors presented to date are based on a linear correlation between the asset features
and a given secret sequence. This detection function is vulnerable to sensitivity attacks, against which zero-knowledge provides no protection.
In this paper, an efficient zero-knowledge version of the Generalized Gaussian Maximum Likelihood (ML)
detector is introduced. The inherent robustness that this detector presents against sensitivity attacks, together
with the security provided by the zero-knowledge protocol that conceals the keys that could be used to remove
the watermark or to produce forged assets, results in a robust and secure protocol.
Two versions of the zero-knowledge detector are presented; the first one makes use of two new zero-knowledge
proofs for modulus and square root calculation; the second is an improved version applicable when the spreading
sequence is binary, and it has minimum communication complexity.
Completeness, soundness and zero-knowledge properties of the developed protocols are proved, and they are
compared with previous zero-knowledge watermark detection protocols in terms of receiver operating characteristic,
resistance to sensitivity attacks and communication complexity.
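For reference, a minimal sketch of the plain (non-zero-knowledge) Generalized Gaussian ML detection statistic that the protocols evaluate under encryption: for a generalized Gaussian host with shape c and scale alpha, the log-likelihood ratio between "watermark present" and "watermark absent" takes the form below; the values of c and alpha are illustrative.

import numpy as np

def gg_ml_statistic(y, w, c=0.8, alpha=10.0):
    """Log-likelihood ratio for an additive watermark w in generalized
    Gaussian host noise with pdf proportional to exp(-|x/alpha|^c)."""
    return float(np.sum(np.abs(y) ** c - np.abs(y - w) ** c) / alpha ** c)

# The watermark is declared present when gg_ml_statistic(y, w) exceeds a threshold.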
KEYWORDS: Digital watermarking, Computer security, Information security, Data hiding, Distortion, Monte Carlo methods, Data analysis, Optical spheres, Quantization, Statistical analysis
In this paper, the security of lattice-quantization data hiding is considered from a cryptanalytic point of view. Security in this family of methods is implemented by means of a pseudorandom dither signal which randomizes the codebook, preventing unauthorized embedding and/or decoding. However, the theoretical analysis shows that the observation of several watermarked signals can provide sufficient information to an attacker willing to estimate the dither signal, and the information leakage is quantified in different scenarios. The practical algorithms proposed in this paper show that such information leakage may be successfully exploited with manageable complexity, providing accurate estimates of the dither using a small number of observations. The aim of this work is to highlight the security weaknesses of lattice data hiding schemes whose security relies only on secret dithering.
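A minimal sketch of the flavor of estimator being studied, in the easiest case (scalar binary dither modulation without distortion compensation, known step): watermarked samples land exactly on centroids of the form d + k*step/2, so the observations modulo step/2 expose the dither modulo step/2, and a circular mean handles the wrap-around. All parameters are illustrative.

import numpy as np

def estimate_dither(y, step):
    period = step / 2.0                              # binary DM: centroids every step/2
    phase = 2 * np.pi * (y % period) / period        # map observations onto the circle
    ang = np.angle(np.mean(np.exp(1j * phase)))      # circular mean avoids wrap-around bias
    return (ang % (2 * np.pi)) * period / (2 * np.pi)

rng = np.random.default_rng(1)
step, d = 4.0, 1.3
x = 100 * rng.standard_normal(5000)                  # host
m = rng.integers(0, 2, 5000)                         # hidden bits
y = np.round((x - d - m * step / 2) / step) * step + d + m * step / 2
print(estimate_dither(y, step), d % (step / 2))      # both are approximately 1.3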
KEYWORDS: Digital watermarking, Linear filtering, Data hiding, Distortion, Fourier transforms, Quantization, Modulation, Optical filters, Electronic filtering, Signal to noise ratio
Rational Dither Modulation (RDM) is a high-rate data hiding method invariant to gain attacks. We propose an extension of RDM to construct a scheme that is robust to arbitrary linear time-invariant filtering attacks, as opposed to standard Dither Modulation (DM), which we show to be extremely sensitive to those attacks. The novel algorithm, named Discrete Fourier Transform RDM (DFT-RDM), basically works in the DFT domain, applying the RDM core to each frequency channel. We illustrate the feasibility of DFT-RDM by passing the watermarked signal through an implementation of a graphic equalizer: the average error probability is small enough to make it feasible to add a coding-with-interleaving layer to DFT-RDM. Two easily implementable improvements are discussed: windowing and spreading. In particular, the latter is shown to lead to very large gains.
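A minimal structural sketch of the per-channel idea (the paper's windowing and spreading are omitted, and many simplifications are made, e.g. one bit per frame repeated across channels and magnitude-only quantization): the signal is framed, taken to the DFT domain, and an RDM-style quantizer whose step inherits its gain from previously watermarked frames is applied independently in each frequency channel. All parameters are illustrative.

import numpy as np

def dft_rdm_embed(x, bits, frame=64, step=0.3, L=10):
    """x must have a length divisible by `frame`; bits holds one bit per frame."""
    X = np.fft.rfft(x.reshape(-1, frame), axis=1)        # rows are frames
    mag = np.abs(X)
    for ch in range(mag.shape[1]):                       # RDM core per frequency channel
        for k in range(mag.shape[0]):
            past = mag[max(0, k - L):k, ch]
            g = max(np.sqrt(np.mean(past ** 2)) if k else 1.0, 1e-9)
            b = bits[k]
            r = mag[k, ch] / g                           # gain-normalized magnitude
            q = step * np.round((r - b * step / 2) / step) + b * step / 2
            mag[k, ch] = g * abs(q)                      # re-apply the gain
    Y = mag * np.exp(1j * np.angle(X))                   # keep the original phases
    return np.fft.irfft(Y, n=frame, axis=1).reshape(-1)

An LTI filter scales each frequency channel by an approximately constant gain, which is exactly the fixed-gain attack RDM is invariant to; that is the rationale for working channel by channel.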
KEYWORDS: Digital watermarking, Sensors, Binary data, Signal detection, Distortion, Detection and tracking algorithms, Information security, Beryllium, Iterative methods, Quantization
Until now, the sensitivity attack was considered a serious threat to the robustness and security of spread-spectrum-based schemes, since it provides a practical method of removing watermarks with minimum attacking distortion. Nevertheless, it had not been used to tamper with other watermarking algorithms, such as those using side information. Furthermore, the sensitivity attack had never been used to obtain falsely watermarked contents, also known as forgeries. In this paper a new version of the sensitivity attack based on a general formulation is proposed; this method requires no knowledge of the detection function or of any other system parameter, but only the binary output of the detector, thus being suitable for attacking most known watermarking methods, both for tampering with watermarked signals and for obtaining forgeries. The soundness of this new approach is tested by empirical results.
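A minimal sketch of the oracle-only principle behind such attacks (a simplification, not the paper's algorithm): using nothing but the detector's binary answer, bisect between a signal inside the detection region and one outside it to land on the decision boundary; repeating this from many directions maps the boundary locally, which is the raw material for both watermark removal and forgery.

import numpy as np

def to_boundary(detect, inside, outside, iters=40):
    """Binary search along the segment from `inside` (detect=True) to
    `outside` (detect=False); returns a point on the decision boundary."""
    lo, hi = np.asarray(inside, float), np.asarray(outside, float)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if detect(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2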
A novel quantization-based data-hiding method, named Rational Dither Modulation (RDM), is presented. This method retains most of the simplicity of the Dither Modulation (DM) scheme, which is known to be vulnerable to fixed-gain attacks, but modifies DM in such a way that it becomes invariant to those attacks. The basic principle behind RDM is the use of an adaptive quantization step size at both embedder and decoder, which depends on previously watermarked samples. When the host signal is stationary, this makes the watermarked signal, under some mild conditions, asymptotically stationary. Mathematical tools, new to data hiding, are used to determine this stationary probability density function, which is later employed to analytically establish the performance of RDM in Gaussian channels. We also show that by properly increasing the memory of the system, it is possible to asymptotically approach the performance of conventional DM, while still keeping invariance to fixed-gain attacks. Moreover, RDM is compared to improved spread-spectrum (ISS) methods, showing that RDM achieves much higher rates for the same bit error probability. Our theoretical results are validated with experimental results, which also show a moderate resilience of RDM against slowly varying gain attacks. Perhaps the main advantage of RDM in comparison with other schemes designed to cope with fixed-gain attacks is its simplicity.
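A minimal sketch of the RDM core under stated assumptions (binary embedding, a root-mean-square gain function over the last L watermarked samples, no channel noise): both embedder and decoder normalize the current sample by a gain computed from previously watermarked samples, so a fixed-gain attack scales numerator and denominator alike and cancels out. Step size, memory and signal sizes are illustrative.

import numpy as np

def _gain(y, k, L):
    past = y[max(0, k - L):k]
    return max(np.sqrt(np.mean(past ** 2)) if k else 1.0, 1e-9)

def rdm_embed(x, bits, step=0.5, L=20):
    y = np.zeros(len(x))
    for k in range(len(x)):
        g = _gain(y, k, L)                       # gain from already-watermarked samples
        r = x[k] / g
        q = step * np.round((r - bits[k] * step / 2) / step) + bits[k] * step / 2
        y[k] = g * q                             # quantize the normalized sample
    return y

def rdm_decode(y, step=0.5, L=20):
    bits = np.empty(len(y), dtype=int)
    for k in range(len(y)):
        r = y[k] / _gain(y, k, L)                # same normalization as the embedder
        d0 = abs(r - step * np.round(r / step))
        d1 = abs(r - step * np.round((r - step / 2) / step) - step / 2)
        bits[k] = 0 if d0 < d1 else 1
    return bits

rng = np.random.default_rng(0)
x = 5 * rng.standard_normal(2000)
bits = rng.integers(0, 2, 2000)
y = rdm_embed(x, bits)
print(np.mean(rdm_decode(0.25 * y) != bits))     # fixed-gain attack: ≈0 errors (startup transient only)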
KEYWORDS: Probability theory, Digital watermarking, Error analysis, Distortion, Quantization, Data hiding, Statistical analysis, Binary data, Signal to noise ratio, Interference (communication)
The performance of quantization-based data hiding methods is commonly analyzed by assuming a flat probability density function for the host signal, i.e. uniform inside each quantization cell and with a variance large enough that all the centroids can be assumed to occur with equal probability. This paper fills a gap in watermarking theory, analyzing the exact performance of the Scalar Costa Scheme (SCS) facing additive Gaussian attacks when the former approximation is not valid, thus taking the host statistics into account. The analysis reveals that the true performance of such a scheme, for an optimal selection of its parameters and low watermark-to-noise ratios (WNRs), is never worse than that of classical spread-spectrum-based methods in terms of achievable rate and probability of error, contrary to what was thought so far. The reduction of SCS to a two-centroid problem allows the derivation of theoretical expressions which characterize its behavior for small WNRs, showing interesting connections with spread spectrum (SS) and the Improved Spread Spectrum (ISS) method. Furthermore, we show that, in contrast to the results reported until now, the use of pseudorandom dithering in SCS-based schemes can have a negative impact on performance. Performance losses are also reported for the case in which a modulo reduction is undertaken prior to decoding. The usefulness of these results is shown in the computation of the exact performance in projected domains.
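For concreteness, a minimal sketch of the scheme under analysis, the Scalar Costa Scheme (distortion-compensated scalar dither modulation); the step and the distortion-compensation parameter alpha below are illustrative.

import numpy as np

def scs_embed(x, bits, step=2.0, alpha=0.6):
    """Move each host sample a fraction alpha of the way to the centroid
    of the lattice coset selected by its bit."""
    centroids = step * np.round((x - bits * step / 2) / step) + bits * step / 2
    return x + alpha * (centroids - x)

def scs_decode(y, step=2.0):
    d0 = np.abs(y - step * np.round(y / step))                          # distance to coset 0
    d1 = np.abs(y - step * np.round((y - step / 2) / step) - step / 2)  # distance to coset 1
    return (d1 < d0).astype(int)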
KEYWORDS: Distortion, Computer programming, Digital watermarking, Signal to noise ratio, Data hiding, Quantization, Forward error correction, Information security, Binary data, Multimedia
Structured codes are known to be necessary in practical implementations of capacity-approaching "dirty paper schemes." In this paper we study the performance of a dirty paper technique recently proposed by Erez and ten Brink which, to the authors' knowledge, is applied to data hiding here for the first time, and we compare it with other existing approaches. Specifically, we compare it with conventional side-informed schemes previously used in data hiding, based on repetition and turbo coding. We show that a significant improvement can be achieved using Erez and ten Brink's proposal. We also study the considerations that have to be taken into account when these codes are used in data hiding, mainly related to perceptual issues.
In this paper we consider the problem of improving the performance of known-host-state (quantization-based) watermarking methods undergoing additive white Gaussian noise (AWGN) and uniform noise attacks. We analyze the underlying assumptions used in the design of the Dither Modulation (DM) and Distortion-Compensated Dither Modulation (DC-DM) methods, and question the optimality of high-rate uniform-quantizer-based embedding into real images from the point of view of the robustness of these methods to the selected additive attacks in terms of bit error rate probability. Motivated by the superior performance of the uniform deadzone quantizer (UDQ) over the uniform one in lossy transform-based source coding, we propose to replace the latter with the UDQ in a data-hiding setup designed according to the statistics of the host data, which are assumed to be independent and identically distributed Laplacian. Based on the suggested modifications, we obtain analytical expressions for the bit error rate probability of host-statistics-dependent quantization-based watermarking methods in AWGN and uniform noise attacking channels. Experimental results of computer simulations demonstrate significant performance enhancement of the modified DM and DC-DM watermarking techniques in comparison to the classical known-host-state schemes in terms of the selected performance measure.
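A minimal sketch of the modification's main ingredient, a uniform deadzone quantizer with midpoint reconstruction: the cell around zero is widened relative to the uniform step, matching the peak of the Laplacian host statistics. The step and deadzone half-width are illustrative.

import numpy as np

def udq(x, step=2.0, dz=2.0):
    """Uniform deadzone quantizer: values with |x| < dz collapse to zero;
    the remaining axis is split into uniform cells of width `step` with
    midpoint reconstruction."""
    x = np.asarray(x, dtype=float)
    k = np.floor((np.abs(x) - dz) / step)            # index of the outer cell
    recon = np.sign(x) * (dz + (k + 0.5) * step)
    return np.where(np.abs(x) < dz, 0.0, recon)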
KEYWORDS: Digital watermarking, Signal detection, Quantization, Information security, Sensors, Distortion, Radon, Detection and tracking algorithms, Matrices, Interference (communication)
In this paper, a novel method for detection in quantization-based watermarking is introduced. This method basically works by quantizing a projection of the host signal onto a subspace of smaller dimensionality. A theoretical performance analysis under AWGN and fixed-gain attacks is carried out, showing great improvements over traditional spread-spectrum-based methods operating under the same conditions of embedding distortion and attacking noise. A security analysis for oracle-like attacks is also carried out, proposing for the first time in the literature a sensitivity attack suited to quantization-based methods and showing a trade-off between security level and performance; nevertheless, this new method once again offers significant improvements in security over spread-spectrum-based methods facing the same kind of attacks.
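A minimal sketch of the project-then-quantize idea (one bit per host block; the secret projection vector, the step and the absence of distortion compensation are simplifying assumptions): the scalar projection of the block onto a secret direction is quantized to the bit's coset, and the correction is added back along that direction only.

import numpy as np

def qp_embed(x, bit, v, step=1.0):
    v = v / np.linalg.norm(v)
    p = float(np.dot(x, v))                          # scalar projection of the block
    q = step * np.round((p - bit * step / 2) / step) + bit * step / 2
    return x + (q - p) * v                           # correct the host along v only

def qp_decode(y, v, step=1.0):
    v = v / np.linalg.norm(v)
    p = float(np.dot(y, v))
    d0 = abs(p - step * np.round(p / step))
    d1 = abs(p - step * np.round((p - step / 2) / step) - step / 2)
    return 0 if d0 < d1 else 1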
The main goal of this study is the development of the additive worst-case attack (WCA) for quantization-based methods from two points of view: bit error rate probability and information-theoretic performance. Our analysis focuses on the practical scheme known as distortion-compensated dither modulation (DC-DM). From the mathematical point of view, the problem of WCA design with the probability of error as cost function can be formulated as the maximization of the average probability of error, subject to a constraint on the introduced distortion, for a given decoding rule. When mutual information is selected as cost function, WCA design establishes the global maximum of the optimization problem independently of the decoding process. Our results contribute to the common understanding and the development of fair benchmarks. They show that, within the class of additive noise attacks, the developed attack leads to a stronger performance decrease for the considered class of embedding techniques than the AWGN or uniform noise attacks.
KEYWORDS: Radon, Darmstadtium, Digital watermarking, Data hiding, Electronic filtering, Digital signal processing, Matrices, Computer security, Information security, Optimal filtering
A game-theoretic approach is introduced to quantify possible information leaks in spread-spectrum data hiding schemes. Such leaks may reveal to the attacker the set partitions and/or the pseudorandom sequence, which in most existing methods are key-dependent. The bit error probability is used as the payoff of the game. Since a closed-form strategy is not available in the general case, several simplifications leading to near-optimal strategies are also discussed. Finally, experimental results supporting our analysis are presented.
KEYWORDS: Expectation maximization algorithms, Data hiding, Distortion, Digital watermarking, Computer programming, Forward error correction, Lead, Reliability, Quantization, Signal to noise ratio
Distortion-Compensated Dither Modulation (DC-DM), also known as Scalar Costa Scheme (SCS), has been theoretically shown to be near-capacity achieving thanks to its use of side information at the encoder. In practice, channel coding is needed in conjunction with this quantization-based scheme in order to approach the achievable rate limit. The most powerful coding methods use iterative decoding (turbo codes, LDPC), but they require knowledge of the channel model. Previous works on the subject have assumed the latter to be known by the decoder. We investigate here the possibility of undertaking blind iterative decoding of DC-DM, using maximum likelihood estimation of the channel model within the decoding procedure. The unknown attack is assumed to be i.i.d. and additive. Before each iterative decoding step, a new optimal estimation of the attack model is made using the reliability information provided by the previous step. This new model is used for the next iterative decoding stage, and the procedure is repeated until convergence. We show that the iterative Expectation-Maximization algorithm is suitable for solving the problem posed by model estimation, as it can be conveniently intertwined with iterative decoding.
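A minimal sketch of the estimation half of the loop, with the attack simplified to a zero-mean Gaussian of unknown variance and the iterative decoder abstracted into the symbol posteriors it outputs: the M-step re-estimates the channel from posterior-weighted residuals, the E-step turns the updated model back into soft information, and the two alternate until convergence. The function names and the Gaussian restriction are assumptions for illustration.

import numpy as np

def m_step(received, centroids, posteriors):
    """posteriors[i, m] = current P(symbol m | sample i) from the decoder;
    returns the updated noise-variance estimate."""
    resid2 = (received[:, None] - centroids[None, :]) ** 2
    return float(np.sum(posteriors * resid2) / len(received))

def e_step(received, centroids, var):
    """Stand-in for one decoding pass: per-sample symbol posteriors under
    the current Gaussian channel model (a real decoder would also
    exploit the code constraints)."""
    lik = np.exp(-(received[:, None] - centroids[None, :]) ** 2 / (2 * var))
    return lik / lik.sum(axis=1, keepdims=True)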
A novel technique allowing secure transmission/storage of electronic documents in printed form is described. First, given a document to protect, an error-resilient "visibly encrypted" version is printed. Later, when the original document is to be recovered, the system scans the "visibly encrypted" document and decrypts it after asking for a secret key. Unfortunately, when a document is printed and scanned, the rescanned document may look similar to the original but will be distorted in the process. Therefore, to ensure reliable and high-rate transmission over the print-and-scan channel, a judicious theoretical model for characterizing the problem and providing reliable communication schemes is essential. The proposed method is based on Pulse Amplitude Modulation (PAM), using small square-shaped pulses and a Maximum Likelihood (ML) detector that is derived after estimating the distortions introduced by the print-and-scan channel. Furthermore, it is essential to employ synchronization techniques to correctly demodulate the printed pulses; in our case, we use an adaptive scheme that resembles the well-known phase-locked loops (PLLs). Finally, we discuss schemes that can make the bit stream resilient to transmission errors and how to combine them with cryptographic algorithms in order to produce a secure system.
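A minimal sketch of the modulation idea only (the pulse shaping, ML detector and PLL-like synchronizer described above are omitted): each bit selects the gray level of a small square pulse, and the pulses tile the page image. The pulse size and levels are illustrative.

import numpy as np

def pam_modulate(bits, pulse=4, levels=(64, 192)):
    side = int(np.ceil(np.sqrt(len(bits))))              # square layout of pulses
    img = np.full((side * pulse, side * pulse), 255, dtype=np.uint8)
    for i, b in enumerate(bits):
        r, c = divmod(i, side)
        img[r * pulse:(r + 1) * pulse, c * pulse:(c + 1) * pulse] = levels[b]
    return img

def pam_demodulate(img, nbits, pulse=4, thresh=128):
    side = img.shape[1] // pulse
    bits = []
    for i in range(nbits):
        r, c = divmod(i, side)
        block = img[r * pulse:(r + 1) * pulse, c * pulse:(c + 1) * pulse]
        bits.append(int(block.mean() > thresh))          # average gray level decides the bit
    return np.array(bits)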
Data hiding using quantization has proven to be an effective way of taking side information at the encoder into account. When quantizing more than one host signal sample there are two choices: (1) using the Cartesian product of several one-dimensional quantizers, as done in the Scalar Costa Scheme (SCS); or (2) performing vector quantization. The second option seems better, as rate-distortion theory states that higher-dimensional quantizers yield improved performance due to better sphere-packing properties. However, although the embedding problem does resemble that of rate-distortion, no attacks or host signal characteristics are usually considered when designing the quantizer in this way. We show that attacks worsen the performance of the a priori optimal lattice quantizer through a counterexample: the comparison under Gaussian distortion of hexagonal lattice quantization against bidimensional Distortion-Compensated Quantized Projection (DC-QP), a data hiding alternative based on quantizing a linear projection of the host signal. Apart from empirical comparisons, theoretical lower bounds on the probability of decoding error of hexagonal lattices under Gaussian host signal and attack are provided and compared to the already analyzed DC-QP method.
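A minimal sketch of nearest-neighbor quantization onto the hexagonal (A2) lattice used in the counterexample: round in lattice coordinates first, then refine over the neighboring lattice points, which suffices to find the true nearest neighbor in this lattice. The basis normalization is illustrative.

import numpy as np

B = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3) / 2]])            # columns generate the hexagonal lattice

def hex_quantize(p):
    """Nearest hexagonal-lattice point to the 2-D point p."""
    u = np.round(np.linalg.solve(B, p))          # naive rounding in lattice coordinates
    best, dmin = None, np.inf
    for di in (-1, 0, 1):                        # refine over the 3x3 neighborhood
        for dj in (-1, 0, 1):
            cand = B @ (u + np.array([di, dj]))
            d = float(np.linalg.norm(p - cand))
            if d < dmin:
                best, dmin = cand, d
    return best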
The performance of data hiding techniques in still images may be greatly improved by means of coding. In previous approaches, repetition coding was first used to obtain N identical Gaussian channels, over which block and convolutional codes were then applied. Knowing that repetition coding can be improved upon, we turn our attention to coding directly at the sample level. Bounds on both hard- and soft-decision decoding performance are provided, and the use of concatenated coding and turbo coding for this approach is explored.