KEYWORDS: Steganalysis, 3D modeling, Steganography, Detection and tracking algorithms, Image processing, Error analysis, Databases, RGB color model, Sensors, Linear filtering
Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, such as AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features, eliminating the need for a rich model. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than that of non-adaptive LSB matching.
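The sketch below illustrates one possible reading of this feature construction; it is not the authors' code. A simple high-pass residual is computed for each color channel, quantized and truncated, the samples are split by their position within the 2x2 Bayer cell, and a 3D co-occurrence (the joint histogram of the R, G, B residuals at the same pixel) is accumulated for each of the four positions. The kernel, quantization step q, and truncation threshold T are assumed values.

```python
import numpy as np
from scipy.ndimage import convolve

def cfa_split_cooccurrences(rgb, q=1.0, T=2):
    """rgb: HxWx3 float array. Returns four (2T+1)^3 histograms, one per Bayer-cell position."""
    kernel = np.array([[-1, 2, -1], [2, -4, 2], [-1, 2, -1]], dtype=float)   # simple high-pass residual
    res = np.stack([convolve(rgb[..., c], kernel, mode='nearest') for c in range(3)], axis=-1)
    res = np.clip(np.round(res / q), -T, T).astype(int) + T                  # quantize and truncate
    feats = []
    for dy in (0, 1):
        for dx in (0, 1):
            sub = res[dy::2, dx::2, :]                                       # one position of the Bayer cell
            hist = np.zeros((2 * T + 1,) * 3)
            np.add.at(hist, (sub[..., 0], sub[..., 1], sub[..., 2]), 1)      # 3D co-occurrence across channels
            feats.append(hist / hist.sum())
    return feats
```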
In this paper, we propose a method for estimation of camera lens distortion correction from a single image. Without relying on image EXIF, the method estimates the parameters of the correction by searching for a maximum energy of the so-called linear pattern introduced into the image during image acquisition prior to lens distortion correction. Potential applications of this technology include camera identification using sensor fingerprint, narrowing down the camera model, estimating the distance between the photographer and the subject, forgery detection, and improving the reliability of image steganalysis (detection of hidden data).
In camera identification using sensor fingerprint, it is absolutely essential that the fingerprint and the noise residual from a given test image be synchronized. If the signals are desynchronized due to a geometrical transformation, fingerprint detection becomes significantly more complicated. Besides constructing the detector in an invariant transform domain (which limits the type of the geometrical transformation), a more general approach is to maximize the generalized likelihood ratio with respect to the transform parameters, which requires a potentially expensive search and numerous resamplings of the entire image (or fingerprint). In this paper, we propose a measure that significantly reduces the search complexity by restricting the resampling from the entire image to a much smaller subset of the signal called the fingerprint digest. The technique can be applied to an arbitrary geometrical distortion that does not involve spatial shifts, such as digital zoom and non-linear lens-distortion correction.
Computational photography is quickly making its way from research labs to the market. Recently, camera manufacturers
started using in-camera lens-distortion correction of the captured image to give users a more powerful
zoom range in compact and affordable cameras. Since the distortion correction (barrel/pincushion) depends
on the zoom, it desynchronizes the pixel-to-pixel correspondence between images taken at two different focal
lengths. This poses a serious problem for digital forensic methods that utilize the concept of sensor fingerprint
(photo-response non-uniformity), such as "image ballistic" techniques that can match an image to a specific camera.
Such techniques may completely fail. This paper presents an extension of sensor-based camera identification
to images corrected for lens distortion. To reestablish synchronization between an image and the fingerprint,
we adopt a barrel distortion model and search for its parameter to maximize the detection statistic, which is
the peak-to-correlation energy (PCE) ratio. The proposed method is tested on hundreds of images from three compact
cameras to prove the viability of the approach and demonstrate its efficiency.
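The following is a minimal sketch of the search loop described above, under assumptions of our own: a one-parameter radial model r_u = r_d(1 + k r_d^2), bilinear resampling of the fingerprint, and a coarse grid over k. The function and parameter names are illustrative, not the authors'.

```python
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import map_coordinates

def pce(residual, fingerprint, exclude=5):
    """Peak-to-correlation-energy ratio of the circular cross-correlation surface."""
    xc = np.real(ifft2(fft2(residual) * np.conj(fft2(fingerprint))))
    r0, c0 = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    mask = np.ones_like(xc, dtype=bool)
    mask[max(r0 - exclude, 0):r0 + exclude + 1, max(c0 - exclude, 0):c0 + exclude + 1] = False
    return xc[r0, c0] ** 2 / np.mean(xc[mask] ** 2)

def apply_radial(signal, k):
    """Resample a 2-D signal with a one-parameter barrel/pincushion model."""
    h, w = signal.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot((x - cx) / cx, (y - cy) / cy)          # normalized radius
    scale = 1.0 + k * r ** 2
    return map_coordinates(signal, [cy + (y - cy) * scale, cx + (x - cx) * scale], order=1, mode='nearest')

def search_distortion(residual, fingerprint, k_grid=np.linspace(-0.2, 0.2, 41)):
    """Return (best PCE, best k) over a coarse parameter grid."""
    return max((pce(residual, apply_radial(fingerprint, k)), k) for k in k_grid)
```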
The goal of temporal forensics is to establish a temporal relationship between two or more pieces of evidence. In this paper, we focus on digital images and describe a method with which an analyst can estimate the acquisition time of an image given a set of other images from the same camera whose time ordering is known. This is achieved by first estimating the parameters of pixel defects, including their onsets, and then detecting their presence in the image under investigation. Both estimators are constructed using the maximum-likelihood principle. The accuracy and limitations of this approach are illustrated on experiments with three cameras. Forensic and law-enforcement analysts are expected to benefit from this technique in situations when the temporal data stored in the EXIF header is lost due to processing or editing images off-line or when the header cannot be trusted. Reliable methods for establishing temporal order between individual pieces of evidence can help reveal deception attempts of an adversary or a criminal. The causal relationship may also provide information about the whereabouts of the photographer.
In camera identification using sensor noise, the camera that took a given image can be determined with high certainty
by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal
counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto
an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim.
We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available
to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were
used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the so-called
"triangle test" that helps the victim reveal the attacker's malicious activity with high certainty under a wide range of
conditions. This test is then extended to the case when none of the images that the attacker used to create the fake
fingerprint are available to the victim but the victim has at least two forged images to analyze. We demonstrate the test's
performance experimentally and investigate its limitations. The conclusion that can be made from this study is
that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously
thought.
A sensor fingerprint is a unique noise-like pattern caused by slightly varying pixel dimensions and inhomogeneity of the
silicon wafer from which the sensor is made. The fingerprint can be used to prove that an image came from a specific
digital camera. The presence of a camera fingerprint in an image is usually established using a detector that evaluates
cross-correlation between the fingerprint and image noise. The complexity of the detector is thus proportional to the
number of pixels in the image. Although computing the detector statistic for a few-megapixel image takes several
seconds on a single-processor PC, the processing time becomes impractically large if a sizeable database of camera
fingerprints needs to be searched through. In this paper, we present a fast searching algorithm that utilizes special
"fingerprint digests" and sparse data structures to address several tasks that forensic analysts will find useful when
deploying camera identification from fingerprints in practice. In particular, we develop fast algorithms for finding if a
given fingerprint already resides in the database and for determining whether a given image was taken by a camera
whose fingerprint is in the database.
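A minimal sketch of the digest idea, with assumed names and an assumed digest size: only the k largest-magnitude fingerprint samples and their locations are stored, and a test residual is correlated with the fingerprint only at those locations, so each database entry costs a k-element dot product rather than a full-frame correlation.

```python
import numpy as np

def make_digest(fingerprint, k=10000):
    """Keep the k largest-magnitude fingerprint samples together with their positions."""
    flat = fingerprint.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def digest_correlation(residual, digest):
    """Correlate the test residual with the fingerprint restricted to the digest positions."""
    idx, values = digest
    a = residual.ravel()[idx]
    a = a - a.mean()
    b = values - values.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```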
This paper presents a large scale test of camera identification from sensor fingerprints. To overcome the problem of
acquiring a large number of cameras and taking the images, we utilized Flickr, an existing on-line image sharing site. In
our experiment, we tested over one million images spanning 6896 individual cameras covering 150 models. The
gathered data provides practical estimates of false acceptance and false rejection rates, giving us the opportunity to
compare the experimental data with theoretical estimates. We also test images against a database of fingerprints,
thus simulating the situation when a forensic analyst wants to determine whether a given image belongs to a database of already
known cameras. The experimental results set a lower bound on the performance and reveal several interesting new facts
about camera fingerprints and their impact on error analysis in practice. We believe that this study will be a valuable
reference for forensic investigators in their effort to use this method in court.
In this paper, we extend our camera identification technology based on sensor noise to a more general setting when
the image under investigation has been simultaneously cropped and scaled. The sensor fingerprint detection is
formulated using hypothesis testing as a two-channel problem and a detector is derived using the generalized
likelihood ratio test. A brute force search is proposed to find the scaling factor which is then refined in a detailed
search. The cropping parameters are determined from the maximum of the normalized cross-correlation between two
signals. The accuracy and limitations of the proposed technique are tested on images that underwent a wide range of
cropping and scaling, including images that were acquired by digital zoom. Additionally, we demonstrate that sensor
noise can be used as a template to reverse-engineer in-camera geometrical processing as well as recover from later
geometrical transformations, thus offering a possible application for re-synchronization in digital watermark detection.
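A rough sketch, with assumed names, of the search structure just described: the fingerprint is resampled to each candidate scaling ratio, the peak of a (globally) normalized circular cross-correlation with the image residual gives the cropping offset, and the ratio with the strongest peak is kept for the detailed search.

```python
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import zoom

def ncc_peak(residual, template):
    """Peak value and location of the circular cross-correlation (template zero-padded)."""
    padded = np.zeros_like(residual)
    padded[:template.shape[0], :template.shape[1]] = template - template.mean()
    a = residual - residual.mean()
    xc = np.real(ifft2(fft2(a) * np.conj(fft2(padded))))
    xc /= np.linalg.norm(a) * np.linalg.norm(padded) + 1e-12
    shift = np.unravel_index(np.argmax(xc), xc.shape)
    return xc[shift], shift                      # peak value and (row, col) cropping offset

def search_scaling(residual, fingerprint, ratios=np.linspace(0.5, 1.0, 26)):
    best = (-np.inf, None, None)
    for s in ratios:
        scaled = zoom(fingerprint, s, order=1)[:residual.shape[0], :residual.shape[1]]
        peak, shift = ncc_peak(residual, scaled)
        if peak > best[0]:
            best = (peak, s, shift)
    return best                                  # (peak NCC, scaling ratio, cropping offset)
```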
In this paper, we study the problem of identifying digital camera sensor from a printed picture. The sensor is identified
by proving the presence of its Photo-Response Non-Uniformity (PRNU) in the scanned picture using camera ID
methods robust to cropping and scaling. Two kinds of prints are studied. The first are postcard size (4" by 6") pictures
obtained from common commercial printing labs. These prints are always cropped to some degree. In the proposed
identification, a brute force search for the scaling ratio is deployed while the position of cropping is determined from
the cross-correlation surface. Detection success mostly depends on the picture content and the quality of the PRNU
estimate. Prints obtained using desktop printers form the second kind of pictures investigated in this paper. Their
identification is complicated by geometric distortion caused by imperfections in the paper feed. Removing this
distortion is part of the identification procedure. From experiments, we determine the range of conditions under which
reliable sensor identification is possible. The most influential factors in identifying the sensor from a printed picture
are the accuracy of angular alignment when scanning, printing quality, paper quality, and size of the printed picture.
KEYWORDS: Video, Video compression, Sensors, Optical sensors, Video surveillance, Digital imaging, Image compression, Internet, Video processing, Digital cameras
Photo-response non-uniformity (PRNU) of digital sensors was recently proposed [1] as a unique identification fingerprint
for digital cameras. The PRNU extracted from a specific image can be used to link it to the digital camera that took the
image. Because digital camcorders use the same imaging sensors, in this paper we extend this technique to the
identification of digital camcorders from video clips. We also investigate the problem of determining whether two video
clips came from the same camcorder and the problem of whether two differently transcoded versions of one movie came
from the same camcorder. The identification technique is a joint estimation and detection procedure consisting of two
steps: (1) estimation of PRNUs from video clips using the Maximum Likelihood Estimator and (2) detecting the presence
of PRNU using normalized cross-correlation. We anticipate this technology to be an essential tool for fighting piracy of
motion pictures. Experimental results demonstrate the reliability and generality of our approach.
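The two steps can be condensed into the following sketch, which assumes the standard multiplicative PRNU model I = I0 + I0*K + noise and a placeholder denoising filter F; the names are ours, not the paper's.

```python
import numpy as np

def estimate_prnu(frames, denoise):
    """Maximum-likelihood estimate of the PRNU factor K from many frames of one camcorder."""
    num, den = 0.0, 0.0
    for frame in frames:                      # frame: 2-D float array
        residual = frame - denoise(frame)     # W_i = I_i - F(I_i)
        num = num + residual * frame
        den = den + frame * frame
    return num / (den + 1e-12)                # K_hat = sum(W_i * I_i) / sum(I_i^2)

def normalized_correlation(a, b):
    """Detection statistic between a clip's aggregate residual and a reference PRNU."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```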
KEYWORDS: Cameras, Sensors, Optical filters, Error analysis, Denoising, Image compression, Statistical analysis, Signal detection, Digital imaging, Signal to noise ratio
In this paper, we revisit the problem of digital camera sensor identification using photo-response non-uniformity noise
(PRNU). Considering the identification task as a joint estimation and detection problem, we use a simplified model for
the sensor output and then derive a Maximum Likelihood estimator of the PRNU. The model is also used to design
optimal test statistics for detection of PRNU in a specific image. To estimate unknown shaping factors and determine
the distribution of the test statistics for the image-camera match, we construct a predictor of the test statistics on small
image blocks. This enables us to obtain conservative estimates of false rejection rates for each image under Neyman-
Pearson testing. We also point out a few pitfalls in camera identification using PRNU and ways to overcome them by
preprocessing the estimated PRNU before identification.
We present a new approach to detection of forgeries in digital images under the assumption that either the camera that took the image is available or other images taken by that camera are available. Our method is based on detecting the presence of the camera pattern noise, which is a unique stochastic characteristic of imaging sensors, in individual regions in the image. The forged region is determined as the one that lacks the pattern noise. The presence of the noise is established using correlation as in detection of spread spectrum watermarks. We propose two approaches. In the first one, the user selects an area for integrity verification. The second method attempts to automatically determine the forged area without assuming any a priori knowledge. The methods are tested both on examples of real forgeries and on non-forged images. We also investigate how further image processing applied to the forged image, such as lossy compression or filtering, influences our ability to verify image integrity.
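A simplified sketch of the block-wise test, with assumed block size and step: the noise residual of the questioned image and the camera pattern noise (both assumed precomputed) are correlated window by window, and windows with unusually low correlation are flagged as candidate forged regions.

```python
import numpy as np

def correlation_map(residual, pattern, block=128, step=64):
    """Sliding-window correlation between image noise residual and camera pattern noise."""
    rows = list(range(0, residual.shape[0] - block + 1, step))
    cols = list(range(0, residual.shape[1] - block + 1, step))
    cmap = np.zeros((len(rows), len(cols)))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            a = residual[r:r + block, c:c + block]
            b = pattern[r:r + block, c:c + block]
            a, b = a - a.mean(), b - b.mean()
            cmap[i, j] = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return cmap        # low values indicate regions lacking the pattern noise
```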
Construction of steganographic schemes in which the sender and the receiver do not share the knowledge about the location of embedding changes requires wet paper codes. Steganography with non-shared selection channels empowers the sender, who is now able to embed secret data by utilizing arbitrary side information, including a high-resolution version of the cover object (perturbed quantization steganography), local properties of the cover (adaptive steganography), and even pure randomness, e.g., coin flipping, for public key steganography. In this paper, we propose a new approach to wet paper codes using random linear codes of small codimension that at the same time improves the embedding efficiency, i.e., the number of message bits embedded per embedding change. We describe a practical algorithm, test its performance experimentally, and compare the results to theoretically achievable bounds. We point out an interesting ripple phenomenon that should be taken into account by practitioners. The proposed coding method can be modularly combined with most steganographic schemes to allow them to use non-shared selection channels and, at the same time, improve their security by decreasing the number of embedding changes.
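The sketch below shows only the basic wet paper coding step with an unstructured random matrix and plain GF(2) Gaussian elimination; it does not reproduce the paper's small-codimension construction or its embedding-efficiency gains, and all names are ours. The recipient, who knows the matrix D (derived from the stego key) but not the dry set, reads the message simply as D*y mod 2.

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A v = b over GF(2) by Gaussian elimination; return one solution or None."""
    A, b = (A % 2).astype(np.uint8), (b % 2).astype(np.uint8)
    q, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        hits = np.nonzero(A[row:, col])[0]
        if hits.size == 0:
            continue
        pr = row + hits[0]
        A[[row, pr]], b[[row, pr]] = A[[pr, row]], b[[pr, row]]     # bring the pivot row up
        for r in np.nonzero(A[:, col])[0]:
            if r != row:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
        if row == q:
            break
    if b[row:].any():                     # leftover rows are all-zero in A: inconsistent system
        return None
    v = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivots):      # free variables stay 0
        v[col] = b[r]
    return v

def wet_paper_embed(x, dry, message, rng):
    """x: cover bits (0/1), dry: indices the sender may change, message: q bits."""
    q, n = len(message), len(x)
    D = rng.integers(0, 2, size=(q, n), dtype=np.uint8)        # in practice derived from the stego key
    rhs = (np.asarray(message, dtype=int) + D.astype(int) @ np.asarray(x, dtype=int)) % 2
    v_dry = gf2_solve(D[:, dry], rhs)                          # changes confined to dry positions
    if v_dry is None:
        raise ValueError("payload too long for the available dry positions")
    y = np.array(x, dtype=np.uint8)
    y[dry] ^= v_dry
    return y                                                   # recipient recovers: D @ y mod 2 == message
```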
KEYWORDS: Cameras, Steganalysis, Digital imaging, Databases, Image compression, Wavelets, Distortion, Steganography, Quantization, Signal to noise ratio
The contribution of this paper is two-fold. First, we describe an improved version of a blind steganalysis method previously proposed by Holotyak et al. and compare it to current state-of-the-art blind steganalyzers. The features for the blind classifier are calculated in the wavelet domain as higher-order absolute moments of the noise residual. This method clearly shows the benefit of calculating the features from the noise residual because it increases the features' sensitivity to embedding, which leads to improved detection results. Second, using this detection engine, we attempt to answer some fundamental questions, such as "how much can we improve the reliability of steganalysis given certain a priori side-information about the image source?" Moreover, we experimentally compare the security of three steganographic schemes for images stored in a raster format - (1) pseudo-random ±1 embedding using ternary matrix embedding, (2) spatially adaptive ternary ±1 embedding, and (3) perturbed quantization while converting a 16-bit per channel image to an 8-bit gray scale image.
This paper is an extension of our work on stego key search for JPEG images published at EI SPIE in 2004. We provide a more general theoretical description of the methodology, apply our approach to the spatial domain, and add a method that determines the stego key from multiple images. We show that in the spatial domain the stego key search can be made significantly more efficient by working with the noise component of the image obtained using a denoising filter. The technique is tested on the LSB embedding paradigm and on a special case of embedding by noise adding (the ±1 embedding). The stego key search can be performed for a wide class of steganographic techniques even for sizes of secret message well below those detectable using known methods. The proposed strategy may prove useful to forensic analysts and law enforcement.
In this paper, we propose a new method for estimating the number of embedding changes for non-adaptive ±K embedding in images. The method uses a high-pass FIR filter and then recovers an approximate message length using a Maximum Likelihood Estimator on those stego image segments where the filtered samples can be modeled using a stationary Generalized Gaussian random process. It is shown that for images with a low noise level, such as decompressed JPEG images, this method can accurately estimate the number of embedding changes even for K=1 and for embedding rates as low as 0.2 bits per pixel. Although for raw, never compressed images the message length estimate is less accurate, when used as a scalar parameter for a classifier detecting the presence of ±K steganography, the proposed method gave us relatively reliable results for embedding rates as low as 0.5 bits per pixel.
Hiding data in binary images can facilitate the authentication and annotation of important document images in the digital domain. A representative approach is to first identify pixels whose binary color can be flipped without introducing noticeable artifacts, and then embed one bit in each non-overlapping block by adjusting the flippable pixel values to obtain the desired block parity. The distribution of these flippable pixels is highly uneven across the image, which is handled by random shuffling in the literature. In this paper, we revisit the problem of data embedding for binary images and investigate the incorporation of a recent steganography framework known as wet paper coding to improve the embedding capacity. The wet paper codes naturally handle the uneven embedding capacity through randomized projections. In contrast to the previous approach, where only a small portion of the flippable pixels are actually utilized in the embedding, the wet paper codes allow for a high utilization of pixels that have a high flippability score for embedding, thus giving a significantly higher embedding capacity than the previous approach. The performance of the proposed technique is demonstrated on several representative images. We also analyze the perceptual impact and capacity-robustness relation of the new approach.
In this paper, we show that the communication channel known as writing in memory with defective cells is a relevant information-theoretical model for a specific case of passive warden steganography when the sender embeds a secret message into a subset C of the cover object X without sharing the selection channel C with the recipient. The set C could be arbitrary, determined by the sender from the cover object using a deterministic, pseudo-random, or a truly random process. We call this steganography “writing on wet paper” and realize it using low-density random linear codes with the encoding step based on the LT process. The importance of writing on wet paper for covert communication is discussed within the context of adaptive steganography and perturbed quantization steganography. Heuristic arguments supported by tests using blind steganalysis indicate that the wet paper steganography provides improved steganographic security for embedding in JPEG images and is less vulnerable to attacks when compared to existing methods with shared selection channels.
In this paper, we demonstrate that it is possible to use the sensor’s pattern noise for digital camera identification from images. The pattern noise is extracted from the images using a wavelet-based denoising filter. For each camera under investigation, we first determine its reference noise, which serves as a unique identification fingerprint. This could be done using the process of flat-fielding, if we have the camera in possession, or by averaging the noise obtained from multiple images, which is the option taken in this paper. To identify the camera from a given image, we consider the reference pattern noise as a high-frequency spread spectrum watermark, whose presence in the image is established using a correlation detector. Using this approach, we were able to identify the correct camera out of 9 cameras without a single misclassification for several hundred images. Furthermore, it is possible to perform reliable identification even from images that underwent subsequent JPEG compression and/or resizing. These claims are supported by experiments on 9 different cameras including two cameras of exactly the same model (Olympus C765).
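A bare-bones sketch of this pipeline, with a generic Wiener filter standing in for the paper's wavelet-based denoising filter: the reference pattern is the average of noise residuals from several images of the camera, and the detector thresholds the correlation between that reference and the residual of the questioned image.

```python
import numpy as np
from scipy.signal import wiener

def noise_residual(image):
    """Image minus a denoised version of itself (a stand-in for the wavelet filter)."""
    return image.astype(float) - wiener(image.astype(float), mysize=3)

def reference_pattern(images):
    """Approximate the camera's reference noise by averaging residuals of its images."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def detect(test_image, reference, threshold):
    a = noise_residual(test_image); a -= a.mean()
    b = reference - reference.mean()
    corr = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return corr > threshold       # threshold set for a target false-alarm rate
```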
Steganalysis in the wide sense consists of first identifying suspicious objects and then performing further analysis, during which
we try to identify the steganographic scheme used for embedding, recover the stego key, and finally extract the
hidden message. In this paper, we present a methodology for identifying the stego key in key-dependent
steganographic schemes. Previous approaches for stego key search were exhaustive searches looking for some
recognizable structure (e.g., header) in the extracted bit-stream. However, if the message is encrypted, the search
will become much more expensive because for each stego key, all possible encryption keys would have to be tested.
In this paper, we show that for a very wide range of steganographic schemes, the complexity of the stego key search
is determined only by the size of the stego key space and is independent of the encryption algorithm. The correct
stego key can be determined through an exhaustive stego key search by quantifying statistical properties of samples
along portions of the embedding path. The correct stego key is then identified by an outlier sample distribution.
Although the search methodology is applicable to virtually all steganographic schemes, in this paper we focus on
JPEG steganography. Search techniques for spatial-domain steganographic schemes are treated in our upcoming paper.
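An illustrative sketch of this search strategy, with hypothetical helpers and a deliberately simple statistic of our own: for every candidate key, the key-dependent embedding path is generated, the samples visited along a prefix of the path (the part most likely to carry the payload) are compared against the remaining samples, and the correct key is expected to produce an outlier score.

```python
import numpy as np

def chi2_distance(h1, h2):
    h1, h2 = h1 / max(h1.sum(), 1), h2 / max(h2.sum(), 1)
    d = h1 + h2
    return float(np.sum((h1 - h2) ** 2 / np.where(d > 0, d, 1)))

def stego_key_search(samples, candidate_keys, path_from_key, prefix=5000, bins=64):
    """samples: 1-D array of quantized DCT coefficients; path_from_key(key, n) is assumed
    to return the key-dependent pseudo-random embedding path (a permutation of range(n))."""
    lo, hi = float(samples.min()), float(samples.max())
    scores = {}
    for key in candidate_keys:
        path = path_from_key(key, samples.size)
        h_in, _ = np.histogram(samples[path[:prefix]], bins=bins, range=(lo, hi))
        h_out, _ = np.histogram(samples[path[prefix:]], bins=bins, range=(lo, hi))
        scores[key] = chi2_distance(h_in, h_out)      # the correct key should stand out
    return max(scores, key=scores.get)
```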
In this paper, we present a new method for estimating the secret message length of bit-streams embedded using Least
Significant Bit (LSB) embedding at random pixel positions. We introduce the concept of a weighted stego image and
then formulate the problem of determining the unknown message length as a simple optimization problem. The
methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of
the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant
estimator accuracy analysis using statistical image models.
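A minimal sketch of a weighted-stego (WS) style estimator under assumptions of our own (a four-neighbor cover-pixel predictor and variance-based weights); it illustrates the idea of the weighted stego image rather than reproducing the paper's derivation or its accuracy analysis.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def ws_estimate(stego):
    """Estimate the relative message length of randomly spread LSB replacement."""
    s = stego.astype(float)
    flipped = s + 1 - 2 * (s % 2)                         # each pixel with its LSB flipped
    kernel = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / 4.0
    predicted = convolve(s, kernel, mode='nearest')       # local estimate of the cover pixel
    local_mean = uniform_filter(s, 3)
    local_var = uniform_filter(s ** 2, 3) - local_mean ** 2
    w = 1.0 / (5.0 + local_var)                           # favour flat regions
    w /= w.sum()
    beta = np.sum(w * (s - flipped) * (s - predicted))    # estimated change rate
    return 2.0 * beta                                     # relative message length (bits per pixel)
```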
In lossless watermarking, it is possible to completely remove the embedding distortion from the watermarked image
and recover an exact copy of the original unwatermarked image. Lossless watermarks have found applications in fragile
authentication, integrity protection, and metadata embedding. They are especially important for medical and military
images. Frequently, lossless embedding disproportionately increases the file size for image formats that use lossless
compression (RLE BMP, GIF, JPEG, PNG, etc.). This partially negates the advantage of embedding information as
opposed to appending it. In this paper, we introduce lossless watermarking techniques that preserve the file size. The
formats addressed are RLE encoded bitmaps and sequentially encoded JPEG images. The lossless embedding for the
RLE BMP format is designed in such a manner as to guarantee that message extraction and original-image
reconstruction are insensitive to different RLE encoders, image palette reshuffling, and the removal or addition of
duplicate palette colors. The performance of both methods is demonstrated on test images by showing the capacity,
distortion, and embedding rate. The proposed methods are the first examples of lossless embedding methods that
preserve the file size for image formats that use lossless compression.
In this paper, we present a general methodology for developing attacks on steganographic systems for the JPEG image format. The detection starts by decompressing the JPEG stego image, geometrically distorting it (e.g., by cropping), and recompressing. Because the geometrical distortion breaks the quantized structure of DCT coefficients during recompression, the distorted/recompressed image will have many macroscopic statistics approximately equal to those of the cover image. We choose a macroscopic statistic S that also changes predictably with the embedded message length, which allows us to estimate the unknown message length by comparing the values of S for the stego image and the cropped/recompressed stego image. The details of this detection methodology are explained using the F5 algorithm and OutGuess. The accuracy of the message length estimate is demonstrated on test images for both algorithms. Finally, we identify two limitations of the proposed approach and show how they can be overcome to obtain accurate detection in every case. The paper closes by outlining a condition that must be satisfied by all secure high-capacity steganographic algorithms for JPEGs.
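The calibration step can be summarized by the following schematic sketch, in which `decompress`, `compress`, and `statistic` are placeholders: `decompress` turns the stego JPEG into a pixel array, `compress` re-compresses with the stego image's quantization table, and `statistic` is the scheme-specific macroscopic quantity S whose dependence on the payload is known for F5 or OutGuess.

```python
def calibrated_pair(stego_jpeg, decompress, compress, statistic, crop=4):
    """Return S of the stego image and S of its cropped/recompressed (calibrated) version."""
    pixels = decompress(stego_jpeg)          # spatial-domain stego image
    cropped = pixels[crop:, crop:]           # shift the 8x8 grid to break the quantized structure
    reference = compress(cropped)            # approximates the cover image's macroscopic statistics
    return statistic(stego_jpeg), statistic(reference)

# The message length is then estimated by inverting the scheme-specific relation between
# the embedded payload and the difference of the two values of S returned above.
```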
In this paper, we describe a new higher-order steganalytic method called Pairs Analysis for detection of secret messages embedded in digital images. Although the approach is in principle applicable to many different steganographic methods as well as image formats, it is ideally suited to 8-bit images, such as GIF images, where message bits are embedded in LSBs of indices to an ordered palette. The EzStego algorithm with random message spread and optimized palette order is used as an embedding archetype on which we demonstrate Pairs Analysis and compare its performance with the chi-square attacks and our previously proposed RS steganalysis. Pairs Analysis enables more reliable and accurate message detection than previous methods. The method was tested on databases of GIF images of natural scenes, cartoons, and computer-generated images. The experiments indicate that the relative steganographic capacity of the EzStego algorithm with random message spread is less than 10% of the total image capacity (0.1 bits per pixel).
In this paper, we present a new steganographic paradigm for digital images in raster formats. Message bits are embedded in the cover image by adding a weak noise signal with a specified but arbitrary probabilistic distribution. This embedding mechanism provides the user with the flexibility to mask the embedding distortion as noise generated by a particular image acquisition device. This type of embedding will lead to more secure schemes because now the attacker must distinguish statistical anomalies that might be created by the embedding process from those introduced during the image acquisition itself. Unlike previously proposed schemes, this new approach, which we call stochastic modulation, achieves oblivious data transfer without using noise extraction algorithms or error correction. This leads to higher capacity (up to 0.8 bits per pixel) and a convenient and simple implementation with low embedding and extraction complexity. Most importantly, because the embedding noise can have arbitrary properties that approximate a given device noise, the new method offers better security than existing methods. At the end of this paper, we extend stochastic modulation to a content-dependent device noise and we also discuss possible attacks on this scheme based on the most recent advances in steganalysis.
Lossless data embedding has the property that the distortion due to embedding can be completely removed from the watermarked image without accessing any side channel. This can be a very important property whenever serious concerns over image quality and artifact visibility arise, such as for medical images (due to legal reasons), for military images, or for images used as evidence in court that may be viewed after enhancement and zooming. We formulate two general methodologies for lossless embedding that can be applied to images as well as any other digital objects, including video, audio, and other structures with redundancy. We use the general principles as guidelines for designing efficient, simple, and high-capacity lossless embedding methods for the three most common image format paradigms: raw, uncompressed formats (BMP); lossy, transform-based formats (JPEG); and palette formats (GIF, PNG). We close the paper with examples of how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of non-trivial tasks, including elegant lossless authentication using fragile watermarks. Note on terminology: some authors coined the terms erasable, removable, reversible, invertible, and distortion-free for the same concept.
Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous-looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis: visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography, bit replacement or bit substitution, is inherently insecure with safe capacities far smaller than previously thought.
In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark, enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression using a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding images that have been originally stored in the JPEG format as cover images for spatial-domain steganography.
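A rough sketch of the block-compatibility test for a single 8x8 block, under our own simplifications: the candidate quantized coefficients are obtained by forward DCT, division by the quantization matrix, and rounding, and the block is declared compatible if decompressing that candidate reproduces it. The paper's full test additionally handles rounding and clipping edge cases by examining neighboring quantized values.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_compatible(block, Q, tol=0):
    """block: 8x8 pixel array; Q: 8x8 quantization matrix of the original JPEG."""
    coeffs = dctn(block.astype(float) - 128.0, norm='ortho')      # JPEG-style forward DCT
    candidate = np.round(coeffs / Q)                              # nearest quantized representative
    rec = np.clip(np.round(idctn(candidate * Q, norm='ortho') + 128.0), 0, 255)
    return np.max(np.abs(rec - block)) <= tol                     # incompatible blocks reveal changes
```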
In this paper, we present two new methods for authentication of digital images using invertible watermarking. While virtually all watermarking schemes introduce some small amount of non-invertible distortion in the image, the new methods are invertible in the sense that, if the image is deemed authentic, the distortion due to authentication can be removed to obtain the original image data. Two techniques are proposed: one is based on robust spatial additive watermarks combined with modulo addition and the second one on lossless compression and encryption of bit-planes. Both techniques provide cryptographic strength in verifying the image integrity in the sense that the probability of making a modification to the image that will not be detected can be directly related to a secure cryptographic element, such as a hash function. The second technique can be generalized to data types other than bitmap images.
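The first technique relies on the fact that modulo addition is exactly invertible; the toy sketch below shows only that mechanism (the paper's scheme additionally makes the added pattern a robust spread-spectrum watermark carrying a cryptographic hash of the image). Names are ours.

```python
import numpy as np

def embed_modulo(image, pattern):
    """Add a key-generated watermark pattern with modulo-256 arithmetic (invertible)."""
    return ((image.astype(np.int32) + pattern) % 256).astype(np.uint8)

def remove_modulo(watermarked, pattern):
    """Restore the exact original pixels once the image has been verified as authentic."""
    return ((watermarked.astype(np.int32) - pattern) % 256).astype(np.uint8)
```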
In this paper, we describe new and improved attacks on the authentication scheme previously proposed by Yeung and Mintzer. Previous attacks assumed that the binary watermark logo inserted in an image for the purposes of authentication was known. Here we remove that assumption and show how the scheme is still vulnerable, even if the binary logo is not known but the attacker has access to multiple images that have been watermarked with the same secret key and contain the same (but unknown) logo. We present two attacks. The first attack infers the secret watermark insertion function and the binary logo, given multiple images authenticated with the same key and containing the same logo. We show that a very good approximation to the logo and watermark insertion function can be constructed using as few as two images. With color images, one needs many more images; nevertheless, the attack is still feasible. The second attack we present, which we call the 'collage attack', is a variation of the Holliman-Memon counterfeiting attack. The proposed variation does not require knowledge of the watermark logo and produces counterfeits of superior quality by means of a suitable dithering process that we develop.
KEYWORDS: Digital watermarking, Distortion, Image compression, Digital filtering, Visibility, Modulation, Image processing, Digital image processing, Image filtering, Information security
A methodology for comparing the robustness of watermarking techniques is proposed. The techniques are first modified into a standard form to make comparison possible. The watermark strength is adjusted for each technique so that a certain perceptual measure of image distortion based on spatial masking is below a predetermined value. Each watermarking technique is further modified into two versions for embedding watermarks consisting of one bit and 60 bits, respectively. Finally, each detection algorithm is adjusted so that the probability of false detections is below a specified threshold. A family of typical image distortions is selected and parametrized by a distortion parameter. For the one-bit watermark, the robustness with respect to each image distortion is evaluated by increasing the distortion parameter and registering at which value the watermark bit is lost. The bit error rate is used for evaluating the robustness of the 60-bit watermark. The methodology is explained with two frequency-based spread spectrum techniques. The paper closes with an attempt to introduce a formal definition of robustness.
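The evaluation loop for the one-bit watermark can be sketched as follows, with hypothetical callables `distort(image, p)` and `detect(image)`; the 60-bit case would instead record the bit error rate at each parameter value.

```python
def robustness_threshold(watermarked, distort, detect, params):
    """Return the first distortion-parameter value at which the watermark bit is lost."""
    for p in params:                          # params assumed ordered from mild to severe
        if not detect(distort(watermarked, p)):
            return p
    return None                               # the watermark survived the whole tested range
```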