A watermark embedding scheme has been developed to insert a watermark with the maximum signal strength for a user-selectable visibility constraint. By altering the watermark strength and direction to meet the visibility constraint, the maximum watermark signal for a particular image is inserted. The method consists of iterative embedding software, a full-color human visibility model, and a watermark signal strength metric.
The iterative approach is based on the intersections between hyper-planes, which represent the visibility and signal models, and the edges of a hyper-volume, which represents the output device visibility and gamut constraints. The signal metric is based on the specific watermark modulation and detection methods and can be adapted to other modulation approaches. The visibility model takes into account the different contrast sensitivity functions of the human eye to the L, a, and b channels, as well as masking due to image content.
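The abstract does not give the model or the iteration in equation form. Purely as an illustration of the "maximum strength under a visibility constraint" idea, the sketch below bisects a single global gain until a stand-in visibility metric (a channel-weighted maximum difference, not the authors' full-color model) reaches a user-selected threshold, while clipping the output to the 8-bit gamut. The function names, weights, and bracket values are assumptions for this sketch only.

```python
import numpy as np

def visibility(host, marked, weights=(1.0, 2.0, 2.0)):
    """Stand-in visibility metric: maximum channel-weighted absolute difference.
    (Assumption: the authors' model uses per-channel contrast sensitivity and masking.)"""
    diff = np.abs(marked.astype(float) - host.astype(float))
    return max(w * diff[..., c].max() for c, w in enumerate(weights))

def embed_max_strength(host, pattern, target_visibility, iters=20):
    """Bisect a global watermark gain so the visibility metric sits at the
    user-selected constraint while the output stays inside the 8-bit gamut."""
    lo, hi = 0.0, 64.0                                  # illustrative gain bracket
    for _ in range(iters):
        gain = 0.5 * (lo + hi)
        marked = np.clip(host.astype(float) + gain * pattern, 0, 255)
        if visibility(host, marked) > target_visibility:
            hi = gain                                   # too visible: back off
        else:
            lo = gain                                   # still acceptable: push harder
    return np.clip(host.astype(float) + lo * pattern, 0, 255).astype(np.uint8)

# Usage with a random host and a unit-strength watermark pattern.
host = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
pattern = np.random.choice([-1.0, 1.0], size=host.shape)
marked = embed_max_strength(host, pattern, target_visibility=8.0)
```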
KEYWORDS: Digital watermarking, Signal to noise ratio, Video, Video compression, Image compression, Sensors, Visibility, Cesium, Image processing, Error analysis
A persistent challenge with imagery captured from Unmanned Aerial Systems (UAS) is the loss of critical information such as associated sensor and geospatial data and prioritized routing information (i.e., metadata) required to use the imagery effectively. Often, there is a loss of synchronization between data and imagery. The losses usually arise from the use of separate channels for metadata, or from the multiple imagery formats employed in processing and distribution workflows that do not preserve the data. To contend with these issues and provide another layer of authentication, digital watermarks were inserted at the point of capture within a tactical UAS. Implementation challenges included traditional requirements surrounding image fidelity, performance, payload size, and robustness; application requirements such as power consumption, digital-to-analog conversion, and a fixed-bandwidth downlink; and a standards-based approach to geospatial exploitation through a service-oriented architecture (SOA) for extracting and mapping mission-critical metadata from the video stream. The authors capture the application requirements, the implementation trade-offs, and, ultimately, an analysis of the selected algorithms. A brief summary of results is provided from multiple test flights onboard the SkySeer test UAS in support of Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance applications within Network Centric Warfare and Future Combat Systems doctrine.
KEYWORDS: Digital watermarking, Data hiding, Signal processing, Distortion, Data communications, Algorithm development, Databases, Computer programming, Data conversion, Computer simulations
A high-capacity, data-hiding algorithm that lets the user embed a large amount of data in a digital audio signal is presented in this paper. The algorithm also lets the user restore the original digital audio from the watermarked digital audio after retrieving the hidden data. The hidden information can be used to authenticate the audio, communicate copyright information, facilitate audio database indexing and information retrieval without degrading the quality of the original audio signal, or enhance the information content of the audio. It also allows secret communication between two parties over a digital communication link. The proposed algorithm is based on a generalized, reversible, integer transform, which calculates the average and pair-wise differences between the elements of a vector composed from the audio samples. The watermark is embedded into the pair-wise difference coefficients of selected vectors by replacing their least significant bits (LSB) with watermark bits. Most of these coefficients are shifted left by one bit before replacing their LSB. The vectors are carefully selected such that they remain identifiable after embedding and they do not suffer from overflow or underflow after embedding. To ensure reversibility, the locations of the shifted coefficients and the original LSBs are appended to the payload. Simulation results of the algorithm and its performance are presented and discussed in the paper.
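The abstract describes a generalized reversible integer transform over vectors of audio samples. The following is a minimal two-sample sketch assuming classic pairwise difference expansion (average preserved, difference shifted left by one bit and its LSB replaced by the watermark bit); it illustrates the reversibility and the overflow test, but not the paper's full vector transform or its location-map handling.

```python
import numpy as np

def embed_pair(x, y, bit):
    """Embed one bit into a 16-bit sample pair via difference expansion.
    Returns (x', y') or None if the expanded pair would overflow."""
    a = (x + y) // 2          # integer average (preserved by the transform)
    d = x - y                 # pairwise difference
    d2 = 2 * d + bit          # shift left one bit, put watermark bit in the LSB
    x2 = a + (d2 + 1) // 2
    y2 = a - d2 // 2
    if -32768 <= x2 <= 32767 and -32768 <= y2 <= 32767:
        return x2, y2
    return None               # unexpandable pair: a real scheme records this

def extract_pair(x2, y2):
    """Recover the hidden bit and restore the original pair exactly."""
    a = (x2 + y2) // 2
    d2 = x2 - y2
    bit = d2 & 1
    d = d2 >> 1               # undo the one-bit left shift (arithmetic shift)
    x = a + (d + 1) // 2
    y = a - d // 2
    return bit, x, y

x, y = 1200, 1203
x2, y2 = embed_pair(x, y, 1)
assert extract_pair(x2, y2) == (1, x, y)   # bit recovered, audio restored exactly
```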
Lattice codes have been evaluated in the watermarking literature based on their behavior in the presence of additive noise. In contrast with spread spectrum methods, the host image does not interfere with the watermark. Such evaluation is appropriate for simulating the effects of operations like compression, which are effectively noise-like for lattice codes. Lattice codes do not perform nearly as well when processing that fundamentally alters the characteristics of the host image is applied. One type of modification that is particularly detrimental to lattice codes involves changing the amplitude of the host. In a previous paper on the subject, we described a modification to lattice codes that makes them invariant to a large class of amplitude modifications, namely those that are order preserving. However, we have shown that in its pure form the modification leads to problems with embedding distortion and noise immunity that are image dependent. In the current work we discuss an improved method for handling the aforementioned problem. Specifically, the set of quantization bins used for the lattice code is governed by a finite state machine. The finite-state-machine approach to quantization bin assignment requires side information in order for the quantizers to be recovered exactly. Our paper describes in detail two methods for recovery when such an approach is used.
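As context for the amplitude-sensitivity discussion, this is a minimal sketch of the baseline scalar lattice embedder (quantization index modulation / dither modulation) and its minimum-distance detector. The step size, noise level, and names are illustrative; the finite-state-machine bin assignment and its two recovery methods are not reproduced here.

```python
import numpy as np

DELTA = 8.0                        # quantizer step (illustrative)

def qim_embed(samples, bits, delta=DELTA):
    """Scalar QIM (dither modulation): bit b selects the coset of a
    uniform quantizer offset by b * delta / 2."""
    samples = np.asarray(samples, dtype=float)
    dither = np.asarray(bits, dtype=float) * delta / 2.0
    return np.round((samples - dither) / delta) * delta + dither

def qim_detect(received, delta=DELTA):
    """Minimum-distance decoding: pick the coset whose nearest
    reconstruction point is closest to each received sample."""
    bits = []
    for r in np.asarray(received, dtype=float):
        d0 = abs(r - np.round(r / delta) * delta)
        d1 = abs(r - (np.round((r - delta / 2) / delta) * delta + delta / 2))
        bits.append(0 if d0 <= d1 else 1)
    return np.array(bits)

host = np.random.uniform(0, 255, 16)
bits = np.random.randint(0, 2, 16)
marked = qim_embed(host, bits)
noisy = marked + np.random.uniform(-1.5, 1.5, 16)   # mild additive noise: survives
assert np.array_equal(qim_detect(noisy), bits)
# A gain change (e.g., noisy * 1.2) shifts samples off the lattice and breaks
# detection, which is the amplitude sensitivity the modified codes address.
```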
The many recent publications that focus upon watermarking with side information at the embedder emphasize the fact that this side information can be used to improve practical capacity. Many of the proposed algorithms use quantization to carry out the embedding process. Although these techniques are both powerful and simple, recovering the original quantization levels, and hence the embedded data, can be difficult if the image amplitude is modified. In our paper, we present a method that is similar to the existing class of quantization-based techniques, but differs in that we first apply a projection to the image data that is invariant to a class of amplitude modifications that can be described as order preserving. Watermark reading and embedding are performed with respect to the projected data rather than the original data. Not surprisingly, by requiring invariance to amplitude modifications we increase our vulnerability to other types of distortion. Uniform quantization of the projected data generally leads to non-uniform quantization of the original data, which in turn can cause greater susceptibility to additive noise. Later in the paper we describe a strategy that results in an effective compromise between invariance to amplitude modification and noise susceptibility.
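The abstract does not specify the projection. One simple example of a statistic invariant to order-preserving amplitude modifications is the normalized rank of each sample, used below only to illustrate the invariance and a quantization-based read. The key position, step size, and the use of ranks at all are assumptions for this sketch, not the authors' construction; embedding would additionally have to move sample values so the projected statistic lands in the desired quantization bin.

```python
import numpy as np

def rank_projection(block):
    """Map samples to normalized ranks: any order-preserving amplitude change
    (gamma, contrast stretch, ...) leaves this projection unchanged."""
    order = np.argsort(np.argsort(block, kind="stable"), kind="stable")
    return order / (block.size - 1)

def read_bit(block, delta=0.1):
    """Illustrative detector: scalar quantization of a rank-based statistic
    (here, the projected value at a fixed 'key' position)."""
    v = rank_projection(block)[0]            # key position chosen for the sketch
    return int(np.round(v / delta)) % 2      # even/odd quantization bin -> bit

block = np.array([37.0, 12.0, 88.0, 55.0, 70.0, 21.0, 94.0, 3.0])
gamma = 255.0 * (block / 255.0) ** 0.6       # order-preserving amplitude change
assert np.allclose(rank_projection(block), rank_projection(gamma))
assert read_bit(block) == read_bit(gamma)    # the read is unaffected by the change
```

Note that uniform bins in the rank domain correspond to non-uniform, image-dependent bins in the amplitude domain, which is exactly the added noise susceptibility the abstract describes.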
KEYWORDS: Digital watermarking, Cameras, Point spread functions, CCD cameras, Sensors, Digital cameras, Signal to noise ratio, CMOS cameras, Image processing, Amplifiers
Many articles covering novel techniques, theoretical studies, attacks, and analyses have been published recently in the field of digital watermarking. In the interest of expanding commercial markets and applications of watermarking, this paper is part of a series of papers from Digimarc on practical issues associated with commercial watermarking applications. In this paper we address several practical issues associated with the use of web cameras for watermark detection. In addition to the obvious issues of resolution and sensitivity, we explore issues related to the tradeoff between gain and integration time to improve sensitivity, and the effects of fixed pattern noise, time-variant noise, and lens and Bayer pattern distortions. Furthermore, the ability to control (or at least determine) camera characteristics, including white balance, interpolation, and gain, has proven to be critical to successful application of watermark readers based on web cameras. These issues and tradeoffs are examined with respect to typical spatial-domain and transform-domain watermarking approaches.
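As a rough, textbook-level illustration (not drawn from the paper) of the gain versus integration-time tradeoff, the sketch below evaluates a simple sensor noise model with assumed shot, dark, read, and ADC noise parameters for two ways of brightening the same dim scene to the same output level.

```python
import numpy as np

def snr_db(flux_e_per_s, t_s, gain, read_e=6.0, dark_e_per_s=2.0, adc_e=2.0):
    """Illustrative sensor SNR model (all quantities in electrons):
    signal and sensor noise scale with analog gain; ADC noise does not."""
    signal = flux_e_per_s * t_s
    sensor_var = signal + dark_e_per_s * t_s + read_e ** 2   # shot + dark + read
    noise = np.sqrt(gain ** 2 * sensor_var + adc_e ** 2)
    return 20 * np.log10(gain * signal / noise)

# Two ways to brighten a dim scene by 4x: higher gain vs. longer exposure.
print(snr_db(flux_e_per_s=200, t_s=0.04, gain=4.0))   # short exposure, high gain
print(snr_db(flux_e_per_s=200, t_s=0.16, gain=1.0))   # long exposure, unity gain
# Longer integration collects more photons and genuinely raises SNR (at the
# cost of motion blur); gain mostly rescales signal and noise together.
```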
A common application of digital watermarking is to encode a small packet of information in an image, such as some form of identification that can be represented as a bit string. One class of digital watermarking techniques employs spread spectrum like methods where each bit is redundantly encoded throughout the image in order to mitigate bit errors. We typically require that all bits be recovered with high reliability to effectively read the watermark. In many watermarking applications, however, straightforward application of spread spectrum techniques is not enough for reliable watermark recovery. We therefore resort to additional techniques, such as error correction coding. As proposed by M. Kutter [1], M-ary modulation is one such technique for decreasing the probability of error in watermark recovery. It was shown in [1] that M-ary modulation techniques could provide performance improvement over binary modulation, but direct comparisons to systems using error correction codes were not made. In this paper we examine the comparative performance of watermarking systems using M-ary modulation and watermarking systems using binary modulation combined with various forms of error correction. We do so in a framework that addresses both computational complexity and performance issues.
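This is a minimal sketch of the M-ary side of that comparison, assuming straightforward correlation detection: log2(M) bits select one of M shared pseudo-random patterns, and the detector picks the pattern with the largest correlation. The pattern generation, sizes, and the noise standing in for host and channel interference are illustrative, and no error correction coding is included.

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 1024, 16                       # chips per symbol, alphabet size (4 bits)

def mary_embed(symbol, strength=1.0):
    """M-ary modulation: one of M pseudo-random patterns carries log2(M) bits."""
    patterns = rng.standard_normal((M, N))        # shared via a key in practice
    return strength * patterns[symbol], patterns

def mary_detect(received, patterns):
    """Pick the pattern with the largest correlation against the received signal."""
    return int(np.argmax(patterns @ received))

symbol = 11                                        # carries 4 bits at once
signal, patterns = mary_embed(symbol)
received = signal + rng.standard_normal(N) * 0.5   # host/channel modeled as noise
assert mary_detect(received, patterns) == symbol

# Binary modulation, by comparison, gives each bit its own +/- pattern and
# thresholds each correlation at zero; M-ary trades more correlations per
# symbol (M of them) for a lower error probability at the same signal energy.
```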
KEYWORDS: Digital watermarking, Edge detection, Signal detection, Detection and tracking algorithms, Visual system, Visual process modeling, Image processing, Visualization, Visibility, Algorithm development
In digital watermarking, one aim is to insert the maximum possible watermark signal without significantly affecting image quality. Advantage can be taken of the masking effect of the eye to increase the signal strength in busy or high-contrast image areas. The application of such a human visual system model to watermarking has been proposed by several authors. However, if a simple contrast measurement is used, an objectionable ringing effect may become visible on connected directional edges. In this paper we describe a method which distinguishes between connected directional edges and high-frequency textured areas, which have no preferred edge direction. The watermark gain on connected directional edges is suppressed, while the gain in high-contrast textures is increased. Overall, such a procedure accommodates a more robust watermark for the same level of visual degradation, because the watermark is attenuated where it is truly objectionable and enhanced where it is not. Furthermore, some authors propose that the magnitude of a signal which can be imperceptibly placed in the presence of a reference signal can be described by a non-linear mapping of magnitude to local contrast. In this paper we derive such a mapping function experimentally by determining the point of just noticeable difference between a reference image and the reference image with the watermark added.
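The abstract does not state how connected directional edges are distinguished from texture. One common discriminator, assumed here purely for illustration, is the coherence of the locally averaged gradient structure tensor: high contrast with one dominant orientation suggests a connected edge (gain suppressed), while high contrast without a dominant orientation suggests texture (gain boosted). The thresholds are arbitrary, and the linear contrast term stands in for the experimentally derived non-linear mapping the abstract mentions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def watermark_gain_map(image, win=7, base_gain=1.0):
    """Illustrative local gain control: structure-tensor coherence separates
    connected directional edges (gain suppressed) from undirected
    high-contrast texture (gain boosted)."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    # Locally averaged structure tensor components over a win x win window.
    jxx = uniform_filter(gx * gx, win)
    jyy = uniform_filter(gy * gy, win)
    jxy = uniform_filter(gx * gy, win)
    contrast = np.sqrt(jxx + jyy)                          # local activity (masking)
    coherence = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / (jxx + jyy + 1e-9)
    gain = base_gain * (1.0 + 0.02 * contrast)             # busier areas mask more
    gain = np.where(coherence > 0.8, 0.5 * gain, 1.3 * gain)  # edge vs. texture
    return gain

# Usage: marked = host + watermark_gain_map(host) * watermark_pattern
```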