Photo Response Non-Uniformity (PRNU) based source attribution has proven to be a powerful technique in multimedia forensics. The increasing prominence of this technique, combined with its introduction as evidence in court, has brought with it the need for it to withstand anti-forensics. Although common signal processing operations and geometrical transformations have been considered as potential attacks on this technique, new adversarial settings that curtail its performance are constantly being introduced. Starting with an overview of proposed approaches to counter PRNU based source attribution, this work introduces photographic panoramas as one such approach and discusses how to defend against it.
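The abstract does not spell out the attribution test itself. For context, a minimal sketch of the standard correlation-based PRNU attribution procedure it builds on is given below; this is an illustration only, using NumPy, where `denoise` stands for any denoising filter supplied by the reader (an assumption, not the paper's specific choice):

```python
import numpy as np

def noise_residual(image, denoise):
    """Noise residual W = I - F(I), where F is any denoising filter."""
    return image - denoise(image)

def estimate_fingerprint(images, denoise):
    """PRNU estimate K ~ sum(W_i * I_i) / sum(I_i^2) from several images of the camera."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img in images:
        img = img.astype(np.float64)
        w = noise_residual(img, denoise)
        num += w * img
        den += img * img
    return num / (den + 1e-12)

def correlation(test_image, fingerprint, denoise):
    """Normalized correlation between the test residual and I*K."""
    img = test_image.astype(np.float64)
    w = noise_residual(img, denoise) - 0.0
    s = img * fingerprint
    w = w - w.mean()
    s = s - s.mean()
    return float((w * s).sum() / (np.linalg.norm(w) * np.linalg.norm(s) + 1e-12))

# Attribution: accept the candidate camera if the correlation exceeds a
# threshold chosen from the false-acceptance rate measured on other cameras.
```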
An important issue concerning real-world deployment of steganalysis systems is the computational cost of acquiring the features used in building steganalyzers. The conventional approach to steganalyzer design crucially assumes that all features required for steganalysis have to be computed in advance. However, as the number of features used by typical steganalyzers grows into the thousands and timing constraints are imposed on how fast a decision has to be made, this approach becomes impractical. To address this problem, we focus on the machine learning aspect of steganalyzer design and introduce a decision tree based approach to steganalysis. The proposed steganalyzer can minimize the average computational cost of making a steganalysis decision while still maintaining detection accuracy. To demonstrate the potential of this approach, a series of experiments is performed on well-known steganography and steganalysis techniques.
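The key idea, computing features only when the decision path actually needs them, can be sketched as follows. This is a minimal illustration, not the paper's learned tree; the node structure, thresholds, and feature extractors are hypothetical and would in practice be chosen to trade off extraction cost against detection accuracy:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    # Internal node: lazily evaluate one feature extractor and branch on it.
    feature: Optional[Callable[[object], float]] = None
    threshold: float = 0.0
    left: Optional["Node"] = None    # followed when feature value <= threshold
    right: Optional["Node"] = None   # followed when feature value >  threshold
    label: Optional[str] = None      # set on leaves: "cover" or "stego"

def classify(image, node: Node, cost_log=None):
    """Traverse the tree, extracting only the features on the root-to-leaf path.

    Features that lie off the decision path are never computed, which is what
    keeps the *average* feature-extraction cost low.
    """
    while node.label is None:
        value = node.feature(image)          # computed on demand
        if cost_log is not None:
            cost_log.append(node.feature.__name__)
        node = node.left if value <= node.threshold else node.right
    return node.label

# Example: a cheap feature at the root resolves most images; an expensive
# feature is extracted only for the ambiguous ones sent to the right subtree.
```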
Several promising techniques have recently been proposed to bind an image or video to its source acquisition device. These techniques have been intensively studied to address performance issues, but the computational efficiency aspect has not been given due consideration. Considering very large databases, in this paper we focus on the efficiency of the sensor fingerprint based source device identification technique. We propose a novel scheme based on tree structured vector quantization that offers a logarithmic improvement in search complexity compared to the conventional approach. To demonstrate the effectiveness of the proposed approach, several experiments are conducted. Our results show that the proposed scheme achieves a major improvement in search time.
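To make the logarithmic-search claim concrete, here is a minimal sketch of a tree-structured vector quantizer over flattened camera fingerprints. It is an assumption-laden illustration (binary k-means splits via scikit-learn, a simple inner-product score inside the leaf); the paper's actual codebook construction and matching statistic may differ:

```python
import numpy as np
from sklearn.cluster import KMeans

class TSVQNode:
    """Binary tree over camera fingerprints (one row vector per camera)."""

    def __init__(self, fingerprints, camera_ids, leaf_size=4):
        self.fingerprints = fingerprints
        self.camera_ids = camera_ids
        self.left = self.right = None
        self.centroids = None
        if len(camera_ids) > leaf_size:
            km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(fingerprints)
            if len(set(km.labels_)) == 2:        # split only if both clusters are non-empty
                self.centroids = km.cluster_centers_
                for side, mask in (("left", km.labels_ == 0), ("right", km.labels_ == 1)):
                    child = TSVQNode(fingerprints[mask],
                                     [c for c, m in zip(camera_ids, mask) if m],
                                     leaf_size)
                    setattr(self, side, child)

    def query(self, residual):
        """Descend toward the nearer centroid; brute-force only inside the leaf."""
        node = self
        while node.centroids is not None:
            d = np.linalg.norm(node.centroids - residual, axis=1)
            node = node.left if d[0] <= d[1] else node.right
        scores = node.fingerprints @ residual     # correlation-style score
        return node.camera_ids[int(np.argmax(scores))]
```

With a roughly balanced tree, each query touches O(log N) internal nodes plus one small leaf, instead of correlating the test residual against all N fingerprints.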
We investigate the robustness of PRNU-based camera identification in cases where the test images have been passed through common image processing operations. We address the issue of whether current camera identification systems remain effective in the presence of a nontechnical, mildly evasive photographer who attempts circumvention using only standard and/or freely available software. We study denoising, recompression, and out-of-camera demosaicing.
The increasing use of biometrics in different environments presents new challenges. Most importantly, biometric data are irreplaceable. Therefore, storing biometric templates, which are unique to each user, entails significant security risks. In this paper, we propose a geometric transformation for securing minutiae based fingerprint templates. The proposed scheme employs a robust one-way transformation that maps the geometric configuration of the minutiae points into a fixed-length code vector. This representation enables efficient alignment and reliable matching. Experiments are conducted by applying the proposed method to synthetically generated minutiae point sets. Preliminary results show that the proposed scheme provides a simple and effective solution to the template security problem of minutiae based fingerprints.
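The abstract does not describe the transformation itself. As an illustration of the general idea of mapping a variable-size minutiae set to a fixed-length, alignment-free code, the sketch below builds a joint histogram of pairwise distances and relative orientations; this is a hypothetical stand-in, not the paper's transform, and the bin counts and normalization are assumptions:

```python
import numpy as np

def minutiae_code(minutiae, n_dist_bins=16, n_angle_bins=8, max_dist=300.0):
    """Map a minutiae set {(x, y, theta)} to a fixed-length vector.

    A joint histogram of pairwise distances and orientation differences is
    invariant to translation and rotation and hard to invert back to the
    original point set, which is the spirit of a one-way template transform.
    """
    pts = np.asarray([(m[0], m[1]) for m in minutiae], dtype=np.float64)
    ang = np.asarray([m[2] for m in minutiae], dtype=np.float64)
    hist = np.zeros((n_dist_bins, n_angle_bins))
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = np.linalg.norm(pts[i] - pts[j])
            a = abs(ang[i] - ang[j]) % np.pi            # relative orientation
            di = min(int(d / max_dist * n_dist_bins), n_dist_bins - 1)
            ai = min(int(a / np.pi * n_angle_bins), n_angle_bins - 1)
            hist[di, ai] += 1
    vec = hist.flatten()
    return vec / (np.linalg.norm(vec) + 1e-12)          # fixed-length code

def match_score(code_a, code_b):
    """Cosine similarity between two codes; threshold to accept or reject."""
    return float(np.dot(code_a, code_b))
```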
We investigate the performance of state of the art universal steganalyzers proposed in the literature. These universal steganalyzers are tested against a number of well-known steganographic embedding techniques that operate in both the spatial and transform domains. Our experiments are performed using a large data set of JPEG images obtained by randomly crawling a set of publicly available websites. The image data set is categorized with respect to size, quality, and texture to determine their potential impact on steganalysis performance. To establish a comparative evaluation of techniques, undetectability results are obtained at various embedding rates. In addition to variation in cover image properties, our comparison also takes into consideration different message length definitions and computational complexity issues. Our results indicate that the performance of steganalysis techniques is affected by the JPEG quality factor, and JPEG recompression artifacts serve as a source of confusion for almost all steganalysis techniques.
In the past few years, we have witnessed a number of powerful steganalysis technique proposed in the literature. These technique could be categorized as either specific or universal. Each category of techniques has a set of advantages and disadvantages. A steganalysis technique specific to a steganographic embedding technique would perform well when tested only on that method and might fail on all others. On the other hand, universal steganalysis methods perform less accurately overall but provide acceptable performance in many cases. In practice, since the steganalyst will not be able to know what steganographic technique is used, it has to deploy a number of techniques on suspected stego objects. In such a setting the most important question that needs to be answered is: What should the steganalyst do when the decisions produced by different steganalysis techniques are in contradiction? In this work, we propose and investigate information fusion techniques, that combine a number of steganalysis techniques. We start by reviewing possible fusion techniques which are applicable to steganalysis. Then we illustrate, through a number of case studies, how one is able to obtain performance improvements as well as scalability by employing suitable fusion techniques.
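As a concrete point of reference, the two simplest fusion rules that such a system can fall back on are soft score averaging and hard majority voting, sketched below. This is a minimal illustration under assumed inputs (per-detector stego probabilities or binary decisions); the paper reviews a broader set of fusion rules:

```python
import numpy as np

def fuse_scores(scores, weights=None, threshold=0.5):
    """Soft fusion: weighted average of per-detector P(stego) values.

    scores  : list of probabilities in [0, 1], one per steganalyzer
    weights : optional reliability weights (e.g., each detector's validation
              accuracy); uniform if omitted
    Returns the fused probability and the resulting decision.
    """
    scores = np.asarray(scores, dtype=np.float64)
    weights = np.ones_like(scores) if weights is None else np.asarray(weights, dtype=np.float64)
    fused = float(np.dot(weights, scores) / weights.sum())
    return fused, ("stego" if fused >= threshold else "cover")

def majority_vote(decisions):
    """Hard fusion fallback when only binary decisions are available."""
    stego_votes = sum(1 for d in decisions if d == "stego")
    return "stego" if stego_votes > len(decisions) / 2 else "cover"
```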
A number of steganographic embedding techniques have been proposed over the past few years. In turn, the development of these techniques has led to an increased interest in steganalysis techniques. More specifically, universal steganalysis techniques have become more attractive since they work independently of the embedding technique. In this work, our goal is to compare a number of universal steganalysis techniques proposed in the literature, including techniques based on binary similarity measures, wavelet coefficients' statistics, and DCT based image features. These universal steganalysis techniques are tested against a number of well-known embedding techniques, including Outguess, F5, model based steganography, and perturbed quantization. Our experiments are performed on a large dataset of JPEG images obtained by randomly crawling a set of publicly available websites. The image dataset is categorized with respect to size and quality. We benchmark embedding rate versus detectability for several widely used embedding as well as universal steganalysis techniques. Furthermore, we provide a framework for benchmarking future techniques.
KEYWORDS: Video, Computer programming, Video compression, Video coding, Detection and tracking algorithms, Cameras, Motion estimation, Multimedia, Image analysis, Video processing
This paper presents a simple and effective pre-processing method developed for the segmentation of MPEG compressed video sequences. The proposed method for scene-cut detection only involves computing the number of bits spent on each frame (encoding cost data), thus avoiding decoding the bitstream. The information is separated into I-, P-, and B-frames, forming three vectors that are independently processed by a new peak detection algorithm based on overcomplete filter banks and on joint thresholding using a confidence number. Each processed vector yields a set of candidate frame numbers, i.e., 'hints' of positions where scene cuts may have occurred. The hints for all frame types are recombined into one frame sequence and clustered into scene cuts. The algorithm was not designed to distinguish among types of cuts, but rather to indicate their position and duration. Experimental results show that the proposed algorithm is effective in detecting abrupt scene changes as well as gradual transitions. For precision-demanding applications, the algorithm can be used with a low confidence factor, just to select the frames that are worth investigating with a more complex algorithm. The algorithm is not particularly tailored to MPEG and can be applied to most video compression techniques.
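The overall pipeline (per-type cost vectors, peak 'hints', clustering into cuts) can be sketched as follows. The sliding-window z-score used here is a simplifying stand-in for the paper's overcomplete-filter-bank peak detector, and the window, confidence, and gap parameters are assumptions:

```python
import numpy as np

def candidate_cuts(bit_counts, frame_types, ftype, window=10, confidence=3.0):
    """Flag frames of one type whose encoding cost deviates strongly from a
    local baseline. Returns frame indices ('hints') in bitstream order.
    """
    idx = [i for i, t in enumerate(frame_types) if t == ftype]
    costs = np.asarray([bit_counts[i] for i in idx], dtype=np.float64)
    hints = []
    for k in range(len(costs)):
        lo, hi = max(0, k - window), min(len(costs), k + window + 1)
        local = np.delete(costs[lo:hi], k - lo)       # neighbors, excluding frame k
        if local.size == 0:
            continue
        mu, sigma = local.mean(), local.std() + 1e-9
        if abs(costs[k] - mu) > confidence * sigma:
            hints.append(idx[k])
    return hints

def detect_scene_cuts(bit_counts, frame_types, gap=5, **kw):
    """Recombine per-type hints and cluster nearby hints into single cuts."""
    hints = sorted(h for t in "IPB" for h in candidate_cuts(bit_counts, frame_types, t, **kw))
    cuts, current = [], []
    for h in hints:
        if current and h - current[-1] > gap:
            cuts.append((current[0], current[-1]))     # (start, end) of a transition
            current = []
        current.append(h)
    if current:
        cuts.append((current[0], current[-1]))
    return cuts
```

Lowering the confidence factor makes the detector more permissive, which matches the suggested use of the method as a cheap pre-filter ahead of a more complex analysis stage.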