Omnidirectional video, also known as 360° video, is becoming quite popular, since it provides a more immersive and natural representation of the real world. However, to fulfill the expectation of a high quality of experience (QoE), the video content delivered to the end users must also have high quality. Objective quality assessment metrics are therefore required to evaluate the video quality automatically. This paper starts by presenting the results of a subjective assessment campaign conducted to evaluate the impact on quality of HEVC compression and/or spatial/temporal subsampling, when the videos are displayed in a head-mounted display (HMD). The subjective assessment results are then used as ground truth to evaluate conventional quality assessment metrics developed for 2D video, as well as some of the recently proposed metrics for omnidirectional video, namely spherical peak signal-to-noise ratio (S-PSNR), weighted-to-spherically-uniform PSNR (WS-PSNR), and viewport PSNR (VP-PSNR); in the context of this study, adaptations of two SSIM-based metrics to omnidirectional content are also proposed and evaluated.
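The WS-PSNR metric evaluated in this study weights each pixel's squared error by the area it covers on the sphere; for an equirectangular frame this reduces to a cosine-of-latitude weight per row. The following is a minimal sketch under stated assumptions (8-bit grayscale frames; `ws_psnr` is an illustrative name, not code from the paper), using the commonly adopted cos-latitude weight definition:

```python
import numpy as np

def ws_psnr(ref, dist, max_val=255.0):
    """Weighted-to-spherically-uniform PSNR for equirectangular frames.

    Each pixel's squared error is weighted by cos(latitude), so that the
    oversampled polar regions do not dominate the score.
    """
    h, w = ref.shape
    # Latitude weight per row: 1 at the equator, approaching 0 at the poles.
    rows = np.arange(h)
    weights = np.cos((rows + 0.5 - h / 2.0) * np.pi / h)  # shape (h,)
    weights = np.repeat(weights[:, None], w, axis=1)      # shape (h, w)

    err = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
    wmse = np.sum(err * weights) / np.sum(weights)
    if wmse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / wmse)
```

For a spatially uniform error the weighting cancels out and WS-PSNR equals plain PSNR; the two metrics diverge exactly when the distortion is concentrated near the poles or the equator.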
The main purpose of the research presented in this paper is the development and validation, through application to a case study, of an efficient form of satellite image classification that integrates ancillary information (Census data, the Municipal Master Plan, and the Road Network) and remote sensing data in a Geographic Information System. The developed procedure follows a layered classification approach, comprising three main stages: 1) pre-classification stratification; 2) application of Bayesian and maximum-likelihood classifiers; 3) post-classification sorting. Common approaches incorporate the ancillary data before, during, or after classification; in the proposed method, all three steps take the auxiliary information into account. The proposed method achieves globally much better classification results than the classical one-layer minimum-distance and maximum-likelihood classifiers, and it greatly improves the accuracy of those classes for which the classification process uses the ancillary data.
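The Bayesian step can be illustrated with a toy sketch: a maximum-likelihood rule becomes a maximum-a-posteriori (MAP) rule once class priors derived from the ancillary data (e.g. per-stratum land-use shares) are factored in. This is a minimal sketch assuming 1-D Gaussian class models and a hypothetical function name, not the paper's actual procedure:

```python
import numpy as np

def map_classify(x, means, variances, priors):
    """Assign each pixel value in x to the class maximizing the posterior.

    Likelihoods are 1-D Gaussians; `priors` would come from ancillary data
    (e.g. land-use shares within the current stratum), which is what turns
    the plain maximum-likelihood rule into a Bayesian (MAP) rule.
    """
    x = np.asarray(x, dtype=np.float64)[:, None]          # (n, 1)
    means = np.asarray(means, dtype=np.float64)[None, :]  # (1, k)
    var = np.asarray(variances, dtype=np.float64)[None, :]
    log_lik = -0.5 * np.log(2 * np.pi * var) - (x - means) ** 2 / (2 * var)
    log_post = log_lik + np.log(np.asarray(priors))[None, :]
    return np.argmax(log_post, axis=1)
```

With equal priors the rule reduces to maximum likelihood; with ancillary-derived priors, ambiguous pixels (equidistant from two class means) are resolved toward the class more plausible in that stratum.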
Many of the proposed image watermarking algorithms belong to the class of spread-spectrum techniques, in which a pseudo-noise signal (itself a function of the mark) is added to the host signal, either in the spatial or in the frequency domain. Under these approaches, several results and concepts from digital communication theory, such as M-ary digital modulation, channel coding, and optimum detection, are readily applicable.
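A toy illustration of the spread-spectrum idea: a key-seeded ±1 pseudo-noise pattern is added to the host, with its sign carrying one bit of the mark, and a correlation detector recovers that bit. This is an assumed minimal setup, not any specific published scheme:

```python
import numpy as np

def embed(host, bit, key, alpha=2.0):
    """Spread-spectrum embedding: add a key-seeded pseudo-noise pattern
    whose sign carries one bit of the mark."""
    rng = np.random.default_rng(key)
    p = rng.choice([-1.0, 1.0], size=host.shape)  # zero-mean pseudo-noise
    sign = 1.0 if bit else -1.0
    return host + alpha * sign * p

def detect(received, key):
    """Correlation detector: correlating with the same pseudo-noise
    pattern averages the host away and leaves the bit's sign."""
    rng = np.random.default_rng(key)
    p = rng.choice([-1.0, 1.0], size=received.shape)
    return float(np.mean(received * p)) > 0
```

The host acts as channel noise in the correlation, which is exactly why the digital-communications results mentioned above (optimum detection, channel coding) transfer so directly to this setting.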
KEYWORDS: Image compression, Digital watermarking, Image quality, Visualization, Data modeling, Image processing, Data communications, Information visualization, Visibility, Data processing
The recent development of digital multimedia communications, together with the intrinsic capability of digital information to be copied and manipulated, requires new copyright protection and content authentication schemes. This paper is devoted to the second issue: image and video content authentication. A computationally efficient spatial watermarking technique for the authentication of visual information, robust to small distortions caused by compression, is described. In essence, content-dependent authentication data is embedded into the picture by modifying the relationship of image projections throughout the entire image. To obtain a secure data embedding and extraction procedure, the directions onto which image parts are projected depend on a secret key. To guarantee minimum visibility of the embedded data, the insertion process is used in conjunction with perceptual models exploiting spatial-domain masking effects. The viability of the method as a means of protecting the content is assessed under JPEG compression and semantic content modifications. With the present system, robustness to JPEG compression up to compression factors of about 1:10 can be achieved, while maintaining the subjective image quality after watermark insertion. At the same time, it is possible to detect and localize small image manipulations.
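The general idea of embedding data by adjusting key-dependent image projections can be sketched in a much simplified form: project a block onto a key-seeded direction and quantize that projection so its quantizer-cell parity encodes one bit. This is an illustrative construction in the same spirit, not the authors' actual algorithm, and it omits the content dependence and perceptual masking described above:

```python
import numpy as np

def embed_block(block, bit, key, step=8.0):
    """Move the block along a key-dependent direction so that the
    quantized projection's parity encodes one authentication bit."""
    rng = np.random.default_rng(key)
    d = rng.standard_normal(block.size)
    d /= np.linalg.norm(d)                  # unit projection direction
    proj = float(block.ravel() @ d)
    q = np.floor(proj / step)
    if int(q) % 2 != bit:                   # snap to a cell of matching parity
        q += 1
    target = (q + 0.5) * step               # center of the chosen cell
    return block + (target - proj) * d.reshape(block.shape)

def extract_block(block, key, step=8.0):
    rng = np.random.default_rng(key)
    d = rng.standard_normal(block.size)
    d /= np.linalg.norm(d)
    proj = float(block.ravel() @ d)
    return int(np.floor(proj / step)) % 2
```

Because the projection lands at a cell center, perturbations that shift it by less than half a quantization step (e.g. mild compression noise) leave the extracted bit unchanged, while a localized manipulation of the block flips it with probability about one half, which is what enables tamper localization.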
The increasing availability of digitally stored information and the development of new multimedia broadcasting services have recently motivated research on copyright protection and authentication schemes for these services. Possible solutions range from low-level systems based on header descriptions associated with the bit-stream (labeling), up to high-level, holographically inlaid, non-deletable systems (watermarking). This paper focuses on authentication using the labeling approach; a generic framework is first presented, and two specific methods are then proposed for the particular cases of still images and video. The resistance of both methods to JPEG and MPEG-2 compression, as well as their sensitivity to image manipulations, is evaluated.
In standard DCT coding schemes such as MPEG, sequence compression is achieved by motion compensation, transformation, quantization, and entropy coding. In this paper, we follow the same path, adapting the elements of the coding scheme to the image signal. Motion compensation is achieved by a block-matching method in which the block size is adapted to the signal; great attention has been paid to the relevance of the motion field. Combined with the motion compensation, the two fields of each frame are merged, taking the measured motion vectors into account, to compose a pseudo-progressive frame. The encoding is applied to this `motion-compensated progressive' frame. A wavelet decomposition is then applied to each (inter or intra) frame. Such a transform, intrinsically possessing linear-phase and perfect-reconstruction properties, has been optimized to maximize a perceptually weighted coding gain. The wavelet coefficients are then vector-quantized in order to reach the maximum perceptual SNR: frequency weighting is taken into account. The relevance of the measured vector field allows a precise spatio-temporal quantization optimization. The vectors are entropy-coded by an adapted entropy code, taking the remaining inter-band dependence into account. Results obtained from 1 Mbit/s to 8 Mbit/s are shown for moving sequences at the conference.
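The wavelet used in the paper is a linear-phase perfect-reconstruction filter bank optimized for perceptual coding gain; the Haar pair below is only the simplest filter bank with the perfect-reconstruction property, shown to illustrate the analysis/synthesis structure rather than the optimized filters themselves:

```python
import numpy as np

def haar_analysis(x):
    """One level of a 1-D orthonormal Haar wavelet decomposition:
    split the signal into a low-pass (approx) and high-pass (detail) band."""
    x = np.asarray(x, dtype=np.float64)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_synthesis(approx, detail):
    """Inverse transform; the Haar filters reconstruct the input exactly,
    which is the perfect-reconstruction property."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x
```

In a coder like the one described, it is the `approx`/`detail` subband coefficients (not the pixels) that are vector-quantized with frequency-dependent perceptual weights, and the perfect-reconstruction property guarantees that all remaining distortion comes from quantization alone.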