Paper
25 February 2014 Visually lossless coding based on temporal masking in human vision
Proceedings Volume 9014, Human Vision and Electronic Imaging XIX; 90141C (2014) https://doi.org/10.1117/12.2043150
Event: IS&T/SPIE Electronic Imaging, 2014, San Francisco, California, United States
Abstract
This paper presents a method for perceptual video compression that exploits the phenomenon of backward temporal masking. We give an overview of visual temporal masking and discuss models for identifying portions of a video sequence that are masked by this phenomenon in the human visual system. A quantization control model based on the psychophysical model of backward visual temporal masking was developed. We conducted two types of subjective evaluations and demonstrated that the proposed method achieves up to 10% bitrate savings on top of a state-of-the-art encoder while producing visually identical video. The proposed methods were evaluated using an HEVC encoder.
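To illustrate the quantization control idea, the sketch below shows one way backward temporal masking could drive per-frame quantization: frames immediately preceding an abrupt scene change are backward-masked by the cut, so they can tolerate coarser quantization without visible loss. This is a minimal Python sketch under assumed parameters (the frame-difference threshold, masked span, and QP boost are hypothetical and not from the paper); the paper's actual model is a psychophysically derived control integrated with an HEVC encoder.

# Minimal sketch of quantization control driven by backward temporal masking.
# All names and thresholds are illustrative assumptions, not the authors' implementation.

import numpy as np

def frame_difference(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """Mean absolute luma difference between consecutive frames."""
    return float(np.mean(np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))))

def backward_masking_qp_offsets(frames, cut_threshold=30.0, masked_span=2, qp_boost=4):
    """Return a per-frame QP offset list.

    Frames immediately preceding a detected scene cut are assumed to be
    backward-masked and receive a positive QP offset (coarser quantization).
    masked_span and qp_boost are hypothetical parameters.
    """
    offsets = [0] * len(frames)
    for i in range(1, len(frames)):
        if frame_difference(frames[i - 1], frames[i]) > cut_threshold:
            # Frame i starts a new shot; raise QP on the last masked_span
            # frames of the previous shot, which the cut masks backward in time.
            for j in range(max(0, i - masked_span), i):
                offsets[j] = qp_boost
    return offsets

In such a scheme, the returned offsets would be fed to the encoder's per-frame delta-QP control, so that bits are saved on masked frames while unmasked frames keep the baseline quantization.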
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Velibor Adzic, Howard S. Hock, and Hari Kalva "Visually lossless coding based on temporal masking in human vision", Proc. SPIE 9014, Human Vision and Electronic Imaging XIX, 90141C (25 February 2014); https://doi.org/10.1117/12.2043150
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
Video
Quantization
Visualization
Visibility
Computer programming
Video compression
Video coding