KEYWORDS: Computer programming, Video coding, Optical spheres, Machine learning, Video, Mobile devices, System on a chip, Visual information processing, Electronic imaging, Current controlled current source
In this paper, we show that it is possible to reduce the complexity of Intra MB coding in H.264/AVC based
on a novel chance-constrained classifier. Using pairs of simple mean-variance values, our technique is able
to reduce the complexity of the Intra MB coding process with negligible loss in PSNR. We present an
alternative, machine-learning-based approach to this classification problem. Implementation results
show that the proposed method reduces encoding time to about 20% of the reference implementation with
an average loss of 0.05 dB in PSNR.
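To make the idea concrete, the minimal Python sketch below shows the kind of per-macroblock mean-variance features such a classifier could operate on, together with a toy linear decision rule that picks between the Intra 16x16 and Intra 4x4 mode families so the encoder can skip exhaustive rate-distortion evaluation of the other family. The feature aggregation, weights, and threshold are illustrative assumptions only; they stand in for, and are not, the paper's chance-constrained classifier.

import numpy as np

def mb_mean_variance_features(mb):
    """Compute (mean, variance) pairs for the 16 4x4 sub-blocks of a 16x16 luma macroblock."""
    feats = []
    for i in range(0, 16, 4):
        for j in range(0, 16, 4):
            sub = mb[i:i+4, j:j+4].astype(np.float64)
            feats.append((sub.mean(), sub.var()))
    return np.array(feats)  # shape (16, 2)

def predict_intra_mode_family(mb, w=(0.02, 0.5), b=-3.0):
    """Toy linear rule (hypothetical weights): smooth macroblocks map to I16x16,
    textured ones to I4x4, so only one mode family needs full RD evaluation."""
    feats = mb_mean_variance_features(mb)
    avg_var = feats[:, 1].mean()       # average sub-block variance
    mean_spread = feats[:, 0].std()    # spread of sub-block means
    score = w[0] * mean_spread + w[1] * avg_var + b
    return 'I4x4' if score > 0 else 'I16x16'

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth_mb = np.full((16, 16), 128, dtype=np.uint8)
    textured_mb = rng.integers(0, 256, (16, 16), dtype=np.uint8)
    print(predict_intra_mode_family(smooth_mb))    # expected: I16x16
    print(predict_intra_mode_family(textured_mb))  # expected: I4x4

In practice the classifier's parameters would be learned from training macroblocks rather than fixed by hand, which is where the machine-learning formulation enters.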
KEYWORDS: Video, Video surveillance, Mobile devices, Image processing, Video processing, Video compression, Scalable video coding, Computer programming, Image quality, Video coding
Video applications on handheld devices such as smart phones pose a significant challenge to achieve high quality user
experience. Recent advances in processor and wireless networking technology are producing a new class of multimedia
applications (e.g. video streaming) for mobile handheld devices. These devices are lightweight and compact,
and therefore have very limited resources: lower processing power, smaller display resolution, less memory, and limited
battery life compared to desktop and laptop systems. Multimedia applications, on the other hand, have extensive
processing requirements, which makes mobile devices extremely resource-hungry. In addition, device-specific
properties (e.g. display screen) significantly influence the human perception of multimedia quality. In this paper we
propose a saliency based framework that exploits the structure in content creation as well as the human vision system to
find the salient points in the incoming bitstream and adapt it to the target device, thus improving the quality of
the adapted regions around those salient points. Our experimental results indicate that an adaptation process that is cognizant of
video content and user preferences can produce video of better perceptual quality for mobile devices. Furthermore, we
demonstrate how such a framework affects the user experience on a handheld device.
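As a rough illustration of saliency-guided adaptation, the sketch below computes a crude center-surround saliency map and crops a display-sized window around the most salient point before downscaling or re-encoding for the handheld screen. The Gaussian-difference saliency stand-in, the window sizes, and all parameters are assumptions for illustration; the framework described above additionally exploits content-creation structure and user preferences, which are not modeled here.

import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(gray):
    """Crude center-surround saliency: difference of fine and coarse Gaussian blurs
    (an assumed stand-in for the framework's saliency model)."""
    fine = gaussian_filter(gray.astype(np.float64), sigma=2)
    coarse = gaussian_filter(gray.astype(np.float64), sigma=16)
    s = np.abs(fine - coarse)
    return s / (s.max() + 1e-9)

def adapt_frame(gray, target_h, target_w):
    """Crop a target-display-sized window centred on the most salient point, so the
    adapted output preserves detail where viewers are likely to look."""
    sal = saliency_map(gray)
    cy, cx = np.unravel_index(np.argmax(sal), sal.shape)
    h, w = gray.shape
    top = int(np.clip(cy - target_h // 2, 0, h - target_h))
    left = int(np.clip(cx - target_w // 2, 0, w - target_w))
    return gray[top:top + target_h, left:left + target_w]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, (720, 1280)).astype(np.uint8)
    frame[300:360, 600:700] = 255      # bright patch acts as the salient region
    crop = adapt_frame(frame, 320, 480)  # e.g. a small handheld display
    print(crop.shape)                    # (320, 480)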