This paper proposes a simple but effective method for local disparity remapping that is capable of enhancing the depth
quality of stereoscopic 3D images. In order to identify and scale the unperceivable difference in disparities in a scene,
the proposed approach decomposes the disparity map into two disparity layers: a coarse disparity layer that
captures the global depth structure, and a detail disparity layer that captures the fine details of the depth structure.
of the depth structures. Then, the proposed method adaptively manipulates the detail disparity layer in depth and image
spaces under the guidance of a stereoacuity function, which describes the minimum amount of perceivable disparity
difference given a disparity magnitude. In this way, relative depths between objects (or regions) can be effectively
emphasized while providing spatial adaptability in the image space. Experimental results showed that the proposed
method improves both the depth quality and the overall viewing quality of stereoscopic 3D images.
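The two-layer decomposition above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the box filter stands in for whatever low-pass filter the authors use, and `gain_fn` is a hypothetical placeholder for the stereoacuity-driven gain.

```python
import numpy as np

def box_blur(d, k=9):
    """Separable box filter; a simple stand-in for any low-pass filter."""
    pad = k // 2
    p = np.pad(d, pad, mode="edge")
    kern = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, rows)

def remap_detail(disparity, gain_fn, k=9):
    """Split a disparity map into coarse + detail layers and rescale the detail layer.

    gain_fn maps the local coarse disparity magnitude to a scaling factor,
    playing the role of the stereoacuity-guided gain described in the abstract.
    """
    coarse = box_blur(disparity, k)            # global depth structure
    detail = disparity - coarse                # fine depth detail
    return coarse + gain_fn(np.abs(coarse)) * detail
```

With a gain of 1 everywhere the map is reconstructed exactly; gains above 1 amplify depth details that would otherwise fall below the perceivable disparity difference.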
Automatic prediction of visual discomfort for stereoscopic videos is important for addressing the viewing safety issues of stereoscopic displays. In the literature, disparity and motion characteristics have been identified as major factors of visual discomfort caused by stereoscopic content. To develop an algorithm that accurately predicts visual discomfort in stereoscopic videos, this study investigates how to combine the individual prediction values of disparity- and motion-induced discomforts into an overall prediction value of visual discomfort. To that end, subjective experiments were performed with various stereoscopic videos, and four candidate combination methods were compared based on the experimental results: weighted summation, multiplication, Minkowski summation, and max combination. Experimental results showed that Minkowski summation with a high exponent and the max combination yielded the best accuracy in predicting the overall level of visual discomfort. The results indicate that the overall level of perceived discomfort is dominantly affected by the most significant discomfort factor, i.e., a winner-takes-all mechanism. Our results could be useful for the future development of an accurate and reliable algorithm for visual discomfort prediction.
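The four combination rules compared in the study can be written compactly. A minimal sketch, assuming scalar per-factor discomfort scores; the exponent `p` and weight `w` are illustrative values, not the fitted ones from the paper.

```python
def combine_discomfort(d_disp, d_motion, method="minkowski", p=8.0, w=0.5):
    """Fuse disparity- and motion-induced discomfort predictions into one score."""
    if method == "weighted_sum":
        return w * d_disp + (1.0 - w) * d_motion
    if method == "multiplication":
        return d_disp * d_motion
    if method == "minkowski":
        return (d_disp ** p + d_motion ** p) ** (1.0 / p)
    if method == "max":
        return max(d_disp, d_motion)
    raise ValueError(f"unknown method: {method}")
```

As `p` grows, the Minkowski sum converges to the max of its inputs, which is why a high-exponent Minkowski sum and the max rule behave alike: both let the dominant factor win.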
As stereoscopic displays have spread, it has become important to understand what causes visual fatigue and discomfort, and what happens in the visual system beyond the retina, while viewing stereoscopic 3D images on such displays. In this study, functional magnetic resonance imaging (fMRI) was used as an objective measurement to identify the human brain regions involved in processing stereoscopic stimuli with excessive disparities. Based on subjective measurement results, we selected two subsets of our dataset: comfort videos and discomfort videos. An fMRI experiment was then conducted with these subsets in order to identify which brain regions were activated while viewing the discomfort videos on a stereoscopic display. We found that the right middle frontal gyrus, the right inferior frontal gyrus, the right intraparietal lobule, the right middle temporal gyrus, and the bilateral cuneus were significantly more activated during the processing of excessive disparities than of small disparities (< 1 degree).
As the viewing safety issues of stereoscopic 3D services have come under the spotlight again, it has become more important to investigate the determinants of visual discomfort in viewing stereoscopic images. In general, excessive binocular disparity has been regarded as one of the key determinants of visual discomfort in stereoscopic viewing. However, given the complexity of the visual system, the degree of perceived visual discomfort could also vary depending on other characteristics of the visual stimulus. Inspired by previous studies investigating the relation between stimulus width and the binocular fusion limit, we assume that stimulus width can also affect the subjective sensation of visual discomfort in stereoscopic viewing. This paper investigates the relationship between stimulus width and visual comfort by measuring subjective visual discomfort. Experimental results showed that smaller stimulus widths could induce more visual discomfort.
The great success of the three-dimensional (3D) digital cinema industry has opened up a new era of 3D content services.
While we have witnessed a rapid surge of stereoscopic 3D services, the issue of viewing safety remains a possible
obstacle to the widespread deployment of such services. In this paper, we propose a novel disparity remapping method to
reduce the visual discomfort induced by fast changes in disparity. The proposed remapping approach selectively adjusts
the disparities of the discomfort regions where fast changes in disparity occur. To this end, the proposed
approach detects visually important regions in a stereoscopic 3D video, which may have a dominant influence on visual
comfort in video frames, and then locally adjusts the disparity by taking into account the disparity changes in those
regions. The experimental results demonstrate that the proposed approach of adjusting local problematic regions
can improve visual comfort while preserving the naturalness of the scene.
A visual discomfort assessment metric is important in the creation and viewing of stereoscopic 3D content. This
paper investigates the importance of considering object thickness as well as disparity magnitude to predict visual
discomfort. Throughout the paper, we introduce the overall process to predict visual discomfort by analyzing the
thickness of objects and their disparity magnitude in an image. Using natural stereoscopic images, we evaluate the
contribution of object thickness to the prediction performance of visual discomfort. Experimental results demonstrate
that the combined use of disparity magnitude and object thickness substantially improves the prediction performance of
visual discomfort.
This paper investigates visual discomfort induced by the fast motion of a salient object in a stereoscopic video. We have
conducted a subjective assessment to investigate the degree of visual discomfort caused by motion characteristics of a
controlled graphics object in a video scene. From the results of the subjective assessment, we observe how the degree
of visual discomfort changes with the velocity and direction of object motion. In order to verify the applicability of our
observation for real stereoscopic 3D videos, we exploit the concept of visual saliency to define the salient object motion
severely affecting the degree of visual discomfort in a video scene. The salient object motion feature is extracted and a
visual comfort model is derived from our observation. Then we predict the degree of visual discomfort by using the
extracted motion feature and the visual comfort model. We have conducted a subjective test to compare the predicted
visual comfort score with the actual subjective score. The experimental results show that the predicted visual comfort score
correlates well with the actual subjective score.
KEYWORDS: Video, Scalable video coding, Spatial resolution, Quality measurement, Multimedia, Environmental sensing, Signal to noise ratio, Video compression, Roads, Mobile devices
Environments for the delivery and consumption of multimedia are often very heterogeneous, due to the use of various
terminals in varying network conditions. One example of such an environment is a wireless network providing
connectivity to a plethora of mobile devices. H.264/AVC Scalable Video Coding (SVC) can be utilized to deal with
diverse usage environments. However, in order to optimally tailor scalable video content along the temporal, spatial, or
perceptual quality axes, a quality metric is needed that reliably models subjective quality. The major contribution of this
paper is the development of a novel quality metric for scalable video bit streams having a low spatial resolution,
targeting consumption in wireless video applications. The proposed quality metric allows modeling the temporal, spatial,
and perceptual quality characteristics of SVC bit streams. This is realized by taking into account several properties of the
compressed bit streams, such as the temporal and spatial variation of the video content, the frame rate, and PSNR values.
Extensive subjective experiments were conducted to construct and verify the reliability of our quality
metric. The experimental results show that the proposed quality metric effectively reflects subjective quality.
Moreover, the performance of the quality metric is uniformly high for video sequences with different temporal and
spatial characteristics.
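The abstract names the inputs to the quality metric (PSNR, frame rate, and the temporal and spatial variation of the content) but not its functional form. The sketch below is purely illustrative: a linear combination with placeholder coefficients, not the paper's fitted model.

```python
def svc_quality_score(psnr_db, frame_rate, spatial_var, temporal_var,
                      coeff=(0.02, 0.01, -0.05, -0.05), bias=0.0):
    """Illustrative linear quality model for a low-resolution SVC bit stream.

    coeff and bias are hypothetical placeholders; the actual model and its
    fitted coefficients are not given in the abstract.
    """
    a, b, c, d = coeff
    return bias + a * psnr_db + b * frame_rate + c * spatial_var + d * temporal_var
```

In practice such coefficients would be fitted by regression against subjective scores collected for bit streams spanning the temporal, spatial, and perceptual quality axes.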