Advances in imaging and display engineering have given rise to new and improved image and video applications that aim to maximize visual quality under given resource constraints (e.g., power, bandwidth). Because the human visual system is an imperfect sensor, the images/videos can be represented in a mathematically lossy fashion but with enough fidelity that the losses are visually imperceptible—commonly termed “visually lossless.” Although a great deal of research has focused on gaining a better understanding of the limits of human vision when viewing natural images/video, a universally or even largely accepted definition of visually lossless remains elusive. Differences in testing methodologies, research objectives, and target applications have led to multiple ad-hoc definitions that are often difficult to compare to or otherwise employ in other settings. We present a compendium of technical experiments relating to both vision science and visual quality testing that together explore the research and business perspectives of visually lossless image quality, as well as review recent scientific advances. Together, the studies presented in this paper suggest that a single definition of visually lossless quality might not be appropriate; rather, a better goal would be to establish varying levels of visually lossless quality that can be quantified in terms of the testing paradigm.
While heuristics have evolved over decades for the capture and display of conventional 2D film, it is not clear that they always apply well to stereoscopic 3D (S3D) film. Further, while there has been considerable recent research on viewer comfort in S3D media, little attention has been paid to audience preferences for filming parameters in S3D. Here we evaluate viewers’ preferences for moving S3D film content in a theatre setting. Specifically, we examine preferences for combinations of camera motion (speed and direction) and stereoscopic depth (interaxial separation, IA). The amount of IA had no impact on clip preferences, regardless of the direction or speed of camera movement. Preferences were, however, influenced by camera speed, but only in the in-depth condition, where viewers preferred faster motion. Given that previous research shows slower speeds to be more comfortable for viewing S3D content, our results show that viewing preferences cannot be predicted simply from measures of comfort. Rather, viewer response to S3D film is complex, and film parameters selected to enhance comfort may in some instances produce less appealing content.
Stereoscopic displays must present separate images to the viewer's left and right eyes. Crosstalk is the unwanted contamination of one eye's image by the image of the other eye. It has been shown to cause distortions, reduce visual comfort, and increase perceived workload during the performance of visual tasks. Crosstalk also affects one's ability to perceive stereoscopic depth, although little consideration has been given to the perception of depth magnitude in the presence of crosstalk. We extend a previous study (Tsirlin, Allison, and Wilcox, 2011) on the perception of depth magnitude in stereoscopic occluding and non-occluding surfaces to the special case of crosstalk in thin structures. We used a paradigm in which observers estimated the perceived depth difference between two thin vertical bars using a measurement scale. Our data show that as crosstalk levels increase, the magnitude of perceived depth decreases, especially for stimuli with larger relative disparities. In contrast to the effect of crosstalk on depth magnitude in larger objects, in thin structures a significant detrimental effect was found at all disparities. Our findings, when considered with the other perceptual consequences of crosstalk, suggest that its presence in S3D media, even in modest amounts, will reduce observers' satisfaction.
KEYWORDS: Cameras, Visual system, Stereoscopic displays, Information operations, Cinematography, Image compression, 3D vision, 3D displays, Computer simulations, Imaging systems
In a stereoscopic 3D scene, the non-linear mapping between real space and disparity can produce distortions when the camera geometry differs from natural stereoscopic geometry. When the viewing distance and zero-screen-parallax setting are held constant and the interaxial separation is varied, there is an asymmetric distortion in the mapping of stereoscopic to real space. If an object traverses this space at constant velocity, one might anticipate distortion of the perceived velocity. To determine whether the predicted distortions are in fact perceived, we assessed perceived acceleration and deceleration using an animation of a ball moving in depth through a simulated environment, viewed stereoscopically. The method of limits was used to measure transition points between perceived acceleration and deceleration as a function of interaxial separation and context (textured vs. non-textured background). Based on binocular geometry, we predicted that the transition points would shift toward deceleration for small interaxial separations and toward acceleration for large ones. However, the average transition values were not influenced by interaxial separation. These data suggest that observers are able to discount distortions of stereoscopic space in interpreting object motion. These results have important implications for the rendering and capture of effective stereoscopic 3D content.
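The geometric prediction above can be sketched numerically. The toy model below (all parameter values are illustrative assumptions, not values from the experiment) computes screen parallax under pinhole camera geometry and the resulting perceived depth from simple ray intersection, showing that equal steps in scene depth map to unequal steps in perceived depth:

```python
def screen_parallax(z, interaxial, focal=0.05, conv_dist=2.0, mag=30.0):
    """Toy screen parallax (metres) for a point at camera distance z;
    zero at the convergence (zero-screen-parallax) distance."""
    return mag * focal * interaxial * (1.0 / conv_dist - 1.0 / z)

def perceived_depth(parallax, ipd=0.065, view_dist=0.8):
    """Perceived distance of a fused point from the viewer, from the
    intersection of the two lines of sight (assumes parallax < ipd)."""
    return ipd * view_dist / (ipd - parallax)

# Equal 0.5 m steps in scene depth map to unequal perceived-depth steps,
# so an object moving at constant velocity would, geometrically, be
# predicted to appear to accelerate or decelerate.
steps = [perceived_depth(screen_parallax(z, 0.03)) for z in (2.0, 2.5, 3.0)]
```

The abstract's empirical point is that observers' transition values did not follow this prediction, i.e. viewers appear to discount such distortions.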
Crosstalk remains an important determinant of stereoscopic 3D (S3D) image quality. Defined as the leakage of one eye's image into the image of the other eye, it affects all commercially available stereoscopic viewing systems. Previously we have shown that crosstalk affects perceived depth magnitude in S3D displays: perceived depth between two lines separated in depth decreased as crosstalk increased. The experiments described here extend our previous work to complex images of natural scenes. We controlled crosstalk levels by simulating them in images presented on a zero-crosstalk mirror stereoscope display. Observers were asked to estimate the amount of stereoscopic depth between pairs of objects in stereo-photographs of cluttered rooms. The data show that as crosstalk increased, perceived depth decreased, an effect found at all disparities. As in our previous experiments, a significant decrease in perceived depth was observed with as little as 2-4% crosstalk. Taken together, these results demonstrate that our previous findings generalize to natural scenes and show that crosstalk reduces perceived depth magnitude even in scenes containing pictorial depth cues.
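Simulated crosstalk of the kind described above is often modelled as a linear leakage between the two eyes' images. A minimal sketch, assuming a symmetric linear-light blend (the papers' exact simulation pipeline is not specified here):

```python
import numpy as np

def add_crosstalk(left, right, c):
    """Blend a fraction c of each eye's image into the other eye's
    image: a simple symmetric, linear-light crosstalk model."""
    ghosted_left = (1.0 - c) * left + c * right
    ghosted_right = (1.0 - c) * right + c * left
    return ghosted_left, ghosted_right

# Even a 2% leak puts a faint ghost of each eye's content into the
# other eye's image, which the abstracts report is enough to reduce
# perceived depth.
L = np.array([1.0, 0.0])
R = np.array([0.0, 1.0])
gl, gr = add_crosstalk(L, R, 0.02)
```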
Stereoscopic displays must present separate images to the viewer's left and right eyes. Crosstalk is the unwanted contamination of one eye's image by the image of the other eye. It has been shown to cause distortions, reduce image quality and visual comfort, and increase perceived workload when performing visual tasks. Crosstalk also affects one's ability to perceive stereoscopic depth, although little consideration has been given to the perception of depth magnitude in the presence of crosstalk. In this paper we extend a previous study (Tsirlin, Allison & Wilcox, 2010, submitted) on the perception of depth magnitude in stereoscopic occluding and non-occluding surfaces to the special case of crosstalk in thin structures. Crosstalk in thin structures differs qualitatively from that in larger objects due to the separation of the ghost and real images, and thus could in theory have distinct perceptual consequences. To address this question we used a psychophysical paradigm in which observers estimated the perceived depth difference between two thin vertical bars using a measurement scale. Our data show that crosstalk degrades perceived depth: as crosstalk levels increased, the magnitude of perceived depth decreased, especially for stimuli with larger relative disparities. In contrast to the effect of crosstalk on depth magnitude in larger objects, in thin structures a significant detrimental effect was found at all disparities. Our findings, when considered with the other perceptual consequences of crosstalk, suggest that its presence in S3D media, even in modest amounts, will reduce observers' satisfaction.
Expected temporal effects in a night vision goggle (NVG) include the fluorescence time constant, charge depletion at high signal levels, the response time of the automatic gain control (AGC), and other internal modulations in the NVG. There is also the possibility of physical damage or other non-reversible effects in response to large transient signals. To study the temporal behaviour of an NVG, a parametric Matlab model was created. Of particular interest in the present work was the variation of NVG gain, induced by the AGC, after a short, intense pulse of light. To verify the model, the reduction of gain after a strong pulse was investigated experimentally using a simple technique, and preliminary laboratory measurements were performed. The experimental methodology is described, along with preliminary validation data.
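The gain-recovery behaviour of interest can be illustrated with a first-order sketch. This is an assumption for illustration only: a single exponential time constant standing in for the parametric Matlab model, whose actual structure involves several interacting terms:

```python
import math

def gain_after_pulse(t, g_steady, g_min, tau):
    """Toy first-order AGC recovery: gain is driven down to g_min by an
    intense pulse at t = 0 and relaxes exponentially back toward the
    steady-state gain g_steady with time constant tau (seconds).
    All parameter values are illustrative, not measured NVG values."""
    return g_steady - (g_steady - g_min) * math.exp(-t / tau)
```

Fitting the measured post-pulse gain to a curve of this shape is one simple way to extract an effective AGC recovery time constant from data like that described above.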
Night vision devices (NVDs) or night-vision goggles (NVGs) based on image intensifiers improve nighttime visibility and extend night operations for military and, increasingly, civil aviation. However, NVG imagery is not equivalent to daytime vision, and impaired depth and motion perception have been noted. One potential cause of impaired perception of space and environmental layout is NVG halo, in which bright light sources appear to be surrounded by a disc-like halo. In this study we measured the characteristics of NVG halo psychophysically and objectively, and then evaluated the influence of halo on perceived environmental layout in a simulation experiment. Halos are generated in the device and are not directly related to the spatial layout of the scene. We found that, when visible, halo image (i.e. angular) size was only weakly dependent on both source intensity and distance, although halo intensity did vary with effective source intensity. The size of the halo image surrounding a light source is thus largely independent of the source distance and so does not obey the normal laws of perspective. In simulation experiments we investigated the effect of NVG halo on judgements of observer attitude with respect to the ground during simulated flight. We discuss the results in terms of NVG design and of the ability of human operators to compensate for perceptual distortions.
Perception of motion-defined form is important in operational tasks such as search and rescue and camouflage breaking. Previously, we used synthetic Aviator Night Vision Imaging System (ANVIS-9) imagery to demonstrate that the capacity to detect motion-defined form was degraded at low levels of illumination (see Macuda et al., 2004; Thomas et al., 2004). To validate our simulated NVG results, the current study evaluated observers’ ability to detect motion-defined form through a real ANVIS-9 system. The image sequences consisted of a target (square) that moved at a different speed than the background, or only depicted the moving background. For each trial, subjects were shown a pair of image sequences and required to indicate which sequence contained the target stimulus. Mean illumination, and hence image noise level, was varied by means of Neutral Density (ND) filters placed in front of the NVG objectives. At each noise level, we tested subjects at a series of target speeds. With both real and simulated NVG imagery, subjects had increased difficulty detecting the target with increased noise levels, at both slower and faster target speeds. These degradations in performance should be considered in operational planning. Further research is necessary to expand our understanding of the impact of NVG-produced noise on visual mechanisms.
Anecdotal reports by pilots flying with Night Vision Goggles (NVGs) in urban environments suggest that halos produced by bright light sources impact flight performance. The current study developed a methodology to examine the impact of viewing distance on perceived halo size. This was a first step in characterizing the subtle phenomenon of halo. Observers provided absolute size estimates of halos generated by a red LED at several viewing distances. Physical measurements of these halos were also recorded. The results indicated that the perceived halo linear size decreased as viewing distance was decreased. Further, the data showed that halos subtended a constant visual angle on the goggles (1°48’, ±7’) irrespective of distance up to 75’. This invariance with distance may impact pilot visual performance. For example, the counterintuitive apparent contraction of halo size with decreasing viewing distance may impact estimates of closure rates and of the spatial layout of light sources in the scene. Preliminary results suggest that halo is a dynamic phenomenon that requires further research to characterize the specific perceptual effects that it might have on pilot performance.
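The constant-visual-angle finding above has a simple geometric consequence: the halo's linear extent shrinks as the viewer approaches, the opposite of a real scene object. A short sketch using the reported 1°48′ (1.8°) angle:

```python
import math

def halo_linear_size(view_dist, angle_deg=1.8):
    """Linear extent of a halo subtending a constant visual angle of
    angle_deg (1 deg 48 min = 1.8 deg, the value reported above).
    Units of the result follow view_dist."""
    return 2.0 * view_dist * math.tan(math.radians(angle_deg) / 2.0)
```

For example, a halo spans roughly 2.4 ft at 75 ft but only about 0.8 ft at 25 ft, which is the counterintuitive apparent contraction with decreasing viewing distance that may affect closure-rate estimates.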
KEYWORDS: Goggles, Visualization, Night vision, Night vision goggles, Light sources and illumination, Light sources, Modulation transfer functions, Defense and security, Standards development, Psychophysics
Several methodologies have been used to determine resolution acuity through Night Vision Goggles. The present study compared NVG acuity estimates derived from the Hoffman ANV-126 and a standard psychophysical grating acuity task. For the grating acuity task, observers were required to discriminate between horizontal and vertical gratings according to a method of constant stimuli. Psychometric functions were generated from the performance data, and acuity thresholds were interpolated at a performance level of 70% correct. Acuity estimates were established at three different illumination levels (0.06 to 5×10⁻⁴ lux) for both procedures. These estimates were then converted to an equivalent Snellen value. The data indicate that grating acuity estimates were consistently better (i.e. lower scores) than acuity measures obtained from the Hoffman ANV-126. Furthermore, significant differences in estimated acuity were observed using different tube technologies. In keeping with previous acuity investigations, we suggest that although the Hoffman ANV-126 provides a rapid operational assessment of tube acuity, more rigorous psychophysical procedures, such as the grating task described here, be used to assess the real behavioural resolution of tube technologies.
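The two computational steps in this procedure, interpolating a 70%-correct threshold from a psychometric function and converting cycles/degree to an equivalent Snellen value, can be sketched as follows. The linear interpolation and the example data are illustrative stand-ins for the study's actual psychometric-function fit; the Snellen conversion uses the standard equivalence of 20/20 with 30 cycles/degree:

```python
import numpy as np

def threshold_at(spatial_freq_cpd, prop_correct, criterion=0.70):
    """Interpolate the grating spatial frequency (cycles/degree) at the
    criterion proportion correct (linear interpolation as a stand-in
    for a fitted psychometric function)."""
    # Proportion correct falls as spatial frequency rises, so reverse
    # both arrays to give np.interp the ascending x it requires.
    return float(np.interp(criterion, prop_correct[::-1], spatial_freq_cpd[::-1]))

def cpd_to_snellen_denominator(cpd):
    """Equivalent Snellen denominator (20/x), taking 20/20 as
    30 cycles/degree (1 arcmin per grating bar)."""
    return 600.0 / cpd

freqs = np.array([2.0, 4.0, 8.0, 16.0])   # illustrative data, not study results
pc = np.array([1.00, 0.95, 0.60, 0.50])
acuity = threshold_at(freqs, pc)           # cycles/degree at 70% correct
snellen = cpd_to_snellen_denominator(acuity)
```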
When a bright light source is viewed through Night Vision Goggles (NVG), the image of the source can appear enveloped in a “halo” that is much larger than the “weak-signal” point spread function of the NVG. The halo phenomenon was investigated in order to produce an accurate model of NVG performance for use in psychophysical experiments. Halos were created and measured under controlled laboratory conditions using representative Generation III NVGs. To quantitatively measure halo characteristics, the NVG eyepiece was replaced by a CMOS imager. Halo size and intensity were determined from camera images as functions of point-source intensity and ambient scene illumination. Halo images were captured over a wide range of source radiances (7 orders of magnitude) and then processed with standard analysis tools to yield spot characteristics. The spot characteristics were analyzed to verify our proposed parametric model of NVG halo event formation. The model considered the potential effects of many subsystems of the NVG in the generation of halo: objective lens, photocathode, image intensifier, fluorescent screen and image guide. A description of the halo effects and the model parameters is given in this work, along with a qualitative rationale for some of the parameter choices.
A concept is described for the detection and location of transient objects, in which a "pixel-binary" CMOS imager is used to give a very high effective frame rate for the imager. The sensitivity to incoming photons is enhanced by the use of an image intensifier in front of the imager. For faint signals and a high enough frame rate, a single "image" typically contains only a few photon or noise events. Only the event locations need be stored, rather than the full image. The processing of many such "fast frames" allows a composite image to be created. In the composite image, isolated noise events can be removed, photon shot noise effects can be spatially smoothed and moving objects can be de-blurred and assigned a velocity vector. Expected objects can be masked or removed by differencing methods. In this work, the concept of a combined image intensifier/CMOS imager is modeled. Sensitivity, location precision and other performance factors are assessed. Benchmark measurements are used to validate aspects of the model. Options for a custom CMOS imager design concept are identified within the context of the benefits and drawbacks of commercially available night vision devices and CMOS imagers.
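The event-accumulation idea described above can be sketched compactly. This is a minimal illustration of the compositing and isolated-noise rejection steps only; the de-blurring, velocity assignment, and differencing stages are omitted, and the thresholding rule is an assumption:

```python
import numpy as np

def composite_events(event_frames, shape, min_count=2):
    """Accumulate per-fast-frame photon/noise event coordinates into a
    composite image; pixels hit fewer than min_count times across all
    frames are treated as isolated noise events and zeroed."""
    acc = np.zeros(shape, dtype=np.int32)
    for events in event_frames:
        for y, x in events:
            acc[y, x] += 1
    acc[acc < min_count] = 0
    return acc

# A pixel hit in two fast frames survives; a one-off event is rejected.
frames = [[(1, 1)], [(1, 1), (3, 3)], []]
img = composite_events(frames, (5, 5))
```

Storing only event coordinates, as here, rather than full frames is what makes the very high effective frame rate tractable.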
Night vision devices are important tools that extend the operational capability of military and civilian flight operations. Although these devices enhance some aspects of night vision, they distort or degrade other aspects. Scintillation of the NVG signal at low light levels is one of the parameters that may affect pilot performance. We have developed a parametric model of NVG image scintillation. Measurements were taken of the output of a representative NVG at low light levels to validate the model and refine the values of the embedded parameters. A simple test environment was created using a photomultiplier and an oscilloscope. The model was used to create sequences of simulated NVG imagery that were characterized numerically and compared with measured NVG signals. The sequences of imagery are intended for use in laboratory experiments on depth and motion-in-depth perception.
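The core statistical behaviour such a scintillation model must capture is that relative frame-to-frame fluctuation grows as photon counts fall. A minimal sketch, assuming only Poisson photon arrivals amplified by a fixed tube gain (the actual parametric model includes further stages, e.g. the screen time constant):

```python
import numpy as np

def scintillation_frame(mean_photons, gain, shape, rng):
    """One frame of a toy intensifier-noise model: Poisson photon
    arrivals per pixel, each amplified by a fixed gain. Parameter
    values below are illustrative, not measured NVG values."""
    return gain * rng.poisson(mean_photons, size=shape)

rng = np.random.default_rng(0)
dim = scintillation_frame(0.5, 1000.0, (256, 256), rng)
bright = scintillation_frame(50.0, 1000.0, (256, 256), rng)
# Relative fluctuation (std/mean) scales roughly as 1/sqrt(mean photon
# count), so scintillation is far more visible at low light levels.
```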
KEYWORDS: Visualization, Night vision, Target detection, Photons, Motion models, Visual process modeling, Signal to noise ratio, Software development, Night vision goggles, Image visualization
The influence of Night Vision Goggle-produced noise on the perception of motion-defined form was investigated using synthetic imagery and standard psychophysical procedures. Synthetic image sequences incorporating synthetic noise were generated using a software model developed by our research group. This model is based on the physical properties of the Aviator Night Vision Imaging System (ANVIS-9) image intensification tube. The image sequences either depicted a target that moved at a different speed than the background, or only depicted the background. For each trial, subjects were shown a pair of image sequences and required to indicate which sequence contained the target stimulus. We tested subjects at a series of target speeds at several realistic noise levels resulting from varying simulated illumination. The results showed that subjects had increased difficulty detecting the target with increased noise levels, particularly at slower target speeds. This study suggests that the capacity to detect motion-defined form is degraded at low levels of illumination. Our findings are consistent with anecdotal reports of impaired motion perception in NVGs. Perception of motion-defined form is important in operational tasks such as search and rescue and camouflage breaking. These degradations in performance should be considered in operational planning.
Convergence of the real or virtual stereoscopic cameras is an important operation in stereoscopic display systems. For example, convergence can shift the range of portrayed depth to improve visual comfort; can adjust the disparity of targets to bring them nearer to the screen and reduce accommodation-vergence conflict; or can bring objects of interest into the binocular field-of-view. Although camera convergence is acknowledged as a useful function, there has been considerable debate over the transformation required. It is well known that rotational camera convergence or 'toe-in' distorts the images in the two cameras producing patterns of horizontal and vertical disparities that can cause problems with fusion of the stereoscopic imagery. Behaviorally, similar retinal vertical disparity patterns are known to correlate with viewing distance and strongly affect perception of stereoscopic shape and depth. There has been little analysis of the implications of recent findings on vertical disparity processing for the design of stereoscopic camera and display systems. We ask how such distortions caused by camera convergence affect the ability to fuse and perceive stereoscopic images.
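The vertical disparities produced by toe-in can be demonstrated with a small pinhole-projection sketch. All numbers (interaxial, toe angle, scene point) are illustrative, and the sign/handedness conventions are assumptions for the illustration:

```python
import numpy as np

def project(point, cam_x, toe_deg, f=1.0):
    """Pinhole projection of a 3-D point into a camera at (cam_x, 0, 0)
    looking along +z, toed in by toe_deg about the vertical (y) axis."""
    th = np.radians(toe_deg)
    x, y, z = point[0] - cam_x, point[1], point[2]
    xr = x * np.cos(th) - z * np.sin(th)   # rotate into the camera frame
    zr = x * np.sin(th) + z * np.cos(th)
    return f * xr / zr, f * y / zr

p = (0.5, 0.5, 2.0)                        # an off-axis scene point
# Converged ('toe-in') rig: left camera rotates +2 deg, right -2 deg,
# so the same point sits at different depths along each optical axis
# and its two images land at different image heights.
vd_toein = project(p, -0.03, 2.0)[1] - project(p, 0.03, -2.0)[1]
# Parallel rig: no rotation, so the vertical image coordinates agree.
vd_parallel = project(p, -0.03, 0.0)[1] - project(p, 0.03, 0.0)[1]
```

The keystone-like pattern of such vertical disparities across the image is what mimics the retinal vertical-disparity signal that normally covaries with viewing distance.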