KEYWORDS: Eye, Retina, Displays, Depth of field, Light sources, Point spread functions, Far field diffraction, Lenses, Diffraction, 3D image processing
By using a method of expanding the depth of field, the vergence-accommodation conflict (VAC) problem can be alleviated; when applied to an AR optical system, a clear virtual image can be delivered even under conditions that deviate from the depth of the virtual screen. To achieve these conditions, an optical system was developed that demonstrates the feasibility of a simpler extended-depth-of-field (EDOF) optical system.
AR optics that provides only a single virtual image depth cannot serve a wide range of depths because of the vergence-accommodation conflict (VAC). We have developed a way to overcome this mismatch by devising an AR optical system with two depths. From the analysis of the FOV, ER, and EB, together with experimental results, we show that the depth of the virtual image can span a wide range.
Approaches to control the moiré effect are discussed. To improve image quality, especially in autostereoscopic three-dimensional displays, the minimization of the moiré effect is considered, including minimization by angle, by period, by distance, etc. Less known, nontraditional but effective approaches, such as spectral trajectories, statistical treatment, convolution-based minimization, and the amplitude of the moiré patterns, are also discussed. On the other hand, the maximum moiré can be useful in displays. A proposed "2.5-D" display represents a new type of display, based on the moiré effect as its main physical principle. As of today, binary black-and-white images are displayed, but this is not a principal limit, and color imaging is also possible. A positive use of the moiré effect to show images, instead of its removal, may be considered a new approach in displaying.
The tutorial describes the essential features of moiré patterns: the circumstances under which they appear and how to estimate their characteristics (parameters), such as orientation and period. The moiré effect is described in two domains, the image (spatial) domain and the spectral domain, using complex numbers. The tutorial covers the indicial equation method, coplanar and noncoplanar sinusoidal gratings, and the moiré effect in a spatial object (a cylinder); it also explains the moiré wave vector, moiré spectra, and spectral trajectories, and summarizes the behavior of the visible patterns in moved/rotated gratings.
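The wave-vector description summarized above can be sketched numerically: the moiré wave vector is the difference of the two grating frequency vectors, from which period and orientation follow. This is a minimal sketch; the function names are illustrative, not from the tutorial.

```python
import math

def freq_vector(period, angle_deg):
    """Frequency vector of a linear grating: magnitude 1/period, normal at angle_deg."""
    a = math.radians(angle_deg)
    return (math.cos(a) / period, math.sin(a) / period)

def moire_params(p1, a1, p2, a2):
    """Period and orientation (deg) of the fundamental moire between two gratings."""
    f1 = freq_vector(p1, a1)
    f2 = freq_vector(p2, a2)
    fm = (f1[0] - f2[0], f1[1] - f2[1])   # moire wave vector = difference of grating vectors
    mag = math.hypot(fm[0], fm[1])
    period = float('inf') if mag == 0 else 1.0 / mag
    angle = math.degrees(math.atan2(fm[1], fm[0]))
    return period, angle

# Equal-period gratings rotated by 10 degrees: the classic p / (2 sin(alpha/2)) moire
T, ang = moire_params(1.0, 0.0, 1.0, 10.0)
```

For equal periods and a small mutual angle, this reproduces the familiar magnified moiré period p / (2 sin(α/2)).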
For a multiview three-dimensional (3-D) display system using a two-dimensional (2-D) flat panel display, it is very important to accurately attach an optical plate, such as a parallax barrier or a lenticular lens sheet, onto the 2-D display panel for the best 3-D image quality. In most practical cases, however, misalignment occurs, since perfect alignment in the assembly process is too difficult. In general, angular misalignment deteriorates 3-D image quality by increasing crosstalk, so the resulting 3-D images are even distorted into tilted ones. To correct the distorted 3-D images, we propose a method that skews the 3-D objects before each multiview image is taken by the multiple cameras. For this, a formula is derived to determine the amount of skew. Using it, experimental results show that distorted 3-D images in a misaligned multiview 3-D display system are completely corrected. Since skewing 3-D objects implies a coordinate transformation of 3-D space, this method can also be used to manipulate 3-D image data obtained from a depth camera in order to correct distorted 3-D images caused by angular misalignment.
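The pre-compensating skew is a coordinate transformation, so it can be sketched as a shear matrix applied to scene points before capture. The paper derives the exact amount of skew; the axis and sign convention below are illustrative assumptions only.

```python
import math

def skew_matrix(tilt_deg):
    """2D homogeneous shear that pre-compensates an angular misalignment of tilt_deg.
    Illustrative convention: shear x by -tan(tilt) * y; the paper's formula fixes
    the actual amount and direction."""
    t = math.tan(math.radians(tilt_deg))
    return [[1.0, -t, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0]]

def apply(m, p):
    """Apply the linear part of the matrix to a 2D point."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y)

M = skew_matrix(1.0)              # one degree of barrier/lens tilt
corner = apply(M, (10.0, 5.0))    # a scene point, sheared before the views are rendered
```

Because the shear is linear, it can equally be applied to depth-camera data, as the abstract notes.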
Point crosstalk is a criterion for representing 3D image quality in glassless 3D displays, and motion parallax is partly coupled with point crosstalk when we consider the smoothness of the motion parallax. Therefore, we need to find the relationship between point crosstalk and motion parallax. Lowering point crosstalk is important for better 3D image quality, but more discrete motion parallax appears at lower point crosstalk at the optimal viewing distance (OVD). Therefore, another measure for representing the smoothness of motion parallax is necessary, and we analyze average crosstalk as a candidate parameter for representing 3D image quality in glassless 3D displays.
KEYWORDS: Computer simulations, Ray tracing, Image resolution, 3D displays, Eye, 3D image processing, Device simulation, Image quality, Cameras, Imaging systems
We studied a method for expanding the three-dimensional viewing freedom of an autostereoscopic 3D display with dynamic MVZ under eye tracking. The dynamic MVZ technique can provide three-dimensional images with minimized crosstalk when the observer moves at the optimal viewing distance (OVD). To extend this technology to movement of the observer in the depth direction, we provide a new pixel mapping method for the left- and right-eye images during depth-direction movement. When this pixel mapping is applied to a common autostereoscopic 3D display, the image seen from the observer position has a nonuniform brightness distribution with a constant period in the horizontal direction, depending on the depth-direction distance from the OVD. This makes it difficult to provide a good-quality three-dimensional image to an observer who deviates from the OVD. In this study, we simulated the brightness distribution formed by the proposed pixel mapping when the observer moves in the depth direction away from the OVD, and confirmed the characteristics with photos captured by two cameras placed at the observer position to simulate the viewer's two eyes, using a developed 3D display system. As a result, we found that in the developed system the observer can perceive 3D images of the same quality as at the OVD position even when moving away from it.
In an autostereoscopic display using laser-beam-scanning multiple projectors, accurate projector calibration is essential to alleviate optical distortions such as keystone distortion. However, calibrating hundreds of projectors with high accuracy takes too much time and effort. Moreover, there exists a limited range in which viewers can perceive correct depth with respect to the human visual system (HVS), even if ideal projector calibration were possible. After fine projector calibration, we explored its accuracy with a brute-force technique and analyzed the depth expression range (DER) at the given accuracy with respect to the HVS. We set five error conditions for projector calibration accuracy, derived the correlation between projector calibration error (PCE) and DER, and determined how the accuracy of projector calibration affects the DER. We found that the observer can perceive the depth of a 3D object without problems up to a certain projector calibration accuracy. From this result, we propose a perceptual threshold of acceptable projector calibration accuracy for overall system efficiency.
Existing methods for tracking three-dimensional (3-D) eye positions with a monocular color camera mostly rely on a generic 3-D face model and a certain face database. However, the performance of these methods is susceptible to the variations of head poses. For this reason, existing methods for estimating 3-D eye position from a single two-dimensional face image may yield erroneous results. To improve the accuracy of 3-D eye position trackers using a monocular camera, we present a compensation method as a postprocessing technique. We address the problem of determining an optimal registration function for fitting 3-D data consisting of the inaccurate estimates from the eye position tracker and their corresponding ground truths. To obtain the ground truths of 3-D eye positions, we propose two different systems by combining an optical motion capture system and checkerboards, which construct the form of the hand-eye and robot-world calibration. By solving a least-squares optimization problem, we can determine the optimal registration function in an affine form. Real experiments demonstrate that the proposed method can considerably improve the accuracy of 3-D eye position trackers using a single color camera.
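The optimal registration function described above is an affine map fitted by least squares between tracker estimates and ground-truth positions. A minimal sketch of that fitting step, under the assumption of noiseless synthetic correspondences (all names here are illustrative):

```python
import numpy as np

def fit_affine(est, gt):
    """Least-squares affine registration: find A (3x3) and b (3,) minimizing
    sum_i || A @ est_i + b - gt_i ||^2 over the correspondences."""
    est = np.asarray(est, dtype=float)              # (N, 3) tracker estimates
    gt = np.asarray(gt, dtype=float)                # (N, 3) ground-truth eye positions
    X = np.hstack([est, np.ones((len(est), 1))])    # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, gt, rcond=None)      # (4, 3) stacked solution
    return M[:3].T, M[3]                            # A, b

# Synthetic check: recover a known affine map from noiseless data
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, (20, 3))
A_true = np.array([[1.1, 0.02, 0.0], [0.0, 0.95, 0.1], [0.05, 0.0, 1.0]])
b_true = np.array([5.0, -3.0, 40.0])
A, b = fit_affine(pts, pts @ A_true.T + b_true)
```

In practice the ground truths come from the motion-capture/checkerboard rigs the abstract describes, and the fitted map is applied as a post-processing correction to new tracker outputs.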
KEYWORDS: Visualization, 3D displays, Image enhancement, Video, Glasses, 3D image enhancement, Optical engineering, 3D equipment, Eye, 3D image processing
Visual discomfort is a common problem in three-dimensional (3D) videos, and this issue is the subject of many current studies. Among the methods to overcome visual discomfort presented in current research, parallax adjustment methods provide little guidance in determining the condition for parallax control. We propose a parallax adjustment based on the effects of parallax distribution and cross talk on visual comfort, where the visual comfort level is used as the adjustment parameter, in parallax-barrier-type autostereoscopic 3D displays. We use the horizontal image shift method for parallax adjustment to enhance visual comfort. The speeded-up robust feature is used to estimate the parallax distribution of 3D sequences, and the required amount for parallax control is chosen based on the predefined effect of parallax distribution and cross talk on visual comfort. To evaluate the performance of the proposed method, we used commercial 3D equipment with various intrinsic cross-talk levels. Subjective tests were conducted at the fixed optimal viewing distance for each piece of equipment. The results show that comfortable videos were generated based on the proposed parallax adjustment method.
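The horizontal image shift method used above adjusts parallax by translating the two views in opposite directions, adding a uniform disparity offset. A minimal sketch with zero-filled borders; the split of the shift between the two views is an illustrative choice:

```python
import numpy as np

def horizontal_shift(left, right, shift_px):
    """Shift a stereo pair apart by shift_px pixels in total (uniform disparity
    offset); vacated columns are zero-filled. A sketch of the horizontal image
    shift method for parallax adjustment."""
    def shift(img, s):
        out = np.zeros_like(img)
        if s > 0:
            out[:, s:] = img[:, :-s]
        elif s < 0:
            out[:, :s] = img[:, -s:]
        else:
            out[:] = img
        return out
    # split the shift between the two views (half each, opposite directions)
    return shift(left, -(shift_px // 2)), shift(right, shift_px - shift_px // 2)

L0 = np.arange(16, dtype=float).reshape(4, 4)
R0 = L0.copy()
L1, R1 = horizontal_shift(L0, R0, 2)   # add 2 px of uncrossed disparity
```

The amount of shift would be chosen from the estimated parallax distribution and the crosstalk-dependent comfort model, as the abstract describes.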
Although autostereoscopic display is considered to be mainstream in the three-dimensional (3-D) display market for the near future, practical quality problems still exist due to various challenges such as the accommodation-vergence conflict and crosstalk. A number of studies have shown that these problems reduce the visual comfort and reliability of the perceived workload. We present two experiments for investigating the effect of parallax distribution, which affects the behavior of the accommodation and vergence responses and crosstalk on visual comfort in autostereoscopic display. We measured the subjective visual scores and perceived depth position for watching under various conditions that include foreground parallax, background parallax, and crosstalk levels. The results show that the viewers’ comfort is significantly influenced by parallax distribution that induces a suitable conflict between the accommodation and vergence responses of the human visual system. Moreover, we confirm that crosstalk changes significantly affect visual comfort in parallax barrier autostereoscopic display. Consequently, the results can be used as guidelines to produce or adjust the 3-D image in accordance with the characteristics of parallax barrier autostereoscopic display.
The moiré effect is an optical phenomenon that has a negative influence on image quality; as such, this effect should be avoided or minimized in displays, especially in autostereoscopic three-dimensional ones. The structure of multiview autostereoscopic displays typically includes two parallel layers with an integer ratio between the cell sizes. In order to minimize the moiré effect at finite distances, we developed a theory and a computer simulation tool that simulates the behavior of the visible moiré waves across a range of parameters (the displacement of an observer, the distance to the screen, and the like). Previously, we had made simulations for sinusoidal waves; however, this was not enough to simulate all real-life situations. Recently, the theory was improved, and non-sinusoidal gratings are now included as well; correspondingly, the simulation tool has been substantially updated. In the simulation, the parameters of the resulting moiré waves are measured semi-automatically. The advanced theory, accompanied by the renewed simulation tool, makes the minimization convenient. The tool runs in two modes, overview and detailed, and can be controlled interactively. Computer simulation and a physical experiment confirm the theory; the typical normalized RMS deviation is 3-5%.
In this paper, a novel measurement method for determining the optimum viewing distance (OVD) of a multi-view 3D display is proposed. Using this method, the OVD can be efficiently determined by analyzing ray-tracing results from at least one viewpoint image of some local areas of the multi-view 3D display, and the position error of each viewpoint image formed over the entire 3D display area can also be calculated as a function of the z-direction.
A new method is introduced to reduce crosstalk problems and the brightness variation in 3D images by means of the dynamic fusion of viewing zones (DFVZ) using weighting factors. The new method effectively generates a flat viewing zone at the center of the viewing zone. This new type of autostereoscopic 3D display gives less brightness variation in the 3D image when the observer moves.
3D displays are generally designed to show a 3D stereoscopic image to a viewer at the center position of the display. However, some interactive 3D applications, such as imaging demonstrations, need to interact with multiple viewers, each with their own stereoscopic image. In this case, a display panel on a table is more convenient for multiple viewers. In this paper, we introduce a table-top stereoscopic display that has the potential to support such interactive 3D technology. This display system enables two viewers to see different images simultaneously on the table-top display, and each viewer to see stereoscopic images on it. The display has a first optical sheet that lets the multiple viewers see their own images and a second optical sheet that lets them see stereoscopic images. We use a commercial LCD display, design the first optical sheet so that the two viewers see separate images, and design the second optical sheet so that each viewer sees a stereoscopic image. The viewing zone of our display system is designed so that viewers from children to adults can see the three-dimensional stereoscopic images well. We expect that our table-top 3D stereoscopic display system can be applied to interactive 3D display applications in the near future.
Generally, autostereoscopy imposes some constraints as conditions for fusible stereo, since the viewer must keep a proper viewing distance between the viewer and the autostereoscopic display. The previous methods for measuring the characteristics of autostereoscopic displays have a problem. We propose a moving-image-sensor method for measuring autostereoscopic displays. Using this method, the intensity distribution can be measured at the correct optimum viewing distance (OVD); in addition, the crosstalk around the OVD can be found.
An autostereoscopic 3D display provides binocular perception without eyeglasses but suffers from a reduced 3D effect and dizziness due to crosstalk. Crosstalk-related problems deteriorate the 3D effect, clearness, and reality of the 3D image. A novel method of reducing the crosstalk is designed and tested; the method is based on the fusion of viewing zones and real-time eye position. It is shown experimentally that the crosstalk is effectively reduced at any position around the optimal viewing distance.
Generally, glassless three-dimensional stereoscopic display systems should consider human factors. Human factors include crosstalk, motion parallax, display type, lighting, age, unknown aspects surrounding human-factors issues, and user experience. Among these, crosstalk is a very important human factor because it reduces the 3D effect and induces eye fatigue or dizziness. For this reason, we considered a method of reducing crosstalk in three-dimensional stereoscopic display systems. In this paper, we suggest a crosstalk reduction method using a lenticular lens. The optical rays from the projection optical system are converted into the viewing-zone shape by the convolution of two apertures. Under this condition, we can minimize and control the beam width through the optical properties of the lenticular lens (refractive index, pitch, thickness, radius of curvature) and of the projector (projection distance, optical features). In this process, a Gaussian-shaped distribution is converted into a rectangular one. Reducing the beam width reduces the crosstalk, and this was verified using a lenticular lens.
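The viewing-zone-as-convolution idea can be illustrated numerically: convolving two rectangular apertures yields a trapezoidal zone profile whose width and flatness follow the aperture widths. The aperture sizes below are arbitrary illustrative values, not the paper's parameters.

```python
import numpy as np

def rect(x, width):
    """Unit-height rectangular aperture of the given width, centred at 0."""
    return (np.abs(x) <= width / 2.0).astype(float)

# Viewing-zone cross-section as the convolution of two apertures
# (e.g. lenticular lens aperture * projector beam aperture) -- illustrative only.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
a1 = rect(x, 2.0)     # wider aperture (e.g. lens pitch)
a2 = rect(x, 0.5)     # narrower aperture (e.g. reduced beam width)
zone = np.convolve(a1, a2, mode='same') * dx   # trapezoidal zone profile

# With a much narrower second aperture the zone stays nearly rectangular;
# narrowing the beam therefore sharpens the zone edges and lowers crosstalk.
fwhm_samples = int((zone >= zone.max() / 2.0).sum())
```

Narrowing `a2` toward a delta function drives the zone toward the ideal rectangular shape mentioned in the abstract.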
A new multifocus three-dimensional display that gives a full-parallax monocular depth cue and omnidirectional focus is developed with the fewest parallax images. The key factor of this display system is a slanted, rather than horizontal, array of light-emitting-diode light sources. In this system, defocus effects are experimentally achieved, and the monocular focus effect is tested with four parallax images and even with two. The full-parallax multifocus three-dimensional display is more applicable to monocular or binocular augmented-reality three-dimensional displays when modified into a see-through type.
In this paper, we suggest a way of constructing an object-space transformation, i.e., a distorted object space, to make the perceived scaled depth match the natural depth for actual image content. A hybrid camera system is adopted as the tool for acquiring multi-view actual images. It consists of two cameras: a depth camera used to capture the actual object's depth information, and a common camera used to map color information onto the depth image. In previous work, we showed the feasibility of the concept that transforming the object space to obtain a natural depth sense, based on a CG object space, is a good approach. Advancing that work, we show that multiple views with correctly scaled object depth, free of depth distortion, based on actual images and adapted to any display size, can also be perceived. Both systematic and observational stereoscopic constraints are considered in the distorted object space to make a scaled depth image in the reconstructed image space.
KEYWORDS: 3D displays, Eye, Light emitting diodes, Digital micromirror devices, Light sources, LED displays, 3D image processing, Visualization, Cameras, Multiplexing
Three types of multi-focus (MF) 3D display are developed, and their capability for the monocular depth cue is tested. Multi-focus means the ability to provide the monocular depth cue at various depth levels. By achieving the multi-focus function, we developed a 3D display system for each eye that can satisfy accommodation to displayed virtual objects within a defined depth. The first MF 3D display was developed via a laser-scanning method, the second uses an LED array as a light source, and the third uses a slanted LED array for a full-parallax monocular depth cue. The full-parallax MF 3D display gives an omnidirectional focus effect. The proposed 3D display systems have the potential to solve the eye fatigue problem that comes from the mismatch between the accommodation of each eye and the convergence of the two eyes. Monocular accommodation is tested, and proof of the satisfaction of full-parallax accommodation is given as a result of the proposed full-parallax MF 3D display system. We achieved a result showing that omnidirectional focus adjustment is possible via parallax images.
We developed a head-mounted display (HMD)-type multifocus display system using a laser-scanning method to provide an accommodation effect for viewers. This accomplishment indicates that providing a monocular depth cue is possible through this multifocus system. In the system, the optical path is changed by a scanning action. To provide an accurate accommodation effect for the viewer, the multifocus display system is designed and manufactured in accordance with the geometric analysis of the system's scanning action. Using a video camera as a substitute for the viewer, correct focus adjustment without the scanning action problem is demonstrated. By analyzing the scanning action and experimental results, we are able to illustrate the formation of a viewpoint in an HMD-type multifocus display system using a laser-scanning method. In addition, we demonstrate that the accommodation effect could be provided independent of the viewing condition of the viewer.
Mobile devices lack the space to configure cameras for either an ortho- or hyperstereoscopic condition with a small display. Therefore, mobile stereoscopy cannot provide presence with a good depth sense to an observer. To solve this problem, we focus on a depth-sense control method with a switchable stereo-camera alignment. In the converging type, the fusible stereo area becomes wider than in the parallel type when the same focal length is used in both. This makes the stereo fusible area formed by the converging type equal to that of the parallel type with a shorter focal length; therefore, there is a kind of zoom-out effect in the reconstructed depth sense. In the diverging type, the fusible stereo area becomes narrower than in the parallel type. In the same way, the diverging type shows a characteristic similar to an increased focal length in the parallel type; therefore, a zoom-in effect exists. The stereoscopic zoom-in depth effect changes rapidly with increasing angle, whereas the zoom-out effect is relatively retarded.
In this paper, we suggest a new way to overcome the stereoscopic depth distortion common in stereoscopy based on computer graphics (CG). The idea is to transform the object space into a distorted space such that the perceived depth is correct, as if we were seeing a scaled object volume adjusted to the user's stereoscopic circumstances. All parameters related to the distortion, such as the focal length, inter-camera distance, inner angle between the camera axes, display size, viewing distance, and eye distance, can be altered by the amount of inverse distortion in the transformed object space through the linear relationship between the reconstructed image space and the object space. The depth distortion is thus removed after the image reconstruction process with a distorted object space. We prepared a stereo image having correctly scaled depths from -200 mm to +200 mm at 100 mm intervals about the display plane in a standard stereoscopic setting and showed it to five subjects. All subjects recognized and indicated the designed depths.
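The linear image-space/object-space relationship rests on the textbook similar-triangles model of perceived depth: a screen disparity d, eye separation e, and viewing distance v place the fused point at z = v·e/(e - d) from the viewer. A minimal sketch (parameter values are illustrative, not those of the experiment):

```python
def perceived_depth(e_mm, v_mm, d_mm):
    """Distance (mm) of the fused point from the viewer, by similar triangles:
    z = v * e / (e - d), with uncrossed (positive) screen disparity d.
    e_mm: interocular distance, v_mm: viewing distance. Textbook relation,
    shown here to illustrate the image-space/object-space mapping."""
    return v_mm * e_mm / (e_mm - d_mm)

e, v = 65.0, 600.0                         # illustrative viewer geometry
on_screen = perceived_depth(e, v, 0.0)     # zero disparity -> on the display plane
behind = perceived_depth(e, v, 10.0)       # uncrossed -> behind the screen
front = perceived_depth(e, v, -10.0)       # crossed -> in front of the screen
```

Inverting this relation per point is what produces the "inverse distortion" applied to the object space before rendering.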
Multi-focus 3D display systems are developed, and the possibility of satisfying eye accommodation is tested. Multi-focus means the ability to provide the monocular depth cue at various depth levels. By achieving the multi-focus function, we developed 3D display systems for one eye and for both eyes, which can satisfy accommodation to displayed virtual objects within a defined depth. The monocular accommodation and the binocular-convergence 3D effect of the system are tested; proof of the satisfaction of accommodation and experimental results of binocular 3D fusion are given using the proposed 3D display systems.
Current multiview 3-dimensional imaging systems are mostly based on a multiview image set. Depending on the method of presenting and arranging the image set on a display panel or a screen, the systems are basically classified into contact and projection types. The contact type is further classified into MV (Multiview), IP (Integral Photography), Multiple Image, FLA (Focused Light Array), and Tracking. The depth cues provided by these types are binocular and motion parallax. The differences between the methods within a given type can only be identified by the composition of the images projected to the viewer's eyes at the viewing regions.
A two-dimensional quality function that counts both the number of mixed view images and the disparity between images is derived based on the one-dimensional quality function that counts the number of mixed view images in multiview 3-dimensional imaging systems. This function predicts the quality of images with reasonable accuracy, as proved experimentally.
Recently, the mobile phone has become one of life's necessities, and the phone camera has become a representative means of self-expression and entertainment. A mobile phone with a stereo camera is an even more powerful tool for these purposes, as it can present three-dimensional images to an observer, unlike existing phones. In this paper, we investigated the constraints for obtaining optimized stereovision when images are taken with a stereo-camera mobile phone, constraints that make good stereo fusion possible. Theory and experiment addressed the permitted range of disparity, which was extracted from stereograms on the mobile display. Consequently, the permitted horizontal and vertical disparities were up to +3.75 mm and +2.59 mm, respectively, for a mobile phone with a 2.8-inch QVGA display, F/2.8 optics, a 54-degree field of view, and a 220 mm viewing distance. To examine the suitability of these values, an experiment was performed with ten subjects.
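A permitted on-screen disparity follows directly from the viewing geometry: a disparity d subtends an angle θ at viewing distance v, so d = v·tan(θ). The comfort angle below is an assumed round value for illustration, not the threshold measured in the paper; it happens to land near the reported horizontal figure.

```python
import math

def permitted_disparity_mm(viewing_mm, limit_deg):
    """Screen disparity (mm) subtending limit_deg at the viewer: d = v * tan(theta).
    limit_deg is an assumed comfort threshold, not a value from the paper."""
    return viewing_mm * math.tan(math.radians(limit_deg))

# ~1 degree at the reported 220 mm viewing distance gives ~3.8 mm,
# close to the measured +3.75 mm horizontal limit.
d_h = permitted_disparity_mm(220.0, 1.0)
```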
Eye fatigue or strain in 3D display environments is a significant problem for 3D display commercialization. 3D display systems such as eyeglasses-type stereoscopic, autostereoscopic multiview, super multi-view (SMV), and multi-focus (MF) displays are examined with detailed geometrical-optics calculations of the satisfaction level of monocular accommodation. A lens with a fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experiment results consistently show a relatively high satisfaction level of monocular accommodation under the MF display condition. Additionally, the possibility of monocular depth perception, i.e., a 3D effect with a monocular MF display, is discussed.
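The geometrical-optics defocus calculation can be sketched with the standard small-angle approximation: the angular blur is roughly the pupil diameter times the defocus in diopters. This is a generic thin-lens sketch under assumed parameter values, not the paper's exact model.

```python
def defocus_blur_mrad(pupil_mm, focus_m, object_m):
    """Angular diameter (milliradians) of the retinal blur circle when the eye
    focuses at focus_m but views an object at object_m. Small-angle thin-lens
    approximation: blur ~ pupil * |1/object - 1/focus| (pupil in m, defocus in D)."""
    defocus_diopters = abs(1.0 / object_m - 1.0 / focus_m)
    return (pupil_mm * 1e-3) * defocus_diopters * 1e3

sharp = defocus_blur_mrad(4.0, 1.0, 1.0)   # in focus: zero blur
blur = defocus_blur_mrad(4.0, 1.0, 0.5)    # 1 D of defocus with a 4 mm pupil
```

A display that can present imagery at several focal depths (as in the MF condition) keeps this defocus term small for virtual objects near any of its depth planes.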
KEYWORDS: Visualization, Motion measurement, Motion analysis, 3D image processing, RGB color model, 3D displays, 3D modeling, Personal digital assistants, Computing systems, Image processing
A networked viewpoint controller exploits a spatiotemporal attention module for smart user interaction with future 3D TV. In this paper, a new approach to locating a focus of attention for the generation of candidate viewpoints is suggested. The suggested method combines spatial and temporal features from a series of images to provide viewers with several viewpoints.
A mobile phone (hand phone) is designed to display stereo images taken with a camera attached to it. Software for processing a stereo image pair to be displayed on the phone's display panel is developed, and a detachable viewing-zone-forming optic is installed for stereoscopic image generation without moirés. Since the phone operates in the palm of its owner's hand, the special care needed in photographing the image pair is described.
An HMD-type multi-focus 3D display system is developed, and the satisfaction of eye accommodation is tested experimentally. Four LEDs (light-emitting diodes) and a DMD are used to generate four parallax images for a single eye, and no mechanical parts are included in the system. Multi-focus means the ability to provide the monocular depth cue at various depth levels. By achieving the multi-focus function, we developed a 3D display system for one eye that can satisfy accommodation to displayed virtual objects within defined depths. Therefore, the proposed 3D display system has the potential to solve the problem that a 3-dimensional display using only binocular disparity can induce eye fatigue because of the mismatch between the accommodation of each eye and the convergence of the two eyes. The accommodation of one eye is tested, and proof of the satisfaction of accommodation is given using the proposed 3D display system. We achieved a result showing that focus adjustment is possible at four step depths in sequence within a 2 m depth for one eye.
Several studies have been reported on gaze-tracking techniques using a monocular or stereo camera. The most widely used gaze-estimation techniques are based on PCCR (pupil center and corneal reflection). These techniques track gaze on 2D screens or images. In this paper, we address gaze-based 3D interaction with stereo images in 3D virtual space. To the best of our knowledge, this paper is the first to address 3D gaze interaction techniques for a 3D display system.
Our research goal is the estimation of both gaze direction and gaze depth. Until now, most research has focused only on gaze direction for application to 2D display systems. It should be noted that both gaze direction and gaze depth must be estimated for gaze-based interaction in 3D virtual space.
In this paper, we address gaze-based 3D interaction techniques with a glassless stereo display. The estimation of gaze direction and gaze depth from both eyes is an important new research topic for gaze-based 3D interaction. We present our approach for estimating gaze direction and gaze depth and show experimental results.
KEYWORDS: Visualization, 3D image processing, 3D displays, Personal digital assistants, Image processing, 3D modeling, Visual process modeling, Cameras, Internet, 3D acquisition
There has been much research on multi-view image processing for free-viewpoint image generation. However, most previous studies focused on generating natural-looking free-viewpoint images while considering simplicity of computation and real-time processing. One of the merits of free-viewpoint TV is the ability to generate images that correspond to the user's viewpoint. In this paper, we present a smart remote controller for enjoying a future 3D TV. The proposed controller is smart in that it provides users with candidate viewpoints that are automatically computed based on the theory of human visual attention. The candidate viewpoints are generated by analyzing an input image with a human visual attention model.
An LC panel is layered on an LCD panel with the same pixel structure, with each pixel matched, to display stereoscopic images. The images for the LCD and LC panels are prepared by taking the square root of the sum of squared intensities and the intensity ratio, respectively, of corresponding pixels from the stereoscopic image pair. The layered LCD panel makes it possible to display each view of the stereoscopic image pair at the full resolution of the panel, yielding high-quality stereoscopic images.
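The per-pixel decomposition stated above can be sketched directly: the rear LCD pixel gets sqrt(L² + R²) and the front LC pixel encodes the left/right intensity ratio. The ratio definition and the zero-intensity fallback below are assumptions for illustration; the paper defines the exact encoding.

```python
import math

def layer_images(left, right):
    """Per-pixel layer decomposition of a stereo pair (greyscale, nested lists).
    Rear LCD: sqrt(L^2 + R^2); front LC: the intensity ratio L / (L + R)
    (assumed form, with a neutral 0.5 fallback where both inputs are zero)."""
    lcd, lc = [], []
    for lrow, rrow in zip(left, right):
        lcd.append([math.sqrt(l * l + r * r) for l, r in zip(lrow, rrow)])
        lc.append([0.5 if (l == 0 and r == 0) else l / (l + r)
                   for l, r in zip(lrow, rrow)])
    return lcd, lc

L_img = [[0.0, 0.3], [0.6, 1.0]]   # illustrative left-view intensities
R_img = [[0.0, 0.4], [0.8, 0.0]]   # illustrative right-view intensities
lcd, lc = layer_images(L_img, R_img)
```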
KEYWORDS: Visualization, Image display, 3D image processing, 3D displays, Visual process modeling, 3D modeling, RGB color model, Image analysis, Systems modeling, Visual analytics
The human visual attention system has a remarkable ability to interpret complex scenes with ease and simplicity by selecting or focusing on a small region of the visual field without scanning the whole image. In this paper, a novel selective visual attention model using a 3D image display system for a stereo pair of images is proposed. It is based on the feature integration theory and locates the ROI (region of interest) or FOA (focus of attention). The disparity map obtained from a stereo pair of images is exploited as one of the spatial visual features to form a set of topographic feature maps in our approach. Though the true human cognitive mechanism of analysis and integration might differ from our assumption, the proposed attention system matches well with the results found by human observers.
KEYWORDS: 3D displays, Eye, 3D image processing, Digital micromirror devices, Virtual point source, Video, Displays, Scanners, Cameras, Head-mounted displays
A multi-focus 3D display system is developed, and its performance in supporting several parallax images within the eye pupil diameter is tested. Multi-focus means the ability to provide the monocular depth cue at various depth levels. By achieving the multi-focus function, we developed a 3D display system for one eye that can satisfy accommodation to displayed virtual objects within a defined depth. Therefore, the proposed 3D display system has the potential to solve the problem that a 3-dimensional display using only binocular disparity can induce eye fatigue because of the mismatch between the accommodation of each eye and the convergence of the two eyes. The accommodation of one eye is tested, and proof of the satisfaction of accommodation is given using the proposed 3D display system. We achieved a result showing that focus adjustment is possible at five step depths in sequence within a 2 m depth for one eye. Additionally, the change in blurring depending on the focusing depth is examined with photos and video captured by a camera and with several subjects.
Methods of presenting multiview images, such as integral photography (IP), the multiview, multiple imaging, and focused light array methods, are reviewed and their image-forming principles compared. Each method has its own way of presenting multiview images, but the images projected to the viewer's eyes are mostly synthesized from small parts of each view image among the different view images presented to the viewers. This is a property common to all these methods.
A 3-dimensional image display system using only binocular disparity can induce eye fatigue because of the mismatch between the accommodation of each eye and the convergence of the two eyes. A new 3-dimensional display system for a single observer that can solve the eye fatigue caused by this mismatch is introduced in this paper. Experimental results from the system provide proof that the accommodation of one eye can be satisfied.
One of the main problems in the practical application of digital holography is that the unit cell size of a CCD (charge-coupled device) is too large. As a result, the size of object whose interference pattern can be recorded by the CCD is very limited. In the proposed setup, the angle of the incoming laser ray is reduced by the ratio of the focal lengths of two confocal lenses, which lowers the spatial frequency of the interference pattern. Interference patterns whose spatial frequency exceeds the limit set by the CCD unit cell size can therefore be recorded. The confocal lens setup offers a further merit: at numerical reconstruction, the area of the 0th-order diffraction light is reduced to the square of the focal-length ratio. As a result, numerical reconstruction of a larger object than in the CCD-only case is computed by applying an FFT (fast Fourier transform) to the integral derived from the Huygens-Fresnel principle. The required diameters of the two confocal lenses and the position of the CCD camera are calculated for recording the interference pattern of the larger object.
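The sampling argument above can be sketched numerically. This is not the paper's calculation; the function name and all parameter values (532 nm laser, 4.65 um pixel pitch, a 4:1 relay) are illustrative assumptions. The pixel pitch limits the recordable fringe frequency to its Nyquist rate, which caps the beam angle at the sensor; an afocal relay that scales sensor-side angles down by f1/f2 enlarges the admissible object-side angle by the same factor:

```python
import math

# Sketch (illustrative parameters): how a confocal relay relaxes the CCD
# sampling limit in digital holography.  A pixel pitch dx limits the fringe
# frequency to 1/(2*dx), i.e. the beam angle at the sensor to
# theta_ccd = asin(lambda / (2*dx)).  A relay with object-side focal length
# f1 and sensor-side focal length f2 maps object angle theta to
# (f1/f2)*theta at the sensor, so choosing f2 > f1 enlarges the
# recordable object angle by f2/f1.
def max_object_angle(wavelength, pixel_pitch, f1, f2):
    theta_ccd = math.asin(wavelength / (2.0 * pixel_pitch))  # Nyquist angle at CCD
    return theta_ccd * (f2 / f1)                             # relayed object angle

# Hypothetical example: 532 nm laser, 4.65 um pixels, 50 mm / 200 mm relay.
theta = max_object_angle(532e-9, 4.65e-6, f1=50e-3, f2=200e-3)
```

With these assumed numbers the admissible object angle grows fourfold, and the abstract's point about the 0th-order area shrinking with the square of the ratio follows because the relay demagnifies angles in both lateral directions.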
KEYWORDS: Image filtering, 3D image reconstruction, Holograms, Charge-coupled devices, Diffraction, Holography, Confocal microscopy, Digital holography, 3D image processing, 3D displays
In this study, we newly attempt to combine the digital holography method, which captures holographic fringe data with a CCD, with a pulsed electro-holographic system for real-time display. Owing to the one-dimensional character of the Bragg-regime AOSLM (fringe data propagating parallel to the laser incidence plane in the crystal cannot, by momentum conservation, diffract the incident reference beam vertically), vertical diffraction from the AOSLM is impossible and no vertical parallax is generated. Therefore, a confocal lens system with a horizontal slit is introduced to obtain a proper interference pattern. When the pulsed-laser electro-holographic display is driven by fringe data of reduced bandwidth, the image clarity and quality are improved compared with data recorded without the confocal slit system: the fringe mainly carries horizontal parallax, and the vertical bandwidth is reduced so that one line of the object is projected onto only one line of CCD pixels at the same height.
The application of a pulsed laser in real-time holography based on acousto-optic (AO) cells makes it possible to eliminate mechanical horizontal scanning. An improvement of a recently reported real-time pulsed holographic display is presented. The Ar-ion laser previously used as the coherent light source is replaced with a Q-switched Nd:YVO4 laser with SHG, externally triggered at 46 kHz. A commercial dual-monitor PC graphics card was adapted to generate six parallel analog outputs that feed six AO cells with gray-scale hologram data. The upgrade improves the brightness and clarity of reconstructed images and reduces the system's dimensions and power consumption. 2D and 3D images reconstructed from precomputed HPO holograms are presented.
Current 3-dimensional display systems using only binocular disparity suffer from the conflict between eye convergence and accommodation. A 3-dimensional display system for one observer that can solve this conflict is introduced in this paper. Proof that the accommodation of one eye can be satisfied is given as an experimental result from this system. The system uses 2-dimensional images to generate a 3-dimensional image for one observer; 127 2-dimensional images form one 3-dimensional image. Horizontal motion parallax is possible within an area slightly larger than the distance between the observer's eyes. The system sits close to the observer and generates a horizontal-parallax-only 3-dimensional image. In this case, a vertical diffuser could not be used because of image blurring in the vertical direction, so a vertical cylindrical lens was used instead. This solved the blurring, but the depth-direction freedom of the observer's eye position was restricted to a narrow range because of the cylindrical lens. The system can offer a large 3-dimensional image, just as an HMD (head-mounted display) can display a large 2-dimensional image.
A holographic video system displaying a computer-generated Fourier transform hologram is described. The horizontal-parallax-only (HPO) Fresnel CGH method is well known; an HPO computer-generated Fourier transform hologram is basically a 2-dimensional hologram. To extend it, an HPO computer-generated Fourier transform hologram algorithm based on ray tracing is introduced. This algorithm can reduce both the CGH data amount and the CGH calculation time. Experimental results showing this reduction are included, and one additional advantage over Fresnel CGH is described.
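To make the HPO idea concrete, here is a minimal sketch of a horizontal-parallax-only Fourier CGH. This is not the paper's ray-tracing algorithm; the function name, point-source model, and all parameter values are assumptions for illustration. Each object point contributes a 1-D fringe along the hologram's horizontal axis (linear phase for lateral offset, quadratic phase for depth), and the same line would simply be repeated vertically, which is exactly what makes the hologram HPO and shrinks the data amount:

```python
import numpy as np

# Sketch (hypothetical parameters): 1-D Fourier-type CGH line for a small
# point cloud.  points = [(x_m, z_m, amplitude), ...] with x the lateral
# offset and z the depth offset from the Fourier plane.
def hpo_fourier_cgh(points, n_samples=1024, wavelength=633e-9,
                    focal=0.2, pitch=10e-6):
    u = (np.arange(n_samples) - n_samples / 2) * pitch   # hologram coordinates (m)
    field = np.zeros(n_samples, dtype=complex)
    for x, z, amp in points:
        # Fourier hologram: lateral offset -> linear phase,
        # depth offset -> quadratic (defocus) phase.
        phase = 2 * np.pi / wavelength * (x * u / focal
                                          + z * u**2 / (2 * focal**2))
        field += amp * np.exp(1j * phase)
    # Encode as a real, non-negative amplitude hologram via a unit reference wave.
    hologram = np.abs(1.0 + field) ** 2
    return hologram / hologram.max()

h = hpo_fourier_cgh([(1e-3, 0.0, 1.0), (-2e-3, 5e-3, 0.5)])
```

Because only one line per hologram row must be computed and stored, the data amount and calculation time scale with the horizontal resolution alone, which is the reduction the abstract reports.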
The use of a pulsed laser as the light source of a holographic video system makes it possible to eliminate the polygon mirror or scanners in the system. At the same time, a single long-aperture AOM is replaced by six small-aperture AOMs aligned in a line; the effective aperture length of the six-AOM combination is six times that of an individual AOM. The CGH data is divided into six equal parts and fed into the corresponding AOMs simultaneously. This new AOM structure permits the use of a personal computer for data feeding.