This PDF file contains the front matter associated with SPIE Proceedings Volume 6778, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
The three-dimensional (3D) to two-dimensional (2D) convertibility of display hardware described in this paper is an essential factor in the commercialization of 3D displays. Liquid crystal (LC), a suitable material because of its optical anisotropy and electrical properties, is widely used in various 3D/2D convertible display techniques. There are three kinds of autostereoscopic 3D/2D convertible techniques: the LC lenticular lens, the LC parallax barrier, and integral imaging. These techniques continue to be developed and improved. In this keynote paper we summarize their principles and current status.
A two-dimensional quality function that accounts for both the number of mixed view images and the disparity between images is derived from the one-dimensional quality function, which counts only the number of mixed view images in multiview three-dimensional imaging systems. This function predicts image quality with reasonable accuracy, as verified experimentally.
The integral method enables observers to see 3D images as if they were real objects. It requires extremely high resolution at both the capture and display stages. We present an experimental 3D television system based on the integral method using an extremely high-resolution video system. The video system has 4,000 scanning lines and uses the diagonal offset method for two green channels. The lens array contains 140 (vertical) × 182 (horizontal) elemental lenses. The viewing zone angle is wider than 20 degrees in practice. This television system can capture 3D objects and provides full-color, full-parallax 3D images in real time.
Integral imaging provides three-dimensional (3D) images. This technique works perfectly with incoherent light and requires neither special glasses nor stabilization techniques. Here we present relay systems for both the acquisition and the display of 3D images. Some other important challenges are also revisited.
A three-dimensional (3D) interface system based on digital holography is presented. Developing such a 3D interface requires a 3D display system, a recording system for 3D objects, an information processing system for 3D manipulation, and a 3D measurement system. In the system, the complex amplitude distribution of 3D objects is recorded as a digital hologram. In the reconstruction, either the complex amplitude distribution of the 3D objects or phase-only information is used; optical reconstruction is also available. Manipulation of a 3D object can be implemented by processing the complex amplitude of the 3D objects in the hologram plane. We present numerical and experimental results.
A novel method is employed to eliminate the twin images in an in-line digital holographic microscope. The microscope resolves the overlap of the real and imaginary (conjugate) images and eliminates one of them by zero padding, while the DC term is removed by an averaging method. The entire process requires only one digital hologram.
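As a rough illustration of the pre-processing steps named above, the sketch below (Python/NumPy, assuming the single recorded hologram is available as a 2-D intensity array) removes the DC term by subtracting the average and zero-pads the hologram before numerical propagation; the pad factor and the exact averaging scheme are illustrative assumptions, not the authors' stated parameters.

```python
import numpy as np

def suppress_dc_and_pad(hologram, pad_factor=2):
    """Pre-process a single in-line hologram: remove the DC term by
    subtracting the average intensity, then zero-pad the result so the
    propagated field has room to separate from its conjugate."""
    h = hologram.astype(np.float64)
    h -= h.mean()                              # averaging method: DC-term removal
    ny, nx = h.shape
    padded = np.zeros((pad_factor * ny, pad_factor * nx))
    padded[:ny, :nx] = h                       # zero padding
    return padded
```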
We present an overview of an optical imaging system for 3D visualization and recognition of micro/nano biological organisms. For 3D sensing of a biological specimen, the diffraction pattern of the specimen is recorded on a charge-coupled device (CCD) image sensor. The recorded hologram is then transferred to a computer, where 3D images of the specimen at different depths along the longitudinal direction are numerically reconstructed using the inverse Fresnel transformation. For 3D recognition and identification of micro/nano biological organisms, image segmentation algorithms identify regions of interest for further processing, and statistical classifiers based on a maximum likelihood estimator classify the detected specimen into one of the previously trained classes. Experiments show that the proposed system is useful for 3D sensing, recognition, and classification of different biological specimens.
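For readers unfamiliar with the numerical step, a minimal NumPy sketch of a single-FFT Fresnel reconstruction is given below; the pixel pitch `dx`, wavelength, and reconstruction depth `z` are assumed parameters, and constant amplitude factors are omitted.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, dx, z):
    """Discrete (single-FFT) Fresnel transform of a recorded hologram,
    returning the intensity of the reconstructed field at depth z.
    hologram : 2-D array with pixel pitch dx [m]; wavelength, z in metres."""
    k = 2.0 * np.pi / wavelength
    ny, nx = hologram.shape
    y, x = np.indices((ny, nx))
    x = (x - nx / 2) * dx
    y = (y - ny / 2) * dx
    chirp = np.exp(1j * k * (x**2 + y**2) / (2.0 * z))   # quadratic phase factor
    field = np.fft.fftshift(np.fft.fft2(hologram * chirp))
    return np.abs(field) ** 2
```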
Resolution in digital holography microscopy can be improved by enlarging the hologram aperture. We review different techniques for resolution enhancement in digital holography, and present a system for reconstructing single-exposure on-line (SEOL) digital holograms with improved resolution using a synthetic aperture. In our method, several recordings are made in order to compose a synthetic aperture, shifting the camera within the hologram plane. After processing the synthetic hologram, an inverse Fresnel transformation provides an enhanced-resolution reconstruction. The method employs a simple set-up with no microscope objective.
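A minimal sketch of the aperture-synthesis bookkeeping, assuming the camera shifts are known in whole pixels and every recording has the same tile size; overlapping regions are simply averaged here, which is an assumption rather than the paper's exact stitching rule. The enlarged hologram would then be fed to the same inverse Fresnel reconstruction.

```python
import numpy as np

def build_synthetic_aperture(recordings, shifts_px):
    """Place each shifted recording at its (row, col) pixel offset and average
    any overlap, yielding one synthetic hologram larger than a single sensor."""
    tile_h, tile_w = recordings[0].shape
    rows = max(r for r, _ in shifts_px) + tile_h
    cols = max(c for _, c in shifts_px) + tile_w
    synth = np.zeros((rows, cols))
    count = np.zeros((rows, cols))
    for rec, (r, c) in zip(recordings, shifts_px):
        synth[r:r + tile_h, c:c + tile_w] += rec
        count[r:r + tile_h, c:c + tile_w] += 1
    return synth / np.maximum(count, 1)
```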
Eye fatigue or strain in a 3D display environment is a significant obstacle to 3D display commercialization. 3D display systems such as eyeglasses-type stereoscopic, autostereoscopic multiview, Super Multi-View (SMV), and Multi-Focus (MF) displays are analyzed in detail with respect to the level of satisfaction of monocular accommodation, using geometrical-optics calculations. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experimental results consistently show a relatively high level of satisfaction of monocular accommodation under the MF display condition. Additionally, the possibility of monocular depth perception (a 3D effect) with a monocular MF display is discussed.
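The kind of geometrical-optics estimate involved can be illustrated with the standard blur-circle approximation below (a sketch, not the authors' exact model): defocus in diopters is the difference of the reciprocal distances, and the retinal blur-circle diameter scales with the pupil diameter and the eye's image distance.

```python
def blur_circle_diameter_mm(pupil_mm, focus_dist_m, stimulus_dist_m,
                            image_dist_mm=17.0):
    """First-order blur-circle diameter on the retina when the eye is
    accommodated at focus_dist_m but the displayed point lies at
    stimulus_dist_m. image_dist_mm approximates the eye's focal length."""
    defocus_diopters = abs(1.0 / focus_dist_m - 1.0 / stimulus_dist_m)
    return pupil_mm * defocus_diopters * (image_dist_mm / 1000.0)

# example: 4 mm pupil, eye focused at 0.5 m, stimulus at 1 m -> ~0.068 mm blur
print(blur_circle_diameter_mm(4.0, 0.5, 1.0))
```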
In this paper, we introduce a new Ray-Space acquisition system that we developed. The Ray-Space method records the position and direction of rays transmitted through space as ray data. Composing arbitrary-viewpoint images with the Ray-Space method enables the generation of realistic arbitrary-viewpoint pictures. However, a dense Ray-Space must be acquired to apply the method. The conventional method of acquiring ray data uses a camera array; it can capture dynamic scenes, but acquiring a dense Ray-Space with it requires interpolation. Another common method uses a rotating stage; it captures images without interpolation, but only static scenes can be captured. Therefore, we developed a new Ray-Space acquisition system that uses two parabolic mirrors. Incident rays parallel to the axis of a parabolic mirror converge at its focus. Hence, rays emerging from an object placed at the focus of the lower parabolic mirror converge at the focus of the upper parabolic mirror, where a real image of the object is formed, and a rotating tilted mirror scans the rays at that focus. Finally, the image from the tilted mirror is captured by a camera. Using this system, we were able to acquire an all-around image of an object.
The advantage of a lenticular-type three-dimensional (3D) display is its simple structure, consisting of a flat-panel display and a lenticular sheet. The disadvantages are the limited viewing angle and the existence of flipped 3D images. In this study, we propose a technique that uses a curved screen display and a curved lenticular sheet in order to enlarge the horizontal viewing angle. A mask plate is placed in front of the curved screen in order to eliminate flipped images. A lenticular sheet with a lens pitch of 30 lpi was curved with a radius of 300 mm. The image behind the lenticular sheet was printed on paper at 2,400 dpi using a DDCP printer instead of using a curved screen display. The viewing angle was enlarged to 106°. Rays were emitted into 80 different horizontal directions with an angular pitch of 0.65°. The mask plate completely eliminated flipped 3D images.
We analyzed the resolution characteristics of a lenticular-sheet 3D display system. The measured samples are one-dimensional integral imaging (1D-II) display systems with 9-18 parallaxes using slanted or vertical lenticular sheets. The measured contrast ratio curves of various sinusoidal patterns as functions of depth are in good agreement with the theoretical resolution limit for both the vertical and the slanted lenticular-sheet types. The 1D-II display systems with a parallel beam configuration show a spatial distribution of resolution in the horizontal direction corresponding to parallax crosstalk. If the parallax crosstalk is not designed properly, this distribution is observed as a moiré pattern and degrades 3D image quality. When the gap between the lenticular sheet and the elemental image plane changes in the depth direction, the apparent resolution curve shifts in the same direction: if the gap is large, objects displayed at the near side have higher resolution, and if the gap is small, objects displayed at the far side have higher resolution. This phenomenon is also explained by an effect of the parallax crosstalk caused by defocusing.
View synthesis is essential for FTV (Free viewpoint TV) systems. In this paper, we propose a multi-step view synthesis algorithm to efficiently reconstruct an arbitrary view from a limited number of known views of a 3D scene. We describe an efficient image rectification procedure that guarantees the interpolation process produces valid views; because it transforms only one image, this rectification method can be extended to multi-view images. Then, to generate high-quality intermediate views, we use an efficient dense disparity estimation algorithm with occlusion handling. The main concept of the algorithm is region-dividing bidirectional pixel matching. The estimated disparity vectors are used to synthesize intermediate views of stereo images with occlusion handling. Experimental results show that the performance is superior to that of other approaches.
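The paper's region-dividing bidirectional pixel matching is not detailed here, but the idea of bidirectional matching with occlusion handling can be sketched generically: estimate disparities in both directions and mark pixels whose forward and backward estimates disagree as occluded. The SAD block-matching cost, window size, and disparity range below are placeholder assumptions.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=32, block=7):
    """Brute-force SAD block matching from the left to the right image."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(ref - right[y - half:y + half + 1,
                                        x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def occlusion_mask(disp_left, disp_right, tol=1):
    """Bidirectional (left-right) consistency check: pixels whose forward and
    backward disparities disagree are marked as occluded."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    xr = np.clip(xs - disp_left, 0, w - 1)
    back = disp_right[np.arange(h)[:, None], xr]
    return np.abs(disp_left - back) > tol
```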
This paper presents a mobile phone-based service for a 3D virtual space. First, it introduces a responsive 3D virtual space that includes a 3D indoor virtual environment and models of real and virtual sensors in the indoor space. In the responsive 3D virtual space, the status of the 3D virtual environment changes dynamically according to the sensor status. Second, an interactive service on a mobile phone is introduced for browsing the responsive 3D virtual space. The main feature of this service is that interactive 3D view-image browsing can be provided on a popular mobile phone without a 3D graphics engine. Finally, the system implementation of our service and its experimental evaluation are described.
Recently, the mobile phone has become a daily necessity, and the phone camera has become a representative means of self-expression and entertainment. A mobile phone with a stereo camera is an even more powerful tool for these purposes, since, unlike existing phones, it can present three-dimensional images to an observer. In this paper, we investigate the constraints for obtaining optimized, easily fusible stereovision when images are taken with a stereo-camera mobile phone. Theory and experiment were based on the permitted range of disparity, which was extracted from stereograms on the mobile display. Consequently, the permitted horizontal and vertical disparities were up to +3.75 mm and +2.59 mm, respectively, for a mobile phone with a 2.8" display, QVGA resolution, F/2.8, a 54-degree field of view, and a 220 mm viewing distance. To examine suitability, the experiment was performed with ten subjects.
A networked viewpoint controller exploits a spatiotemporal attention module for smart user interaction in future 3D TV. In this paper, a new approach to locating a focus of attention for the generation of candidate viewpoints is suggested. The suggested method combines spatial and temporal features extracted from a series of images to provide viewers with several viewpoints.
We present a method to directly capture a compressed version of an object's image. The compression is accomplished by optical means with a single exposure. For objects that have a sparse representation in some known domain (e.g., Fourier or wavelet), the novel imaging system has a larger effective space-bandwidth product than conventional imaging systems. This implies, for example, that more object pixels may be reconstructed and visualized than the number of pixels of the image sensor.
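The claim that more pixels can be recovered than were measured rests on sparse recovery; the toy NumPy sketch below (a generic ISTA solver, unrelated to the paper's optical encoding) recovers a sparse vector from an underdetermined linear system, which is the computational half of such a scheme.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=300):
    """Iterative soft-thresholding: minimise ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # shrinkage
    return x

# toy check: 100 measurements of a 400-pixel object with only 10 nonzeros
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400)) / 10.0
x_true = np.zeros(400)
x_true[rng.choice(400, 10, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)
```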
A 3D display using the light beam reconstruction method has some great advantages: special glasses are not needed and the observation point is not fixed. Some researchers claim that a viewer may be able to focus on 3D images under the super multi-view condition. However, such a display needs to reconstruct a great number of light beams. Usually, the number of light beams is limited by the resolution of the flat-panel display because only the space-division method is used, so it is difficult to improve the performance of the 3D display by relying on the flat-panel display alone. Thus, using the time-multiplexing method is important.
In this paper, we discuss a 3D display using the light beam reconstruction method that employs a fast light shutter for time multiplexing. We consider the relationship between the performance of the 3D display and that of the devices that comprise it. The simulation results for the super multi-view condition suggest that the number of light beams entering the pupil of the viewer's eye and the width of the slit are important for the accommodation function.
In this paper, we address distortion-tolerant object recognition using photon-counting three-dimensional (3D) integral imaging (II). A photon-counting linear discriminant analysis (LDA) is reviewed for the classification of out-of-plane rotated objects. We also investigate the effect of a large number of photons and of the irradiance change between the training and test objects.
In this paper we present an overview of a three-dimensional imaging and tracking algorithm for tracking biological specimens in a sequence of holographic microscopy images. We use a region tracking method based on a MAP estimator in a Bayesian framework and adapt it to 3D holographic data sequences to efficiently track the desired microorganism. In our formulation, the target-background interface is modeled as the isolevel of a level set function, which is evolved at each frame via a level set update rule. The statistical characteristics of the target microorganism versus the background are exploited to evolve the interface from one frame to another. Using a bivariate Gaussian distribution to model the reconstructed hologram data makes it possible to take into account the correlation between the amplitude and phase of the reconstructed field and thus to obtain a more accurate solution. In addition, the level set surface evolution provides a robust, efficient, and numerically stable method that deals automatically with the changes in topology and the geometrical deformations that a microorganism may undergo.
We present simulation results on the analysis of the Synthetic Aperture Integral Imaging (SAII) technique and its sensitivity to pickup-position uncertainty. SAII is a passive three-dimensional imaging technique based on multiple image acquisitions with different perspectives of the scene under incoherent or natural illumination. In practical SAII applications, there is always an uncertainty associated with the position at which each sensor captures its elemental image. We present simulation results of image degradation in terms of the Mean Square Error (MSE) metric, and we show an inverse relationship between the reconstruction distance and the degradation metric.
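A minimal sketch of how such a sensitivity study can be set up computationally, assuming a simple shift-and-average reconstruction with integer-pixel shifts; the paper's actual reconstruction and position-error model may differ.

```python
import numpy as np

def saii_reconstruct(elemental_images, positions_px, depth_scale):
    """Shift-and-average reconstruction at one depth: each elemental image is
    shifted in proportion to its camera position divided by depth_scale."""
    acc = np.zeros_like(elemental_images[0], dtype=np.float64)
    for img, (dy, dx) in zip(elemental_images, positions_px):
        sy, sx = int(round(dy / depth_scale)), int(round(dx / depth_scale))
        acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return acc / len(elemental_images)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# degradation metric: reconstruct with the nominal pickup positions and with
# perturbed ones, then compare
# rec_ideal = saii_reconstruct(imgs, positions, scale)
# rec_noisy = saii_reconstruct(imgs, jittered_positions, scale)
# degradation = mse(rec_ideal, rec_noisy)
```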
We use an independent component analysis technique to fuse holographic images reconstructed at different longitudinal depths from the image sensor. We experiment using phase-shifting digital holography.
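One possible reading of the fusion step, sketched with scikit-learn's FastICA: the reconstructions at the different depths are treated as mixtures of statistically independent source images, and the separated components are returned for recombination or selection. The component count and the use of FastICA are assumptions, not the paper's stated procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_separate(reconstructions):
    """Treat each depth reconstruction (same shape) as one mixed observation
    and unmix the set into independent component images."""
    shape = reconstructions[0].shape
    X = np.stack([r.ravel() for r in reconstructions])     # (n_depths, n_pixels)
    ica = FastICA(n_components=len(reconstructions), random_state=0)
    sources = ica.fit_transform(X.T).T                     # (n_components, n_pixels)
    return sources.reshape((-1,) + shape)
```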
A floating-image display technique, which projects two-dimensional images into real space through a convex lens or a concave mirror, has been studied as a new approach to implementing next-generation three-dimensional (3D) display systems. However, the conventional floating-image display system is implemented with only an active display device such as an LCD panel, so it can provide an observer with only a single real plane image in space, unlike other 3D display systems that offer different perspectives. For practical application of a floating-image display to 3D display systems, a multi-layered display structure is required to present images at multiple depths in space. In this paper, a novel floating-image display system that composes two plane images at different depths by use of a half mirror is proposed. One plane image, of an object, is provided by the conventional floating-image display system, and the other plane image, of a background, is provided by the integral imaging technique. Therefore, the proposed display system can provide observers with high-resolution floating images over background images having different perspectives. To show the usefulness of the proposed system, experiments are carried out and the results are presented.
The image reconstructed by computational integral imaging reconstruction at the output plane where a three-dimensional object was originally located is nearly in focus, but it also contains defocused areas caused by background images or other objects reconstructed at the same output plane, and this becomes considerable in the case of multiple objects. To overcome this problem, this paper presents a digital technique that estimates a blur measure of the reconstructed plane images and efficiently eliminates defocused areas. The depth of a three-dimensional object can be accurately detected from the blur measure, and the resolution and quality of the reconstructed plane images are slightly enhanced by an adaptive erosion operation. Therefore, we expect this scheme to work well as part of a system for object detection and recognition in integral imaging.
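The paper does not specify its blur measure; as an illustration, the sketch below uses a blockwise variance-of-the-Laplacian measure on one reconstructed plane image and masks blocks below a threshold as defocused. The block size and threshold rule are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def focus_mask(plane_image, block=16, thresh=None):
    """Blockwise blur measure (variance of the Laplacian); blocks with a low
    measure are flagged as defocused and excluded from further processing."""
    lap = laplace(plane_image.astype(np.float64))
    h, w = plane_image.shape
    by, bx = h // block, w // block
    measure = np.zeros((by, bx))
    for i in range(by):
        for j in range(bx):
            measure[i, j] = lap[i * block:(i + 1) * block,
                                j * block:(j + 1) * block].var()
    if thresh is None:
        thresh = measure.mean()
    mask = np.zeros((h, w), dtype=bool)
    tiled = np.kron((measure > thresh).astype(np.uint8),
                    np.ones((block, block), dtype=np.uint8))
    mask[:tiled.shape[0], :tiled.shape[1]] = tiled.astype(bool)
    return mask
```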
A three-dimensional (3D) imaging system using multiple cameras is presented. Perspectives of a 3D object are captured by cameras placed at random positions on a circle. The 3D object can be reconstructed numerically by waveform reconstruction with an angle correction function, which corrects the camera angle at each pixel in the projected image and at each 3D reconstructed position. Numerical results show that point sources can be reconstructed successfully. Experimental results for two 3D objects are also presented.
This paper presents a novel 3D display based on a new principle that has features of both integral imaging (II) and volumetric displays. The proposed display consists of one 2D display and two lens arrays, one convex and one concave, placed between the 2D display and the observer. When the observer watches the 2D display through the two lens arrays, the displayed image appears to be reproduced at a position different from that of the 2D display. Furthermore, by changing the position of the 2D display, the image is reproduced at a different position than before; thus, images at various depths are reproduced by moving the 2D display. This is how the proposed display reconstructs 3D space. We simulated this display with ray tracing and verified its validity.
In this paper, we propose resolution-enhanced integral imaging with pinhole arrays on a liquid crystal (LC) panel. Since the light through a pinhole corresponds to a pixel in the 3D image, we electrically shift the pinhole arrays on the LC panel fast enough to produce an after-image effect and display the corresponding elemental images synchronously, without reducing the 3D viewing characteristics of the reconstructed image. An explanation of the proposed system and experimental results are presented.
In this paper, we propose a new free-view video system that generates 3D video from an arbitrary point of view using multiple cameras. When target objects are captured by these cameras, the PC allocated to each capturing camera segments the objects and transmits the masks and color textures to a 3D modeling server via the system's network. The modeling server then generates a 3D model of each object from the gathered masks. Finally, the server generates 3D video at the designated point of view from the 3D model and texture information. In 3D modeling, a reliability-based shape-from-silhouette technique reconstructs a visual hull by carving a 3D space based on intra-/inter-silhouette reliabilities. In the final view rendering, we use a cinematographic camera control system and ARToolKit to control virtual cameras.
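For intuition, a plain (non-reliability-weighted) silhouette-carving step is sketched below: a candidate 3D point survives only if it projects inside the foreground mask of every camera. The projection matrices and the voxel grid are assumed inputs; the paper's reliability-based carving refines this basic rule.

```python
import numpy as np

def visual_hull(silhouettes, proj_matrices, grid_pts):
    """Basic silhouette carving.
    silhouettes   : list of HxW boolean foreground masks
    proj_matrices : list of 3x4 camera projection matrices
    grid_pts      : (N, 3) array of candidate voxel centres"""
    keep = np.ones(len(grid_pts), dtype=bool)
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])   # (N, 4)
    for mask, P in zip(silhouettes, proj_matrices):
        uvw = homog @ P.T                                        # (N, 3)
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit                       # carve away points outside any mask
    return grid_pts[keep]
```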
A computer holographic stereogram (CHS) is useful for holographic 3D TV because it is constructed from plane images at multiple horizontal viewpoints and is therefore compatible with multi-viewpoint images. Each hologram is recorded as a slit (element) hologram, but the total viewing area and the number of element holograms have been limited by the size and the number of resolution points of the LCD. We therefore used two LCDs placed side by side horizontally to make the CHS, doubling the number of viewing points and extending the display area so as to satisfy binocular parallax. We examined how the viewing area is extended and how the characteristics of the CHS images can be improved. In future work we will consider conditions such as transmission and real-time calculation for 3D TV.
This paper presents a practical way of adjusting the global disparity of given stereoscopic images using a binocular energy model and image partitioning. A previous method estimated a single global disparity and used it directly to control the convergence angle between cameras, but because only a single global disparity was considered, local disparities could become excessive in some regions. Hence, in this paper we consider how to mitigate such excessive disparities. First, we partition each of the stereoscopic images into four sub-images and calculate multiple local disparities for the pairs of partitioned images. Second, we define a new disparity as the average of the local disparities. Last, the newly defined disparity is used to adjust the global disparity of the given stereoscopic images. Experimental results show that the proposed method can prevent local disparities from becoming excessive in some regions.
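A simplified sketch of the partition-and-average idea, using a plain SAD search in place of the paper's binocular energy model: a global disparity is estimated independently for each of the four sub-images, and their average is used as the adjusted disparity.

```python
import numpy as np

def global_disparity(left, right, max_disp=64):
    """Single horizontal disparity that best aligns right to left (SAD search)."""
    costs = [np.abs(left[:, d:] - right[:, :right.shape[1] - d]).mean()
             for d in range(max_disp)]
    return int(np.argmin(costs))

def partitioned_disparity(left, right, max_disp=64):
    """Average of the four quadrant-wise global disparities, as a milder
    substitute for a single whole-image global disparity."""
    h, w = left.shape
    quads = [(slice(0, h // 2), slice(0, w // 2)),
             (slice(0, h // 2), slice(w // 2, w)),
             (slice(h // 2, h), slice(0, w // 2)),
             (slice(h // 2, h), slice(w // 2, w))]
    local = [global_disparity(left[q], right[q], max_disp) for q in quads]
    return float(np.mean(local))
```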
In this paper, we propose a holographic reconstruction technique for images captured by the integral imaging (II) technique, with some additional image processing. An elemental image array of a 3D object is captured by the II technique and transformed into a sub-image array. An elemental hologram pattern is then generated computationally from each sub-image and arranged in the form of the sub-image array. Finally, the arranged hologram pattern is reconstructed using the reference wave used in the hologram generation process. In the simulation, the characters 'KW' at different depths are used as 3D objects and are picked up and processed by the II technique; the processed image is then successfully reconstructed holographically.
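The elemental-image-to-sub-image rearrangement mentioned above is a fixed pixel reindexing; a NumPy sketch under the usual convention (sub-image (i, j) collects pixel (i, j) from every elemental image) is shown below, with the lens counts as assumed parameters.

```python
import numpy as np

def elemental_to_sub_images(eia, n_lens_y, n_lens_x):
    """Rearrange an elemental image array into a sub-image array.
    eia has shape (n_lens_y * py, n_lens_x * px); the output tiles the
    py x px sub-images, each of size n_lens_y x n_lens_x."""
    h, w = eia.shape
    py, px = h // n_lens_y, w // n_lens_x
    blocks = eia.reshape(n_lens_y, py, n_lens_x, px)   # blocks[u, i, v, j]
    # output block (i, j), pixel (u, v)  <-  pixel (i, j) of elemental image (u, v)
    return blocks.transpose(1, 0, 3, 2).reshape(py * n_lens_y, px * n_lens_x)
```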
An infrared transmission technique for 3D holographic images is studied. It appears to be very effective for transmitting 3D holographic images in places where radio waves may not be used for transmission. In this paper, we first explain our infrared transmission system for holograms and a display system for presenting holographic 3D images reconstructed from the received signal. Next, we report the results obtained by infrared transmission of a computer-generated hologram (CGH) and compare the real and the reconstructed 3D images in our system. As a result, we find that the reconstructed holographic 3D images do not suffer large quality deterioration and that highly contrasted images can be presented.
A new depth-fused 3-D (DFD) display for multiple users is presented. A DFD display, which consists of a stack of layered screens, is expected to be a visually comfortable 3-D display because it can satisfy not only binocular disparity, convergence, and accommodation, but also motion parallax for small observer displacements. However, such a display cannot be observed from an oblique angle because of image doubling caused by the layered screen structure, so it has been applicable only to single-observer use. In this paper, we present a multi-viewing-zone DFD display using a stack of a see-through screen and a multi-viewing-zone 2-D display. We used a film that causes polarization-selective scattering as the front screen, and an anisotropic scattering film as the rear screen. The front screen was illuminated by one projector and displayed an image at all viewing angles. The rear screen was illuminated by multiple projectors from different directions, and the images displayed on it were arranged to overlap well for each viewing direction, creating multiple viewing zones without image doubling. This design is promising for a large-area 3-D display that does not require special glasses, because the display uses projection and has a simple structure.
In general, Multi-view Video Coding (MVC) methods must compress multi-view videos efficiently and provide view scalability so that arbitrary views can be decoded according to each viewer's interest. Much research has been done on MVC methods with the goal of increasing coding efficiency. Although these previous methods have considered view scalability, many coding bits and long delays were needed to decode arbitrary views. In this paper, we propose an MVC method based on image stitching: we generate a stitched reference and encode the multi-view sequences using a disparity-compensated method. The proposed method is able to reduce delays during the decoding stage. Experimental results show that the proposed MVC method increased the PSNR by 1.5~2.0 dB and saved 10% of the coding bits compared to simulcast coding.