KEYWORDS: Video compression, Video, Image compression, 3D video compression, 3D image processing, Video coding, 3D image reconstruction, Image filtering, Dimension reduction, Signal to noise ratio
We propose a novel method of 5-D dynamic light field compression using multi-focus images. A light field enables us to observe a scene from various viewpoints, but it generally consists of an enormous amount of 4-D data that is not suitable for storage or transmission without effective compression. 4-D light fields are highly redundant because they essentially contain only 3-D scene information. Although robust 3-D scene estimation, such as depth recovery from light fields, is not easy, a light field can be reconstructed directly from 3-D information composed of multi-focus images without any scene estimation. Based on this reconstruction, we previously proposed light field compression via multi-focus images as an effective representation of 3-D scenes. In this paper, we extend that method to the compression of 5-D light fields composed of multi-view videos, including the time domain. The extended method achieves a significant improvement in compression efficiency by exploiting the multiple redundancies of 5-D light fields. We show experimental results using synthetic videos. The quality of the reconstructed light fields is evaluated by PSNR and SSIM to analyze the characteristics of its performance. The results reveal that our method is much superior to light field compression using HEVC at practically low bit rates.
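The abstract does not give implementation details of the evaluation; the following is a minimal sketch of how individual reconstructed views of a decoded light field might be scored with PSNR and SSIM using scikit-image. The function names, array shapes, and the uint8/255 data range are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_view(reference, reconstructed):
    """Return (PSNR, SSIM) for one reconstructed view against its reference.

    Both inputs are assumed to be uint8 color images of identical shape.
    """
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=255)
    ssim = structural_similarity(reference, reconstructed,
                                 data_range=255, channel_axis=-1)
    return psnr, ssim

def average_quality(reference_lf, reconstructed_lf):
    """Average PSNR/SSIM over all frames and views of a dynamic light field.

    Both arrays are assumed to have shape (T, V, U, H, W, 3): time, two
    angular (view) indices, then image rows, columns, and color channels.
    """
    ref_views = reference_lf.reshape(-1, *reference_lf.shape[-3:])
    rec_views = reconstructed_lf.reshape(-1, *reconstructed_lf.shape[-3:])
    psnrs, ssims = zip(*(score_view(r, d) for r, d in zip(ref_views, rec_views)))
    return float(np.mean(psnrs)), float(np.mean(ssims))
```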
In this paper, we study dense multi-view systems that transmit light fields beyond a visual obstruction as if it were transparent. Multi-view 3-D displays, appropriately combined with camera arrays or lens arrays, provide consistent augmented reality for many users simultaneously as a simple solution to such occlusions in the real world. We first describe the design of our proposed multi-view system, whose appearance is evaluated under the assumption that state-of-the-art visual devices are used. Then, in order to reduce the number of image sensors needed to acquire light fields and keep the system inexpensive, an efficient interpolation is introduced that reconstructs the entire 4-D light field in real time for consistency with the real world. We show that our practical implementation on a GPGPU achieves real-time interpolation of the enormous light field data without severe degradation over various depths.
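The abstract does not detail the interpolation algorithm itself. As a generic point of reference only (not the proposed depth-consistent method), the sketch below shows the simplest possible view synthesis for a 4-D light field: bilinear blending of the four captured views nearest to a fractional camera-plane position. Array shapes and names are assumptions.

```python
import numpy as np

def interpolate_view(light_field, s, t):
    """Naive angular interpolation: blend the four nearest captured views.

    light_field : array of shape (S, T, H, W) or (S, T, H, W, 3), indexed by
                  camera row s, camera column t, then pixel coordinates.
    s, t        : fractional camera-plane position of the virtual view.
    Note: this ignores disparity, so it blurs scene points away from the
    zero-disparity plane; a depth-aware method avoids that degradation.
    """
    s0, t0 = int(np.floor(s)), int(np.floor(t))
    s1 = min(s0 + 1, light_field.shape[0] - 1)
    t1 = min(t0 + 1, light_field.shape[1] - 1)
    ds, dt = s - s0, t - t0
    return ((1 - ds) * (1 - dt) * light_field[s0, t0]
            + (1 - ds) * dt * light_field[s0, t1]
            + ds * (1 - dt) * light_field[s1, t0]
            + ds * dt * light_field[s1, t1])
```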
This paper deals with a technique for generating free viewpoint images from multi-focus imaging sequences. The method may enable dense 4-D ray-space reconstruction from 3-D multi-focus imaging sequences. Previously, we proposed a method of generating free viewpoint, iris, and focus images directly from multi-focus imaging sequences with a 3-D convolution filter that expresses how the scene is defocused in the sequence. However, the spatial-frequency analysis based on the 3-D FFT is computationally expensive. In this paper, we discuss efficient reconstruction of free viewpoint images by dimension reduction and the 2-D FFT. We show experimental results using synthetic and real images. Epipolar plane images are also reconstructed in order to clearly show the disparity between the generated free viewpoint images.
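The abstract names the ingredients (a 3-D defocus model, dimension reduction, and the 2-D FFT) without spelling out the algorithm. The sketch below is an illustrative reading of that pipeline, not the paper's derivation: the focal stack is first collapsed into a single 2-D image by viewpoint-dependent shifts (the dimension reduction), and the aggregate defocus is then removed by regularized inverse filtering with a 2-D FFT. The names and parameters (depths, blur_psf, eps) and the integer-pixel shifts are assumptions made only for illustration.

```python
import numpy as np

def free_viewpoint_sketch(focal_stack, depths, viewpoint, blur_psf, eps=1e-3):
    """Illustrative pipeline: reduce a 3-D focal stack to 2-D, then deconvolve.

    focal_stack : (N, H, W) float array of differently focused images
    depths      : per-slice parallax scale (assumed known)
    viewpoint   : (vy, vx) desired viewpoint offset
    blur_psf    : (H, W) aggregate defocus kernel, centered (assumed known)
    """
    vy, vx = viewpoint
    reduced = np.zeros(focal_stack.shape[1:])
    for img, d in zip(focal_stack, depths):
        # Parallax shift grows with the viewpoint offset and the slice's depth.
        reduced += np.roll(np.roll(img, int(round(vy * d)), axis=0),
                           int(round(vx * d)), axis=1)
    reduced /= len(focal_stack)

    # Regularized (Wiener-like) inverse filtering with a 2-D FFT.
    G = np.fft.fft2(reduced)
    H = np.fft.fft2(np.fft.ifftshift(blur_psf))
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + eps)))
```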
This paper deals with a view interpolation problem using multiple images captured with a circular camera array. An inverse filtering method for reconstructing a virtual image at the center of the camera array is proposed. First, we generate a candidate image by adding all corresponding pixel values based on multiple layers assumed in the scene. Second, we model a linear relationship between the desired virtual image and the candidate image, and then derive the inverse filter to reconstruct the virtual image. Since the inverse filter can be derived independently of the scene structure, the proposed method requires no depth estimation. Simulation results using synthetic images show that increasing the number of cameras and layers improves the quality of the virtual image.
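Following the abstract's own description, the candidate image is formed by accumulating corresponding pixels over every camera and every assumed depth layer, and the virtual central view is then recovered with a scene-independent inverse filter. The sketch below illustrates only the accumulation step, using simple integer-pixel shifts; the inverse filtering step would mirror the regularized Fourier-domain division sketched earlier. The array layouts, the disparity model, and the names are assumptions, and the paper's actual filter derivation is not reproduced here.

```python
import numpy as np

def candidate_image(views, offsets, layer_disparities):
    """Accumulate corresponding pixels over all cameras and assumed depth layers.

    views             : (C, H, W) images from the circular camera array
    offsets           : (C, 2) camera positions relative to the array center
    layer_disparities : disparity scale assumed for each depth layer
    """
    cand = np.zeros(views.shape[1:])
    for img, (oy, ox) in zip(views, offsets):
        for d in layer_disparities:
            # Shift each view toward the array center by the layer's disparity.
            cand += np.roll(np.roll(img, int(round(oy * d)), axis=0),
                            int(round(ox * d)), axis=1)
    return cand / (len(views) * len(layer_disparities))
```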
In this paper, we propose a novel method of arbitrarily focused image acquisition using multiple differently focused images. First, we describe our previous select-and-merge method for all-focused image acquisition. This method gives good results, but it is not easy to extend it to generating arbitrarily focused images. Then, based on the assumption that the depth of the scene changes stepwise, we derive a reconstruction formula relating the desired arbitrarily focused image to the multiple acquired images; the arbitrarily focused image can be reconstructed by iterative use of this formula. We also introduce coarse-to-fine estimation of the PSFs of the acquired images. We show that arbitrarily focused images can be reconstructed for a natural scene. In other words, we can simulate virtual cameras and synthesize images focused at arbitrary depths.
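The abstract states the reconstruction formula only implicitly. As a rough stand-in, the sketch below uses a generic Landweber-style iteration in the Fourier domain to estimate a latent image from the differently focused acquisitions and then applies a target defocus; it treats each acquisition as blurred by a single PSF, so it omits the stepwise-depth layering and the coarse-to-fine PSF estimation that the paper relies on. PSFs are assumed normalized (summing to one) so the fixed step size is stable; all names are illustrative.

```python
import numpy as np

def reconstruct_target_focus(acquired, psfs, target_psf, n_iter=100, step=1.0):
    """Generic iterative reconstruction (not the paper's derived formula).

    acquired   : (N, H, W) differently focused images
    psfs       : (N, H, W) per-image defocus kernels, centered and normalized
    target_psf : (H, W) defocus kernel of the desired virtual focus setting
    """
    Gs = [np.fft.fft2(g) for g in acquired]
    Hs = [np.fft.fft2(np.fft.ifftshift(h)) for h in psfs]
    F = np.zeros(acquired.shape[1:], dtype=complex)   # latent image spectrum
    for _ in range(n_iter):
        # Gradient step on (1/N) * sum_i ||H_i F - G_i||^2 in the Fourier domain.
        grad = sum(np.conj(H) * (H * F - G) for H, G in zip(Hs, Gs)) / len(Gs)
        F = F - step * grad
    T = np.fft.fft2(np.fft.ifftshift(target_psf))
    return np.real(np.fft.ifft2(T * F))
```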