KEYWORDS: Cameras, Sensors, 3D displays, Video, 3D image processing, Video acceleration, Denoising, RGB color model, Data fusion, Time of flight cameras
We have developed an end-to-end system for 3D scene sensing which combines a conventional high-resolution RGB
camera with a low-resolution Time-of-Flight (ToF) range sensor. The system comprises modules for range data denoising,
data re-projection and non-uniform to uniform up-sampling and aims at composing high-resolution 3D video
output for driving auto-stereoscopic 3D displays in real-time. In our approach, the ToF sensor is set to work with a short
integration time in order to increase the capture speed and reduce motion artifacts. However,
the reduced integration time leads to noisy range images. We specifically address the noise-reduction problem by
a modification of non-local means filtering in the spatio-temporal domain. Time-consecutive range images are utilized
not only for efficient denoising but also for accurate non-uniform to uniform up-sampling on the high-resolution RGB
grid. The reflectance signal of the ToF sensor provides confidence-type feedback to the denoising
module, where a new adaptive averaging scheme is proposed to effectively handle motion artifacts. As far as the non-uniform
to uniform resampling of range data is concerned, we have developed two alternative solutions: one relying entirely on
the GPU power and another applicable to any general platform. The latter method employs an intermediate virtual
range-camera recentering, after which the resampling process reduces to a 2D interpolation performed within the low-resolution
grid. We demonstrate real-time performance of the system working in a low-power regime.
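The spatio-temporal non-local means filtering with reflectance-derived confidence weighting described above can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the function name, the patch and search sizes, and the use of the confidence map as a multiplicative weight on each candidate pixel are illustrative choices.

```python
import numpy as np

def spatiotemporal_nlm(frames, confidence, patch=1, search=2, h=0.1):
    """Naive spatio-temporal non-local means (illustrative sketch).

    frames: (T, H, W) stack of range images; the last frame is denoised.
    confidence: (T, H, W) per-pixel weights in [0, 1], e.g. derived from
        the ToF reflectance signal (low reflectance -> low confidence).
    patch: half-size of the comparison patch.
    search: half-size of the spatial search window (all T frames searched).
    h: filtering strength.
    """
    frames = np.asarray(frames, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    T, H, W = frames.shape
    pad = patch
    padded = np.pad(frames, ((0, 0), (pad, pad), (pad, pad)), mode='edge')
    out = np.zeros((H, W))
    ref_t = T - 1
    for y in range(H):
        for x in range(W):
            # Reference patch around (y, x) in the newest frame.
            ref_patch = padded[ref_t, y:y + 2 * patch + 1, x:x + 2 * patch + 1]
            acc, wsum = 0.0, 0.0
            for t in range(T):                      # temporal search
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if not (0 <= yy < H and 0 <= xx < W):
                            continue
                        cand = padded[t, yy:yy + 2 * patch + 1,
                                         xx:xx + 2 * patch + 1]
                        d2 = np.mean((ref_patch - cand) ** 2)
                        # Patch-similarity weight, scaled by confidence so
                        # unreliable (e.g. moving or low-reflectance) pixels
                        # contribute less to the average.
                        w = np.exp(-d2 / (h * h)) * confidence[t, yy, xx]
                        acc += w * frames[t, yy, xx]
                        wsum += w
            out[y, x] = acc / wsum if wsum > 0 else frames[ref_t, y, x]
    return out
```

A real-time version would of course vectorize or offload this quadruple loop; the sketch only shows where the temporal frames and the confidence feedback enter the averaging.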
We investigate the effect of camera de-calibration on the quality of depth estimation. A dense depth map is a format
particularly suitable for mobile 3D capture (scalable and screen-independent). However, in real-world scenarios cameras
might move from their designated positions (through vibrations or temperature-induced bending). For our experiments, we create a test framework,
described in the paper. We investigate how such mechanical changes affect four different stereo-matching algorithms. We
also assess how different geometric corrections (none, motion-compensation-like, full rectification) affect the
estimation quality (how much offset can still be compensated by a "crop" over a larger CCD). Finally, we show how the
estimated camera pose change (the essential matrix E) relates to stereo-matching quality, which can be used as a "rectification quality" measure.
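The link between the estimated pose change (E) and rectification quality can be illustrated with the algebraic epipolar residual: for a correctly calibrated pair the constraint x2' E x1 = 0 holds for every correspondence in normalized coordinates, and the residual's growth signals de-calibration. A minimal numpy sketch; the function name and the use of the mean absolute algebraic error (rather than a geometric distance) are our illustrative assumptions, not the paper's measure.

```python
import numpy as np

def epipolar_residual(E, pts1, pts2):
    """Mean algebraic epipolar error |x2^T E x1| over correspondences.

    E: 3x3 essential matrix.
    pts1, pts2: (N, 2) matched points in normalized image coordinates.
    Near zero for a well-calibrated/rectified pair; grows as the cameras
    drift from their designated relative pose.
    """
    x1 = np.hstack([np.asarray(pts1, float), np.ones((len(pts1), 1))])
    x2 = np.hstack([np.asarray(pts2, float), np.ones((len(pts2), 1))])
    # einsum computes x2_n^T E x1_n for every correspondence n at once.
    return float(np.mean(np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))))
```

For an ideal rectified stereo pair (pure horizontal translation, E = [t]x with t along x), matches differing only in horizontal disparity give a zero residual, while any vertical drift immediately shows up.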
We propose an image registration technique to be implemented on mobile devices equipped with cameras. We address
the limited computational power and low-quality optics of such devices and aim at designing a registration algorithm,
which is fast, robust with respect to noise, and allows for corrections of optical distortions. We favor a feature-based
approach, consisting of feature extraction, feature filtering, feature matching, and transformation estimation. In our
application, the transformation estimation is robust to local distortions, and is accurate enough to allow for a subsequent
super-resolution on the registered images. The performance of the technique is demonstrated in a fixed-point
implementation on the TMS320C5510 DSP.
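The feature-based pipeline above (extraction, filtering, matching, transformation estimation) can be sketched in floating point; the fixed-point DSP version in the paper is necessarily different. The names, the Lowe-style ratio test standing in for the feature-filtering stage, and the choice of a median displacement as the robust translation estimate are all illustrative assumptions.

```python
import numpy as np

def match_features(desc1, desc2, max_ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test.

    A match (i, j) is kept only if desc2[j] is clearly closer to desc1[i]
    than the second-nearest candidate - a common way to filter ambiguous
    features before transformation estimation.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(np.asarray(desc2, float) - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < max_ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

def estimate_translation(kp1, kp2, matches):
    """Robust translation from matched keypoint coordinates.

    The per-match displacement median tolerates outlier matches and
    local optical distortions better than a least-squares mean would.
    """
    kp1, kp2 = np.asarray(kp1, float), np.asarray(kp2, float)
    disp = np.array([kp2[j] - kp1[i] for i, j in matches])
    return np.median(disp, axis=0)
```

A full registration step would extend this to a similarity or projective model; the sketch only shows why a robust estimator survives the occasional bad match that noisy, low-quality optics produce.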