A time-of-flight (ToF) depth camera captures a depth map of the scene by measuring the phase delay between the emitted and reflected infrared (IR) light signals. In addition, the ToF camera provides an intensity map that represents the magnitude of the reflected light. If we regard the light source of the ToF camera as a flash, the intensity map can be viewed as an IR-flashed image. Building on ideas from flash/no-flash photography and dark flash photography, we devise a color image enhancement framework that exploits information from the intensity and depth maps. To this end, ToF-related distortions of the intensity and depth maps are first reduced. We then restore fine details of color images captured under weak illumination by combining mutually beneficial information from the visible and IR band signals. In addition, we show that the depth map can be used to produce depth-adaptive effects, such as depth-adaptive smoothing, in the resulting color image.
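The abstract does not detail the fusion step itself, but a minimal sketch of one common building block for this kind of visible/IR fusion, a joint bilateral filter that smooths a noisy low-light color channel while preserving the edges of the IR intensity map used as the guide, could look as follows (all names and parameter values are illustrative, not the paper's):

```python
import numpy as np

def joint_bilateral(src, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Smooth `src` while preserving the edges of `guide` (joint bilateral filter).

    src   : HxW float array (e.g., one channel of a dark color image)
    guide : HxW float array in [0, 1] (e.g., the ToF IR intensity map)
    """
    out = np.zeros_like(src)
    acc_w = np.zeros_like(src)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # spatial Gaussian weight for this offset
            ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            shifted_src = np.roll(np.roll(src, dy, axis=0), dx, axis=1)
            shifted_gui = np.roll(np.roll(guide, dy, axis=0), dx, axis=1)
            # range weight taken from the IR guide, not from the noisy color channel
            wr = np.exp(-(guide - shifted_gui) ** 2 / (2 * sigma_r ** 2))
            out += ws * wr * shifted_src
            acc_w += ws * wr
    return out / acc_w
```

Note that np.roll wraps around at the image borders; a production implementation would pad the images instead.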
Commercially available time-of-flight cameras illuminate the scene with amplitude-modulated infrared light and detect its reflection to provide per-pixel depth maps in real time. These cameras, however, suffer from an inherent problem called phase wrapping, which arises from the modulo ambiguity in the phase-delay measurement. As a result, the measured distance to a scene point becomes much shorter than the actual distance if the point lies beyond a certain maximum range. Multifrequency phase unwrapping methods recover the actual distances by exploiting the consistency of the disambiguated depth values across depth maps of the same scene acquired at different modulation frequencies. For robust and accurate estimation against noise, we build a cost function that evolves over time to enforce both interframe depth consistency and intraframe depth continuity. As demonstrated in experiments with real scenes, the proposed method correctly disambiguates the depth measurements, extending the maximum range imposed by the modulation frequency.
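The full cost function evolves over time; as a simplified illustration of the core inter-frequency consistency term alone, a brute-force per-pixel wrap-count search over two wrapped depth maps might look like this (the function name and the wrap-count bound are assumptions):

```python
import numpy as np

def unwrap_two_freq(d1, d2, r1, r2, max_wraps=4):
    """Per-pixel brute-force phase unwrapping for two modulation frequencies.

    d1, d2 : wrapped depth maps of the same scene, with maximum ranges r1, r2
    Returns the unwrapped depth that makes the two measurements agree best.
    """
    best_err = np.full(d1.shape, np.inf)
    best_d = np.copy(d1)
    for n1 in range(max_wraps):
        for n2 in range(max_wraps):
            c1 = d1 + n1 * r1          # candidate actual depth from frequency 1
            c2 = d2 + n2 * r2          # candidate actual depth from frequency 2
            err = np.abs(c1 - c2)      # inter-frequency consistency cost
            better = err < best_err
            best_err = np.where(better, err, best_err)
            best_d = np.where(better, 0.5 * (c1 + c2), best_d)
    return best_d
```

The paper's method additionally enforces intraframe depth continuity and temporal consistency, which this per-pixel sketch omits.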
Time-of-flight cameras measure the distances to scene points by emitting and detecting a modulated infrared light signal. The modulation frequency of the signal determines a certain maximum range within which the measured distance is unambiguous. If the actual distance to a scene point is longer than the maximum range, the measurement suffers from phase wrapping, which makes the measured distance shorter than the actual distance by an unknown multiple of the maximum range. This paper proposes a time-of-flight camera that is capable of restoring the actual distance by simultaneously emitting light signals of different modulation frequencies and detecting them separately in different regions of the sensor. We analyze the noise characteristics of the camera and acquire simulated depth maps using a commercially available time-of-flight camera, reflecting the increased amount of noise due to the use of dual-frequency signals. We finally propose a phase unwrapping method that restores the actual distances from such a dual-frequency depth map. Through experiments, we demonstrate that the proposed method extends the maximum range by at least a factor of two, with high success rates.
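For context, the unambiguous maximum range follows directly from the modulation frequency as d_max = c / (2 f_mod), and pairing two frequencies extends the combined unambiguous range to that of their greatest common divisor. A small sketch (frequency values are illustrative, not the paper's):

```python
C = 299_792_458.0  # speed of light, m/s

def max_range(f_mod_hz):
    """Unambiguous range of a ToF camera at modulation frequency f_mod."""
    return C / (2.0 * f_mod_hz)

print(max_range(20e6))  # a 20 MHz signal wraps every ~7.5 m
print(max_range(16e6))  # a 16 MHz signal wraps every ~9.4 m
# combined: gcd(20 MHz, 16 MHz) = 4 MHz -> ~37.5 m unambiguous range
print(max_range(4e6))
```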
Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should compute distances from a large amount of infrared light, which must be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times by adopting an image fusion scheme originally proposed for color imaging. To compensate for depth differences caused by the change of integration time, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into the wavelet domain and fused into a single depth image with suppressed noise and few motion artifacts. To evaluate the proposed method, we captured the moving bar of a metronome with different integration times. The experiments show that the proposed method effectively removes motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
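The abstract does not specify the fusion rule or the transfer-function estimation; a minimal wavelet-fusion skeleton under the classic max-magnitude coefficient rule, using the PyWavelets package and assuming the short-integration map has already been mapped through the estimated depth transfer function, might look like this:

```python
import numpy as np
import pywt  # PyWavelets

def fuse_depth(d_short, d_long, wavelet="db2", level=3):
    """Fuse a short- and a long-integration depth map in the wavelet domain.

    Assumes d_short has already been mapped through the estimated depth
    transfer function so that both maps share the same depth scale.
    """
    cs = pywt.wavedec2(d_short, wavelet, level=level)
    cl = pywt.wavedec2(d_long, wavelet, level=level)
    fused = [0.5 * (cs[0] + cl[0])]  # average the coarse approximations
    for bands_s, bands_l in zip(cs[1:], cl[1:]):
        fused_bands = []
        for s, l in zip(bands_s, bands_l):
            # classic max-magnitude rule: keep the stronger detail coefficient
            fused_bands.append(np.where(np.abs(s) > np.abs(l), s, l))
        fused.append(tuple(fused_bands))
    # note: the reconstruction may be one pixel larger than the input
    # if an image dimension is odd
    return pywt.waverec2(fused, wavelet)
```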
This paper presents a novel Time-of-Flight (ToF) depth denoising algorithm based on parametric noise modeling. A ToF depth image contains spatially varying noise whose strength is related to the IR intensity value at each pixel. Assuming the ToF depth noise is additive white Gaussian noise, its variance can be modeled as a power function of the IR intensity. Meanwhile, the nonlocal means filter is widely used as an edge-preserving method for removing additive Gaussian noise. To remove the spatially varying depth noise, we propose an adaptive nonlocal means filter. According to the estimated noise, the search window and weighting coefficient are adaptively determined at each pixel, so that pixels with large noise variance are strongly filtered and pixels with small noise variance are weakly filtered. Experimental results demonstrate that the proposed algorithm provides good denoising performance while preserving details and edges better than the standard nonlocal means filter.
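As a rough sketch of this idea (not the paper's exact parameterization), the noise standard deviation can be modeled as a power function of the IR intensity and used to set a per-pixel smoothing strength inside a plain nonlocal means loop; the constants a and b and the window sizes below are illustrative, and the paper's adaptation of the search window size is omitted for brevity:

```python
import numpy as np

def noise_sigma(ir, a=0.5, b=-0.5):
    """Parametric noise model: depth noise std as a power function of IR
    intensity (a and b are illustrative values fitted offline)."""
    return a * np.power(np.maximum(ir, 1e-6), b)

def adaptive_nlm(depth, ir, patch=3, search=7):
    """Nonlocal means with a per-pixel smoothing strength h ~ noise std.

    Plain quadruple loop for clarity; far too slow for production use.
    """
    depth = np.asarray(depth, dtype=np.float64)
    p, s = patch // 2, search // 2
    sigma = noise_sigma(ir)
    pad = np.pad(depth, p + s, mode="reflect")
    out = np.zeros_like(depth)
    H, W = depth.shape
    for y in range(H):
        for x in range(W):
            cy, cx = y + p + s, x + p + s
            ref = pad[cy - p:cy + p + 1, cx - p:cx + p + 1]
            h2 = 2.0 * sigma[y, x] ** 2  # stronger filtering where noisier
            w_sum, acc = 0.0, 0.0
            for dy in range(-s, s + 1):
                for dx in range(-s, s + 1):
                    cand = pad[cy + dy - p:cy + dy + p + 1,
                               cx + dx - p:cx + dx + p + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch dissimilarity
                    w = np.exp(-d2 / h2)
                    w_sum += w
                    acc += w * pad[cy + dy, cx + dx]
            out[y, x] = acc / w_sum
    return out
```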
Recently, a Time-of-Flight 2D/3D image sensor has been developed that captures a perfectly aligned pair of color and depth images. To increase its sensitivity to infrared light, the sensor electrically combines multiple adjacent pixels into one depth pixel at the expense of depth image resolution. To restore the resolution, we propose a depth image super-resolution method that uses a high-resolution color image aligned with the input depth image. In the first part of our method, the input depth image is interpolated to the scale of the color image, and our discrete optimization converts the interpolated depth image into a high-resolution disparity image whose discontinuities precisely coincide with object boundaries. Subsequently, a discontinuity-preserving filter is applied to the interpolated depth image, with the discontinuities cloned from the high-resolution disparity image. Meanwhile, our unique way of enforcing the depth reconstruction constraint yields a high-resolution depth image that is perfectly consistent with the original input depth image. We show the effectiveness of the proposed method both quantitatively and qualitatively, comparing it with two existing methods. The experimental results demonstrate that the proposed method produces sharp high-resolution depth images with less error than the two existing methods for scale factors of 2, 4, and 8.
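The paper's exact formulation of the reconstruction constraint is not given here; one simple way to realize such a constraint, assuming the sensor's pixel binning is modeled as block averaging, is to project the high-resolution estimate back onto the low-resolution measurement:

```python
import numpy as np

def enforce_reconstruction(d_hr, d_lr, scale):
    """Correct a high-resolution depth estimate so that block-averaging it
    exactly reproduces the original low-resolution depth map.

    d_hr : (h*scale, w*scale) float array, current high-res estimate
    d_lr : (h, w) float array, original input depth map
    """
    h, w = d_lr.shape
    # model the sensor's pixel binning as block averaging (an assumption)
    d_down = d_hr.reshape(h, scale, w, scale).mean(axis=(1, 3))
    residual = d_lr - d_down
    # spread each low-res residual uniformly over its high-res block
    corr = np.repeat(np.repeat(residual, scale, axis=0), scale, axis=1)
    return d_hr + corr
```

By construction, block-averaging the returned image yields d_lr exactly, so the result is perfectly consistent with the input depth image in the sense stated above.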