Motion artefacts in time-of-flight range imaging are treated as a feature to be measured. Methods for measuring linear radial velocity with range imaging cameras are developed and tested. Using the measured velocity, the range to the position of the target object at the start of the data acquisition period is computed, effectively correcting the motion error. A new phase-based pseudo-quadrature method designed for low-speed measurement measures radial velocity up to ±1.8 m/s with an RMSE of 0.045 m/s and a standard deviation of 0.09-0.33 m/s, and a new high-speed Doppler extraction method measures radial velocity up to ±40 m/s with a standard deviation better than 1 m/s and an RMSE of 3.5 m/s.
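As a hedged illustration of the correction step, the sketch below back-projects a measured range to the start of the acquisition period, assuming constant radial velocity during acquisition and that the reported range corresponds to the temporal mid-point of the acquisition window; the function name and values are illustrative, not taken from the paper.

```python
import numpy as np

def correct_range_for_motion(measured_range_m, radial_velocity_mps, acquisition_time_s):
    """Back-project a range measurement to the start of the acquisition period.

    Assumes (hypothetically) constant radial velocity and that the measured
    range corresponds to the mid-point of the acquisition window, so half of
    the distance traversed during acquisition is removed.
    """
    return measured_range_m - radial_velocity_mps * acquisition_time_s / 2.0

# Example: a target receding at 1.2 m/s during a 40 ms acquisition
r0 = correct_range_for_motion(measured_range_m=2.50,
                              radial_velocity_mps=1.2,
                              acquisition_time_s=0.040)
print(f"range at start of acquisition: {r0:.4f} m")  # ~2.476 m
```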
Time-of-flight (ToF) range cameras illuminate the scene with an amplitude-modulated continuous-wave light source and measure the returning modulation envelope: phase and amplitude. The phase change of the modulation envelope encodes the distance travelled. This technology suffers from measurement errors caused by multiple propagation paths from the light source to the receiving pixel. The multiple paths can be represented as the summation of a direct return, which is the return from the shortest path length, and a global return, which includes all other returns. We develop the use of a sinusoidal illumination pattern from which a closed-form solution for the direct and global returns can be computed in nine frames, under the constraint that the global return varies at a spatially lower frequency than the projected pattern. In a demonstration on a scene constructed to have strong multipath interference, we find the direct return is not significantly different from the ground truth in 33/136 pixels tested, whereas the full-field measurement is significantly different for every pixel tested. The variance in the estimated direct phase and amplitude increases by a factor of eight compared with the standard time-of-flight range camera technique.
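As a rough illustration of the separation idea (not the paper's nine-frame closed form), the sketch below assumes the complex measurement at each pattern shift is the direct phasor scaled by the known pattern intensity plus an unmodulated global phasor, since the global return is taken to be spatially low frequency; the direct and global phasors are then recovered by linear least squares. All values and names are illustrative assumptions.

```python
import numpy as np

# Hypothetical per-pixel model: m_k = p_k * D + G, where p_k is the known
# sinusoidal pattern intensity for shift k (modulating only the direct
# return) and D, G are the complex direct and global phasors to recover.
pattern_shifts = np.array([0.0, np.pi / 2, np.pi])      # three pattern phases
p = 0.5 * (1.0 + np.cos(pattern_shifts))                # pattern intensity at this pixel

# Simulated ground truth for the example
D_true = 0.8 * np.exp(1j * 1.2)   # direct return: amplitude 0.8, phase 1.2 rad
G_true = 0.3 * np.exp(1j * 2.5)   # global return: amplitude 0.3, phase 2.5 rad
m = p * D_true + G_true           # complex measurement for each pattern shift

# Linear least squares for [D, G]
A = np.column_stack([p, np.ones_like(p)]).astype(complex)
(D_est, G_est), *_ = np.linalg.lstsq(A, m, rcond=None)

print(np.abs(D_est), np.angle(D_est))   # ≈ 0.8, 1.2
print(np.abs(G_est), np.angle(G_est))   # ≈ 0.3, 2.5
```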
Time-of-flight range imaging cameras are capable of acquiring depth images of a scene. Some algorithms require these cameras to be run in "raw mode", where any calibrations from the off-the-shelf manufacturer are lost. The calibration of the MESA SR4000 is herein investigated, with an attempt to reconstruct the full calibration. Possession of the factory calibration enables calibrated data to be acquired and manipulated even in "raw mode". This work is motivated by the problem of motion correction, in which the calibration must be separated into component parts to be applied at different stages in the algorithm. There are also other applications that require multiple modulation frequencies, such as multipath interference correction; the other frequencies can be calibrated in a similar way, using the factory calibration as a base. A novel technique for capturing the calibration data is described: a retro-reflector is used on a moving platform, which acts as a point source at a distance, resulting in planar waves on the sensor. A number of calibrations are retrieved from the camera, and are then modelled and compared to the factory calibration. When comparing the factory calibration to both the "raw mode" data and the calibration described herein, a root mean squared error improvement of 51.3 mm was seen, with a standard deviation improvement of 34.9 mm.
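As a hedged illustration of one calibration component, the sketch below fits a harmonic ("wiggling") range-error model to measured-versus-reference range pairs by linear least squares; the model form, modulation frequency, harmonic orders, and data are illustrative assumptions, not the SR4000 factory calibration.

```python
import numpy as np

c = 299_792_458.0          # speed of light, m/s
f_mod = 30e6               # assumed modulation frequency, Hz
d_amb = c / (2 * f_mod)    # unambiguous range, ~5 m

def design_matrix(measured_range, orders=(1, 2, 3, 4)):
    """Columns: constant offset plus sin/cos harmonics of the phase-equivalent range."""
    phase = 2 * np.pi * measured_range / d_amb
    cols = [np.ones_like(measured_range)]
    for k in orders:
        cols += [np.sin(k * phase), np.cos(k * phase)]
    return np.column_stack(cols)

# measured_range and reference_range would come from the retro-reflector rig;
# here they are simulated with an offset plus a fourth-harmonic error term.
measured_range = np.linspace(0.5, 4.5, 200)
reference_range = measured_range - (0.02 * np.sin(4 * 2 * np.pi * measured_range / d_amb) + 0.01)

A = design_matrix(measured_range)
coeffs, *_ = np.linalg.lstsq(A, measured_range - reference_range, rcond=None)
corrected = measured_range - A @ coeffs
print(np.sqrt(np.mean((corrected - reference_range) ** 2)))  # residual RMSE in metres
```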
Time-of-flight range imaging systems typically require several raw image frames to produce one range, or depth, image. The problem of motion blur in traditional imaging is compounded in time-of-flight imaging because motion between these raw frames leads to invalid data. The combined use of coded exposure and optical flow techniques is investigated for correcting both motion blur within each raw frame and errors arising from changes between frames. Examples of the motion correction in real range measurements are also given, along with comparisons to reference data, showing a significant improvement over noncorrected output.
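As a hedged one-dimensional illustration of the coded-exposure part of the correction, the sketch below blurs a signal with a binary flutter-shutter code and inverts the blur with a Wiener-style filter; the code, signal, and regularization constant are illustrative assumptions rather than the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random(256)                       # one image row (intensity profile)

# Binary flutter-shutter code used as the blur kernel along the motion direction
code = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1], dtype=float)
psf = np.zeros_like(signal)
psf[:code.size] = code / code.sum()

# Circular blur via the FFT (noiseless for clarity)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))

# Wiener-style inversion; a coded exposure keeps |PSF| away from zero across
# frequencies, so the inversion stays well conditioned. A larger eps would be
# needed in the presence of noise.
H = np.fft.fft(psf)
eps = 1e-6
deblurred = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print(np.max(np.abs(deblurred - signal)))      # small reconstruction error in this noiseless example
```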
Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but enjoys limited success because the interference is highly scene dependent. An alternative approach, based on separating the strongest and weaker sources of light returned to each pixel prior to range decoding, is more successful, but has only been demonstrated on custom-built range cameras and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to two unmodified off-the-shelf range cameras, the Mesa Imaging SR-4000 and the Canesta Inc. XZ-422 Demonstrator. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.
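As a hedged sketch of the kind of per-pixel optimization the abstract alludes to, the code below fits a two-return model to complex measurements at several modulation frequencies using nonlinear least squares; the frequencies, parameterization, initial guess, and solver choice are assumptions for illustration, not the paper's exact method.

```python
import numpy as np
from scipy.optimize import least_squares

c = 299_792_458.0
freqs = np.array([15e6, 20e6, 30e6])                 # assumed modulation frequencies, Hz

def model(params, f):
    """Complex return for two components with amplitudes a_j and path ranges d_j."""
    a1, d1, a2, d2 = params
    return (a1 * np.exp(1j * 4 * np.pi * f * d1 / c)
            + a2 * np.exp(1j * 4 * np.pi * f * d2 / c))

def residuals(params, f, measured):
    diff = model(params, f) - measured
    return np.concatenate([diff.real, diff.imag])    # stack into a real residual vector

# Simulated pixel: strong direct return at 2.0 m plus weaker stray return at 3.1 m
measured = model([1.0, 2.0, 0.35, 3.1], freqs)

fit = least_squares(residuals, x0=[0.8, 1.8, 0.2, 3.5], args=(freqs, measured))
print(fit.x)   # ≈ [1.0, 2.0, 0.35, 3.1]
```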
KEYWORDS: Signal to noise ratio, Multiplexing, Light sources, Computer programming, Spectroscopy, Light, Data acquisition, Modulation, Hyperspectral imaging, Imaging systems
A hyperspectral imaging system is in development. The system uses spatially modulated Hadamard patterns to encode image information, with implicit stray and ambient light correction, and a reference beam to correct for source light changes over the spectral image capture period. In this study we test the efficacy of the corrections and the multiplex advantage for our system. The signal-to-noise ratio (SNR) was used to demonstrate the advantage of spatial multiplexing in the system and to observe the effect of the reference beam correction. The statistical implications of the data acquisition technique, of illumination source drift, and of its correction were derived. The reference beam correction was applied either per spectrum before Hadamard decoding or to all spectra in the image after decoding. The reference beam correction made no fundamental change to the SNR; we therefore conclude that light source drift is minimal and that other, possibly rectifiable, error sources are dominant. The multiplex advantage was demonstrated, ranging from a minimum SNR boost of 1.5 (600-975 nm) to a maximum of 11 (below 500 nm), with intermediate SNR boosts observed at 975-1700 nm. The large variation in SNR boost is also attributed to other error sources.
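As a hedged illustration of the Hadamard multiplex advantage under detector-limited noise, the sketch below simulates direct and S-matrix-multiplexed acquisition of a spectrum and compares the decoded noise; the matrix order, noise level, and spectrum are illustrative assumptions, not the instrument's parameters.

```python
import numpy as np
from scipy.linalg import hadamard

n = 63                                              # S-matrix order (2^k - 1)
H = hadamard(n + 1)
S = (1 - H[1:, 1:]) // 2                            # binary S-matrix derived from the Hadamard matrix

rng = np.random.default_rng(1)
spectrum = rng.uniform(1.0, 2.0, n)                 # "true" spectral intensities
sigma = 0.05                                        # detector-limited noise per readout

# Direct measurement: one spectral channel at a time
direct = spectrum + rng.normal(0, sigma, n)

# Multiplexed measurement: about half the channels summed per readout, then decoded
y = S @ spectrum + rng.normal(0, sigma, n)
decoded = np.linalg.solve(S.astype(float), y)

print(np.std(direct - spectrum))                    # ≈ sigma
print(np.std(decoded - spectrum))                   # ≈ 2*sigma/sqrt(n), i.e. roughly sqrt(n)/2 smaller
```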