Three-dimensional (3D) imaging with structured light is crucial in diverse scenarios, ranging from intelligent manufacturing and medicine to entertainment. However, current structured light methods rely on projector–camera synchronization, which limits the use of affordable imaging devices and hinders consumer applications. In this work, we introduce an asynchronous structured light imaging approach based on generative deep neural networks that relaxes the synchronization constraint and addresses the resulting fringe pattern aliasing without relying on any a priori constraint of the projection system. To this end, we propose a generative deep neural network with a U-Net-like encoder–decoder architecture that learns the underlying fringe features directly by exploiting the intrinsic priors in aliased fringe patterns. The network is trained within an adversarial learning framework and supervised by a statistics-informed loss function. We evaluate the performance in terms of intensity, phase, and 3D reconstruction, and show that the trained network can separate aliased fringe patterns to produce results comparable with the synchronous case: the absolute error is no greater than 8 μm, and the standard deviation does not exceed 3 μm. Evaluation results on multiple objects and pattern types indicate that the approach generalizes to arbitrary asynchronous structured light scenes.
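Once aliased fringes have been separated, phase retrieval from phase-shifted fringe patterns typically follows the standard N-step phase-shifting formula. The following is a minimal numpy sketch of that standard step, not code from the paper; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def wrapped_phase(patterns):
    """Standard N-step phase-shifting algorithm (illustrative sketch).

    patterns: array of shape (N, H, W) with I_n = A + B*cos(phi + 2*pi*n/N).
    Returns the wrapped phase phi in (-pi, pi] per pixel.
    """
    n = patterns.shape[0]
    deltas = 2.0 * np.pi * np.arange(n) / n
    # Sum_n I_n * sin(delta_n) = -(N/2) * B * sin(phi), hence the minus sign.
    num = -np.tensordot(np.sin(deltas), patterns, axes=(0, 0))
    # Sum_n I_n * cos(delta_n) = (N/2) * B * cos(phi).
    den = np.tensordot(np.cos(deltas), patterns, axes=(0, 0))
    return np.arctan2(num, den)
```

For example, four synthetic fringes generated from a known phase map are recovered exactly (up to wrapping) by this formula.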
KEYWORDS: Calibration, Cameras, Distortion, 3D modeling, 3D acquisition, Stereoscopic cameras, 3D metrology, Visualization, Optical engineering, Visual process modeling
Camera calibration is crucial for geometric vision-based three-dimensional (3D) deformation measurement tasks. Among existing calibration techniques, those based on planar targets have attracted much attention in the community due to their flexibility and reliability. Our study proposes a calibration technique that obtains high-accuracy internal and external parameters from low-cost, ordinary planar patterns. The proposed method determines the optimal internal parameters for each camera by refining the 3D coordinates of planar control points, where an analytic model of optical distortion is presented so that lens distortion can be corrected directly in the subsequent external calibration and underlying 3D reconstruction. External parameters are estimated within a bundle adjustment framework, carefully designed around the proposed distortion correction model and depth parameterization. In contrast to existing techniques, the proposed method achieves high-accuracy calibration with ordinary targets rather than well-designed, precisely fabricated ones. We evaluated the proposed method via a calibration performance analysis and a displacement measurement; both results demonstrate its accuracy and robustness.
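The abstract does not specify the analytic distortion model; a widely used choice in planar-target calibration is the Brown–Conrady radial–tangential model, sketched below in numpy. The coefficients `k1`, `k2`, `p1`, `p2` and the function name are illustrative, not taken from the paper.

```python
import numpy as np

def distort(xy, k1, k2, p1, p2):
    """Apply a Brown-Conrady lens distortion model (illustrative sketch).

    xy: normalized image coordinates, shape (N, 2).
    k1, k2: radial distortion coefficients; p1, p2: tangential coefficients.
    Returns the distorted normalized coordinates, shape (N, 2).
    """
    x, y = xy[:, 0], xy[:, 1]
    r2 = x ** 2 + y ** 2
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2          # radial scaling term
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x ** 2)
    yd = y * radial + p1 * (r2 + 2.0 * y ** 2) + 2.0 * p2 * x * y
    return np.stack([xd, yd], axis=1)
```

With all coefficients zero the mapping is the identity; a positive `k1` pushes points radially outward (barrel distortion of the ideal points).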
Measuring surface deformation of objects with natural patterns using digital image correlation (DIC) is difficult because of poor pattern quality and the challenge of discriminative pattern matching. Existing DIC studies predominantly focus on artificial speckle patterns and seldom address the inevitable natural texture patterns. We propose a recursive-iterative method based on salient features to measure the deformation of objects with natural patterns. The method selects salient features according to the local intensity gradient and then computes their displacements by incorporating the inverse compositional Gauss–Newton (IC-GN) algorithm into the classic image pyramidal computation. Compared with existing IC-GN-based DIC techniques, the use of discriminative subsets avoids displacement computation at pixels with poor spatial gradient distribution. Furthermore, the recursive computation based on the image pyramid can estimate the displacements of the features without the need for initial value estimation, so the method remains effective even for large displacement measurements. The results of simulations and experiments prove the method's feasibility, demonstrating that it is effective for deformation measurement based on natural texture patterns.
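The recursive pyramid idea the abstract describes can be sketched minimally: downsample the image into a pyramid, estimate displacement at the coarsest level, and scale each estimate up to initialize the next finer level. This is an assumption-laden illustration of the general coarse-to-fine scheme, not the paper's algorithm; simple 2x2 block averaging stands in for the actual smoothing kernel, and the IC-GN refinement performed at each level is omitted.

```python
import numpy as np

def build_pyramid(img, levels):
    """Build a coarse-to-fine image pyramid (illustrative sketch).

    Each level halves resolution by 2x2 block averaging.
    pyr[0] is the finest (original) level, pyr[-1] the coarsest.
    """
    pyr = [img]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2  # trim odd edges
        a = a[:h, :w]
        pyr.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] +
                           a[0::2, 1::2] + a[1::2, 1::2]))
    return pyr

def propagate_displacement(u_coarse):
    """A displacement estimated at level k initializes level k-1
    after scaling by the downsampling factor of 2."""
    return 2.0 * u_coarse
```

Because each level's estimate seeds the next, the finest-level solver only ever refines a small residual displacement, which is what lets the scheme handle large motions without an explicit initial guess.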
The strain errors in stereo-digital image correlation (DIC) due to camera calibration were investigated using precisely controlled numerical experiments and real experiments. Three-dimensional rigid body motion tests were conducted to examine the effects of camera calibration on the measured results. For a fully accurate calibration, rigid body motion causes negligible strain errors. However, for inaccurately calibrated camera parameters and a short working distance, rigid body motion leads to strain errors exceeding 50 με, which significantly affect the measurement. In practical measurements, a fully accurate calibration is unattainable; therefore, considerable attention should be paid to avoiding these types of errors, especially in high-accuracy strain measurements, and large rigid body motions should be avoided in both two-dimensional DIC and stereo-DIC.