Calibration method of a three-dimensional scanner based on a line laser projector and a camera with 1-axis rotating mechanism
Van-Tung Ha, Van-Phu Do, Byung-Ryong Lee
Abstract

This study introduces a scanning system that utilizes a motor platform to mount a line laser and a camera. Two crucial parameters are needed to place the three-dimensional information extracted from each frame in a common frame: a rotation axis and a specific point along this axis. A calibration approach for rotation axis identification is presented to determine these parameters. The rotation axis corresponds to the normal vector of the plane in which the camera moves and defines the rotation matrix during the scanning process. The identified point on the rotation axis serves as the center of the translation and is the center of the camera’s circular trajectory. A comprehensive point cloud is generated by assembling multiple frames using the scanner’s rotation angle. Experimental findings substantiate the method’s efficacy in improving the system’s plane reconstruction and accuracy, and the scanning quality is much better than that of the previous approach.

1. Introduction

Laser scanning is a widely adopted technique for generating three-dimensional (3D) imagery; it is valued for its simplicity, rapidity, and adaptability in capturing a diverse array of objects.1–7 These scanning systems find utility when integrated into robots for tasks such as damage assessment, point selection, and navigation within confined spaces like tunnels, pipelines, and indoor environments.4,8–15 Nonetheless, many cost-effective 3D scanners possess restricted scanning areas. To surmount this challenge, researchers have proposed solutions involving rotary mechanisms,16–19 as well as systems combining rotating lasers and static cameras.20 These methods entail various limitations, including high costs, applicability limited to movable reconstruction objects, and usage constraints tied to objects falling within predetermined camera fields of view.

This paper presents a 3D scanner with a line laser projector and a camera attached to a rotating mechanism. By rotating the scanner at specific angles and combining the 3D data from each angle, we scan an entire scene without any limitations on the scanning area. This rotating process allows us to check objects with multiple sides and even their inner spaces.

The scanning process involves two primary steps. First, line scanning of the object is conducted using the laser triangulation technique to generate 3D data from a stationary position. Subsequently, the scanner undergoes rotation and amalgamates data captured from diverse angles to construct a comprehensive 3D model. The laser triangulation technique relies on a stereo system model to precisely ascertain the intersection points of the laser line pattern with the object’s surface on the camera sensor. Typically, the camera origin and rotation axis are misaligned to ensure that the laser line remains within the camera’s field of view during rotation.

In the second scanning step, the integration of multi-frame data is facilitated by employing the translation relationship between different-time camera coordinate systems and the rotational transformation. This approach enables the seamless compilation of multiple frames, and yields an output that is more exhaustive and precise. This process contributes to an enhanced and efficient scanning procedure.

As shown in Fig. 1, our system places the line laser projector and camera at a distance from the rotation axis on opposite sides of the turntable. The laser’s position can be freely designed as long as the laser line remains within the camera’s field of view during the rotation process. We propose a method for identifying the rotation axis, which remains constant during rotational motion, and a point on the rotation axis. The rotation axis is identified as the perpendicular vector to a plane within which the camera’s point of origin undergoes motion. Additionally, a designated point on this rotation axis is defined as the central point of a 3D circle, which symbolizes the trajectory of the camera coordinate system’s origins.

Fig. 1. 3D model of rotating-scan system.

This paper introduces a new calibration technique for a 3D scanner that uses a line laser projector and a camera with a 1-axis mechanism. Using a planar calibration pattern, the method extracts the axis of rotation and a point belonging to the axis. Images of the calibration board are taken at several different poses. By relating the actual and pixel dimensions of the calibration board, the origin of the camera frame in the calibration board coordinate frame is determined.

The path of the camera’s origin and its holding plane are fitted to an equation of a circle in 3D space. From this result, we extract the necessary information about the rotation axis and the point on the rotation axis for recovering point clouds from each scanning angle and combining them into a world coordinate. Finally, the exported result is optimized using a non-linear optimization method based on the point-to-point relationship.

This paper presents an accurate calibration method between the line laser projector and camera platform with a 1-axis rotating mechanism for comprehensive and precise 3D scanning. The remainder of this paper is structured as follows: Sec. 2 reviews previously available calibration methods; Sec. 3 describes the overall system and scanning process; Sec. 4 presents the calibration methods; Sec. 5 reports on calibration results and experiments; and finally, Sec. 6 offers the conclusions.

2. Related Work

The reconstruction of object surfaces encompasses a range of algorithms characterized by distinct architectures and measurement methodologies. A prevalent approach involves the rotational manipulation of objects coupled with capturing images from varying viewpoints. Broadly, such scanning systems can be categorized into two primary types: those characterized by a rotation axis within the camera’s field of view and those with the rotation axis positioned outside the camera’s field of view. Calibration principles are contingent upon the specific rotational configuration employed in each context.

In their scholarly investigation, Cai et al.18 introduced a calibration methodology for panoramic 3D shape measurement systems wherein the rotation axis resided within the camera’s field of view. This technique employs an auxiliary camera to enhance the precision of the rotation axis vector calibration while streamlining the calibration process. The procedure entails determining the rotation axis vector of the turntable employing a secondary camera (Camera-2), establishing the transformation relationship between coordinate systems from multiple views, and executing the rotation registration predicated on this established relationship. Although this approach advances the accuracy of point cloud registration, it presents certain limitations, including the necessity for an additional camera and meticulous alignment of the camera and projector concerning the turntable axis.

In a parallel study, Bisogni et al.21 proposed a dual-axis rotary table system deploying a single checkerboard and multiple images at varying poses to derive insights into both axes. The method involves affixing a checkerboard to the table and capturing several images with diverse poses; these are then harnessed in an optimization algorithm to discern the orientations and positions of the two axes. The paper also introduced a metric for evaluating the calibration quality, quantified through the average mean reprojection error. However, a limitation of this approach surfaces when it is extended to encompass multi-axis turntables, as each axis mandates separate calibration conducted consecutively. Furthermore, the paper lacks comprehensive specifications concerning the positioning of the calibration target and the optimal number of images required for attaining satisfactory outcomes. Experiments encompassing diverse target placements and image acquisition strategies might be imperative to attaining precise calibration results.

Specific calibration techniques have been proposed for scenarios in which the rotation axis lies outside the camera’s field of view. Zhao et al.22 presented a calibration method for a self-rotating linear-structured-light (LSL) scanning system geared toward 3D reconstruction. Grounded in plane constraints, this technique involves collecting point cloud data from a planar target and aligning it with the fundamental plane equation principle. The objective is to calibrate the position parameters between the LSL module’s coordinate system and the coordinate system of the self-rotating LSL scanning and 3D reconstruction system. A transformation equation is derived from the optimized position parameters. Experimental findings validate the efficacy of this approach in enhancing measurement accuracy within the self-rotating LSL scanning and 3D reconstruction system. However, it was observed that issues stemming from initial geometric factor values based on 3D modeling and estimation can lead to computationally demanding challenges. Lee et al.20 devised a model encompassing a rotating line laser and a stationary camera featuring an expansive field of view. A specialized calibration board was utilized in this model to ascertain changes in the laser plane during rotation, hinging on the laser’s rotation axis within the camera coordinate system. Employing a cone model, the authors estimated the rotation axis between the camera and the rotating line laser coordinate systems. This cone model proves effective because its central axis aligns with the motor’s rotation axis, facilitating precise estimation and improved calibration outcomes. However, the accuracy of this calibration depends on the precise estimation of the rotation axis relative to the camera coordinates. Triangulation is used to measure the object depth, with attention given to factors potentially affecting rotation axis accuracy, such as mechanical fabrication errors, line laser misalignment, and optical characteristics. This approach yields accurate 3D data from the scanning system, although experimental results suggest that the system’s accuracy remains limited, with errors exceeding 4 mm.

Niu et al.23 introduced a method using two checkerboards and an additional camera to establish the relationship between a camera fixed on a rotation axis and its orientation. Experimental validation underscores the proposed method’s high precision and flexibility. Notably, the calibration procedure necessitates two checkerboards and an additional camera with an extensive viewing angle, potentially posing limitations in certain practical applications. Kurnianggoro et al.24 innovated a calibration technique for a rotating two-dimensional (2D) laser scanner system, aiming to extract the rotation axis and radius via a planar calibration pattern. This process entails scanning a planar calibration pattern from various poses to collect measurement data points at specific rotation angles. The measured points are subsequently treated as virtual points, rotated by their corresponding angles to restore their original positions. A linear equation is established based on the relationship among the virtual end, initial point, rotation axis, rotation radius, and parameters of the calibration plane.

3. Overall System

3.1. Hardware Configuration

Our scanning setup encompasses three essential components: a line laser projector, a camera, and a platform affixed to a motor. For the line laser, we employed a FLEXPOINT MVnano model featuring a 660 nm wavelength and a fan angle span of 90 deg. The camera utilized was a Basler acA1440-220um, coupled with an 8 mm focal lens. The design of the platform was tailored to secure the line laser, camera, and motor effectively. To facilitate motorized rotation, we incorporated DGM60-AZAK/AZD-KD Hollow Rotary Actuators endowed with an adjustable resolution. In this particular configuration, a rotation step of 0.05 deg was employed.

3.2. Scanning System Processing

Our approach involves executing three distinct processes to transform 2D image data into 3D information, as illustrated in Figs. 2(a) and 2(b). The scanning procedure comprises three primary constituents, and the calibration procedures establish the transformation relationships between the camera and the other two devices (the laser and the motor).

Fig. 2. (a) Calibration process; (b) scanning process.

When we use a laser to scan an object, it illuminates a specific position on the object’s surface at every angle of the motor. As the motor moves through a series of adjacent consecutive positions, the laser produces a sequence of corresponding line images on the object’s surface. By extracting the coordinates of these laser lines from the photos and combining them with calibration data and the motor’s angular value, we determine the object’s coordinate value in 3D space. This data can then be used for simulation or saved in computer memory.

4. Calibration Method

4.1. Construction of the Target Frame

The image in Fig. 3(a) illustrates the coplanar calibration board employed in this study; it comprises 121 circular features organized in an 11 by 11 grid. These circles appear as ellipses in the captured images. Our calibration process utilizes the circle centers as reference points. The central circle’s center establishes the origin of the intended coordinate system. We analyze the four largest circular features surrounding this origin to determine the orientation axes. Figure 3(b) presents the resultant target frame.

Fig. 3. (a) Image of calibration board; (b) calibration board coordinate frame.

4.2. Calibration of Camera

Camera calibration plays a crucial role in 3D reconstruction, enabling the extraction of metric details from 2D images. Its primary objective is to establish the intrinsic parameters that facilitate the mapping of 3D points from the global coordinate system to 2D image coordinates. Numerous techniques have been introduced for camera calibration.25 Notably, Zhang’s approach26 is widely adopted due to its remarkable accuracy and user-friendly procedure.
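As a concrete illustration, the following Python sketch performs Zhang-style calibration with OpenCV. The 11 by 11 grid matches the board in Fig. 3, but the 10 mm circle spacing and the file pattern are assumptions, and OpenCV’s symmetric-grid finder stands in for the paper’s own feature extraction.

```python
# Minimal sketch of Zhang-style camera calibration with OpenCV.
# Assumed (not from the paper): 10 mm circle spacing, files named board_*.png.
import glob
import cv2
import numpy as np

ROWS, COLS, SPACING_MM = 11, 11, 10.0

# Reference 3D coordinates of the circle centers on the planar board (Z = 0).
obj_grid = np.zeros((ROWS * COLS, 3), np.float32)
obj_grid[:, :2] = np.mgrid[0:COLS, 0:ROWS].T.reshape(-1, 2) * SPACING_MM

obj_points, img_points = [], []
for path in glob.glob("board_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(
        gray, (COLS, ROWS), flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        obj_points.append(obj_grid)
        img_points.append(centers)

# Intrinsic matrix K, distortion coefficients, and one (R_C, T_C) per image.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)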

4.3. Calibration of Camera-Laser

The process of camera-laser calibration establishes the mathematical connection between the laser plane and the camera’s coordinate system, as depicted in Fig. 4. The process involves identifying at least three points within the camera’s field of view that lie on the laser plane. These points’ coordinates are derived from the image of the laser line. To achieve this, we leverage a calibration board, which aids in establishing the transformation between the camera’s coordinate system and the world coordinate system (specifically, the calibration board’s coordinate system). Each laser line pose is captured in two images:27 one containing the laser line and another featuring the calibration references.

Fig. 4. Camera-laser model.

As illustrated in Fig. 5, the camera-laser calibration process unfolds by extracting the laser points’ coordinates on the calibration board through homography, which links feature points on the board to their respective image locations. The perspective-n-point algorithm helps deduce the transformation relationship. By merging the laser points’ coordinates with transformation data, we ascertain their placement within the camera’s coordinate framework. A least-squares approach to fit these points to a plane equation allows us to determine the camera-laser calibration particulars swiftly, given as

Eq. (1)

$l:\; Ax + By + Cz + D = 0.$
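A minimal sketch of this fit, and of the triangulation it enables during scanning, is shown below; the helper names fit_plane and pixel_to_3d are ours, not functions from the paper.

```python
# Sketch of the camera-laser plane fit (Eq. 1) and ray-plane triangulation.
import numpy as np

def fit_plane(points):
    """Least-squares plane Ax + By + Cz + D = 0 through an Nx3 point set."""
    centroid = points.mean(axis=0)
    # The smallest right singular vector of the centered points is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n @ centroid          # (A, B, C) and D

def pixel_to_3d(u, v, K, n, d):
    """Intersect the camera ray through pixel (u, v) with the laser plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera frame
    t = -d / (n @ ray)               # scale at which the ray meets the plane
    return t * ray                   # 3D laser point in the camera frame
```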

Fig. 5. Process of camera-laser calibration.

Numerous algorithms have been put forth for the extraction of laser peaks, each presenting its own merits and drawbacks.28 In our study, we opted for the Blais and Rioux Detector (BR4)29 due to its proficient performance when dealing with stripe widths characterized by Gaussian width parameters exceeding 2 pixels. Additionally, it demonstrates minimal error rates.
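The following is a hedged sketch of a BR4-style detector applied to one image row, under our reading of Ref. 29; the intensity threshold is an assumption.

```python
# Hedged sketch of a Blais-Rioux (BR4) subpixel peak detector for one row.
import numpy as np

def br4_peak(row, min_intensity=50):
    """Return the subpixel column of the laser stripe in a single image row."""
    f = row.astype(np.float64)
    # g(i) = f(i-2) + f(i-1) - f(i+1) - f(i+2): its zero crossing marks the peak.
    g = np.convolve(f, [-1, -1, 0, 1, 1], mode="same")
    i = int(np.argmax(f))            # coarse peak: brightest pixel
    if f[i] < min_intensity or i + 1 >= len(g):
        return None                  # no reliable stripe in this row
    denom = g[i] - g[i + 1]
    # Linear interpolation of the zero crossing between columns i and i+1.
    return i + g[i] / denom if denom != 0 else float(i)
```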

4.4. Calibration of Camera-Motor

4.4.1. Overview

The positional correlation between the rotational orientation and the origin of the camera’s coordinate system remains consistent throughout its motion. This stability underscores the necessity to define both the rotation axis and a point within the camera’s coordinate system for calibrating the rotation axis.

During the motion, the camera and laser line traverse a shared plane as the motor rotates. We designate the axis of rotation as u, represented by the perpendicular vector of this plane. Given that a particular point within the camera-laser line system follows a 3D circular orbit as the motor moves, we establish a point on the rotation axis u by identifying the center of the circular trajectory traced by the camera coordinate system’s origin during this motion.24 We designate C as a reference point on the rotation axis. In tandem with a known angular change, it is possible to convert any 3D point situated within the camera’s coordinate system at a specific angular position ϕ into a pre-defined camera frame, referred to as the absolute coordinate system. The initial angular position at which the system initiates motion is the reference for this absolute coordinate grid (ϕ=0).

This process involves defining A^F as the coordinate of point A in spatial frame F and Rϕ as the rotation matrix that rotates a point by ϕ degrees from M0 to Mϕ. Assume that two frames, M0 and Mϕ, are acquired during the motor’s motion, as depicted in Fig. 6. M0 serves as both the starting frame and the absolute coordinate system. Subsequently, Mϕ is the frame achieved following the motor’s rotation by ϕ degrees. A process is proposed to transform the coordinates of any point within the Mϕ coordinate system to the absolute M0 frame. Let us define a laser point A within the Mϕ coordinate system, denoted as Aϕ. We aim to determine its coordinates within the absolute coordinate system M0.

Fig. 6. Relationship between a laser point in its associated coordinate system and the established absolute frame.

Introduce A′ as a virtual point of A that satisfies Eq. (2). Let C denote a point on the rotation axis. Applying vector math to vector v3, we ascertain that v3 conforms to Eq. (3). The rotation matrix Rϕ, which rotates frame M0 by ϕ degrees to Mϕ, establishes the relationship between v5 and t0 through Eq. (4). Consequently, Eqs. (2) and (4) enable us to deduce Eq. (5). A further application of vector math leads to Eq. (6). In culmination, the coordinates of laser point A in the camera coordinate system Mϕ are re-expressed using Eq. (7) within the absolute coordinate system M0. The expressions are given as

Eq. (2)

$v_2^0 = v_1^{\phi},$

Eq. (3)

$v_3^0 = v_2^0 + t^0,$

Eq. (4)

$v_5^0 = R_\phi \, t^0,$

Eq. (5)

$v_4^0 = R_\phi \, v_3^0,$

Eq. (6)

$v_6^0 = v_4^0 - t^0,$

Eq. (7)

$v_6^0 = R_\phi \left( v_2^0 + t^0 \right) - t^0.$

The 3D transformation relationship between point A in a particular frame Mϕ and the absolute reference frame M0 comprises three successive steps. First, there is a translation by the vector t0 from the given coordinate system Mϕ to a coordinate system with its origin positioned at point C along the rotation axis. Subsequently, vector $v_3^0$ is rotated by the appropriate angle via the rotation matrix Rϕ, resulting in vector $v_4^0$. Finally, vector $v_4^0$ is translated from point C to the origin of the absolute frame through an inverse translation operation,24 given as

Eq. (8)

$A^0 = R_\phi \left( A^\phi + t^0 \right) - t^0.$

During the scanning process, we capture a series of consecutive frames. By converting all pixels from their current frame to a destination frame [as shown in Eq. (8)] and combining the results, we reconstruct the 3D scene.

In Eq. (8), the rotation matrix Rϕ has nine unknown components, and the translation vector has three more, so 12 variables must be specified. To reduce the complexity of this expression, we re-represent the rotation using the definition of a rotation matrix about an arbitrary axis u24,30 [as shown in Eq. (9)]. Substituting this rotation matrix into the original expression reduces the number of unknowns to seven: three for the coordinates of the point on the rotation axis, three for the components of the rotation axis vector, and one for the rotation angle, which is known. The matrix is given as

Eq. (9)

$R_{u,\phi} = \begin{bmatrix} \cos\phi + u_x^2(1-\cos\phi) & u_x u_y(1-\cos\phi) - u_z\sin\phi & u_x u_z(1-\cos\phi) + u_y\sin\phi \\ u_y u_x(1-\cos\phi) + u_z\sin\phi & \cos\phi + u_y^2(1-\cos\phi) & u_y u_z(1-\cos\phi) - u_x\sin\phi \\ u_z u_x(1-\cos\phi) - u_y\sin\phi & u_z u_y(1-\cos\phi) + u_x\sin\phi & \cos\phi + u_z^2(1-\cos\phi) \end{bmatrix}.$
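A short Python sketch of Eqs. (8) and (9) follows; the helper names axis_angle_matrix and to_absolute_frame are ours, and phi is taken in radians.

```python
# Sketch of Eqs. (8)/(9): map a point captured at motor angle phi back to M0.
import numpy as np

def axis_angle_matrix(u, phi):
    """Rotation about the unit axis u by angle phi (Rodrigues form, Eq. 9)."""
    ux, uy, uz = u / np.linalg.norm(u)
    c, s = np.cos(phi), np.sin(phi)
    return np.array([
        [c + ux*ux*(1 - c),     ux*uy*(1 - c) - uz*s,  ux*uz*(1 - c) + uy*s],
        [uy*ux*(1 - c) + uz*s,  c + uy*uy*(1 - c),     uy*uz*(1 - c) - ux*s],
        [uz*ux*(1 - c) - uy*s,  uz*uy*(1 - c) + ux*s,  c + uz*uz*(1 - c)],
    ])

def to_absolute_frame(A_phi, u, t0, phi):
    """Eq. (8): A0 = R_phi (A_phi + t0) - t0, with t0 pointing to C."""
    return axis_angle_matrix(u, phi) @ (A_phi + t0) - t0
```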

4.4.2. Calibration of camera-motor

Our system maintains a consistent angle for each rotation step throughout the rotation axis calibration and scanning procedures. During the calibration phase, the camera captures an image of the calibration board at each angle θ. Notably, the calibration board remains fixed during this calibration process. By placing the observation point at the origin of the calibration board’s coordinate system, it becomes evident that the camera’s coordinate system undergoes circular motion.

4.4.3. Establish the camera’s origin coordinate within the calibration board

After calibrating the camera, we determine the transformation matrix for each camera pose. This matrix consists of a rotation matrix RC and a translation vector TC. These values are calculated using the image of the calibration board and must satisfy the conditions outlined in Ref. 31, given as

Eq. (10)

$X_C = R_C X_W + T_C.$

The camera’s optical center is the origin of the camera coordinate system, so its coordinates are $(0, 0, 0)^T$. Given that we know the values of RC and TC from our previous calculations, we quickly determine the coordinates XW of the camera’s optical center in the calibration board coordinate system as

Eq. (11)

$X_W = -R_C^{-1} T_C.$

Each image of the calibration board provides us with the coordinates of the camera’s optical center at each motor angle. The collection of all of these points, denoted as $c = \{O_{C0}, O_{C1}, \ldots, O_{Cn}\}$, forms an arc in space, as illustrated in Fig. 7.
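A sketch of this step is given below, assuming the per-pose rvecs and tvecs come from OpenCV; the helper name camera_origins is ours.

```python
# Sketch of Eq. (11): recover the camera origin in the board frame for each
# motor angle, building the arc of Fig. 7.
import cv2
import numpy as np

def camera_origins(rvecs, tvecs):
    origins = []
    for rvec, tvec in zip(rvecs, tvecs):
        R_C, _ = cv2.Rodrigues(rvec)      # rotation, board frame to camera frame
        X_W = -R_C.T @ tvec.reshape(3)    # Eq. (11): X_W = -R_C^(-1) T_C
        origins.append(X_W)
    return np.array(origins)              # one 3D point per motor angle
```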

Fig. 7. Calibration board coordinate system and camera frame at different times.

4.4.4. Calculate the direction vector in the calibration board’s coordinate frame

Once the coordinates of the camera’s optical center are determined for each motor angle, we utilize the least-squares method to estimate the plane p that encompasses this set of points. This computation is carried out following Eq. (12).

The perpendicular bisector of any chord formed by two points on an arc always passes through the center of the 3D circle. Therefore, the intersection of two perpendicular bisectors, constructed from any four points on the arc, is the center of the 3D circle. In other words, we can always determine the center of a circle from any set of points on it. Because any chord in 3D space has infinitely many perpendicular bisectors within its orthogonal plane, we need to determine which one contains the circle’s center. This perpendicular bisector is defined as the line of intersection between the plane containing the arc and the perpendicular bisector plane of a chord formed by two points on the arc

Eq. (12)

$p:\; Ax + By + Cz + D = 0.$

To determine the perpendicular plane for any pair of points Oi and Oi+1 on the circle, we first define the normal vector as ni = OiOi+1, the chord direction. The point on the plane is then defined as the midpoint Ki of the line segment connecting Oi and Oi+1. We determine the set k of orthogonal planes by repeatedly choosing two points at random from the collection of points on the arc. Each element in set k intersects the plane p containing the arc in a corresponding median line. By finding the median line for all members of set k, we obtain a set of median lines m.

As shown in Fig. 8, any two perpendicular bisectors from the set m always intersect at the center of the 3D circle. By repeating these steps for all perpendicular bisectors in m, we determine a set c of estimates of the circle’s center of rotation. The coordinates of center point C are obtained by averaging all points in c after filtering out any noise.
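The following sketch mirrors this construction; the pairing strategy and the least-squares line intersection are our illustrative choices.

```python
# Sketch of Figs. 7-8: each pair of arc points yields an in-plane
# perpendicular bisector, and pairs of bisectors vote for the center C.
import numpy as np

def bisector_line(o_i, o_j, plane_normal):
    """Midpoint and direction of the in-plane perpendicular bisector."""
    mid = 0.5 * (o_i + o_j)
    d = np.cross(plane_normal, o_j - o_i)    # in the plane, orthogonal to the chord
    return mid, d / np.linalg.norm(d)

def intersect_lines(p1, d1, p2, d2):
    """Least-squares intersection of two nearly coplanar 3D lines."""
    A = np.stack([d1, -d2], axis=1)          # solve p1 + s*d1 = p2 + t*d2
    st, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return 0.5 * ((p1 + st[0] * d1) + (p2 + st[1] * d2))

def circle_center(origins, plane_normal, pairs):
    """Average the bisector intersections over the supplied index pairs."""
    lines = [bisector_line(origins[i], origins[j], plane_normal)
             for i, j in pairs]
    votes = [intersect_lines(*lines[k], *lines[k + 1])
             for k in range(0, len(lines) - 1, 2)]
    return np.mean(votes, axis=0)            # the paper also filters out noise
```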

Fig. 8. Procedure of estimating the center of the 3D circle.

4.4.5. Determine the direction vector within the camera coordinate system

The rotation axis u and its associated point C, as defined in the earlier section, are established in the context of the calibration board coordinate system. However, because this coordinate frame varies with the position of the calibration board, it is not suitable for reconstruction during scanning. To overcome this limitation, we need to utilize the rotation information in the camera coordinate system as the relationship between them remains consistent.

By utilizing the transformation relationship delineated in Eq. (10), we convert plane p from the calibration board’s spatial frame to that of the camera. Consequently, we derive its normal vector, which is the rotation axis. The point on the rotation axis is obtained through the same transformation.
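In code, this conversion is a one-liner for each quantity, as the sketch below illustrates; note that a direction transforms with the rotation only, whereas a position also picks up the translation of Eq. (10). The helper name is ours.

```python
# Sketch of Sec. 4.4.5: express the rotation axis u (a direction) and the
# center point C (a position) in the camera frame.
import numpy as np

def axis_to_camera_frame(u_board, C_board, R_C, T_C):
    u_cam = R_C @ u_board                    # rotation axis in the camera frame
    C_cam = R_C @ C_board + T_C              # axis point in the camera frame
    return u_cam / np.linalg.norm(u_cam), C_cam
```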

4.4.6. Orientation optimization

The rotation axis and the point on the axis obtained in the previous step are purely geometric estimates. We therefore performed an optimization to obtain the optimal calibration parameters by minimizing the geometrical error in the calibration system, given as

Eq. (13)

$\epsilon = \left\| A^0 - R_\phi \left( A^\phi + t^0 \right) + t^0 \right\|,$

Eq. (14)

$\arg\min_{u,\,C} \sum_{j=0}^{\phi} \sum_{i=1}^{N} \left\| A_i^0 - R_{\phi_j} \left( A_i^{\phi_j} + t^0 \right) + t^0 \right\| + \gamma \left| u \cdot t^0 \right|.$

We define the geometrical error as the point-to-point distance error of Eq. (13), which amalgamates the distance errors of all of the calibration board’s feature points from the first pose with those from the subsequent poses. Incorrectly selected calibration parameters directly increase this point-to-point distance error. Because the rotation axis is the normal vector of the camera’s trajectory plane, the rotation axis and translation vector must be mutually perpendicular. A penalty score is incorporated whenever these two parameters fail to meet the perpendicularity condition. Consequently, the optimization equation is formulated as Eq. (14), where N denotes the number of feature points on the calibration board and γ is a constant influencing the penalty magnitude in cases in which the rotation axis and translation vector deviate from perpendicularity.24

The optimization challenge outlined in Eq. (14) was addressed using a non-linear optimization approach. The MATLAB Optimization Toolbox was employed to solve these minimization problems.
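The paper solves the problem in MATLAB; the following is an equivalent, hedged sketch in Python using SciPy, with an assumed penalty weight gamma and an assumed data layout.

```python
# Hedged sketch of Eq. (14). x packs the unknowns (u, C); board_pts[j][i] is
# assumed to hold feature point i in the camera frame at motor angle phis[j]
# (phis[0] = 0, angles in radians); gamma is an assumed penalty weight.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, board_pts, phis, gamma=1.0):
    u = x[:3] / np.linalg.norm(x[:3])        # rotation axis (unit vector)
    t0 = x[3:]                               # point C on the axis
    res = []
    for pts_phi, phi in zip(board_pts[1:], phis[1:]):
        R = Rotation.from_rotvec(phi * u).as_matrix()   # Eq. (9)
        for A0, A_phi in zip(board_pts[0], pts_phi):
            res.extend(A0 - (R @ (A_phi + t0) - t0))    # Eq. (13) per point
    res.append(gamma * float(np.dot(u, t0))) # penalty when u is not normal to t0
    return np.asarray(res)

# x0 = np.concatenate([u_init, C_init])      # geometric estimates of Sec. 4.4.4
# sol = least_squares(residuals, x0, args=(board_pts, phis))
```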

The procedure for calibrating the axis of rotation is depicted in Fig. 9. The camera captures images of the calibration board at various motor angles. Leveraging the camera’s calibration parameters, we deduce each pose’s rotation matrix RC and translation vector TC. These facilitate the computation of the camera frame origin’s coordinates in the calibration frame system. The rotation axis emerges as the plane’s normal vector by fitting these origin points to a plane. The point belonging to this axis, the center of the 3D circle, is calculated through geometric relationships. Subsequently, the rotation information becomes defined by the axis of rotation u and the point on axis C, drawing from the calibration board’s feature points obtained in the preceding process. An optimization process is undertaken to minimize discrepancies, aiming to determine the optimal values for u and C. The transformation matrices between the camera and absolute frame systems can be computed upon integration with the rotation angle.

Fig. 9. Pipeline of the camera-motor rotation axis calibration process.

5. Experiments and Results

Figures 10(a) and 10(b) are the schematics of the 3D scanner’s design. The system boasts simplicity and ease of assembly. The calibration procedure involves acquiring the camera’s intrinsic parameters, distortion coefficients, laser-plane attributes, and transformation parameters connecting the camera and motor. To validate the calibration’s precision, we conducted three experiments.

Fig. 10. (a) Technical blueprint of the scanning system. (b) Assembled camera-laser integration on the mounting platform.

We performed an experiment aimed at evaluating the planarity of the scanned surface. In the second experiment, our objective was to validate the accuracy of the dimensions obtained from the scanned results. Furthermore, we conducted one additional experiment to appraise the quality of the scanned outputs across various objects.

5.1. Calibration Experiments

Camera calibration involves capturing multiple images of a calibration board in various poses. The collected feature data is then processed, and the camera’s intrinsic parameters are acquired using the OpenCV library. The outcomes of the camera calibration process are presented in Table 1.

Table 1. Camera calibration results.

Camera matrix: $\begin{bmatrix} 2406.435 & 0 & 766.420 \\ 0 & 2410.246 & 580.604 \\ 0 & 0 & 1 \end{bmatrix}$

Distortion coefficients: [0.091, 0.0369, 0, 0, 0.411]

Reprojection error: 0.061

While calibrating the camera-laser relationship, the calibration board was deliberately positioned in five different orientations. For each of these orientations, a pair of images was captured: one showcasing the calibration board to facilitate the extraction of feature points and another displaying the laser line, crucial for obtaining data from the laser plane. The precise points situated on the laser plane are indicated in Fig. 11.

Fig. 11. Center points of the laser stripe in the camera coordinate system.

During the camera-motor calibration procedure, the calibration board was consistently positioned at a fixed location while the motor underwent movement. An image was captured at each incremental step of the motor’s motion. Two filters were employed to enhance the precision of estimating center points. The first filter involved selecting pairs of points along the arc, so only points separated by more than 6 mm were considered. The second filter focused on bisectors: only pairs forming angles exceeding 3 deg were used when computing the intersection of two perpendicular bisectors. The trajectory of the arc points is illustrated in Fig. 12(a), and Fig. 12(b) depicts the arc points alongside their estimated center points.

Fig. 12. (a) Trajectory of the camera in 3D space. (b) Overlay of camera trajectory and estimated center points in 3D space.

Fig. 13. (a) Reprojection error before optimization process; (b) reprojection error after optimization process.

The calibration parameters for the laser plane are presented in Table 2, and the rotation axis information is provided in Table 3. We calculate the rotation radius from the center point coordinates as $r = \sqrt{C_x^2 + C_y^2 + C_z^2} = 74.360$ mm. This value is consistent with the distance from the camera position to the rotation center shown in Fig. 10(a).

Table 2. Camera-laser calibration results.

Number of laser plane points: 2947

Laser plane equation: $0.966x + 0.027y - 0.257z + 148.503 = 0$

RMSE (mm): 0.1065

Table 3. Calibration parameters before and after optimization.

Rotation axis u: before (0.070, 0.996, 0.056); after (0.021, 0.998, 0.049)

Center point C: before (57.976, 8.073, 7.951); after (71.634, 0.984, 19.926)

To estimate the plane and obtain its normal vector, we use 30 center points, which represent 30 relationships between the camera and world coordinate systems. For each connection, we select 501 random points within a sphere of radius 50,000 mm to gather points on the plane.

5.1.1. Optimization result and the effect of the rotation device’s accuracy on the calibration result

The reprojection results between the feature points of the first camera pose and those of the other camera poses, before and after the optimization process, are shown in Figs. 13(a) and 13(b). As can be seen, the reprojection error was significantly reduced.

We divide our calibration process into two main steps: the estimation of the rotation axis and the point on the axis, followed by the optimization process. As explained earlier, the results of the first step are based solely on the camera’s origin with respect to the calibration board coordinate system at different positions. The motor position is not used in this step; therefore, the accuracy of the rotation device does not affect its results. However, in the optimization process (and in the scanning process after calibration), the rotation matrix is calculated from the motor position or rotated angle, which is subject to the motor’s rotation accuracy. Figure 14 shows the reprojection error of feature points after optimization in a case in which the accuracy of the rotation device is low. In this case, we assume that the motor rotates by a larger angle than commanded, with errors of 0 deg, 0.05 deg, 0.1 deg, …, 0.45 deg, and 0.5 deg.

Fig. 14. Relationship between rotation accuracy and reprojection error after optimization.

5.2. Calibration Verification

5.2.1. Planarity of the plane

This experiment serves the purpose of evaluating the quality of the scanned surface. The criteria for establishing a plane’s attributes through planarity are outlined in Fig. 15(a).32 In our scenario, we leverage information from both the ideal and actual planes to gauge and appraise the level of planarity exhibited by the scanned plane. We used a calibration board as the scanning subject owing to its even surface. The scanning system was aligned with the first camera frame’s coordinate system, facilitating straightforward validation of the plane’s planarity. By scanning the calibration board, a 3D point cloud was generated. The equation of the plane was then determined through the application of the least squares method. To assess planarity, we computed the sum of squared distances between each point in the point cloud and the estimated plane equation. To enhance the accuracy, any outlier points in the scanning outcome were eliminated using point cloud software such as MeshLab.

Fig. 15. (a) Definition of the plane constraints considering planarity;32 (b) illustration of the ideal plane, real plane, and planarity of a plane; (c) result of our ideal plane and the estimated plane.

The distance from each estimated point to the ideal plane, illustrated in Fig. 15(b), was computed as

Eq. (15)

$e = \dfrac{\left| n^T (p - q) \right|}{\left\| n \right\|_2},$
where n and q signify the ideal plane’s characteristics and p denotes a point sourced from the scanned point set. Figure 15(c) shows an example of a real plane scanned by our scanning system.
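A direct implementation of Eq. (15) over the whole point cloud is a few lines; the helper name planarity_errors is ours.

```python
# Sketch of the planarity check of Eq. (15).
import numpy as np

def planarity_errors(points, n, q):
    """Distance from each Nx3 point to the ideal plane with normal n through q."""
    return np.abs((points - q) @ n) / np.linalg.norm(n)

# errors = planarity_errors(cloud, n, q)
# print(errors.mean(), errors.std())   # the statistics reported in Fig. 17(a)
```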

We consider the calibration boards in this experiment to be reference objects, as shown in Fig. 16(a). We scanned the calibration board in two parts: the first at the beginning of the scanning process (from 0 deg) and the other toward the end (up to 360 deg). This result provides a more comprehensive check of planarity across the entire scanning process. The scanned results are shown in Fig. 16(b). Because the color of the scanned object affects the scanning result, as shown in Fig. 16(c), only the white area was used for the planarity check, as shown in Fig. 16(d).

Fig. 16. (a) Setup for the planarity experiment; (b) scanning result; (c) line scanning results from objects of different colors; (d) sample used in the planarity check.

The average and standard deviation of the distance between the ideal plane and the points on the actual plane are shown in Fig. 17(a). We compared our results with those of Lee et al.,20 shown in Fig. 17(b), and found that our planarity check results were better.

Fig. 17. (a) Distance errors from the calibration board plane to every point in our system; (b) distance errors from the checkerboard plane to every point in Ref. 20.

5.2.2. Accuracy check

To evaluate the precision of our scanning system’s measurements in terms of dimensions, we employed a calibration block as the reference object, depicted in Fig. 18(a). Figure 18(b) details the block’s specific dimensions, and Fig. 18(c) showcases the resultant scan output. Notably, the consistent height difference between the two planar sections of the block remains evident throughout the experiment.

Fig. 18. (a) Calibration block; (b) dimensions of the calibration block; (c) scanned result of the calibration block; (d) illustration of how the distance between the two planes is measured.

To verify the accuracy of the calibration process, we computed the height difference of the calibration block using 3D coordinates derived from our 3D reconstruction methodology. This height difference signifies the separation between the two surfaces or planes associated with the calibration block. Addressing potential flatness irregularities, we employed the point-plane relationship to determine this separation, as visually depicted in Fig. 18(d).

In this context, let Po1 and Po2 denote two points corresponding to consecutive surfaces on the calibration block. Using the least squares technique, we derived two planes, p1 and p2, from these point sets, as shown in Fig. 18(d). Subsequently, let C1 and C2 symbolize the centroids of Po1 and Po2, respectively. We calculated projection points A and B, representing the projections of C1 and C2 onto planes p1 and p2, respectively. The distance between the two planes is defined as the average of the distances from point A to plane p2 and from point B to plane p1, given as

Eq. (16)

$d = \dfrac{d_1 + d_2}{2}.$
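A sketch of this measurement follows, reusing the fit_plane helper sketched in Sec. 4.3; the remaining names are ours.

```python
# Sketch of the height measurement of Fig. 18(d) and Eq. (16).
import numpy as np

def project_to_plane(p, n, d):
    """Orthogonal projection of point p onto the plane n.x + d = 0."""
    return p - ((n @ p + d) / (n @ n)) * n

def point_plane_distance(p, n, d):
    return abs(n @ p + d) / np.linalg.norm(n)

def step_height(points1, points2):
    n1, d1 = fit_plane(points1)              # plane p1 through surface 1
    n2, d2 = fit_plane(points2)              # plane p2 through surface 2
    A = project_to_plane(points1.mean(axis=0), n1, d1)   # centroid C1 onto p1
    B = project_to_plane(points2.mean(axis=0), n2, d2)   # centroid C2 onto p2
    dist1 = point_plane_distance(A, n2, d2)  # A to plane p2
    dist2 = point_plane_distance(B, n1, d1)  # B to plane p1
    return 0.5 * (dist1 + dist2)             # Eq. (16)
```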

The average and error of the distances between the two planes are presented in Table 4. The standard value, mean error, and root mean square error (RMSE) of all distances are compared with the results of Zhu et al.31 in Table 5.

Table 4. Measurement accuracy at different distances.

Distance between scanner and calibration block (mm) | Ref. value (mm) | Average measured value (mm, 10 trials) | Error (mm) | Error rate (%)
400 | 5 | 4.949 | 0.051 | 1.020
500 | 5 | 4.933 | 0.067 | 1.340
600 | 5 | 4.932 | 0.068 | 1.360

Table 5. Comparison of system measurement accuracies.

Parameter | Zhu31 system | Lee20 system | Our system
Ref. value (mm) | 30 | 4.8 | 5
Mean error (mm) | 2.3952 | not reported | 0.062
Mean error rate (%) | 7.9 | not reported | 1.237
RMSE (mm) | 2.4140 | 2.2 | 0.063
RMSE rate (%) | 8 | 45.833 | 1.251

5.2.3. Scanning quality check

This experiment aims to evaluate the qualitative reconstruction of scanned results for objects of various shapes. As shown in Fig. 19(a), we examined multiple items, including a tennis ball and a cup. The results, presented in Fig. 19(b), demonstrate that our scanner produces high-quality scans.

Fig. 19. (a) Scanning workspace for the quality check; (b) scanning result of the quality check.

In addition, we conducted another experiment to scan an object with complex features. We used a human statue, as shown in Fig. 20(a). The results in Fig. 20(b) demonstrate that our scanner produces high-quality scans even for objects with intricate details compared with the original scene.

Fig. 20. (a) Real human statue used for the quality check; (b) scanning result of the quality check.

6. Conclusion

We have introduced a 3D laser scanning system incorporating a 1-axis rotating mechanism. To facilitate 3D reconstruction, establishing the transformation relationship between any camera frame and the absolute frame becomes crucial. This paper presented a calibration approach to determine the rotation axis and center point within the camera’s coordinate system. This information lets us adjust the scanning resolution by manipulating the motor’s step angle.

The calibration board image must cover the entire image frame to yield optimal results. In our evaluation, we assessed the quality of our scanning results by examining the planarity and scanning quality of the scanned objects. By comparing our findings with established standards and other methods, we demonstrated the superior accuracy of our system.

Disclosures

The authors declare no conflicts of interest.

Code and Data Availability

The data utilized in this study were obtained from ATM Co., Ltd, South Korea. Data are available from the authors upon request and with permission from ATM Co., Ltd, South Korea.

Acknowledgments

This research was supported by the Shipbuilding and Marine Industry Technology Development Program through the Korea Industrial Technology Evaluation and Management Institute, funded by the Ministry of Trade, Industry and Energy (Grant No. 20014703).

References

1. A. Geiger, J. Ziegler, and C. Stiller, “StereoScan: dense 3D reconstruction in real-time,” in IEEE Intell. Veh. Symp. (IV), 963–968 (2011). https://doi.org/10.1109/IVS.2011.5940405

2. S. Izadi et al., “KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera,” in Proc. 24th Annu. ACM Symp. on User Interface Softw. and Technol., 559–568 (2011). https://doi.org/10.1145/2047196.2047270

3. D. Moreno and G. Taubin, “Simple, accurate, and robust projector-camera calibration,” in Second Int. Conf. 3D Imaging, Model., Process., Visualization & Transmission, 464–471 (2012). https://doi.org/10.1109/3DIMPVT.2012.77

4. L. Zhang et al., “A cross structured light sensor and stripe segmentation method for visual tracking of a wall climbing robot,” Sensors, 15, 13725–13751 (2015). https://doi.org/10.3390/s150613725

5. C. Roman, G. Inglis, and J. Rutter, “Application of structured light imaging for high resolution mapping of underwater archaeological sites,” in OCEANS’10 IEEE SYDNEY, 1–9 (2010). https://doi.org/10.1109/OCEANSSYD.2010.5603672

6. H. Fan et al., “Refractive laser triangulation and photometric stereo in underwater environment,” Opt. Eng., 56, 113101 (2017). https://doi.org/10.1117/1.OE.56.11.113101

7. N. V. Gestel et al., “A performance evaluation test for laser line scanners on CMMs,” Opt. Lasers Eng., 47, 336–342 (2009). https://doi.org/10.1016/j.optlaseng.2008.06.001

8. X. Y. Shao, G. H. Tian, and Y. Zhang, “A 2D mapping method based on virtual laser scans for indoor robots,” Int. J. Autom. Comput., 18, 747–765 (2021). https://doi.org/10.1007/s11633-021-1304-1

9. S. H. Kim, S. J. Lee, and S. W. Kim, “Weaving laser vision system for navigation of mobile robots in pipeline structures,” IEEE Sens. J., 18, 2585–2591 (2018). https://doi.org/10.1109/JSEN.2018.2795043

10. Y. Zhang, J. Karlovšek, and X. Liu, “Identification method for internal forces of segmental tunnel linings via the combination of laser scanning and hybrid structural analysis,” Sensors, 22, 2421 (2022). https://doi.org/10.3390/s22062421

11. X. Liang et al., “Visual laser-SLAM in large-scale indoor environments,” in IEEE Int. Conf. Rob. and Biomimetics (ROBIO), 19–24 (2016). https://doi.org/10.1109/ROBIO.2016.7866271

12. D. J. Seo and J. Chool, “Development of cross section management system in tunnel using terrestrial laser scanning technique,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., 36, 573–582 (2008).

13. S. Chi, Z. Xie, and W. Chen, “A laser line auto-scanning system for underwater 3D reconstruction,” Sensors, 16, 1534 (2016). https://doi.org/10.3390/s16091534

14. I. Robotics et al., “Integrated navigation system using camera and gimbaled laser scanner for indoor and outdoor autonomous flight of UAVs,” in IEEE/RSJ Int. Conf. Intell. Rob. and Syst. (IROS) (2013).

15. U. Stenz et al., “High-precision 3D object capturing with static and kinematic terrestrial laser scanning in industrial applications: approaches of quality assessment,” Remote Sens., 12, 290 (2020). https://doi.org/10.3390/rs12020290

16. S. S. Pawar et al., “Review paper on design of 3D scanner,” in IEEE Int. Conf. on Innov. Mech. for Ind. Appl. (ICIMIA), 650–652 (2017). https://doi.org/10.1109/ICIMIA.2017.7975542

17. H.-C. Nguyen, “3D model reconstruction system development based on laser-vision technology,” Int. J. Autom. Technol., 10(5), 813–820 (2016). https://doi.org/10.20965/ijat.2016.p0813

18. X. Cai et al., “Calibration method for the rotating axis in panoramic 3D shape measurement based on a turntable,” Meas. Sci. Technol., 32, 035004 (2021). https://doi.org/10.1088/1361-6501/abcb7e

19. S. Guo et al., “Application of a self-compensation mechanism to a rotary-laser scanning measurement system,” Meas. Sci. Technol., 28, 115007 (2017). https://doi.org/10.1088/1361-6501/aa8749

20. J. Lee, H. Shin, and S. Lee, “Development of a wide area 3D scanning system with a rotating line laser,” Sensors, 21, 3885 (2021). https://doi.org/10.3390/s21113885

21. L. Bisogni et al., “Automatic calibration of a two-axis rotary table for 3D scanning purposes,” Sensors, 20, 7107 (2020). https://doi.org/10.3390/s20247107

22. J. Zhao et al., “A calibration method for a self-rotating, linear-structured-light scanning, three-dimensional reconstruction system based on plane constraints,” Sensors, 21, 8359 (2021). https://doi.org/10.3390/s21248359

23. Z. Niu et al., “Calibration method for the relative orientation between the rotation axis and a camera using constrained global optimization,” Meas. Sci. Technol., 28, 055001 (2017). https://doi.org/10.1088/1361-6501/aa5fd4

24. L. Kurnianggoro, H. V. Dung, and K.-H. Jo, “Calibration of a 2D laser scanner system and rotating platform using a point-plane constraint,” Comput. Sci. Inf. Syst., 12, 307–322 (2015). https://doi.org/10.2298/CSIS141020093K

25. E. E. Hemayed, “A survey of camera self-calibration,” in Proc. IEEE Conf. Adv. Video and Signal Based Surveillance (AVSS), 351–357 (2003). https://doi.org/10.1109/AVSS.2003.1217942

26. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell., 22, 1330–1334 (2000). https://doi.org/10.1109/34.888718

27. R. Usamentiaga, J. Molleda, and D. F. Garcia, “Structured-light sensor using two laser stripes for 3D reconstruction without vibrations,” Sensors, 14, 20041–20063 (2014). https://doi.org/10.3390/s141120041

28. R. B. Fisher and D. K. Naidu, “A comparison of algorithms for subpixel peak detection,” (1996).

29. F. Blais and M. Rioux, “Real-time numerical peak detector,” Signal Process., 11(2), 145–155 (1986). https://doi.org/10.1016/0165-1684(86)90033-2

30. C. Taylor and D. Kriegman, “Minimization on the Lie group SO(3) and related manifolds,” (1994).

31. Z. Zhu et al., “Rotation axis calibration of laser line rotating-scan system for 3D reconstruction,” in 11th Int. Conf. Awareness Sci. and Technol. (iCAST), 1–5 (2020). https://doi.org/10.1109/iCAST51195.2020.9319495

32. Z. Cui and F. Du, “Assessment of large-scale assembly coordination based on pose feasible space,” Int. J. Adv. Manuf. Technol., 104, 4465–4474 (2019). https://doi.org/10.1007/s00170-019-04307-8

Biography

Van-Tung Ha received his BE degree in mechatronic engineering from Can Tho University, Vietnam, in 2019. He is currently working toward his PhD at the Department of Mechanical Engineering, University of Ulsan, South Korea. His research interests include 3D scanning techniques, computer vision, vision-based robotics, and SLAM.

Van-Phu Do obtained his BS degree from Ho Chi Minh City University of Technology in 2010 and his PhD from the University of Ulsan in 2014. He worked at Ulsan’s Intelligent Control and Mechatronics Lab until 2015. He co-founded Abeosystem Co. LTD in 2014 and was a researcher there until 2019. Since 2020, he has been a senior researcher at ATM Co., Ltd. His research interests include 3D scanning and processing, machine vision for robotics, parallel computing, and AI.

Byung-Ryong Lee received his bachelor’s degree from Busan National University, Korea, in 1983, his master’s degree in mechanical engineering in 1988, and his PhD from North Carolina State University, USA, in 1994. He worked at RIST, Korea, from 1988 to 1990 and was a researcher at KIMM, Korea, from 1994 to 1995. Since 1995, he has been with Ulsan University’s Intelligent Mechatronics Laboratory and is currently a full professor. His research spans intelligent control, machine vision, 3D measurement, robotics, and system diagnosis.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Van-Tung Ha, Van-Phu Do, and Byung-Ryong Lee "Calibration method of a three-dimensional scanner based on a line laser projector and a camera with 1-axis rotating mechanism," Optical Engineering 63(3), 033101 (1 March 2024). https://doi.org/10.1117/1.OE.63.3.033101
Received: 19 September 2023; Accepted: 30 January 2024; Published: 1 March 2024
KEYWORDS: Calibration, Cameras, 3D scanning, Imaging systems, Matrices, Optical engineering, Laser scanners
