The helmet-mounted sight is a sighting device fixed on the pilot's helmet and is becoming increasingly important in air combat. A prerequisite for the sight to perform correctly is accurate knowledge of the pilot's head orientation. Traditional vision-based methods for obtaining the head orientation rely on cooperative targets, typically multiple LEDs embedded in the helmet. However, installing multiple LEDs increases the weight of the helmet, and the measurement accuracy declines under complex environmental interference, strong illumination, and uncertainty in the LED luminous centers. To solve these problems, this paper proposes a tightly coupled visual/IMU head-pose estimation system based on non-cooperative targets. A camera and an inertial measurement unit (IMU) are installed on the helmet to track the head orientation. In the reconstruction phase, a binocular system reconstructs the interior of the cockpit and builds a feature-point database using structure from motion (SFM) and scale-invariant feature transform (SIFT) descriptors. In the measurement phase, feature points extracted from the captured images are matched against the database to obtain their 3D world coordinates. These coordinates are fused directly with the inertial data through a cubature Kalman filter (CKF) to achieve fast and accurate attitude measurement. A practical experimental platform is set up to simulate the measurement of the pilot's head attitude, and the experimental results verify the feasibility of the proposed measurement system and scheme.
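The CKF mentioned above propagates the state through a third-degree spherical-radial cubature rule using 2n equally weighted points. A minimal, generic sketch of the cubature-point generation (not the authors' implementation) is:

```python
import math

def cholesky(P):
    """Lower-triangular Cholesky factor of a symmetric positive-definite matrix."""
    n = len(P)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(P[i][i] - s)
            else:
                L[i][j] = (P[i][j] - s) / L[j][j]
    return L

def cubature_points(x, P):
    """Generate the 2n equally weighted cubature points of the CKF:
    x +/- sqrt(n) * (column of the Cholesky factor of P)."""
    n = len(x)
    S = cholesky(P)
    pts = []
    for j in range(n):
        col = [S[i][j] for i in range(n)]
        pts.append([x[i] + math.sqrt(n) * col[i] for i in range(n)])
        pts.append([x[i] - math.sqrt(n) * col[i] for i in range(n)])
    return pts  # each point carries weight 1/(2n)
```

Each point is pushed through the (nonlinear) process or measurement model, and the mean and covariance are recovered as equally weighted sample statistics, which is what lets the CKF avoid the Jacobians an EKF would need.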
To overcome the shortcomings of Kalman filter algorithms based on Euler angles and quaternions, a vision/inertial fusion filtering algorithm based on the error quaternion is proposed. In this algorithm, error-quaternion parameters are used to describe the attitude, which not only avoids the singularity of the Euler-angle description but also eliminates the unit-norm constraint of the quaternion description. Besides improving the positioning accuracy of the helmet, the algorithm can also estimate and compensate the drift error of the helmet MIMU online in real time. The effectiveness of the proposed algorithm has been verified by simulation experiments, and the main factors affecting the integrated positioning accuracy have been analyzed.
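The error-quaternion idea above can be sketched in a few lines (an illustrative fragment, not the paper's code): the filter state carries a three-component small-angle vector derived from the error quaternion, which is singularity-free and unconstrained.

```python
def quat_mul(p, q):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def quat_conj(q):
    """Conjugate; for a unit quaternion this is also the inverse."""
    return (q[0], -q[1], -q[2], -q[3])

def error_quaternion(q_est, q_true):
    """delta_q = q_true * q_est^-1: the rotation from estimate to truth."""
    return quat_mul(q_true, quat_conj(q_est))

def small_angle_vector(dq):
    """For small errors delta_q ~ (1, dtheta/2), so dtheta ~ 2 * vector part.
    This 3-vector is the unconstrained filter state of an error-quaternion KF."""
    return (2*dq[1], 2*dq[2], 2*dq[3])
```

After each measurement update the small-angle correction is folded back into the full quaternion estimate and the error state is reset to zero, which is the standard multiplicative (error-state) filter structure.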
Aiming at the problem that the loosely coupled Kalman filter cannot perform a measurement update when few feature points are available, a head-attitude tracking algorithm based on adaptive loosely-tightly coupled extended Kalman filtering is proposed. First, using the angular-velocity data from an IMU mounted on the head, the algorithm performs the time update of the head attitude. It then performs an adaptive loosely-tightly coupled measurement update according to the number of available feature points: when more than four feature points are visible, the PnP pose is solved first and the loosely coupled measurement update is performed with this pose measurement; otherwise, the tightly coupled measurement update is performed directly with the image measurement data. Finally, the experimental results show that the proposed algorithm significantly expands the updating range of the head-pose measurement and improves the accuracy and stability of head-attitude tracking.
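The adaptive switching rule described above reduces to a small decision function. The sketch below is a hypothetical reading of that rule (names and the exact threshold behavior are our assumptions, following the text's "more than 4 feature points"):

```python
def measurement_update_mode(num_visible_points, pnp_min_points=4):
    """Choose the measurement-update branch of the adaptive filter.

    Hypothetical decision rule following the description in the abstract:
    with more than `pnp_min_points` visible features, a full PnP pose is
    solved and fused loosely; with fewer, raw image coordinates are fused
    tightly; with none, only the inertial time update runs."""
    if num_visible_points > pnp_min_points:
        return "loose"   # solve PnP first, then fuse the 6-DoF pose
    elif num_visible_points > 0:
        return "tight"   # fuse 2-D feature measurements directly
    else:
        return "none"    # time update only (pure inertial propagation)
```

The point of the tight branch is that even one or two feature projections still constrain the attitude through the camera model, so tracking continues where a pure loosely coupled filter would have to coast on the IMU alone.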
With the development of augmented reality technology, the pilot's helmet has also adopted it, in the form of the Helmet Mounted Display. According to the augmented reality design requirements of a certain type of aircraft Helmet Mounted Display, its optical system is designed. A software correction method for the aberration of the optical system was studied with the optical design software CODE V, and a circuit was designed to realize the aberration correction of the Helmet Mounted Display optical system. The results meet the design requirements for the aberration of the optical system.
In pose measurement where infrared marking points are adopted as features, only point-coordinate information is available. To improve the accuracy of pose measurement, the number of feature points involved in the pose solution must be increased, which limits the applicability of the method. A new pose measurement method based on the orientation of a two-dimensional optically coded marker is designed. During measurement, not only the coordinates of the marker but also the orientation from which the visual sensor observes it can be obtained, which improves the accuracy of pose measurement. The optical marker consists of a microlens array, a two-dimensional code pattern behind it, and a background light source. When the visual sensor images the optical marker from different angles, different two-dimensional coding patterns are observed, which provides orientation information for pose measurement. First, the images captured from different viewpoints are preprocessed: the optical marker is segmented from the surrounding environment and the images are transformed into undistorted frontal views. Then, based on the pixel information of the microlens array's surface pattern, each microlens is encoded for each viewpoint to obtain a set of code strings that vary with the viewpoint. Finally, the code-string information from the different viewpoints is processed and the orientation model of the two-dimensional optically coded marker is established. In addition, the factors that may affect the pose measurement accuracy are evaluated.
With the development of laser radar technology, more and more fields have begun to use laser radar to acquire 3D point-cloud information. Deep features of the 3D point cloud are the crux and premise of 3D object recognition and 3D model semantic segmentation, so it is significant for indoor intelligent robots to recognize 3D objects using laser radar. However, unlike the regular arrangement of pixels in 2D images, 3D point-cloud data are irregular and unordered, which makes it difficult to acquire local relational information from the point cloud with direct convolution operations. At present, the research focus of 3D object recognition is deep learning, and deep convolutional neural networks built on PointConv achieve a high level of performance in 3D point-cloud semantic segmentation. This paper first introduces the PointConv model. To balance performance and model complexity, the paper simplifies PointConv into Mini-PointConv, which reduces the network's consumption of computing resources while preserving the accuracy of the segmentation results. Mini-PointConv is then evaluated on the ScanNet benchmark, where it achieves good results on 3D scene semantic segmentation tasks with a better balance between accuracy and complexity. Finally, Mini-PointConv is tested in a variety of indoor environments with laser radar and obtains good indoor 3D point-cloud recognition results.
In current face 3D measurement technology, binocular stereo vision has been widely used. A passive binocular 3D measurement system that needs no auxiliary projected light has a simple structure, but its results are not accurate enough and its algorithms are complex. Therefore, this paper proposes a fast measurement method combining binocular stereo vision with infrared grating structured light. Because a Digital Light Processing (DLP) projector projects slowly, it cannot handle dynamic image acquisition well, and when applied to face 3D measurement its strong light stimulates the subject's eyes; a Micro-Electro-Mechanical System (MEMS) infrared projector is therefore used in this paper. It offers high projection speed and high precision without stimulating the human eye, so it is well suited to 3D measurement of the human face. Sinusoidal fringes are projected onto the face by the MEMS projector, and the phase is wrapped and unwrapped by phase-measuring profilometry. The four-step phase-shift method is used to calculate the wrapped phase, and the phase order is obtained according to the multi-frequency heterodyne principle. Fast matching of the corresponding points on the two image planes is achieved by combining epipolar and phase-order constraints. The experiments verified that a high-speed, stable, and low-cost face 3D measurement system was realized.
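The four-step phase-shift step above has a closed form worth making explicit. With shifts of 0, π/2, π, and 3π/2, the four intensities are I_k = A + B·cos(φ + kπ/2), so I4 − I2 = 2B·sin φ and I1 − I3 = 2B·cos φ, giving the wrapped phase per pixel (a generic sketch of the standard formula, not the paper's code):

```python
import math

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase-shift: intensities at phase shifts 0, pi/2, pi, 3*pi/2.
    I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3),
    wrapped to (-pi, pi]; the background A and modulation B cancel out."""
    return math.atan2(i4 - i2, i1 - i3)
```

Multi-frequency heterodyne unwrapping then supplies the integer fringe order that turns this wrapped value into an absolute phase.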
Flatness is one of the most important properties for quality control of manufactured mechanical parts. The most widely applied coordinate-measuring-machine techniques for measuring flatness error cannot efficiently collect abundant sample points, and their probes must contact the tested surface. Existing noncontact optical techniques are not full-field, and their devices are complex. This paper presents a simple, noncontact, full-field flatness measuring system based on the fringe projection profilometry technique. The designed device projects fringe patterns onto the tested surface; by calculating the phase of the modulated fringe images and calibrating the phase-height mapping relationship, full-field surface sample points are acquired. Polarizers are applied to eliminate intense highlights on the measured surfaces. An optimization algorithm is introduced to determine the ideal reference plane, from which the flatness error is calculated. Several experiments demonstrate that the proposed flatness measuring system can be applied to general mechanical parts with high precision and high repeatability.
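The last step — fitting a reference plane to the full-field samples and taking the peak-to-valley deviation — can be sketched as follows. This uses an ordinary least-squares plane as the reference (the paper's own optimization criterion is unspecified, so this is an assumption for illustration):

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through 3-D sample points,
    solved via the 3x3 normal equations with Gaussian elimination."""
    sxx = sxy = sx = syy = sy = sxz = syz = sz = n = 0.0
    for x, y, z in points:
        sxx += x*x; sxy += x*y; sx += x; syy += y*y; sy += y
        sxz += x*z; syz += y*z; sz += z; n += 1.0
    M = [[sxx, sxy, sx, sxz], [sxy, syy, sy, syz], [sx, sy, n, sz]]
    for i in range(3):                      # forward elimination with pivoting
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        sol[i] = (M[i][3] - sum(M[i][c] * sol[c] for c in range(i + 1, 3))) / M[i][i]
    return sol  # a, b, c

def flatness_error(points):
    """Peak-to-valley perpendicular distance of the samples from the plane."""
    a, b, c = fit_plane(points)
    k = (a*a + b*b + 1.0) ** 0.5
    d = [(z - a*x - b*y - c) / k for x, y, z in points]
    return max(d) - min(d)
```

A minimum-zone plane (the ISO definition) would replace the least-squares fit with a Chebyshev-type optimization, which is presumably what the paper's optimization algorithm targets.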
In recent years, great attention has been paid to the fusion of different kinds of sensors, including cameras, radar, LiDAR, and so on. Since each sensor has its own advantages and disadvantages in automated driving and obstacle detection, sensor fusion is more robust and reliable. In this paper, we present a novel target-based calibration of a LiDAR-camera system. We use a 3D Flash LiDAR with a resolution of 320×240, which is much cheaper and more reliable than a scanning LiDAR, and a camera with a resolution of 1280×1024. We propose a novel target-based calibration method in which the 3D target provides both geometric and visual features; the method is fast and easily estimates all six parameters of the extrinsic calibration. Our experiments validate the method and show that it achieves good accuracy.
To determine the relative attitude between two space objects on a rocking base, an integrated system based on vision and dual IMUs (inertial measurement units) is built. The system fuses the attitude information from vision with the angular-rate measurements of the dual IMUs by an extended Kalman filter (EKF) to obtain the relative attitude. One IMU (master) is attached to the measured moving object and the other (slave) to the rocking base. Since the output of an inertial sensor is relative to the inertial frame, the angular rate of the master IMU includes not only the motion of the measured object relative to the inertial frame but also that of the rocking base relative to the inertial frame; the latter is redundant, harmful motion information for relative attitude determination between the measured object and the rocking base. The slave IMU is therefore used to remove the motion of the rocking base relative to the inertial frame from the master IMU output. The proposed integrated attitude determination system is tested on a practical experimental platform, and experimental results with superior precision and reliability show the feasibility and effectiveness of the proposed system.
To meet the pose measurement needs of aviation and machinery manufacturing for high precision, fast speed, and a wide measurement range, and to resolve the contradiction between the measurement range and resolution of vision sensors, this paper proposes an orthogonally splitting imaging pose measurement method. An orthogonally splitting imaging vision sensor is designed and realized, and a pose measurement system is established. The vision sensor consists of one imaging lens, a beam-splitter prism, cylindrical lenses, and dual linear CCDs. Each linear CCD acquires one-dimensional image coordinates of the target point, and the two data sets restore the target point's two-dimensional image coordinates. According to the characteristics of the imaging system, a nonlinear distortion model is established to correct the distortion: based on cross-ratio invariance, a polynomial equation is established and solved by least-squares fitting. After distortion correction, the measurement mathematical model of the vision sensor is established and its intrinsic parameters are calibrated. An array of calibration feature points is built by placing a planar target in several different positions, and an iterative optimization method is presented to solve the model parameters. The experimental results show that the field angle is 52°, the focal distance is 27.40 mm, the image resolution is 5185×5117 pixels, the displacement measurement error is less than 0.1 mm, and the rotation-angle measurement error is less than 0.15°. The orthogonally splitting imaging pose measurement method can satisfy pose measurement requirements of high precision, fast speed, and wide measurement range.
Monocular pose estimation finds the pose of an object from a single image of feature points on the object, which requires detecting all the feature points and matching them in the image. However, it is difficult to obtain the correct pose if some of the feature points are occluded while the object moves over a large range. We propose a method for finding the pose when the correspondences between the object points and the image points are unknown. The method combines two algorithms: SoftAssign, which constructs a weight matrix of feature points and image points and determines the correspondences by an iterative loop, and orthogonal iteration (OI), which directly computes orthogonal, globally convergent rotation matrices. We nest the two algorithms into one iteration loop. At the start of the loop, an appropriate pose is chosen from a set of reference poses as the object's initial pose; then the weight matrix is processed to confirm the correspondences and the optimal rotation matrix is computed, alternately, until the object-space collinearity error falls below a threshold, with each iteration bringing the estimate closer to the true pose. Experimentally, the method proved efficient and achieved high-precision pose estimation of 3D objects under large-scale motion.
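The correspondence step of SoftAssign relies on alternating row and column normalization of the weight matrix (Sinkhorn iteration), which drives it toward a doubly stochastic assignment matrix. A minimal generic sketch of that core step (the full algorithm also anneals a temperature parameter and handles outlier rows/columns, which are omitted here):

```python
def sinkhorn(weights, iters=100):
    """Alternate row and column normalization of a nonnegative weight matrix,
    the core of the SoftAssign correspondence step. Returns a matrix whose
    rows and columns both sum (approximately) to 1."""
    m = [row[:] for row in weights]
    for _ in range(iters):
        for row in m:                      # row normalization
            s = sum(row)
            if s > 0:
                for j in range(len(row)):
                    row[j] /= s
        for j in range(len(m[0])):         # column normalization
            s = sum(row[j] for row in m)
            if s > 0:
                for row in m:
                    row[j] /= s
    return m
```

In the nested loop described above, the normalized weights serve as soft correspondences for the OI pose update, and the refined pose in turn re-scores the weights.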
Addressing current problems in measuring the planar shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerically controlled (NC) machine, and it moves along a planned path to scan the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser-triangulation displacement sensor with the coordinate system of the NC machine, which makes in-situ measurement realizable. The planar straightness error is evaluated with particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparison of the measuring head's results with the corresponding values obtained by a composite measuring machine verifies that the method can realize high-precision, automatic measurement of the planar straightness error of the workpiece.
A new pose measurement system based on orthogonal beam-splitting imaging is proposed in this paper to resolve the contradiction between measurement accuracy and measurement speed in existing monocular or binocular pose measurement methods with multiple linear CCDs. In the system, a monocular objective lens with a beam-splitting structure and two linear CCDs compose the pose measurement sensor. The monocular camera gives the system a large field of view, and the two orthogonally placed linear CCDs are equivalent to one array CCD. Furthermore, a linear CCD offers high-resolution imaging, high-speed data capture, and high-efficiency data processing compared with an array CCD. The key work of this paper lies in designing the optical structure of the sensor, calibrating the camera parameters corresponding to its model, and solving the pose of the object by the corresponding position algorithm. The experimental results show that the orthogonally-splitting-imaging pose sensor achieves a measurement accuracy of 2%. Hence, the system meets high-speed and high-precision measurement requirements over a wide space and can be applied to pose measurement in the aerospace and vehicle fields.
To improve the accuracy of coordinate measurement, precise 3D coordinates of spatial points on the surface of the target object are needed. Based on the stereo vision measurement model, an all-around coordinate measuring system with a single camera and a two-dimensional turntable is proposed. By rotating the object about two different axes and applying the principle of relative motion, the single-CCD sensor model is made equivalent to a virtual multi-CCD sensor model; in other words, virtual CCD sensors at different relative positions are used to acquire the coordinates of the measured points. Since the calibration accuracy of the two shafts affects the accuracy of the entire system, a mathematical calibration model is built for the virtual multi-CCD sensor measuring system based on non-orthogonal shafting, and the shafts and their calibration method are described in detail. The experimental results show that the system based on the virtual multi-CCD sensor model achieves a standard deviation of 0.44 mm, which proves the feasibility of its multi-angle coordinate measurement of spatial points.
The extensive application of surface mount technology requires various measurement methods to evaluate the printed circuit board (PCB), and visual inspection is one critical method. Local oversaturation, arising from the nonuniform reflectivity of the PCB surface, leads to erroneous results. This paper presents a study of a high dynamic range image (HDRI) acquisition system that can capture HDRIs with less local oversaturation. The HDRI system is composed of liquid crystal on silicon (LCoS) and a charge-coupled device (CCD) sensor. In this system, the LCoS uses negative feedback to extend the dynamic range of the system, and proportional-integral-derivative (PID) control is used for rapidity. The input of the PID controller is the images captured by the CCD sensor, and the output is the LCoS mask, which controls the LCoS's reflectivity. The significant characteristics of our method are that the PID control adjusts the image brightness pixel by pixel and that the feedback procedure is accomplished by the computer in less time than the traditional method. Experimental results demonstrate that the system can capture HDRIs with less local oversaturation.
The online measurement of metal-surface parameters plays an important role in many industrial fields. Because the surfaces of machined metal pieces are strongly reflective and prone to scattered, disturbing irradiation points, this paper designs an online measurement system based on the measurement principles of line structured light to detect whether the height difference and inclination of machined metal surfaces fulfill the compliance requirements. The grayscale gravity algorithm is applied to extract the sub-pixel coordinates of the laser stripe center, the least-squares method is employed to fit the data, and the Pauta (3σ) criterion is utilized to remove spurious points. The repeatability of the system has been tested. The experimental results show that the precision of inclination is 0.046° RMS at a speed of 40 mm/s and the precision of height difference is 0.072 mm RMS, which meets the design expectations. Hence, this system can be applied to high-speed, high-precision online industrial inspection.
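The grayscale gravity algorithm named above is the intensity-weighted centroid of the laser stripe in each image column, which gives a sub-pixel center position. A minimal sketch of that step (generic, not the paper's implementation):

```python
def gravity_center(column):
    """Grayscale-gravity (intensity centroid) sub-pixel position of the
    laser stripe within one image column:
        center = sum(i * g_i) / sum(g_i)
    where g_i is the gray level at pixel index i. Returns None for an
    all-dark column, where no stripe is present."""
    s = sum(column)
    if s == 0:
        return None
    return sum(i * g for i, g in enumerate(column)) / s
```

In practice the sum is taken only over pixels above a noise threshold near the intensity peak, and the resulting per-column centers are what the least-squares fit and the 3σ outlier rejection then operate on.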
In a solder paste inspection system based on structured-light vision, local oversaturation emerges because of the varying reflection coefficient when the laser light is projected onto the PCB surface. Therefore, a high-dynamic-range image acquisition system for solder paste inspection is studied to reduce local oversaturation and misjudgment. Reflective liquid crystal on silicon (LCoS) has the merit of adjusting the reflectance of the incident light per pixel. Exploiting this merit, a high-dynamic-range image acquisition system was built using a high-resolution LCoS and a CCD image sensor. The optical system consists of the imaging lens, the relay lens, and a polarizing beam splitter (PBS); the hardware system consists of an ARM development board, a video generic board, an MCU, and an HX7809, in accordance with the electrical characteristics of the LCoS. Tests show that the system can reduce image oversaturation and improve image quality.
The measurement of the holes in the engine-block top surface determines the overall coupling effect of the engine. All of these holes are strictly restricted by dimensional and geometrical tolerance requirements, which determine the final engine quality. At present, these holes are measured mostly by the coordinate measuring machine (CMM) in the production line, and this method has difficulty meeting the industry demands of automation, rapidity, and online testing. A new rapid solution for measuring the holes in the engine-block top surface is proposed, based on the combination of multiple visual sensors. A flexible location method for the block is designed, and a global data-fusion model based on multiple visual sensors is studied. Finally, a unified correction model for lens distortion and system inclination is proposed, yielding a revised system model with higher precision. The CMM measures the hole sizes and the spatial relationships between holes, and the data obtained are substituted into the global data-fusion model to complete rapid on-site system calibration. The experimental results show that the scheme is feasible, and the measurement system can meet the production-line needs of intelligence, rapidity, and high precision.
The UV absorption spectrometry technique DOAS (Differential Optical Absorption Spectroscopy) has been widely used in continuous monitoring of flue gas and has achieved good results. The DOAS method is based on the basic law of light absorption, the Lambert-Beer law. SO2 and NOx, the principal components of the flue gas, are considered by the DOAS method simultaneously, and mathematical methods are used to retrieve their concentrations. Continuous Emission Monitoring Systems (CEMS) based on the DOAS principle currently come in two probe styles: in-situ and extractive.
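The Lambert-Beer law underlying the retrieval can be made concrete for a single absorber: I = I0·exp(−ε·c·L), so the concentration follows from the measured and reference intensities. A minimal single-species sketch (the actual DOAS fit separates broadband extinction and handles several species at once):

```python
import math

def concentration(i0, i, epsilon, path_length):
    """Invert the Lambert-Beer law  I = I0 * exp(-epsilon * c * L):
        c = ln(I0 / I) / (epsilon * L)
    Units must be consistent, e.g. epsilon in cm^2/molecule and L in cm
    gives c in molecules/cm^3."""
    return math.log(i0 / i) / (epsilon * path_length)
```

This also shows why the adaptive integration time matters: any non-absorption loss of intensity (source aging, contaminated lenses) changes the effective I0, and left uncompensated it would be misread as absorption.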
For the in-situ probe-style CEMS based on the DOAS method, aging of the UV light source, lenses contaminated by floating oil, and the complex environment of the flue all attenuate the spectral intensity, which affects the accuracy of measurement. In this article, an in-situ continuous monitoring system based on the DOAS method is described, and a component-adaptive sensing technology is proposed. Using this adaptive sensing technology, the CEMS can adjust the integration time of the spectrometer according to the non-measurement attenuation of the light-source intensity and automatically compensate the loss of spectral intensity. Under laboratory conditions, experiments measuring SO2 and NO standard gases with the adaptive sensing technology were performed, considering many different levels of light-intensity attenuation. The results show that the adaptive sensing technology compensates the non-measurement loss of spectral intensity well. In field measurement, the technology clearly reduces the measurement error caused by attenuation of light intensity: compared with a handheld gas analyzer, the average error of concentration measurement is less than 2% FS (full scale).
Position and orientation estimation of an object has important value and can be widely applied in fields such as robot navigation, surgery, and electro-optic aiming systems. A monocular vision positioning algorithm based on point features is studied and a new measurement method is proposed in this paper. First, the approximate coordinates of the five reference points in the camera coordinate system, used as the initial values for iteration, are calculated according to weak-perspective P3P; second, the exact coordinates of the reference points in the camera coordinate system are obtained through iterative calculation using the constraint relationships among the reference points; finally, the position and orientation of the object are computed. The measurement model of monocular vision is thus constructed. To verify the accuracy of the measurement model, a planar target using infrared LEDs as reference points is designed, the corresponding image-processing algorithm is studied, and a monocular vision experimental system is established. Experimental results show that the translational positioning accuracy reaches ±0.05 mm and the rotational positioning accuracy reaches ±0.2°.
The measuring principle for SO2 and NOx, the main gaseous contaminants in flue gas, is given based on differential optical absorption spectroscopy (DOAS), and the structure and composition of the measurement system are introduced. To obtain the absorption features of the measured gas, a multi-resolution preprocessing method is applied to the original spectrum, denoising according to the signal energy at different scales; in addition, the useful signal component is enhanced according to the signal correlation. These two procedures improve the signal-to-noise ratio (SNR) effectively. The origin of the nonlinear factors, caused by the actual measurement conditions, is also analyzed, and a polynomial approximation equation is deduced. In the lab, SO2 and NO were measured several times with the system using the data-extraction method described above; the average deviation is less than 1.5%, while the repeatability is less than 1%. At a power plant whose flue-gas concentration varies over a large range, the maximum deviation is 2.31% over 18 groups of contrast data.
In traditional Fourier transform profilometry, the conversion from phase to height is deduced under the assumption that the projector and the camera are at the same height above the reference plane and that their axes cross at the same point on the reference plane. When these conditions are too strict to be satisfied, a large measurement error is introduced. An improved optical geometry of the projected-fringe technique is discussed and its phase-height mapping formula is deduced in this paper. Employing the new optical geometry, a simple calibration model is developed based on absolute phase extraction and space-mapping techniques, which makes the environmental parameters less critical than before. Furthermore, a virtual spatial pattern is used to provide the reference points for camera calibration based on Zhang's calibration method. The calibration can be accomplished merely with a planar pattern, which greatly reduces the cost of the device and makes the measurement process more convenient. The experimental results show the good accuracy of the system.
When a binocular stereo vision system is used for 3D coordinate measurement, system calibration is an important factor in measurement precision. In this paper we present a flexible method for binocular stereo system calibration that estimates the intrinsic and extrinsic parameters of each camera as well as the exterior orientation of the axis of the turntable, which is installed in front of the binocular stereo vision system to increase the system's measurement range. Using a new flexible planar pattern with four big circles and an array of small circles as calibration reference points, binocular stereo calibration is realized with Zhang's plane-based calibration method, without specialized knowledge of 3D geometry. By placing a standard ball in front of the binocular stereo vision system, a sequence of pictures is taken simultaneously by both cameras at several different rotation angles of the turntable. Using the method of space intersection of two straight lines, the reference points for axis calibration, i.e. the ball centers at each turntable rotation angle, are computed. Because of the rotation of the turntable, the trace of the ball is a circle whose center lies on the turntable's axis, and all the rotated ball centers lie in a plane perpendicular to the axis. The exterior orientation of the turntable axis is calibrated according to this calibration model. A measurement of a column bearing is performed in the experiment, with a final measurement precision better than 0.02 mm.
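The axis-calibration geometry above reduces, within the plane of rotation, to fitting a circle through the triangulated ball centers; the circle's center is a point on the turntable axis. A sketch using the algebraic (Kåsa) least-squares circle fit (an illustrative choice; the paper does not name its fitting method):

```python
def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2-D points, e.g. the
    ball-center positions traced out by the rotating turntable, expressed
    in the plane of rotation. Minimizes the residual of
        x^2 + y^2 + D*x + E*y + F = 0
    via the 3x3 normal equations; returns (center_x, center_y, radius)."""
    sx = sy = sxx = syy = sxy = sxz = syz = sz = n = 0.0
    for x, y in points:
        z = x*x + y*y
        sx += x; sy += y; sxx += x*x; syy += y*y; sxy += x*y
        sxz += x*z; syz += y*z; sz += z; n += 1.0
    M = [[sxx, sxy, sx, -sxz], [sxy, syy, sy, -syz], [sx, sy, n, -sz]]
    for i in range(3):                      # Gaussian elimination with pivoting
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    D = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        D[i] = (M[i][3] - sum(M[i][c] * D[c] for c in range(i + 1, 3))) / M[i][i]
    cx, cy = -D[0] / 2.0, -D[1] / 2.0
    r = (cx*cx + cy*cy - D[2]) ** 0.5
    return cx, cy, r
```

Combining the fitted center with the normal of the best-fit plane through the 3-D ball centers then yields the full exterior orientation of the turntable axis.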
High-precision measurement of the position and orientation of a remote object is one of the hot issues in vision inspection, because it is very important in fields such as aviation and precision measurement. The position and orientation of an object at a distance of 5 m can be measured by near-infrared monocular vision based on vision measurement principles, using image feature extraction and data optimization. After analyzing the existing monocular vision methods and their features, a new monocular vision method is presented to obtain the position and orientation of the target. To reduce environmental light interference and increase the contrast between the target and the background, near-infrared light is used as the light source. To realize automatic camera calibration, a new feature-circle-based calibration target is designed, and a set of image-processing algorithms, proved to be efficient, is presented as well. The experimental results show that the repeatability precision of the angles is less than 8" and the repeatability precision of the displacement is less than 0.02 mm. This monocular vision measurement method has already been used in a wheel alignment system and will have broader fields of application.
KEYWORDS: 3D metrology, Calibration, Laser welding, 3D image processing, Laser applications, CCD cameras, 3D acquisition, Image processing, Control systems, Semiconductors
A 3D measurement system for solder paste was established. The system aims to extract the height and other values of solder paste and realize the quality control of Surface Mount Technology (SMT). A 3D laser measurement technique was applied in this system. The calibration process is divided into two steps: the internal parameters of the CCD camera are obtained by Tsai's RAC method, and the laser plane parameters are calibrated with a special multi-arris block. A scanning technique fulfills the acquisition of the final 3D profile. Experimental results on the product line prove that the system is an easily operated device with high performance, and its repeatable precision reaches ±1 µm.
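The triangulation step shared by such line-laser systems can be sketched as follows: once the camera intrinsics and the laser-plane equation are calibrated, each stripe pixel is back-projected to a viewing ray and intersected with the plane. The intrinsic and plane values used below are illustrative assumptions, not the paper's calibration results.

```python
# Laser-plane triangulation sketch. Assumes a pinhole camera with focal
# lengths (fx, fy) and principal point (cx, cy), and a laser plane
# a*x + b*y + c*z + d = 0 expressed in the camera frame.

def pixel_to_point(u, v, fx, fy, cx, cy, plane):
    """Intersect the camera ray through pixel (u, v) with the laser plane.

    The ray through the pixel is (x, y, z) = t * (xn, yn, 1), where
    (xn, yn) are normalized image coordinates.
    """
    a, b, c, d = plane
    xn = (u - cx) / fx            # normalized image coordinates
    yn = (v - cy) / fy
    denom = a * xn + b * yn + c
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the laser plane")
    t = -d / denom                # depth along the ray
    return (t * xn, t * yn, t)
```

For example, with a plane z = 100 in front of the camera, a pixel offset from the principal point maps to a 3D point whose x and y scale with the normalized coordinates times the depth.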
KEYWORDS: Data acquisition, 3D modeling, 3D acquisition, Data modeling, Cameras, Image segmentation, Mathematical modeling, Sensors, Optical engineering, 3D scanning
This paper presents a novel scanning-path determination method to choose a next best view, using a combination of the shape-from-silhouette and convex-hull methods to ensure the accuracy and integrity of measured object surface data. A backlight system is used to illuminate the scene to acquire silhouettes from different viewpoints, from which an approximate 3-D surface model can be deduced. The scanning path is determined according to the convex hull of the approximate model so as to specify the movement of a three-axis motion table to achieve automatic measurement. With a mathematical model of a sensor equipped with a color CCD and a line-structured laser, 3-D color data of the object can be obtained integrally and accurately.
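The convex-hull computation underlying the scanning-path determination can be illustrated in the planar case; the paper works with the hull of an approximate 3-D model, while this sketch (Andrew's monotone chain, a standard algorithm not taken from the paper) shows the 2-D analogue applied to silhouette points.

```python
# Minimal 2-D convex hull via Andrew's monotone chain. Input points are
# (x, y) tuples; the result is the hull in counter-clockwise order.

def cross(o, a, b):
    """z-component of (a - o) x (b - o); > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop dups
```

Interior points (which contribute nothing to viewpoint planning) are discarded automatically, leaving only the boundary along which scan positions can be placed.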
Molecular dynamics (MD) analysis has been found to be a powerful tool for understanding the deformation mechanisms
of materials at nanometre scales. The basic principles of MD and potential functions used in nanomachining are
reviewed in this paper. The cutting mechanisms of brittle and ductile materials, tool geometry, tool wear, and the relation between macro-cutting and micro-cutting mechanisms in nanomachining by molecular dynamics simulation are summarized.
A detailed investigation of surface acoustic waves (SAW) propagating in x-cut, y-propagation lithium niobate (LiNbO3) for integrated acousto-optic tunable filters (IAOTF) is reported in this paper. From the computed velocity curves, the walk-off angle (the angle between the power-flux vector and the propagation direction) is obtained by the cubic spline interpolation method. The electromechanical coupling constant curve is also given. Based on these results, an optimal configuration of the IAOTF is designed, in which the interdigital transducer should be inclined by about 4.18°.
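The walk-off angle follows from the slope of the velocity curve: for a velocity v(θ), tan ψ = (1/v)·dv/dθ. The paper evaluates the derivative via cubic spline interpolation of the tabulated curve; the sketch below uses a central-difference stand-in for the derivative, and the velocity curve in the usage example is illustrative, not LiNbO3 data.

```python
import math

def walk_off_angle(thetas, vs, i):
    """Walk-off angle (radians) at interior sample i of a tabulated
    velocity curve v(theta), via tan(psi) = (1/v) * dv/dtheta with a
    central-difference derivative (spline stand-in)."""
    dv = (vs[i + 1] - vs[i - 1]) / (thetas[i + 1] - thetas[i - 1])
    return math.atan(dv / vs[i])
```

For a locally linear velocity curve the central difference is exact, so the recovered angle matches the analytic slope.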
A numerical approach based on the finite element-artificial transmitting boundary method is newly formulated for suppressing the sidelobes of integrated acousto-optic tunable filters with weighted coupling on a piezoelectric substrate. Cosine and Gaussian weighting functions are used, and the corresponding response curves of TE-TM mode conversion efficiency are obtained. The results show that the sidelobes can be efficiently suppressed by shaping the weighting function, with the Gaussian weighting function performing better than the cosine one. The results agree with other references.
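The reason weighting suppresses sidelobes can be seen from a simplified model: in the weak-coupling limit the filter response is a Fourier-type integral of the coupling profile over the interaction length, so smoother profiles yield lower spectral sidelobes. The sketch below evaluates this integral numerically for uniform, cosine, and Gaussian profiles (with an assumed Gaussian width); it is a textbook-style illustration, not the paper's finite-element computation.

```python
import cmath
import math

def response_db(weight, deltas, n=100):
    """Normalized response |integral of w(z)*exp(i*d*z) dz| in dB versus
    phase mismatch d, over a unit interaction length."""
    zs = [(k + 0.5) / n for k in range(n)]
    ws = [weight(z) for z in zs]
    def mag(d):
        return abs(sum(w * cmath.exp(1j * d * z) for w, z in zip(ws, zs))) / n
    peak = mag(0.0)
    return [20.0 * math.log10(mag(d) / peak + 1e-15) for d in deltas]

def peak_sidelobe(db):
    """Highest level (dB) after the main lobe's first local minimum."""
    i = 1
    while i < len(db) and db[i] < db[i - 1]:
        i += 1
    return max(db[i:], default=float("-inf"))

deltas = [0.2 * k for k in range(301)]   # phase-mismatch grid
uniform = peak_sidelobe(response_db(lambda z: 1.0, deltas))
cosine = peak_sidelobe(response_db(lambda z: math.sin(math.pi * z), deltas))
gauss = peak_sidelobe(response_db(
    lambda z: math.exp(-0.5 * ((z - 0.5) / 0.2) ** 2), deltas))
# Gaussian weighting gives the lowest sidelobe level of the three.
```

Under this model the uniform profile shows the familiar ~-13 dB first sidelobe, the cosine profile pushes it down by roughly 10 dB, and the Gaussian profile lower still, consistent with the ordering reported above.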
A complete system for color 3D object measurement is presented, composed of a line-structured laser sensor and a two-axis translation stage. The mathematical model of 3D measurement based on the line-structured laser method is established, and two groups of unknown parameters are derived, namely the camera parameters and the light-plane parameters. The color 3D data (xi,yi,zi)-(Ri,Gi,Bi) are obtained by merging the color data (Ri,Gi,Bi) and the 3D data (xi,yi,zi) according to the corresponding pixel coordinates (ui,vi) of the stripe. A circle pattern is applied in the calibration of both groups of parameters; however, the way the control points are acquired from the pattern differs in each calibration process. In the light-plane calibration, an unconstrained objective function is constructed to ensure the orthogonality and accuracy of the parameters. To verify the measurement accuracy of the calibrated system, the measurement of a standard jig is performed, with an accuracy better than 0.1 mm. The result shows that the parameter calibration is effective and reliable. The system is developed for antique digitization, game development, etc., and it has a promising future.
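The color/3-D merging step amounts to a join on the shared pixel coordinate (u, v): each reconstructed point and each RGB sample carry the pixel they came from, and records present in both maps are fused. The data layout below is an illustrative assumption, not the paper's data structures.

```python
# Fuse 3-D points and color samples keyed by pixel coordinate (u, v).

def merge_color_3d(points_3d, colors):
    """points_3d: {(u, v): (x, y, z)}, colors: {(u, v): (r, g, b)}.
    Returns fused (x, y, z, r, g, b) records for pixels present in
    both maps; unmatched pixels are dropped."""
    fused = []
    for uv, xyz in points_3d.items():
        if uv in colors:
            fused.append(xyz + colors[uv])
    return fused
```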
A non-contact vehicle wheel alignment parameter measurement system based on line-structured laser sensors is described. The spatial position of a vehicle wheel is determined by the tangent plane of the tire. Three line-structured laser sensors are assigned to each wheel, and the tangent plane is calculated from the 3D coordinates of the points on three characteristic curves of the tire; consequently, it is important to ascertain the tangent plane of the tire precisely. This paper details the working principle of the system and the coordinate unification technique for multiple line-structured laser sensors. Furthermore, it presents a specific method for determining the tangent plane according to the characteristics of the laser image, composed of ellipse fitting in 2D space and a searching algorithm in 3D space. Experimental results show that the algorithm is efficient, practical, and reliable, and the deviations of the alignment parameters are less than 2'.
This paper presents a laser vision wheel alignment system, including the design of the system, the working principle, and the derivation of the mathematical model. Coordinate unification of multiple line-structured laser sensors, also called global calibration, obtains the position and orientation parameters of each sensor's light-plane coordinate frame in the world coordinate system. This paper elaborates on a unification method that establishes the world coordinate system using two theodolites, which offers easy movement, high accuracy, and suitability for building flexible coordinate systems. The calibration accuracy is δ = ±0.1 mm.
This paper propounds an optoelectronic self-learning inspection method for IC shell blanks based on comparison with standardized workpieces. First, the inspection system performs self-learning. Second, the system automatically inspects the products, and those exceeding the acceptance tolerance are singled out as unqualified. By comparing the inspected workpieces with the standard ones, the system can identify defects such as shutdown and cutting-out. This method offers high inspection speed, strong anti-interference ability, and high flexibility in terms of shape variety.
In this paper, image preprocessing techniques are studied in detail in combination with the characteristics of real-time radiography. In view of the characteristics of radiant attenuation, an image processing method is presented to make the relationship between image gray scale and workpiece thickness linear. To obtain better image contrast, an image stretching method based on fuzzy transformation is designed, by which the image is effectively enhanced; another algorithm is also proposed to avoid the complexity of this method. Moreover, the influence of the radiation source focus, a very important factor in real-time radiography image quality, is studied. A mathematical model is established for the influence of the radiation source focus on the real-time radiography image, and on the basis of this model, a corresponding algorithm is proposed to eliminate its influence. All these algorithms are studied in detail, with corresponding results presented.
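The gray-thickness linearization rests on the exponential attenuation law I = I0·exp(-μt): since the gray value is proportional to the transmitted intensity, taking a logarithm yields a quantity linear in thickness t. The sketch below shows this transform for a single gray value; I0 is an assumed full-intensity gray level, not a value from the paper.

```python
import math

def linearize(gray, i0=255.0):
    """Map a gray value (proportional to transmitted intensity) to a
    quantity proportional to workpiece thickness, per I = I0*exp(-mu*t)
    => mu*t = ln(I0/I)."""
    gray = max(gray, 1e-6)        # guard against log(0) in dark pixels
    return math.log(i0 / gray)
```

Applying this per-pixel produces an image whose gray scale varies linearly with thickness, which is the premise of the subsequent contrast stretching.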
Sometimes objects with isolated surfaces, such as BGA chip leads, need to be dealt with. Here, a visual measurement method is proposed for on-line measurement. An image split-splicing technique is developed, and the theory of the system is analyzed, to improve the test speed while keeping the resolution. Based on this theory, an isolated-surface measurement system is built for gauging the coplanarity of the BGA lead tops.
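A common way to quantify coplanarity of lead tops is to fit a least-squares plane to the measured 3-D points and take the spread of the perpendicular residuals; this standard formulation is sketched below (it is not stated in the abstract which definition the paper uses).

```python
import math

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c via the 3x3 normal
    equations, solved with Cramer's rule. points: list of (x, y, z)."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    sol = []
    for j in range(3):                 # replace column j with rhs
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = rhs[i]
        sol.append(det3(m) / d)
    return sol                         # a, b, c

def coplanarity(points):
    """Max-minus-min perpendicular deviation from the best-fit plane."""
    a, b, c = fit_plane(points)
    norm = math.sqrt(a * a + b * b + 1.0)
    devs = [(p[2] - (a * p[0] + b * p[1] + c)) / norm for p in points]
    return max(devs) - min(devs)
```

Points lying exactly on one plane give a coplanarity of zero, while a raised lead shows up directly in the residual spread.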