KEYWORDS: Data modeling, Education and training, Point clouds, Computer aided design, 3D modeling, Solid modeling, Inspection, Sensors, Machine learning, Deep learning
Even though neural network methodologies have been established for a long time, only recently have they achieved exceptional efficacy in practical deployments, predominantly due to improvements in hardware computational capacity and the large amounts of data available for learning. Nonetheless, substantial challenges remain in applying deep learning in many domains, mainly because of the lack of large amounts of labeled data versatile enough for deep learning models to learn useful information. For instance, in mechanical assembly inspection, annotating data for each type of mechanical part to train a deep learning model can be very labor-intensive. Additionally, data must be re-annotated after each modification of a mechanical part's specification. Also, the system for inspection is typically not available until the first few samples are built to collect data. This paper proposes a solution for these challenges in the case of visual mechanical assembly inspection by processing point cloud data acquired via a three-dimensional (3D) scanner. To reduce the need for manually labeling large amounts of data, we train and validate the neural network on point clouds synthetically generated from computer-aided design models, reserving real sensor data exclusively for the testing phase. The domain gap is a significant challenge when using synthetically generated data. To reduce it, we used different preprocessing techniques, as well as a neural network architecture that focuses on shared features that do not change significantly between synthetically generated data and real data from the 3D sensor.
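One plausible preprocessing step of the kind mentioned above is to centre each point cloud and scale it to the unit sphere, applied identically to synthetic and sensed clouds so the network sees pose- and scale-normalised inputs regardless of their origin. This is a generic sketch, not the paper's exact pipeline; the point format (a list of xyz triples) is an assumption.

```python
import math

def normalize_cloud(points):
    """Centre a point cloud and scale it to fit inside the unit sphere."""
    n = len(points)
    # Centroid of the cloud.
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    centred = [(p[0] - cx, p[1] - cy, p[2] - cz) for p in points]
    # Largest distance from the centroid (guard against a degenerate cloud).
    r = max(math.sqrt(x * x + y * y + z * z) for x, y, z in centred) or 1.0
    return [(x / r, y / r, z / r) for x, y, z in centred]
```

Because the same deterministic transform is applied in both domains, it removes pose and scale as sources of synthetic-to-real discrepancy without requiring any learned component.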
This paper deals with the detection and characterization of surface damages (a dent, a crack, etc.) on mechanical surfaces using 2D/3D vision (3D scanner and/or 2D RGB camera). The main innovative aspect lies in the exploitation of the Computer Aided Design model, when it is available, with two possible scenarios: "manual control" via a hand-held 3D scanner carried by an operator, or "automated control" via a 3D scanner carried by a cobot. This research work has been carried out within the joint research laboratory "Inspection 4.0" between IMT Mines Albi/ICA and the DIOTA company, specialized in the development of numerical tools for Industry 4.0.
This paper proposes a solution for the problem of visual mechanical assembly inspection by processing point cloud data acquired via a 3D scanner. The approach is based on deep Siamese neural networks for 3D point clouds. To overcome the requirement for a large amount of labeled training data, only synthetically generated data is used for training and validation. Real-acquired point clouds are used only in the testing phase.
We are focused on conformity control of complex aeronautical mechanical assemblies, typically an aircraft engine at the end or in the middle of the assembly process. Our overall system should ensure that all the mechanical parts are present and well-mounted. A 3D scanner carried by a robot arm provides acquisitions of 3D point clouds which are further processed. A Computer-Aided Design (CAD) model of the mechanical assembly is available. In this paper, we concentrate on detecting the absence of mechanical elements. Previously, we developed a rendering pipeline for creating realistic synthetic 3D point cloud data, using the CAD model and taking into account occlusion and self-occlusion of mechanical parts. In this paper, an existing deep neural network for 3D segmentation is experimentally chosen and trained on these synthetic data. Further, the model is evaluated on real data acquired by a 3D scanner and shows good quantitative results according to a segmentation metric. Finally, when a threshold is applied to the segmentation result, a final decision is made on the absence/presence problem. The achieved accuracy is 98.7%. Our research work is being carried out within the framework of the joint research laboratory "Inspection 4.0" between IMT Mines Albi/ICA and the company Diota, specialized in the development of numerical tools for Industry 4.0. This research is a continuation of the work presented at the QCAV'2021 conference [1].
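The final thresholding step described above can be sketched as follows: given the per-point segmentation scores for the region where the CAD model predicts the element, the element is declared present when a sufficient fraction of points are classified as belonging to it. All threshold values and names here are illustrative assumptions, not the paper's settings.

```python
def element_present(point_scores, score_thresh=0.5, ratio_thresh=0.2):
    """Decide presence of a mechanical element from per-point scores.

    point_scores : iterable of floats in [0, 1], one per point of the
        region where the element is expected (from the CAD model pose).
    score_thresh : a point counts as 'element' above this score.
    ratio_thresh : the element is declared present when at least this
        fraction of the region's points are classified as 'element'.
    """
    scores = list(point_scores)
    if not scores:
        return False  # no points acquired in the region
    hits = sum(1 for s in scores if s > score_thresh)
    return hits / len(scores) >= ratio_thresh

# Example: 30% of the region's points classified as element -> present.
print(element_present([0.9] * 3 + [0.1] * 7))  # True
```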
Deep learning has resulted in a huge advancement in computer vision. However, deep models require an enormous amount of manually annotated data, which is laborious and time-consuming to produce. Large image datasets also demand the availability of the target objects for acquisition, a luxury we usually do not have in the context of automatic inspection of complex mechanical assemblies, such as in the aircraft industry. We focus on using deep convolutional neural networks (CNN) for automatic industrial inspection of mechanical assemblies, where training images are limited and hard to collect. A computer-aided design (CAD) model is the standard way to describe mechanical assemblies; for each assembly part we have a three-dimensional CAD model with the real dimensions and geometrical properties. Therefore, rendering of CAD models to generate synthetic training data is an attractive approach that comes with perfect annotations. Our ultimate goal is to obtain a deep CNN model trained on synthetic renders and deployed to recognize the presence of target objects in never-before-seen real images collected by commercial RGB cameras. Different approaches are adopted to close the domain gap between synthetic and real images. First, the domain randomization technique is applied to generate synthetic data for training. Second, domain-invariant features are utilized during training, allowing the trained model to be used directly in the target domain. Finally, we propose a way to learn better representative features using augmented autoencoders, getting performance close to our baseline models trained with real images.
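The domain-randomization step can be sketched as a per-render draw of nuisance parameters (lighting, background, camera jitter), so the network cannot overfit to any single rendering condition. The parameter names and ranges below are illustrative assumptions, not the paper's actual settings.

```python
import random

def sample_render_params(rng=random):
    """Draw one randomized set of rendering conditions for a synthetic image."""
    return {
        "light_intensity": rng.uniform(0.3, 1.5),     # scene brightness
        "light_azimuth_deg": rng.uniform(0.0, 360.0), # light direction
        "background_id": rng.randrange(100),          # random backdrop texture
        "camera_jitter_deg": rng.uniform(-5.0, 5.0),  # viewpoint perturbation
        "object_hue_shift": rng.uniform(-0.1, 0.1),   # material color variation
    }
```

Each synthetic training image would be rendered with a fresh call to this sampler, so that over a large dataset the only stable signal is the geometry of the target object itself.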
Deep learning has resulted in a huge advancement in computer vision. However, deep models require a large amount of manually annotated data, which is not easy to obtain, especially in the context of sensitive industries. Rendering of Computer Aided Design (CAD) models to generate synthetic training data can be an attractive workaround. This paper focuses on using Deep Convolutional Neural Networks (DCNN) for automatic industrial inspection of mechanical assemblies, where training images are limited and hard to collect. The ultimate goal of this work is to obtain a DCNN classification model trained on synthetic renders, and deploy it to verify the presence of target objects in never-seen-before real images collected by RGB cameras. Two approaches are adopted to close the domain gap between synthetic and real images. First, the Domain Randomization technique is applied to generate synthetic data for training. Second, a novel approach is proposed to learn better feature representations by means of self-supervision: we used an Augmented Auto-Encoder (AAE) and achieved results competitive with our baseline model trained on real images. In addition, this approach outperformed baseline results when the problem was simplified to binary classification for each object individually.
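The Augmented Auto-Encoder idea can be sketched by how its training pairs are built: the encoder sees a randomly augmented render while the reconstruction target stays the clean render, so the learned code becomes invariant to the augmentations. The toy image representation (a flat list of floats) and the augmentation parameters are assumptions for illustration.

```python
import random

def augment(image, noise=0.1, brightness=0.2, rng=random):
    """Apply a random brightness shift plus per-pixel noise, clamped to [0, 1]."""
    shift = rng.uniform(-brightness, brightness)
    return [min(1.0, max(0.0, p + shift + rng.uniform(-noise, noise)))
            for p in image]

def make_aae_pair(clean_render):
    """Return (network input, reconstruction target) for AAE training."""
    return augment(clean_render), clean_render
```

The autoencoder itself is then trained with a standard reconstruction loss between its output and the clean target, which is what pushes nuisance variation out of the latent code.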
KEYWORDS: 3D modeling, Inspection, Data modeling, Solid modeling, Computer aided design, 3D scanning, Clouds, 3D acquisition, Optical inspection, Laser scanners
Our research work is being carried out within the framework of the joint research laboratory "Inspection 4.0" between IMT Mines Albi/ICA and the company DIOTA, specialized in the development of numerical tools for Industry 4.0. In this work, we are focused on conformity control of complex aeronautical mechanical assemblies, typically an aircraft engine at the end or in the middle of the assembly process. A 3D scanner carried by a robot arm provides acquisitions of 3D point clouds which are further processed by deep classification networks. The Computer Aided Design (CAD) model of the mechanical assembly to be inspected is available, which is an important asset of our approach. Our deep learning models are trained on synthetic and simulated data, generated from the CAD models. Several networks are trained and evaluated, and results on real clouds are presented.
KEYWORDS: Clouds, 3D modeling, Inspection, Solid modeling, Computer aided design, Data modeling, 3D scanning, RGB color model, Image segmentation, Scanners
We present a robust approach for detecting defects on an aircraft electrical wiring interconnection system in order to comply with safety regulations such as forbidden interference and the allowed bend radius of cables and/or harnesses in mechanical assemblies. For this purpose, we exploit 3-D point clouds acquired with a 3-D scanner and the 3-D computer-aided design (CAD) model of the assembly being inspected. Our method mainly consists of two processes: an offline automatic selection of informative viewpoints and an online automatic treatment of the 3-D point cloud acquired from said viewpoints. The viewpoint selection is based on the 3-D CAD model of the assembly and the calculation of a scoring function, which evaluates a set of candidate viewpoints. After the offline viewpoint selection is completed, the robotic inspection system is ready for operation. During the online inspection phase, a 3-D point cloud is analyzed to measure the bend radius of each cable and its minimum distance to the other elements in the assembly. For this, we developed a 3-D segmentation algorithm to find the cables in the point cloud, by modeling a cable as a collection of cylinders. Using the segmented cable, we carried out a quantitative analysis of the interference and bend radius of each cable. The performance of the inspection system is validated on synthetic and real data, the latter being acquired by our precalibrated robotic system. Our dataset is acquired by scanning different zones of an aircraft engine. The experimental results show that our proposed approach is accurate and promising for industrial applications.
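The bend-radius measurement can be sketched as follows: once the cable is segmented as a chain of cylinders, their axis points form a polyline, and the local bend radius at each interior point is the circumradius of that point and its two neighbours. This is a geometric sketch consistent with the cylinder model above, not the paper's exact algorithm.

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three 3D points (inf if collinear)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Triangle area via the cross product of two edge vectors.
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    cross = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
    area = 0.5 * math.sqrt(sum(x * x for x in cross))
    if area == 0.0:
        return math.inf  # straight segment: no bend
    return a * b * c / (4.0 * area)

def min_bend_radius(axis_points):
    """Smallest local bend radius along the cable's axis polyline."""
    return min(circumradius(*axis_points[i:i + 3])
               for i in range(len(axis_points) - 2))
```

A conformity check would then compare `min_bend_radius` against the minimum radius allowed by the regulation for that cable type.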
KEYWORDS: Clouds, Sensors, 3D modeling, Inspection, Environmental sensing, Solid modeling, Data modeling, Computer aided design, RGB color model, Chemical elements
Usage of a three-dimensional (3-D) sensor and point clouds provides various benefits over the usage of a traditional camera for industrial inspection. We focus on the development of a classification solution for industrial inspection purposes using point clouds as an input. The developed approach employs deep learning to classify point clouds, acquired via a 3-D sensor, the final goal being to verify the presence of certain industrial elements in the scene. We possess the computer-aided design model of the whole mechanical assembly and an in-house developed localization module provides initial pose estimation from which 3-D point clouds of the elements are inferred. The accuracy of this approach is proved to be acceptable for industrial usage. Robustness of the classification module in relation to the accuracy of the localization algorithm is also estimated.
In this paper, we address the problem of automatic robotic inspection in two parts: first, automatic selection of informative viewpoints before the inspection process is started, and, second, automatic treatment of the acquired 3D point cloud from said viewpoints. We apply our system to detecting defects on aircraft Electrical Wiring Interconnection System (EWIS) in order to comply with the growing amount of safety regulations such as interference and allowable bend radius of cables in mechanical assemblies.
We focus on quality control of mechanical parts in aeronautical context using a single pan-tilt-zoom (PTZ) camera and a computer-aided design (CAD) model of the mechanical part. We use the CAD model to create a theoretical image of the element to be checked, which is further matched with the sensed image of the element to be inspected, using a graph theory–based approach. The matching is carried out in two stages. First, the two images are used to create two attributed graphs representing the primitives (ellipses and line segments) in the images. In the second stage, the graphs are matched using a similarity function built from the primitive parameters. The similarity scores of the matching are injected in the edges of a bipartite graph. A best-match-search procedure in the bipartite graph guarantees the uniqueness of the match solution. The method achieves promising performance in tests with synthetic data including missing elements, displaced elements, size changes, and combinations of these cases. The results open good prospects for using the method with realistic data.
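The best-match-search step over the bipartite graph can be sketched with a greedy extraction: similarity scores between theoretical and sensed primitives fill a score matrix, and matches are taken in decreasing score order so that each primitive is used at most once, which guarantees uniqueness. The scores below are made up; the paper's similarity function is built from the primitive parameters.

```python
def best_match_search(scores):
    """Greedy unique matching.

    scores[i][j] = similarity of theoretical primitive i with sensed
    primitive j. Returns a list of (i, j, score) matches in which every
    i and every j appears at most once.
    """
    # All candidate edges of the bipartite graph, best first.
    pairs = sorted(((s, i, j)
                    for i, row in enumerate(scores)
                    for j, s in enumerate(row)),
                   reverse=True)
    used_i, used_j, matches = set(), set(), []
    for s, i, j in pairs:
        if i not in used_i and j not in used_j:
            matches.append((i, j, s))
            used_i.add(i)
            used_j.add(j)
    return matches
```

A missing element would then surface as a theoretical primitive left unmatched, or matched only with a score below some acceptance threshold.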
KEYWORDS: Inspection, Image segmentation, Sensors, Cameras, 3D modeling, Image processing, 3D image processing, Oxygen, Mobile robots, Fluctuations and noise
This paper deals with an automated preflight aircraft inspection using a pan-tilt-zoom camera mounted on a mobile robot moving autonomously around the aircraft. The general topic is image processing framework for detection and exterior inspection of different types of items, such as closed or unlatched door, mechanical defect on the engine, the integrity of the empennage, or damage caused by impacts or cracks. The detection step allows to focus on the regions of interest and point the camera toward the item to be checked. It is based on the detection of regular shapes, such as rounded corner rectangles, circles, and ellipses. The inspection task relies on clues, such as uniformity of isolated image regions, convexity of segmented shapes, and periodicity of the image intensity signal. The approach is applied to the inspection of four items of Airbus A320: oxygen bay handle, air-inlet vent, static ports, and fan blades. The results are promising and demonstrate the feasibility of an automated exterior inspection.
This paper focuses on quality control of mechanical parts in an aeronautical context by using a single PTZ camera and the CAD model of the mechanical part. In our approach, two attributed graphs are matched using a similarity function. The similarity scores are injected in the edges of a bipartite graph. A best-match-search procedure in the bipartite graph guarantees the uniqueness of the match solution. The method achieves excellent performance in tests with synthetic data, including missing elements, displaced elements, size changes, and combinations of these cases.
This paper deals with the inspection of an airplane using a Pan-Tilt-Zoom camera mounted on a mobile robot moving around the airplane. We present image processing methods for detection and inspection of four different types of items on the airplane exterior. Our detection approach focuses on regular shapes, such as rounded-corner rectangles and ellipses, while inspection relies on clues such as uniformity of isolated image regions, convexity of segmented shapes, and periodicity of the image intensity signal. The initial results are promising and demonstrate the feasibility of the envisioned robotic system.
We propose a framework for obtaining synthetic speckle-pattern images based on successive transformations of Perlin's coherent noise function. In addition, we show how a given displacement function can be used to produce deformed images, making this framework suitable for performance analysis of speckle-based displacement/strain measurement techniques, such as Digital Image Correlation, widely used in experimental mechanics.
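The idea above can be sketched in two parts: a coherent-noise field is thresholded into a binary speckle pattern, and a displacement function u(x, y) deforms the pattern by resampling at displaced coordinates. For brevity this uses smoothed value noise as a stand-in for Perlin's gradient noise; the grid size, threshold, and nearest-neighbour resampling are illustrative assumptions.

```python
import random

def value_noise(x, y, grid, fade=lambda t: t * t * (3 - 2 * t)):
    """Bilinear interpolation of a random lattice, smoothed by a fade curve."""
    n = len(grid)
    x0, y0 = int(x) % n, int(y) % n
    tx, ty = fade(x - int(x)), fade(y - int(y))
    x1, y1 = (x0 + 1) % n, (y0 + 1) % n
    top = grid[y0][x0] + tx * (grid[y0][x1] - grid[y0][x0])
    bot = grid[y1][x0] + tx * (grid[y1][x1] - grid[y1][x0])
    return top + ty * (bot - top)

def speckle(width, height, scale=0.35, thresh=0.5, seed=0):
    """Binary speckle image obtained by thresholding coherent noise."""
    rng = random.Random(seed)
    grid = [[rng.random() for _ in range(16)] for _ in range(16)]
    return [[1 if value_noise(x * scale, y * scale, grid) > thresh else 0
             for x in range(width)] for y in range(height)]

def deform(image, disp):
    """Resample the image at displaced coordinates (nearest neighbour)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ux, uy = disp(x, y)
            sx = min(w - 1, max(0, round(x - ux)))
            sy = min(h - 1, max(0, round(y - uy)))
            out[y][x] = image[sy][sx]
    return out
```

Because the displacement field is known analytically, the deformed image provides ground truth against which a Digital Image Correlation algorithm's recovered displacements can be scored.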
KEYWORDS: Cameras, Sensors, Detection and tracking algorithms, Flame detectors, Temperature metrology, Black bodies, Video, Near infrared, Optical engineering, Target detection
We introduce a new approach to aircraft cargo compartment surveillance. The originality of the approach is in the use of a single sensor type, a CCD camera, to detect fire events and freight movement in aircraft cargo holds (multiphenomenon/monosensor approach). The CCD camera evaluation and the radiometric and geometric models are provided in (Sentenac et al., 2002). We go on to discuss the image analysis algorithms used in the detection of fire signatures (hot spots, flame, and smoke) and load displacement. For each phenomenon, the discriminant parameters are established and the algorithm is explained. The crucial factor is the validation procedure according to aeronautical standards. The experimental trials were carried out in a test chamber providing the fire and smoke test facilities [TF1 to TF6 following EN 54 (Afnor, 1997) requirements].
KEYWORDS: Sensors, Modulation transfer functions, Cameras, Near infrared, Charge-coupled devices, Calibration, Temperature metrology, 3D modeling, Video, Black bodies
KEYWORDS: Calibration, 3D modeling, Cameras, 3D image processing, 3D metrology, Image processing, Feature extraction, Metals, Stereoscopic cameras, Imaging systems
We present in this paper a stereovision system developed in the Ecole des Mines d'Albi Material Research Center laboratory, in collaboration with the LAAS-CNRS laboratory, for the automatic measurement of 3D deformed surfaces. The method uses off-the-shelf lenses, CCD cameras, and a frame grabber, and requires that a predefined pattern be applied to the sheet surface before stamping. The system works in three steps: (i) the stereovision system is first calibrated; (ii) two images of the part to be measured are taken and the 3D coordinates are computed; (iii) the strains are calculated from these 3D coordinates.
UNION MINIERE (UM) is an international firm that has developed a special process, called preweathering, which gives rolled zinc sheets a natural patina look or a slate gray color. The strip of preweathered zinc can be affected by surface defects caused by the roll mill or by the preweathering process. We have equipped a production line of preweathered zinc in the plant with a computer-vision-based system in order to automatically inspect one side of the strip. The system is composed of a personal computer, a light source, a line scan camera, and an acquisition board. The basic purpose of this system is to provide effective on-line inspection. The main problems to be solved are: (1) the imperfections show a considerable variation in length, from a few centimeters for local defects to several decimeters for periodically spaced defects due to rollers; (2) the contrast between a local defect and the strip is poor; (3) the grayness varies across the width of the strip; and (4) the grayness varies along the length of the roll. The real-time inspection system has now been implemented and is currently undergoing evaluation in the plant.
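A generic way to address the slow grayness variations and low contrast described above is to estimate the per-column mean gray level over a block of scan lines and subtract it, so that local defects stand out as residuals. The threshold and the toy data layout (rows = scan lines from the line scan camera) are assumptions for illustration, not the deployed system's algorithm.

```python
def column_normalize(lines):
    """Subtract each column's mean gray level from a block of scan lines.

    This removes the slow variation across the strip width, leaving only
    local deviations such as defects.
    """
    n = len(lines)
    w = len(lines[0])
    col_mean = [sum(line[x] for line in lines) / n for x in range(w)]
    return [[line[x] - col_mean[x] for x in range(w)] for line in lines]

def defect_pixels(lines, thresh=10.0):
    """Flag pixels whose residual exceeds thresh after normalization."""
    residuals = column_normalize(lines)
    return [(y, x)
            for y, row in enumerate(residuals)
            for x, v in enumerate(row)
            if abs(v) > thresh]
```

Periodically spaced roller defects could then be found by looking for regular spacing among the flagged rows, since their repetition period equals the roller circumference.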