This paper deals with the detection and characterization of surface damage (dents, cracks, etc.) on mechanical parts using 2D/3D vision (a 3D scanner and/or a 2D RGB camera). The main innovative aspect lies in the exploitation of the Computer Aided Design (CAD) model, when it is available, with two possible scenarios: "manual control" via a hand-held 3D scanner carried by an operator, or "automated control" via a 3D scanner carried by a cobot. This research work has been carried out within the joint research laboratory "Inspection 4.0" between IMT Mines Albi/ICA and the company DIOTA, which specializes in the development of numerical tools for Industry 4.0.
Deep learning has resulted in huge advances in computer vision. However, deep models require an enormous amount of manually annotated data, and annotation is a laborious and time-consuming task. Acquiring large numbers of images also requires the target objects to be physically available, a luxury we usually do not have in the context of automatic inspection of complex mechanical assemblies, such as in the aircraft industry. We focus on using deep convolutional neural networks (CNNs) for automatic industrial inspection of mechanical assemblies, where training images are limited and hard to collect. Computer Aided Design (CAD) models are the standard way to describe mechanical assemblies: for each assembly part we have a three-dimensional CAD model with the real dimensions and geometric properties. Rendering CAD models to generate synthetic training data is therefore an attractive approach that comes with perfect annotations. Our ultimate goal is to obtain a deep CNN model trained on synthetic renders and deployed to recognize the presence of target objects in never-before-seen real images collected by commercial RGB cameras. Different approaches are adopted to close the domain gap between synthetic and real images. First, the domain randomization technique is applied to generate synthetic training data. Second, domain-invariant features are used during training, allowing the trained model to be applied directly in the target domain. Finally, we propose a way to learn more representative features using augmented autoencoders, achieving performance close to our baseline models trained on real images.
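Domain randomization, as mentioned above, amounts to rendering each synthetic training image under independently sampled nuisance parameters (pose, lighting, background, texture noise), so the network learns features invariant to them. A minimal sketch of such a parameter sampler; the parameter names and ranges are purely illustrative assumptions, not the actual rendering pipeline:

```python
import random

def sample_render_params(rng):
    """Sample one randomized rendering configuration (domain randomization).

    All parameter names and ranges here are illustrative assumptions,
    not the paper's actual rendering setup.
    """
    return {
        # Random camera pose around the CAD part
        "camera_distance_m": rng.uniform(0.3, 1.5),
        "camera_azimuth_deg": rng.uniform(0.0, 360.0),
        "camera_elevation_deg": rng.uniform(-30.0, 60.0),
        # Random lighting
        "light_intensity": rng.uniform(0.2, 2.0),
        "light_color": [rng.uniform(0.7, 1.0) for _ in range(3)],
        # Random distractor backgrounds and texture perturbation
        "background_id": rng.randrange(1000),
        "texture_noise_std": rng.uniform(0.0, 0.1),
    }

# Each training image is rendered with an independently sampled
# configuration drawn from this distribution.
rng = random.Random(0)
params = [sample_render_params(rng) for _ in range(3)]
```

Each rendered image then inherits perfect annotations from the CAD model, since the target object's identity and pose are known by construction.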
Deep learning has resulted in huge advances in computer vision. However, deep models require a large amount of manually annotated data, which is not easy to obtain, especially in the context of sensitive industries. Rendering Computer Aided Design (CAD) models to generate synthetic training data can be an attractive workaround. This paper focuses on using Deep Convolutional Neural Networks (DCNNs) for automatic industrial inspection of mechanical assemblies, where training images are limited and hard to collect. The ultimate goal of this work is to obtain a DCNN classification model trained on synthetic renders and deploy it to verify the presence of target objects in never-seen-before real images collected by RGB cameras. Two approaches are adopted to close the domain gap between synthetic and real images. First, the Domain Randomization technique is applied to generate synthetic data for training. Second, a novel approach is proposed to learn better feature representations by means of self-supervision: we used an Augmented Auto-Encoder (AAE) and achieved results competitive with our baseline model trained on real images. In addition, this approach outperformed the baseline when the problem was simplified to binary classification for each object individually.
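The augmented auto-encoder idea can be illustrated by how its training pairs are built: the encoder receives a randomly corrupted image, while the decoder must reconstruct the clean render, which pushes the latent code to discard the nuisance factors. A minimal numpy sketch of pair construction; the specific augmentations below (brightness, noise, occlusion) are illustrative assumptions standing in for the actual ones:

```python
import numpy as np

def make_aae_pair(clean, rng):
    """Build one (input, target) pair for an augmented auto-encoder.

    The input is a randomly corrupted copy of the clean render; the
    reconstruction target is the clean render itself. The augmentations
    used here are illustrative, not the paper's exact set.
    """
    aug = clean.astype(np.float32).copy()
    aug *= rng.uniform(0.6, 1.4)                    # random brightness
    aug += rng.normal(0.0, 0.05, size=aug.shape)    # sensor-like noise
    h, w = aug.shape[:2]
    y = rng.integers(0, h // 2)
    x = rng.integers(0, w // 2)
    aug[y:y + h // 4, x:x + w // 4] = 0.0           # random occlusion patch
    return np.clip(aug, 0.0, 1.0), clean            # (input, target)

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(64, 64, 3)).astype(np.float32)
x_in, x_target = make_aae_pair(clean, rng)
```

Training an encoder-decoder to map x_in back to x_target is a self-supervised objective: no manual labels are needed, only the clean synthetic renders.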
KEYWORDS: 3D modeling, Inspection, Data modeling, Solid modeling, Computer aided design, 3D scanning, Clouds, 3D acquisition, Optical inspection, Laser scanners
Our research work is being carried out within the framework of the joint research laboratory "Inspection 4.0" between IMT Mines Albi/ICA and the company DIOTA, which specializes in the development of numerical tools for Industry 4.0. In this work, we focus on conformity control of complex aeronautical mechanical assemblies, typically an aircraft engine at the end of, or partway through, the assembly process. A 3D scanner carried by a robot arm provides acquisitions of 3D point clouds, which are further processed by deep classification networks. The Computer Aided Design (CAD) model of the mechanical assembly to be inspected is available, which is an important asset of our approach. Our deep learning models are trained on synthetic, simulated data generated from the CAD models. Several networks are trained and evaluated, and results on real point clouds are presented.
KEYWORDS: Clouds, 3D modeling, Inspection, Solid modeling, Computer aided design, Data modeling, 3D scanning, RGB color model, Image segmentation, Scanners
We present a robust approach for detecting defects on an aircraft Electrical Wiring Interconnection System in order to comply with safety regulations such as the forbidden interference and allowed bend radius of cables and/or harnesses in mechanical assemblies. For this purpose, we exploit 3-D point clouds acquired with a 3-D scanner and the 3-D Computer-Aided Design (CAD) model of the assembly being inspected. Our method consists of two processes: an offline automatic selection of informative viewpoints and an online automatic treatment of the 3-D point clouds acquired from those viewpoints. The viewpoint selection is based on the 3-D CAD model of the assembly and on a scoring function that evaluates a set of candidate viewpoints. Once the offline viewpoint selection is completed, the robotic inspection system is ready for operation. During the online inspection phase, a 3-D point cloud is analyzed to measure the bend radius of each cable and its minimum distance to the other elements in the assembly. To this end, we developed a 3-D segmentation algorithm that finds the cables in the point cloud by modeling a cable as a collection of cylinders. Using the segmented cables, we carry out a quantitative analysis of the interference and bend radius of each cable. The performance of the inspection system is validated on synthetic and real data, the latter acquired by our precalibrated robotic system scanning different zones of an aircraft engine. The experimental results show that our proposed approach is accurate and promising for industrial applications.
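The cylinder-based cable model mentioned above can be sketched as a local fitting step: take the dominant principal direction of a patch of cable points as the cylinder axis, and the mean point-to-axis distance as the radius. This is a simplified stand-in for the paper's segmentation algorithm, shown here on synthetic cylinder points:

```python
import numpy as np

def fit_cylinder(points):
    """Fit a cylinder axis and radius to a local patch of cable points.

    Simplified illustration: the axis is the dominant principal component
    of the centered patch; the radius is the mean point-to-axis distance.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Dominant right singular vector = principal direction = cylinder axis
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # Radial component of each point relative to the axis line
    proj = centered @ axis
    radial = centered - np.outer(proj, axis)
    radius = float(np.linalg.norm(radial, axis=1).mean())
    return centroid, axis, radius

# Synthetic check: points on a cylinder of radius 0.01 m along the z axis
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 0.2, 500)            # position along the axis
theta = rng.uniform(0.0, 2 * np.pi, 500)  # angle around the axis
pts = np.stack([0.01 * np.cos(theta), 0.01 * np.sin(theta), t], axis=1)
_, axis, radius = fit_cylinder(pts)
```

Chaining such local fits along a cable yields the "collection of cylinders" representation, from which bend radius and clearance to neighboring elements can be measured.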
In this paper, we address the problem of automatic robotic inspection in two parts: first, automatic selection of informative viewpoints before the inspection process starts, and, second, automatic treatment of the 3D point clouds acquired from those viewpoints. We apply our system to detecting defects on the aircraft Electrical Wiring Interconnection System (EWIS) in order to comply with the growing number of safety regulations, such as those on interference and allowable bend radius of cables in mechanical assemblies.
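A viewpoint scoring function of the kind described above can be sketched as counting how many target points a candidate viewpoint can see. The visibility criterion below (a viewing cone plus a normal-orientation test, with no occlusion handling) is an illustrative assumption, not the paper's actual score:

```python
import numpy as np

def viewpoint_score(cam_pos, points, normals, fov_cos=0.87):
    """Score a candidate viewpoint by how many target points it can see.

    Illustrative criterion: a point counts as visible if it lies inside a
    cone around the camera's look-at direction (cos threshold ~ half-FOV)
    and its surface normal faces the camera. Occlusions are ignored.
    """
    look_at = points.mean(axis=0) - cam_pos
    look_at /= np.linalg.norm(look_at)
    rays = points - cam_pos
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    in_fov = rays @ look_at > fov_cos            # inside the viewing cone
    facing = (normals * -rays).sum(axis=1) > 0   # normal faces the camera
    return int(np.count_nonzero(in_fov & facing))

def select_best_viewpoint(candidates, points, normals):
    """Offline step: keep the candidate with the highest score."""
    scores = [viewpoint_score(c, points, normals) for c in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

# Toy scene: a flat patch of points with upward-facing normals.
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(-0.2, 0.2, (100, 2)), np.zeros(100)])
normals = np.tile([0.0, 0.0, 1.0], (100, 1))
candidates = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])]
best, score = select_best_viewpoint(candidates, pts, normals)
```

The camera above the patch sees every point, while the one below sees none, so the offline stage selects the viewpoint at (0, 0, 1).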