This paper introduces a factory-specific SLAM algorithm that integrates deep learning with feature point filtering to address inaccurate and biased positioning in industrial environments. Taking into account the distinct characteristics of factory settings, particularly those where AGV robots operate, the approach distinguishes between the dynamic and static entities commonly encountered in such environments. To achieve this, it combines a deep learning-based dynamic object detection mechanism with a refined feature point filtering process. Deep learning is first used to identify potential dynamic objects in the scene, providing valuable prior information; a carefully designed feature point filtering algorithm then eliminates feature points likely to introduce interference. This refinement removes dynamic feature points more rationally, improving the positioning precision and robustness of the visual SLAM system in dynamic factory environments. Extensive experimental results demonstrate that, compared to ORB-SLAM2 and DS-SLAM, the proposed algorithm achieves superior positioning and mapping accuracy in factory settings. This advancement addresses a longstanding challenge in robotics and represents a significant step towards enhancing the autonomy and reliability of AGV robots in industrial applications.
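The two-stage pipeline described in the abstract (a detector supplies prior regions of potential dynamic objects, and feature points inside those regions are then discarded) can be sketched minimally as a geometric mask test. The bounding boxes, keypoint coordinates, and function name below are illustrative assumptions, not the paper's actual detector output or filtering rule:

```python
import numpy as np

# Hypothetical detection result: bounding boxes of potentially dynamic
# objects (e.g. workers, forklifts) from a deep learning detector,
# given as (x_min, y_min, x_max, y_max) in pixel coordinates.
dynamic_boxes = np.array([
    [100, 50, 220, 300],   # e.g. a detected person
    [400, 120, 520, 260],  # e.g. a detected forklift
])

# Hypothetical ORB keypoint locations as (x, y) pixel coordinates.
keypoints = np.array([
    [150, 100],  # inside the first box  -> treated as dynamic, dropped
    [50, 40],    # outside all boxes     -> treated as static, kept
    [450, 200],  # inside the second box -> treated as dynamic, dropped
    [600, 400],  # outside all boxes     -> treated as static, kept
])

def filter_dynamic_points(points, boxes):
    """Keep only feature points that fall outside every dynamic-object box."""
    x, y = points[:, 0:1], points[:, 1:2]  # column vectors, shape (N, 1)
    # Broadcast against the B boxes to get an (N, B) inside/outside matrix.
    inside = ((x >= boxes[:, 0]) & (x <= boxes[:, 2]) &
              (y >= boxes[:, 1]) & (y <= boxes[:, 3]))
    return points[~inside.any(axis=1)]

static_points = filter_dynamic_points(keypoints, dynamic_boxes)
print(static_points)  # only the two points outside all boxes remain
```

In the paper's refined version, the removal decision is more nuanced than a pure box test; this sketch only illustrates the basic idea of using detection priors to mask feature points before pose estimation.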
Intelligent optical sensing technologies play important roles in many fields, one of which is helping unmanned devices such as UAVs, autonomous mobile robots, and intelligent robots achieve accurate localization and mapping. With the advancement of Industry 4.0 and intelligent manufacturing, the adoption of autonomous mobile robots has become an important indicator of a country's industrial modernization. Autonomous localization and mapping, the core problem in autonomous mobile robot research, remains a focus of and a difficulty for many scholars. Through the efforts of early researchers and engineers, localization and mapping for autonomous mobile robots in simple static environments has achieved fruitful results and plays an important role in practical industrial applications. However, when autonomous mobile robots face more complex or changing surroundings, traditional localization and mapping methods based on geometric features such as points and lines cannot achieve accurate results, and may even produce erroneous data that hinders the robots' normal operation. In this paper, considering the complex dynamic environments that autonomous mobile robots encounter in actual work, we propose a method to obtain higher-level semantic information from the surrounding environment and use it for autonomous mobile robot localization and mapping. The method uses deep learning to mine this semantic information on top of the geometric information obtained by traditional methods, so that the autonomous mobile robot can recognize and reason about the objects in its surroundings, thereby assisting it in more accurate localization and mapping.