In visual simultaneous localization and mapping (SLAM), odometry estimation and navigation map building are carried out concurrently using only cameras. An important step in the SLAM process is the detection and analysis of keypoints found in the environment. Establishing accurate correspondences between these points allows us to build an optimal point cloud, maximizing the localization accuracy of the mobile robot and, therefore, producing a precise map of the environment. In this presentation, we perform an extensive comparative study of the correspondences produced by various detector/descriptor combinations and contrast the performance of two iterative closest point (ICP) algorithms used in the RGB-D SLAM problem. An adaptive RGB-D SLAM system is proposed, and its performance on the TUM RGB-D dataset is presented and discussed.
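As a rough illustration of how such a detector/descriptor comparison can be set up (this is a minimal sketch, not the system evaluated in the presentation), the snippet below matches keypoints between two frames with OpenCV for a couple of common choices, ORB and SIFT, and counts the correspondences that survive Lowe's ratio test. The image paths and parameter values are placeholders.

```python
# Minimal sketch (illustrative only): comparing the number of good keypoint
# correspondences produced by different detector/descriptor combinations.
import cv2


def match_features(img1, img2, detector_name="ORB"):
    """Detect, describe, and match keypoints between two grayscale frames."""
    if detector_name == "ORB":
        detector = cv2.ORB_create(nfeatures=1000)
        norm = cv2.NORM_HAMMING   # binary descriptors
    elif detector_name == "SIFT":
        detector = cv2.SIFT_create()
        norm = cv2.NORM_L2        # floating-point descriptors
    else:
        raise ValueError(f"unsupported detector: {detector_name}")

    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)

    # Brute-force k-NN matching followed by Lowe's ratio test
    # to discard ambiguous correspondences.
    matcher = cv2.BFMatcher(norm)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in knn
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return kp1, kp2, good


if __name__ == "__main__":
    # Two frames from any RGB-D sequence (file names are placeholders).
    a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
    b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
    for name in ("ORB", "SIFT"):
        _, _, good = match_features(a, b, name)
        print(f"{name}: {len(good)} good correspondences")
```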