Paper
23 January 2012
Accurate dense 3D reconstruction of moving and still objects from dynamic stereo sequences based on temporal modified-RANSAC and feature-cut
Proceedings Volume 8301, Intelligent Robots and Computer Vision XXIX: Algorithms and Techniques; 830105 (2012) https://doi.org/10.1117/12.908037
Event: IS&T/SPIE Electronic Imaging, 2012, Burlingame, California, United States
Abstract
This paper improves the authors' earlier method for reconstructing the 3D structure of moving and still objects tracked in video and/or depth image sequences acquired by moving cameras and/or a moving range finder. The authors previously proposed a Temporal Modified-RANSAC (TMR) based method [1] that (1) discriminates each moving object from the still background in color and depth image sequences acquired by moving stereo cameras or a moving range finder, (2) computes the stereo cameras' egomotion, (3) computes the motion of each moving object, and (4) reconstructs the 3D structure of each moving object and the background. However, the TMR-based method has two problems concerning 3D reconstruction: inaccurate segmentation of each object's region and sparse 3D reconstructed points within each region. To solve these problems, this paper proposes a new 3D segmentation method that utilizes Graph-cut, which is widely used for segmentation tasks. First, the proposed method tracks feature points in the color and depth image sequences to obtain the 3D optical flows of the feature points over every N frames. Then, TMR classifies all of the obtained 3D optical flows into a 3D flow set for the background and one for each moving object; simultaneously, the rotation matrix and translation vector of each 3D flow set are computed. Next, Graph-cut is performed using an energy function that consists of a color probability, a structure probability, and an a priori probability, so that the pixels in each frame are segmented into the object regions and the background region. Finally, 3D point clouds are obtained from the segmentation result image and the depth image, and the point clouds are merged using the rotation and translation from the frame N frames prior to the current frame, so that 3D models of the background and each moving object are constructed from dense 3D point data.
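As an illustration of the final reconstruction step described in the abstract, the sketch below back-projects segmented depth pixels into 3D and merges them into per-object point clouds using a rotation matrix and translation vector estimated for each flow set. This is a minimal Python/NumPy sketch, not the authors' implementation; the function names, the intrinsic matrix K, and the label convention (0 for background, k > 0 for moving object k) are assumptions introduced here for clarity.

```python
# Illustrative sketch (not the paper's code) of the merging step: back-project
# the segmentation result image and depth image to 3D, then accumulate each
# region's points into its model using the per-object rotation R and
# translation t estimated by Temporal Modified-RANSAC for that 3D flow set.
import numpy as np

def back_project(depth, labels, K):
    """Convert a depth image plus a per-pixel label image into 3D points per label.

    depth  : (H, W) array of depth values, 0 where invalid
    labels : (H, W) integer array; 0 = background, k > 0 = moving object k (assumed convention)
    K      : (3, 3) camera intrinsic matrix (assumed known from calibration)
    Returns a dict mapping label -> (M, 3) array of camera-frame points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())])  # 3 x M homogeneous pixels
    rays = np.linalg.inv(K) @ pix                               # normalized viewing rays
    pts = (rays * depth[valid]).T                               # M x 3 camera-frame points
    labs = labels[valid]
    return {k: pts[labs == k] for k in np.unique(labs)}

def merge_into_models(models, frame_points, motions):
    """Accumulate the current frame's points into each object's dense model.

    models       : dict label -> (N, 3) points already accumulated in the model frame
    frame_points : dict label -> (M, 3) points back-projected from the current frame
    motions      : dict label -> (R, t) mapping current-frame points into the model
                   coordinate system (R: 3x3 rotation, t: length-3 translation)
    """
    for k, pts in frame_points.items():
        R, t = motions.get(k, (np.eye(3), np.zeros(3)))
        aligned = pts @ R.T + t
        models[k] = np.vstack([models.get(k, np.empty((0, 3))), aligned])
    return models
```

In the abstract's pipeline, the (R, t) pair for label k would be the motion estimated by TMR for that object's 3D flow set between the current frame and the frame N frames earlier, so repeated calls over the sequence yield dense point models for the background and each moving object.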
© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Naotomo Tatematsu and Jun Ohya "Accurate dense 3D reconstruction of moving and still objects from dynamic stereo sequences based on temporal modified-RANSAC and feature-cut", Proc. SPIE 8301, Intelligent Robots and Computer Vision XXIX: Algorithms and Techniques, 830105 (23 January 2012); https://doi.org/10.1117/12.908037
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Image segmentation
3D modeling
Optical flow
3D image processing
Clouds
Stereoscopic cameras
3D acquisition