This paper proposes a method for constructing a reasonably scaled end-to-end free-viewpoint video system that
captures multiple view-plus-depth data, reconstructs three-dimensional polygon models of objects, and displays
them in a virtual 3D CG space. The system consists of a desktop PC and four Kinect sensors. First, view-plus-depth
data at four viewpoints are captured simultaneously by the Kinect sensors. The captured data are then
integrated into point cloud data using the camera parameters. The resulting point cloud is sampled into
volume data consisting of voxels. Since the volume data generated from the point cloud are sparse, they are
densified using a global optimization algorithm. The final step reconstructs surfaces on the dense
volume data with the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D
polygon model, a simple inpainting method for improving depth maps is also presented.
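To illustrate the integration step, the sketch below back-projects a single depth map into a world-coordinate point cloud using a pinhole camera model. The abstract does not give the calibration details, so the intrinsics (`fx`, `fy`, `cx`, `cy`) and the 4x4 camera-to-world extrinsic are assumed placeholders; in the described system one such transform per Kinect would merge the four viewpoints into one cloud.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy, extrinsic):
    """Back-project a depth map (in meters) to 3-D points in world coordinates.

    depth:     (H, W) array; zeros mark missing measurements.
    fx, fy:    focal lengths in pixels (assumed pinhole model).
    cx, cy:    principal point in pixels.
    extrinsic: 4x4 camera-to-world transform from calibration.
    Returns an (N, 3) array of valid points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    valid = z > 0                                   # skip missing depth
    x = (u - cx) * z / fx                           # pinhole back-projection
    y = (v - cy) * z / fy
    pts_cam = np.stack([x[valid], y[valid], z[valid],
                        np.ones(valid.sum())], axis=0)  # 4 x N homogeneous
    pts_world = extrinsic @ pts_cam                 # move into world frame
    return pts_world[:3].T
```

Applying this per sensor and concatenating the results yields the merged point cloud that is subsequently voxelized.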
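The paper's exact inpainting method is not specified in the abstract; as a generic stand-in, the sketch below fills missing (zero) depth pixels iteratively with the median of the valid values in their 3x3 neighborhood, which shrinks holes inward from their borders.

```python
import numpy as np

def inpaint_depth(depth, max_iters=10):
    """Iteratively fill zero-valued (missing) depth pixels with the median of
    valid 3x3 neighbours. A hole-filling sketch, not the paper's algorithm."""
    d = depth.astype(float).copy()
    for _ in range(max_iters):
        holes = np.argwhere(d == 0)
        if holes.size == 0:
            break
        fills = {}
        for i, j in holes:
            patch = d[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            vals = patch[patch > 0]            # use only valid neighbours
            if vals.size:
                fills[(i, j)] = np.median(vals)
        if not fills:                          # no hole pixel has valid neighbours
            break
        for (i, j), val in fills.items():      # apply all fills at once so this
            d[i, j] = val                      # pass only reads original values
    return d
```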