As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory has developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. The system hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system are: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of the robot's location, and 3) the video streams produced by the stereo pair. At each step during the traverse the system captures new image data, finds image features hypothesized to lie on the exterior of a building, computes the range to those features, estimates the robot's motion since the previous step, and combines these data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to represent multiple possible locations simultaneously. For our testing, we derived the a priori map manually from non-orthorectified overhead imagery, although this process could be automated. The software system consists of three primary components. The first is a stereo-based visual odometry system that calculates the 6-degree-of-freedom camera motion between sequential frames. The second uses a set of heuristics to identify straight-line segments that are likely to be part of a building exterior; range to these features is computed using binocular or wide-baseline stereo. The resulting features and their associated range measurements are fed to the third component, a particle-filter-based localization system, which uses the map and the most recent results from the first two components to update the estimate of the robot's location.
This report summarizes the design of both the hardware and software and describes the results of applying the system to the global localization of a camera system over an approximately half-kilometer traverse across JPL's Pasadena campus.
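The per-step update described above (propagate each location hypothesis by the odometry estimate, weight it by agreement between measured and map-predicted ranges, then resample) can be sketched as a generic particle-filter step. The motion model, Gaussian sensor model, and `building_map` range-query interface below are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def particle_filter_step(particles, motion, ranges, building_map, noise=0.1):
    """One update of a particle-filter localizer (illustrative sketch).

    particles    -- list of (x, y, heading, weight) location hypotheses
    motion       -- (dx, dy, dtheta) visual-odometry estimate, robot frame
    ranges       -- list of (bearing, distance) measurements to wall features
    building_map -- function (x, y, bearing) -> expected distance to a wall
    """
    updated = []
    for x, y, th, w in particles:
        # Propagate each hypothesis by the odometry estimate, with noise.
        th2 = th + motion[2] + random.gauss(0.0, noise)
        x2 = x + math.cos(th2) * motion[0] - math.sin(th2) * motion[1] \
               + random.gauss(0.0, noise)
        y2 = y + math.sin(th2) * motion[0] + math.cos(th2) * motion[1] \
               + random.gauss(0.0, noise)
        # Re-weight by agreement between measured and map-predicted ranges
        # (Gaussian sensor model, unit variance -- an assumed choice).
        for bearing, dist in ranges:
            expected = building_map(x2, y2, th2 + bearing)
            w *= math.exp(-((dist - expected) ** 2) / 2.0)
        updated.append((x2, y2, th2, w))
    # Normalize weights and resample so particles concentrate on likely
    # locations, while still allowing multiple distinct hypotheses to survive.
    total = sum(w for *_, w in updated) or 1.0
    weights = [w / total for *_, w in updated]
    resampled = random.choices(updated, weights=weights, k=len(updated))
    return [(x, y, th, 1.0 / len(updated)) for x, y, th, _ in resampled]
```

Because the particle set can hold mass in several map regions at once, this kind of filter naturally represents the multiple possible locations mentioned in the abstract.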
Detecting water hazards for autonomous, off-road navigation of unmanned ground vehicles is a largely unexplored problem. In this paper, we catalog environmental variables that affect the difficulty of this problem, including day vs. night operation, whether the water reflects sky or other terrain features, the size of the water body, and other factors. We briefly survey sensors that are applicable to detecting water hazards in each of these conditions. We then present analyses and results for water detection for four specific sensor cases: (1) using color image classification to recognize sky reflections in water during the day, (2) using ladar to detect the presence of water bodies and to measure their depth, (3) using short-wave infrared (SWIR) imagery to detect water bodies, as well as snow and ice, and (4) using mid-wave infrared (MWIR) imagery to recognize water bodies at night. For color imagery, we demonstrate solid results with a classifier that runs at nearly video rate on a 433 MHz processor. For ladar, we present a detailed propagation analysis that shows the limits of water body detection and depth estimation as a function of lookahead distance, water depth, and ladar wavelength. For SWIR and MWIR, we present sample imagery from a variety of data collections that illustrate the potential of these sensors. These results demonstrate significant progress on this problem.
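The first sensor case, recognizing sky reflections in daytime color imagery, can be sketched as a per-pixel rule that flags bright, blue-dominant pixels. The thresholds and the rule itself are illustrative assumptions for the sake of a minimal example; the paper's actual classifier and its trained parameters are not given here:

```python
def sky_reflection_mask(pixels, brightness_min=120, blue_margin=10):
    """Flag pixels that plausibly reflect sky: bright and blue-dominant.

    pixels -- list of (r, g, b) tuples with 8-bit channel values.
    Thresholds are illustrative placeholders, not trained values.
    Returns a list of booleans, one per pixel.
    """
    mask = []
    for r, g, b in pixels:
        # Sky reflections on still water tend to be bright overall...
        bright = (r + g + b) / 3.0 >= brightness_min
        # ...and dominated by the blue channel.
        bluish = b >= g + blue_margin and b >= r + blue_margin
        mask.append(bright and bluish)
    return mask
```

A rule this simple is cheap enough to run at near video rate even on modest hardware, which is consistent with the kind of processor budget the abstract cites, though the real classifier would need to handle reflections of terrain features, not just sky.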
Tracked mobile robots in the 20 kg size class are under development for applications in urban reconnaissance. For efficient deployment, it is desirable for teams of robots to be able to automatically execute leader/follower behaviors, with one or more followers tracking the path taken by a leader. The key challenges to enabling such a capability are (1) to develop sensor packages for such small robots that can accurately determine the path of the leader and (2) to develop path-following algorithms for the subsequent robots. To date, we have integrated gyros, accelerometers, compass/inclinometers, odometry, and differential GPS into an effective sensing package for a small urban robot. This paper describes the sensor package, sensor processing algorithm, and path tracking algorithm we have developed for the leader/follower problem in small robots and shows the results of performance characterization of the system. We also document pragmatic lessons learned about design, construction, and electromagnetic interference issues particular to the performance of state sensors on small robots.
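The path-following half of the leader/follower problem can be sketched with a pure-pursuit-style rule: steer the follower toward a lookahead point on the leader's recorded path. This is a generic sketch under assumed interfaces, not the paper's specific tracking algorithm:

```python
import math

def pure_pursuit_heading(pose, path, lookahead=1.0):
    """Heading error steering a follower toward the leader's recorded path.

    pose -- follower (x, y, heading) in world frame, heading in radians.
    path -- list of (x, y) waypoints recorded from the leader's traverse.
    Returns the heading error (rad) to the chosen lookahead point.
    Generic pure-pursuit sketch; the paper's algorithm may differ.
    """
    x, y, th = pose
    # Pick the first recorded waypoint at least `lookahead` away; fall back
    # to the final waypoint if the follower is nearly caught up.
    target = path[-1]
    for wx, wy in path:
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break
    desired = math.atan2(target[1] - y, target[0] - x)
    # Wrap the heading error into [-pi, pi] before commanding the tracks.
    return (desired - th + math.pi) % (2 * math.pi) - math.pi
```

In a system like the one described, the waypoints would come from the fused gyro/accelerometer/compass/odometry/DGPS state estimate, and the heading error would drive the differential track speeds.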