Paper
29 December 2004
Active vision and image/video understanding systems built upon network-symbolic models for perception-based navigation of mobile robots in real-world environments
Proceedings Volume 5609, Mobile Robots XVII; (2004) https://doi.org/10.1117/12.577747
Event: Optics East, 2004, Philadelphia, Pennsylvania, United States
Abstract
To be fully successful, robots need reliable perceptual systems comparable to human vision. Geometric operations are hard to apply to the processing of natural images. Instead, the brain builds a relational network-symbolic structure of the visual scene, using different cues to establish the relational order of surfaces and objects with respect to the observer and to each other. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image is converted from a "raster" into a "vector" representation. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is the subject of recognition. Such recognition is not affected by local changes or by the varying appearance of the object across a set of similar views. Once built, the model of the visual scene changes more slowly than the local information in the visual buffer. This allows visual information to be disambiguated, and actions and navigation to be controlled effectively, via incremental relational changes in the visual buffer. Network-Symbolic models can be seamlessly integrated into the NIST 4D/RCS architecture to better interpret images and video for situation awareness, target recognition, navigation, and action.
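The core idea of the abstract, that recognition operates on a derived relational structure rather than on a primary view, can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: it assumes a scene is reduced to (feature, relation, feature) triples, and two scenes "match" when some relabeling of features preserves the relational structure, so local appearance labels do not matter.

```python
# Hypothetical sketch of structure-based recognition: a scene is stored as a
# relational graph of symbolic features, and matching compares the derived
# relational structure instead of raw pixel data.
from itertools import permutations


def scene_graph(relations):
    """Model a scene as an immutable set of (subject, relation, object) triples."""
    return frozenset(relations)


def structure_matches(model, observed):
    """True if some renaming of observed features reproduces the model's
    relational structure -- feature labels (local appearance) are ignored."""
    model_feats = sorted({f for s, _, o in model for f in (s, o)})
    obs_feats = sorted({f for s, _, o in observed for f in (s, o)})
    if len(model_feats) != len(obs_feats):
        return False
    # Brute-force search over feature correspondences (fine for toy scenes).
    for perm in permutations(obs_feats):
        mapping = dict(zip(model_feats, perm))
        if {(mapping[s], r, mapping[o]) for s, r, o in model} == set(observed):
            return True
    return False


# A "door" modeled by the relational order of its surfaces.
door = scene_graph({("frame", "encloses", "panel"),
                    ("panel", "supports", "handle")})

# The same door from a different view: features relabeled, structure kept.
view = scene_graph({("f1", "encloses", "f2"), ("f2", "supports", "f3")})

print(structure_matches(door, view))  # True: the relational structure survives
```

The brute-force correspondence search is exponential and serves only to make the invariance explicit: any scene whose triples are isomorphic to the model's is recognized, regardless of how the individual features appear or are labeled.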
© (2004) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Gary Kuvich "Active vision and image/video understanding systems built upon network-symbolic models for perception-based navigation of mobile robots in real-world environments", Proc. SPIE 5609, Mobile Robots XVII, (29 December 2004); https://doi.org/10.1117/12.577747
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
Visualization
Information visualization
Visual process modeling
Brain
Systems modeling
Navigation systems
Image processing