Paper
Real-time visual processing in support of autonomous driving
26 February 1997
Marilyn Nashman, Henry Schneiderman
Proceedings Volume 2962, 25th AIPR Workshop: Emerging Applications of Computer Vision (1997); https://doi.org/10.1117/12.267816
Event: 25th Annual AIPR Workshop on Emerging Applications of Computer Vision, 1996, Washington, DC, United States
Abstract
Autonomous driving provides an effective way to address traffic concerns such as safety and congestion. Interest in the development of autonomous driving has grown in recent years, spanning high-speed driving on highways, urban driving, and navigation through less structured off-road environments. The primary challenge in autonomous driving is developing perception techniques that are reliable under the extreme variability of outdoor conditions in any of these environments. Roads vary in appearance. Some are smooth and well marked, while others have cracks and potholes or are unmarked. Shadows, glare, varying illumination, dirt or foreign matter, other vehicles, rain, and snow also affect road appearance. This paper describes a visual processing algorithm that supports autonomous driving. The algorithm requires that lane markings be present and attempts to track the lane markings on each of the two boundaries of the lane of travel. There are three stages of visual processing computation: extracting edges, determining which edges correspond to lane markers, and updating geometric models of the lane markers. A fourth stage computes a steering command for the vehicle based on the updated road model. All processing is confined to the 2-D image plane. No information about the motion of the vehicle is used. This algorithm has been used as part of a complete system to drive an autonomous vehicle, a high mobility multipurpose wheeled vehicle (HMMWV). Autonomous driving has been demonstrated on both local roads and highways at speeds up to 100 kilometers per hour (km/h). The algorithm has performed well in the presence of non-ideal road conditions including gaps in the lane markers, sharp curves, shadows, cracks in the pavement, wet roads, rain, dusk, and nighttime driving. The algorithm runs at a sampling rate of 15 Hz and has a worst-case processing delay of 150 milliseconds. Processing is implemented under the NASA/NBS Standard Reference Model for Telerobotic Control System Architecture (NASREM) and runs on a dedicated image processing engine and a VME-based microprocessor system.
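The following Python sketch is not the authors' implementation; it is an illustration, under assumed function names, thresholds, and a straight-line marker model, of how the stages listed in the abstract could fit together entirely in the 2-D image plane: extract edges, keep the edges that lie near the predicted lane markers, update a geometric model of each marker, and compute a steering command from the updated lane model.

"""
Illustrative sketch (not the paper's code) of the abstract's pipeline.
All names, gains, and the (slope, intercept) line model are assumptions.
"""
import numpy as np

def extract_edges(gray, grad_thresh=40.0):
    """Stage 1: horizontal intensity gradient; returns (row, col) edge points."""
    gx = np.abs(np.diff(gray.astype(float), axis=1))
    rows, cols = np.nonzero(gx > grad_thresh)
    return np.stack([rows, cols], axis=1)

def select_marker_edges(edges, model, gate_px=15):
    """Stage 2: keep edges within a gate of the predicted marker position.
    model = (slope, intercept) so that col = slope * row + intercept."""
    slope, intercept = model
    predicted_cols = slope * edges[:, 0] + intercept
    return edges[np.abs(edges[:, 1] - predicted_cols) < gate_px]

def update_model(edges, model, blend=0.3):
    """Stage 3: least-squares line fit in the image plane, blended with the
    previous model for temporal smoothing (an assumed update rule)."""
    if len(edges) < 2:
        return model                                  # keep prior model if too few edges
    fit = np.polyfit(edges[:, 0], edges[:, 1], 1)     # [slope, intercept]
    return tuple((1 - blend) * np.array(model) + blend * fit)

def steering_command(left, right, image_width, gain=0.002, ref_row=400):
    """Stage 4: steer toward the lane center evaluated at a reference image row.
    Sign convention (positive = lane center right of image center) is assumed."""
    lane_center = 0.5 * ((left[0] * ref_row + left[1]) +
                         (right[0] * ref_row + right[1]))
    return gain * (lane_center - image_width / 2.0)

# One cycle per frame (the paper reports a 15 Hz sampling rate); the left and
# right marker models persist between frames and must be initialized once.
# edges = extract_edges(frame)
# left_model  = update_model(select_marker_edges(edges, left_model),  left_model)
# right_model = update_model(select_marker_edges(edges, right_model), right_model)
# steer = steering_command(left_model, right_model, frame.shape[1])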
© (1997) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Marilyn Nashman and Henry Schneiderman "Real-time visual processing in support of autonomous driving", Proc. SPIE 2962, 25th AIPR Workshop: Emerging Applications of Computer Vision, (26 February 1997); https://doi.org/10.1117/12.267816
KEYWORDS
Roads, Visual process modeling, Data modeling, Visualization, 3D modeling, Image processing, Detection and tracking algorithms