With regard to obstacle avoidance, a paradigm shift from technology-centered solutions to technology-independent solutions is taking place. This trend also gives rise to a shift from function-specific solutions to multifunctional solutions. A number of existing approaches are reviewed, and a case study of a biologically inspired insect vision model is used to illustrate the new paradigm. The insect vision model leads to the realization of a sensor that is low in complexity, compact, multifunctional, and technology independent. Technology independence means that any front-end technology, resulting in optical, infrared, or millimeter-wave detection, for example, can be used with the model. Each technology option can be used separately or together with simple data fusion. Multifunctionality implies that the same system can detect obstacles, perform tracking, estimate time-to-impact, estimate bearing, and so on, and is thus not function specific. Progress with the latest VLSI realization of the insect vision sensor is reviewed, and gallium arsenide is proposed as the future medium that will support a multifunctional, multitechnology fusion of optical, infrared, and millimeter-wave approaches. Applications are far reaching and include autonomous robot guidance, automobile anti-collision warning, IVHS, driver alertness warning, aids for the blind, continuous process monitoring/web inspection, and automated welding.
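One function named above, time-to-impact estimation, admits a compact illustration. The sketch below uses the classic "tau" relation (time-to-impact ≈ θ / (dθ/dt) for the angular size θ of an approaching object), which is commonly associated with insect vision; the geometry and numbers are illustrative assumptions, not the authors' actual model.

```python
import math

def time_to_impact(theta, theta_dot):
    """Tau cue: time-to-impact estimated as theta / (d theta / dt)."""
    return theta / theta_dot

# Object of radius 0.5 m approaching at 2 m/s from 10 m away,
# so the true time-to-impact is 10 / 2 = 5 s.
R, d, v = 0.5, 10.0, 2.0
dt = 1e-4

theta_now = 2 * math.atan(R / d)              # angular size now
theta_next = 2 * math.atan(R / (d - v * dt))  # angular size a moment later
theta_dot = (theta_next - theta_now) / dt     # rate of angular expansion

tau = time_to_impact(theta_now, theta_dot)    # close to 5 s
true_tti = d / v
```

The appeal of the cue is that it needs only angular measurements, no range sensor, which is one reason it maps well onto simple front ends.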
This paper presents a solution to the problem of position estimation for autonomous land vehicles (ALVs). In our previous research, we used a recursive least-square error minimization method to determine the vehicle's pose information. When a reasonably precise initial estimate is used, our system converges to the vehicle's pose parameters with high accuracy. Using the global positioning system (GPS), we can obtain a position estimate with errors that, in normal conditions, are of the order of 100 meters. This accuracy is not sufficient for automated navigation. The initial estimate taken from the GPS system can be refined and improved by using curvature matching. By comparing the expected and calculated 3D road curvature, the system can recognize the current position on the road. Curvature in 3D space is determined by selecting and backprojecting points from a 2D road image and fitting a set of cubic spline patches to them. At present, the system has been tested using both synthetic and real data. The initial results indicate that the method can be made to work well; however, care is required in the measurement and calculation of the curvature in real applications.
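As a hedged illustration of the curvature step, the sketch below fits a cubic to sampled road points and evaluates the planar curvature κ = |y''| / (1 + y'²)^(3/2); the paper's method fits cubic spline patches to backprojected 3D points, so this 2D polynomial fit is a simplification.

```python
import numpy as np

def fit_curvature(x, y, x0):
    """Fit a cubic y(x) to the points and evaluate the planar curvature
    kappa = |y''| / (1 + y'^2)^(3/2) at x0."""
    p = np.poly1d(np.polyfit(x, y, 3))
    dp, ddp = p.deriv(1), p.deriv(2)
    return abs(ddp(x0)) / (1.0 + dp(x0) ** 2) ** 1.5

# Points sampled from an arc of a circle of radius 50 m,
# whose true curvature is 1 / 50 = 0.02 per meter.
R = 50.0
x = np.linspace(-5.0, 5.0, 21)
y = R - np.sqrt(R ** 2 - x ** 2)

kappa = fit_curvature(x, y, 0.0)   # close to 0.02
```

In practice the backprojected points are noisy, which is exactly why the abstract warns that care is required in measuring curvature.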
Dissector tracking systems are employed for coordinate determination and for tracking small light targets in optical coupling devices, laser location, and navigation systems. An approach is suggested for analyzing the accuracy with which a dissector tracking system determines the coordinates of a small point light target. The robustness and stability of the algorithm under deviations of the model parameters from their nominal values are examined, along with the accuracy of coordinate determination at different target contrasts, movement speeds, signal-to-noise ratios, and dissector tracking system parameters. The results are obtained by statistical computer simulation of a Kalman filter.
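A minimal sketch of the kind of statistical Kalman-filter simulation described, assuming a one-dimensional constant-velocity target observed in noise (the paper's model and parameters are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise (sigma = 0.5 m)

x = np.array([[0.0], [0.0]])            # state estimate [position, velocity]
P = np.eye(2)                           # estimate covariance

true_pos, true_vel = 0.0, 1.0
errors = []
for _ in range(200):
    true_pos += true_vel * dt
    z = true_pos + rng.normal(0.0, 0.5)          # noisy coordinate measurement
    x = F @ x                                    # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                          # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    errors.append(abs(x[0, 0] - true_pos))

rms = float(np.sqrt(np.mean(np.square(errors))))  # well below the 0.5 m noise
```

Repeating such runs while perturbing F, Q, or R away from their nominal values is one way to probe the robustness questions the abstract raises.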
In this paper we present a more human-like approach to navigating a mobile robot. We maintain that navigation of a mobile robot always occurs within a certain context, which we call a situation. Typical situations might be 'entering a corridor,' 'passing through a door,' 'seeking a goal,' etc. To approximate the navigation behavior of an intelligent agent in such a situation, we define generic situations as collections of pathways. Each pathway describes a possible path followed by that agent in that situation. We further assume that these pathways can be generated by observing a limited set of beacons associated with each situation. Hence, the robot makes use of relative positions only (distance and bearing of the beacons with respect to the robot, and distances between the different beacons), obviating the need for an absolute coordinate system. To limit the number of pathways that need to be stored to describe a generic situation, we propose a competition and cooperation algorithm. To show how this approach fares in realistic circumstances on a real mobile robot, we include preliminary results with a triangulation-based ultrasonic sensor system.
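As a hedged illustration of beacon-relative positioning, the sketch below recovers the robot's position in a frame defined by a pair of beacons from two range measurements alone; the actual triangulation-based ultrasonic system is not described in this detail, so the geometry here is an assumption.

```python
import math

def locate(r1, r2, baseline):
    """Robot position in a frame with beacon 1 at the origin and beacon 2
    at (baseline, 0), from the two measured ranges; the y >= 0 solution
    is returned (range-only triangulation is symmetric about the baseline)."""
    x = (r1 ** 2 - r2 ** 2 + baseline ** 2) / (2.0 * baseline)
    y = math.sqrt(max(r1 ** 2 - x ** 2, 0.0))
    return x, y

# Robot actually at (1, 2) relative to beacons at (0, 0) and (3, 0).
r1 = math.hypot(1.0, 2.0)
r2 = math.hypot(1.0 - 3.0, 2.0)
x, y = locate(r1, r2, 3.0)   # recovers (1.0, 2.0)
```

Because the frame is anchored to the beacons themselves, no absolute coordinate system is needed, which is the point the abstract makes.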
The most significant challenge encountered in the implementation of the MDARS Interior security robot system has involved navigational referencing -- the ongoing process of determining a mobile robot's position relative to a specified global frame of reference. Sensors and processing used in local navigation (determining position relative to objects in the environment and not colliding with them en route) can also support global navigation in a mapped environment. The task involves not only detecting and localizing features in the robot's environment, but also establishing with some confidence that these features are in fact specific features that appear in the world model. This perceptual function is one that humans do easily and instinctively, while robotic capabilities in this regard are rudimentary at best. This paper discusses a number of candidate approaches to navigational referencing applicable to indoor operating environments in terms of relevant evaluation criteria (including performance, cost, and generality of applicability), and describes how the experience of phased testing in real-world environments has driven the evolution of the MDARS system design.
The design and prototyping of a development environment, called BALI, for a small robot, viz., the MIT 6.270 robot, is presented in this paper. BALI is being developed and used for research work using a 6.270-based robot. Building on the experience with IC (Interactive C) for programming the 6.270 robot and on new technologies like Java, a more powerful and low-cost robot development environment is possible. The goal of BALI is to provide a flexible, customizable, and extensible development environment so that robot researchers can quickly tailor BALI to their robots. Given that the 6.270 robot is really a building kit made up of LEGO blocks (or similar kinds of physical building blocks), the 68HC11-based motherboard, and a variety of sensors, BALI cannot be specially built for one 'instance' of the 6.270 robot. Rather, the guiding principle for building BALI should be to provide the GUI (graphical user interface) 'primitives' from which one can assemble and build one's own development environment. Thus GUI primitives for displaying status information, sensor readings, robot orientation, and environment maps must be provided. Many of these primitives are already provided in Java; it is the robot-specific ones that have to be developed for BALI. The Java-like language that forms the core of BALI is the main focus of this paper.
Our goal is to produce a prototype of an autonomous robot satellite, SATBOT. This robot differs from conventional robots in that it has three degrees of freedom, uses magnetics to direct its motion, and needs a zero-gravity environment. The design integrates the robot's structure and a biomorphic (biological morphology) control system to produce a survival-oriented vehicle that adapts to an unknown environment. Biomorphic systems, loosely modeled after biological systems, use simple analog circuitry, are low power, and are microprocessor independent. These analog networks, called nervous networks (Nv), are used to solve real-time control problems. The Nv approach to problem solving in robotics has produced many surprisingly capable machines which exhibit emergent behavior. The network can be designed to respond to positive or negative inputs from a sensor and produce a desired directed motion. The fluidity and direction of motion are set by the neurons and are inherent to the structure of the device. The robot is designed to orient itself with respect to a local magnetic field, to direct its attitude toward the greatest source of light, and to robustly recover from variations in the local magnetic field, power source, or structural stability. This design uses a two-neuron network which acts as a push-pull controller for the actuator (an air core coil), and two sun sensors (photodiodes) as bias inputs to the neurons. The effect of sensor activation on an attractive or repulsive torque (directional motion) is studied. A discussion of this system's energy and frequency, noise immunity, and some dynamic characteristics is presented.
The navigation problem of autonomous mobile robots (AMRs) through unknown terrains, i.e. terrains whose models are not a priori known, is considered. The word terrain means both the ground surface and the obstacles which may appear in the environment. Obstacle detection is sufficient for navigation on a flat terrain with discrete obstacles; when the terrain is uneven, a local elevation map representing the terrain irregularities is essential. An algorithm is presented to navigate the AMR, using radar range sensory data, among arbitrary obstacles on a ground surface which is not completely flat, i.e. where some rough regions or discontinuities of terrain elevation, as in the case of bumps and holes, may appear. Range sensory data are transformed into representations, for both the terrain surface features and the obstacles, that can be used by the navigation decision-making components of the system. The navigation problem involves integration between obstacle avoidance and terrain acquisition. The AMR builds the terrain model as it navigates and constructs a local free space for subgoal selection. The AMR is modeled by its bounding circle, and any positional uncertainty is accounted for in the navigation algorithm. Properties of the algorithm are illustrated by simulation examples.
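One way to account for the robot's bounding circle (and positional uncertainty) in a grid model is to grow obstacles by the combined radius so the AMR can be planned for as a point; this is a standard configuration-space trick, sketched below as an assumption about how such bookkeeping might look, not the paper's algorithm.

```python
import math

def grow_obstacles(grid, radius):
    """Return a copy of the occupancy grid with every obstacle cell grown
    by `radius` cells (robot bounding-circle radius plus positional
    uncertainty), so the robot can then be treated as a point."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    r = int(math.ceil(radius))
    for i in range(rows):
        for j in range(cols):
            if grid[i][j]:
                for di in range(-r, r + 1):
                    for dj in range(-r, r + 1):
                        if di * di + dj * dj <= radius * radius:
                            ii, jj = i + di, j + dj
                            if 0 <= ii < rows and 0 <= jj < cols:
                                out[ii][jj] = 1
    return out

grid = [[0] * 7 for _ in range(7)]
grid[3][3] = 1                        # a single obstacle cell
grown = grow_obstacles(grid, 1.5)     # cells within 1.5 cells are now blocked
```

Subgoals can then be selected anywhere in the unblocked portion of the grown map without further clearance checks.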
The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture; i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.
The SIC/R laboratory is conducting a research program called Autonomy of Mobile Robots in Unstructured Environments (AMRU), focusing on the realization of light, low-cost legged robots for indoor and outdoor applications, the study of image and speech processing, and the development of path planners. This paper summarizes the description of the first four robots (AMRU 1 to 4) listed in Table 1. Low cost allows the sacrifice and replacement of robots used in dangerous environmental conditions (minefields, battlefields, nuclear sites, etc.) and implies the choice of low-level proprioceptive and exteroceptive sensors coupled with a simple digital control system; a light structure facilitates transportation (by air, land, or sea) to the application site.
Ultra-wideband (UWB) communications is a new field of technology with a wide range of applications, from range finding to wide-bandwidth communications. Ultra-wideband signals are unusual because their bandwidth-to-center-frequency ratio is not small and can be greater than 100%. Recent developments integrating this technology on a chip have made it versatile and low cost. The technology also offers some special features, such as interference immunity, multi-access communications, and accurate range finding. A number of areas which need cooperative communications can benefit from this technology, including automated highway systems (AHS) and military systems. This paper describes the basics of UWB technology, provides background on the technology, and outlines some possible applications of UWB.
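The bandwidth-to-center-frequency ratio mentioned above is the fractional bandwidth. A minimal sketch, using an illustrative 1-5 GHz occupancy:

```python
def fractional_bandwidth(f_low, f_high):
    """Fractional bandwidth 2*(fH - fL)/(fH + fL), as a percentage.
    Conventional narrowband systems sit at a few percent; UWB signals
    can exceed 100%."""
    return 200.0 * (f_high - f_low) / (f_high + f_low)

fb = fractional_bandwidth(1e9, 5e9)   # a pulse occupying 1-5 GHz: ~133%
```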
This paper introduces a method for measuring odometry errors in mobile robots and for expressing these errors quantitatively. When measuring odometry errors, one must distinguish between (1) systematic errors, which are caused by kinematic imperfections of the mobile robot (for example, unequal wheel-diameters), and (2) non-systematic errors, which may be caused by wheel slippage or irregularities of the floor. Systematic errors are a property of the robot itself, and they stay almost constant over prolonged periods of time, while non-systematic errors are a function of the properties of the floor. Our method, called the University of Michigan benchmark test (UMBmark), is especially designed to uncover certain systematic errors that are likely to compensate for each other (and thus, remain undetected) in less rigorous tests. This paper explains the rationale for the UMBmark procedure and explains the procedure in detail. Experimental results from different mobile robots are also presented and discussed. Furthermore, the paper proposes a method for measuring non-systematic errors, called extended UMBmark. Although the measurement of non-systematic errors is less useful because it depends strongly on the floor characteristics, one can use the extended UMBmark test for comparison of different robots under similar conditions.
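The UMBmark bookkeeping can be sketched as follows: after several clockwise (cw) and counterclockwise (ccw) runs of a square path, the centers of gravity of the return-position error clusters yield the systematic-error measure E_max,syst = max(|cg_cw|, |cg_ccw|). The error values below are illustrative, not measured data.

```python
import math

def center_of_gravity(errors):
    """Mean (x, y) of a cluster of return-position errors."""
    n = len(errors)
    return (sum(e[0] for e in errors) / n, sum(e[1] for e in errors) / n)

def e_max_syst(cw_errors, ccw_errors):
    """UMBmark measure of systematic odometry error: the larger distance
    of the two cluster centers of gravity from the origin."""
    cg_cw = center_of_gravity(cw_errors)
    cg_ccw = center_of_gravity(ccw_errors)
    return max(math.hypot(*cg_cw), math.hypot(*cg_ccw))

# Illustrative return-position errors (m) from 5 runs in each direction.
cw = [(0.30, 0.52), (0.28, 0.50), (0.32, 0.49), (0.31, 0.51), (0.29, 0.48)]
ccw = [(-0.29, 0.51), (-0.31, 0.49), (-0.30, 0.50), (-0.28, 0.52), (-0.32, 0.48)]
E = e_max_syst(cw, ccw)   # cluster centers (+/-0.30, 0.50) -> about 0.58 m
```

Using the cluster centers rather than individual runs is what lets mutually compensating errors show up: they shift the whole cluster even when single runs look acceptable.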
Lateral vehicle control is a nonlinear control problem. Because many uncertain factors exist (e.g., imprecision in vehicle modeling and information measurement), traditional control methods based on a precise mathematical model are of limited use in designing a controller for lateral vehicle control. During the past several years, fuzzy control has emerged as one of the most active research areas for dealing with nonlinear control problems and has produced many achievements. We designed a fuzzy logic controller for road following by an autonomous land vehicle. The main sensor we used is a video camera that provides environment information around the vehicle. We have implemented this controller in simulation; the controller drove the vehicle to follow a curved road with good tracking accuracy. Compared with a PID controller, the fuzzy logic controller does not need a precise vehicle model and has better adaptability to parameter variation of the vehicle.
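A minimal sketch of a fuzzy road-following rule base, assuming a single input (lateral offset from the lane center) with triangular membership functions and a weighted-average defuzzification; the paper's controller, inputs, and rule base are not specified here, so everything below is an illustrative assumption:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(offset):
    """Steering angle (deg, + = steer left) from lateral offset
    (m, + = right of the lane center); weighted average of rule votes."""
    rules = [
        (tri(offset, -2.0, -1.0, 0.0), -10.0),  # left of center: steer right
        (tri(offset, -1.0, 0.0, 1.0), 0.0),     # centered: hold course
        (tri(offset, 0.0, 1.0, 2.0), 10.0),     # right of center: steer left
    ]
    num = sum(mu * u for mu, u in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0

u = fuzzy_steer(0.5)   # half a meter right of center: 5.0 deg left
```

The overlapping memberships give a smooth interpolation between rules, which is where the robustness to model uncertainty comes from.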
A goal of the surrogate semi-autonomous vehicle (SSV) program is to have multiple vehicles navigate autonomously and cooperatively with other vehicles. This paper describes the process and tools used in porting UGV/SSV (unmanned ground vehicle) autonomous mobility and target recognition algorithms from a SISD (single instruction single data) processor architecture (i.e., a Sun SPARC workstation running C/UNIX) to a MIMD (multiple instruction multiple data) parallel processor architecture (i.e., PARAGON-a parallel set of i860 processors running C/UNIX). It discusses the gains in performance and the pitfalls of such a venture. It also examines the merits of this processor architecture (based on this conceptual prototyping effort) and programming paradigm to meet the final SSV demonstration requirements.
A visual target designation and tracking system is being developed within the context of the Autonomous Scout Rotorcraft Testbed Project at Georgia Tech. This paper describes both the algorithms and the hardware being used for this purpose by the Mission Equipment Package Technology Area Team. Preliminary results using two simple tracking algorithms are presented.
Planetary space exploration by unmanned missions strongly relies on automatic navigation methods. Computer vision has been recognized as a key to the feasibility of robust navigation, landing site identification and hazard avoidance. We present a scenario that uses computer vision methods for the early identification of landing spots, emphasizing the phase between ten kilometers from ground and the identification of the lander position relative to the selected landing site. The key element is a robust matching procedure between the elevation model (and imagery) acquired during orbit, and ground features observed during approach to the desired landing site. We describe how (1) preselection of characteristic landmarks reduces the computational efforts, and (2) a hierarchical data structure (pyramid) on graylevels and elevation models can be successfully combined to achieve a robust landing and navigation system. The behavior of such a system is demonstrated by simulation experiments carried out in a laboratory mock-up. This paper follows up previous work we have performed for the Mars mission scenario, and shows relevant changes that emerge in the Moon mission case of the European Space Agency.
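The hierarchical (pyramid) matching idea can be sketched as coarse-to-fine template search: locate a landmark at the coarsest level exhaustively, then refine only within a small window at each finer level. The two-level graylevel example below is an illustrative assumption; the actual system also combines elevation models.

```python
import numpy as np

def downsample(img):
    """Next pyramid level by simple decimation (a real system would smooth)."""
    return img[::2, ::2]

def best_match(img, tpl, center, window):
    """Best (row, col) placement of tpl in img within +/- window of center,
    scored by sum of squared differences."""
    h, w = tpl.shape
    best, best_rc = None, center
    r0, c0 = center
    for r in range(max(0, r0 - window), min(img.shape[0] - h, r0 + window) + 1):
        for c in range(max(0, c0 - window), min(img.shape[1] - w, c0 + window) + 1):
            ssd = float(np.sum((img[r:r + h, c:c + w] - tpl) ** 2))
            if best is None or ssd < best:
                best, best_rc = ssd, (r, c)
    return best_rc

rng = np.random.default_rng(1)
img = rng.random((64, 64))
tpl = img[40:48, 20:28].copy()   # a "landmark" whose true position is (40, 20)

# Coarse level: wide search at quarter resolution.
img2, tpl2 = downsample(img), downsample(tpl)
r, c = best_match(img2, tpl2, (img2.shape[0] // 2, img2.shape[1] // 2), 32)
# Fine level: refine near the upscaled coarse estimate.
r, c = best_match(img, tpl, (2 * r, 2 * c), 2)   # lands on (40, 20)
```

The fine-level search touches only a few dozen placements instead of thousands, which is the computational saving the pyramid structure buys.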
One of the most important fields utilizing sensors is robotics, for robot navigation, target identification, and object grasping by an end-effector. To improve sensor performance in the identification of targets of almost the same size, we propose a sensor design incorporating both matched frequency hopping (MFH) and direct sequence (DS) spread spectrum techniques. These techniques are also extended to provide capabilities for the integration and/or fusion of information provided by multiple sensors. The results of a simulation study are presented, showing the effectiveness of incorporating the MFH/SS technique in the sensor design for enhancing target identification for autonomous mobile robots (AMRs). Also illustrated are the capabilities of the modified techniques, matched frequency hopping combined with angle diversity (MFH/AD) and cumulative matched frequency hopping (CMFH), in the integration and/or fusion of information provided by multiple sensors.
The work presented in this paper is part of a research project on a multisensor integrated vision system and sensor fusion algorithms for the navigation of an autonomous mobile robot equipped with a laser range finder (LRF) and a color CCD camera. To combine the complementary information from the LRF and the color camera, a fusion algorithm is designed based on the Dempster-Shafer theory of evidence (DSTE). Strictly, DSTE applies only to independent evidence, but our fusion algorithm can deal with dependent evidence, which matches the dependent character of mobile robot sensor information. Finally, the system and the algorithm have been tested in real environments, and their effectiveness has been demonstrated.
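For reference, Dempster's rule of combination, the core of DSTE, can be sketched as follows; the frame of discernment and the mass assignments below are illustrative assumptions, not the paper's actual sensor models:

```python
def combine(m1, m2):
    """Dempster's rule: m(C) proportional to the sum of m1(A)*m2(B) over
    all A, B with A intersect B = C; mass on the empty set (conflict)
    is renormalized away."""
    out = {}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            c = a & b
            if not c:
                conflict += pa * pb
            else:
                out[c] = out.get(c, 0.0) + pa * pb
    k = 1.0 - conflict
    return {c: v / k for c, v in out.items()}

O, F = frozenset({"obstacle"}), frozenset({"free"})
theta = O | F                              # the whole frame (ignorance)
m_lrf = {O: 0.6, theta: 0.4}               # illustrative LRF evidence
m_camera = {O: 0.5, F: 0.2, theta: 0.3}    # illustrative camera evidence
m = combine(m_lrf, m_camera)               # belief in "obstacle" rises to ~0.77
```

This plain rule assumes independent evidence; handling the dependence between sensors is precisely the extension the paper claims.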
Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of a modular autonomous mobile robot controller. The advantages of a modular system are related to portability and the fact that any vehicle can become autonomous with minimal modifications. A mobile robot test-bed has been constructed using a golf cart base. The cart has full speed control, with guidance provided by a vision system and obstacle avoidance using ultrasonic sensor systems. The speed and steering control are supervised by a 486 computer through a 3-axis motion controller. The obstacle avoidance system is based on a micro-controller interfaced with six ultrasonic transducers. The micro-controller independently handles all timing and distance calculations and sends a steering angle correction back to the computer via the serial line. This design yields a portable, independent system in which even computer communication is not necessary. Vision guidance is accomplished with a CCD camera with a zoom lens. The data is collected through a commercial tracking device, which communicates the X, Y coordinates of the lane marker to the computer. Testing of these systems yielded positive results, showing that at five mph the vehicle can follow a line and at the same time avoid obstacles. This design, in its modularity, creates a portable autonomous controller applicable to any mobile vehicle with only minor adaptations.
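The micro-controller's distance calculation is time-of-flight at the speed of sound; a hedged sketch, with a toy steering correction that turns away from the nearer echo (the gain, clearance, and sign convention are illustrative assumptions):

```python
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 C

def echo_distance(round_trip_s):
    """Range in meters from an ultrasonic round-trip echo time in seconds."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def steer_correction(left_m, right_m, gain=15.0, clearance=1.5):
    """Steering correction in degrees (+ = steer right); zero when both
    sides are clear beyond the clearance distance."""
    left = min(left_m, clearance)
    right = min(right_m, clearance)
    return gain * (right - left) / clearance

d = echo_distance(0.01)          # a 10 ms round trip: 1.715 m
c = steer_correction(0.5, 2.0)   # obstacle 0.5 m to the left: steer right
```

Keeping this arithmetic on the micro-controller, and sending only the final correction over the serial line, is what makes the avoidance module self-contained.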
One of the options for students in mechanical engineering at West Virginia University is to participate in a 'robot design' course for their capstone design experience. For the past two years, the project has involved the construction of an autonomous mobile robot for entry in an international competition. This paper describes the design of the 1994-5 robot, reports on its status and offers observations on both the design and some of the pedagogical aspects of the class.
We have developed a pair of autonomous mobile robots that have been programmed to navigate benign, though unstructured, natural terrain. The impetus behind our creations is to perform research on mobile robot autonomy, and to test our results by competing in the annual AUVS (Association for Unmanned Vehicle Systems) ground vehicle competition.
The Colorado School of Mines (CSM) entry placed fourth in the 1995 International Unmanned Ground Robotics Competition sponsored by the Association for Unmanned Vehicle Systems (AUVS). Clementine 2, a battery-powered children's jeep outfitted with a 100 MHz Pentium field computer, a camcorder, and a panning ultrasonic range finder, served as the platform. The objectives of the CSM team were to gain familiarity with the CSM architecture by applying it to a well-defined problem, to evaluate existing computer-vision-based road-following techniques, and to gain practical experience in using multiple sensing modalities. The entry used the behavioral portion of the CSM hybrid deliberative/reactive architecture, which divided robot activities into four strategic and tactical behaviors: vision-based follow-path, ultrasonic-based avoid-obstacle, pan-camera, and speed-control using inclinometers. This paper details the motivation behind the CSM entry, the approach taken, and lessons learned.
The hardware, software, and mechanical design concepts implemented on the University of Colorado's RoboCar are examined. The general design model and philosophy are reviewed, the actual physical implementation is discussed, and suggestions for improving performance are provided.
This paper describes the results of a feasibility analysis performed on two different structured light system designs and the image processing algorithms they require for dent detection and localization. The impact of each structured light system is analyzed in terms of its mechanical realization and the complexity of the image processing algorithms required for robust dent detection. The two design alternatives considered consist of projecting vertical or horizontal laser stripes on the drum surface. The first alternative produces straight lines in the image plane and requires scanning the drum surface horizontally, whereas the second produces conic curves on the camera plane and requires scanning the drum surface vertically. That is, the first alternative favors image processing over mechanical realization, while the second favors mechanical realization over image processing. The results from simulated and real structured light systems are presented, along with their major advantages and disadvantages for dent detection. The paper concludes with the lessons learned from experiments with real and simulated structured light system prototypes.
Longitudinal control of vehicles to maintain some desirable headway between vehicles is one of the most important control issues in advanced vehicle control systems (AVCS) for Automated Highway Systems (AHS). This paper describes the design and simulation results of a nonlinear robust full-state feedback control design based on the sliding mode technique for the automatic maintenance of headway between vehicles. The longitudinal dynamic model of the system is nonlinear, time varying, and has functional and parametric uncertainties. These uncertainties are effectively addressed by the sliding mode controller (SMC). The problem of chattering in sliding mode is eliminated by introducing a boundary layer.
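A minimal sketch of a sliding-mode law with a boundary layer, assuming a sliding surface s = ė + λe on the headway error e and the common chattering fix of replacing sign(s) with the saturation sat(s/φ); gains and the error convention (e > 0 meaning too close) are illustrative assumptions:

```python
def sat(x):
    """Saturation: the boundary-layer replacement for sign(x)."""
    return max(-1.0, min(1.0, x))

def smc_accel(e, e_dot, lam=1.0, k=2.0, phi=0.5):
    """Commanded acceleration (m/s^2) from headway error e (m, + = too
    close) and its rate e_dot (m/s); s = e_dot + lam*e is the sliding
    surface and phi the boundary-layer width."""
    s = e_dot + lam * e
    return -k * sat(s / phi)

a1 = smc_accel(e=5.0, e_dot=1.0)   # far outside the layer: brake at -k
a2 = smc_accel(e=0.1, e_dot=0.0)   # inside the layer: proportional braking
```

Outside the boundary layer the control switches at full authority, as in classical sliding mode; inside it the law becomes linear in s, which is what eliminates chattering.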
This paper describes the setup for advanced vehicle control systems (AVCS) experiments in the flexible low-cost automated scaled highway (FLASH) laboratory being developed at Virginia Tech. The laboratory is a proposed 1/15th-scale hardware working model of Automated Highway Systems (AHS). The vehicles are equipped with ultrasonic sensors for longitudinal guidance. They are controlled by HC11 microprocessor boards and have a wireless two-way communication infrastructure for vehicle-highway communication. This paper describes the hardware, software, and design issues for the experiments.