The evolution of robots from tools to teammates will require them to derive meaningful information about the world around them, translate knowledge and skill into effective planning and action based on stated goals, and communicate with human partners in a natural way. Recent advances in foundation models (large pre-trained models such as large language models and visual language models) will help enable these capabilities. We describe how we are using open-vocabulary 3D scene graphs based on foundation models to add scene understanding and natural language interaction to our human-robot teaming research. Open-vocabulary scene graphs enable a robot to build and reason about a semantic map of the environment, as well as answer complex queries about it. We are exploring how semantic scene information can be shared with human teammates and inform context-aware decision making and planning to improve task performance and increase autonomy. We highlight human-robot teaming scenarios involving robotic casualty evacuation and stealthy movement through an environment that could benefit from enhanced scene understanding, describe our approach to enabling this enhanced understanding, and present preliminary results using a one-armed quadruped robot interacting with simplified environments. It is anticipated that advanced perception and planning capabilities provided by foundation models will give robots the ability to better understand their environment, share that information with human teammates, and generate novel courses of action.
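As a rough illustration of the kind of query an open-vocabulary scene graph supports, the Python sketch below stores object nodes with vision-language embeddings and ranks them against a free-text query by cosine similarity. The SceneNode fields and the embed_text stub are hypothetical stand-ins (a real system would call a vision-language model text encoder), not the authors' implementation.

```python
# Rough sketch of an open-vocabulary scene-graph query (illustrative, not the authors' code).
# Object nodes carry feature vectors produced by a vision-language model; `embed_text` is a
# hypothetical stand-in for that model's text encoder.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SceneNode:
    label: str                                     # best-guess class name, e.g. "stretcher"
    position: np.ndarray                           # 3D centroid in the map frame
    feature: np.ndarray                            # unit-norm open-vocabulary embedding
    relations: dict = field(default_factory=dict)  # e.g. {"on_top_of": "table_3"}

def embed_text(query: str, dim: int = 512) -> np.ndarray:
    """Placeholder text encoder; a real system would use a VLM text head (e.g. CLIP)."""
    rng = np.random.default_rng(abs(hash(query)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def query_scene_graph(nodes: list, query: str, top_k: int = 3):
    """Rank object nodes by cosine similarity between the query text and node features."""
    q = embed_text(query)
    scored = [(float(node.feature @ q), node) for node in nodes]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]
```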
Effective human-robot teaming requires human and robot teammates to share a common understanding of the goals of their collaboration. Ideally, a complex task can be broken into smaller components to be performed by team members with defined roles, and the plan of action and assignment of roles can be changed on the fly to accommodate unanticipated situations. In this paper we describe research on adaptive human-robot teaming that uses a playbook approach to team behavior to bootstrap multi-agent collaboration. The goal is to leverage known good strategies for accomplishing tasks, such as from training and operating manuals, to enable humans and robots to “be on the same page” and work from a common knowledge base. Simultaneous and sequential actions are specified using hierarchical text-based plans and executed as behavior trees using finite state machines. We describe a real-time implementation that supports sharing of task status updates through distributed message passing. Tasks related to human-robot teaming for exploration and reconnaissance are explored with teams comprising humans wearing augmented reality headsets and quadruped robots. It is anticipated that shared task knowledge provided by multi-agent playbooks will enable humans and robots to track and predict teammate behavior and promote team transparency, accountability and trust.
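To make the playbook execution model concrete, here is a minimal sketch that runs one play as a sequence of role-assigned steps and publishes a status update after each tick; the class names and the publish callback are illustrative stand-ins for the behavior-tree execution and distributed message passing described above, not the paper's actual code.

```python
# Minimal sketch of executing one text-derived "play" as an ordered set of role-assigned
# steps with shared status updates (illustrative; class names and the publish callback are
# stand-ins for the behavior-tree and message-passing machinery described in the paper).
from enum import Enum

class Status(Enum):
    RUNNING = 1
    SUCCESS = 2
    FAILURE = 3

class Step:
    """One leaf task in a play, assigned to a teammate role (human or robot)."""
    def __init__(self, name, role, action):
        self.name, self.role, self.action = name, role, action

    def tick(self, blackboard):
        return self.action(blackboard)             # action returns a Status

class Sequence:
    """Runs child steps in order, failing fast and reporting progress after every tick."""
    def __init__(self, name, children, publish):
        self.name, self.children, self.publish = name, children, publish
        self.index = 0

    def tick(self, blackboard):
        while self.index < len(self.children):
            child = self.children[self.index]
            status = child.tick(blackboard)
            self.publish({"play": self.name, "step": child.name,
                          "role": child.role, "status": status.name})
            if status is not Status.SUCCESS:       # RUNNING or FAILURE: stop here
                return status
            self.index += 1
        return Status.SUCCESS
```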
As the autonomy of intelligent systems continues to increase, the ability of humans to maintain control over machine behavior, work effectively in concert with them, and trust them, becomes paramount. Ideally, a machine’s plan of action would be accessible to and understandable by human team members, and machine behavior would be modifiable in real time, in the field, to accommodate unanticipated situations. The ability of machines to adapt to new situations quickly and reliably based on both human input and autonomous learning has the potential to enhance numerous human-machine teaming scenarios. Our research focuses on the question, “Can robots become competent and adaptive teammates by emulating human skill acquisition strategies?” In this paper we describe the Robotic Skill Acquisition (RSA) cognitive architecture and show preliminary results of teaming experiments involving a human wearing an augmented reality headset and a quadruped robot performing tasks related to reconnaissance. The goal is to combine instruction and discovery by integrating declarative symbolic AI and reflexive neural network learning to produce robust, explainable and trusted robot behavior, adjustable autonomy, and adaptive human-robot teaming. Humans and robots start with a playbook of modifiable hierarchical task descriptions that encode explicit task knowledge. Neural network based feedback error learning enables human-directed behavior shaping, and reinforcement learning enables discovery of novel subtask control strategies. It is anticipated that modifications to and transitions between symbolic and subsymbolic processing will enable highly adaptive behavior in support of enhanced situational awareness and operational effectiveness of human-robot teams.
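The feedback error learning idea can be sketched in a few lines: the corrective command issued by a feedback controller (or a human teammate) doubles as the training error for a learned feedforward term. The example below is a simplified illustration under that assumption, using a linear model in place of a neural network, and is not the RSA architecture's code.

```python
# Simplified sketch of feedback error learning (illustrative, not the RSA implementation):
# the residual command from a feedback controller, or from a human correction, is treated
# as the training error for a learned feedforward term. A linear model stands in for the
# neural network to keep the example short.
import numpy as np

class FeedbackErrorLearner:
    def __init__(self, n_state, n_cmd, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_cmd, n_state))  # feedforward weights
        self.lr = lr

    def step(self, state, feedback_cmd):
        """Combine the learned feedforward output with the feedback command, then adapt."""
        u_ff = self.W @ state
        u_total = u_ff + feedback_cmd
        # Feedback-error-learning rule: the remaining feedback command is the error signal,
        # so it shrinks as the feedforward term absorbs the corrective behavior.
        self.W += self.lr * np.outer(feedback_cmd, state)
        return u_total
```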
Visual perception has become core technology in autonomous robotics to identify and localize objects of interest to ensure successful and safe task execution. As part of the recently concluded Robotics Collaborative Technology Alliance (RCTA) program, a collaborative research effort among government, academic, and industry partners, a vision acquisition and processing pipeline was developed and demonstrated to support manned-unmanned teaming for Army-relevant applications. The perception pipeline provided accurate and cohesive situational awareness to support autonomous robot capabilities for maneuver in dynamic and unstructured environments, collaborative human-robot mission planning and execution, and mobile manipulation. Development of the pipeline involved a) collecting domain-specific data, b) curating ground-truth annotations, e.g., bounding boxes and keypoints, c) retraining deep networks to obtain updated object detection and pose estimation models, and d) deploying and testing the trained models on ground robots. We discuss the process of delivering this perception pipeline under limited time and resource constraints due to a lack of a priori knowledge of the operational environment. We focus on experiments conducted to optimize the models despite training data that was noisy and contained sparse examples for some object classes. Additionally, we discuss the augmentation techniques used to enhance the data set given its skewed class distributions. These efforts highlight initial work that directly relates to learning and updating visual perception systems quickly in the field under sudden environment or mission changes.
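As one way to picture the class-balancing step, the sketch below oversamples under-represented classes using simple label-preserving augmentations; the chosen augmentations and function names are assumptions for illustration rather than a description of the RCTA pipeline.

```python
# Sketch of oversampling under-represented classes with cheap, label-preserving
# augmentations (illustrative; the specific augmentations and function names are
# assumptions, not the RCTA pipeline).
from collections import Counter
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Horizontal flip and brightness jitter on an HxWxC image."""
    if rng.random() < 0.5:
        image = image[:, ::-1, :]
    gain = rng.uniform(0.8, 1.2)
    return np.clip(image * gain, 0, 255).astype(image.dtype)

def balance_classes(samples, rng=None):
    """samples: list of (image, label) pairs; oversample each class up to the largest count."""
    rng = rng or np.random.default_rng(0)
    counts = Counter(label for _, label in samples)
    target = max(counts.values())
    balanced = list(samples)
    for label, count in counts.items():
        pool = [s for s in samples if s[1] == label]
        for _ in range(target - count):
            image, _ = pool[rng.integers(len(pool))]
            balanced.append((augment(image, rng), label))
    return balanced
```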