In manned-unmanned teaming scenarios, autonomous unmanned robotic platforms with advanced sensing and compute capabilities will have the ability to perform online change detection. This change detection will consist of metric comparisons of sensor-based spatial information against information collected previously, for the purpose of identifying changes in the environment that could indicate anything from adversarial activity to changes caused by natural phenomena that could affect the mission. The previously collected information may be drawn from a variety of sources, such as satellites, IoT devices, other manned-unmanned teams, or the same robotic platform on a prior mission. While these robotic platforms will be superior to their human operators at detecting changes, the human teammates will, for the foreseeable future, exceed the abilities of autonomy at interpreting those changes, particularly with respect to mission relevance and situational context. For this reason, the ability of a robot to intelligently and properly convey such information to maximize human understanding is essential. In this work, we build upon previous work that presented a mixed reality interface for conveying change detection information from an autonomous robot to a human. We discuss factors affecting human understanding of augmented reality visualization of detected changes, based upon multiple user studies in which a user interacts with this system. We believe our findings will inform the creation of AR-based communication strategies for manned-unmanned teams performing multi-domain operations.
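To make the idea of a metric comparison concrete, the sketch below shows one simple way such a comparison could be performed: voxelizing a previously collected point cloud and a newly sensed one, then differencing the occupied-voxel sets. This is only an illustrative sketch under assumed names and parameters (voxel size, helper functions), not the system described above.

```python
# Minimal sketch (not the authors' implementation): voxel-occupancy
# differencing between a prior map and a newly sensed point cloud.
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float) -> set:
    """Map an (N, 3) point cloud to the set of occupied voxel indices."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    return set(map(tuple, idx))

def detect_changes(prior: np.ndarray, current: np.ndarray, voxel_size: float = 0.25):
    """Return voxels newly occupied (appeared) and newly empty (disappeared)."""
    prior_occ = voxelize(prior, voxel_size)
    curr_occ = voxelize(current, voxel_size)
    appeared = curr_occ - prior_occ      # structure added since the prior pass
    disappeared = prior_occ - curr_occ   # structure removed since the prior pass
    return appeared, disappeared

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prior = rng.uniform(0, 10, size=(5000, 3))
    current = np.vstack([prior[:4000], rng.uniform(0, 10, size=(1000, 3))])
    appeared, disappeared = detect_changes(prior, current)
    print(f"{len(appeared)} voxels appeared, {len(disappeared)} disappeared")
```

A fielded system would additionally register the two clouds into a common frame and reason about free space and sensor noise before declaring a change.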
Robots, equipped with powerful modern sensors and perception algorithms, have enormous potential to use what they perceive to provide enhanced situational awareness to their human teammates. One such type of information is the set of changes the robot detects in the environment since a previous observation. A major challenge for sharing this information from the robot to the human is the interface: how to properly aggregate change detection data, present it succinctly for the human to interpret, and allow the human to interact with the detected changes, e.g., to label or discard them, or to task the robot to investigate further, for the purposes of enhanced situational awareness and decision making. In this work we address this challenge through the design of an augmented reality interface for aggregating, displaying, and interacting with changes detected by an autonomous robot teammate. We believe the outcomes of this work could have significant applications for Soldiers interacting with any type of high-volume, autonomously generated information in Multi-Domain Operations.
One of the most significant challenges for the emerging operational environment addressed by Multi-Domain Operations (MDO) is the exchange of information between personnel in operating environments. Making information available for leveraging at the appropriate echelon is essential for convergence, a key tenet of MDO. Emergent cross-reality (XR) technologies are poised to have a significant impact on the convergence of the information environment. These powerful technologies present an opportunity not only to enhance the situational awareness of individuals at the "local" tactical edge and the decision-maker at the "global" mission command (C2), but to intensely and intricately bridge the information exchanged across all echelons. Complementarily, the increasing use of autonomy in MDO, from autonomous robotic agents in the field to decision-making assistance for C2 operations, also holds great promise for human-autonomy teaming to improve performance at all echelons. Traditional research examines, at most, a small subset of these problems. Here, we envision a system in which human-robot teams operating at the local edge communicate with human-autonomy teams at the global operations level. Both teams use a mixed reality (MR) system for visualization and interaction with a common operating picture (COP) to enhance situational awareness, sensing, and communication, but with highly different purposes and considerations. By creating a system that bridges across echelons, we are able to examine these considerations to determine their impact on information shared bi-directionally between the global (C2) and local (tactical) levels, in order to understand and improve autonomous agents teamed with humans at both levels. We present a prototype system that includes an autonomous robot operating with a human teammate, sharing sensory data and action plans with, and receiving commands and intelligence information from, a tactical operations team commanding from a remote location. We examine the challenges and considerations in creating such a system and present initial findings.
Currently fielded small unmanned ground vehicles (SUGVs) are operated via teleoperation. This method of operation requires a high level of operator involvement within, or nearly within, line of sight of the robot. As advances are made in autonomy algorithms, capabilities such as automated mapping can be developed to allow SUGVs to provide situational awareness at an increased standoff distance while simultaneously reducing operator involvement.

To realize these goals, it is paramount that the data produced by the robot be not only accurate, but also presented in an intuitive manner to the robot operator. The focus of this paper is how to effectively present map data produced by a SUGV in order to drive the design of a future user interface. The effectiveness of several 2D and 3D mapping capabilities was evaluated by presenting a collection of pre-recorded data sets of a SUGV mapping a building in an urban environment to a user panel of Soldiers. The data sets were presented to each Soldier in several different formats to evaluate multiple factors, including update frequency and presentation style. Once all of the data sets had been presented, a survey was administered. The questions in the survey were designed to gauge the overall usefulness of the mapping algorithm presentations as an information-generating tool. This paper presents the development of this test protocol along with the results of the survey.
Autonomous systems operating in militarily relevant environments are valuable assets due to the increased situational awareness they provide to the Warfighter. To further advance the current state of these systems, a collaborative experiment was conducted as part of the Safe Operations of Unmanned Systems for Reconnaissance in Complex Environments (SOURCE) Army Technology Objective (ATO). We present the findings from this large-scale experiment, which spanned several research areas, including 3D mapping and exploration, communications maintenance, and visual intelligence.

For 3D mapping and exploration, we evaluated loop closure using the Iterative Closest Point (ICP) algorithm. To improve current communications systems, the limitations of an existing mesh network were analyzed. In addition, camera data from a Microsoft Kinect was used to test autonomous stairway detection and modeling algorithms. This paper details the experiment procedure and the preliminary results for each of these tests.
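The loop-closure evaluation above depends on scan alignment with ICP. As a rough, self-contained illustration of point-to-point ICP (not the SOURCE ATO implementation; iteration count and tolerance are arbitrary placeholders), the following sketch aligns two point clouds using NumPy and SciPy.

```python
# Basic point-to-point ICP sketch (illustrative only, not the SOURCE ATO code).
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A: np.ndarray, B: np.ndarray):
    """Least-squares rigid transform (R, t) mapping points A onto points B."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # fix a reflection if one appears
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(source: np.ndarray, target: np.ndarray, iters: int = 30, tol: float = 1e-6):
    """Align `source` to `target`; returns the accumulated transform and mean residual."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src)                 # nearest-neighbor correspondences
        R, t = best_fit_transform(src, target[idx])  # best rigid fit for this pairing
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err
```

In a loop-closure check, the mean residual after convergence would typically be thresholded to decide whether a candidate match between two visits to the same place is accepted.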
Tactical situational awareness in unstructured and mixed indoor/outdoor scenarios is needed for urban combat as well as rescue operations. Two of the key functionalities needed by robot systems to operate in an unknown environment are the ability to build a map of the environment and to determine their position within that map. In this paper, we present a strategy to build dense maps and to automatically close loops from 3D point clouds; this has been integrated into a mapping system dubbed OmniMapper. We present both the underlying system and experimental results from a variety of environments, including office buildings, military training facilities, and large-scale mixed indoor and outdoor environments.
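Automatic loop closing in a system like the one described corrects accumulated drift by adding a constraint between revisited poses and re-optimizing the trajectory. The toy 2D pose-graph below illustrates only that general idea; it is not OmniMapper's implementation, and the drift values and the loop-closure constraint are fabricated for the example.

```python
# Toy 2D pose-graph sketch: a loop-closure constraint corrects odometry drift.
# Illustrative only; not OmniMapper's implementation.
import numpy as np
from scipy.optimize import least_squares

# Drifting odometry measurements for a square trajectory (dx, dy, dtheta per step).
odometry = [np.array([1.0, 0.0, np.pi / 2]) + np.array([0.05, 0.0, 0.02])
            for _ in range(4)]
# Scan matching (e.g., ICP) reports that pose 4 coincides with pose 0 after one
# full turn; this is the loop-closure constraint.
loop_closure = (4, 0, np.array([0.0, 0.0, 2.0 * np.pi]))

def compose(pose, delta):
    """Apply the relative motion `delta` in the frame of `pose` (SE(2))."""
    x, y, th = pose
    dx, dy, dth = delta
    return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                     y + dx * np.sin(th) + dy * np.cos(th),
                     th + dth])

def residuals(flat):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                                   # anchor pose 0 at the origin
    for k, delta in enumerate(odometry):               # odometry constraints
        res.append(compose(poses[k], delta) - poses[k + 1])
    i, j, meas = loop_closure                          # loop-closure constraint
    res.append((poses[i] - poses[j]) - meas)
    return np.concatenate(res)

# Initialize by chaining odometry, then let the optimizer spread the error.
init = [np.zeros(3)]
for delta in odometry:
    init.append(compose(init[-1], delta))
result = least_squares(residuals, np.concatenate(init))
print("final pose before:", np.round(init[-1], 2),
      "after:", np.round(result.x.reshape(-1, 3)[-1], 2))
```

Full SLAM systems use the same principle with richer constraint models, robust error functions, and dedicated optimizers rather than a generic least-squares solver.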