Enabling leaders to act decisively in high-operational-tempo environments is key to achieving decision superiority. Under stressful battlefield conditions with little to no time for communication, it is critical to acquire relevant tactical information quickly to inform decision-making. A potential augmentation to tactical information systems is access to real-time analytics on a unit's operating status and emergent behaviors inferred from soldier-worn sensors or sensors embedded in their kit. Automatic human activity recognition (HAR) has become increasingly achievable in recent years thanks to advances in algorithms and the ubiquity of low-cost yet powerful processors, hardware, and sensors. In this paper, we present weapon-borne sensor measurement acquisition, processing, and HAR approaches to demonstrate Soldier state estimation in a target acquisition and tracking experiment. The Soldier states that were classified include whether the Soldier is resting, tracking a target, transitioning between potential targets, or firing a shot at the target. We implemented multivariate time series classification (TSC) using the SKTime toolkit to perform this task and discuss the performance of various classification methods. We also discuss a framework for efficient transfer of this information to other tactical information systems on the network.
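The paper's data pipeline is not reproduced in the abstract; the following is a minimal sketch of multivariate TSC with the SKTime (sktime) toolkit, assuming fixed-length windows of weapon-borne inertial data labelled with the four Soldier states above. The window shape, channel count, and choice of the ROCKET classifier are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch only: placeholder data stands in for real sensor windows.
# Assumes fixed-length windows of 6-axis weapon-borne IMU data (accel + gyro)
# shaped (n_windows, n_channels, n_timepoints), labelled with Soldier states.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sktime.classification.kernel_based import RocketClassifier

rng = np.random.default_rng(0)
STATES = ["resting", "tracking", "transitioning", "firing"]

# 200 windows, 6 channels, 128 samples each, with random state labels.
X = rng.normal(size=(200, 6, 128))
y = rng.choice(STATES, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# ROCKET is one of several multivariate TSC methods available in sktime;
# the paper compares multiple classifiers, not necessarily this one.
clf = RocketClassifier(num_kernels=2000, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```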
Efforts are underway across the defense and commercial industries to develop cross-reality (XR), multi-user operation centers in which human users can perform their work while aided by intelligent systems. At their core is the objective of accelerating decision-making and improving efficiency and accuracy. However, presenting data to users in an XR, multi-dimensional environment results in a dramatic increase in extraneous information density. Intelligent systems offer a potential mechanism for mitigating information overload while ensuring that critical and anomalous data are brought to the attention of human users in an immersive interface. This paper describes such a prototype system that combines real and synthetic motion sensors which, upon detection of an event, send a captured image for processing by a YOLO cluster. Finally, we describe how a future system can integrate a decision-making component that evaluates the resulting metadata to determine whether to inject the results into an XR environment for presentation to human users.
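The sensor and cluster interfaces are not specified in the abstract; the sketch below illustrates the general event-to-detection pattern with a single-node stand-in, assuming the ultralytics Python package for YOLO inference. The MotionEvent fields, image path, and model weights are hypothetical.

```python
# Illustrative sketch only: a single-node stand-in for forwarding an image
# captured on a motion event to a YOLO model and collecting detection metadata.
from dataclasses import dataclass
from ultralytics import YOLO  # assumes the ultralytics package is installed

@dataclass
class MotionEvent:
    sensor_id: str      # hypothetical field names
    image_path: str

model = YOLO("yolov8n.pt")  # any pretrained YOLO weights

def process_event(event: MotionEvent) -> list[dict]:
    """Run detection on the event's image and return detection metadata."""
    detections = []
    for r in model(event.image_path):
        for box in r.boxes:
            detections.append({
                "sensor_id": event.sensor_id,
                "class": r.names[int(box.cls)],
                "confidence": float(box.conf),
                "xyxy": [float(v) for v in box.xyxy[0]],
            })
    return detections

# A downstream decision-making component could filter this metadata before
# injecting results into the XR environment for presentation to users.
```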
Organized by experts from across academia, industry, the Federal Labs, and SPIE, this meeting will highlight emerging capabilities in immersive technologies and degraded visual environments as critical enablers to future multi-domain operations.
Decision-making is defined as a process resulting in the selection of a course of action from a number of alternatives based on variables that represent key considerations for the task. This is a complex process whose goal is to generate the "best" course of action given the data and knowledge obtained. As the use of intelligent systems increases, so too does the amount of data to be considered by human analysts and commanders. As the military looks toward the integration of intelligent systems such as smart devices and the Internet of Things, both the devices and the data they produce become important for decision-making in highly dynamic situations. Of critical importance is the uncertainty associated with the data produced by such systems. Any uncertainty must be captured and communicated to aid the decision-making process. Our work focuses on how this process can be investigated to understand and analyze the impact of uncertainty on decision-making in multi-domain operational environments. We conducted user studies and present our results to discuss the presentation of uncertainty within the decision-making cycle for our tasks.
One of the most significant challenges for the emerging operational environment addressed by Multi-Domain Operations (MDO) is the exchange of information between personnel in operating environments. Making information available for leveraging at the appropriate echelon is essential for convergence, a key tenet of MDO. Emergent cross-reality (XR) technologies are poised to have a significant impact on the convergence of the information environment. These powerful technologies present an opportunity not only to enhance the situational awareness of individuals at the "local" tactical edge and the decision-maker at the "global" mission command (C2), but to intensely and intricately bridge the information exchanged across all echelons. Complementarily, the increasing use of autonomy in MDO, from autonomous robotic agents in the field to decision-making assistance for C2 operations, also holds great promise for human-autonomy teaming to improve performance at all echelon levels. Traditional research examines, at most, a small subset of these problems. Here, we envision a system in which human-robot teams operating at the local edge communicate with human-autonomy teams at the global operations level. Both teams use a mixed reality (MR) system for visualization and interaction with a common operating picture (COP) to enhance situational awareness, sensing, and communication, but with highly different purposes and considerations. By creating a system that bridges across echelons, we are able to examine these considerations to determine their impact on information shared bi-directionally, between the global (C2) and local (tactical) levels, in order to understand and improve autonomous agents teamed with humans at both levels. We present a prototype system that includes an autonomous robot operating with a human teammate, sharing sensory data and action plans with, and receiving commands and intelligence information from, a tactical operations team commanding from a remote location. We examine the challenges and considerations in creating such a system and present initial findings.
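The prototype's actual message formats are not published; purely as a hedged illustration, the sketch below shows one way the bi-directional exchange between the local (tactical) human-robot team and the global (C2) team could be structured as JSON-serializable messages feeding a shared COP. All class and field names are assumptions.

```python
# Illustrative sketch only: hypothetical message structures for echelon-bridging
# exchange, not the prototype's actual schema.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class EdgeReport:            # local -> global: sensory data and action plans
    robot_id: str
    position: tuple[float, float, float]
    detections: list[dict] = field(default_factory=list)
    planned_actions: list[str] = field(default_factory=list)
    timestamp: float = field(default_factory=time.time)

@dataclass
class C2Directive:           # global -> local: commands and intelligence
    target_robot: str
    command: str
    intel: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

def to_wire(msg) -> bytes:
    """Serialize a message for transport to the other echelon."""
    return json.dumps(asdict(msg)).encode("utf-8")

# Example exchange; both MR displays would render these into the shared COP.
report = EdgeReport("ugv-01", (34.2, -81.7, 0.0),
                    detections=[{"class": "vehicle", "confidence": 0.87}],
                    planned_actions=["hold_position"])
directive = C2Directive("ugv-01", "proceed_to_waypoint",
                        intel={"waypoint": [34.3, -81.6, 0.0]})
print(to_wire(report))
print(to_wire(directive))
```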
Collective intelligence is generally defined as the emergence and evolution of intelligence derived from the collective and collaborative efforts of several entities, including humans and (dis)embodied intelligent agents. Recent advances in immersive technology have led to cost-effective tools that allow us to study and replicate interactions in a controlled environment. Combined, immersive collective intelligence holds the promise of a symbiotic intelligence that could be greater than the sum of its individual parts. For the military, where the decision-making process is typically characterized by high stress and high consequence, the concept of a distributed, immersive collective intelligence capability is game-changing. Commanders and staff will now be able to remotely immerse themselves in their operational environment with subject matter expertise and advanced analytics. This paper presents the initial steps toward understanding immersive collective intelligence with a demonstration designed to discern how military intelligence analysts benefit from an immersive data visualization.
Head-mounted displays (HMDs) may prove useful for synthetic training and augmentation of military C5ISR decision-making. Motion sickness caused by such HMD use is detrimental, resulting in decreased task performance or total user dropout. The onset of sickness symptoms is often measured using paper surveys, which are difficult to deploy in live scenarios. Here, we demonstrate a new way to track sickness severity using machine learning on data collected from heterogeneous, non-invasive sensors worn by users who navigated a virtual environment while remaining stationary in reality. We discovered that two models, one trained on heterogeneous sensor data and another trained only on electroencephalography (EEG) data, were able to classify sickness severity with over 95% accuracy and were statistically comparable in performance. Greedy feature optimization was used to maximize accuracy while minimizing the feature subspace. We found that, across models, the features with the most weight had previously been reported in the literature as being related to motion sickness severity. Finally, we discuss how models constructed on heterogeneous vs. homogeneous sensor data may be useful in different real-world scenarios.
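The study's feature set and exact optimization procedure are not reproduced here; the sketch below illustrates one common form of greedy feature optimization, forward sequential selection with scikit-learn, on placeholder data standing in for heterogeneous sensor features (e.g., EEG band powers and other physiological measures). The classifier, feature count, and severity labels are assumptions.

```python
# Illustrative sketch only: greedy forward feature selection on placeholder
# data standing in for heterogeneous sensor features labelled by severity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))       # 300 samples x 40 candidate features
y = rng.integers(0, 3, size=300)     # hypothetical sickness-severity classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Greedily add features one at a time, trading a small feature subspace
# against cross-validated classification accuracy.
selector = SequentialFeatureSelector(
    clf, n_features_to_select=8, direction="forward", cv=5, scoring="accuracy"
)
selector.fit(X, y)
selected = np.flatnonzero(selector.get_support())
score = cross_val_score(clf, X[:, selected], y, cv=5, scoring="accuracy").mean()
print(f"selected features: {selected.tolist()}, CV accuracy: {score:.2f}")
```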
Collaborative decision-making remains a significant research challenge that is made even more complicated in real-time or tactical problem contexts. Advances in technology have dramatically improved the ability of computers and networks to support the decision-making process (i.e., intelligence, design, and choice). In the intelligence phase of decision-making, mixed reality (MxR) has shown a great deal of promise through implementations of simulation and training. However, little research has focused on an implementation of MxR to support the entire scope of the decision cycle, let alone collaboratively and in a tactical context. This paper presents a description of the design and initial implementation of the Defense Integrated Collaborative Environment (DICE), an experimental framework for supporting theoretical and empirical research on MxR for tactical decision-making support.