Proceedings Article | 22 May 2020
KEYWORDS: Visualization, Visual analytics, Sensors, Picosecond phenomena, Machine learning, Data modeling, Analytics, Computing systems, Control systems, Systems modeling
The U.S. Army envisions fighting and winning future wars in congested and contested environments and multi-domain battles, where revolutionary capabilities for network-centric warfare (NCW) are essential. NCW is characterized by the ability of geographically dispersed forces to attain a high level of shared battle-space awareness that can be exploited to achieve strategic, operational, and tactical objectives by autonomously linking people, platforms, weapons, sensors, and decision aids into a single network. Future battlefield networks will generate a massive volume of data that can exceed manageable quantities. In a multi-domain battle, novel technologies are specifically required for real-time decision-making based on large amounts of heterogeneous as well as sparse, noisy, and ill-defined data in extremely uncertain environments. Additionally, humans have at times become overly comfortable with the information brought in by our sensing technologies. As a result, the command architecture, built on a massive web of information sources, becomes more susceptible to potentially catastrophic machine-human decision-making conflicts and more vulnerable to incoming cyber threats, including adversaries' deception, interruption, and obscuration, which can eventually introduce its own sources of decision-making failure. In this paper, the researchers present validation results of a conceptualized artificial intelligence-based visual analytics framework. The researchers' ultimate goal is to integrate the mature technology into situation awareness technology for local commands and global logistics centers, enabling effective logistic command and control of aviation platforms and autonomous systems while they operate in an expeditionary multi-domain environment.