Machine Learning (ML) and Artificial Intelligence (AI) have increased automation potential within defense applications such as border protection, compound security, and surveillance. Advances in low size, weight, and power (SWaP) computing platforms and unmanned aerial systems (UAS) have enabled autonomous systems to meet the critical needs of future defense systems. Recent academic advances in deep-learning-aided computer vision have yielded impressive results on object detection and recognition, capabilities necessary to enable autonomy in defense applications. These advances, often open-sourced, enable the opportunistic integration of state-of-the-art (SOTA) algorithms. However, these systems require a large amount of object-relevant data to transfer from general academic domains to more relevant situations. Additionally, UAS require costly verification and validation of autonomy logic. Together, these challenges drive high costs for both training data generation and field autonomy integration and testing. To address these challenges, in conjunction with partners, Elbit America has developed a multipurpose synthetic simulation environment capable of generating synthetic training data and of prototyping, verifying, and validating autonomous distributed behaviors. We integrated a thermal modeling capability into Unreal Engine to create realistic training data by enabling the real-time simulation of SWIR, MWIR, and LWIR sensors. This radiometrically correct sensor model enables simulation-based training data generation for our object recognition and classification pipeline, called Rapid Algorithm Development and Deployment (RADD). Several drones were instantiated using emulated flight controllers to enable end-to-end autonomy training and development before hardware availability.
Herein, we present an overview of the simulation environment and its relevance to detection, classification, and distributed autonomous decision-making.
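The radiometric fidelity described above rests on standard blackbody physics. The sketch below is an illustration of the underlying computation, not Elbit's actual sensor model: it uses Planck's law to compare the in-band radiance a 300 K scene delivers to MWIR (3-5 µm) versus LWIR (8-14 µm) sensors, which is why LWIR is the workhorse band for ambient-temperature targets.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def spectral_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance per Planck's law, W * sr^-1 * m^-3."""
    num = 2.0 * H * C**2 / wavelength_m**5
    return num / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def band_radiance(lo_m, hi_m, temp_k, steps=1000):
    """Numerically integrate spectral radiance over a sensor band (trapezoid rule)."""
    dlam = (hi_m - lo_m) / steps
    total = 0.0
    for i in range(steps):
        a = lo_m + i * dlam
        b = a + dlam
        total += 0.5 * (spectral_radiance(a, temp_k) + spectral_radiance(b, temp_k)) * dlam
    return total  # W * sr^-1 * m^-2

# In-band radiance of a 300 K blackbody for two common thermal bands.
lwir = band_radiance(8e-6, 14e-6, 300.0)  # LWIR, 8-14 um
mwir = band_radiance(3e-6, 5e-6, 300.0)   # MWIR, 3-5 um
print(f"LWIR: {lwir:.1f} W/sr/m^2, MWIR: {mwir:.1f} W/sr/m^2")
```

A full sensor model would further apply surface emissivity, atmospheric transmission, optics, and detector response on top of this blackbody term.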
Machine Learning (ML) and Artificial Intelligence (AI) have led to an increase in automation potential within defense applications such as border protection, compound security, and surveillance. Recent academic advances in deep-learning-aided computer vision have yielded impressive results on object detection and recognition, capabilities necessary to increase automation in defense applications. These advances are often open-sourced, enabling the opportunistic integration of state-of-the-art (SOTA) algorithms into real systems. However, these academic achievements do not translate easily to engineered systems. Academic work often evaluates a single capability with metrics such as accuracy or F1 score, without consideration of system-level performance, how the algorithms must integrate, or what level of computational performance is required. An engineered system, by contrast, is developed as a system of algorithms that must work in conjunction with one another under deployment constraints. This paper describes a system, called Rapid Algorithm Design and Deployment for Artificial Intelligence (RADD-AI™), developed to enable the rapid development of systems of algorithms that incorporate these advances in a modular fashion using networked Application Programming Interfaces (APIs). This inherent modularity avoids the assumption of monolithic integration within a single ecosystem, an assumption that creates vendor lock-in and ignores the reality that frameworks are usually targeted toward different types of problems and toward learning vs. inference capabilities. RADD-AI makes no such assumption: if a different framework solves a subset of the system more elegantly, it can be integrated into the larger pipeline. RADD-AI enables the integration of state-of-the-art ML into deployed systems while also supporting the necessary ML engineering tasks, such as transfer learning, to operationalize academic achievements.
To motivate how RADD-AI enables applications of ML/AI, we detail how the system is used to implement a defense application, a border surveillance capability, via the integration of detection, recognition, and tracking algorithms. This capability, implemented and developed within RADD-AI, utilizes several SOTA models and traditional algorithms across multiple frameworks, bridging the gap from academic achievement to fielded system.
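The modular composition described above can be sketched as follows. This is a minimal in-process illustration under assumed interfaces: the stage names, message fields, and stub logic are hypothetical stand-ins, not RADD-AI's actual networked services. The key idea shown is that every stage consumes and produces a serializable message, the same contract each algorithm would expose over a networked API, so stages backed by different ML frameworks can be swapped or recombined.

```python
import json
from typing import Callable, Dict, List

# A stage is any callable that maps one JSON-serializable message to another.
Stage = Callable[[Dict], Dict]

def detect(msg: Dict) -> Dict:
    # Stand-in for a SOTA detector running in one framework.
    msg["detections"] = [{"bbox": [10, 20, 50, 60], "score": 0.91}]
    return msg

def classify(msg: Dict) -> Dict:
    # Stand-in for a recognizer, possibly running in a different framework.
    for d in msg.get("detections", []):
        d["label"] = "vehicle"
    return msg

def track(msg: Dict) -> Dict:
    # Stand-in for a traditional (non-learned) tracking algorithm.
    for i, d in enumerate(msg.get("detections", [])):
        d["track_id"] = i
    return msg

def run_pipeline(stages: List[Stage], frame_meta: Dict) -> Dict:
    # Serialize between stages to enforce the network-style message contract.
    msg = frame_meta
    for stage in stages:
        msg = json.loads(json.dumps(stage(msg)))
    return msg

result = run_pipeline([detect, classify, track], {"frame_id": 0})
print(result)
```

Because the only coupling between stages is the message schema, replacing `detect` with a model from another ecosystem requires no change to the rest of the pipeline, which is the vendor-lock-in mitigation the abstract describes.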
Given that many readily available datasets consist of large amounts of unlabeled data,1 unsupervised learning methods are an important component of many data-driven applications. In many instances, ground-truth labels may be unavailable or obtainable only at great expense. As a result, there is an acute need to understand and interpret unlabeled datasets as thoroughly as possible. In this article, we examine the effectiveness of learned deep embeddings, evaluated via internal clustering metrics, on a dataset of unlabeled StarCraft 2 game replays. The results of this work indicate that deep embeddings provide a promising basis for clustering and interpreting player behavior in complex game domains.
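Internal clustering metrics score a clustering using only the data and the assigned labels, with no ground truth, which is what makes them usable on unlabeled replay corpora. The sketch below, assuming scikit-learn and synthetic Gaussian vectors standing in for the learned replay embeddings (the encoder, dimensionality, and cluster count are illustrative), shows the typical evaluation loop: cluster at several values of k and compare silhouette and Davies-Bouldin scores.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Synthetic 64-d "embeddings": three well-separated Gaussian blobs stand in
# for representations a trained encoder would produce from game replays.
rng = np.random.default_rng(0)
embeddings = np.vstack([
    rng.normal(loc=c, scale=0.5, size=(100, 64)) for c in (-3.0, 0.0, 3.0)
])

# Sweep k and score each clustering with internal (label-free) metrics.
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    sil = silhouette_score(embeddings, labels)      # higher is better, in [-1, 1]
    dbi = davies_bouldin_score(embeddings, labels)  # lower is better
    print(f"k={k}: silhouette={sil:.3f}, davies-bouldin={dbi:.3f}")
```

In this synthetic setting the metrics favor k=3, matching the generating structure; on real embeddings the same sweep provides evidence for how many behavioral modes the representation separates.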