The need for labeled data is among the most common and well-known practical obstacles to deploying deep learning algorithms on real-world problems. The current generation of learning algorithms requires a large volume of data labeled according to a static, pre-defined schema. In contrast, humans can quickly learn generalizations from large quantities of unlabeled data and turn these generalizations into classifications using spontaneous labels, often including labels not seen before. We apply a state-of-the-art unsupervised learning algorithm to the noisy and extremely imbalanced xView data set to train a feature extractor that adapts to several tasks: visual similarity search that performs well on both common and rare classes, identifying outliers within a labeled data set, and learning a natural class hierarchy automatically.
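To make the similarity-search task concrete, the sketch below assumes embeddings have already been produced by some feature extractor (the random `gallery` array is a stand-in for real features, and `cosine_search` is a hypothetical helper, not the paper's algorithm) and ranks gallery items by cosine similarity to a query:

```python
import numpy as np

def cosine_search(query, gallery, k=5):
    """Return indices of the k gallery embeddings most similar to the query."""
    # Normalize so that a dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    return np.argsort(-sims)[:k], sims

rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 64))              # stand-in for extracted features
query = gallery[42] + 0.01 * rng.normal(size=64)  # near-duplicate of item 42
top, _ = cosine_search(query, gallery, k=3)       # item 42 should rank first
```

Because the search operates purely on embeddings, the same index serves both common and rare classes, which is the property the abstract highlights.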
This study investigates how operating conditions (OCs) impact the performance of a synthetic aperture radar (SAR) automatic target recognition (ATR) algorithm. We characterize the performance of the algorithm as a function of OCs to understand its strengths and weaknesses and to guide further development. This paper examines the classification stage of a template-based method called Quantized Grayscale Matching (QGM). To investigate this problem thoroughly, asymptotic prediction code is used to generate synthetic data for both training and testing to answer several questions. How does articulation impact the performance of the algorithm? How much training data is needed to handle the articulation of the targets? Why do certain targets need more training data than others? Which articulation states present the biggest challenge, and why? How can synthetic results be made to exhibit characteristics similar to measured results? These answers will help guide algorithm development and provide a framework for exploring other OCs.
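The template-matching idea behind quantized grayscale methods can be sketched as follows. This is a minimal illustration of quantize-then-match scoring, not the published QGM implementation; the `quantize` and `classify` helpers and the agreement-fraction score are assumptions for the example:

```python
import numpy as np

def quantize(img, levels=8):
    """Map pixel intensities in [0, 1) to integer quantization levels."""
    return np.clip((img * levels).astype(int), 0, levels - 1)

def classify(chip, templates, levels=8):
    """Return the index of the template whose quantized image best matches the chip."""
    qc = quantize(chip, levels)
    # Score each template by the fraction of pixels whose quantization level agrees.
    scores = [np.mean(quantize(t, levels) == qc) for t in templates]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
templates = [rng.random((16, 16)) for _ in range(3)]
# A test chip: template 2 plus mild additive noise, clipped back into range.
chip = np.clip(templates[2] + 0.02 * rng.normal(size=(16, 16)), 0, 0.999)
```

Quantization gives the score some robustness to small intensity perturbations, which is why questions about noise and articulation coverage in the training templates dominate the study.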
In many applications, access to large quantities of labeled data is prohibitive due to its cost or to a lack of access to the classes of interest. This problem is exacerbated for specific subclasses and data types that are not easily accessible, such as remote sensing data. The problem of limited data for specific classes is referred to as the low-shot or few-shot problem. Typically in the low-shot problem, a wealth of data from a source domain is leveraged to train a convolutional feature extractor that is then applied to a target domain in innovative ways. In this work we apply this framework to both the low-shot and the fully sampled problem, in which the convolutional neural network is used as a feature extractor and paired with an alternate classifier. We evaluate the benefits of this approach in two contexts: a baseline problem and limited training data. Additionally, we investigate the impact of loss function selection and of sequestering low-shot data on the classification performance of this approach. We present an application of these techniques on the recent public xView dataset.
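A common "alternate classifier" in the low-shot literature is a nearest-class-mean rule over frozen CNN features. The sketch below illustrates that pairing under the assumption that features have already been extracted (synthetic Gaussian clusters stand in for CNN embeddings; the paper's actual classifier choice is not specified here):

```python
import numpy as np

def class_means(features, labels):
    """Mean embedding per class, computed from the few labeled (low-shot) examples."""
    classes = np.unique(labels)
    return classes, np.stack([features[labels == c].mean(axis=0) for c in classes])

def nearest_mean_predict(features, classes, means):
    """Assign each feature vector to the class with the closest mean."""
    d = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(0)
centers = rng.normal(scale=5.0, size=(4, 32))        # stand-in class prototypes
labels = np.repeat(np.arange(4), 5)                  # 5 "shots" per class
feats = centers[labels] + rng.normal(size=(20, 32))  # stand-in extracted features
classes, means = class_means(feats, labels)

test_feats = centers[labels] + rng.normal(size=(20, 32))
preds = nearest_mean_predict(test_feats, classes, means)
```

The appeal of this design is that only the class means depend on the scarce target-domain labels; the feature extractor itself is trained entirely on the source domain.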
As the Air Force pushes toward reliance on autonomous systems for navigation, situational awareness, threat analysis, and target engagement, there are several requisite technologies that must be developed. Key among these is the concept of 'trust' in the autonomous system to perform its task. The term 'trust' has many application-specific definitions. We propose that properly calibrated algorithm confidence is essential to establishing trust. To achieve properly calibrated confidence, we present a framework for assessing algorithm performance and estimating the confidence of a classifier's declaration. This framework has applications to improved algorithm trust, fusion, and diagnostics. We present a metric for comparing the quality of performance modeling and examine three different implementations of performance models on a synthetic dataset over a variety of operating conditions.
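One standard way to quantify whether confidence is "properly calibrated" is the expected calibration error (ECE): bin declarations by stated confidence and compare stated confidence against observed accuracy in each bin. The sketch below is a generic ECE computation, not the metric proposed in the paper:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Weighted average gap between stated confidence and observed accuracy."""
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece

rng = np.random.default_rng(0)
n = 20000
conf = rng.uniform(0.5, 1.0, size=n)
# A calibrated classifier: probability of being correct equals the stated confidence.
correct = (rng.uniform(size=n) < conf).astype(float)
ece_good = expected_calibration_error(conf, correct)
# An overconfident classifier: always claims 0.99 but is right only 70% of the time.
ece_bad = expected_calibration_error(np.full(n, 0.99),
                                     (rng.uniform(size=n) < 0.7).astype(float))
```

A low ECE means a downstream fusion engine can treat the declared confidence as an actual probability, which is the sense in which calibration underwrites trust.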
With all of the new remote sensing modalities available, and with ever-increasing capabilities and frequency of collection, there is a desire to fundamentally understand and quantify the information content in the collected image data relative to various exploitation goals, such as detection and classification. A fundamental approach for this is the framework of Bayesian decision theory, but a daunting challenge is to have sufficiently flexible and accurate multivariate models for the features and/or pixels that capture a wide assortment of distributions and dependencies. In addition, data can come in the form of both continuous and discrete representations, where the latter is often generated based on considerations of robustness to imaging conditions and occlusions/degradations. In this paper we propose a novel suite of "latent" models fundamentally based on multivariate Gaussian copula models that can be used for quantized data from SAR imagery. For this Latent Gaussian Copula (LGC) model, we derive an approximate maximum-likelihood estimation algorithm and demonstrate very reasonable estimation performance even for larger images with many pixels. However, applying these LGC models to large dimensions/images within a Bayesian decision/classification framework is infeasible due to the computational/numerical issues in evaluating the true full likelihood, and we propose an alternative class of novel pseudo-likelihood detection statistics that are computationally feasible. We show in a few simple examples that these statistics have the potential to provide very good and robust detection/classification performance. This framework is demonstrated on a simulated SLICY data set, and the results show the importance of modeling the dependencies and of utilizing the pseudo-likelihood methods.
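The generative view of a latent Gaussian copula model for quantized data can be illustrated simply: draw correlated latent Gaussians, then cut them at fixed thresholds to obtain discrete levels. The sketch below shows only this sampling direction, under assumed thresholds and correlation; it is not the paper's estimation algorithm:

```python
import numpy as np

def sample_latent_gaussian_copula(n, corr, thresholds, rng):
    """Draw quantized data: correlated latent Gaussians cut at fixed thresholds."""
    L = np.linalg.cholesky(corr)
    z = rng.normal(size=(n, corr.shape[0])) @ L.T   # latent Gaussian field
    # Each latent value maps to the number of thresholds it exceeds (its level).
    return np.searchsorted(thresholds, z.ravel()).reshape(z.shape)

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.8], [0.8, 1.0]])           # assumed latent correlation
levels = sample_latent_gaussian_copula(50000, corr, np.array([-0.5, 0.5]), rng)
# Dependence in the latent field survives quantization into 3 discrete levels.
emp_corr = np.corrcoef(levels.T)[0, 1]
```

The point of the example is that the discrete observations remain strongly dependent even though the marginals are quantized, which is exactly the structure the LGC likelihood must capture.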
KEYWORDS: Error analysis, Information theory, Monte Carlo methods, Data modeling, Statistical analysis, Analytical research, Matrices, Imaging systems, Computer simulations, Systems modeling, Radar
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general, what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has existed for some time but was recently made more prominent by research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data; to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models, which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on the Bayes error. We present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models, the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, often dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
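Monte Carlo computation of the triangle divergence is straightforward when the densities can be evaluated. Using one common (unnormalized) definition, T(p, q) = ∫ (p − q)²/(p + q) dx, sampling from the equal mixture m = (p + q)/2 gives T = E_m[2(p − q)²/(p + q)²]. The sketch below applies this to two univariate Gaussians; normalization conventions for the triangle divergence vary in the literature, so the specific definition here is an assumption:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def triangle_divergence_mc(mu1, mu2, sigma, n, rng):
    """Monte Carlo estimate of T(p, q) = integral of (p-q)^2/(p+q)."""
    # Draw half the samples from each component of the mixture m = (p + q)/2.
    x = np.concatenate([rng.normal(mu1, sigma, n // 2),
                        rng.normal(mu2, sigma, n // 2)])
    p, q = normal_pdf(x, mu1, sigma), normal_pdf(x, mu2, sigma)
    # Under m, the expectation of 2(p-q)^2/(p+q)^2 equals T(p, q).
    return np.mean(2 * (p - q) ** 2 / (p + q) ** 2)

rng = np.random.default_rng(0)
td_same = triangle_divergence_mc(0.0, 0.0, 1.0, 100_000, rng)   # identical: T = 0
td_far = triangle_divergence_mc(0.0, 10.0, 1.0, 100_000, rng)   # well separated
```

For identical distributions the estimate is exactly zero, and for well-separated distributions it approaches the maximum value of 2 under this definition, mirroring how the divergence tracks Bayes error from easy to hard problems.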
Vibrometry offers the potential to classify a target based on its vibration spectrum. Signal processing is necessary to extract features from the sensing signal for classification. This paper investigates the effects of fundamental frequency normalization on the end-to-end classification process [1]. The fundamental frequency, assumed to be the engine's firing frequency, has previously been used successfully to classify vehicles [2, 3]. Fundamental frequency normalization attempts to remove the vibration variations due to changes in the engine's revolutions per minute (rpm). Vibration signatures with and without fundamental frequency normalization are converted to ten features that are classified and compared. To evaluate classification performance, confusion matrices are constructed and analyzed. A statistical analysis of the features is also performed to determine how fundamental frequency normalization affects the features. These methods were studied on three datasets covering three military vehicles and six civilian vehicles. Accelerometer data from each of these data collections is tested with and without normalization.
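The idea behind fundamental frequency normalization can be sketched by sampling the spectrum at integer multiples of the (known or estimated) fundamental, so that the same harmonic pattern yields the same feature vector regardless of rpm. The `harmonic_features` helper and the two toy "engine" signatures below are illustrative assumptions, not the paper's ten features:

```python
import numpy as np

def harmonic_features(signal, fs, f0, n_harmonics=10):
    """Spectral magnitude at the first n harmonics of f0: an rpm-invariant vector."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Sample the spectrum at integer multiples of the fundamental frequency.
    idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harmonics + 1)]
    feats = spec[idx]
    return feats / (np.linalg.norm(feats) + 1e-12)

fs = 2000.0
t = np.arange(0, 2.0, 1.0 / fs)
# Two signatures with the same harmonic amplitude pattern at different "rpm".
pattern = [(1, 1.0), (2, 0.5), (3, 0.25)]
sig_a = sum(a * np.sin(2 * np.pi * k * 30.0 * t) for k, a in pattern)
sig_b = sum(a * np.sin(2 * np.pi * k * 45.0 * t) for k, a in pattern)
fa = harmonic_features(sig_a, fs, 30.0)
fb = harmonic_features(sig_b, fs, 45.0)   # nearly identical to fa
```

After normalization the two feature vectors agree, even though the raw spectra peak at different frequencies; this is the variation the normalization is designed to remove.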
In vehicle target classification, contact sensors have frequently been used to collect data to simulate laser vibrometry data. Accelerometer data has been used throughout the literature to test and train classifiers in place of laser vibrometry data [1] [2]. Understanding the key similarities and differences between accelerometer and laser vibrometry data is essential to continue advancing aided vehicle recognition systems. This paper investigates the effect of accelerometer versus laser vibrometer data on classification performance. Research was performed using the end-to-end process previously published by the authors to understand the effects of different types of data on the classification results. The end-to-end process includes preprocessing the data, extracting features drawn from the signal processing literature, using feature selection to determine the most relevant features, and finally classifying and identifying the vehicles. Three data sets were analyzed, including one collection on military vehicles and two recent collections on civilian vehicles. The experiments demonstrated include: (1) training the classifiers on accelerometer data and testing on laser vibrometer data, (2) combining the data and classifying the vehicle, and (3) repetitions of these tests across different vehicle states, such as idle or revving, and varying stationary revolutions per minute (rpm).
Based on the fundamental scattering mechanisms of facetized computer-aided design (CAD) models, we are able to define expected contributions (EC) to the radar signature. The net result of this analysis is the prediction of the salient aspects and contributing vehicle morphology based on the aspect. Although this approach does not provide the fidelity of an asymptotic electromagnetic (EM) simulation, it does provide very fast estimates of the unique scattering that can be consumed by a signature exploitation algorithm. The speed of this approach is particularly relevant when considering the high dimensionality of target configuration variability due to articulating parts which are computationally burdensome to predict. The key scattering phenomena considered in this work are the specular response from a single bounce interaction with surfaces and dihedral response formed between the ground plane and vehicle. Results of this analysis are demonstrated for a set of civilian target models.
KEYWORDS: Data modeling, Sensors, Vibrometry, Algorithm development, Physics, Skin, Fluctuations and noise, Signal attenuation, Systems modeling, Combustion
Vibration signatures sensed from distant vehicles using laser vibrometry systems provide valuable information that may be used to help identify key vehicle features such as engine type, engine speed, and number of cylinders. Using physics models of the vibration phenomenology, features are chosen to support classification algorithms. Various individual exploitation algorithms were developed using these models to classify vibration signatures into engine type (piston vs. turbine), engine configuration (Inline 4 vs. Inline 6 vs. V6 vs. V8 vs. V12), and vehicle type. The results of these algorithms are presented for an 8-class problem. Finally, we present the benefits of using a factor graph representation to link these independent algorithms together, constructing a classification hierarchy for the vibration exploitation problem.
Feature-aided tracking of targets in synthetic aperture radar is a topic of increasing interest. The aperture synthesized through the combination of target and platform motion facilitates the application of two-dimensional target recognition algorithms through noncooperative imaging of the target in question. Many non-parametric inverse synthetic aperture radar (ISAR) imaging techniques maximize image sharpness by estimating the phase error imposed by the unknown target motion. The resultant images can suffer from small unresolved phase errors and ambiguous cross-range resolution. Downstream image exploitation algorithms must be robust to these effects. A set of civilian vehicles is investigated, which challenges image-quality-based ISAR algorithms due to their comparatively small radar cross section. This paper addresses the feasibility of peak-based classification of civilian targets moving through challenging tracking scenarios using ISAR images. Classifier performance is evaluated over a set of sensor, target, and environmental operating conditions through use of synthetically generated data.
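One ingredient of peak-based classification is extracting the dominant scattering-center peaks from an image chip before any matching takes place. The sketch below shows only this extraction step on a toy image; the peak-matching stage and the paper's actual classifier are not reproduced, and `extract_peaks` is a hypothetical helper:

```python
import numpy as np

def extract_peaks(img, n_peaks=10):
    """Pixel coordinates of the n_peaks strongest local maxima in an image chip."""
    padded = np.pad(img, 1, mode="constant")
    h, w = img.shape
    # The 8 neighbor views of every pixel, via shifted slices of the padded image.
    neighbors = [padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                 for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    is_peak = np.all([img > nb for nb in neighbors], axis=0)
    coords = np.argwhere(is_peak)
    order = np.argsort(-img[is_peak])        # strongest peaks first
    return coords[order[:n_peaks]]

# Toy "ISAR image": two dominant scatterers on a flat background.
img = np.zeros((32, 32))
img[5, 5], img[20, 10] = 1.0, 0.8
peaks = extract_peaks(img, n_peaks=2)
```

Operating on peak locations rather than raw pixel intensities is one way to gain robustness to the residual phase errors and cross-range ambiguity described above.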
KEYWORDS: Performance modeling, Sensors, Detection and tracking algorithms, Data modeling, Kinematics, Roads, Motion models, 3D modeling, Monte Carlo methods, Reflectivity
In order to provide actionable intelligence in a layered sensing paradigm, exploitation algorithms should produce a confidence estimate in addition to the inference variable. This article presents a methodology and the results of one such algorithm for feature-aided tracking of vehicles in wide area motion imagery. To perform experiments, a synthetic environment was developed, which provided explicit knowledge of ground truth, tracker prediction accuracy, and control of operating conditions. This synthetic environment leveraged physics-based modeling simulations to re-create traffic flow, vehicle reflectance, obscuration, and shadowing. With the ability to control operating conditions as well as the availability of ground truth, several experiments were conducted to test both the tracker and the performance model. The results show that the performance model produces a meaningful estimate of tracker performance over the subset of operating conditions.
In this paper we present an overview of the National Imagery Interpretability Rating Scale (NIIRS) for SAR imagery. We map basic SAR image formation parameters into the NIIRS via an information-theoretic framework. Preliminary results from a pilot study are presented for human interpretability of various SAR images. Extensions to this work, which include sensor exploitation algorithms and integration within the Pursuer environment, are outlined.
KEYWORDS: Signal to noise ratio, Monte Carlo methods, Performance modeling, Switches, Synthetic aperture radar, Quantization, Switching, Sensors, Statistical analysis, Detection and tracking algorithms
Synthetic aperture radar (SAR) exploitation algorithms typically rely on derived features to represent the target. These features are chosen to discriminate between target classes while exhibiting robustness to noise and calibration artifacts. One of the challenges in working with such features is understanding when this assumption of robustness is no longer valid. In this paper, we focus on characterizing the performance of the gray-scale quantization feature in the presence of additive noise. We derive an approximation for the variance of the intraclass distance by treating the additive noise as an independent and identically distributed (iid) process. The analytic model is contrasted with empirical results for a two-class problem.
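The flavor of such an analytic-versus-empirical comparison can be illustrated with a simpler feature than gray-scale quantization. For the squared Euclidean distance between two noisy observations of the same target with iid Gaussian noise of variance σ², the template cancels and d² ~ 2σ²·χ²(N), giving exact mean 2σ²N and variance 8σ⁴N. The Monte Carlo check below verifies those closed forms; it is an illustration of the methodology, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, trials = 64, 0.3, 20_000
template = rng.random(N)                    # a fixed target feature vector

# Two independent noisy observations of the same target.
obs1 = template + rng.normal(scale=sigma, size=(trials, N))
obs2 = template + rng.normal(scale=sigma, size=(trials, N))
d2 = np.sum((obs1 - obs2) ** 2, axis=1)     # squared intraclass distance

# The template cancels, so d2 = ||n1 - n2||^2 ~ 2*sigma^2 * chi-square(N).
mean_pred = 2 * sigma**2 * N                # analytic E[d2]
var_pred = 8 * sigma**4 * N                 # analytic Var[d2]
```

The empirical mean and variance of `d2` match the analytic predictions closely, which is the kind of agreement the paper seeks for its quantization-feature model.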