3D scene reconstruction provides an improved representation from which features of critical objects or targets may be extracted. Both electro-optical (EO) and synthetic aperture radar (SAR) sensors have been exploited for this purpose, but each modality suffers from different sources of reconstruction error. Reconstruction from EO data is limited by frame rate and can be blurred by moving targets or optical distortions in the lens, which leads to errors in the 3D model. SAR, by contrast, offers the opportunity to correct some of these errors through its capacity for making range measurements, even under clouds or at night, when EO data would not be available. Conversely, SAR imagery lacks the texture offered by optical images and is more sensitive to perspective, while moving targets can likewise cause reconstruction errors. This work aims to exploit the strengths of both modalities to reconstruct 3D scenes from multi-sensor EO-SAR data. In particular, we consider the fusion of multi-pass Gotcha SAR data with modeled EO data for the same scene. We propose a framework that fuses 2D image maps acquired from airborne EO and SAR sensors, leveraging the range information of SAR and the object-shape information of EO imagery. Starting from an initial 2D image of the scene, a 3D reconstruction is formed and then iteratively improved with each additional source of sensor data (EO or SAR). This approach offers the potential for robust, real-time 3D representations as a basis for 4D surveillance.
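The iterative refinement described above can be pictured as a confidence-weighted update of a gridded scene representation as each new EO or SAR pass arrives. The sketch below is purely illustrative and is not the paper's method: it assumes a height-map representation, simulated height observations, and a simple weighted-average fusion rule, none of which are specified in the abstract.

```python
import numpy as np

def fuse_observation(height_map, confidence, obs_height, obs_conf):
    """Confidence-weighted update of a gridded height map with one new
    sensor-derived height estimate (EO or SAR). Illustrative only: the
    abstract does not specify the actual fusion rule."""
    new_conf = confidence + obs_conf
    fused = (confidence * height_map + obs_conf * obs_height) / np.maximum(new_conf, 1e-6)
    return fused, new_conf

rng = np.random.default_rng(0)
true_scene = rng.uniform(0.0, 10.0, size=(64, 64))   # stand-in for the true scene heights

# Flat prior derived from an initial 2D image, refined pass by pass.
height_map = np.zeros_like(true_scene)
confidence = np.full_like(true_scene, 1e-3)

for pass_idx in range(6):                             # alternating EO / SAR passes (simulated)
    noise = 2.0 if pass_idx % 2 == 0 else 0.5         # SAR range measurements assumed less noisy in height
    obs_height = true_scene + rng.normal(0.0, noise, true_scene.shape)
    obs_conf = np.full_like(true_scene, 1.0 / noise**2)
    height_map, confidence = fuse_observation(height_map, confidence, obs_height, obs_conf)
    print(f"pass {pass_idx}: RMSE = {np.sqrt(np.mean((height_map - true_scene)**2)):.2f}")
```

In this toy setting the root-mean-square error shrinks with each simulated pass, mirroring the idea of a reconstruction that improves as additional EO or SAR data are fused.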
Deep neural networks have become increasingly popular in radar micro-Doppler classification; yet a key challenge, which has limited potential gains, is the lack of large amounts of measured data to support the design of deeper networks with greater robustness and performance. Several approaches have been proposed in the literature to address this problem, such as unsupervised pre-training and transfer learning from optical imagery or synthetic RF data. This work investigates an alternative approach: exploiting "datasets of opportunity," i.e., micro-Doppler datasets collected with other RF sensors that may differ in frequency, bandwidth, or waveform, for training. Specifically, this work compares in detail the cross-frequency training degradation incurred by several different training approaches and deep neural network (DNN) architectures. Results show a 70% drop in classification accuracy when the RF sensors used for pre-training, fine-tuning, and testing are all different, and a 15% degradation when only the pre-training data come from a different sensor while the fine-tuning and test data come from the same sensor. By using generative adversarial networks (GANs), a large amount of synthetic data is generated for pre-training. Results show that the cross-frequency performance degradation is reduced by 50% when kinematically-sifted GAN-synthesized signatures are used in pre-training.
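The pre-training/fine-tuning pipeline described above can be sketched as two training stages applied to the same network. The code below is a minimal, hedged illustration in PyTorch: the architecture, class count, data shapes, and hyperparameters are assumptions, and random tensors stand in for the GAN-synthesized and measured spectrograms, which are not reproduced here.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 7  # assumed number of activity classes; not given in the abstract

def make_classifier():
    """Small CNN for 64x64 micro-Doppler spectrograms (illustrative architecture,
    not one of the specific DNNs compared in the paper)."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, NUM_CLASSES),
    )

def run_stage(model, loader, lr, epochs):
    """One training stage (pre-training or fine-tuning) with cross-entropy loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def random_loader(n, batch=32):
    """Placeholder data: random tensors stand in for (a) kinematically-sifted
    GAN-synthesized signatures and (b) measured target-sensor spectrograms."""
    x = torch.randn(n, 1, 64, 64)
    y = torch.randint(0, NUM_CLASSES, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=batch, shuffle=True)

model = make_classifier()
model = run_stage(model, random_loader(2000), lr=1e-3, epochs=3)  # pre-train on synthetic data
model = run_stage(model, random_loader(200), lr=1e-4, epochs=5)   # fine-tune on target-sensor data
```

The design choice this mirrors is simply that the bulk of the training signal comes from plentiful synthetic data, while a smaller amount of measured data from the target sensor adapts the network to its frequency, bandwidth, and waveform.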