Presentation + Paper
Method for training deep neural networks in vehicle detection using drone-captured data and background synthesis
7 June 2024
Alexander Pichler, Nicolas Hueber
Abstract
Deep neural network-based military vehicle detectors pose particular challenges due to the scarcity of relevant images and limited access to vehicles in this domain, especially in the infrared spectrum. To address these issues, a novel drone-based bi-modal vehicle acquisition method is proposed, capturing 72 key images of a vehicle from different view angles in a fast and automated way. Synthetic training images are obtained by overlaying vehicle patches onto relevant background images and applying data augmentation techniques. This study introduces the use of AI-generated synthetic background images and compares them with real video footage. Several models were trained and their performance compared in real-world situations. Results demonstrate that the combination of data augmentation, context-specific background samples, and synthetic background images significantly improves model precision while maintaining mean Average Precision, highlighting the potential of using Generative AI (Stable Diffusion) and drones to generate training datasets for object detectors in challenging domains.
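As an illustration of the compositing step described in the abstract, the sketch below shows one plausible way to overlay a drone-captured vehicle patch onto a background image with simple augmentations and emit a detection label. It is not the authors' implementation: the file names, class index, label format, and augmentation ranges are assumptions, and generating the backgrounds themselves (from real footage or from a text-to-image model such as Stable Diffusion, as the paper describes) is not shown.

```python
# Illustrative sketch only (not from the paper): pastes a cut-out vehicle patch
# (RGBA with transparent background) onto a background image, applies simple
# augmentations, and writes a YOLO-style bounding-box label. File names, class
# index, and parameter ranges are assumptions.
import random
from PIL import Image, ImageEnhance

def composite_vehicle(background_path: str, patch_path: str, out_stem: str) -> None:
    bg = Image.open(background_path).convert("RGB")
    patch = Image.open(patch_path).convert("RGBA")

    # Random scale, horizontal flip, and brightness jitter (assumed ranges).
    scale = random.uniform(0.2, 0.6)
    w, h = int(patch.width * scale), int(patch.height * scale)
    patch = patch.resize((w, h))
    if random.random() < 0.5:
        patch = patch.transpose(Image.Transpose.FLIP_LEFT_RIGHT)

    # Adjust brightness on the RGB channels only, preserving the alpha mask.
    r, g, b, a = patch.split()
    rgb = ImageEnhance.Brightness(Image.merge("RGB", (r, g, b))).enhance(
        random.uniform(0.8, 1.2)
    )
    patch = Image.merge("RGBA", (*rgb.split(), a))

    # Random placement; assumes the scaled patch fits inside the background.
    x = random.randint(0, bg.width - w)
    y = random.randint(0, bg.height - h)
    bg.paste(patch, (x, y), patch)  # alpha channel masks the paste
    bg.save(f"{out_stem}.jpg")

    # YOLO label: "class x_center y_center width height", normalized to [0, 1].
    cx, cy = (x + w / 2) / bg.width, (y + h / 2) / bg.height
    with open(f"{out_stem}.txt", "w") as f:
        f.write(f"0 {cx:.6f} {cy:.6f} {w / bg.width:.6f} {h / bg.height:.6f}\n")

if __name__ == "__main__":
    composite_vehicle("background.jpg", "vehicle_patch.png", "synthetic_000")
```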
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Alexander Pichler and Nicolas Hueber "Method for training deep neural networks in vehicle detection using drone-captured data and background synthesis", Proc. SPIE 13035, Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications II, 130350L (7 June 2024); https://doi.org/10.1117/12.3013736
KEYWORDS
Data modeling
Object detection
Infrared radiation
Infrared imaging
Visible radiation
Sensors
Video