The well-being of a military unit depends in part on its ability to reliably detect threats and prepare for them accordingly. While a sensor mounted on a ground vehicle can adequately capture threats in some scenarios, its viewpoint can be quite limiting. A potential remedy is to mount the sensor on an unmanned aerial vehicle (UAV), providing a more holistic view of the scene; however, this aerial perspective introduces its own challenges. Herein, we investigate the performance of a UAV-mounted RGB sensor for object detection and classification, with the goal of providing advanced situational awareness to a manned or unmanned ground vehicle trailing the UAV. To do this, we perform transfer learning with state-of-the-art deep learning models, e.g., ResNet50 and Inception-v3. While object detection with machine learning has been actively researched, including on remotely sensed imagery, most prior work has approached it through the lens of scene classification; it is therefore worthwhile to examine how this new camera perspective affects detection performance. Performance is assessed via route-based cross-validation on imagery collected by the U.S. Army ERDC at a test site over multiple days.
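The route-based protocol above can be sketched with grouped cross-validation: frames from the same route are held together so no route contributes imagery to both the training and test folds. This is a minimal illustration only; the route ids, frame counts, and number of splits below are hypothetical, not taken from the ERDC collection.

```python
# Sketch of route-based cross-validation: splits are grouped by a per-frame
# route label so the same route never appears on both sides of a split.
# Route ids and frame counts here are illustrative placeholders.
import numpy as np
from sklearn.model_selection import GroupKFold

frames = np.arange(12)                    # stand-ins for image frames
routes = np.array([0, 0, 0, 1, 1, 1,      # route id for each frame
                   2, 2, 2, 3, 3, 3])

gkf = GroupKFold(n_splits=4)              # one held-out route per fold here
for train_idx, test_idx in gkf.split(frames, groups=routes):
    # No route contributes frames to both the train and test folds.
    assert set(routes[train_idx]).isdisjoint(set(routes[test_idx]))
```

Grouping by route (rather than shuffling frames) avoids optimistic estimates caused by near-duplicate frames from the same flight appearing in both folds.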