As the current Russian invasion of Ukraine shows, the use of AI-enabled technologies is the next logical step in the use of autonomous systems (such as UAVs or loitering munitions) for detecting military assets. The survivability of military assets on the battlefield is increased by Camouflage, Concealment & Deception (CCD) measures. However, current CCD measures are inadequate to prevent detection by AI-enabled technologies. To improve on CCD measures, adversarial patterns can be employed to fool AI object detection: assets such as soldiers, command tents, and vehicles camouflaged with adversarial patterns are either not detected or are misclassified by the AI. In an operational setting, the downside of adversarial patterns is that they are colorful and distinct from their surroundings, which makes them easily detectable by the human eye. In this manuscript, we design anti-AI camouflage that only uses colors close to those of the camouflage netting commonly used by NATO forces. We show these patterns are effective at either (a) preventing detection, (b) reducing the confidence the AI has in its detection, or (c) making the AI detect many false objects with low confidence. This anti-AI camouflage can fool both human intelligence and artificial intelligence.
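To make the idea concrete, the sketch below shows one generic way such color-constrained adversarial patterns can be optimized. It is not the manuscript's actual method: the palette values are illustrative stand-ins for camouflage-netting colors, and the "detector" is a toy logistic scorer rather than a real object-detection network. The core technique is projected gradient descent, with each iterate snapped back onto the restricted camouflage palette.

```python
import numpy as np

# Illustrative stand-ins for camouflage-netting colors (RGB in [0, 1]);
# these are NOT official NATO colors.
PALETTE = np.array([
    [0.20, 0.30, 0.15],  # dark green
    [0.35, 0.40, 0.20],  # olive
    [0.30, 0.22, 0.12],  # brown
    [0.45, 0.42, 0.30],  # khaki
])

def project_to_palette(patch):
    """Snap each pixel to its nearest palette color (Euclidean in RGB)."""
    flat = patch.reshape(-1, 3)
    d = ((flat[:, None, :] - PALETTE[None, :, :]) ** 2).sum(-1)
    return PALETTE[d.argmin(axis=1)].reshape(patch.shape)

def confidence(patch, w, b=0.0):
    """Toy 'detector': logistic score over the flattened patch."""
    return 1.0 / (1.0 + np.exp(-(patch.ravel() @ w + b)))

def craft_patch(w, shape=(8, 8, 3), steps=100, lr=0.5, seed=0):
    """Projected gradient descent on the detector logit: every iterate
    stays inside the camouflage palette; the best iterate is kept."""
    rng = np.random.default_rng(seed)
    patch = project_to_palette(rng.random(shape))
    init_conf = confidence(patch, w)
    best, best_conf = patch, init_conf
    for _ in range(steps):
        # d(logit)/d(pixel) = w for this linear scorer, so step against w
        # to lower the score, then re-project onto the palette.
        patch = np.clip(patch - lr * w.reshape(shape), 0.0, 1.0)
        patch = project_to_palette(patch)
        conf = confidence(patch, w)
        if conf < best_conf:
            best, best_conf = patch, conf
    return best, init_conf, best_conf

rng = np.random.default_rng(42)
w = rng.normal(size=8 * 8 * 3)  # weights of the toy detector
patch, before, after = craft_patch(w)
print(f"detector confidence: {before:.3f} -> {after:.3f}")
```

Against a real detector, the scalar score would be replaced by the detector's objectness/class confidence and the gradient obtained by backpropagation, but the palette projection step, which is what keeps the pattern inconspicuous to the human eye, stays the same.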