The Air Force Institute of Technology (AFIT) created the AFIT Sensor and Scene Emulation Tool (ASSET), which aims to produce accurate and realistic electro-optical and infrared (EO/IR) data. While working to validate ASSET's cloud-free radiometry calculations, researchers demonstrated that the radiometric accuracy of synthetic data can be improved using Hyperspectral Imagery (HSI). This research addresses ASSET's lack of the accurate HSI reflectance data required for scene generation by introducing two novel machine learning (ML) models and a scene generation process. Two Convolutional Neural Network (CNN) models, a U-Net and a Pix2Pix Generative Adversarial Network (GAN), are trained on multi-sensor data, including land classification, elevation, texture, and HSI imagery. The ML models process image chips that serve as inputs to a novel rendering process, generating realistic whole-Earth hyperspectral reflectance maps between 480 nm and 2500 nm. To assess the accuracy of the models and rendering process, generated data were compared against truth HSI data using Mean Absolute Error (MAE), Mean Squared Error (MSE), and image quality metrics such as Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR). This paper details the current stage of model development and the possible contributions of the model to ASSET and synthetic scene generation.
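As a concrete illustration of the evaluation described above, the sketch below computes the four reported metrics (MAE, MSE, SSIM, PSNR) for a single generated/truth chip pair. The array shapes, the [0, 1] reflectance scaling, and the use of NumPy and scikit-image are assumptions for illustration, not details drawn from the paper.

```python
# Minimal sketch: compare one generated hyperspectral reflectance chip to truth.
# Assumes chips are (height, width, bands) NumPy arrays scaled to [0, 1];
# names, shapes, and the band count below are illustrative placeholders.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def compare_chips(generated: np.ndarray, truth: np.ndarray) -> dict:
    """Return MAE, MSE, SSIM, and PSNR between a generated chip and a truth chip."""
    error = generated - truth
    mae = float(np.mean(np.abs(error)))   # Mean Absolute Error
    mse = float(np.mean(error ** 2))      # Mean Squared Error
    # Treat spectral bands as channels; data_range matches the [0, 1] scaling.
    ssim = structural_similarity(truth, generated, channel_axis=-1, data_range=1.0)
    psnr = peak_signal_noise_ratio(truth, generated, data_range=1.0)
    return {"MAE": mae, "MSE": mse, "SSIM": ssim, "PSNR": psnr}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.uniform(size=(64, 64, 210))  # placeholder truth reflectance chip
    generated = np.clip(truth + rng.normal(scale=0.02, size=truth.shape), 0.0, 1.0)
    print(compare_chips(generated, truth))
```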