Synthetic imagery is very useful for visible signature studies because of the control, flexibility and replicability of simulated environments. For study results to be meaningful, however, synthetic images must closely replicate reality, so validating their radiometric representation is a key concern. Recent research on extracting spectral reflectance from real digital photographs could be adapted to compare the spectral reflectance of objects in synthetic scenes with that of their real-world counterparts. This paper presents a preliminary study that uses real-world spectral radiance data (a combination of spectral reflectance and scene illumination) and associated RGB images to train a machine learning model to predict the spectral radiance of objects in any RGB image. Preliminary results using two machine learning algorithms, a support vector machine and a multi-layer perceptron, show promise for predicting spectral radiance from RGB images. Future research will attempt to improve the models by supplying a much larger pool of training data, measuring the spectral response of our camera, and using image information from an earlier stage of the imaging pipeline, such as camera raw values instead of RGB values.
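The regression setup described above, mapping per-pixel RGB triples to multi-band spectral radiance with a support vector machine and a multi-layer perceptron, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the training data here is randomly generated stand-in data, and the band count, kernel, and network sizes are assumptions.

```python
# Hypothetical sketch: predicting spectral radiance from RGB values
# with the two regressor families named in the abstract (SVM and MLP).
# All data below is synthetic stand-in data; the real study pairs
# measured spectral radiance with camera RGB values.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stand-in training set: 200 pixels with RGB inputs in [0, 1] and
# 31-band spectral radiance targets (e.g. 400-700 nm at 10 nm steps).
X = rng.uniform(0.0, 1.0, size=(200, 3))
mixing = rng.uniform(0.0, 1.0, size=(3, 31))  # fake RGB-to-spectrum mapping
Y = X @ mixing + rng.normal(0.0, 0.01, size=(200, 31))

# SVR predicts a single output, so wrap it to regress every band.
svm_model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0))
svm_model.fit(X, Y)

# MLPRegressor handles multi-output targets natively.
mlp_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)
mlp_model.fit(X, Y)

# Predict full spectra for new RGB pixels.
new_rgb = rng.uniform(0.0, 1.0, size=(5, 3))
svm_spectra = svm_model.predict(new_rgb)
mlp_spectra = mlp_model.predict(new_rgb)
print(svm_spectra.shape, mlp_spectra.shape)  # (5, 31) (5, 31)
```

Each predicted row is a spectrum for one pixel; comparing such predicted spectra for a synthetic scene against measured spectra of the real counterpart is the validation use case the paper motivates.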