Polarimetric imaging sensors in the electro-optical region, already available for military and commercial use in both the visual and infrared bands, show enhanced capabilities for advanced target detection and recognition. These capabilities arise from the ability to discriminate between man-made surfaces and natural backgrounds using the polarization of light. In the development of materials for signature management in the visible and infrared wavelength regions, different criteria need to be met to fulfil the requirements for good camouflage against modern sensors. In conventional camouflage design, the aim is to spectrally match or adapt the surface properties of an object to a background, thereby minimizing the contrast seen by a specific threat sensor. Examples will be shown from measurements of some relevant materials and how they affect the polarimetric signature in different ways. Properties that dimension an optical camouflage from a polarimetric perspective, such as the degree of polarization, the viewing or incidence angle, and the amount of diffuse reflection, mainly in the infrared region, will be discussed.
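As an illustrative sketch (not part of the measurements reported above), the degree of linear polarization is commonly computed from the first three Stokes parameters. The function name and example values below are hypothetical, chosen only for demonstration:

```python
import numpy as np

def degree_of_linear_polarization(s0, s1, s2):
    """Degree of linear polarization (DoLP) from Stokes parameters.

    s0 is the total intensity; s1 and s2 describe the linear
    polarization state. Smooth man-made surfaces typically show a
    higher DoLP than rough natural backgrounds.
    """
    s0 = np.asarray(s0, dtype=float)
    return np.hypot(s1, s2) / np.where(s0 > 0, s0, np.nan)

# A strongly polarized pixel vs. an unpolarized one (made-up values)
print(degree_of_linear_polarization(1.0, 0.6, 0.3))  # ≈ 0.67
print(degree_of_linear_polarization(1.0, 0.0, 0.0))  # 0.0
```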
Polarimetric information has been shown to provide a means of potentially enhancing the capacity of electro-optical sensors in areas such as target detection, recognition and identification. The potential benefit must be weighed against the added complexity of the sensor and the occurrence and robustness of polarimetric signatures. While progress in the design of novel systems for snapshot polarimetry may result in compact and lightweight polarimetric sensors, the aim of this work is to report on the design, characterization and performance of a polarimetric imager, primarily designed for polarimetric signature assessment of static scenes in the long-wave thermal infrared. The system utilizes the division-of-time principle and is based on an uncooled microbolometer camera and a rotating polarizing filter. Methods for radiometric and polarimetric calibration are discussed. A significant intrinsic polarization dependency of the microbolometer camera is demonstrated, and it is shown that the ability to characterize, model and compensate for various instrument effects plays a crucial role in polarimetric signature assessment.
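To illustrate the division-of-time principle, the sketch below shows a generic least-squares estimate of the linear Stokes parameters from intensities measured through a rotating linear polarizer at several orientations. This is a textbook formulation, not the calibration method of the system described above; the function name and example values are assumptions:

```python
import numpy as np

def stokes_from_rotating_polarizer(angles_rad, intensities):
    """Least-squares estimate of (s0, s1, s2) from intensities measured
    through an ideal linear polarizer at several orientations.

    Model: I(theta) = 0.5 * (s0 + s1*cos(2*theta) + s2*sin(2*theta))
    """
    A = 0.5 * np.column_stack([
        np.ones_like(angles_rad),
        np.cos(2 * angles_rad),
        np.sin(2 * angles_rad),
    ])
    stokes, *_ = np.linalg.lstsq(A, np.asarray(intensities), rcond=None)
    return stokes

# Synthetic check: generate intensities from known Stokes parameters
angles = np.deg2rad([0, 45, 90, 135])
true_stokes = np.array([2.0, 0.5, -0.3])
meas = 0.5 * (true_stokes[0]
              + true_stokes[1] * np.cos(2 * angles)
              + true_stokes[2] * np.sin(2 * angles))
print(stokes_from_rotating_polarizer(angles, meas))  # ≈ [2.0, 0.5, -0.3]
```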
Hyperspectral remote sensing based on unmanned airborne vehicles is a field of increasing importance. The combined functionality of simultaneous hyperspectral and geometric modeling is, however, less well developed. A configuration has been developed that enables reconstruction of the hyperspectral three-dimensional (3D) environment. The hyperspectral camera is based on a linear variable filter and a high-frame-rate, high-resolution camera, enabling point-to-point matching and 3D reconstruction. This allows the information to be combined into a single, complete 3D hyperspectral model. In this paper, we describe the camera and illustrate its capabilities and difficulties through real-world experiments.
A small, lightweight, and inexpensive hyperspectral camera based on a linear variable filter close to the focal plane array (FPA) is described. The use of a full-frame sensor allows large coverage with high spatial resolution at moderate spectral resolution. The spatial resolution has been maintained using a tilt/shift lens for chromatic focusing corrections. The trade-offs of positioning the filter relative to the FPA and varying the f-number have been studied. Calibration can correct for artifacts such as spectral filter variability. Reference spectra can be obtained using the same camera system by imaging targets over homogeneous areas. For textured surfaces, the different materials can be separated by using statistical methods. Accurate reconstruction of the sparse spectral image data is demonstrated.
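As a hedged illustration of the kind of per-pixel calibration step that can correct response artifacts such as filter transmission variability, the sketch below shows generic dark/flat-field correction. It is not the actual calibration procedure of the camera described above; all names and values are hypothetical:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Generic dark/flat-field correction (illustrative only).

    Subtracts the dark frame and divides by the per-pixel response,
    so a scene as bright as the flat reference maps to 1.0 everywhere.
    """
    resp = flat - dark
    return (raw - dark) / np.where(resp > 0, resp, np.nan)

dark = np.full((2, 3), 8.0)                 # dark signal per pixel
gain = np.array([[0.9, 1.0, 1.1],
                 [1.0, 1.2, 0.8]])          # per-pixel response variation
flat = dark + gain * 100.0                  # uniform reference illumination
raw = dark + gain * 50.0                    # uniform scene at half that level
print(flat_field_correct(raw, dark, flat))  # all entries 0.5
```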
We propose a novel deep learning approach using autoencoders to map spectral bands to a space of lower dimensionality while preserving the information that makes it possible to discriminate between different materials. Deep learning is a relatively new pattern recognition approach that has given promising results in many applications. In deep learning, a hierarchical feature representation of increasing levels of abstraction is learned. The autoencoder is an important unsupervised technique frequently used in deep learning for extracting important properties of the data. The learned latent representation is a non-linear mapping of the original data that potentially preserves the discrimination capacity.
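The autoencoder idea can be sketched with a minimal single-hidden-layer network trained by plain gradient descent. This toy NumPy version (synthetic data, assumed hyperparameters, no deep hierarchy) only illustrates the principle of learning a lower-dimensional latent representation that can reconstruct the input; it is not the architecture proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, n_latent, n_iter=2000, lr=0.1):
    """Single-hidden-layer autoencoder trained by gradient descent.

    Encodes d-band samples into n_latent dimensions (tanh encoder)
    and decodes linearly, minimizing reconstruction error.
    """
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, n_latent))   # encoder weights
    W2 = rng.normal(0.0, 0.1, (n_latent, d))   # decoder weights
    for _ in range(n_iter):
        H = np.tanh(X @ W1)                    # latent representation
        Xhat = H @ W2                          # reconstruction
        err = Xhat - X
        gW2 = H.T @ err / n
        gPre = (err @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
        gW1 = X.T @ gPre / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

# Synthetic "spectra": two latent factors expanded to ten bands
Z = rng.normal(size=(200, 2))
X = np.tanh(Z @ rng.normal(size=(2, 10)))
W1, W2 = train_autoencoder(X, n_latent=2)
recon = np.tanh(X @ W1) @ W2
print(np.mean((recon - X) ** 2))  # reconstruction error of the trained model
```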
Images collected in the shortwave infrared (SWIR) spectral range, 1-2.5 μm, are similar to visual (VIS) images and are easier for a human operator to interpret than images collected in the thermal infrared range, >3 μm. The ability of SWIR radiation to penetrate ordinary glass also means that conventional lens materials can be used. The night vision capability of a SWIR camera is, however, dependent on external light sources. Under moonless conditions the dominant natural light source is nightglow, but its intensity varies both locally and temporally. These fluctuations add to variations in other parameters, so the real performance of a SWIR camera under moonless conditions can differ considerably from the expected performance. Measured data collected from the literature on the temporal and local variations of nightglow are presented, and the variations of the nightglow intensity and other measured parameters are quantified by computing standard and combined standard uncertainties. The analysis shows that the uncertainty contributions from the nightglow variations are significant. However, nightglow is also found to be a potentially adequate light source for SWIR applications.
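For independent error contributions, the combined standard uncertainty referred to above is, following the GUM, the root sum of squares of the individual standard uncertainties. The numbers in this sketch are made up for illustration and are not the values reported in the analysis:

```python
import math

def combined_standard_uncertainty(uncertainties):
    """Combine independent standard uncertainties in quadrature,
    u_c = sqrt(sum(u_i**2)), following the GUM."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Made-up relative contributions: nightglow temporal variation,
# nightglow local variation, and a sensor-related term
print(combined_standard_uncertainty([0.30, 0.20, 0.05]))  # ≈ 0.364
```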
High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of characters. Simplified rendering techniques based on computer graphics methods can be used to overcome these limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of animated people in terrain.

Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models, these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions match the IR texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that, together with the terrain model, are used to produce high fidelity IR imagery of people or crowds.

For high-level animation control and crowd simulations, HLAS (High Level Animation System) has been developed. Tools are available to create and visualize skeleton-based animations, but tools that allow control of the animated characters on a higher level, e.g. for crowd simulation, are usually expensive and closed source. We need the flexibility of HLAS to add animation into an HLA-enabled sensor system simulation framework.
Interferometric hyperspectral imagers using infrared focal plane array (FPA) sensors have received increasing interest within the field of security and defence. Setups are commonly based on either the Sagnac or the Michelson configuration, where the former is usually preferred for its mechanical robustness. The Michelson configuration, however, offers a larger FOV due to better vignetting performance, an improved signal-to-noise ratio, and reduced cost due to relaxed beamsplitter specifications. Recently, a laboratory prototype of a more robust and easy-to-align corner-cube Michelson hyperspectral imager was demonstrated. The prototype is based on an uncooled bolometric FPA in the LWIR (8-14 μm) spectral band, and in this paper the noise properties of this hyperspectral imager are discussed.
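In a Michelson (Fourier-transform) hyperspectral imager, the spectrum is recovered from the recorded interferogram by a Fourier transform over optical path difference. The sketch below is a simplified magnitude-spectrum version with a synthetic monochromatic source; the sampling parameters are assumptions, not those of the prototype discussed above:

```python
import numpy as np

def spectrum_from_interferogram(interferogram, opd_step_um):
    """Recover a magnitude spectrum from a sampled interferogram by FFT.

    interferogram: intensity vs. optical path difference (OPD)
    opd_step_um:   OPD sampling step in micrometres
    Returns (wavenumbers in cm^-1, magnitude spectrum).
    """
    n = len(interferogram)
    x = interferogram - np.mean(interferogram)   # remove the DC term
    spec = np.abs(np.fft.rfft(x))
    # d is the OPD step in cm, so rfftfreq yields wavenumbers in cm^-1
    wavenumbers = np.fft.rfftfreq(n, d=opd_step_um * 1e-4)
    return wavenumbers, spec

# Synthetic monochromatic source at 1000 cm^-1 (10 um wavelength)
opd = np.arange(1024) * 1.0                    # OPD in um, 1 um steps
ig = 1 + np.cos(2 * np.pi * 0.1 * opd)         # 1000 cm^-1 = 0.1 um^-1
wn, spec = spectrum_from_interferogram(ig, 1.0)
print(wn[np.argmax(spec)])                     # ≈ 1000 cm^-1
```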