Hyperspectral remote sensing based on unmanned aerial vehicles is a field of increasing importance. However, the combined functionality of simultaneous hyperspectral imaging and geometric modeling is less well developed. We have developed a configuration that enables reconstruction of the hyperspectral three-dimensional (3D) environment. The hyperspectral camera is based on a linear variable filter and a high-frame-rate, high-resolution sensor, enabling point-to-point matching and 3D reconstruction. This allows the information to be combined into a single, complete 3D hyperspectral model. In this paper, we describe the camera and illustrate its capabilities and difficulties through real-world experiments.
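To make the coupling between geometry and spectral sampling concrete: in a linear variable filter (LVF) camera, each sensor row samples a different passband, so a scene point tracked across frames is observed at several wavelengths. The following minimal Python sketch illustrates this; the wavelength range, sensor height, and linearity of the mapping are illustrative assumptions, not the specification of the camera described above.

```python
import numpy as np

# Assumed linear variable filter (LVF) parameters -- illustrative only,
# not the specification of the actual camera described above.
LAMBDA_MIN_NM = 500.0   # wavelength at the first image row (assumption)
LAMBDA_MAX_NM = 900.0   # wavelength at the last image row (assumption)
N_ROWS = 1024           # sensor height in pixels (assumption)

def row_to_wavelength(row):
    """Map an image row to its filter passband center, assuming the
    LVF transmission varies linearly across the sensor."""
    return LAMBDA_MIN_NM + (LAMBDA_MAX_NM - LAMBDA_MIN_NM) * row / (N_ROWS - 1)

# A 3D point matched across several frames is sampled at a different row
# (and hence wavelength) in each frame; collecting these samples yields
# a sparse spectrum attached to the reconstructed point.
observations = [(100, 0.42), (350, 0.55), (800, 0.31)]  # (row, intensity)
spectrum = [(row_to_wavelength(r), i) for r, i in observations]
print(spectrum)
```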
There exist several tools and methods for camera resectioning, i.e., geometric calibration for the purpose of estimating intrinsic and extrinsic parameters. The intrinsic parameters represent the internal properties of the camera, such as focal length, principal point, and distortion coefficients. The extrinsic parameters relate the camera's position to the world, i.e., how the camera is positioned and oriented in the world. With both sets of parameters known, it is possible to relate a pixel in one camera to the world or to another camera. This is important in many applications, for example in stereo vision. The existing methods work well for standard visual cameras in most situations. Intrinsic parameters are usually estimated by imaging a well-defined pattern from different angles and distances. Checkerboard patterns are very often used for calibration, since they are well-defined patterns with easily detectable features. The intersections between the black and white squares form high-contrast points that can be estimated with sub-pixel accuracy. Knowing the precise dimensions and structure of the pattern enables calculation of the intrinsic parameters. Extrinsic calibration can be performed in a similar manner if the exact position and orientation of the pattern are known. A common method is to distribute markers in the scene and measure their exact locations. The key to good calibration is well-defined points and accurate measurements. Thermal cameras are a subset of infrared cameras that work with long wavelengths, usually between 9 and 14 microns. At these wavelengths, all objects above absolute zero temperature emit radiation, which makes thermal cameras ideal for passive imaging in complete darkness and widely used in military applications. The issue that arises when trying to perform a geometric calibration of a thermal camera is that a checkerboard emits more or less the same amount of radiation in the black squares as in the white ones. In other words, the calibration board that is optimal for calibrating visual cameras may be completely useless for thermal cameras. A calibration board for thermal cameras should ideally be a checkerboard with high contrast at thermal wavelengths. (It is of course possible to use other sorts of objects or patterns, but since most tools and software expect a checkerboard pattern, this is by far the most straightforward solution.) Depending on the application, it should also be more or less portable and work both in indoor and outdoor scenarios. In this paper, we present several years of experience with calibration of thermal cameras in various scenarios. Checkerboards with high contrast for both indoor and outdoor scenarios are presented, as well as different markers suitable for extrinsic calibration.
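As a concrete illustration of the standard checkerboard procedure described above, the following Python sketch runs the usual OpenCV calibration pipeline. It is a minimal example, not the authors' tool: the pattern size, square size, and image path are assumptions, and for a thermal camera the input images would come from a high-contrast thermal board instead of a printed visual one.

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the checkerboard, square size, and image path
# are assumptions for this sketch; adapt them to the actual target.
PATTERN = (9, 6)          # inner corners per row/column
SQUARE_SIZE = 0.025       # square side in meters (assumption)

# 3D coordinates of the corners in the board's own frame (z = 0 plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for fname in glob.glob("calib/*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        # Refine the detected corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the intrinsics (camera matrix, distortion coefficients) and
# the per-view extrinsics (rotation and translation of each board pose).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```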
A reliable indoor positioning system providing high accuracy has the potential to increase the safety of first responders and military personnel significantly. To enable navigation in a broad range of environments and obtain more accurate and robust positioning results, we propose a multi-sensor fusion approach. We describe and evaluate a positioning system based on sensor fusion between a foot-mounted inertial measurement unit (IMU) and a camera-based system for simultaneous localization and mapping (SLAM). The complete system provides accurate navigation in many relevant environments without depending on preinstalled infrastructure. The camera-based system uses both inertial measurements and visual data, thereby enabling navigation even in environments and scenarios where one of the sensors provides unreliable data for a few seconds. When sufficient light is available, the camera-based system generally provides good performance. The foot-mounted system provides accurate positioning when distinct steps can be detected, e.g., during walking and running, even in dark or smoke-filled environments. By combining the two systems, the integrated positioning system can be expected to enable accurate navigation in almost all kinds of environments and scenarios. In this paper, we present results from initial tests, which show that the proposed sensor fusion improves the navigation solution considerably in scenarios where either the foot-mounted or camera-based system is unable to navigate on its own.
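A minimal sketch of one way such loosely coupled fusion can be realized: a constant-velocity Kalman filter that treats the foot-mounted IMU track and the camera-based SLAM track as two independent position measurements. This is an illustration only; the state model and noise parameters are assumptions, not the filter used in the paper.

```python
import numpy as np

# State: [x, y, vx, vy]; both sensors are treated as 2D position fixes.
dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])      # constant-velocity model
H = np.hstack([np.eye(2), np.zeros((2, 2))])       # position measurement
Q = 0.01 * np.eye(4)                               # process noise (assumption)
R_IMU, R_SLAM = 0.5 * np.eye(2), 0.05 * np.eye(2)  # noise levels (assumptions)

x, P = np.zeros(4), np.eye(4)

def predict(x, P):
    """Propagate state and covariance one time step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    """Standard Kalman measurement update with position fix z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = predict(x, P)
x, P = update(x, P, np.array([1.0, 0.20]), R_IMU)   # foot-mounted IMU fix
x, P = update(x, P, np.array([1.1, 0.15]), R_SLAM)  # camera-based SLAM fix
print(x[:2])   # fused position estimate
```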
Personnel positioning is important for safety in, e.g., emergency response operations. In GPS-denied environments, possible positioning solutions include systems based on radio frequency communication, inertial sensors, and cameras. Many camera-based systems create a map and localize themselves relative to it. The computational complexity of most such solutions grows rapidly with the size of the map. One way to reduce the complexity is to divide the visited region into submaps. This paper presents a novel method for merging conditionally independent submaps (generated using, e.g., EKF-SLAM) by the use of smoothing. Using this approach it is possible to build large maps in close to linear time. The method is demonstrated in two indoor scenarios, where data was collected with a trolley-mounted stereo vision system.
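To illustrate the basic idea of stitching submaps into a global map, the sketch below aligns two 2D submaps through landmarks they share, using a least-squares rigid fit. This is a simplified stand-in for the smoothing-based merge of conditionally independent submaps presented in the paper; all data and names are illustrative assumptions.

```python
import numpy as np

def rigid_fit(A, B):
    """Find R, t minimizing ||R @ A + t - B|| over shared landmarks
    (columns of A and B are corresponding 2D points)."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((A - ca) @ (B - cb).T)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

# Shared landmarks expressed in each submap's local frame (assumed data);
# submap 2 is a rotated and translated view of the same landmarks.
shared_in_map1 = np.array([[0.0, 1.0, 2.0], [0.0, 0.5, 0.2]])
theta = 0.3
Rt = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
shared_in_map2 = Rt @ shared_in_map1 + np.array([[2.0], [1.0]])

# Estimate the transform taking submap 2's frame into submap 1's frame,
# then express all of submap 2 in that frame and concatenate the maps.
R, t = rigid_fit(shared_in_map2, shared_in_map1)
map2_only = Rt @ np.array([[0.5, 1.5], [1.0, -0.3]]) + np.array([[2.0], [1.0]])
merged = np.hstack([shared_in_map1, R @ map2_only + t])
print(merged)
```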
Laser-based 3D sensors measure range with high accuracy and allow for detection of objects behind various types of occlusion, e.g., tree canopies. Range information is valuable for detection of small objects that are typically represented by 5-10 pixels in the data set. Range information is also valuable in tracking problems, when the tracked object is occluded during parts of its movement and when there are several objects in the scene. In this paper, on-going work on detection and tracking is presented. Detection of partly occluded vehicles is discussed; to detect partly occluded objects, we take advantage of the range information to remove foreground clutter. The target detection approach is based on geometric features, for example local surface detection, shadow analysis, and height-based detection. Initial results on tracking of humans are also presented, and the benefits of range information are discussed. Results are illustrated using outdoor measurements with a 3D FLASH LADAR sensor and a 3D scanning LADAR.
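As a minimal illustration of how range information can strip foreground clutter in front of a partly occluded target, the sketch below applies a simple range gate to a small point cloud. The cloud and the gate values are assumptions chosen for illustration, not measured data.

```python
import numpy as np

# Tiny LADAR point cloud: rows are (x, y, z) per return, in meters.
points = np.array([
    [1.0,  0.0,  2.0],    # near return (foreground clutter)
    [0.2,  0.1,  3.0],    # near return (foreground clutter)
    [0.5,  0.3,  4.0],    # near return (foreground clutter)
    [12.0, 0.5, 41.0],    # return from the target region
])
ranges = np.linalg.norm(points, axis=1)

# Keep only returns inside a range gate around the expected target
# distance; nearer returns are foreground clutter, farther ones background.
TARGET_RANGE, GATE = 40.0, 5.0   # meters (assumptions)
mask = np.abs(ranges - TARGET_RANGE) < GATE
target_points = points[mask]
print(target_points)
```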