The Infrared Eye project was developed at DRDC Valcartier to improve the efficiency of airborne search and rescue operations. A high-performance opto-mechanical pointing system was developed to allow fast positioning of a high-resolution narrow field of view, used for search and detection, within a lower-resolution wide field of view that optimizes area coverage. This system also enables a step-stare technique, which rapidly builds a large-area image mosaic by step-staring a narrow-field camera and properly tiling the resulting images. The resulting mosaic covers the wide field of the current Infrared Eye, but with the high resolution of the narrow field. For the intended application, the camera will be fixed to an airborne platform on a stabilized mount, and the position of each image in the mosaic will be computed from flight data provided by an altimeter, a GPS receiver and an inertial unit. This paper presents a model of the complete system, a dynamic step-stare strategy that generates the image mosaic, a flight image-acquisition simulator for strategy testing, and some results obtained with this simulator.
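As a rough illustration of the image-positioning step, the following Python sketch projects the camera boresight to the ground under a flat-terrain assumption; the function name, the ENU coordinate convention and the simple geometry are illustrative choices, not the model described in the paper.

```python
import numpy as np

def los_ground_intercept(pos_enu, az_deg, el_deg):
    """Project a camera line of sight to flat ground (z = 0).

    pos_enu : (east, north, up) position of the camera, metres.
    az_deg  : azimuth of the line of sight, degrees clockwise from north.
    el_deg  : depression angle below the horizontal, degrees (must be > 0).
    Returns the (east, north) ground coordinates of the boresight point.
    """
    e0, n0, h = pos_enu
    az, el = np.radians(az_deg), np.radians(el_deg)
    # Unit vector of the line of sight in ENU coordinates.
    d = np.array([np.sin(az) * np.cos(el),
                  np.cos(az) * np.cos(el),
                  -np.sin(el)])
    t = h / -d[2]                  # distance along the ray to z = 0
    return e0 + t * d[0], n0 + t * d[1]

# Example: camera at 300 m altitude, looking due east, 30 degrees below horizon.
print(los_ground_intercept((0.0, 0.0, 300.0), az_deg=90.0, el_deg=30.0))
```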
As part of the Infrared Eye project, this article describes the design of large-deviation, achromatic Risley prism scanning systems operating in the 0.5-0.92 and 8-9.5 μm spectral regions. Designing these systems is challenging because of the large deviation required (0 to 25 degrees), the large spectral bandwidth and the mechanical constraints imposed by the need to rotate the prisms to any position in 1/30 second. A design approach making extensive use of the versatility of optical design software is described. Designs consisting of different pairs of optical materials are shown to illustrate the trade-off between chromatic aberration, mass and vignetting. Control of chromatic aberration and a reasonable prism shape are obtained over 8-9.5 μm with zinc sulfide and germanium. The design is more difficult for the 0.5-0.92 μm band. Trade-offs include using sapphire with Cleartran® over a reduced bandwidth (0.75-0.9 μm) or acrylic singlets with the Infrared Eye in active mode (0.85-0.86 μm). Non-sequential ray tracing is used to study the effects of fresnelizing one element of the achromat to reduce its mass, and to evaluate detector narcissus in the 8-9.5 μm region.
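To give a feel for how prism rotations map to beam deviation, here is a minimal first-order (thin-prism) sketch in Python; it ignores the chromatic and thick-prism effects the actual designs must handle, and the 12.5-degree prism values are illustrative only.

```python
import numpy as np

def risley_deviation(delta1_deg, delta2_deg, phi1_deg, phi2_deg):
    """First-order (thin-prism) net deviation of a two-prism Risley scanner.

    delta1_deg, delta2_deg : individual prism deviations (n - 1) * alpha, degrees.
    phi1_deg, phi2_deg     : rotation angles of the two prisms, degrees.
    Returns (magnitude, azimuth) of the combined deviation, degrees.
    """
    d1 = delta1_deg * np.array([np.cos(np.radians(phi1_deg)),
                                np.sin(np.radians(phi1_deg))])
    d2 = delta2_deg * np.array([np.cos(np.radians(phi2_deg)),
                                np.sin(np.radians(phi2_deg))])
    d = d1 + d2
    return np.hypot(*d), np.degrees(np.arctan2(d[1], d[0]))

# Two identical 12.5-degree prisms: co-rotated they give ~25 degrees of
# deviation, counter-rotated by 180 degrees they cancel to zero.
print(risley_deviation(12.5, 12.5, 0.0, 0.0))
print(risley_deviation(12.5, 12.5, 0.0, 180.0))
```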
The Infrared (IR) Eye was developed with support from the National Search and Rescue Secretariat (NSS) with a view to improving the efficiency of airborne search-and-rescue operations. The IR Eye concept is based on the human eye and simultaneously uses two fields of view to optimize area coverage and detection capability. It integrates two cameras: the first, with a wide field of view of 40 degrees, is used for search and detection, while the second, with a narrower field of view of 10 degrees for higher resolution and identification, is mobile within the wide field and slaved to the operator's line of sight by means of an eye-tracking system. The images from both cameras are fused and shown simultaneously on a standard high-resolution CRT display unit, interfaced with the eye-tracking unit in order to optimize the man-machine interface. The system was flight tested on the Advanced System Research Aircraft (a Bell 412 helicopter) of the Flight Research Laboratory of the National Research Council of Canada. This paper presents some results of the flight tests, indicates the strengths and deficiencies of the system, and suggests future improvements for an advanced system.
This paper presents the nominal design and tolerance analysis of a narrow-field-of-view camera lens equipped with a rotating-prism scanning system. The first aspects considered are the compensation of chromatic aberration, the reduction of distortion, and the relationship between depth of field and lens resolution. A rigorous approach to determining the relationship between the field-of-regard direction and the prism orientations is then discussed. Finally, the tolerance analysis gives the predicted pointing-direction precision as a function of the system tolerances.
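For illustration only, the first-order inverse relationship for a pair of identical thin prisms can be written in closed form, as in the sketch below; the paper's rigorous approach accounts for thick-prism and chromatic effects that this approximation omits, and the numerical values are arbitrary.

```python
import numpy as np

def prism_angles(target_dev_deg, target_az_deg, delta_deg):
    """First-order inverse solution for two identical thin Risley prisms.

    target_dev_deg : desired deviation magnitude (0 .. 2 * delta_deg), degrees.
    target_az_deg  : desired azimuth of the deviation, degrees.
    delta_deg      : deviation of each individual prism, degrees.
    Returns one (phi1, phi2) pair of prism rotation angles, degrees.
    """
    # The half-angle between the prisms sets the deviation magnitude:
    # |deviation| = 2 * delta * cos(half_split).
    half_split = np.degrees(np.arccos(target_dev_deg / (2.0 * delta_deg)))
    # Their mean orientation sets the azimuth.
    return target_az_deg + half_split, target_az_deg - half_split

# Example: point 20 degrees off-axis at 45 degrees azimuth with 12.5-degree prisms.
print(prism_angles(20.0, 45.0, delta_deg=12.5))
```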
The Infrared Eye is a new concept of surveillance system that mimics human eye behavior to improve the detection of small or low-contrast targets. In search and rescue (SAR) operations, a wide-field-of-view (WFOV) IR camera of approximately 20 degrees is used to detect targets, and the operator switches to a narrow field of view (NFOV) of approximately 5 degrees for better target identification. In current SAR systems, both fields of view cannot be used concurrently on the same display. The system presented in this paper fuses, on the same high-resolution display, the high-sensitivity WFOV image and the high-resolution NFOV image obtained from two IR cameras. The movement of the NFOV image within the WFOV image is slaved to the operator's eye movement by an eye-tracking device. The operator's central vision always sees the high-resolution IR image of the scene captured by the NFOV camera, while his peripheral vision is filled by the enhanced-sensitivity (but lower-resolution) image of the WFOV camera. This paper describes the operating principle and implementation of the display, including its interface with the eye-tracking system and the opto-mechanical system used to steer the NFOV camera.
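A minimal sketch of the fusion principle, assuming the NFOV image is simply pasted into the WFOV image at the gaze point; the real display also magnifies, registers and blends the two images, and the function and array names are hypothetical.

```python
import numpy as np

def fuse_fov(wfov, nfov, gaze_xy):
    """Paste the high-resolution NFOV image into the WFOV image, centred on
    the operator's gaze point (x, y) in WFOV pixel coordinates. Purely
    illustrative: the gaze point is assumed far enough from the border for
    the whole NFOV patch to fit, and no registration or blending is done.
    """
    out = wfov.copy()
    h, w = nfov.shape
    x, y = gaze_xy
    top, left = y - h // 2, x - w // 2
    out[top:top + h, left:left + w] = nfov
    return out

# Hypothetical 640 x 480 WFOV image and 160 x 120 NFOV patch.
wide = np.zeros((480, 640), dtype=np.uint8)
narrow = np.full((120, 160), 255, dtype=np.uint8)
fused = fuse_fov(wide, narrow, gaze_xy=(320, 240))
```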
This paper presents recent developments of large-area focal plane 'pseudo' arrays for infrared (IR) imaging. The devices (called QWIP-LED) are based on the epitaxial integration of an n-type mid-IR (8-10 μm in the present study) GaAs/AlGaAs quantum well detector with a light emitting diode. The originality of this work is the use of n-type quantum wells for large detection responsivity. From these structures, very large area (of the order of 1 cm²) mesas are processed with V-grooves to couple the mid-IR light to the QW intersubband transitions. The increase of spontaneous emission caused by the mid-infrared-induced photocurrent is detected with a CCD camera in the reflection configuration. As demonstrated earlier on p-type QWIP structures, the mid-IR image of a blackbody object is up-converted to a near-IR image with very small distortion.
We demonstrate the classification of 3D objects from their 2D infrared images. Our feature-based approach is robust to image brightness variations caused by temperature changes. An original cubic-spline interpolation in feature space is introduced to build the feature-space trajectories used for recognition.
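As an illustration of feature-space trajectories built by cubic-spline interpolation, the sketch below interpolates a few hypothetical feature vectors indexed by aspect angle; the actual features and matching rule of the paper are not reproduced.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical feature vectors (rows) measured at a few aspect angles (degrees).
angles = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
features = np.random.default_rng(0).normal(size=(5, 8))

# One cubic spline per feature dimension defines a trajectory in feature space.
trajectory = CubicSpline(angles, features, axis=0)

# Recognition could then match an observed feature vector against the
# trajectory, e.g. by finding the closest point along a dense sampling of it.
query = trajectory(45.0)                       # feature vector at 45 degrees
dense = trajectory(np.linspace(0.0, 120.0, 241))
nearest_index = np.argmin(np.linalg.norm(dense - query, axis=1))
```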
A new concept of surveillance system called the Wide Area Coverage Infrared Surveillance System (WACISS), based on human vision, was developed and a first laboratory prototype was demonstrated recently. A second, more operational prototype, named the Infrared Eye, is being built and will be tested in cooperation with the NRCC Flight Research Laboratory. The Infrared Eye will use the new pixel-less quantum well infrared photodetector sensors coupled to light emitting diodes (QWIP/LED), currently being developed at the NRCC Institute for Microstructural Sciences under DREV sponsorship. The multiple advantages of the pixel-less QWIP/LED over conventional sensors will considerably simplify the design of the system. Like the WACISS, the IR Eye will integrate two cameras: the first, with a wide field of view, will be used for detection, while the second, with a narrower field and higher resolution for identification, will be mobile within the WFOV and slaved to the operator's line of sight by means of an eye-tracking system. The images from both cameras will be fused and shown simultaneously on a standard high-resolution CRT display unit interfaced with the eye-tracking unit. The basic concepts pertaining to the project and the design constraints of this second prototype are presented.
An objective methodology that can be used to perform automatic MRTD tests on infrared imaging systems is presented. It is based on the assumption that a unique threshold function should exist between the MRTD and the signal-to-noise ratio measured by a computer that performs spatio-temporal filtering on digitized images.
Recent developments of QWIP-LED detectors have led to pixelless imaging devices. These detectors convert a thermal IR image into a near-IR image, making it possible to image an IR scene at higher resolution on the same detector area. Their use in a surveillance system is of great interest. The aim of this theoretical study is to compare the signal-to-noise ratio obtained with different spectral bands of these new pixelless sensors.
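A very rough way to compare spectral bands is the in-band blackbody photon flux, as in the sketch below; the paper's SNR model also involves detector responsivity, LED/CCD conversion and noise terms that are omitted here, and the band limits shown are arbitrary.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def in_band_photon_radiance(band_um, temp_k, n=2000):
    """In-band blackbody photon radiance (photons / s / m^2 / sr).

    Simple rectangular-rule integration of the Planck photon radiance over
    the band. Illustrative only: a full SNR comparison would also include
    the optics, the detector quantum efficiency and the sensor noise terms.
    """
    lam = np.linspace(band_um[0], band_um[1], n) * 1e-6          # metres
    spectral = (2.0 * C / lam**4) / np.expm1(H * C / (lam * K * temp_k))
    return float(np.sum(spectral) * (lam[1] - lam[0]))

# Hypothetical comparison of two candidate bands for a 300 K scene.
for band in [(3.0, 5.0), (8.0, 9.5)]:
    print(band, f"{in_band_photon_radiance(band, 300.0):.3e}")
```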
A rotation-, scale- and translation-invariant pattern recognition technique is proposed. It is based on Fourier-Mellin descriptors (FMDs). Each FMD is taken as an independent feature of the object, and a set of those features forms a signature. FMDs are naturally rotation invariant. Translation invariance is achieved through preprocessing. A proper normalization of the FMDs gives the scale invariance property. This approach offers the double advantage of providing invariant signatures of the objects and a dramatic reduction of the amount of data to process. The compressed invariant feature signature is then presented to a multilayer perceptron neural network. This final step provides some robustness in the classification of the signatures, enabling good recognition behavior under anamorphic scale distortion. We also present an original feature extraction technique adapted to optical calculation of the FMDs. A prototype optical set-up was built, and experimental results are presented.
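A purely digital, hedged sketch of Fourier-Mellin-type descriptors is given below (centroid centering for translation, nearest-neighbour polar sampling, magnitudes taken for rotation invariance, normalization by the (0,0) term for scale invariance); it is not the optical computation described in the paper, and the sampling parameters are arbitrary.

```python
import numpy as np

def fourier_mellin_descriptors(img, n_q=4, n_l=4, sigma=0.5, n_r=64, n_t=128):
    """Numerical sketch of Fourier-Mellin descriptors.

    Accumulates M(q, l) = sum f(r, t) * r**(sigma + 1j*q - 1) * exp(-1j*l*t);
    |M(q, l)| is rotation invariant, and dividing by |M(0, 0)| removes the
    residual scale factor.
    """
    ys, xs = np.indices(img.shape)
    total = img.sum()
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total   # centroid
    r_max = min(img.shape) / 2.0
    r = np.linspace(r_max / n_r, r_max, n_r)                      # avoid r = 0
    t = np.linspace(0.0, 2.0 * np.pi, n_t, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    # Nearest-neighbour polar sampling around the centroid.
    yy = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    xx = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    f = img[yy, xx].astype(float)
    desc = np.empty((n_q, n_l))
    for q in range(n_q):
        for l in range(n_l):
            kernel = rr ** (sigma + 1j * q) * np.exp(-1j * l * tt) / rr
            desc[q, l] = np.abs((f * kernel).sum())
    return desc / desc[0, 0]                                      # scale normalization
```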
Current infrared imaging systems used for surveillance and search and rescue operations possess two fields of view which may be alternately selected by the operator: a wide field of the order of 20 degrees is used for the search and detection of targets, and a narrower field of a few degrees is selected for recognition tasks. However, the degraded sensitivity and resolution of the wider field prevent it from fulfilling its function adequately. A new concept based on focal plane array detector technology is intended to correct this drawback and to improve future infrared surveillance systems for search and rescue operations. Simulating the properties of the human eye, the concept allows simultaneous surveillance and image acquisition in two fields of view. A wide peripheral field of view (60 degrees) with increased sensitivity but lower resolution is dedicated to search and detection. A narrower field (6 degrees), which can be steered within the wider field, allows the recognition of detected objects with an improved resolution obtained by the use of microscanning techniques. The high resolution required for the simultaneous display of both fields of view has led to the development of a new type of display, based on optical projection and superposition, better adapted to the human eye and hence optimizing the human interface. The constraints imposed on the opto-mechanical and electronic design by the mobility of the narrower field within the larger one, the microscanning mechanism and the calibration requirements of the focal plane array are discussed, and the selected solutions are presented. The limitations of the system in its present state of development are described and plans for future improvements are outlined.
This paper describes the results of experiments that were conducted in order to characterize the types of noise limiting the performance of an amber InSb charge injection device focal plane array (3-5 microns) of 256 by 256 pixels. This is part of the work done at the Defense Research Establishment Valcartier to develop a wide-area-coverage infrared surveillance system. The emphasis is put on the analysis of the postcorrection spatial noise that reduces the array sensitivity to weak point-source targets. This residual noise limits the improvement provided by an increased array integration time. Furthermore, the results show that a temporal low frequency noise component has a more severe effect than detector nonlinearities. However, this problem can be partly resolved with a periodic offset compensation obtained by reference image subtraction. The reference image is acquired when the blade of a flat black chopper wheel completely blocks the aperture of the camera. The chopper wheel is synchronized on the acquisition process. Results show that this compensation method can efficiently reduce the low frequency noise level and enhance point-source target detection.
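A minimal sketch of the offset-compensation step, assuming a chopper-synchronized reference frame is available; the function name and the optional gain map are illustrative, not the actual acquisition software.

```python
import numpy as np

def offset_compensate(frame, reference, gain=None):
    """Periodic offset compensation by reference-image subtraction.

    frame     : raw image from the focal plane array.
    reference : image acquired while the chopper blade blocks the aperture.
    gain      : optional per-pixel gain map from a prior two-point calibration.
    The subtraction removes the slowly drifting per-pixel offsets (low
    frequency temporal noise); gain non-uniformity is corrected separately.
    """
    corrected = frame.astype(float) - reference.astype(float)
    if gain is not None:
        corrected *= gain
    return corrected
```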
Microscanning is a technique that makes it possible to double the resolution of a given staring-array imager. It consists of taking multiple images of the same scene while each time displacing the image over the detector plane by a distance equal to a fraction of the detector pitch. The technique is limited by the time required to shift the image from one point to the other and by the precision of the movements. This article describes work done under contract for the Defense Research Establishment Valcartier, as part of the Wide Area Coverage Infrared Search System (WACISS) project, to develop a fast microscanning imaging device. The system includes three main sections: the microscanning head, the controller and the power amplifier. The microscanning head is made of a lens and a two-axis microtranslation table driven by two piezoelectric translators. The controller drives a high-voltage power amplifier, which in turn drives the translators. The controller allows four operating modes: fixed position, 2 x 2, 3 x 3, and 4 x 4 microscan. It works in open loop as well as in closed loop for precise displacements. The system will be integrated into the WACISS project and will serve as an aid for the identification of detected objects.
Focal plane arrays have allowed tremendous improvements in the robustness and compactness of thermal imagers, reducing both mechanical and optical requirements. However, they will always be limited by the pixel size, the fill factor, and the sampling theorem. Compared with older single-detector scanning systems, focal plane arrays can only reproduce half the frequencies that scanning systems do for a given instantaneous field of view. To overcome this limitation, microscanning seems to be a winning approach. Microscanning can be seen as an oversampling process. A series of images of the same scene are taken while each time displacing the image over the array by a fraction of the detector pitch. The oversampled image is built by interlacing all the pixels from all the images in both directions. It can be shown that microscanning can bring the resolution to the same level as that of a standard scanning system. Furthermore, by characterizing the process, one can compensate for it and bring the resolution to the level of a microdisplacement. This article describes work undertaken at the Defense Research Establishment Valcartier to evaluate the requirements of the microscanning process and to determine the gains that can be obtained by using the technique in a surveillance application.
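The interlacing step can be sketched as follows; the assignment of each sub-image to an interlace phase depends on the sign and order of the microdisplacements, so the index convention shown here is only illustrative.

```python
import numpy as np

def interlace_microscan(frames, n):
    """Build an oversampled image from an n x n microscan sequence.

    frames : array of shape (n*n, rows, cols); frames[i*n + j] is the image
             taken with the scene shifted by (i, j) fractions of the pitch.
    Returns an (n*rows, n*cols) image with the pixels of all frames interlaced.
    """
    _, rows, cols = frames.shape
    out = np.empty((n * rows, n * cols), dtype=frames.dtype)
    for i in range(n):
        for j in range(n):
            out[i::n, j::n] = frames[i * n + j]
    return out

# Example: a 2 x 2 microscan of a 240 x 320 imager yields a 480 x 640 image.
seq = np.random.rand(4, 240, 320)
hi_res = interlace_microscan(seq, 2)
```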
A unique real-time hybrid optical-digital image processing system has been developed to perform analysis of low-contrast visual and infrared images in Radon space. The system implements the forward Radon transform (a mathematical tomographic transform of image data from two-dimensional image space to one-dimensional Radon space) in a front-end optical processor, with a digital processing subsystem operating in Radon space instead of the more traditional image space. The system works by optically converting the two-dimensional image-space data into a series of one-dimensional projections. All further processing is performed digitally in Radon space on the one-dimensional projections. Applications of interest such as the objective minimum resolvable temperature difference measurement of thermal imagers and automatic pattern recognition are discussed. The paper also discusses real-time object-moment analysis in Radon space, which can be used for target identification under certain image distortions (size, rotation, translation, and contrast). Radon-space object moments can be calculated using significantly less image data and fewer digital operations than image-space moments.
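As a purely digital illustration of moment analysis in Radon space (the paper uses an optical front end to form the projections), the sketch below computes low-order projection moments with scikit-image's Radon transform; the synthetic test image is arbitrary.

```python
import numpy as np
from skimage.transform import radon

def projection_moments(image, angles_deg, order=2):
    """Object moments computed from 1-D projections (illustrative sketch).

    The p-th moment of the projection at angle theta equals the image-space
    moment of (x cos(theta) + y sin(theta))**p, so low-order object moments
    can be recovered from a handful of 1-D projections instead of the full
    2-D image.
    """
    sinogram = radon(image, theta=angles_deg, circle=False)
    s = np.arange(sinogram.shape[0]) - (sinogram.shape[0] - 1) / 2.0
    return np.array([[np.sum(proj * s**p) for p in range(order + 1)]
                     for proj in sinogram.T])

# Example: moments of a synthetic rectangular blob from three projections.
img = np.zeros((64, 64))
img[20:30, 35:45] = 1.0
moments = projection_moments(img, angles_deg=[0.0, 45.0, 90.0])
```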
When designing wide-field-of-view tactical or scientific infrared systems, the performance requirements often demand a large-aperture, diffraction-limited telescope. In such cases, it is often difficult to maintain a flat focal plane and to limit the image plane size to match an existing infrared focal plane array technology. In this paper the design of an optical system containing a coherent infrared fiber bundle is described. The advantages of reformatting a curved linear image plane to match a planar 2-D focal plane array are discussed. To validate the design concepts, a 12-channel fluoride fiber bundle was employed to relay the image of a Schmidt telescope to a remotely located detector array. A comparison of predicted and measured system performance is presented.
Software developed in the C language for quick field calibration of thermal imagers is presented. The software takes into account the atmospheric transmission between the target and the imager, and evaluates the corrected blackbody radiation temperature of any resolved area on the target at a known range. The software and the field calibration procedure are described. Some validation tests carried out during the NATO RSG-17 trial in Meppen, Germany, and more extensive tests performed at DREV after the trial, are also presented. The error budget in calibrating thermal imagers and in evaluating both extended- and point-target signatures from thermal imagery is analyzed and discussed.
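A minimal Python sketch of the kind of correction involved is shown below, assuming the simple model L_apparent = tau * L_target + L_path and a single broad band; the actual software's atmospheric model, band limits and error budget are not reproduced.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def band_radiance(temp_k, band_um=(8.0, 12.0), n=500):
    """In-band blackbody radiance (W / m^2 / sr) by a simple numerical sum."""
    lam = np.linspace(band_um[0], band_um[1], n) * 1e-6
    spectral = 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * K * temp_k))
    return float(np.sum(spectral) * (lam[1] - lam[0]))

def corrected_bb_temperature(apparent_temp_k, tau, path_radiance=0.0,
                             band_um=(8.0, 12.0)):
    """Correct an apparent radiance temperature for atmospheric transmission.

    Model: L_apparent = tau * L_target + L_path, so
           L_target = (L_apparent - L_path) / tau.
    The corrected temperature is found by bisection on a 150-1500 K bracket.
    """
    target = (band_radiance(apparent_temp_k, band_um) - path_radiance) / tau
    lo, hi = 150.0, 1500.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if band_radiance(mid, band_um) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: 295 K apparent temperature through 0.85 atmospheric transmittance.
print(corrected_bb_temperature(295.0, tau=0.85))
```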