Most greylevel threshold-selection algorithms find thresholds that are optimal according to specific regional or global statistics. These traditional approaches do not involve any model of the target except that it is expected to be separable from the background by a suitable threshold. Thus, they fail to make use of the most important feature of imaging target trackers: the well-known location of the target in each frame. We present a technique that uses knowledge of the target location to build up a temporally-smoothed greylevel distribution map, from which we extract two thresholds that separate from the background the greylevels with a high probability of belonging to the target under track.
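The idea in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's published algorithm: the exponential smoothing factor, the peak-relative cutoff, and the function names are all assumptions.

```python
import numpy as np

def update_distribution_map(dist_map, target_window, alpha=0.9, bins=256):
    """Fold one frame's target-gate histogram into the smoothed map.

    dist_map      : running greylevel probability map (length `bins`)
    target_window : pixels inside the known target location (gate)
    alpha         : assumed exponential smoothing factor (not from the paper)
    """
    # Histogram of greylevels inside the known target gate
    hist, _ = np.histogram(target_window, bins=bins, range=(0, bins))
    hist = hist / max(hist.sum(), 1)          # normalize to a probability
    # Temporal smoothing suppresses transient greylevel fluctuations
    return alpha * dist_map + (1.0 - alpha) * hist

def extract_thresholds(dist_map, cutoff=0.5):
    """Pick the two thresholds bounding the high-probability target greylevels.

    `cutoff` (fraction of the map's peak) is an assumed selection rule.
    """
    strong = np.flatnonzero(dist_map >= cutoff * dist_map.max())
    return int(strong.min()), int(strong.max())   # (lower, upper) threshold
```

Segmentation then reduces to `(frame >= lo) & (frame <= hi)` on each new frame.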
Reconfigurable computing using SRAM-based field programmable gate arrays (FPGAs) can achieve a significant computational performance advantage over conventional programmable processors. Since FPGAs can be customized, reconfigurable computers can provide optimal logic circuitry for distinct phases of an application, resulting in superior performance compared to generic multi-purpose hardware implementations. This performance improvement can be accomplished by reallocating logic resources to address the critical task at hand. Consequently, not only can reconfigurable processors provide higher performance than programmable processors; they also enable common module architectures useful for multiple applications or programs. In this paper, we describe a fielded, ruggedized, fully programmable, single-card, image-based tracking system using a reconfigurable computing module. The reconfigurable computing board contains multiple FPGAs, which can be customized on request by loading configuration data from the host processor to the module over the Peripheral Component Interconnect (PCI) bus. Configurations can be selectively loaded to a specific FPGA, or multiple configurations can be loaded simultaneously to different devices. This system provides multiple video tracking algorithms, automatic and manual target acquisition, RS-170 video input/output, and command/data I/O on a single 6U VME format card. While the initial application for this reconfigurable system was image-based target tracking, its hardware reconfigurability allows it to be applied to a wide variety of image and signal processing applications, such as automatic target recognition, IR search and track, and image enhancement.
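The host-side configuration flow described above can be sketched as follows. Everything here is hypothetical: the class, the `write_config` call, and the bitstream names are invented stand-ins, not the product's real driver API.

```python
class ReconfigurableBoard:
    """Stand-in for the PCI-attached multi-FPGA module (illustrative only)."""

    def __init__(self, num_fpgas=4):
        self.loaded = {i: None for i in range(num_fpgas)}

    def write_config(self, fpga_id, bitstream_name):
        # In hardware, this would stream configuration data over the PCI bus
        self.loaded[fpga_id] = bitstream_name

def configure(board, assignments):
    """Load a (possibly different) configuration into each listed FPGA."""
    for fpga_id, bitstream in assignments.items():
        board.write_config(fpga_id, bitstream)

# Example: different tracking algorithms on different devices at once
board = ReconfigurableBoard()
configure(board, {0: "correlation_tracker.bit", 1: "centroid_tracker.bit"})
```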
KEYWORDS: Video, Video processing, Signal processing, Sensors, Data conversion, Image processing, Video acceleration, Computer architecture, Human-machine interfaces, Computing systems
As the industry moves towards open architecture standards, current image processing systems need to exploit the high computing throughput of commercially available parallel processing architectures in order to implement increasingly complex algorithms and systems. Adapting the inputs from a variety of visible and IR sensors to the unique requirements of multi-processing systems is essential for development of high performance real-time image processing systems. This paper describes the architecture of the Hughes Video Input Card (VIC), which provides a programmable hardware interface between imaging sensors and a high-speed parallel processor interconnect bus. In addition to providing the sensor electrical interface, the VIC supports important video processing functions by pre-conditioning the video data through image windowing, floating-point data conversion, and pixel decimation. These programmable features, combined with the VIC's interface to the high-speed RACEway, far exceed the capabilities of any front panel data port input.
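The three pre-conditioning functions named in the abstract can be emulated in a few lines of software. This is a behavioral sketch of what the hardware front end does to a frame, not VIC firmware; the function name and window convention are assumptions.

```python
import numpy as np

def precondition(frame, window, decimate=2):
    """Software emulation of the VIC front-end pre-conditioning (illustrative).

    frame    : 2-D array of raw sensor pixels (e.g. uint8/uint16)
    window   : (row, col, height, width) region of interest
    decimate : keep every Nth pixel in both dimensions
    """
    r, c, h, w = window
    roi = frame[r:r + h, c:c + w]         # image windowing
    roi = roi[::decimate, ::decimate]     # pixel decimation
    return roi.astype(np.float32)         # fixed- to floating-point conversion
```

Windowing and decimation cut the data rate before the parallel processors ever see the frame, which is why doing them at the input card pays off.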
There are numerous types of real-time processing applications with diverse requirements. Even though application requirements vary significantly, they share many common elements. An integrated image processing software architecture applicable to multiple image processing applications is beneficial in reducing software costs, increasing information fusion, and rapidly prototyping demonstration systems. This paper describes an image processing architecture that provides a framework for integrating multiple image processing applications such as image-based target trackers, IR search and track (IRST), and automatic target cueing/recognition. It discusses the general image processing structure, describing common and unique elements of these different applications, including such issues as throughput, control and data flow, and latency requirements. We describe the integrated architecture, including data input, processing parallelization, image and data processing, information fusion, interfaces, and displays. We present examples of image-based target tracking, moving target indication, and IRST implemented using this software architecture on parallel processors. The integrated image processing framework has proven to be extremely beneficial for rapid development of image processing systems from the concept to the demonstration stage.
Loss of lock on background regions or target-like objects (clutter) is a major problem for imaging target trackers. Although gated trackers naturally resist clutter interference until the threat is within the track gate, they do not typically predict the impending perturbation. Also, classical trackers exercise a drastic response to the eventual detection of clutter within the track gate: they coast, totally ignoring current position measurements while propagating an old target rate. If the target accelerates during coast, loss of lock is very likely. We present an algorithm for detecting and tracking clutter objects in the tracked scene and modifying the loop gains in the primary target tracker. This method prevents breaklock due to clutter interference while remaining responsive to target maneuvers.
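The gain-modification idea can be sketched as a smooth de-weighting of measurements as the nearest tracked clutter object closes on the gate, instead of the hard switch into coast. The specific proximity function and gain scaling below are assumptions for illustration, not the paper's algorithm.

```python
def track_loop_gains(clutter_distance, gate_radius, k_pos=1.0, k_rate=1.0):
    """Scale tracker loop gains as a tracked clutter object nears the gate.

    clutter_distance : distance from gate center to nearest clutter track
    gate_radius      : track gate radius in the same units
    k_pos, k_rate    : nominal position/rate loop gains (assumed names)
    """
    # proximity in [0, 1]: ~0 when clutter is far, 1 at or inside the gate
    proximity = max(0.0, min(1.0, gate_radius / max(clutter_distance, 1e-6)))
    # Gains roll off smoothly, so the tracker leans on its propagated state
    # near clutter rather than abruptly ignoring all measurements
    scale = 1.0 - proximity
    return k_pos * scale, k_rate * scale
```

With clutter far from the gate the tracker runs at full gain and stays responsive to target maneuvers; as clutter arrives, the gains approach zero and the filter effectively coasts without a discrete mode change.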
A novel wavelet transform based clutter measurement technique correlates remarkably well with imaging tracker performance. To determine the robustness of automated tracking algorithms, we need a quantitative measure of ground clutter complexity. Current tracking and detection system performance is often specified against signal-to-noise ratio, which is sensor related and provides no information regarding the scene content. A signal-to-clutter ratio (SCR) can specify the degree to which the target signal is discernible from the background. In the context of electro-optical imaging systems, clutter is defined as objects or scene phenomena that interfere with target detection and tracking. Scene clutter is endemic to imaging systems, yet limited useful work has been performed to measure it. Clutter complexity is usually determined subjectively, and in reference to the object of interest. We devised a clutter measure that is quantitative and also independent of the object of interest. Based on this measure, we also developed an SCR, which is used to analyze detection and tracking performance, and allows prediction of target tracking performance in given clutter levels for particular target signatures. We present results of imaging target tracking performance with respect to target signatures for a given target in various clutter levels.
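A minimal stand-in for a wavelet-based clutter measure uses the detail-band energy of a one-level Haar transform: busier backgrounds put more energy into the detail subbands. The transform choice, the energy measure, and the SCR definition below are all assumptions for illustration; they are not the paper's published metric.

```python
import numpy as np

def haar_detail_energy(img):
    """One-level 2-D Haar transform; return total energy in the detail bands.

    Assumes even image dimensions. Edge/texture structure raises this value,
    which is the sense in which it serves as a clutter proxy here.
    """
    img = img.astype(np.float64)
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail band
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail band
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail band
    return float((lh**2 + hl**2 + hh**2).sum())

def signal_to_clutter(target_patch, background):
    """Illustrative SCR: squared target/background contrast over the
    per-pixel wavelet detail energy of the background."""
    contrast = (target_patch.mean() - background.mean()) ** 2
    clutter = haar_detail_energy(background) / background.size + 1e-9
    return contrast / clutter
```

Unlike a signal-to-noise ratio, this quantity drops as scene texture increases even when the sensor noise is unchanged, which is the behavior the abstract attributes to its SCR.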