Multi-sensor management for data fusion in target tracking concerns issues of sensor assignment and scheduling by
managing or coordinating the use of multiple sensor resources. A centralized sensor management technique has a
crucial limitation: failure of the central node brings down the whole system. A decentralized sensor management
(DSM) scheme is therefore increasingly important in modern multi-sensor systems. DSM is made feasible in modern
systems by increased bandwidth, wireless communication, and enhanced power. However, protocols for system
control are needed to manage device access. Because game theory offers learning models for distributed allocation of
surveillance resources and provides mechanisms to handle the uncertainty of the surveillance area, we propose an agent-based
negotiable game theoretic approach for decentralized sensor management (ANGADS). With the decentralized
sensor management scheme, sensor assignment occurs locally, and there is no central node and thus reduces the risk of
whole-system failure. Simulation results for a multi-sensor target-tracking scenario demonstrate the applicability of the
proposed approach.
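The negotiation idea behind a decentralized assignment can be sketched in a few lines. The bid function and the single-round, winner-takes-target protocol below are hypothetical simplifications for illustration; the paper's ANGADS scheme is agent-based and game-theoretic, but its actual protocol is not reproduced here.

```python
# Illustrative sketch only: a minimal auction-style negotiation for
# decentralized sensor-to-target assignment. The inverse-distance bid
# function is an assumption, not the paper's utility model.

def negotiate_assignment(sensor_positions, target_positions):
    """Each sensor agent bids for targets with a utility inversely
    proportional to distance; each target goes to the highest bidder."""
    bids = {}  # target index -> (best utility so far, sensor index)
    for s, (sx, sy) in enumerate(sensor_positions):
        for t, (tx, ty) in enumerate(target_positions):
            utility = 1.0 / (1.0 + ((sx - tx) ** 2 + (sy - ty) ** 2) ** 0.5)
            if t not in bids or utility > bids[t][0]:
                bids[t] = (utility, s)
    # In a true DSM system the bids would be exchanged peer-to-peer;
    # no central node is required to hold this table.
    return {t: s for t, (_, s) in bids.items()}

sensors = [(0.0, 0.0), (10.0, 0.0)]
targets = [(1.0, 1.0), (9.0, 1.0)]
print(negotiate_assignment(sensors, targets))  # {0: 0, 1: 1}
```

Each target ends up tracked by its nearest sensor, chosen locally from the exchanged bids rather than by a central scheduler.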
Automatic target detection (ATD) systems using imaging sensors have played a critical role in site monitoring,
surveillance, and object tracking. Although numerous systems have been designed to quickly detect and recognize
missile-like flying targets in cluttered environments, detecting flying targets at long range in large-format imagery
remains a challenge. The accuracy of target detection and recognition greatly affects the
performance of the target tracking system. In this paper, we propose a novel framework to detect missile-like
flying targets in a time-efficient manner. The framework is based on a coarse-to-fine strategy and consists of five
components executed in a sequential order: (1) A rapid clustering operation performs fast image segmentation; (2) based
on the segmentation results of three neighboring image frames, motion analysis identifies the regions of interest which
contain the flying targets; (3) a specially-designed double-threshholding operator precisely segments the moving targets
from the regions of interest; (4) a binary connectivity filter enhances the detected targets and removes the target noise;
and (5) a contour method analyzes the boundary of the detected targets for verification. To test the proposed approach,
a state-of-the-art 3D modeling and animation software tool was used to simulate target flight and attack. Experimental
results, obtained from the electro-optical (EO) images generated from the 3D simulations, illustrate a wide variety of
target and clutter variability, and demonstrate the effectiveness and robustness of the proposed approach.
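The double-thresholding step (3) can be sketched with hysteresis-style logic: pixels above a high threshold seed the target, and pixels above a low threshold are kept only if they connect to a seed. The threshold values and the use of `scipy.ndimage` below are illustrative assumptions, not the paper's exact operator.

```python
# Minimal sketch of a double-thresholding segmentation of a region of
# interest. Thresholds are hypothetical example values.
import numpy as np
from scipy import ndimage

def double_threshold(roi, low, high):
    strong = roi >= high           # confident target pixels
    weak = roi >= low              # candidate target pixels
    # Label connected weak regions; keep only those containing a strong seed.
    labels, _ = ndimage.label(weak)
    keep = np.unique(labels[strong])
    return np.isin(labels, keep[keep > 0])

roi = np.array([[10, 120, 130, 10],
                [10, 200, 140, 10],
                [10,  10,  10, 90]])
mask = double_threshold(roi, low=100, high=180)
print(mask.astype(int))
```

The isolated value 90 is dropped because it never touches a strong seed, while the weak pixels adjacent to the 200-valued seed survive.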
To address the challenges of non-cooperative long-distance human authentication, identification, and verification, we
propose an innovative scheme for developing a robust and automatic long-range biometric recognition system by
combining face recognition and iris recognition of non-cooperative individuals in 24/7 operations. The system consists
of three cameras. One is a wide-field-of-view (WFOV) CCD video camera with an infrared (IR) filter and powerful IR
illuminators for scanning people over a wide area and from a long distance. The other two are high-resolution video
cameras with a narrow field of view (NFOV) and IR filters and illuminators, mounted on a pan-tilt unit (PTU) to capture
frontal views of the face and iris, respectively. The WFOV camera detects the person, and the NFOV cameras extract
details for person identification. Once frontal shots are captured by the NFOV cameras, face and iris models are
extracted by applying state-of-the-art face/iris recognizers. In addition, a multimodality fusion approach integrates
the face and iris recognition results to improve the overall recognition performance.
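One common way to realize such multimodal fusion is at the score level, combining the two matchers' similarity scores before a decision. The weighted-sum rule and the weights below are assumptions for illustration; the paper does not specify its fusion rule here.

```python
# Hypothetical sketch of score-level face/iris fusion via a weighted sum.
# Weights are example values; both scores are assumed normalized to [0, 1],
# higher meaning a better match.

def fuse_scores(face_score, iris_score, w_face=0.4, w_iris=0.6):
    return w_face * face_score + w_iris * iris_score

# A subject with a strong iris match can still be accepted even if the
# long-range face match is weak.
fused = fuse_scores(face_score=0.55, iris_score=0.90)
print(round(fused, 2))  # 0.76
```

Weighting the iris score more heavily reflects the usual observation that iris matching, when a good capture is available, is the more discriminative modality.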
KEYWORDS: Sensors, Data integration, Data fusion, 3D image processing, Data acquisition, Computing systems, Ranging, Virtual colonoscopy, Image fusion, Matrices
A method of data fusion from a set of range images for 3-D object surface reconstruction is presented. The two major steps of data fusion, multiview registration and data integration, are discussed in detail. Firstly, the range images taken from multiple views are accurately registered through a set of translation and rotation matrices whose coefficients are calculated through the developed methodology. Then, three criteria for overlapping-data elimination are provided as the foundation of data integration. Compared with most other methods, which mesh all the views or compute an implicit surface function for the object before integrating the data, our integration method manipulates the surface data directly, thus providing a straightforward way to remove overlap. A surface-based smoothing filter and a resampling operation are also developed for data quality improvement and data size reduction. The approach is applied to various range data sets of objects with different geometric shapes. The experimental results demonstrate the efficiency and applicability of the proposed method.
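The registration step amounts to mapping each view's range points into a common frame with a rotation matrix R and translation vector t. How R and t are estimated is the paper's contribution and is not reproduced here; the sketch below, with illustrative values, only shows the transform being applied.

```python
# Sketch of applying a rigid registration transform to one view's points.
import numpy as np

def register_view(points, R, t):
    """points: (N, 3) array of range measurements from one view."""
    return points @ R.T + t

# Example: a 90-degree rotation about the z-axis plus a unit x-shift
# (hypothetical values standing in for the estimated coefficients).
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,             0,              1]])
t = np.array([1.0, 0.0, 0.0])
view = np.array([[1.0, 0.0, 0.0]])
print(register_view(view, R, t))  # [[1. 1. 0.]]
```

Once every view is expressed in the common frame, the overlap-elimination criteria can operate on the merged point set directly.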
Since retinal vessel detection and measurement play an important role in diagnosing cardiovascular diseases, much
effort has been devoted to them in recent years. In this paper, we propose an efficient method for vessel detection and
diameter measurement which incorporates edge detection, path searching, and matched filtering. Firstly, a Laplacian-of-
Gaussian filter and zero-crossing detection are applied to the retinal image to obtain an initial edge map. Then, for any
vessel of interest, its centerline and diameter can be accurately obtained by using region growing, thinning, path
searching, and matched filtering. The robustness and efficiency of the proposed method have been demonstrated by our
extensive experiments. Another advantage is that the method can process retinal images at real-time speed. The detection
and measurement results allow us to determine more parameters for further study of vascular diseases.
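The first stage, a Laplacian-of-Gaussian filter followed by zero-crossing detection, can be sketched as below. The sigma value and the sign-change test are illustrative choices, not the paper's tuned parameters.

```python
# Minimal sketch of an initial edge map from LoG filtering plus
# zero-crossing detection on a synthetic "vessel" image.
import numpy as np
from scipy import ndimage

def initial_edge_map(image, sigma=2.0):
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    # A zero crossing exists where the LoG response changes sign between
    # horizontally or vertically adjacent pixels.
    edges = np.zeros_like(log, dtype=bool)
    edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    return edges

# A dark vertical "vessel" on a bright background.
img = np.full((16, 16), 200.0)
img[:, 7:9] = 50.0
edges = initial_edge_map(img, sigma=1.0)
print(edges.any())  # True: zero crossings flank the vessel walls
```

The paired zero crossings on either side of the dark stripe are exactly the vessel-wall edges from which a diameter could later be measured.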
KEYWORDS: 3D modeling, 3D image processing, Data integration, Reconstruction algorithms, Data acquisition, Fluctuations and noise, Data processing, 3D acquisition, 3D vision, Machine vision
Three-dimensional (3D) object reconstruction from range images plays an important role in many research and application fields, including computer vision, reverse engineering, computer graphics, and CAD/CAM. Since data integration is a fundamental step in object reconstruction, a great deal of research effort has been devoted to it. In this paper, a novel integration algorithm is presented. Firstly, the input (registered) data, which contains overlapping points, is represented by a kd-tree structure. Then, three theorems are provided, together with nearest-neighbor searching, to identify and eliminate the overlapping data. The method manipulates the registered data directly without preprocessing and therefore provides an efficient and straightforward way to remove the redundant data. This differs from traditional methods, which need to mesh the input data or build an implicit surface function before integration. To reduce the data size and obtain a reasonable density distribution, a reliable resampling method called ball-travel-based resampling is also developed. The experimental results demonstrate the efficiency of the proposed algorithm.
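The kd-tree-based overlap elimination can be illustrated with a single distance-threshold criterion standing in for the paper's three theorems, which are not reproduced here. The radius value below is an assumption.

```python
# Illustrative sketch: index the registered points in a kd-tree and drop
# any point lying within a small radius of an already-kept point.
import numpy as np
from scipy.spatial import cKDTree

def remove_overlap(points, radius=0.05):
    tree = cKDTree(points)
    kept = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        if not kept[i]:
            continue
        # Discard later neighbors that duplicate point i.
        for j in tree.query_ball_point(points[i], r=radius):
            if j > i:
                kept[j] = False
    return points[kept]

# Two views that sampled nearly the same surface patch.
view_a = np.array([[0.00, 0.0, 0.0], [1.00, 0.0, 0.0]])
view_b = np.array([[0.01, 0.0, 0.0], [2.00, 0.0, 0.0]])
merged = remove_overlap(np.vstack([view_a, view_b]))
print(len(merged))  # 3: the near-duplicate point was eliminated
```

Because the kd-tree answers each radius query in roughly logarithmic time, the redundant data can be removed without meshing the input or fitting an implicit surface first.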
An algorithm for 3D surface reconstruction of large objects using a structured-light pattern ranging system is presented. Highly accurate industrial inspection applications have been constrained by the limited range resolution and accuracy of current ranging devices and techniques. To overcome the limited range resolution, the ranging sensor uses a small field of view and multiple views. The proposed algorithm fuses surface data patches from the views to construct a large object surface. The algorithm also increases the accuracy of the reconstructed object with efficient numerical analysis and pre-processing. Experimental results show that the algorithm and the current sensor setup can reconstruct an object for inspection applications with an accuracy of approximately 1 mil (25.4 μm).