Recently the authors developed a new filter that uses data generated by asynchronous sensors to produce a state estimate that is optimal in the minimum mean-square-error sense. The solution accounts for communication delays between the sensor platforms and the fusion center. It also deals with out-of-sequence data as well as latent data by processing the information in a batch-like manner. This paper compares, using simulated targets and Monte Carlo simulations, the performance of the filter to the optimal sequential processing approach. It was found that the performance of the new asynchronous multisensor track fusion filter (AMSTFF) is identical to that of the sequential extended Kalman filter (SEKF), while the new filter updates its track at a much lower rate than the SEKF.
Approaches to nonlinear state estimation have recently been advanced to include more accurate and stable alternatives. The Extended Kalman Filter (EKF), the first and most widely used approach (applied as early as the late 1960s and developed into the early 1980s), uses potentially unstable derivative-based linearization of nonlinear process and/or measurement dynamics. The Unscented Kalman Filter (UKF), developed after around 1994, approximates a distribution about the mean using a set of calculated sigma points. The Central Difference Filter (CDF), or Divided Difference Filter (DDF), developed after around 1997, uses divided-difference approximations of derivatives based on Stirling's interpolation formula; it yields a similar mean but a different covariance from the EKF, using techniques based on principles similar to those of the UKF. This paper compares the performance of the three approaches above on the problem of ballistic missile tracking under various sensor configurations, target dynamics, measurement update/sensor communication rates, and measurement noise levels. The importance of filter stability is emphasized in some cases: the EKF shows possible divergence due to linearization errors and an overconfident state covariance, while the UKF shows possibly slow convergence due to overly large state covariance approximations. The CDF demonstrates relatively consistent stability, despite its similarities to the UKF. The requirement that the UKF expected state covariance be positive definite is demonstrated to be unrealistic in a case involving multisensor fusion, indicating the necessity for its reportedly more robust and efficient square-root implementation. Strategies for taking advantage of the strengths (and avoiding the weaknesses) of each filter are proposed.
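To make the sigma-point idea concrete, here is a minimal sketch of the unscented transform at the heart of the UKF; the parameterization, the polar-to-Cartesian example, and all numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def unscented_transform(m, P, f, kappa=1.0):
    """Propagate mean m and covariance P through a nonlinearity f using the
    classic 2n+1 sigma-point set of Julier and Uhlmann."""
    n = len(m)
    S = np.linalg.cholesky((n + kappa) * P)      # matrix square root
    sigma = np.vstack([m, m + S.T, m - S.T])     # 2n+1 sigma points
    W = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    W[0] = kappa / (n + kappa)
    Y = np.array([f(x) for x in sigma])          # push each point through f
    mean = W @ Y
    diff = Y - mean
    cov = (W[:, None] * diff).T @ diff
    return mean, cov

# Example: polar-to-Cartesian conversion, a classic tracking nonlinearity.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m = np.array([10.0, np.pi / 4])                  # range (m), bearing (rad)
P = np.diag([0.5 ** 2, np.radians(5.0) ** 2])
mean, cov = unscented_transform(m, P, f)
print(mean)
print(cov)
```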
The performance of tracking systems depends on numerous factors, including the scenario, operating conditions, and choice of tracker algorithms. For tracker system design, mission planning, and sensor resource management, the availability of a tracker performance model (TPM) for the standard measures of performance (MOPs) would be of high practical value. Ideally, the TPM has high computational efficiency and is insensitive to the particular low-level details of highly complex algorithms and to unimportant operating conditions. These characteristics would eliminate the need for high-fidelity Monte Carlo simulations that are expensive and time consuming. In this paper, we describe a performance prediction model that generates track life distributions and other MOPs. The model employs a simplified Monte Carlo simulation that accounts for sensor orbits, sensor coverage, and target dynamics. A key feature is an analytical expression that approximates the probability of correct association (PCA) among reports and tracks. The expression for the PCA that we use was developed by Mori et al. for simplified scenarios where there is a single class of targets, the noise is Gaussian, and the covariance matrices are identical for all targets. Based on heuristic considerations, we extend this result to the case of road-constrained tracking where both on-road and off-road targets are present. We investigate the validity of the proposed expression by means of Monte Carlo simulations, and present preliminary results of a validation study that compares the performance of an actual tracker with the performance predictions of our model.
Research in multitarget tracking has mainly focused on the development and implementation of efficient data association algorithms with acceptable performance. The Viterbi Data Association (VDA) algorithm has proven to have low computational cost and is hence a good candidate for extension to the multiple-target tracking case. In this paper, the VDA is implemented for tracking both single and multiple maneuvering targets in clutter. Track initiation and adaptive sliding-window techniques are used so that the VDA can maintain a lock on the track. The performance of the algorithm is assessed via Monte Carlo simulations. The computational complexity analysis reveals that the VDA is computationally more efficient than known tracking techniques.
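As an illustration of the trellis idea behind Viterbi-style data association, the sketch below runs dynamic programming over candidate measurements per scan; the squared-jump cost is an assumed stand-in for a negative log-likelihood, and the paper's track initiation and adaptive sliding-window logic are not reproduced.

```python
import numpy as np

def viterbi_associate(scans):
    """scans: list of (k_i, d) arrays of candidate measurements per scan.
    Returns the minimum-cost measurement index for each scan."""
    prev_cost = np.zeros(len(scans[0]))
    back = []
    for prev, cur in zip(scans, scans[1:]):
        # step[j, i]: cost of jumping from measurement i (last scan) to j (this scan)
        step = np.linalg.norm(cur[None, :, :] - prev[:, None, :], axis=2).T ** 2
        total = step + prev_cost[None, :]
        back.append(total.argmin(axis=1))   # best predecessor for each node
        prev_cost = total.min(axis=1)
    # Backtrack the cheapest path through the trellis.
    path = [int(prev_cost.argmin())]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
truth = np.cumsum(np.ones((5, 2)), axis=0)   # target moving diagonally
scans = [np.vstack([t, rng.uniform(0, 10, (3, 2))]) for t in truth]  # plus clutter
print(viterbi_associate(scans))  # index 0 (the true target) should dominate
```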
Research conducted in the last decade in missile design has mainly focused on guidance and control. Many researchers have designed high-performance interceptors using approaches ranging from classical control to knowledge-based techniques. A homing guided missile flight simulation testbed has been developed and tested against different control systems. The missile aerodynamic model has been simulated based on NASA reports, and the output aerodynamic coefficients have been compared against and justified by wind tunnel tests as well. The other missile modules have been simulated and compared to the real missile modules in terms of input/output experimental results. The guidance and control system has yielded excellent performance against incoming and outgoing maneuvering targets falling within the missile's destruction zone. However, all the test scenarios assumed that the target information from the missile seeker (tracker) is exact and obtained from the observations without any major difficulty. In the case of high-density clutter and false alarms, as well as low signal-to-noise ratio (SNR), which may be due to the presence of flares, decoys, or other countermeasures, the tracker accuracy plays an important role in the overall engagement scenario. In this paper, a fuzzy logic-based technique has been employed to improve the performance of the missile seeker in high-density clutter and at low SNR. The Interacting Multiple Model Fuzzy Data Association (IMM-FDA) has been employed to improve the missile-target intercept accuracy.
In this paper, a solution to the TENET nonlinear filtering challenge is presented. The proposed approach is based on particle filtering techniques. Particle methods have already been used in this context, but our method improves over previous work in several ways: a better importance sampling distribution, variance reduction through Rao-Blackwellisation, etc. We demonstrate the efficiency of our algorithm through simulation.
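The authors' improved importance distribution and Rao-Blackwellisation are not shown here; for orientation, this is a minimal bootstrap (SIR) particle filter, the baseline that such refinements improve upon, on an assumed scalar benchmark model.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1000
x = rng.normal(0.0, 1.0, N)            # initial particle cloud
w = np.full(N, 1.0 / N)

def propagate(x):                       # assumed nonlinear dynamics
    return 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0, 1.0, x.shape)

def likelihood(y, x):                   # assumed measurement model y = x^2/20 + v
    return np.exp(-0.5 * (y - x**2 / 20) ** 2)

true_x, estimates = 0.1, []
for t in range(50):
    true_x = 0.5 * true_x + 25 * true_x / (1 + true_x**2) + rng.normal()
    y = true_x**2 / 20 + rng.normal()
    x = propagate(x)                    # sample from the transition prior
    w *= likelihood(y, x) + 1e-300      # weight; guard against total collapse
    w /= w.sum()
    estimates.append(w @ x)             # posterior-mean estimate
    idx = rng.choice(N, N, p=w)         # multinomial resampling
    x, w = x[idx], np.full(N, 1.0 / N)
print(estimates[-5:])
```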
The Kalman filter, which is optimal with respect to Gaussian-distributed noisy measurements, is commonly used in the Multiple Hypothesis Tracker (MHT) for state update and prediction. It has been shown that when filtering noisy measurements distributed with asymptotic power-law tails, the Kalman filter underestimates the state error when the tail exponent is less than two and overestimates it when the tail exponent is greater than two. This has severe implications for tracking with the MHT, which uses the estimated state error for both gating and probability calculations. This paper investigates the effects of different tail exponent values on the processes of track deletion and creation in the MHT.
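A small Monte Carlo sketch of the premise: a Kalman filter tuned for unit-variance Gaussian noise is fed Student-t measurement noise instead (heavier tails for fewer degrees of freedom). All model parameters here are assumed; the point is only that the filter's reported variance and the empirical squared error diverge as the tails grow heavier.

```python
import numpy as np

rng = np.random.default_rng(2)

def run(dof, runs=2000, steps=50):
    err2 = []
    for _ in range(runs):
        x, xhat, P = 0.0, 0.0, 1.0
        for _ in range(steps):
            x = x + rng.normal(0, 0.1)        # random-walk truth
            z = x + rng.standard_t(dof)       # heavy-tailed measurement
            P += 0.1 ** 2                     # predict
            K = P / (P + 1.0)                 # gain assumes unit Gaussian noise
            xhat += K * (z - xhat)            # update
            P *= (1 - K)
        err2.append((x - xhat) ** 2)
    return np.mean(err2), P                   # empirical vs reported variance

for dof in (1.5, 3, 30):
    emp, rep = run(dof)
    print(f"dof={dof}: empirical MSE={emp:.3f}, filter variance={rep:.3f}")
```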
In this paper, the Variable Structure Multiple Model (VSMM) approach to the maneuvering target tracking problem is considered. A new VSMM design, the Minimal Sub-Model-Set Switching (MSMSS) algorithm for tracking a maneuvering target, is presented. In this algorithm:
-- a core model is used to represent the most likely true system mode;
-- edge models are used to represent other possible true system modes based on their connectivity with the core model;
-- a sub-model-set is defined by the core and edge models and their transition probabilities, as determined from the model transition probability matrix for the full model set.
The MSMSS algorithm adaptively determines the minimal sub-model-set from the total model set and uses this to perform Interacting Multiple Model (IMM) estimation. In addition, an iterative MSMSS algorithm with improved maneuver detection and termination properties is developed. Simulation results demonstrate that, compared to a standard IMM, the proposed algorithms require significantly less computation while maintaining similar tracking performance. Alternatively, for a computational load similar to that of the IMM, the new algorithms display significantly improved performance.
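A minimal sketch of the sub-model-set construction described above; the transition matrix, the connectivity threshold, and the renormalization step are assumed for illustration, and the surrounding IMM machinery is omitted.

```python
import numpy as np

TPM = np.array([                # full-set model transition probability matrix
    [0.90, 0.05, 0.05, 0.00],
    [0.05, 0.90, 0.00, 0.05],
    [0.05, 0.00, 0.90, 0.05],
    [0.00, 0.05, 0.05, 0.90],
])

def minimal_sub_model_set(mode_probs, tpm, eps=0.01):
    core = int(np.argmax(mode_probs))              # most likely true mode
    edges = [j for j in range(tpm.shape[0])
             if j != core and tpm[core, j] > eps]  # connected neighbours
    sub = [core] + edges
    # Restrict the TPM to the sub-model-set and renormalize its rows.
    sub_tpm = tpm[np.ix_(sub, sub)]
    sub_tpm = sub_tpm / sub_tpm.sum(axis=1, keepdims=True)
    return sub, sub_tpm

print(minimal_sub_model_set(np.array([0.7, 0.1, 0.15, 0.05]), TPM))
```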
Variable Structure Multiple Model (VSMM) estimation generalizes Multiple Model (MM) estimation by assuming that the set of models used for MM estimation is time-varying. By using VSMM estimation, a large model set that may cover all possible target maneuvers can be used without a significant increase in computational load, while maintaining reasonable estimation accuracy. Various implementable VSMM algorithms, such as Model Group Switching (MGS), Likely Model Set (LMS), and Minimal Sub-Model-Set Switching (MSMSS), which use the Interacting Multiple Model (IMM) algorithm with a sub-model-set adaptation logic, have appeared in the recent literature. However, the use of these algorithms for tracking maneuvering targets in clutter has not been explored. In the presence of clutter, one needs a data association technique to differentiate target-originated measurements from clutter. The probabilistic data association (PDA) technique has been widely adopted in algorithms for tracking in clutter. In this paper, we integrate the PDA technique with MSMSS and propose a VSIMM-PDA algorithm for maneuvering target tracking in clutter. A new gating technique that accounts for potential model errors is used. A numerical example based on multiple Monte Carlo runs, which compares the performance of the new algorithm to a standard IMM-PDA in terms of root mean squared (RMS) error and percentage of track loss, is presented.
A fuzzy logic algorithm has been developed that automatically allocates electronic attack (EA) resources distributed over different platforms in real time. The controller must be able to make decisions based on rules provided by experts, and the fuzzy logic approach allows the direct incorporation of this expertise. Genetic-algorithm-based optimization is conducted to determine the form of the membership functions for the fuzzy root concepts. The resource manager is made up of five parts: the isolated platform model, the multi-platform model, the fuzzy EA model, the fuzzy parameter selection tree, and the fuzzy strategy tree. Automatic determination of fuzzy decision tree topology using a genetic program, an algorithm that uses the theory of evolution to create other algorithms, is discussed. A tree originally obtained from expertise is compared to a tree evolved using a genetic program; the tree created by the genetic program is superior for some applications to one constructed solely from expertise. The concept of self-morphing trees, i.e., decision trees that can change their own computational complexity in real time, is introduced. The strategy tree concept and how various fuzzy concepts overlap in phase space to create a more robust resource manager are considered. Finally, methods of validating the algorithm are discussed.
This paper investigates how the targeting capability of a distributed data fusion system can be improved through the use of intelligent sensor management. The research reported here builds upon previous results from QinetiQ's air-to-ground fusion programme and sensor management research. QinetiQ's previously reported software test-bed for developing and evaluating data fusion algorithms has been enhanced to include intelligent sensor management functions and weapon fly-out models. In this paper, details of the enhancements are provided together with a review of the sensor management algorithms employed. These include flight path optimization of airborne sensors to minimize target state estimation error, sensor activation control, and sightline management of individual sensors for optimal targeting performance. Initial results from investigative studies are presented and conclusions are drawn.
“Robust identification” in SAR ATR refers to the problem of determining target identity despite the confounding effects of “extended operating conditions” (EOCs). EOCs are statistically uncharacterizable SAR intensity-signature variations caused by mud, dents, turret articulations, etc. This paper describes a robust ATR approach based on the idea of (1) hedging against EOCs by attaching “random error bars” (random intervals) to each value of the image likelihood function; (2) constructing a “generalized likelihood function” from them; and (3) using a set-valued, MLE-like approach to robustly estimate target type. We compare three such classifiers, showing that they outperform conventional approaches under EOC conditions.
The Deputy Under Secretary of Defense for Science and Technology (DUSD/S&T), as part of their ongoing ATR Program, has sponsored an effort to develop and demonstrate methods for evaluating ATR algorithms that utilize multiple data sources, i.e., fusion-based ATR. The AFRL COMPASE Center has formed a strong ATR evaluation team, and this paper presents results from this program, focusing on the human-in-the-loop, i.e., assisted image exploitation. Reliance on Automated Target Recognition (ATR) technology is essential to the future success of Intelligence, Surveillance, and Reconnaissance (ISR) missions. Often, ATR technology is designed to aid the analyst, but the final decision rests with the human. Traditionally, evaluation of ATR systems has focused mainly on the performance of the algorithm. Assessing the benefits of ATR assistance for the user raises interesting methodological challenges. We will review the critical issues associated with evaluations of human-in-the-loop ATR systems and present a methodology for conducting these evaluations. Experimental design issues addressed in this discussion include training, learning effects, and human factors issues. The evaluation process becomes increasingly complex when data fusion is introduced. Even in the absence of ATR assistance, the simultaneous exploitation of multiple frames of co-registered imagery is not well understood. We will explore how the methodology developed for exploitation of a single source of data can be extended to the fusion setting.
High-resolution sidescan sonars are often used in underwater warfare for large-area surveys of the seafloor in the search for sea mines. Much effort has gone toward the automatic detection of sea mines. In its more advanced forms, such auto-detection entails pattern recognition: the automatic assignment of class labels (target/non-target) to signatures according to their distinctive features. This paper demonstrates a texture-based feature for automatically discriminating between man-made and natural objects. Real sonar data is used, and the demonstration includes performance estimates in the form of the receiver operating characteristic (ROC) curves necessary (though often omitted) for evaluating detectors for operational use. The merits of redefining the allowable automatic responses, from the classes of mine targets ultimately sought to the class of man-made objects more generally, are reviewed from both the pattern-recognition and operational perspectives.
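For reference, a ROC curve of the kind described is built by sweeping a threshold over detector scores; the Gaussian scores below are synthetic stand-ins for the paper's texture feature evaluated on man-made versus natural objects.

```python
import numpy as np

rng = np.random.default_rng(3)

target_scores = rng.normal(1.5, 1.0, 500)       # man-made objects
clutter_scores = rng.normal(0.0, 1.0, 5000)     # natural objects

# Sweep every observed score as a threshold, from high to low.
thresholds = np.sort(np.concatenate([target_scores, clutter_scores]))[::-1]
pd = np.array([(target_scores >= t).mean() for t in thresholds])
pfa = np.array([(clutter_scores >= t).mean() for t in thresholds])
auc = np.sum(np.diff(pfa) * (pd[1:] + pd[:-1]) / 2)   # trapezoidal area under curve
i = np.searchsorted(pfa, 0.1)
print(f"operating point at Pfa=0.1: Pd={pd[i]:.2f}, AUC={auc:.3f}")
```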
The ability to rapidly detect and identify potential targets, both fixed and mobile, from multiple sensor feeds is a critical function in network-centric warfare. In this paper we describe the use of Image Differencing and 3D terrain database editing to fuse oblique aerial photos, IR sensor imagery, and other non-traditional data sources to produce battlefield metrics that support network-centric operations. Such metrics include target detection, recognition, and location, and improved knowledge of the target environment. Key to our approach is the rapid generation of target and background signatures from high-resolution 1-meter object descriptor terrain databases. This technique utilizes the difference between measured and calculated sensor images to 1) update and correct knowledge of the terrain background, 2) register multisensor imagery, 3) identify potential/candidate targets based on residual image differencing, and 4) measure and report target locations based on scene matching. The technique is especially suited for utilizing imagery from reconnaissance and remotely piloted vehicle sensors. It also holds promise for automation and real-time data reduction of battlefield sensor feeds and for improving now-time situational awareness. We will present the algorithms and approach utilized in the Image Differencing technique. We will also describe the software developed to implement the approach. Lastly, we will present the results of experiments and benchmarks conducted to identify and measure target locations in test locations at Ft. Hood, TX and Ft. Hunter Liggett, CA.
Here is what we have done in this study:
1). Our previous results on spatio-temporal fusion for target classification (SPIE AeroSense, Vol. 4731, pp. 204-215, April 2002) have been further developed for target detection.
2). Different temporal integration (fusion) strategies have been developed and compared, including pre-detection integration (such as additive, multiplicative, MAX, and MIN fusions; see the sketch after this list), as well as the traditional post-detection integration (the persistency test).
3). In our second study, the temporal correlation and non-stationary properties of sensor noise have been investigated using sequences of imagery collected by an IR (256x256) sensor looking at different scenes (trees, grass, roads, buildings, etc.).
4). The natural noise extracted from the IR sensor, as well as computer-generated noise with Gaussian and Rayleigh distributions, has been used to test and compare the different temporal integration strategies.
Some preliminary results are summarized here:
1). Both the pre- and post-detection temporal integrations can considerably improve target detection by integrating only 3-5 time frames (tested with real sensor noise as well as computer-generated noise).
2). The detection results can be further improved by combining both the pre- and post-detection temporal integrations.
3). The sensor noise at most (>95%) of the sensor pixels is nearly stationary, uncorrelated between pixels, and (almost) uncorrelated across time frames under good weather conditions.
4). The noise at a few pixels near some surface edges has shown non-stationary properties (with increasing or decreasing mean across time).
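A toy sketch of the four pre-detection fusion rules named in item 2), applied pixelwise to a short stack of frames before thresholding; frame statistics, target strength, and the threshold are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

frames = rng.normal(0.0, 1.0, (5, 64, 64))     # 5 noisy frames
frames[:, 32, 32] += 1.5                        # weak target at one pixel

fused = {
    "additive":       frames.sum(axis=0),
    # absolute values keep the toy product rule sign-free; real systems
    # would fuse likelihoods rather than raw pixel values
    "multiplicative": np.prod(np.abs(frames), axis=0),
    "MAX":            frames.max(axis=0),
    "MIN":            frames.min(axis=0),
}
for name, img in fused.items():
    detections = img > np.percentile(img, 99.9)  # simple global threshold
    print(f"{name:14s} target hit: {bool(detections[32, 32])}, "
          f"total detections: {int(detections.sum())}")
```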
A digital Fast Pattern Processor (DFPP) system under development for the Naval Air Warfare Center is funded under an SBIR Phase III contract. It is an automatic target recognizer and tracker candidate for supersonic missile guidance and unmanned air vehicle (UAV) reconnaissance to meet the U.S. Navy's time-critical strike objectives. The former application requires rapid processing of moderate-size, real-time image arrays, versus large real-time image arrays for the latter case. The DFPP correlates operator-selected target filters against observed imagery at 1500 correlations per second as currently implemented with programmable logic devices (PLDs), equivalent to thirty Pentium III (1 GHz) PCs. The high performance and low weight, power, size, and cost of the current version make it ideal for on-board image data processing in UAVs and cruise missiles or for ground station processing. Conversion to application-specific integrated circuit (ASIC) technology provides scalable performance to meet future ATR/ATT needs. The Sanders proprietary DFPP technology embodies a Power-FFT, which is the fastest digital fast Fourier transform (DFFT) in the world, with performance exceeding supercomputers at a small fraction of the cost, size, weight, and power. The DFPP operates under control of the Sanders Correlation Image Processor (SCIP) program and enables correlation against a plethora of stored target filters (templates).
The classification of aircraft into types is an important aspect of the problem of air picture compilation and is required if good situation awareness is to be maintained. If this can be achieved when the aircraft are at long range (significantly beyond visual range) then these processes may be significantly enhanced.
This paper examines methods for exploiting high-resolution radar range profiles of aircraft using statistical pattern recognition techniques to produce classifications into types. The paper describes the data available and covers pre-processing steps and the development of a range of classifiers of increasing complexity. The classifiers applied in the target recognition process include simple parametric and non-parametric methods based on single range profile samples, approaches that fuse classifications from a temporal sequence of measurements, and methods based on sub-classing. The latter technique uses multiclassifier system methods that cope well with small training set sizes. As the assumptions in the model, and the complexity of the classifiers, increase, so does the performance of the target recognition system, with error rates as low as 6% being achieved for a problem with three aircraft types. One issue with the available experimental data is that only a limited number of samples of each aircraft type are available. Care is taken to ensure the results produced using this limited data are achievable in an equivalent real-world application.
Biologically-based computer vision systems are now available that achieve robust image interpretation and automatic target recognition (ATR) performance. We describe two such systems and the reasons behind their robust performance. We also report results of three studies that demonstrate this robustness.
Multisensor Fusion Methodologies and Applications I
Multisensor-multitarget sensor management is at root a problem in nonlinear control theory. This paper develops a potentially computationally tractable approximation of an earlier (1996) Bayesian control-theoretic foundation for sensor management based on “finite-set statistics” (FISST) and the Bayes recursive filter for the entire multisensor-multitarget system. I analyze possible Bayesian control-theoretic objective functions: Csiszar information-theoretic functionals (which generalize Kullback-Leibler discrimination) and “geometric” functionals. I show that some of these objective functions lead to potentially tractable sensor management algorithms when used in conjunction with MHC (multi-hypothesis correlator)-like algorithms. I also take this opportunity to comment on recent misrepresentations of FISST involving so-called “joint multitarget probabilities” (JMP).
In this paper we consider the problem of autonomously improving upon a sensor management algorithm for better tracking performance. Since various performance metrics have been proposed and studied for monitoring a tracking system's behavior, the problem is solvable by first parameterizing a sensor management algorithm and then searching the parameter space for a (sub-)optimal solution. Genetic Algorithms (GAs) are ideally suited for this optimization task. In our GA approach, the sensor management algorithm is driven by "rules" that have a "condition" part to specify track locations and uncertainties, and an "action" part to specify where the fields of view (FoVs) of the sensors should be directed. Initial simulation studies using a Multi-Hypothesis Tracker and the Kullback-Leibler metric (as a basis for the GA fitness function) are presented. They indicate that the proposed method is feasible and promising.
A hybrid weighted interacting particle filter, the selectively resampling particle filter (SERP), is used to detect and track multiple ships maneuvering in a region of water. The ship trajectories exhibit nonlinear dynamics and interact in a nonlinear manner such that the ships do not collide. There is no prior knowledge of the number of ships in the region. The observations model a sensor tracking the ships from above the region, as in a low observable SAR or infrared problem. The SERP filter simulates particles to provide the approximated conditional distribution of the signal in the signal domain at a particular time, given the sequence of observations. After each observation, the hybrid filter uses selective resampling to move some particles with low weights to locations that have a higher likelihood of being correct, without resampling all particles or creating bias. Such a method is both easy to implement and highly computationally efficient. Quantitative results recording the capacity of the filter to determine the number of ships in the region and the location of each ship are presented. The hybrid filter is compared against an earlier particle filtering method.
We propose a particle filtering algorithm for tracking multiple ground targets in a road-constrained environment through the use of GMTI radar measurements. Particle filters approximate the probability density function (PDF) of a target's state by a set of discrete points in the state space. The particle filter implements the step of propagating the target dynamics by simulating them. Thus, the dynamic model is not limited to a linear model with Gaussian noise, and the state space is not limited to linear vector spaces. Indeed, the road network is a subset (not even a vector space) of R^2. Constraining the target to lie on the road leads to ad hoc approaches for the standard Kalman filter. However, since the particle filter simulates the dynamics, it is able to simply sample points in the road network. Furthermore, while the target dynamics are modeled with a parasitic acceleration, a non-Gaussian discrete random variable noise process is used to simulate the target going through an intersection and choosing the next segment in the road network on which to travel. The algorithm is implemented in the SLAMEM simulation (an extensive simulation which models roads, terrain, sensors, and vehicles using GVS). Tracking results from the simulation are presented.
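A minimal sketch of the road-constrained propagation step described above: a particle's state is (segment, arc length, speed), and crossing an intersection draws the next segment from a discrete distribution. The tiny network, transition probabilities, and noise levels below are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

segments = {0: 100.0, 1: 80.0, 2: 120.0}        # segment lengths (m)
next_seg = {0: ([1, 2], [0.7, 0.3]),            # intersection at end of segment 0
            1: ([2], [1.0]),
            2: ([0], [1.0])}

def propagate(seg, s, v, dt=1.0):
    v += rng.normal(0, 0.5)                     # small speed perturbation
    s += v * dt
    while s > segments[seg]:                    # crossed an intersection
        s -= segments[seg]
        choices, probs = next_seg[seg]
        seg = int(rng.choice(choices, p=probs)) # non-Gaussian discrete choice
    return seg, s, v

particles = [(0, rng.uniform(0, 100), 15.0) for _ in range(5)]
for _ in range(10):
    particles = [propagate(*p) for p in particles]
print(particles)
```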
We describe a new hybrid particle filter that has two novel features: (1) it uses quasi-Monte Carlo samples rather than conventional Monte Carlo sampling, and (2) it implements Bayes' rule exactly using smooth densities from the exponential family. Theory and numerical experiments over the last decade have shown that quasi-Monte Carlo sampling is vastly superior to Monte Carlo sampling for certain high-dimensional integrals, and we exploit this fact to reduce the computational complexity of our new particle filter. The main problem with conventional particle filters is the curse of dimensionality. We mitigate this issue by avoiding particle depletion, which we achieve by implementing Bayes' rule exactly using smooth densities from the exponential family.
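To illustrate why quasi-Monte Carlo points help, the sketch below compares a Halton low-discrepancy sequence with pseudo-random sampling on an assumed 2-D integrand; the paper's exponential-family Bayes update is not reproduced.

```python
import numpy as np

def halton(n, base):
    """First n points of the 1-D Halton sequence for a given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

n = 4096
qmc = np.column_stack([halton(n, 2), halton(n, 3)])   # low-discrepancy points
mc = np.random.default_rng(6).uniform(size=(n, 2))    # pseudo-random points

f = lambda u: np.exp(-np.sum((u - 0.5) ** 2, axis=1) / 0.02)  # peaked integrand
exact = 0.02 * np.pi   # integral of the Gaussian bump, well inside the unit square
print("QMC estimate:", f(qmc).mean(),
      " MC estimate:", f(mc).mean(),
      " true ~", round(exact, 4))
```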
We report here on an application of a particle systems implementation of the probability hypothesis density (PHD). The PHD of the multitarget posterior density has the property that, given any volume of state space, the integral of the PHD over that volume yields the expected number of targets present in the volume. The application we consider is the joint tracking and identification of multiple aircraft, with the observations consisting of noisy position measurements and high range resolution radar (HRRR) signatures. We also take into consideration the presence of clutter and a probability of detection less than unity. Experimental results are presented.
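The quoted property of the PHD translates directly into the particle setting: the expected number of targets in a region is the sum of the weights of the particles inside it. A synthetic sketch follows (the particle cloud and weights are assumed).

```python
import numpy as np

rng = np.random.default_rng(7)

# Two targets: particle clouds around each, with total weight 2 (two expected targets).
pts = np.vstack([rng.normal([2, 2], 0.3, (500, 2)),
                 rng.normal([8, 5], 0.3, (500, 2))])
weights = np.full(1000, 2.0 / 1000)

def expected_targets(lo, hi):
    """Sum of particle weights inside the axis-aligned box [lo, hi]."""
    inside = np.all((pts >= lo) & (pts <= hi), axis=1)
    return weights[inside].sum()

print("whole scene: ", expected_targets([0, 0], [10, 10]))   # ~2.0
print("around (2,2):", expected_targets([1, 1], [3, 3]))     # ~1.0
```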
We report here on the implementation of a particle systems approximation to the probability hypothesis density (PHD). The PHD of the multitarget posterior density has the property that, given any volume of state space, the integral of the PHD over that volume yields the expected number of targets present in the volume. As in the single target setting, upon receipt of an observation, the particle weights are updated, taking into account the sensor likelihood function, and then propagated forward in time by sampling from a Markov transition density. We also incorporate resampling and regularization into our implementation, introducing the new concept of cluster resampling.
Bayesian multitarget tracking is an inherently nonlinear problem. Even when the state models and sensor noise associated with individual targets and observations are Gaussian, the "true" data likelihood, as formulated within the framework of finite-set statistics, is non-Gaussian. Missed detections and false alarms, combined with the fact that targets may enter and leave the scene at random times, complicate matters further. The resulting Bayesian posterior is analytically forbidding, and many conventional estimators are not even defined. We propose an algorithm for generating samples from the posterior based on jump-diffusion processes. When discretized for computer implementation, the jump-diffusion method falls into the general class of Markov chain Monte Carlo methods. The diffusions refine estimates of continuous parameters, such as positions and velocities, whereas the jumps are responsible for major discrete changes, such as adding and removing targets. Jump-diffusion processes have previously been applied to performing automatic target recognition in infrared images and to tracking multiple targets using raw narrowband sensor array and high-resolution range profile data. Here, we apply jump-diffusion to the more traditional class of target tracking problems where raw sensor data is preprocessed into reports, but the report-to-target association is unknown. Our formulation maintains the flavor of other recent work employing finite-set statistics, in that no attempts to explicitly associate specific reports with specific targets are needed.
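A toy sketch of the discrete "jump" half of such a sampler: birth and death moves on a set of 1-D target positions, accepted by a Metropolis rule. The likelihood is a crude assumed stand-in for the finite-set-statistics likelihood, the diffusion refinement is omitted, and the exact proposal-ratio correction is skipped for brevity.

```python
import numpy as np

rng = np.random.default_rng(8)

reports = np.array([1.0, 1.1, 4.0])   # noisy 1-D position reports

def log_lik(targets):
    """Assumed score: reward explaining reports, penalize extra targets."""
    if len(targets) == 0:
        return -10.0 * len(reports)
    d = np.abs(reports[:, None] - np.asarray(targets)[None, :])
    return -np.sum(d.min(axis=1) ** 2) - 2.0 * len(targets)

targets, counts = [], []
for _ in range(5000):
    prop = list(targets)
    if rng.random() < 0.5 or not prop:          # birth jump
        prop.append(rng.uniform(0, 6))
    else:                                       # death jump
        prop.pop(rng.integers(len(prop)))
    # Metropolis accept (proposal asymmetry correction omitted in this sketch)
    if np.log(rng.random()) < log_lik(prop) - log_lik(targets):
        targets = prop
    counts.append(len(targets))
print("posterior mode of target count:", np.bincount(counts).argmax())  # expect 2
```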
The Surface Classifier approach for the fusion of sensor data has been shown to produce improved classification performance over traditional methods that characterize object classes as a single mean feature vector and associated covariance. The key aspect of this approach is the notion of characterizing object classes as parametric representations of curves, or surfaces, in feature space that capture the underlying correlations between features. By performing calculations in this representation of feature space, the fusion of feature data from the two sensors was seen to be straightforward. In this paper, the Surface Classifier approach is extended to combine multiple observations of these objects into a 'manifold fragment' that is fitted to the surface representing an object's parametric representation in feature space. Additionally, by using a 'Torn-Surface' representation of the object classes, the approach is able to address discontinuities in object class representations and give estimates of non-observed, derived object features (e.g., physical dimensions). As will be shown, with added white noise, classification errors and the errors in estimating the derived features increase but remain very well behaved.
Mathematical models of systems arising in many practical applications are hybrid systems with both discrete-event and continuum dynamics. Theoretical and computational techniques for their analysis have largely focused on non-stochastic systems. This paper presents the path integral formalism of Feynman and Kac as an analysis framework for stochastic hybrid systems. The sum-over-paths formula fits well into the existing behavioral approach to hybrid dynamical systems. It also provides a computational tool through particle-based methods, which are widely used in nonlinear filtering.
Cluster tracking is the problem of detecting and tracking clustered formations of large numbers of targets, without necessarily being obligated to track each and every individual target. We address this problem by generalizing to the dynamic case a static Bayesian finite-mixture data-clustering approach due to P. Cheeseman. After summarizing Cheeseman's approach, we show that it implicitly draws on random set theory. Making this connection explicit allows us to incorporate it into a multitarget recursive Bayes filter, thereby leading to a rigorous Bayesian foundation for finite-mixture cluster tracking. A computational approach is proposed, based on an approximate, multitarget first-order moment filter (“cluster PHD” filter).
Multisensor Fusion Methodologies and Applications II
Bayesian Networks are graphical representations of dependence relationships between domain variables. Owing to their powerful probabilistic inference, they have been applied in many areas, such as data fusion, target recognition, and medical diagnosis. There exist a number of inference algorithms with different tradeoffs in computational efficiency, accuracy, and applicable network topologies. It is well known that, in general, exact inference algorithms are either computationally infeasible for dense networks or impossible for mixed discrete-continuous networks. However, in practice, mixed Bayesian Networks are commonly used for various applications. In this paper, we compare and analyze the trade-offs of several inference approaches. These include the exact Junction Tree algorithm for linear Gaussian networks, the exact algorithm for discretized networks, and stochastic simulation methods. We also propose an almost-instant-time algorithm (AIA) that pre-compiles approximate likelihood tables. Preliminary experimental results show promising performance.
In the past, research in the multisensor fusion community has primarily focused on establishing computational approaches and algorithms for fusion processing. However, it would be very useful to be able to characterize the relationship between the sensed information inputs available to a fusion system and the quality of the fused information output. This would not only help us understand fusion system performance but also provide high-level performance bounds, given the sensor mix and quality, for system control tasks such as sensor resource allocation and estimating information requirements. This paper presents a fusion performance model (FPM) for a general multisensor fusion system. The model includes both kinematic and classification components and focuses on two performance measures: positional error and classification error. The performance model is based on Bayesian theory and a combination of simulation and analytical approaches. Simulation results that validate the analytical performance predictions are also included.
The Multi-source Report-level Simulator (MRS) is a tool developed by Veridian Systems as part of its Model-adaptive Multi-source Track Fusion (MMTF) effort under DARPA's DTT program. MRS generates simulated multisensor contact reports for GMTI, HUMINT, IMINT, SIGINT, UGS, and video. It contains a spatial editor for creating ground tracks along which vehicles move over the terrain. Vehicles can start, stop, speed up, or slow down. The spatial editor is also used to define the locations of fixed sensors, such as UGS and HUMINT observers on the ground, and the flight paths of GMTI, IMINT, SIGINT, and video sensors in the air. Observation models characterize each sensor at the report level in terms of its operating characteristics (revisit rate, resolution, etc.), measurement errors, and detection/classification performance (i.e., Pd, Nfa, Pcc, and Pid). Contact reports are linked to ground truth data to facilitate the testing of track/fusion algorithms and the validation of associated performance models.
The use of information theoretics within fusion and tracking represents an interesting addition to the problem of assessing optimal track fusion performance. This paper explores the use of information-theoretic measures, namely the Kullback-Leibler divergence, as a means of improving on the track assignment problem.
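For concreteness, the Kullback-Leibler divergence between two Gaussian track states has a closed form and can serve as an assignment cost; the track and report values below are assumed.

```python
import numpy as np

def kl_gauss(m0, P0, m1, P1):
    """KL( N(m0,P0) || N(m1,P1) ) for d-dimensional Gaussians."""
    d = len(m0)
    P1inv = np.linalg.inv(P1)
    dm = m1 - m0
    return 0.5 * (np.trace(P1inv @ P0) + dm @ P1inv @ dm - d
                  + np.log(np.linalg.det(P1) / np.linalg.det(P0)))

track = (np.array([0.0, 0.0]), np.eye(2))
report_a = (np.array([0.5, 0.2]), 1.2 * np.eye(2))
report_b = (np.array([4.0, 3.0]), 1.2 * np.eye(2))
# A smaller divergence suggests the better track-report pairing.
print(kl_gauss(*track, *report_a), kl_gauss(*track, *report_b))
```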
BAE SYSTEMS is developing a "4D Registration" capability for DARPA's Dynamic Tactical Targeting program. This will further advance our automatic image registration capability to use moving objects for image registration, and extend our current capability to include the registration of non-imaging sensors. Moving objects produce signals that are identifiable across multiple sensors such as radar moving target indicators, unattended ground sensors, and imaging sensors. Correspondences of those signals across sensor types make it possible to improve the support data accuracy for each of the sensors involved in the correspondence. The amount of accuracy improvement possible, and the effects of the accuracy improvement on geopositioning with the sensors, is a complex problem. The main factors that contribute to the complexity are the sensor-to-target geometry, the a priori sensor support data accuracy, sensor measurement accuracy, the distribution of identified objects in ground space, and the motion and motion uncertainty of the identified objects. As part of the 4D Registration effort, BAE SYSTEMS is conducting a sensitivity study to investigate the complexities and benefits of multisensor registration with moving objects. The results of the study will be summarized.
Multistatic active sonar has significant potential for littoral surveillance. This paper describes a multistatic tracking algorithm and provides performance analysis with both simulated and real multistatic sonar data. We find that sensor fusion and target tracking provide significant added value in the sonar processing chain. In particular, we are able to drastically reduce the number of objects that an operator must contend with, both by removing large numbers of false contacts and by associating true contacts and establishing tracks on moving targets and fixed clutter points. The association of contacts allows recursive filtering algorithms to process kinematic measurements and provides localization and velocity information with much smaller errors than are present in multistatic contact data.
Given a finite collection of classifiers one might wish to combine, or fuse, the classifiers in hopes that the multiple classifier system (MCS) will perform better than the individuals. One method of fusing classifiers is to combine their final decision using Boolean rules (e.g., a logical OR, AND, or a majority vote of the classifiers in the system). An established method for evaluating a classifier is measuring some aspect of its Receiver Operating Characteristic (ROC) curve, which graphs the trade-off between the conditional probabilities of detection and false alarm. This work presents a unique method of estimating the performance of an MCS in which Boolean rules are used to combine individual decisions. The method requires performance data similar to the data available in the ROC curves for each of the individual classifiers, and the method can be used to estimate the ROC curve for the entire system. A consequence of this result is that one can save time and money by effectively evaluating the performance of an MCS without performing experiments.
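A sketch of the closed-form case that motivates such estimates: for independent classifiers with known operating points (Pd, Pfa), any Boolean fusion rule's performance can be computed exactly by enumerating decision patterns. Independence is an assumption made here for brevity; the paper's method addresses estimation from ROC data more generally.

```python
from itertools import product

def fuse_rule(points, rule):
    """Exact fused (Pd, Pfa) for independent classifiers under a Boolean rule.
    rule maps a tuple of individual 0/1 decisions to the fused decision."""
    pd = pfa = 0.0
    for decisions in product([0, 1], repeat=len(points)):
        if not rule(decisions):
            continue
        p_d = p_f = 1.0
        for dec, (d, f) in zip(decisions, points):
            p_d *= d if dec else (1 - d)   # probability under "target present"
            p_f *= f if dec else (1 - f)   # probability under "target absent"
        pd += p_d
        pfa += p_f
    return pd, pfa

points = [(0.90, 0.05), (0.85, 0.10), (0.80, 0.08)]  # assumed (Pd, Pfa) per classifier
print("OR: ", fuse_rule(points, any))
print("AND:", fuse_rule(points, all))
print("MAJ:", fuse_rule(points, lambda d: sum(d) >= 2))
```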
In this paper we consider the tracking of small, distant objects using radar and electro-optical (EO) sensors. In particular, we address the problem of data association after coalescence, which happens when two objects become sufficiently close (in angular terms) that they can no longer be resolved by the EO sensor. Some moments later they de-coalesce, and the resulting detections must be associated with the existing tracks in the EO sensor. Traditionally this would be solved by making use of the velocity vectors of the objects prior to coalescence. This approach can work well for crossing objects, but when the objects are largely moving in a direction radial to the sensor it becomes problematic. Here we investigate the use of data fusion to combine radar range with a brightness measure derived from an EO sensor to enhance the accuracy of data association. We present a number of results on the performance of this approach, taking into account target motion, atmospheric conditions, and sensor noise.
The fusion of dissimilar sensor data in the CASE_ATTI test-bed is considered. The simulated sensor suite includes an ESM sensor that reports bearing-only contacts, a 2D radar that reports range-bearing contacts, an IRST sensor that reports bearing-elevation contacts, and a 3D radar that reports full 3D contacts. To fuse all this information, CASE_ATTI is modified into a two-layer fusion architecture, with four sensor-level trackers and a central fusion node. The fusion of all the dissimilar 1D, 2D, and 3D tracks therefore represents an important problem that this paper addresses. The directly related issue of tracking with angle-only reports is also addressed. Angle-only tracking represents an important issue in modern surveillance systems and has been extensively studied in recent years. Angle-only tracking systems are known to be unobservable unless the interceptor out-maneuvers the target. A divergence of the target state estimate may occur in the case of a stationary or non-maneuvering interceptor. In this article, a new time alignment algorithm that enhances stability, even for non-maneuvering interceptors, is developed. The proposed algorithm is based upon the modified spherical coordinate representation, but uses a different discretization approach that leads to more stable behavior. A comparative scenario that illustrates the efficiency of the proposed architecture is presented.
This paper describes an autonomous navigation system capable of exploring an unknown environment, as implemented by the Advanced Technology Centre (ATC) of BAE SYSTEMS. An overview of the enabling technology of the autonomous system, simultaneous localization and mapping (SLAM), is given before the utility functions used to perform the strategic decision making required for autonomous exploration are described in detail. Relevant simulation studies of the major issues involved in multi-platform SLAM are also described. Initial results and conclusions are given at the end of the paper. All experiments are conducted on Pioneer all-terrain mobile robots using common off-the-shelf sensing devices.
Multisensor Fusion Methodologies and Applications III
Level 2 fusion is defined as situation awareness. Unfortunately, that is where agreement on Level 2 fusion ends. The boundaries between Levels 1, 2, and 3 are not clearly defined, and the resulting disputes tend to cloud discussion of the functionality required of a Level 2 tracking system. Our approach to developing a system that solves a perceived Level 2 problem has three basic tenets: define the problem, develop the concept of the fusion architecture, and define the object state. These tenets provide the foundation for outlining and explaining a conceptual approach to a Level 2 problem. Each step, from the problem fundamentals to the state definition used in formulating algorithmic approaches, is presented. The discussion begins with a summary of the military problem, which can be considered situation assessment: determining force composition, current capabilities, and posture across multiple levels of unit aggregation. The problem consists of fusing Level 1 information and incorporating doctrine and other knowledge-base information to form a coherent picture of what exists in the field, which can then be used as a component of intent analysis. The problem model leads to the development of a fusion architecture. The approach mirrors a standard Level 1 fusion pipeline: detection, prediction, association, hypothesis generation and management, and update. Unlike the Level 1 problem, these implementation steps do not reduce to a rehash of the Kalman filter or similar approaches; instead, the architecture permits a composite set of approaches, including symbolic methodologies. The problem definition and the architecture lead to the definition of the system state, which represents the internal composition of the units and their aggregates. The discussion concludes with a short summary of potential algorithms proposed for implementation.
To design a data-fusion system (DFS) effectively, users' goals and their situation- and impact-assessment needs must be addressed for efficient action. Using the User-Fusion model, we explore the human's capability to investigate the situation, determine the impact or threat, and refine DFS operations. By mapping user actions to DFS processes through "management by interaction", the user-DFS design (1) actively engages the user in proactive control, (2) improves situation awareness, (3) reduces DFS dimensionality, (4) increases user confidence, and (5) decreases user-DFS reaction time. For example, by designing user-refinement operations, we streamline DFS development for efficient target recognition and tracking scenarios, cueing the operator as well as allowing the user to prime the DFS. Notional results, using a novel 3D receiver operating characteristic (ROC) surface over false-alarm rate, detection rate, and time, capture the user-DFS interaction to increase target accuracy, reduce the time to target identification, and increase system confidence.
Military services require C4I systems that support a full spectrum of operations. This is especially relevant to the theatre missile defense (TMD) mission planning and analysis community, where there have been several recent concept changes; advancements in information technology, sensors, and weapons; and expansion in the diversity and capabilities of potential adversaries. To fully support campaign development and analysis in this new environment, there is a need for systems and tools that enhance understanding of adversarial behavior, assess potential threat capabilities and vulnerabilities, perform C4I system trades, and provide methods to identify macro-level novel or emergent combat tactics and behavior derived from simpler micro-level rules. Such systems must also be interactive, collaborative, and semi-autonomous, providing the INTEL analyst with the means for exploration and potential exploitation of novel enemy behavior patterns. To address these issues we have developed an Intelligent Threat Assessment Processor (ITAP) to provide prediction and interpretation of enemy courses of action (eCOAs) for the TMD domain. This system uses a combination of genetic algorithm-based optimization in tandem with the spatial analysis and visualization capabilities of a commercial-off-the-shelf (COTS) geographic information system to generate and evaluate potential eCOAs.
To establish situation awareness during air-defence surveillance missions, track-level data from sensors and additional data sources are combined to form the air picture for the region under surveillance. Typically, compilation of the air picture and the C4ISR activities it supports, namely real-time surveillance, fighter control, and situation and threat assessment at the tactical level, and mission planning and intelligence collection at the theatre level, are all performed manually by defence and intelligence operators. To assist operators with the compilation of the air picture and its subsequent applications, it is desirable to introduce automation into the information processing required for these activities. Accomplishing this requires using the contextual information in the surveillance region to extract descriptive (symbolic) information about the behaviour of each detected air target from the positional and kinematic data in its state estimate. Since much of the contextual information exists in the form of entities and regions that can be modeled geometrically, the information extraction can be performed using geometric criteria. In this paper, this philosophy is followed to produce a set of geometric criteria that can be used to extract information conveniently represented as predicates. First, the choice of criteria is motivated by an examination of the nature of the information to be extracted; the mathematical details required to determine that the criteria are met are then described. Several examples illustrate the methodology for applying the criteria. Finally, future directions for the further development, test, and evaluation of the methodology are briefly discussed.
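A minimal sketch of the philosophy: when a contextual region can be modeled geometrically (here, hypothetically, as a circle), symbolic predicates about a target's behaviour follow from simple geometric tests on its positional and kinematic state estimate.

```python
import numpy as np

def inside_circular_zone(pos, center, radius):
    """Predicate: the target position lies within a circular region."""
    return np.linalg.norm(pos - center) <= radius

def approaching_zone(pos, vel, center):
    """Predicate: the velocity vector has a component toward the zone center."""
    to_center = center - pos
    return float(np.dot(vel, to_center)) > 0.0

state = {"pos": np.array([40.0, 10.0]), "vel": np.array([-3.0, 0.5])}
zone = {"center": np.array([0.0, 0.0]), "radius": 25.0}

predicates = {
    "InsideZone(t, z)": inside_circular_zone(state["pos"], zone["center"], zone["radius"]),
    "Approaching(t, z)": approaching_zone(state["pos"], state["vel"], zone["center"]),
}
print(predicates)  # symbolic facts derived from kinematic data
```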
We explore the use of fuzzy logic techniques to create an Image Correspondence Correction System. Image correspondence correction is important for applications such as surveillance, terrain database collection, exploration, and map building, which often require comparing image data that may come from different sources or be taken at different times and under varying conditions. Difficulties arise from mis-registration of the data as well as from artifacts introduced by discrete image properties. This work applies a fuzzy-type scheme to reduce the effects of these problems. Results of this system are compared to those achieved by a previous non-fuzzy system.
We applied the recently introduced universal image quality index Q, which quantifies the distortion of a processed image relative to its original version, to assess the performance of different graylevel image fusion schemes. The method is as follows. First, we adopt an original test image as the reference image. Second, we produce several distorted versions of this reference image. The distortions in the individual images are complementary, meaning that the same distortion should not occur at the same location in all images (it should be absent in at least one image); thus, the information content of the overall set of distorted images should equal that of the original test image. Third, we apply the image fusion process to the set of distorted images. Fourth, we quantify the similarity of the fused image to the reference image by computing the universal image quality index Q. The method can also be used to optimize image fusion schemes for different types of distortions, by maximizing Q through repeated application of steps two and three for different parameter settings of the fusion scheme.
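For reference, the index Q for a reference image x and processed image y combines correlation, mean-luminance, and contrast distortion in a single number, reaching 1 only when y equals x. The sketch below implements the global form of the published definition; the index is usually computed over sliding windows and averaged, which is omitted here for brevity.

```python
import numpy as np

def quality_index_q(x, y):
    """Global universal image quality index Q (Wang & Bovik):
    Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2+mean(y)^2))."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

ref = np.random.rand(64, 64)
distorted = ref + 0.1 * np.random.randn(64, 64)
print(quality_index_q(ref, ref))        # -> 1.0 for a perfect match
print(quality_index_q(ref, distorted))  # < 1.0, degrading with distortion
```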
Pattern recognition is an important aspect of image processing. Image features are computed from image objects and subsequently used by an object classifier to map (and thereby classify) image objects into their corresponding object classes. To avoid misclassification, the image features should be selected so that they represent image-object similarity appropriately. Similarity, however, is a well-known theoretical concept in physics, where similar phenomena are mathematically expressed as constant dimensionless numbers. These dimensionless numbers are determined from the dimensional representation of the relevant variables by means of a technique called dimensional analysis. Consequently, dimensional analysis is applied here to derive dimensionless features of color images based on various color models. Properties of the resulting dimensionless numbers, such as color constancy, are studied using analytical and numerical examples. The similarity resulting from the different color models is also analyzed and discussed.
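One familiar example of a dimensionless color feature (offered as illustration, not necessarily one of the paper's constructions) is the chromaticity coordinate: each channel divided by the total intensity, so a uniform scaling of the illumination cancels out, a simple color-constancy property.

```python
import numpy as np

def chromaticity(rgb):
    """Dimensionless color features: each channel divided by the total
    intensity. Invariant to a uniform scaling of the illumination."""
    total = rgb.sum(axis=-1, keepdims=True)
    return rgb / np.where(total == 0, 1, total)

pixel = np.array([120.0, 60.0, 20.0])
print(chromaticity(pixel))        # [0.6, 0.3, 0.1]
print(chromaticity(0.5 * pixel))  # identical: the intensity units cancel
```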
We are faced with the problem of identifying and selecting the most significant data sources when developing monitoring applications for which data from a variety of sensors are available. We may also be concerned with identifying suitable alternative data sources when a preferred sensor is temporarily unavailable or unreliable. This work describes how genetic algorithms (GAs) were used to select useful sets of parameters, from sensors and from implicit knowledge, for constructing artificial neural networks to detect levels of chlorophyll-a in the Neuse River. The available parameters included six multispectral bands of Landsat imagery, chemical data (temperature, pH, salinity), and knowledge implicit in location and season. Experiments were conducted to determine which parameters the genetic algorithm would select based on the availability of other parameters, e.g., which parameter would be chosen when temperature was unavailable as compared to when near-infrared data was unavailable.
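A compact sketch of GA-based parameter selection under stated assumptions: chromosomes are bitmasks over the candidate parameters, and a cheap correlation-based score stands in for training the actual neural network, which would be far more expensive inside the GA loop. All names and rates here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Score a feature subset; a correlation-based stand-in for training
    a neural network on the selected parameters."""
    if not mask.any():
        return 0.0
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in np.flatnonzero(mask)]
    return float(np.mean(r)) - 0.01 * mask.sum()  # penalize large subsets

def ga_select(X, y, pop=30, gens=40, p_mut=0.05):
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep best half
        children = []
        while len(children) < pop:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut           # bit-flip mutation
            children.append(child)
        population = np.array(children)
    scores = np.array([fitness(ind, X, y) for ind in population])
    return population[scores.argmax()]

X = rng.random((200, 9))            # e.g., 6 Landsat bands + T, pH, salinity
y = 2 * X[:, 1] + X[:, 7] + 0.1 * rng.standard_normal(200)
print(ga_select(X, y).astype(int))  # bitmask of selected parameters
```

Removing a column from X (a temporarily unavailable sensor) and rerunning shows which alternative parameters the GA falls back on, mirroring the experiments described above.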
Nonlinear filtering is an important and effective tool for estimating signals when observations are incomplete, distorted, and corrupted. Quite often in real-world applications, the signals to be estimated contain unknown parameters that need to be determined. Herein, we develop and analyze non-recursive and recursive methods that can handle combined state and parameter estimation for nonlinear, partially observed stochastic systems. For the non-recursive method, we obtain the unknown parameters by solving a system of non-singular, finite-order linear equations. For the recursive method, we generalize the least squares method and develop a particle prediction-error identification algorithm so that it can be applied to general nonlinear stochastic systems. We use the branching particle filter for signal state estimation and implement simulations for both methods.
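A minimal illustration of recursive combined state and parameter estimation, using a standard bootstrap particle filter with state augmentation rather than the authors' branching particle filter: each particle carries its own copy of the unknown parameter, which is jittered at resampling. The scalar model and all constants are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar nonlinear model: x_k = a*x_{k-1} + sin(x_{k-1}) + w,  y_k = x_k^2/20 + v.
# The parameter a is unknown; augment each particle with its own estimate of a.
N, T, a_true = 2000, 50, 0.8
x, y = 0.0, []
for _ in range(T):
    x = a_true * x + np.sin(x) + 0.1 * rng.standard_normal()
    y.append(x**2 / 20 + 0.1 * rng.standard_normal())

px = np.zeros(N)               # state particles
pa = rng.uniform(0.0, 1.5, N)  # parameter particles
for yk in y:
    px = pa * px + np.sin(px) + 0.1 * rng.standard_normal(N)  # propagate
    w = np.exp(-0.5 * ((yk - px**2 / 20) / 0.1) ** 2)         # likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                          # resample
    px, pa = px[idx], pa[idx] + 0.01 * rng.standard_normal(N) # jitter a

print("estimated a:", pa.mean())  # roughly approaches a_true
```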
The US Army Research Laboratory (ARL) has developed an acoustic signal processing toolbox (ASPT) for acoustic sensor array processing, and the intent of this document is to describe the toolbox and its uses. ASPT is a GUI-based software package developed in and running under MATLAB; the current version, ASPT 3.0, requires MATLAB 6.0 or above. ASPT contains a variety of narrowband (NB) and incoherent and coherent wideband (WB) direction-of-arrival (DOA) estimation and beamforming algorithms researched and developed at ARL. Currently, ASPT contains 16 DOA and beamforming algorithms, including several NB and WB versions of the MVDR, MUSIC, and ESPRIT algorithms. In addition, a variety of pre-processing, simulation, and analysis tools are available in the toolbox. The user can perform simulation or real-data analysis for all algorithms with user-defined signal model parameters and array geometries.
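As an indication of the kind of algorithm the toolbox implements (sketched here in Python rather than MATLAB, and not the toolbox's actual code), the narrowband MUSIC pseudospectrum for a uniform linear array follows from the noise subspace of the sensor covariance:

```python
import numpy as np

def music_spectrum(R, n_sources, n_sensors, d=0.5):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.
    R: sensor covariance; d: element spacing in wavelengths."""
    grid = np.linspace(-90, 90, 361)
    w, V = np.linalg.eigh(R)
    En = V[:, : n_sensors - n_sources]  # noise subspace (smallest eigenvalues)
    P = []
    for theta in np.deg2rad(grid):
        a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return grid, np.array(P)

# Simulate two sources at -20 and 35 degrees on an 8-element half-wavelength ULA.
rng = np.random.default_rng(2)
M, K = 8, 500
angles = np.deg2rad([-20, 35])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(angles)))
S = rng.standard_normal((2, K)) + 1j * rng.standard_normal((2, K))
X = A @ S + 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
R = X @ X.conj().T / K
grid, P = music_spectrum(R, n_sources=2, n_sensors=M)
print(grid[P.argmax()])  # a peak near one of the true bearings
```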
There are a variety of domains in which signal channelization has proven useful, including the time, frequency, spatial, and polarization domains. These partitioning techniques are necessary for the proper management and effective utilization of the overall channel resource, and the term "multi-channel" is used to describe the partitioning of these domains. However, there are other "domains" in which channelization techniques can be employed, including the coding domain (as in code-division multiple access) and the less obvious steganographic domain. One can argue that these latter domains lack the physical interpretation of their counterparts, or that each is in fact a clever use of the standard domains; but from the view of the overall channel resource, very effective utilization and management tools can be developed, operated, and described in these domains. In this paper, a technique based on a novel utilization of the signal bandwidth domain is studied for pre-processing prior to detection and parameter estimation. Experimental and theoretical results are given to assess device performance. The studied technique is referred to as the Adjustable Bandwidth Concept (ABC) signal energy detector. When implemented digitally, this device is essentially a cepstral-based pre-processor that generates multiple channels for the analysis and detection of signal components of distinguishable bandwidths. The ABC device processes an input log-magnitude spectrogram and produces a multi-channel output; each output channel contains the information in the input spectrogram sorted, or partitioned, by the bandwidth of the signal components within it. A primary application of such a device is as a pre-processing step prior to detection and estimation, for automated spectral survey and characterization.
Due to the statistical nature of scattered light, there is an apparent propagation-delay uncertainty when detecting scattered impulses originating from different scattering volumes. Because of the randomness of particle number, size, and refractive index, the scattered intensity is itself a random variable, and for this reason the scattered impulse builds up randomly. If light propagation is assessed from its scattered trace, the effect leads to an apparent propagation-delay uncertainty. The phenomenon is similar to the so-called jitter familiar from electronics. This delay is a function of aerosol concentration and size distribution. The scope of this work is to measure and calculate the extent of this uncertainty for use in subsequent measurement units.
The article describes a new, improved, and fast version of our method and algorithm [1] for detecting periodic signals in image sequences, i.e., signals that appear in a small number of adjacent pixels of an image sequence and are periodic in the temporal domain. The signal information is accumulated from adjacent pixels with the spectrum-specific version of Principal Components [1]. For this uniformly sampled accumulated signal, a model dependent on a few parameters is used for signal fitting. In this new version: 1) the sampling frequency may be below the Nyquist rate, and the model includes fold-over frequencies as well; 2) a general linear least-squares fit with precomputed inverse matrices is used for the model parameter estimation, which speeds up the procedure; 3) the procedure is further accelerated by a preliminary pixel selection based on a coarse estimate of the signal energy and SNR obtained with the cross-power spectrum (CPS) method on small data sub-frames. Our spectrum-specific covariance matrix estimate, employed in Spectrum-Specific Principal Components, is made more robust by utilizing the CPS method with small data sub-frames. The algorithm was tested on simulated image sequences as well as some real ones.
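The second ingredient, a linear least-squares sinusoid fit with a precomputed (pseudo-)inverse, can be sketched as follows. The fold-over handling shows why sampling below the Nyquist rate still permits amplitude estimation once the alias frequency enters the model; the specific signal and rates are invented for the example.

```python
import numpy as np

def ls_sinusoid_design(t, f):
    """Design matrix and its pseudo-inverse for the linear-in-parameters model
    s(t) = A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C. The pseudo-inverse can be
    precomputed and cached once per candidate frequency."""
    H = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    return H, np.linalg.pinv(H)

fs, f_true = 10.0, 12.3           # sampling below the Nyquist rate for f_true
t = np.arange(100) / fs
rng = np.random.default_rng(3)
signal = 1.5 * np.cos(2 * np.pi * f_true * t + 0.4) + 0.2 * rng.standard_normal(100)

# Under-sampling makes f_true indistinguishable from its fold-over alias:
f_alias = abs(f_true - round(f_true / fs) * fs)   # 12.3 Hz aliases to 2.3 Hz
H, Hpinv = ls_sinusoid_design(t, f_alias)
A, B, C = Hpinv @ signal
print(np.hypot(A, B))             # recovered amplitude, close to 1.5
```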
Notch filters are used in many industrial applications to attenuate undesired frequencies within signals. Such undesirable frequencies are common in flexible dynamic systems, power plants, medical monitoring systems, etc. In many aerospace flexible dynamic systems the desired center frequency shifts due to the nonlinearities and coupling of the system. The conventional approach in aerospace is to generate a large database filled with filter coefficients. This requires a significant verification and validation activity, as well as a large storage capacity for the filter coefficients. In this paper a model-based approach is used to design a notch filter system for a multivariable nonlinear system. Disturbance Accommodation Control ideas are presented and applied to notch filter design.
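A minimal sketch of the motivation for computing notch coefficients from a model rather than a stored database: a second-order notch places zeros on the unit circle at the (possibly shifting) center frequency and poles just inside to set the bandwidth. This is the textbook pole-zero notch, not the paper's Disturbance Accommodation Control design.

```python
import numpy as np
from scipy.signal import freqz

def notch_coeffs(f0, fs, r=0.95):
    """Second-order notch: zeros on the unit circle at f0, poles at radius r
    just inside to control bandwidth. A model-based system can recompute
    these on the fly as the center frequency shifts, instead of looking
    coefficients up in a precomputed database."""
    w0 = 2 * np.pi * f0 / fs
    b = np.array([1.0, -2.0 * np.cos(w0), 1.0])
    a = np.array([1.0, -2.0 * r * np.cos(w0), r * r])
    return b * (a.sum() / b.sum()), a   # normalize for unity DC gain

b, a = notch_coeffs(f0=50.0, fs=1000.0)
w, h = freqz(b, a, worN=2048, fs=1000.0)
print(abs(h)[np.argmin(abs(w - 50.0))])  # near-zero gain at 50 Hz
```

The design choice here is that the filter is a closed-form function of (f0, fs, r), so tracking a shifting center frequency requires no stored coefficient tables and correspondingly less verification and validation of the data.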
Deoxyribonucleic acid (DNA) sequences are difficult to compare for similarity because of their length and complexity; the challenge lies in using digital signal processing (DSP) to solve highly relevant problems involving DNA sequences. Here, we transform a one-dimensional (1D) DNA sequence into a two-dimensional (2D) pattern using the Peano scan algorithm. Four complex values are assigned to the characters "A", "C", "T", and "G", respectively. A Fourier transform is then employed to obtain the far-field amplitude distribution of the 2D pattern; in this way, a 1D DNA sequence becomes a 2D image pattern. Features are extracted from the 2D image pattern with the Principal Component Analysis (PCA) method, allowing a DNA sequence database to be established. Comparing features can take a long time when the database is large, since the features are multi-dimensional. This problem is solved by building an indexing structure that acts as a filter to reject non-relevant items and select a subset of candidate DNA sequences; clustering algorithms organize the multi-dimensional feature data into the indexing structure for effective retrieval. Accordingly, the query sequence need only be compared against the candidate sequences rather than all sequences in the database. In effect, our algorithm provides a pre-processing method that accelerates the DNA sequence search process. Finally, experimental results demonstrate the efficiency of the proposed algorithm for DNA sequence similarity retrieval.
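A rough sketch of the pipeline under stated assumptions: the complex values assigned to the bases are hypothetical (the paper does not fix them here), and a simple row-major reshape stands in for the Peano scan.

```python
import numpy as np

BASE = {"A": 1 + 1j, "C": -1 + 1j, "T": -1 - 1j, "G": 1 - 1j}  # hypothetical assignment

def sequence_features(seq, side=16):
    """Map a DNA string to a 2D complex pattern and take the far-field
    (Fourier) amplitude. A row-major reshape stands in for the Peano scan."""
    vals = np.array([BASE[c] for c in seq[: side * side]])
    vals = np.pad(vals, (0, side * side - len(vals)))
    pattern = vals.reshape(side, side)
    return np.abs(np.fft.fftshift(np.fft.fft2(pattern))).ravel()

rng = np.random.default_rng(4)
db = ["".join(rng.choice(list("ACGT"), 256)) for _ in range(100)]
F = np.array([sequence_features(s) for s in db])
Fc = F - F.mean(axis=0)
_, _, Vt = np.linalg.svd(Fc, full_matrices=False)  # PCA via SVD
features = Fc @ Vt[:8].T                           # 8-dimensional index keys
print(features.shape)                              # (100, 8)
```

The low-dimensional keys would then be clustered into the indexing structure so that a query is compared against one cluster's candidates rather than the full database.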
In signal processing, it is often necessary to decompose sampled data into its principal components. In adaptive sensor array processing, for example, the Singular Value Decomposition (SVD) and/or Eigenvalue Decomposition (EVD) can be used to separate sensor data into "signal" and "noise" subspaces. Such decompositions are central to a number of techniques, such as MUSIC, ESPRIT, and the Eigencanceller. Unfortunately, SVD and EVD algorithms are computationally intensive. When the underlying signals are nonstationary, "fast subspace tracking" methods provide a far less complex alternative to standard SVD methods. This paper addresses a class of subspace tracking methods known as "QR-Jacobi methods," which can track the r principal eigenvectors of a correlation matrix in O(Nr) operations, where N is the dimensionality of the correlation matrix. Previously, QR-Jacobi methods were formulated to track the principal eigenvectors of an "exponentially windowed" data correlation matrix; finite-duration data windowing strategies were not addressed. This paper extends the prior QR-Jacobi methods to implement rectangular sliding data windows, as well as other windows. Illustrative examples are provided.
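For context, the expensive baseline that subspace trackers approximate is the batch SVD split into signal and noise subspaces, roughly cubic in N per update versus O(Nr) for the tracker:

```python
import numpy as np

def signal_noise_subspaces(X, r):
    """Split the column space of a data matrix into an r-dimensional 'signal'
    subspace and its 'noise' complement via a full SVD -- the costly baseline
    that fast subspace trackers approximate."""
    U, s, _ = np.linalg.svd(X, full_matrices=True)
    return U[:, :r], U[:, r:]  # signal basis, noise basis

rng = np.random.default_rng(5)
N, K, r = 16, 200, 2
A = rng.standard_normal((N, r))                     # true signal directions
X = A @ rng.standard_normal((r, K)) + 0.05 * rng.standard_normal((N, K))
Us, Un = signal_noise_subspaces(X, r)
# The noise subspace is (nearly) orthogonal to the true signal directions:
print(np.linalg.norm(Un.T @ A))  # close to 0
```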
Multisensor Fusion, Multitarget Tracking, and Resource Management
Data association is a fundamental problem in multitarget-multisensor tracking. It entails selecting the most probable association between sensor measurements and target tracks from a very large set of possibilities. With N sensors and n targets in the detection range of each sensor, even with perfect detection there are (n!)^N different configurations, which renders solution by direct computation infeasible even in modestly sized applications. We describe an iterative method for solving the optimal data association problem in a distributed fashion; the work exploits the framework of graphical models, a powerful tool for encoding the statistical dependencies of a set of random variables that is widely used in many applications (e.g., computer vision, error-correcting codes). Our basic idea is to treat the measurement assignment for each sensor as a random variable, which is in turn represented as a node in an underlying graph; neighboring nodes are coupled by the targets visible to both sensors. We thus transform the data association problem into that of computing the maximum a posteriori (MAP) configuration in a graphical model, to which efficient techniques (e.g., the max-product/min-sum algorithm) can be applied. We use a tree-reweighted version of the usual max-product algorithm that either outputs the MAP data association or acknowledges failure. For acyclic graphs, this message-passing algorithm solves the data association problem directly and recursively with complexity O((n!)^2 N). On graphs with cycles, the algorithm may require more iterations to converge and need not output an unambiguous assignment. However, for the data association problems considered here, the coupling matrices involved are inherently of low rank, and experiments show that the algorithm converges very quickly and finds the MAP configuration in this case.
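For intuition on the problem size: with a single sensor the MAP association reduces to a linear assignment problem, solvable in polynomial time; the graphical-model machinery is what couples many such assignment variables across sensors. A minimal single-sensor example, with an invented likelihood matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Log-likelihoods of assigning each of one sensor's measurements to each track.
# For a single sensor, the MAP association is a linear assignment problem;
# the graphical model couples many such nodes across sensors.
rng = np.random.default_rng(6)
log_lik = rng.standard_normal((4, 4))         # 4 measurements x 4 targets
rows, cols = linear_sum_assignment(-log_lik)  # maximize total log-likelihood
print(list(zip(rows, cols)), log_lik[rows, cols].sum())
```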
Multisensor Fusion Methodologies and Applications III
Target detection, tracking, and sensor fusion are complicated problems that are usually performed sequentially: first detecting targets, then tracking, then fusing multiple sensors reduces computation. This procedure, however, is inapplicable to difficult targets that cannot be reliably detected using individual sensors on individual scans or frames. In such cases one has to perform the fusing, tracking, and detecting functions concurrently. This has often led to prohibitive combinatorial complexity and, as a consequence, to sub-optimal performance compared to the information-theoretic content of all the available data. It is well appreciated that in this task the human mind is qualitatively far superior to existing mathematical methods of sensor fusion; however, the human mind is limited in the amount of information it can handle and the speed at which it can compute. Research efforts have therefore been devoted to incorporating "biological lessons" into smart algorithms, yet success has been limited. Why is this so, and how can the existing limitations be overcome? The fundamental reasons for current limitations are analyzed, and a potentially breakthrough research and development effort is outlined. We utilize the way our mind combines emotions and concepts in the thinking process and present a mathematical approach to accomplishing this on current-technology computers. The presentation summarizes the difficulties encountered by intelligent systems over the last 50 years related to combinatorial complexity, analyzes the fundamental limitations of existing algorithms and neural networks, and relates them to the type of logic underlying the computational structure: formal, multivalued, and fuzzy logic. A new concept of dynamic logic is introduced, along with algorithms capable of pulling together all the available information from multiple sources. This new mathematical technique, like our brain, combines conceptual understanding with emotional evaluation and overcomes the combinatorial complexity of concurrent fusion, tracking, and detection. The presentation discusses examples of performance in which computational speedups of many orders of magnitude were attained, leading to performance improvements of up to 10 dB (and better).
This paper analyzes the impact on target detection of several alternative sensor management schemes. Past work in this area has shown that myopic discrimination optimization can be a useful heuristic. In this paper we compare the performance obtained using discrimination with direct optimization of the detection error rate, using both myopic and non-myopic optimization techniques. Our model consists of a gridded region containing a set of targets with known priors; each grid location contains at most one target. At each time step, the sensor can sample a grid location, returning sample values that may or may not be thresholded. The sensor output distribution conditioned on the content of the location is known. Bayesian methods are used to recursively update the posterior probability that each location contains a target; these probabilities can in turn be used to classify each location as either containing a target or not. At each time step, sensor management determines which location to test next. For non-myopic optimization, graph search techniques are used. When the sensor output is thresholded, the performance obtained using myopic optimization of the expected error rate is worse than that obtained using our other three approaches. Interestingly, we find that for non-thresholded measurements on symmetric distributions, the performance is the same for the four cases tested (myopic/non-myopic discrimination gain/expected error rate). This supports the view that discrimination is a useful heuristic that provides near-optimal performance under the given assumptions.
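A toy version of the model's inner loop, using a simple uncertainty-sampling rule as a stand-in for the discrimination and expected-error-rate objectives compared in the paper: non-thresholded Gaussian samples update each cell's posterior by Bayes' rule, and cells are finally classified by thresholding the posterior. All distributions and rates are invented for the example.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Gridded region: p[i] is the posterior probability that cell i holds a target.
n_cells = 20
truth = rng.random(n_cells) < 0.2
p = np.full(n_cells, 0.2)  # known prior

def bayes_update(p_i, z, mu1=1.0, mu0=0.0, sigma=1.0):
    """Posterior after a non-thresholded sample z from the chosen cell."""
    l1 = norm.pdf(z, mu1, sigma) * p_i
    l0 = norm.pdf(z, mu0, sigma) * (1 - p_i)
    return l1 / (l1 + l0)

for _ in range(200):
    # Stand-in myopic rule: sample the most uncertain cell (posterior near 0.5).
    i = np.argmin(np.abs(p - 0.5))
    z = rng.normal(1.0 if truth[i] else 0.0, 1.0)
    p[i] = bayes_update(p[i], z)

decisions = p > 0.5  # classify each cell as target / no target
print("error rate:", (decisions != truth).mean())
```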
A tracklet is an estimate of a target state or track that is equivalent to an estimate based on only a few measurements. Typically, tracklets are considered in order to reduce the communications costs between sensors and remote global or fusion trackers. The literature includes several methods for computing tracklets: some compute tracklets from measurements, while others compute tracklets from the sensor-level tracks. Some methods ignore or omit process noise from the modeling, while others attempt to address its presence. The tracking of maneuvering targets requires the inclusion of process noise. When a tracklet developed for nonmaneuvering targets (i.e., no process noise) is used for tracking maneuvering targets, the errors of the tracklet will be somewhat cross-correlated with data from other sensors for the same target, and the result is referred to as a quasi-tracklet. Due to some important practical considerations, the impact of maneuvering targets on the performance of tracklets has not been thoroughly addressed in the literature. An investigation that includes these practical considerations requires computer simulations with realistic target maneuvers and pertinent evaluation criteria (i.e., computation of errors). In this paper, some of the practical issues concerning the use of tracklets for tracking maneuvering targets are discussed, and results are given from a simulation study of the impact of target maneuvers on tracking with tracklets. The study considered a fusion tracker receiving tracklets from multiple sensors at dispersed locations, with targets maneuvering with either random accelerations or deterministic maneuvers. Tracklets from measurements and tracklets from tracks were both studied. Since process noise was added to the sensor and fusion trackers to account for target maneuvers, the tracklet methods studied are technically quasi-tracklets. A novel technique is used to compare the performance of tracklets for targets maneuvering randomly with that for targets performing deterministic maneuvers.
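For concreteness, one common "tracklet from tracks" construction can be sketched as an information decorrelation: subtract the information the fusion node already holds (the predicted track) from the sensor's updated track. With process noise present this subtraction is only approximate, which is exactly why the result above is called a quasi-tracklet. The two-state numbers below are invented, and this sketch is one standard formulation rather than the specific methods compared in the paper.

```python
import numpy as np

def tracklet_from_tracks(x_upd, P_upd, x_pred, P_pred):
    """Equivalent measurement ('tracklet from tracks'): the information
    gained by the sensor track since the last communication, obtained by
    subtracting the predicted-track information from the updated-track
    information. Only approximately decorrelated when process noise exists."""
    Y = np.linalg.inv(P_upd) - np.linalg.inv(P_pred)  # information gain
    y = np.linalg.inv(P_upd) @ x_upd - np.linalg.inv(P_pred) @ x_pred
    R_eq = np.linalg.inv(Y)                           # tracklet covariance
    return R_eq @ y, R_eq                             # tracklet state, covariance

# Toy 2-state example (position, velocity):
x_pred = np.array([10.0, 1.0]); P_pred = np.diag([4.0, 1.0])
x_upd  = np.array([10.5, 1.1]); P_upd  = np.diag([1.0, 0.5])
z_eq, R_eq = tracklet_from_tracks(x_upd, P_upd, x_pred, P_pred)
print(z_eq, np.diag(R_eq))
```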