An adaptive automatic threat recognition system (AATR) developed at the Lawrence Livermore National Laboratory (LLNL) is described for x-ray CT images of baggage. The AATR automatically adapts to the input object requirement specification (ORS), which can change or evolve over time. These specifications characterize materials of interest (MOIs), basic physical features of interest (FOIs) (such as mass and thickness), and performance goals (detection and false alarm probabilities) for objects of interest (OOIs). The need and technical requirements for an AATR were developed in collaboration with DHS’s Explosives Division and Northeastern University’s Awareness and Localization of Explosives-Related Threats (ALERT) Center, a DHS Center of Excellence (http://www.northeastern.edu/alert/). Independent of the input ORS, LLNL’s AATR always uses the same algorithm and codes to process CT images. The algorithm adapts in real time to changes in the input ORS. LLNL’s AATR is thus suitable for dynamic scenarios in which the nature of the OOIs can change rapidly. The AATR uses a spatial consensus relaxation method to determine the most likely material composition for each CT image voxel. The resulting image of most likely material compositions is segmented. An OOI classification statistic (OOI score) is computed for each voxel and each extracted image volume. OOI recognition performance is reported using various metrics on a test set of ~180 plastic bins supplied by the ALERT Center of Excellence. A method is then proposed for automatic decision threshold estimation that can adapt to the detection performance goal, the most likely material composition, and the contents of the baggage.
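The abstract does not spell out the spatial consensus relaxation step, but the general idea of iteratively pulling each voxel's material likelihoods toward those of its neighbors can be sketched as follows. This is an illustrative reconstruction only: `consensus_relax`, the 4-neighbor averaging, and the blending weight `alpha` are assumptions, not the published AATR method.

```python
import numpy as np

def consensus_relax(likelihood, iters=5, alpha=0.5):
    """Illustrative spatial consensus relaxation (hypothetical sketch).

    likelihood: (H, W, M) array of per-voxel material likelihoods.
    Each iteration blends every voxel's likelihoods with the mean
    likelihoods of its 4-neighbours, pulling labels toward local
    consensus. Returns the (H, W) map of most likely material indices.
    """
    p = likelihood.astype(float).copy()
    for _ in range(iters):
        # mean of the 4-neighbour likelihoods (borders handled by padding)
        pad = np.pad(p, ((1, 1), (1, 1), (0, 0)), mode="edge")
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                 pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        p = (1 - alpha) * p + alpha * neigh
        p /= p.sum(axis=2, keepdims=True)  # keep likelihoods normalised
    return p.argmax(axis=2)
```

With this sketch, an isolated voxel whose own likelihoods weakly favor a different material than all of its neighbors is relabeled to the surrounding material, which is the consensus behavior the abstract describes.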
Gradient direction models for corners of prescribed acuteness, leg length, and leg thickness are constructed by
generating fields of unit vectors emanating from leg pixels that point normal to the edges. A novel FFT-based algorithm
that quickly matches models of corners at all possible positions and orientations in the image to fields of gradient
directions for image pixels is described. The signal strength of a corner is discussed in terms of the number of pixels
along the edges of a corner in an image, while noise is characterized by the coherence of gradient directions along those
edges. The detection-false alarm rate behavior of our corner detector is evaluated empirically by manually constructing
maps of corner locations in typical overhead images, and then generating different ROC curves for matches to models of
corners with different leg lengths and thicknesses. We then demonstrate how corners found with our detector can be
used to quickly and automatically find families of polygons of arbitrary position, size and orientation in overhead
images.
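The FFT-based sweep described above, matching a gradient-direction model at every image position with a single correlation, can be sketched by encoding each direction as a unit complex number; the real part of the cross-correlation between the image field and the model field then counts direction agreement at every offset. The function `gdm_match` and its details are illustrative assumptions, not the paper's implementation, and a separate pass per rotated model would cover the orientation sweep.

```python
import numpy as np

def gdm_match(image, model_mask):
    """Hypothetical sketch of FFT-based gradient-direction matching.

    Encodes gradient directions as unit complex numbers exp(i*theta);
    Re(cross-correlation) between the image field and the model field
    scores directional agreement at every model position, and the whole
    position sweep costs one forward/inverse FFT pair.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # unit direction field; zero where there is no edge energy
    field = np.where(mag > 1e-9, (gx + 1j * gy) / np.maximum(mag, 1e-9), 0)

    my, mx = np.gradient(model_mask.astype(float))
    mmag = np.hypot(mx, my)
    model = np.where(mmag > 1e-9, (mx + 1j * my) / np.maximum(mmag, 1e-9), 0)

    # score(u) = Re sum_x field(x+u) * conj(model(x)), via the FFT
    F = np.fft.fft2(field)
    M = np.fft.fft2(model, s=field.shape)
    return np.real(np.fft.ifft2(F * np.conj(M)))
```

When the model is an exact copy of a structure in the image, the Cauchy-Schwarz inequality guarantees the match surface peaks at the aligned offset.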
A robust approach for automatically extracting roads from overhead images is developed in this paper. The first step involves extracting a very dense set of edge pixels using a technique based on the magnitude and direction of pixel gradients. In step two, the edges are separated into successive channels of edge orientation that each contain edge pixels whose gradient directions lie within a different angular range. A de-cluttered map of edge curve segments is extracted from each channel, and the results are merged into a single composite map of broken edge curves. The final step divides broken curves into segments that are nearly linear and classifies each segment as connected at both ends or disconnected. A measure of connectability between two disconnected line segments based on proximity and relative alignment is defined mathematically. Each disconnected segment is paired with the disconnected segment that it is most connectable to. Pairs of segments are merged if their separation and misalignment are below thresholds (manually specified at present) and the connectability of the pair is two-way optimal. Extended curve and road extraction examples are provided using commercial overhead images.
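The connectability measure defined above combines proximity and relative alignment; one minimal way to realize such a score is sketched below. The exact functional form (`connectability`, the exponential proximity term, and the scale `d_scale`) is an assumption for illustration, not the paper's formula.

```python
import numpy as np

def connectability(seg_a, seg_b, d_scale=20.0):
    """Hypothetical connectability score between two line segments.

    Each segment is ((x0, y0), (x1, y1)). The score multiplies a
    proximity term (exponential in the gap between nearest endpoints)
    by an alignment term (|cosine| of the angle between segment
    directions), so nearby, nearly collinear segments score close to 1
    and distant or perpendicular ones score near 0.
    """
    a = np.asarray(seg_a, float)
    b = np.asarray(seg_b, float)
    # nearest pair of endpoints defines the gap
    gap = min(np.linalg.norm(pa - pb) for pa in a for pb in b)
    da, db = a[1] - a[0], b[1] - b[0]
    align = abs(np.dot(da, db)) / (np.linalg.norm(da) * np.linalg.norm(db))
    return np.exp(-gap / d_scale) * align
```

In the paper's merging rule, two disconnected segments would be joined only when each is the other's best-scoring partner (two-way optimality) and the gap and misalignment fall below thresholds.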
KEYWORDS: 3D modeling, Solids, Detection and tracking algorithms, 3D image processing, 3D acquisition, Image resolution, Process modeling, Model-based design, Visual process modeling, Image processing
Gradient direction matching (GDM) is the main target identification algorithm used in the Image Content Engine project at Lawrence Livermore National Laboratory. GDM is a 3D solid model-based edge-matching algorithm which does not require explicit edge extraction from the source image. The GDM algorithm is presented, identifying areas where performance enhancement seems possible. Improving the process of producing model gradient directions from the solid model by assigning different weights to different parts of the model is an extension tested in the current study. Given a simple geometric model, we attempt to determine, without obvious semantic clues, if different weight values produce significantly better matching accuracy, and how those weights should be assigned to produce the best matching accuracy. Two simple candidate strategies for assigning weights are proposed: pixel-weighted and edge-weighted. We adjust the weights of the components in a simple model of a tractor/semi-trailer using relevance feedback to produce an optimal set of weights for this model and a particular test image. The optimal weights are then compared with pixel and edge-weighting strategies to determine which is most suitable and under what circumstances.
The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) will be the world's
largest and most energetic laser. It has thousands of optics and depends heavily on the quality and performance of these
optics. Over the past several years, we have developed the NIF Optics Inspection Analysis System that automatically
finds defects in a specific optic by analyzing images taken of that optic.
This paper describes a new and complementary approach for the automatic detection of defects based on
detecting the diffraction ring patterns in downstream optic images caused by defects in upstream optics. Our approach
applies a robust pattern matching algorithm for images called Gradient Direction Matching (GDM). GDM compares
the gradient directions (the direction of flow from dark to light) of pixels in a test image to those of a specified model
and identifies regions in the test image whose gradient directions are most in line with those of the specified model. For
finding rings, we use luminance disk models whose pixels have gradient directions all pointing toward the center of the
disk. After GDM identifies potential ring locations, we rank these rings by how well they fit the theoretical diffraction
ring pattern equation. We perform false alarm mitigation by throwing out rings of low fit. A byproduct of this fitting
procedure is an estimate of the size of the defect and its distance from the image plane. We demonstrate the potential
effectiveness of this approach by showing examples of rings detected in real images of NIF optics.
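The disk model above has every gradient pointing at the center, so a candidate ring can be scored by how well image gradients near the candidate radius agree with the inward radial direction. The following is an illustrative sketch; `ring_score`, the band tolerance `tol`, and the cosine-agreement scoring are assumptions, not the NIF system's code.

```python
import numpy as np

def ring_score(image, cx, cy, radius, tol=1.5):
    """Hypothetical score for a bright ring centred at (cx, cy).

    For pixels within `tol` of the given radius, compares the image
    gradient direction (dark to light) with the inward radial direction,
    as in the luminance disk model where every gradient points at the
    centre. Returns the mean cosine agreement over pixels with edge
    energy: near 1 for a well-centred bright disk/ring boundary.
    """
    gy, gx = np.gradient(image.astype(float))
    ys, xs = np.indices(image.shape)
    dx, dy = cx - xs, cy - ys            # vectors pointing at the centre
    r = np.hypot(dx, dy)
    band = np.abs(r - radius) < tol
    mag = np.hypot(gx, gy)[band]
    # cosine between the gradient and the inward radial direction
    cos = (gx[band] * dx[band] + gy[band] * dy[band]) / (
        np.maximum(mag, 1e-9) * np.maximum(r[band], 1e-9))
    edge = mag > 1e-9
    return float(np.mean(cos[edge])) if np.any(edge) else 0.0
```

A candidate with a high score at some radius would then be passed to the diffraction-ring-equation fit described above; low-fit rings are discarded during false alarm mitigation.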
A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
The Image Content Engine (ICE) is being developed to provide cueing assistance to human image analysts faced with increasingly large and intractable amounts of image data. The ICE architecture includes user configurable feature extraction pipelines which produce intermediate feature vector and match surface files which can then be accessed by interactive relational queries. Application of the feature extraction algorithms to large collections of images may be extremely time consuming and is launched as a batch job on a Linux cluster. The query interface accesses only the intermediate files and returns candidate hits nearly instantaneously. Queries may be posed for individual objects or collections. The query interface prompts the user for feedback, and applies relevance feedback algorithms to revise the feature vector weighting and focus on relevant search results. Examples of feature extraction and both model-based and search-by-example queries are presented.
Image segmentation transforms pixel-level information from raw images to a higher level of abstraction in which related pixels are grouped into disjoint spatial regions. Such regions typically correspond to natural or man-made objects or structures, natural variations in land
cover, etc. For many image interpretation tasks (such as land use assessment, automatic target cueing, defining relationships between objects, etc.), segmentation can be an important early step.
Remotely sensed images (e.g., multi-spectral and hyperspectral images) often contain many spectral bands (i.e., multiple layers of 2D images). Multi-band images are important because they contain more information than single-band images. Objects or natural variations
that are readily apparent in certain spectral bands may be invisible in 2D broadband images. In this paper, the classical region growing approach to image segmentation is generalized from single to multi-band images. While it is widely recognized that the quality of image segmentation is affected by which segmentation algorithm is used, this paper shows that algorithm parameter values can have an even more profound effect. A novel self-calibration framework is developed
for automatically selecting parameter values that produce segmentations that most closely resemble a calibration edge map (derived separately using a simple edge detector). Although the
framework is generic in the sense that it can imbed any core segmentation algorithm, this paper only demonstrates self-calibration with multi-band region growing. The framework is applied to
a variety of AVIRIS image blocks at different spectral resolutions, in an effort to assess the impact of spectral resolution on segmentation quality. The image segmentations are assessed
quantitatively, and it is shown that segmentation quality does not generally appear to be highly correlated with spectral resolution.
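The self-calibration loop described above amounts to sweeping the core algorithm's parameter values and keeping the value whose segmentation boundaries best agree with the calibration edge map. The sketch below is illustrative: a simple intensity threshold stands in for the paper's multi-band region grower, and `self_calibrate`, `boundary_map`, and the Jaccard agreement score are assumptions.

```python
import numpy as np

def boundary_map(labels):
    """Mark pixels whose label differs from a right or down neighbour."""
    b = np.zeros(labels.shape, bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def self_calibrate(image, edge_map, thresholds):
    """Hypothetical self-calibration sweep: run the core segmenter at
    each parameter value, score its boundary map against the calibration
    edge map, and keep the best-scoring parameter."""
    best_t, best_score = None, -1.0
    for t in thresholds:
        labels = (image > t).astype(int)          # stand-in core segmenter
        b = boundary_map(labels)
        inter = np.logical_and(b, edge_map).sum()
        union = np.logical_or(b, edge_map).sum()
        score = inter / union if union else 0.0   # Jaccard agreement
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score
```

The framework is generic in exactly this sense: any core segmentation algorithm and parameter set can be substituted for the stand-in thresholder without changing the calibration loop.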
This paper presents various architectural options for implementing a K-Means Re-Clustering algorithm suitable for unsupervised segmentation of hyperspectral images. Performance metrics are developed based upon quantitative comparisons of convergence rates and segmentation quality. A methodology for making these comparisons is developed and used to establish K values that produce the best segmentations with minimal processing requirements. Convergence rates depend on the initial choice of cluster centers. Consequently, this same methodology may be used to evaluate the effectiveness of different initialization techniques.
An approach to automatic target cueing (ATC) in hyperspectral images, referred to as K-means reclustering, is introduced. The objective is to extract spatial clusters of spectrally related pixels having specified and distinctive spatial characteristics. K-means reclustering has three steps: spectral cluster initialization, spectral clustering and spatial re-clustering, plus an optional dimensionality reduction step. It provides an alternative to classical ATC algorithms based on anomaly detection, in which pixels are classified as type anomaly or background clutter. K-means reclustering is used to cue targets of various sizes in AVIRIS imagery. Statistical performance and computational complexity are evaluated experimentally as a function of the designated number of spectral classes (K) and the initially specified spectral cluster centers.
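The three steps above (spectral cluster initialization, spectral clustering, and spatial re-clustering) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the deterministic farthest-point seeding, the 4-connected flood fill, and the size window `[min_size, max_size]` standing in for the target-specific spatial constraints are all assumptions.

```python
import numpy as np

def kmeans_recluster(cube, k, min_size, max_size, iters=10):
    """Illustrative sketch of K-means reclustering.

    Step 1 clusters the spectra of an (H, W, B) cube into k spectral
    classes with plain k-means (deterministic farthest-point seeding).
    Step 2 splits each spectral class into 4-connected spatial
    components and keeps components whose pixel count lies in
    [min_size, max_size] as target cues. Returns a list of cues,
    each a list of (row, col) pixel coordinates.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)

    # farthest-point initialization, then standard k-means updates
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for j in range(k):
            if np.any(lab == j):
                centers[j] = X[lab == j].mean(0)
    lab = lab.reshape(h, w)

    # spatial re-clustering: 4-connected flood fill within each class
    cues, seen = [], np.zeros((h, w), bool)
    for y in range(h):
        for x in range(w):
            if seen[y, x]:
                continue
            comp, stack, c = [], [(y, x)], lab[y, x]
            seen[y, x] = True
            while stack:
                cy, cx = stack.pop()
                comp.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx),
                               (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
                            and lab[ny, nx] == c:
                        seen[ny, nx] = True
                        stack.append((ny, nx))
            if min_size <= len(comp) <= max_size:
                cues.append(comp)
    return cues
```

Unlike anomaly detection, no per-pixel decision threshold appears anywhere in this pipeline; the only free choices are K, the seeding, and the spatial constraints.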
Traditional algorithms for automatic target cueing (ATC) in hyperspectral images, such as the RX algorithm, treat anomaly detection as a simple hypothesis testing problem. Each decision threshold gives rise to a different set of anomalous pixels. The clustered RX algorithm generates target cues by grouping anomalous pixels into spatial clusters, and retaining only those clusters that satisfy target-specific spatial constraints. It produces one set of target cues for each of several decision thresholds, and conservatively requires O(K²) operations per pixel, where K is the number of spectral bands (which ranges from hundreds to thousands in hyperspectral images).
A novel ATC algorithm, known as Pixel Cluster Cueing (PCC), is discussed. PCC groups pixels into clusters based on spectral similarity and spatial proximity, and then selects only those clusters that satisfy target-specific spatial constraints as target cues. PCC requires only O(K) operations per pixel, and it produces only one set of target cues because it is not an anomaly detection algorithm, i.e., it does not use a decision threshold to classify individual pixels as anomalies. PCC is compared both computationally and statistically to the RX algorithm.
This paper describes methods, algorithms, and software developed to select and represent landmarks from cartographic databases and to automatically determine the landmark position in GOES imagery. A simulation study was conducted to demonstrate that the landmark position can be measured with subpixel accuracy.
A new approach is developed for detection of image objects and their orientations, based on distance transforms of intermediate level edge information (i.e., edge segments and vertices). Objects are modeled with edge segments and these edges are matched to edges extracted from an image by correlating spatially transformed versions of one with a distance transform of the other. Upper bounds on change in cross-correlation between edge maps and distance transforms are shown to be simple functions of change in translation and rotation. The process of computing the optimal object rotation at each possible translation can be accelerated by one to two orders of magnitude when these bounds are applied in conjunction with an object translation-rotation traversal strategy. Examples with detection and acceleration results demonstrate the robustness and high discriminatory power of the algorithm.
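The core correlation above is chamfer-style matching: translate the model's edge points and read off their distances to the nearest image edge from a precomputed distance transform. The sketch below is illustrative only (a brute-force distance transform for small images, and a mean-distance score); the paper's acceleration via upper bounds on score change under translation/rotation is noted but not implemented here.

```python
import numpy as np

def distance_transform(edges):
    """Brute-force Euclidean distance to the nearest edge pixel
    (fine for small sketches; real code would use a linear-time EDT)."""
    ys, xs = np.nonzero(edges)
    pts = np.stack([ys, xs], 1)
    gy, gx = np.indices(edges.shape)
    grid = np.stack([gy.ravel(), gx.ravel()], 1)
    d = np.sqrt(((grid[:, None, :] - pts[None]) ** 2).sum(-1)).min(1)
    return d.reshape(edges.shape)

def chamfer_score(dt, model_pts, ty, tx):
    """Mean distance from translated model edge points to the nearest
    image edge; low scores mean good matches. The paper's bounds on how
    fast this score can change under small translations and rotations
    allow large parts of the search space to be skipped."""
    p = model_pts + np.array([ty, tx])
    return float(dt[p[:, 0], p[:, 1]].mean())
```

At the correct translation every model edge point lands on an image edge and the score is exactly zero; any misalignment strictly increases it, which is what makes the bound-based pruning safe.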