We show multiple uses of space-time Fourier analysis in segmenting particular "activities" from sequences of images. Examples of such analysis include the detection and the characterization or mitigation of scintillations, harmonic motion, and transverse motion of objects. Characterization can include the estimation of parameters such as oscillation frequencies and the distances and velocities of moving objects. We make use of stereo and time-sequence imaging to generate scene data of spatio-temporal dimension higher than two. We demonstrate purely digital as well as hybrid digital/optical (correlator) implementations, and we discuss techniques for mapping between space and time to make the best use of imaging and optical resources.
The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not necessarily provide direct, arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed to each connecting module at carefully surveyed positions. The appearance of a subset of spots must form a constellation of specific relative positions in the incoming image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. In some cases, the precision of centroid estimation is required to be as fine as 1/20 pixel. This paper presents an approach to spot centroid estimation using cross-correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for compensating for shadows and lighting irregularities are discussed.
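A minimal sketch of the general technique, assuming a synthetic spot model of known centration (the names and the quadratic interpolation step are illustrative, not the flight code):

```python
# Cross-correlate a spot image with a precisely centered synthetic spot model,
# then refine the correlation peak to sub-pixel precision by quadratic fitting.
import numpy as np
from numpy.fft import fft2, ifft2

def subpixel_spot_centroid(image, model):
    # cyclic cross-correlation via the frequency domain
    corr = np.real(ifft2(fft2(image) * np.conj(fft2(model, s=image.shape))))
    iy, ix = np.unravel_index(corr.argmax(), corr.shape)

    def refine(cm1, c0, cp1):
        # vertex of the parabola through three samples around the peak
        denom = cm1 - 2.0 * c0 + cp1
        return 0.0 if denom == 0 else 0.5 * (cm1 - cp1) / denom

    dy = refine(corr[iy - 1, ix], corr[iy, ix], corr[(iy + 1) % corr.shape[0], ix])
    dx = refine(corr[iy, ix - 1], corr[iy, ix], corr[iy, (ix + 1) % corr.shape[1]])
    # the refined peak gives the spot's offset relative to the model's centration
    return iy + dy, ix + dx
```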
The development of anthropomorphic robots for simple autonomous operations, such as the grasping of tools, requires algorithms for automatic recognition and pose estimation of the objects. Correlation-based pattern recognition offers a robust set of tools for pose-specific detection and identification of objects. This paper discusses a system-level approach to image understanding whereby a robot is provided with training data (in the form of a computer model and an associated matched filter set), is presented with a view of the target object, and is expected to indicate recognition and to calculate a six degree-of-freedom pose estimate for the object. The pose information would then be used to specify a grasping orientation for the robot's hand. Examples are given from a proof-of-concept demonstration of the approach on an anthropomorphic robot developed at the Johnson Space Center.
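A sketch of the correlation-based detection step under these assumptions (the filter bank, its pose keys, and the peak-picking rule are illustrative, not the JSC system's actual interface):

```python
# Evaluate a bank of pose-specific matched filters against a scene and report
# the pose whose correlation peak is strongest.
import numpy as np

def best_pose(scene, filter_bank):
    """scene: 2-D image; filter_bank: dict mapping a 6-DOF pose tuple to a
    frequency-plane filter of the same shape. Returns (pose, peak, location)."""
    S = np.fft.fft2(scene)
    best = (None, -np.inf, None)
    for pose, H in filter_bank.items():
        corr = np.abs(np.fft.ifft2(S * np.conj(H)))
        loc = np.unravel_index(corr.argmax(), corr.shape)
        if corr[loc] > best[1]:
            best = (pose, corr[loc], loc)
    return best
```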
We show that certain additions to the mathematical model of a correlator can improve the signal-to-noise ratio of the optical correlator itself. All of our computational optimizations rely on digital simulation as a critical tool; the more accurate the model of the optical correlator, the better the optimization. Improvements to our correlator model, based on correlator measurements, are incorporated into the simulator used in our filter-optimization process. Comparisons of the signal-to-noise ratio are made both digitally and optically. Laboratory results are given.
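One common peak-to-noise definition that such digital/optical comparisons can use is sketched below; whether it matches the paper's exact signal-to-noise definition is an assumption:

```python
# Correlation-peak intensity over the variance of the correlation plane away
# from the peak (one conventional SNR metric; the paper's may differ).
import numpy as np

def correlation_snr(corr, guard=5):
    iy, ix = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    mask = np.ones(corr.shape, bool)
    mask[max(iy - guard, 0):iy + guard + 1, max(ix - guard, 0):ix + guard + 1] = False
    noise = np.abs(corr[mask])
    return np.abs(corr[iy, ix]) ** 2 / noise.var()
```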
If an optical correlator is to perform at full potential, the filtersmith must know what complex action will result from the control he applies to the filter SLM. If the SLM is spatially variant (and all are, to some degree or another), the behavior may differ at every frequency-plane pixel. We have previously reported characterization of the full-complex behavior at every pixel of the SLM. We have since refined the method in two distinct ways: we now perform multi-step interferometry (rather than only phase quadrature), and we have significantly improved the isolation of an individual pixel's complex action.
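For reference, a standard N-step phase-shifting reconstruction of the sort "multi-step interferometry" suggests is sketched below (equally spaced reference phases are assumed; the authors' laboratory procedure has further refinements):

```python
# Standard N-step phase-shifting algorithm: recover per-pixel phase and fringe
# modulation from N >= 3 interferograms at reference shifts 2*pi*k/N.
import numpy as np

def n_step_phase(frames):
    """frames: (N, H, W) interferograms. Returns per-pixel phase and modulation."""
    n = frames.shape[0]
    delta = 2.0 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(delta), frames, axes=1)   # sum_k I_k sin(delta_k)
    c = np.tensordot(np.cos(delta), frames, axes=1)   # sum_k I_k cos(delta_k)
    phase = np.arctan2(-s, c)                         # phase of the complex action
    modulation = 2.0 * np.sqrt(s ** 2 + c ** 2) / n   # fringe amplitude
    return phase, modulation
```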
We show how the signal-to-noise ratio distributes ideally in the complex plane of filter values, and we show how much of it is captured when the filter is restricted to the set of values the SLM is able to realize. The advantage conferred by a large dynamic range of filter magnitude is apparent. Further work will extend this concept to other metrics of optical correlator performance, including statistical pattern recognition criterion functions such as the Bayes error, the area under the ROC (receiver operating characteristic) curve, and the Fisher ratio.
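A minimal sketch of the restriction step, assuming the SLM's realizable values are given as a sampled operating curve (the selection criterion actually used may differ, e.g. a projection rather than a distance):

```python
# Map an ideal complex filter onto an SLM's restricted set of realizable
# values: per frequency-plane pixel, pick the nearest point on the curve.
import numpy as np

def map_to_operating_curve(ideal, curve):
    """ideal: (H, W) complex ideal filter; curve: (K,) complex values the SLM
    can realize. Returns the index map and the mapped (realizable) filter."""
    d = np.abs(ideal[..., None] - curve[None, None, :])  # (H, W, K) distances
    idx = d.argmin(axis=-1)
    return idx, curve[idx]
```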
We have developed theory for computing filters with as large a Fisher ratio as possible. The theory analytically accommodates a number of real-world conditions, including noise or clutter in the input scene that is known by its power spectral density, additive noise in the detection process, and constrained filter values. The theory is adaptable to single-class pattern recognition. Using laboratory results, we demonstrate Fisher-optimized filters that improve on some characteristics of our previous optimization of the Rayleigh quotient. Optimizing a filter for the Fisher ratio is not free of side effects; we show examples of the penalty paid as one asks a filter to recognize more and more different objects.
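For reference, the two-class Fisher ratio being maximized has the standard form below; the notation is ours, not necessarily the paper's:

```latex
% c_i(H) = correlation response of filter H to class i;
% m_i and \sigma_i^2 are the mean and variance of that response.
F(H) = \frac{\bigl(m_1(H) - m_2(H)\bigr)^2}{\sigma_1^2(H) + \sigma_2^2(H)}
```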
We report algorithms and laboratory practices that reveal the full complex behavior of an SLM over its entire face, pixel by pixel, and that put the information into a form useful to our filter-optimization code. We add a quadrature component to the interferometry and image each pixel of the SLM. We analyze the fringes not at one value of drive in an across-pixels dimension, but instead at each pixel in the drive dimension. We describe details of the method and give examples of spatially variant filter-SLM behavior. We also provide examples of the performance degradation that occurs when the filter's spatial variance has not been accommodated.
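A sketch of the quadrature (four-step) recovery of each pixel's complex action across drive levels, under an assumed data layout and sign convention:

```python
# Four-step quadrature interferometry per SLM pixel and per drive level,
# yielding a per-pixel operating curve for filter-optimization code.
import numpy as np

def per_pixel_operating_curve(I0, I90, I180, I270):
    """Each input: (D, H, W) interferogram stacks over D drive levels at
    reference phases 0, 90, 180, 270 degrees. Returns (D, H, W) complex action."""
    re = I0 - I180           # proportional to b*cos(phase)
    im = I270 - I90          # proportional to b*sin(phase); sign convention assumed
    amp = 0.5 * np.hypot(re, im)
    return amp * np.exp(1j * np.arctan2(im, re))
```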
Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce the time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT, by the Science and Technology Corporation-Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and a 3D fast Fourier transform as its primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability and to give TRACE maximum flexibility in its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows video capture boards to be interchanged without software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with that determined by the manual method is included in this paper.
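An illustrative dynamic-background-subtraction step of the kind TRACE's first method implies (the exact update rule and threshold here are assumptions):

```python
# Exponentially updated background model; pixels departing from it by more
# than `thresh` gray levels are flagged as cloud.
import numpy as np

def update_and_segment(frame, background, alpha=0.02, thresh=8.0):
    frame = frame.astype(float)
    mask = np.abs(frame - background) > thresh
    # update the background only where no cloud is detected
    background = np.where(mask, background, (1 - alpha) * background + alpha * frame)
    return mask, background
```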
The Atmospheric Transmission Large-Area Analysis System (ATLAS) has been used by the West Desert Test Center (WDTC), Dugway Proving Ground, UT, since 1994 to assist in the characterization of aerosol clouds. ATLAS is a tool for measuring transmittance through aerosol clouds in the far-infrared (8-14 μm) spectral region. It is a passive, single-ended system employing a thermal imager for data collection and using the natural background as the reference source. The final ATLAS product is a 2D transmission map of the aerosol cloud as seen by the imager. Historically, ATLAS data reduction and map production have been a lengthy process, involving transportation of the infrared video tapes from the field test site to the WDTC Optical Data Laboratory, digitization of the tapes, and subsequent image processing of the video frames to produce transmission maps as a function of time. To significantly reduce data processing and delivery time, the WDTC and Science and Technology Corporation have developed the Real-Time ATLAS (RT-ATLAS) system. RT-ATLAS is a field-portable system that reduces turn-around time from days to real time for approximate results and to tens of minutes for final products. This paper describes the physics of the ATLAS technique, the physical RT-ATLAS system, and new enhancements to the ATLAS system. Data examples and analysis are presented, and RT-ATLAS strengths and limitations are discussed.
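A heavily simplified sketch of passive transmission retrieval using a single-layer radiative model, L_obs = T*L_bg + (1 - T)*L_cloud, solved per pixel for T; whether ATLAS uses exactly this form is an assumption based on the abstract:

```python
# Per-pixel transmission from an observed radiance image, a cloud-free
# reference of the same background, and an estimated cloud emission radiance.
import numpy as np

def transmission_map(L_obs, L_bg, L_cloud):
    den = L_bg - L_cloud
    with np.errstate(divide="ignore", invalid="ignore"):
        T = (L_obs - L_cloud) / den
    T = np.where(np.abs(den) > 1e-9, T, np.nan)   # no contrast -> undefined
    return np.clip(T, 0.0, 1.0)
```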
We report experimental laboratory results using filters that optimize the Rayleigh quotient [Richard D. Juday, 'Generalized Rayleigh quotient approach to filter optimization,' JOSA-A 15(4), 777-790 (April 1998)] for discriminating between two similar objects. That quotient is the ratio of the correlation responses to the two differing objects. In distinction from previous optical processing methods, it includes the phase of both objects, not only the phase of the 'accept' object, in the computation of the filter. In distinction from digital methods, it is explicitly constrained to optically realizable filter values throughout the optimization process.
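In our notation (consistent with the abstract's description, not copied from the paper), the quotient being optimized is the ratio of correlation-peak intensities for the 'accept' object x and the 'reject' object y:

```latex
% H, x, y are frequency-domain vectors; \dagger denotes conjugate transpose.
% The rank-one quadratic forms make this a generalized Rayleigh quotient.
R(H) = \frac{\lvert H^{\dagger} x \rvert^{2}}{\lvert H^{\dagger} y \rvert^{2}}
     = \frac{H^{\dagger}\,(x x^{\dagger})\,H}{H^{\dagger}\,(y y^{\dagger})\,H}
```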
In the process of creating synthetic scenes for use in simulations/visualizations, texture is used as a surrogate for 'high' spatial definition. For example, if one were to measure the location of every blade of grass in a lawn and all of the characteristics of each blade, a scene composed from those measurements would be expected to appear 'real'; because that process is excruciatingly laborious, however, various techniques have been devised to place the required details in the scene through texturing. Experience gained during the recent Smart Weapons Operability Enhancement Joint Test and Evaluation (SWOE JT&E) has shown the need for higher-fidelity texturing algorithms and a better parameterization of those in use. In this study, four aspects of the problem have been analyzed: texture extraction, texture insertion, texture metrics, and texture creation algorithms. The results of extracting real texture from an image, measuring it with a variety of metrics, and generating similar texture with three different algorithms are presented. These same metrics can be used to define clutter and to make objective comparisons between 'real' and synthetic (or artificial) scenes.
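One simple example of the kind of texture metric such a study can apply to real and synthetic patches (illustrative only; the study's actual metric suite is richer):

```python
# Gray-level co-occurrence contrast for a one-pixel horizontal displacement:
# higher values indicate coarser, higher-contrast texture.
import numpy as np

def glcm_contrast(patch, levels=32):
    q = (patch.astype(float) / (patch.max() + 1e-12) * (levels - 1)).astype(int)
    pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()])
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (pairs[0], pairs[1]), 1.0)   # accumulate co-occurrences
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return ((i - j) ** 2 * glcm).sum()
```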
Current efforts to design reliable background scene generation programs require validation using real images for comparison. A crucial step in making objective comparisons is to parameterize the real and generated images into a common set of feature metrics. Such metrics can be derived from statistical and transform-based analyses and yield information about the structures and textures present in various image regions of interest. This paper presents the results of such a metrics-development process for the Smart Weapons Operability Enhancement (SWOE) Joint Test and Evaluation (JT&E) program. Statistical and transform-based techniques were applied to images obtained from two separate locations, Grayling, Michigan, and Yuma, Arizona, at various times of day and under a variety of environmental conditions. Statistical analyses of scene radiance distributions and 'clutter' content were performed both spatially and temporally. Fourier- and wavelet-transform methods were applied as well. Results and their interpretations are given for the image analyses. The metrics that provide the clearest and most reliable distinction between feature classes are recommended.
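As one concrete example of a statistical clutter metric of the sort evaluated in such suites, a Schmieder-Weathersby-style RSS clutter measure is sketched below (whether SWOE used exactly this form is an assumption):

```python
# Root-sum-square clutter: RMS of the radiance variances of fixed-size cells
# tiled across the scene.
import numpy as np

def rss_clutter(image, cell=16):
    h, w = (s - s % cell for s in image.shape)          # crop to whole cells
    cells = image[:h, :w].reshape(h // cell, cell, w // cell, cell)
    variances = cells.var(axis=(1, 3))                  # per-cell variance
    return np.sqrt(variances.mean())
```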