Observations of seemingly hostile aqueous environments on Earth show that lifeforms can not only evolve but thrive in conditions that, by human standards, are extreme. Such lifeforms, typically termed "extremophiles," can, for example, live in the vicinity of deep-water volcanic vents that spew superheated, sulfur-laden water at intense pressures. Since similar conditions may exist on Jupiter's moon Europa, there is widespread interest in developing an autonomous search-for-life capability that could be deployed in aqueous, extraterrestrial environments.
As one step toward this goal, the DEep Phreatic THermal eXplorer (DEPTHX) is a NASA Astrobiology Science and Technology for Exploring Planets (ASTEP) project to design, develop and field-test a robotic vehicle to explore such environments. The principal astrobiological science objective of DEPTHX is to develop an advanced methodology and protocol for the discrimination of microbial life in a sub-aqueous environment. Implementation requires the design, development, and demonstration of a fully autonomous architecture for intelligent biological sample detection and collection, whereby the robotic device will be capable of performing the following functions:
1. Map deep hydrothermal springs in three dimensions with high accuracy.
2. Acquire data from a hierarchical suite of on-board microbial life-detection sensors and processors, and analyze those data to determine whether life is present.
3. Aseptically collect specimens, preserve them under ambient conditions, and return them for subsequent ex-situ laboratory analysis.
This paper describes current progress toward these objectives, with an emphasis on the analysis of life-sensor data for the purpose of detecting lifeforms.
This paper describes a 3-D imaging technique developed as an internal research project at Southwest Research Institute. The technique is based on an extension of structured light methods in which a projected pattern of parallel lines is rotated over the surface to be measured. A sequence of images is captured, and the surface elevation at any point can then be determined from measurements of the temporal pattern at that point, without considering any other points on the surface. The paper describes techniques for system calibration and surface measurement based on the method of projected quadric shells. Algorithms were developed for image and signal analysis, and computer programs were written to calibrate the system and to calculate 3-D coordinates of points on a measured surface. A prototype of the Dynamic Structured Light (DSL) 3-D imaging system was assembled and typical parts were measured. The design procedure was verified and used to implement several configurations with different measurement volumes and accuracies. A small-parts measurement accuracy of 32 micrometers (0.0012 in.) RMS was verified by measuring the surface of a precision-machined plane. Large aircraft control surfaces were measured with a prototype setup that provided 0.02 in. depth resolution over a 4 ft by 8 ft field of view. Measurement times are typically less than three minutes for 300,000 points. A patent application has been filed.
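As an illustration of the per-point measurement idea described above, the sketch below (Python/NumPy) estimates a temporal phase independently at every pixel of the captured image sequence and maps it to elevation through a per-pixel calibration model. The single-frequency phase fit and the polynomial calibration are assumptions for illustration only; the actual system's calibration is based on projected quadric shells and is not reproduced here.

```python
import numpy as np

def per_pixel_phase(frames):
    """Estimate the temporal phase of the swept stripe pattern at every pixel.

    frames : (T, H, W) array of grayscale images captured while the projected
    line pattern moves over the surface.  A single-frequency sinusoidal model
    of the per-pixel intensity history is assumed purely for illustration.
    """
    t = np.arange(frames.shape[0])
    w = 2.0 * np.pi * t / len(t)
    # Correlate each pixel's intensity history with one sine/cosine pair.
    c = np.tensordot(np.cos(w), frames, axes=(0, 0))
    s = np.tensordot(np.sin(w), frames, axes=(0, 0))
    return np.arctan2(s, c)                      # (H, W) phase map

def phase_to_elevation(phase, calib):
    """Map the per-pixel phase to surface elevation.

    calib : (K, H, W) per-pixel polynomial coefficients obtained, for example,
    by imaging reference planes at known heights (a hypothetical calibration
    model standing in for the projected-quadric-shell calibration).
    """
    z = np.zeros_like(phase, dtype=float)
    for k, coeff in enumerate(calib):
        z += coeff * phase ** k
    return z
```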
Machine vision applications of thermal image sensors have been investigated under an internal research and development project at Southwest Research Institute. Initial investigations characterized the response of a low-cost, non-radiometric camera. Subsequent application investigations developed defect-detection capabilities for injection-molded rubber parts. Various thermal excitation and material-handling approaches were investigated. Image processing software was developed to detect anomalous temperature responses. Research findings, part inspection approaches, and image processing techniques are discussed.
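A minimal sketch of anomalous-temperature-response detection is given below, assuming per-pixel baseline statistics gathered from known-good parts after the same thermal excitation. The z-score test and its threshold are illustrative; the abstract does not specify the image processing actually used.

```python
import numpy as np

def thermal_anomaly_mask(part_image, baseline_mean, baseline_std, k=3.0):
    """Flag pixels whose thermal response deviates from a learned baseline.

    part_image    : (H, W) thermal image of a part after excitation
    baseline_mean : (H, W) per-pixel mean response of known-good parts
    baseline_std  : (H, W) per-pixel standard deviation of that response
    k             : z-score threshold (illustrative value)
    """
    z = (part_image.astype(float) - baseline_mean) / (baseline_std + 1e-6)
    return np.abs(z) > k      # boolean mask of anomalous-temperature pixels
```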
The ability to extract useful and robust features from sensor data of vehicles in moving traffic depends on a number of factors. For imaging sensors that produce a two-dimensional representation of an observed scene, such as a visible light camera, the principal factors influencing the quality of the acquired data include the ambient lighting and weather conditions as well as the physical characteristics of the vehicles whose images are captured. Considerable variability in ambient lighting, in combination with material characteristics, may cause radically different appearances for various surfaces of a vehicle when viewed at visible wavelengths. Infrared sensors, on the other hand, produce images that are far less sensitive to variations in ambient lighting, but may not provide sufficient information to discriminate among vehicles. Combining information from these sensors provides the basis for exploiting the relative strengths of each sensor domain while attenuating the weaknesses of single-sensor systems. This paper presents a basic framework for combining information from multiple sensor systems by describing methodologies for geometrically transforming between image spaces and extracting features using a multi-dimensional approach that exploits information gathered at different wavelengths. The potential use of point sensors (such as acoustic and microwave detectors) in combination with imaging sensors is also discussed.
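The sketch below illustrates one way to transform between image spaces and form a simple multi-band feature, assuming corresponding points in the infrared and visible images and a planar-scene homography (OpenCV). The transformation model and the patch statistics are assumptions for illustration, not the paper's specific method.

```python
import cv2
import numpy as np

def register_ir_to_visible(ir_image, ir_pts, vis_pts, vis_size):
    """Warp an infrared image into the visible camera's image space.

    ir_pts, vis_pts : (N, 2) corresponding points seen in both bands
                      (N >= 4, e.g. calibration targets or lane markings)
    vis_size        : (width, height) of the visible image
    """
    H, _ = cv2.findHomography(np.float32(ir_pts), np.float32(vis_pts),
                              cv2.RANSAC, 3.0)
    return cv2.warpPerspective(ir_image, H, vis_size)

def fused_feature_vector(vis_patch, ir_patch):
    """Concatenate simple per-band statistics for a co-registered vehicle patch."""
    return np.hstack([vis_patch.mean(), vis_patch.std(),
                      ir_patch.mean(), ir_patch.std()])
```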
A fiber optic sensor for determining the type and condition of aircraft coatings was originally developed to provide adaptive control of automated coating removal. The sensor, based on analysis of optical reflectance spectra, has also been found useful for determining the condition of other materials. Investigations have shown that artificial neural networks can be trained to recognize specific materials or material conditions from the sensor signals.
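A hedged sketch of the neural-network step is shown below, using a small scikit-learn multilayer perceptron trained on labeled reflectance spectra. The network size, library, and function names are illustrative; the abstract does not state which architecture or training method was used.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_coating_classifier(spectra, labels, hidden=(32,)):
    """Train a small neural network to recognize coating materials or conditions.

    spectra : (N, n_wavelengths) reflectance spectra from the fiber optic sensor
    labels  : (N,) material or condition names for each spectrum
    """
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000)
    clf.fit(np.asarray(spectra), np.asarray(labels))
    return clf

def classify_coating(clf, spectrum):
    """Predict the material or condition for a single reflectance spectrum."""
    return clf.predict(np.asarray(spectrum).reshape(1, -1))[0]
```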
The principles of scalable computing have been used in an investigation of the application of high-speed data networks and remote computer resources in providing visualization tools for research and development activities. The architecture of a distributed visualization system that can utilize either shared-memory or message-passing paradigms is described. The three components of the system can be physically separated if network communication is provided. A flexible data cache server is used to accommodate newly computed data or data from an earlier experiment or computation. An image specification toolset, implemented for parallel/distributed architectures using PVM, includes methods of calculating common visualization forms such as vector fields, surfaces, or streamlines from cache data. An image generation library, implemented for workstations and high-performance PCs, receives the data objects and provides investigators with flexibility in image display. The system has been operated with several combinations of distributed and parallel processor machines connected by networks of different bandwidths and capacities. Observations on the performance and flexibility of different system architectures are given.
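As a rough, modern stand-in for the three-component split (data cache server, image specification toolset, image generation library), the sketch below exposes a shared data cache over the network using Python's multiprocessing managers rather than PVM. The component names, the "scalar_field" key, and the derived "surface mask" quantity are illustrative only.

```python
from multiprocessing.managers import BaseManager
import numpy as np

_cache = {}   # lives in the cache-server process

class CacheManager(BaseManager):
    """Network proxy access to the shared visualization data cache."""

CacheManager.register("get_cache", callable=lambda: _cache)

def run_cache_server(address=("", 50000), authkey=b"viz"):
    # Data cache server: holds newly computed data or data from an earlier
    # experiment.  Run this in its own process or on its own host.
    CacheManager(address=address, authkey=authkey).get_server().serve_forever()

def specification_worker(address=("localhost", 50000), authkey=b"viz"):
    # Image-specification component: pulls raw field data from the cache,
    # derives a displayable form, and writes it back for an image-generation
    # client to fetch.  (A simple threshold "surface" stands in for real
    # isosurface or streamline extraction.)
    mgr = CacheManager(address=address, authkey=authkey)
    mgr.connect()
    cache = mgr.get_cache()
    field = np.asarray(cache.get("scalar_field"))   # assumed already stored
    cache.update({"surface_mask": (field > field.mean()).tolist()})
```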
A method for isolating three-dimensional features of known height in the presence of noisy data is presented. The approach is founded on observing the locations of a single light stripe in the image planes of two spatially separated cameras. Knowledge relating to the heights of sought features is used to define regions of interest in each image, which are searched to isolate the light stripe. This approach is advantageous because spurious features that may result from random reflections or refractions in the region of interest of one image usually do not appear in the corresponding region of interest of the other image. It is shown that such a system is capable of robustly locating features such as very thin vertical dividers even in the presence of spurious or noisy image data that would normally cause conventional single-camera light-striping systems to fail. The discussion that follows summarizes the advantages of the methodology in relation to conventional passive stereoscopic systems as well as light-striped triangulation systems. Results that characterize the approach in noisy images are also provided.
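The sketch below illustrates the cross-validation idea, assuming the two regions of interest are offset so that a feature at the sought height projects to the same column in both images; the brightest-pixel stripe rule and the column tolerance are illustrative, not the paper's exact procedure.

```python
import numpy as np

def stripe_columns(roi, intensity_threshold=128):
    """For each row of a region of interest, return the column of the brightest
    pixel, or -1 where no pixel exceeds the stripe intensity threshold."""
    cols = roi.argmax(axis=1)
    cols[roi.max(axis=1) < intensity_threshold] = -1
    return cols

def confirmed_feature_rows(roi_cam1, roi_cam2, max_column_error=2):
    """Keep only rows where both cameras see the stripe at mutually consistent
    positions.  Spurious reflections or refractions seen by one camera rarely
    appear at the corresponding location in the other, so they are rejected."""
    c1 = stripe_columns(roi_cam1)
    c2 = stripe_columns(roi_cam2)
    both_seen = (c1 >= 0) & (c2 >= 0)
    consistent = np.abs(c1 - c2) <= max_column_error
    return np.nonzero(both_seen & consistent)[0]
```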
A computer vision based automated method for identifying and quantifying flaws in cast metal parts is presented. The specific defects to be isolated consist of small circular concavities in the surface (pits) and larger isolated regions (scratches) that may have been abraded due to cutting or handling operations. The approach identifies these anomalous features using two spatially separated light sources with different spectral characteristics to produce highly specular illumination at one wavelength and shallow diffuse illumination at a different wavelength. A bispectral image is processed to yield the sought flaws. This processing consists of identifying regions of interest in the original image that may contain potential flaws and applying a morphological region-labeling operation to extract candidate pits and scratches. Geometric constraints are then applied to the extracted regions to isolate the true flaws. The discussion that follows details the algorithmic approach used to identify flaws and characterizes the results obtained.
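A minimal sketch of the region-labeling and geometric-constraint steps is given below (Python/OpenCV), assuming the two illumination bands are captured as registered 8-bit grayscale images. The band-difference thresholding and the pit/scratch criteria are illustrative stand-ins for the paper's specific processing.

```python
import cv2
import numpy as np

def extract_flaws(specular_band, diffuse_band, min_area=10,
                  max_pit_area=200, min_circularity=0.6):
    """Label candidate regions in a bispectral image pair and classify them as
    pits (small, round) or scratches (larger/elongated) using geometric rules.

    specular_band, diffuse_band : registered 8-bit grayscale images captured
    under the two illumination wavelengths.
    """
    # Candidate flaw pixels respond differently under the two illuminations.
    diff = cv2.absdiff(specular_band, diffuse_band)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    pits, scratches = [], []
    for i in range(1, n):                                   # label 0 is background
        area = stats[i, cv2.CC_STAT_AREA]
        if area < min_area:
            continue
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        circularity = 4.0 * area / (np.pi * max(w, h) ** 2)  # crude roundness proxy
        if area <= max_pit_area and circularity >= min_circularity:
            pits.append(i)
        else:
            scratches.append(i)
    return pits, scratches, labels
```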