We aim to identify humans in multimodal imagery by predicting the human long-wave infrared (LWIR) signature in a
variety of scenarios. By adapting Tanabe's thermocomfort model, we simulate human body heat flow both between
tissue layers (core, muscle, fat and skin) and between body segments (head, chest, upper arm, etc.). To assess the validity
of our implementation, we simulated the conditions described in actual human subject studies, and compared our results
to values reported in the literature. Inputs to the model include age, height, weight, clothing, physical activity and
ambient conditions, including temperature, humidity and wind velocity. Iterating the heat transport equations together with a thermoregulatory component yields the time evolution of each segment's surface temperature. Our model was found to be in close
agreement with experimentally collected data, with a maximum deviation from literature values of approximately 0.80%.
By comparing the predicted human thermal signature to deblurred LWIR images and then fusing this information at the
feature level with high-resolution electro-optical image data, we can facilitate identity detection of objects in a scene
acquired under different conditions. Ultimately, our goal is to differentiate humans from their surroundings and label
non-human objects as thermal clutter.
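The abstract does not give the model equations, so the following Python sketch is only a hedged illustration of the kind of iteration it describes: a lumped two-node (core/skin) heat balance stepped forward in time with explicit Euler. The two-node simplification, every coefficient value and the function name simulate_two_node are assumptions for illustration; they are not Tanabe's published segment parameters.

```python
import numpy as np

# Hedged illustration only: a lumped two-node (core/skin) heat balance stepped
# forward with explicit Euler. All coefficients are placeholder values,
# not Tanabe's published segment parameters.
def simulate_two_node(t_air=20.0, hours=2.0, dt=1.0):
    c_core, c_skin = 2.0e5, 2.0e4   # heat capacities [J/K] (assumed)
    k_cs = 11.5                     # core-to-skin conductance [W/K] (assumed)
    h_env = 12.0                    # skin-to-air loss coefficient [W/K] (assumed)
    q_met = 100.0                   # resting metabolic heat production [W] (assumed)
    t_core, t_skin = 37.0, 33.0     # initial temperatures [deg C]
    history = []
    for _ in range(int(hours * 3600 / dt)):
        q_core_to_skin = k_cs * (t_core - t_skin)
        q_skin_to_air = h_env * (t_skin - t_air)
        t_core += dt * (q_met - q_core_to_skin) / c_core
        t_skin += dt * (q_core_to_skin - q_skin_to_air) / c_skin
        history.append((t_core, t_skin))
    return np.array(history)

temps = simulate_two_node(t_air=20.0)
print("skin temperature after 2 h [deg C]: %.1f" % temps[-1, 1])
```

The model described in the abstract extends this idea to four tissue layers per body segment plus a thermoregulatory component, with the skin-layer temperatures providing the predicted LWIR signature.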
KEYWORDS: Object recognition, Information theory, Signal to noise ratio, Monte Carlo methods, Detection and tracking algorithms, Image transmission, Associative arrays, Image analysis, Image processing, Calibration
Discrimination of friendly or hostile objects is investigated using information-theoretic measures and metrics in an image
which has been compromised by a number of factors. In aerial military images, objects with different orientations can
be reasonably approximated by a single identification signature consisting of the average histogram of the object under
rotations. Three different information-theoretic measures/metrics are studied as possible criteria to help classify the
objects. The first measure is the standard mutual information (MI) between the sampled object and the library object
signatures. A second measure is based on information efficiency, which differs from MI. Finally, an information
distance metric is employed which determines the distance, in an information sense, between the sampled object and the
library object. It is shown that the three (parsimonious) information-theoretic variables introduced here form an
independent basis in the sense that any variable in the information channel can be uniquely expressed in terms of the
three parameters introduced here. The methodology discussed is tested on a sample set of standardized images to
evaluate its efficacy. A performance standardization methodology is presented which is based on manipulation of
contrast, brightness, and size attributes of the sample objects of interest.
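The abstract names the three quantities but not their exact formulas. The sketch below computes generic histogram-based versions commonly used under those names: mutual information I(X;Y) = H(X) + H(Y) - H(X,Y), an efficiency-style normalization I(X;Y)/H(X,Y) and the variation-of-information distance H(X,Y) - I(X;Y), which is a true metric. The bin count and the particular normalization are assumptions; the authors' definitions may differ.

```python
import numpy as np

def info_measures(img_a, img_b, bins=32):
    """Mutual information, an efficiency-style ratio and an information
    distance from the joint histogram of two equal-size grayscale images.
    Generic textbook definitions; not necessarily the paper's exact ones."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1)
    p_y = p_xy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    h_x, h_y, h_xy = entropy(p_x), entropy(p_y), entropy(p_xy.ravel())
    mi = h_x + h_y - h_xy                        # mutual information I(X;Y)
    efficiency = mi / h_xy if h_xy > 0 else 0.0  # one common normalization
    distance = h_xy - mi                         # variation of information
    return mi, efficiency, distance

# Toy usage: a sampled object image versus a noisy copy of itself.
rng = np.random.default_rng(0)
sampled = rng.integers(0, 256, size=(64, 64)).astype(float)
library = np.clip(sampled + rng.normal(0, 20, size=sampled.shape), 0, 255)
print(info_measures(sampled, library))
```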
KEYWORDS: Signal to noise ratio, Detection and tracking algorithms, Stochastic processes, Interference (communication), Nonlinear filtering, Signal detection, Electronic filtering, Filtering (signal processing), Physics, Monte Carlo methods
Object detection in images was conducted using a nonlinear means of improving signal-to-noise ratio termed "stochastic resonance" (SR). In a recent United States patent application, it was shown that arbitrarily large signal-to-noise ratio gains could be realized when a signal detection problem is cast within the context of an SR filter. Signal-to-noise ratio
measures were investigated. For a binary object recognition task (friendly versus hostile), the method was implemented
by perturbing the recognition algorithm and subsequently thresholding via a computer simulation. To fairly test the
efficacy of the proposed algorithm, a unique database of images was constructed by modifying two sample library objects, adjusting their brightness, contrast and relative size via commercial software to gradually compromise their
saliency to identification. The key to the use of the SR method is to produce a small perturbation in the identification
algorithm and then to threshold the results, thus improving the overall system's ability to discern objects. A background
discussion of the SR method is presented. A standard test is proposed in which object identification algorithms could be
fairly compared against each other with respect to their relative performance.
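The abstract gives no implementation details beyond "perturb and threshold", so the sketch below illustrates only the general SR idea as read from that description: a sub-threshold match score is perturbed with small random noise, each perturbed copy is thresholded, and the fraction of crossings is reported as a soft detection vote. The scores, noise level and threshold are invented for the example and are not taken from the patent application or the paper.

```python
import numpy as np

def sr_detect(score, threshold, noise_sigma=0.05, trials=200, rng=None):
    """Stochastic-resonance-style detector sketch: perturb a match score with
    small zero-mean noise, threshold each trial and return the fraction of
    trials that exceed the threshold."""
    rng = np.random.default_rng() if rng is None else rng
    perturbed = score + noise_sigma * rng.standard_normal(trials)
    return float(np.mean(perturbed > threshold))

rng = np.random.default_rng(1)
threshold = 0.50
weak_target_score = 0.48   # just below threshold: missed by a hard test
clutter_score = 0.30       # well below threshold
print("target vote: ", sr_detect(weak_target_score, threshold, rng=rng))
print("clutter vote:", sr_detect(clutter_score, threshold, rng=rng))
```

In this toy setting both scores fail a hard threshold test, but the perturb-and-threshold vote clearly separates the near-threshold target from the clutter.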
Our challenge was to develop a semi-automatic target detection algorithm to aid human operators in
locating potential targets within images. In contrast to currently available methods, our approach is
relatively insensitive to image brightness, image contrast and object orientation. Working on overlapping
image blocks, we used a sliding difference method of histogram matching. As the histograms of the known object template and the image region of interest (ROI) were incrementally slid past one another, the sum of absolute histogram differences was calculated at each offset. The minimum of the resultant array was stored in the corresponding spatial position of the response surface matrix. Local minima of the response surface suggest possible target locations. Because the template contrast will rarely perfectly match that of the actual image, which can be compromised by illumination conditions, background features, cloud cover, etc., we perform a random contrast manipulation, which we term 'wobble', on the template histogram. Our
results have shown increased object detection with the combination of the sliding histogram difference and
wobble.
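The abstract describes the sliding-difference and wobble steps only in outline; the sketch below is one plausible reading of them, with the block size, histogram bin count, shift range, circular shifting and the exact form of the contrast 'wobble' all assumed for illustration rather than taken from the paper.

```python
import numpy as np

def sliding_hist_difference(roi_hist, template_hist, max_shift=16):
    """Minimum sum of absolute bin differences as the template histogram is
    slid across the ROI histogram (circular shift used for simplicity)."""
    return min(np.abs(roi_hist - np.roll(template_hist, s)).sum()
               for s in range(-max_shift, max_shift + 1))

def wobble(hist, strength=0.1, rng=None):
    """Random contrast manipulation of a histogram: rescale the bin axis by a
    random factor (one assumed form of the paper's 'wobble')."""
    rng = np.random.default_rng() if rng is None else rng
    factor = 1.0 + strength * (2.0 * rng.random() - 1.0)
    src = np.clip((np.arange(len(hist)) * factor).astype(int), 0, len(hist) - 1)
    wobbled = np.zeros_like(hist)
    np.add.at(wobbled, src, hist)
    return wobbled

def response_surface(image, template, block=32, step=16, bins=64, wobbles=5):
    """Response surface of minimum sliding histogram differences over
    overlapping image blocks; local minima suggest candidate target locations."""
    rng = np.random.default_rng(0)
    t_hist, _ = np.histogram(template, bins=bins, range=(0, 256))
    rows = (image.shape[0] - block) // step + 1
    cols = (image.shape[1] - block) // step + 1
    surface = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            roi = image[r * step:r * step + block, c * step:c * step + block]
            r_hist, _ = np.histogram(roi, bins=bins, range=(0, 256))
            # nominal template histogram plus a few wobbled variants
            candidates = [t_hist] + [wobble(t_hist, rng=rng) for _ in range(wobbles)]
            surface[r, c] = min(sliding_hist_difference(r_hist, h) for h in candidates)
    return surface
```

Candidate detections would then be taken at the local minima of the returned surface, as the abstract describes.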
On-orbit servicing (OOS) is growing in importance for the sustainment of certain satellite systems. Although it is more economical to replace satellites in many cases, OOS could be beneficial, or even critical, for more expensive satellites such as the Space-Based Laser and for constellations such as the Global Positioning System. Some future OOS missions, including refueling and modular component replacement, will be highly autonomous, but there will still be a need for humans to supervise and to recover when unexpected situations arise. Non-routine tasks such as damage repair or optics cleaning will likely require a more significant level of human control. The human interfaces for such activities can include body tracking systems, three-dimensional audio and video, tactile feedback devices, and others. This paper will provide some insights into when and at what level human interaction may be needed for OOS tasks. Example missions will be discussed, and the argument will be made that human interfaces are important even for primarily autonomous missions. Finally, some current research efforts within NASA, academia and the military will be discussed, including research being conducted at the Air Force Research Laboratory at Wright-Patterson Air Force Base.