This PDF file contains the front matter associated with SPIE Proceedings Volume 6560, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and
extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS
approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first
extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that
compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g.,
office vs. kitchen scene. The second extension is an Event Recognition Engine (ERE) that recognizes spatio-temporal
sequences, or events. This extension uses a working memory model to recognize events and behaviors in
video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based
on an ARTSTORE [1] neural network that combines an ART-based neural network with a cascade of sustained temporal
order recurrent (STORE) [1] neural networks. A series of Default ARTMAP classifiers ascribes event labels to these
sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile.
We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene
classification task using the LabelMe [2] dataset. The ERE was tested on real-world video footage of vehicles and
pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and
pedestrians.
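The order-encoding idea behind a STORE-style working memory can be sketched with a toy model in which every stored trace decays as each new item arrives, so the pattern of activation strengths encodes temporal order. The decay constant and the event labels below are illustrative, not the actual ARTSTORE implementation.

```python
def store_sequence(items, decay=0.8):
    """Toy STORE-style working memory: every existing trace is multiplied
    by `decay` when a new item arrives, so the activation pattern encodes
    temporal order (the newest item carries the strongest trace).
    Repeated items simply refresh their trace in this simplified model."""
    memory = {}
    for item in items:
        for stored in memory:
            memory[stored] *= decay
        memory[item] = 1.0
    return memory

def recall_order(memory):
    # Read items back from weakest (oldest) to strongest (newest) trace.
    return [item for item, _ in sorted(memory.items(), key=lambda kv: kv[1])]
```

An ordered event sequence such as "vehicle appears, vehicle stops, pedestrian crosses" is then recoverable from the activation gradient alone.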
This paper describes a bio-inspired Visual Attention and Object Recognition System (VARS) that can (1) learn
representations of objects that are invariant to scale, position and orientation; and (2) recognize and locate these objects
in static and video imagery. The system uses modularized bio-inspired algorithms/techniques that can be applied
towards finding salient objects in a scene, recognizing those objects, and prompting the user for additional information to
facilitate interactive learning. These algorithms are based on models of human visual attention, search, recognition and
learning. The implementation is highly modular, and the modules can be used as a complete system or independently.
The underlying technologies were carefully researched in order to ensure they were robust, fast, and could be integrated
into an interactive system. We evaluated our system's capabilities on the Caltech-101 and COIL-100 datasets, which are
commonly used in machine vision, as well as on simulated scenes. Preliminary results are quite promising in that our
system is able to process these datasets with good accuracy and low computational times.
The advances in video surveillance technology have led to the proliferation of surveillance video cameras for the
purposes of viewing areas of interest. Counter terrorism and surveillance applications require video forensics capabilities
like querying and searching video data for events, people or objects of interest. A human analyst may accurately spot a
suspicious activity in a small segment of video. However, due to the large volume of data collected in real-time video
surveillance, it is impractical for human analysts to watch or tag the entire video collected as this can lead to human
errors, lower throughput and inconsistencies in the level of scrutiny. In this paper, we introduce an ontology-based video
retrieval approach, which represents videos with object ontologies and event ontologies, and annotates videos
accordingly. We also describe a user-friendly interface for querying surveillance videos using event dictionaries. Our
approach leverages the capabilities of ontologies in specifying knowledge at different levels, and, in this way, provides
flexibility to a user while forming a query. It is also capable of detecting undefined events, such as abnormal events
that were not conceived in advance.
Behavior analysis deals with understanding and parsing a video sequence to generate a high-level
description of object actions and inter-object interactions. In this paper, we describe a behavior recognition system that
can model and detect spatio-temporal interactions between detected entities in a visual scene by using ideas from swarm
optimization, fuzzy graphs, and object recognition. Extensions of Particle Swarm Optimization based approaches for
object recognition are first used to detect entities in video scenes. Our hierarchical generic event detection scheme uses
fuzzy graphical models for representing the spatial associations as well as the temporal dynamics of the discovered scene
entities. The spatial and temporal attributes of associated objects and groups of objects are handled in separate layers in
the hierarchy. We also describe a new behavior specification language that helps the user analyst easily describe the
event that needs to be detected using either simple linguistic queries or graphical queries. Our experimental results show
that the approach is promising for detecting complex behaviors.
This paper presents a multi-camera system that performs face detection and pose estimation in real-time
and may be used for intelligent computing within a visual sensor network for surveillance or human-computer
interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed
zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match
objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified
by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In
this system, face candidate regions are selected based on skin color and face detection is accomplished
using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the
face region using neural network feature detectors. Pose estimation is performed based on a geometrical
model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle
formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the
eyes-mouth triangle onto a two dimensional viewing plane, equations were obtained that describe the
change in its angles as the yaw pose angle increases. These equations are then combined and used for
efficient pose estimation. The system achieves real-time performance for live video input. Testing results
assessing system performance are presented for both still images and video.
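The geometric flavor of this estimation can be sketched with a simplified spherical-head model: a yaw of theta shifts the projected eyes-mouth features horizontally by roughly sin(theta) times the face half-width, so the yaw can be read back through an arcsine. This is a one-equation sketch under that assumption, not the paper's full set of projection equations.

```python
import math

def estimate_yaw(left_eye_x, right_eye_x, mouth_x, face_cx, face_half_width):
    """Estimate head yaw (radians) from the horizontal offset of the
    eyes-mouth triangle's centroid relative to the face centre, assuming
    a spherical head: a yaw of theta shifts projected features by about
    sin(theta) times the face half-width."""
    centroid_x = (left_eye_x + right_eye_x + mouth_x) / 3.0
    offset = (centroid_x - face_cx) / face_half_width
    offset = max(-1.0, min(1.0, offset))   # clamp into arcsin's domain
    return math.asin(offset)
```

A frontal face (centroid on the face centre) yields zero yaw; a centroid displaced by half the face half-width yields 30 degrees.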
Arathorn developed a new theory to address the translation, rotation, scale and perspective invariance problem in vision.
According to it, both natural and machine vision systems may be built using a basic block which he calls the map-seeking
circuit. In his recent book [1], he informally describes the circuit and a number of simulation studies to illustrate his
ideas and support his claims. In this paper, we complement his work by providing mathematical analysis of the circuit.
We first construct difference equations describing its dynamics and study when they converge to a steady state which
represents the circuit's interpretation of the input scene image. We then show that the state corresponds to the minimum
of an upper bound on the difference between the input image and its reconstruction done by the circuit using its built-in
banks of object memories and construction operators. The fact that the upper bound can be constructed and minimized
directly in a computationally efficient and numerically robust manner, without recourse to the map-seeking
circuit simulation, makes our alternative approach attractive for applications. We explain why the upper bound is not
always tight, which leads to the collusion and other matching errors noticed by Arathorn.
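The competitive dynamics at the heart of a map-seeking circuit can be illustrated with a drastically simplified one-dimensional version: candidate transformations (here, circular shifts) carry gating weights that are updated multiplicatively by how well each transformed input matches the stored pattern. The real circuit superposes transformed signals in both forward and backward directions; that superposition is omitted in this sketch.

```python
def map_seek_shift(signal, memory, n_iters=10):
    """Toy 1-D map-seeking sketch: shift hypotheses compete via
    multiplicative weight updates driven by their match to the stored
    pattern, so the weights converge toward the best transformation."""
    n = len(signal)
    g = {s: 1.0 for s in range(n)}              # one gain per shift hypothesis

    def shifted(sig, s):
        return sig[s:] + sig[:s]                # circular shift by s

    for _ in range(n_iters):
        match = {s: sum(a * b for a, b in zip(shifted(signal, s), memory))
                 for s in g}
        top = max(match.values())
        g = {s: g[s] * max(match[s], 0.0) / top for s in g}
    return max(g, key=g.get)                    # winning transformation
```

For a pattern that matches the input under a shift of 2, the weight of that hypothesis survives while all competitors decay toward zero.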
Since future air combat missions will involve both manned and unmanned aircraft, the primary motivation for this
research is to enable unmanned aircraft with intelligent maneuvering capabilities. During air combat maneuvering, pilots
use their knowledge and experience of maneuvering strategies and tactics to determine the best course of action. As a
result, we try to capture these aspects using an artificial immune system approach. The biological immune system
protects the body against intruders by recognizing and destroying harmful cells or molecules. It can be thought of as a
robust adaptive system that is capable of dealing with an enormous variety of disturbances and uncertainties. Another
critical aspect of the immune system is that it remembers how previous intruders were successfully defeated, so it can
respond faster to similar encounters in the future. This paper describes how an artificial
immune system is used to select and construct air combat maneuvers. These maneuvers are composed of autopilot mode
and target commands, which represent the low-level building blocks of the parameterized system. The resulting
command sequences are sent to a tactical autopilot system, which has been enhanced with additional modes and an
aggressiveness factor for enabling high performance maneuvers. Just as vaccinations train the biological immune system
how to combat intruders, training sets are used to teach the maneuvering system how to respond to different enemy
aircraft situations. Simulation results are presented, which demonstrate the potential of using immunized maneuver
selection for the purposes of air combat maneuvering.
Compared with the support vector machine (SVM), the least squares support vector machine (LS-SVM) avoids the higher
computational burden by solving a set of linear equations, and has been widely used in classification and nonlinear
function estimation. However, there is no efficient method for selecting the parameters of an LS-SVM. In this paper, a
sharing-function-based niche genetic algorithm (SNGA) is used for the parameter optimization of LS-SVM regression.
In the SNGA approach, k-fold cross-validation is used to evaluate the LS-SVM generalization performance: the inverse
of the average test error over the k trials is used as the fitness value, and the Hamming distance between two
individuals defines the sharing function. Two benchmark problems, SINC function regression and Henon map time series
prediction, are used for demonstration. The results indicate that this approach avoids a blind manual choice of the
LS-SVM parameters and enhances the efficiency and capability of regression. With little
modification, the approach can also be applied to parameter optimization of SVM or LS-SVM classifiers.
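The fitness-sharing mechanism can be sketched in a few lines: each individual's raw fitness is derated by its niche count, the sum of sharing-function values against the whole population, so crowded regions of the search space are penalized. The triangular sharing function and the bit strings below are illustrative choices, not necessarily those of the paper.

```python
def hamming(a, b):
    # Number of differing positions between two equal-length bit strings.
    return sum(x != y for x, y in zip(a, b))

def sharing(d, sigma):
    # Triangular sharing function: 1 at distance 0, falling to 0 at sigma.
    return max(0.0, 1.0 - d / sigma)

def shared_fitness(population, raw_fitness, sigma):
    """Derate each individual's raw fitness by its niche count, so that
    isolated individuals (distinct niches) retain higher shared fitness
    than members of a crowded cluster."""
    result = []
    for ind, fit in zip(population, raw_fitness):
        niche_count = sum(sharing(hamming(ind, other), sigma)
                          for other in population)
        result.append(fit / niche_count)
    return result
```

With equal raw fitness, the lone individual far from the others keeps the highest shared fitness, which is exactly what preserves niches.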
Sensors are used to monitor and interpret many different environments and phenomena. The capability of a
sensor array or network is constrained first by the sensors included and secondly by how the sensors are allowed
to communicate and cooperatively work together. In this paper, we show how the combination of sensors, with
embedded intelligent capability, and multiagent organization systems are integrated to create a highly adaptive,
scalable and viable architecture to interpret task domains, typically monitored by a lower-functioning sensor
network.
The recent years have seen many developments in uncertainty reasoning taking place around Bayesian Networks
(BNs). BNs allow fast and efficient probabilistic reasoning. One of the key issues that researchers have
faced in using a BN is determining its parameters and structure for a given problem. Many techniques have
been developed for learning BN parameters from a given dataset pertaining to a particular problem. Most
of the methods developed for learning BN parameters from partially observed data have evolved around the
Expectation-Maximization (EM) algorithm. In its original form, the EM algorithm is a deterministic iterative two-step
procedure that converges towards the maximum-likelihood (ML) estimates.
The EM algorithm mainly focuses on learning BN parameters from imperfect data where some of the values are
missing. However in many practical applications, partial observability results in a wider range of imperfections,
e.g., uncertainties arising from incomplete, ambiguous, probabilistic, and belief theoretic data. Moreover, while the
EM algorithm converges to the ML estimates, it does not guarantee convergence to the underlying true parameters.
In this paper, we propose an approach that enables one to learn BN parameters from a dataset containing
a wider variety of imperfections. In addition, by introducing an early stopping criterion together with a new
initialization method to the EM-algorithm, we show how the BN parameters could be learnt so that they are
closer to the underlying true parameters than the converged ML estimated parameters.
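The E-step/M-step loop and a simple stopping criterion can be illustrated on the smallest possible case, a single Bernoulli node with some observations missing. This hypothetical example is not the paper's algorithm; it only shows the shape of the iteration being modified.

```python
def em_bernoulli(data, theta0=0.5, max_iter=100, tol=1e-6):
    """EM for the parameter of one Bernoulli node with missing values
    (None).  E-step: replace each missing value with its expectation
    theta; M-step: re-estimate theta as the mean of the completed data.
    Iteration stops early once the update falls below `tol`."""
    theta = theta0
    for _ in range(max_iter):
        expected = [x if x is not None else theta for x in data]
        new_theta = sum(expected) / len(expected)
        if abs(new_theta - theta) < tol:
            break
        theta = new_theta
    return theta
```

For data missing at random, the fixed point is simply the mean of the observed values, which is the ML estimate this toy EM converges to.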
We establish stability results under parameter perturbations for competitive neural networks, both with only neural
activity levels and with different time-scales, and determine conditions that ensure the existence
of exponentially stable equilibria of the perturbed neural system. The perturbed neural system is modeled as
nonlinear perturbations to a known nonlinear idealized system and is represented by both a short-term memory
subsystem and also by a two time-scale subsystem. Based on the theory of sliding mode control, we can determine
for the simple competitive model a reduced-order system and show that if it is asymptotically stable then the
full system will also be asymptotically stable. In addition, a region of attraction can be found. For the two
time-scales neural systems, we derive a Lyapunov function for the coupled system and a maximal upper bound
for the fast time scale associated with the neural activity state.
We present a system for scale and affine invariant recognition of vehicular objects in video sequences. We use
local descriptors (SIFT keypoints) from image frames to model the object. These features are claimed in the
literature to be highly distinctive and invariant to rotation, scale, and affine transformations. However, since the
SIFT keypoints that are extracted from an object are instance-specific (variable), they form a dynamic feature
space. This presents certain challenges for classification techniques, which generally require use of the same set
of features for every instance of an object to be classified. To resolve this difficulty, we associate the extracted
keypoints to the components (representative keypoints) in a mixture model for each target class. While the
extracted keypoints are variable, the mixture components are fixed. The mixture models the keypoint features,
as well as the location and scale at which each keypoint was detected in the frame. Keypoint to component
association is achieved via a switching optimization procedure that locally maximizes the joint likelihood of
keypoints and their locations and scales with the latter based on an affine transformation. To each mixture
component from a class, we link a (first layer) support vector machine (SVM) classifier which votes for or
against the hypothesis that the keypoint associated to the component belongs to the model's target class. A
second layer SVM pools the votes from the ensemble of SVM classifiers in the first layer and gives the final
class decision. We show promising results of experiments for video sequences from the VIVID database.
As militaries across the world continue to evolve, the roles of humans in various theatres of operation are being
increasingly targeted by military planners for substitution with automation. Forward observation and direction of
supporting arms to neutralize threats from dynamic adversaries is one such example. However, contemporary tracking
and targeting systems are incapable of serving autonomously, for they do not embody the sophisticated algorithms
necessary to predict the future positions of adversaries with the accuracy offered by the cognitive and analytical abilities
of human operators. The need for these systems to incorporate methods characterizing such intelligence is therefore
compelling. In this paper, we present a novel technique to achieve this goal by modeling the path of an entity as a
continuous polynomial function of multiple variables expressed as a Taylor series with a finite number of terms. We
demonstrate the method for evaluating the coefficient of each term to define this function unambiguously for any given
entity, and illustrate its use to determine the entity's position at any point in time in the future.
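The prediction step can be sketched in one dimension: estimate the first and second derivatives from the last three samples by backward finite differences, then extrapolate with a second-order Taylor expansion. The paper fits a multivariable polynomial; this is only a one-dimensional finite-difference sketch of the same idea.

```python
def predict_position(samples, dt, horizon):
    """Second-order Taylor-series prediction of a 1-D track: velocity and
    acceleration are estimated from the last three uniformly spaced
    samples, then extrapolated `horizon` seconds past the latest one."""
    x0, x1, x2 = samples[-3:]
    v = (3 * x2 - 4 * x1 + x0) / (2 * dt)   # second-order backward difference
    a = (x2 - 2 * x1 + x0) / dt ** 2        # second-derivative estimate
    return x2 + v * horizon + 0.5 * a * horizon ** 2
```

Because both difference formulas are exact for quadratics, the prediction is exact whenever the underlying path is at most quadratic in time.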
Intelligent Foraging, Gathering and Matching (I-FGM) combines a unique multi-agent architecture with a novel partial
processing paradigm to provide a solution for real-time information retrieval in large and dynamic databases. I-FGM
provides a unified framework for combining the results from various heterogeneous databases and seeks to provide
easily verifiable performance guarantees. In our previous work, I-FGM had been implemented and validated with
experiments on dynamic text data. However, the heterogeneity of search spaces requires our system to handle various
types of data effectively. Besides text, images are among the most significant and fundamental data for
information retrieval. In this paper, we extend the I-FGM system to incorporate images in its search spaces using a
region-based Wavelet Image Retrieval algorithm called WALRUS. Similar to what we did for text retrieval, we
modified the WALRUS algorithm to partially and incrementally extract the regions from an image and measure the
similarity value of this image. Based on the obtained partial results, we reallocate computational resources by updating
the priority values of image documents. Experiments have been conducted on the I-FGM system with image retrieval. The
results show that I-FGM outperforms its control systems. Also, in this paper we present a theoretical analysis of the
systems with a focus on performance. Based on probability theory, we provide models and predictions of the average
performance of the I-FGM system and its two control systems, as well as the systems without partial processing.
With the development of low-cost, durable unmanned aerial vehicles (UAVs), it is now practical to perform persistent
sensing and target tracking autonomously over broad surveillance areas. These vehicles can sense the environment
directly through onboard active sensors, or indirectly when aimed toward ground targets in a mission environment by
ground-based passive sensors operating wirelessly as an ad hoc network in the environment. The combination of the
swarm intelligence of the airborne infrastructure comprised of UAVs with the ant-like collaborative behavior of the
unattended ground sensors creates a system capable of both persistent and pervasive sensing of the mission environment,
so that continuous collection and analysis of ground-sensor data and tracking of targets can be
achieved. Mobile software agents are used to implement intelligent algorithms for the communications, formation
control and sensor data processing in this composite configuration. The enabling mobile agents are organized in a
hierarchy for the three stages of processing in the distributed system: target detection, location and recognition from the
collaborative data processing among active ground-sensor nodes; transfer of the target information processed on the
ground to the UAV swarm overhead; and formation control and sensor activation of the UAV swarm for sustained
ground-target surveillance and tracking. Intelligent algorithms are presented that adapt the operation of
the composite system to target dynamics and system resources. Established routines, appropriate to the processing needs of
each stage, are selected as preferred based on their published use in similar scenarios, ability to be distributively
implemented over the set of processors at system nodes, and ability to conserve the limited resources at the ground
nodes to extend the lifetime of the pervasive network.
In this paper, the performance of this distributed, collaborative system concept for persistent-pervasive sensing of a
ground environment is assessed via simulation of the selected adaptive algorithms using parameter values planned for
ground sensors and UAVs and mission scenarios found in published studies.
In this paper, a novel genetic algorithm application is proposed for adaptive power and subcarrier allocation in multi-user
Orthogonal Frequency Division Multiplexing (OFDM) systems. To test the application, a simple genetic algorithm
was implemented in MATLAB. With the goal of minimizing the overall transmit power while ensuring the
fulfillment of each user's rate and bit error rate (BER) requirements, the proposed algorithm acquires the needed
allocation through genetic search. The simulations covered BER requirements from 0.1 to 0.00001, a data rate of 256 bits
per OFDM block, and a chromosome length of 128. The results show that the genetic algorithm outperforms the results
in [3] in subcarrier allocation. The GA model with 8 users and 128 subcarriers performs better in power requirement
than that in [4] but converges more slowly.
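A genetic search over subcarrier-to-user assignments can be sketched as below. The chromosome is the assignment vector; the toy fitness takes each subcarrier's power as inversely proportional to the assigned user's channel gain. The rate and BER constraints of the paper, and the channel model itself, are omitted or invented here; only the power-minimizing GA loop is shown.

```python
import random

def total_power(assignment, gains):
    """Toy transmit-power objective: subcarrier s is assigned to user
    assignment[s], and needs power inversely proportional to that user's
    channel gain on s."""
    return sum(1.0 / gains[user][s] for s, user in enumerate(assignment))

def ga_allocate(gains, n_users, n_sub, pop_size=20, gens=50, seed=0):
    """Minimal elitist GA: one-point crossover, random mutation, and
    survival of the lowest-power half of the population each generation."""
    rng = random.Random(seed)
    pop = [[rng.randrange(n_users) for _ in range(n_sub)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: total_power(c, gains))
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_sub)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.2:                 # occasional mutation
                child[rng.randrange(n_sub)] = rng.randrange(n_users)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: total_power(c, gains))
```

With two users, each strong on a different pair of subcarriers, the search drives the allocation toward giving each user its strong subcarriers.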
Intelligent complex systems are drawing considerable attention of researchers in various scientific areas. These
architectures require adequate assurances of security, reliability, and fault-tolerance. The implementation of security
functions such as identification, authentication, access control, and data protection can be viewed in terms of a security
assurance model. This model relies on the security architecture of a system, which in turn is based on a trusted
infrastructure. This assurance model defines the level and features of the protection it offers, and determines the need and
relevance of the deployment of specific security mechanisms.
In this article, we first examine the verification of the security measures, notably their presence, correctness, and
effectiveness, and the impact of changes in existing intelligent complex systems with respect to vulnerabilities, systems
engineering choices, reconfigurations, patch installations, network management, etc. We then explore how we can
evaluate the overall security assurance of a given system. We emphasize that it is desirable to separate the
trust-providing assurance model and the security architecture into two separate distributed entities (instrumentations, protocols,
architectures, management). We believe that this segregation will allow us to automate and boost the trusted
infrastructure and security infrastructure, while the authorizations, exceptions, and security management as a whole, are
achieved through their interaction. Finally, we discuss the security metrics for these complex intelligent systems. New
mechanisms and tools are needed for assessing and proving the security and dependability of a complex system as the
scale of these systems and the kind of threats and assumptions on their operational environment pose new challenges.
We conclude with a description of our proposed security management model.
Sensor data generation is a key component of high fidelity design and testing of applications at scale. In
addition to its utility in validation of applications and network services, it provides a theoretical basis for the
design of algorithms for efficient sampling, compression and exfiltration of the sensor readings. Modeling of
the environmental processes that gives rise to sensor readings is the core problem in physical sciences. Sensor
modeling for wireless sensor networks combine the physics of signal generation and propagation with models of
transducer saturation and fault models for hardware. In this paper we introduce a novel modeling technique
for constructing probabilistic models for censored sensor readings. The model is an extension of Gaussian
process regression and applies to continuous-valued readings subject to censoring. We illustrate the performance
of the proposed technique in modeling wireless propagation between nodes of a wireless sensor network. The
model can capture the non-isotropic nature of the propagation characteristics and utilizes the information from
the packet reception failures. We use a measured dataset from the Kansei sensor network testbed using 802.15.4
radios.
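The censoring idea can be illustrated apart from the full Gaussian-process machinery with a Tobit-style likelihood: a reading pinned at the sensor's saturation ceiling only tells us the true value was at least that large, so it contributes a tail probability rather than a density term. The Gaussian model and the numbers below are illustrative, not the paper's model.

```python
import math

def norm_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def norm_sf(x, mu, sigma):
    # P(X >= x) for X ~ N(mu, sigma^2), via the complementary error function.
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

def censored_loglik(readings, ceiling, mu, sigma):
    """Tobit-style log-likelihood: readings at the ceiling are censored
    and contribute log P(X >= ceiling); the rest contribute the usual
    Gaussian log-density."""
    ll = 0.0
    for r in readings:
        if r >= ceiling:
            ll += math.log(norm_sf(ceiling, mu, sigma))
        else:
            ll += math.log(norm_pdf(r, mu, sigma))
    return ll
```

The payoff is that saturated readings still carry information: a mean near the ceiling explains them far better than a low mean that a naive fit to the clipped values might prefer.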
This paper introduces a new autocorrelation (ACR)-based approach for pitch detection in speech, designed
especially to deal with voluntary and involuntary fast variations of the pitch period. The technique may be
employed independently, or it may be used in place of the traditional ACR function in existing techniques.
Experimental results illustrate the effectiveness of the proposed technique in determining the pitch period,
especially for rapid pitch period variations where the traditional ACR fails.
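The core of a conventional ACR-based pitch detector, which the proposed function would replace, can be sketched as follows; this is a generic baseline rather than the paper's technique, and the frame length and search band are illustrative:

```python
import numpy as np

def acr_pitch(frame, fs, fmin=50.0, fmax=500.0):
    """Estimate the pitch (Hz) of one voiced frame from the peak of its
    autocorrelation within the plausible lag range [fs/fmax, fs/fmin]."""
    frame = frame - frame.mean()
    acr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(acr[lo:hi]))   # best-matching period in samples
    return fs / lag

fs = 8000
t = np.arange(0, 0.04, 1 / fs)              # 40 ms frame
print(round(acr_pitch(np.sin(2 * np.pi * 200 * t), fs)))  # prints 200
```

On a clean 200 Hz tone the peak lag is one period (40 samples at 8 kHz); rapid pitch variation within the frame is exactly where this plain ACR degrades.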
For this study, an algorithm was developed to determine the concentration of particulate matter smaller than 10 µm (PM10) from still images captured
by a CCTV camera on the Penang Bridge. The objective of this study is to remotely monitor PM10 concentrations on the Penang
Bridge over the internet. The algorithm was developed from the relationship between atmospheric reflectance and the
corresponding air quality. The still images were separated into three bands, namely red, green and blue, and their digital
number values were determined. A special transformation was then applied to the data. Ground PM10 measurements were taken
with a DustTrak™ meter, and the algorithm was calibrated using regression analysis. The proposed algorithm produced a high
correlation coefficient (R) and a low root-mean-square error (RMS) between the measured and predicted PM10. A program was then
written in Microsoft Visual Basic 6.0 to download still images from the camera over the internet and apply the newly
developed algorithm. The program runs in real time, so the public can check the air pollution index from time to
time. This indicates that CCTV camera images can provide a useful tool for air quality studies.
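The final step of such a pipeline can be sketched as below, assuming hypothetical calibration coefficients (the paper's fitted constants and its "special transformation" are not reproduced here):

```python
import numpy as np

# Hypothetical calibration coefficients; the actual regression constants
# would come from fitting against co-located DustTrak PM10 readings.
A_RED, A_GREEN, A_BLUE, BIAS = 1.8, -0.9, 1.2, 5.0

def pm10_from_image(rgb_image):
    """Mean digital number of each band feeds a linear calibration
    relating band reflectance to PM10 concentration."""
    r, g, b = (rgb_image[..., i].mean() for i in range(3))
    return A_RED * r + A_GREEN * g + A_BLUE * b + BIAS

img = np.full((4, 4, 3), 100.0)   # uniform grey test image
print(pm10_from_image(img))       # 1.8*100 - 0.9*100 + 1.2*100 + 5 = 215.0
```

A real deployment would add the reflectance transformation and periodically re-fit the coefficients against fresh ground measurements.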
This paper presents an autonomous robot control architecture based on an artificial emotional system. A hidden Markov model
is developed as the mathematical background for stochastic emotional and behavioral transitions. The motivation module of the
architecture acts as a behavioral gain-effect generator for achieving multi-objective robot tasks. According to the
emotional and behavioral state transition probabilities, artificial emotions determine sequences of behaviors. The
motivational gain effects of the proposed architecture can also be observed on the executing behaviors during simulation.
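The stochastic state transitions can be illustrated with a simple rollout from a transition matrix; the 3-state matrix below is hypothetical, not the paper's model:

```python
import numpy as np

# Hypothetical 3-state emotional transition matrix (rows sum to 1);
# the states might be e.g. calm, curious, fearful.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

def sample_states(P, start, steps, rng):
    """Stochastic rollout: each next state is drawn from the row of P
    indexed by the current state, as in a Markov-chain simulation."""
    states = [start]
    for _ in range(steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

rng = np.random.default_rng(0)
print(sample_states(P, start=0, steps=5, rng=rng))
```

In the full architecture each sampled emotional state would select a behavior, with the motivation module scaling the behavioral gains.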
This paper summarizes the theory of PLS, K-PLS, and support vector machines (SVMs). The advantage of these
algorithms is that the regression component can be trained in essentially real time, and the trained algorithms have
been shown to provide comparable and consistent results relative to those obtained by SVMs. A performance
trade-off is made between several SVM kernels and K-PLS (as a learning system) using the Gaussian radial basis
function (GRBF) kernel. K-PLS and its variants and SVMs both offer the theoretical capability of achieving a global minimum, but
SVMs are slightly better known. Both approaches can produce very nearly the same non-linear (or linear) decision and
regression boundaries, but SVMs are frequently more difficult to train. The hypothesis guiding this research is that K-PLS
will provide these boundaries and decision regions with slightly more accuracy and less effort in a real environment,
and with less computational time. Specifically, for the screen-film mammogram data set used, K-PLS gave the best
performance, with an Az of 0.968. The SVM kernels and their respective Az results were: 0.870 for the
hyperbolic tangent, 0.930 for the s2000, 0.922 for the GRBF, and 0.926 for the 2nd-order polynomial and dot-product kernels.
In addition, K-PLS achieved this superior performance result in 53.4% of the time using the GRBF kernel.
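The GRBF kernel shared by K-PLS and the kernel SVMs can be sketched directly; the bandwidth `sigma` is illustrative:

```python
import numpy as np

def grbf_kernel(X, Y, sigma=1.0):
    """Gaussian radial basis function kernel matrix,
    K[i, j] = exp(-||x_i - y_j||^2 / (2 * sigma^2)),
    usable by both K-PLS and kernel SVMs."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = grbf_kernel(X, X)
print(K)   # diagonal is 1; off-diagonal exp(-0.5) ~= 0.6065
```

Because both learners consume the same kernel matrix, the comparison in the paper isolates the effect of the regression machinery rather than the feature mapping.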
An application of an unsupervised self-organizing neural network, the neural gas network, is reported for the
detection and characterization of small indeterminate breast lesions in dynamic contrast-enhanced MRI. This
technique enables the extraction of spatial and temporal features of dynamic MRI data stemming from patients
with confirmed lesion diagnosis. By revealing regional properties of contrast-agent uptake characterized by subtle
differences of signal amplitude and dynamics, this method provides both a set of prototypical time series and a
corresponding set of cluster-assignment maps, which in turn yield a segmentation for the identification
and regional subclassification of pathological breast tissue lesions. We present two different segmentation methods
for the evaluation of signal intensity time courses for the differential diagnosis of enhancing lesions in breast
MRI. Starting from the conventional methodology, we proceed by introducing the separate concepts of threshold
segmentation and cluster analysis based on the neural gas network, and in the last step by combining those two
concepts. The results suggest that the neural gas network has the potential to increase the diagnostic accuracy
of MRI mammography by improving the sensitivity without reduction of specificity.
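The neural gas update underlying the clustering can be sketched as a single adaptation step (a generic illustration with illustrative learning parameters, not the authors' implementation):

```python
import numpy as np

def neural_gas_step(prototypes, x, eps=0.1, lam=1.0):
    """One neural-gas update: every prototype moves toward the input x,
    with its step damped by exp(-rank / lam), where rank 0 is the
    closest prototype."""
    d = np.linalg.norm(prototypes - x, axis=1)
    ranks = np.argsort(np.argsort(d))          # distance rank per prototype
    h = np.exp(-ranks / lam)[:, None]          # rank-based neighborhood factor
    return prototypes + eps * h * (x - prototypes)

W = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # prototype time-series stand-ins
W = neural_gas_step(W, np.array([0.2, 0.2]))
print(W)
```

Iterating this step over contrast-uptake time courses (with `eps` and `lam` annealed) yields the prototypical curves and cluster-assignment maps the abstract describes.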
The main objective of this paper is to validate the newly developed Evolutionary Programming (EP)-derived Support
Vector Machine (SVM) paradigm through a performance comparison with the accepted conventional iterative gradient
method usually used to train SVMs. The paper first reviews the background research associated with this
problem and then describes the EP-developed family of SVMs. Both the mutation and selection
methods used to formulate the family of SVMs are described, followed by the more familiar Lagrangian
formulation of SVMs. Kernel-based learning methods are then discussed. The concepts described here are not limited
to SVMs; the general principles apply to other kernel-based classifiers as well. Results are presented for two EP
methods: a "crude" earlier method described in reference 7 and the more recent method described here.
Iteratively derived SVM results are also developed for comparison with the EP-derived SVM approach. These results
show that both methods produced essentially perfect classification Az results, generally ranging from 0.926 to 0.931.
Only the hyperbolic tangent kernel yielded the less accurate result of 0.87. These results were expected because all
ambiguous findings were "scrubbed" from the features describing the screen-film data set.
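A minimal sketch of an EP mutation-and-selection loop, with a toy quadratic fitness standing in for SVM validation performance (the population size, mutation scale, and fitness function are all illustrative):

```python
import numpy as np

def ep_optimize(fitness, dim, rng, pop=20, gens=50, sigma=0.3):
    """Minimal evolutionary-programming loop: Gaussian mutation of every
    parent, then truncation selection over parents plus offspring."""
    parents = rng.normal(size=(pop, dim))
    for _ in range(gens):
        offspring = parents + rng.normal(scale=sigma, size=parents.shape)
        both = np.vstack([parents, offspring])
        scores = np.array([fitness(v) for v in both])
        parents = both[np.argsort(scores)[-pop:]]   # keep the fittest half
    return parents[-1]                              # best individual

# Toy stand-in fitness: in the paper's setting this would instead score
# candidate SVM parameters on held-out classification performance.
target = np.array([1.0, -2.0])
rng = np.random.default_rng(1)
best = ep_optimize(lambda v: -np.sum((v - target) ** 2), dim=2, rng=rng)
print(best)   # converges near [1.0, -2.0]
```

Unlike the iterative gradient approach, this loop needs no gradient of the objective, which is what makes EP attractive for non-differentiable training criteria.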
Commercial security and surveillance systems offer advanced sensors, optics, and display capabilities but lack intelligent
processing. This necessitates human operators who must closely monitor video for situational awareness and threat
assessment. For instance, urban environments are typically in a state of constant activity, which generates numerous
visual cues, each of which must be examined so that potential security breaches do not go unnoticed. We are building a
prototype system called BALDUR (Behavior Adaptive Learning during Urban Reconnaissance) that learns probabilistic
models of activity for a given site using online and unsupervised training techniques. Once a camera system is set up, no
operator intervention is required for the system to begin learning patterns of activity. Anomalies corresponding to unusual
or suspicious behavior are automatically detected in real time. All moving object tracks (pedestrians, vehicles,
etc.) are efficiently stored in a relational database for use in training. The database is also well suited for answering human-
initiated queries. An example of such a query is, "Display all pedestrians who approached the door of the building
between the hours of 9:00pm and 11:00pm." This forensic analysis tool complements the system's real-time situational
awareness capabilities. Several large datasets have been collected for the evaluation of the system, including one database
containing an entire month of activity from a commercial parking lot.
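Assuming a hypothetical track schema (the actual BALDUR schema is not described in the abstract), the example forensic query could be expressed against the relational store as:

```python
import sqlite3

# Hypothetical schema: one row per tracked moving object, with its class,
# the zone it approached, and a timestamp.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tracks (
    id INTEGER PRIMARY KEY, object_class TEXT, zone TEXT, ts TEXT)""")
conn.executemany(
    "INSERT INTO tracks (object_class, zone, ts) VALUES (?, ?, ?)",
    [("pedestrian", "door", "2007-04-09 21:30:00"),
     ("vehicle",    "lot",  "2007-04-09 22:00:00"),
     ("pedestrian", "door", "2007-04-09 14:00:00")])

# "Display all pedestrians who approached the door between 21:00 and 23:00."
rows = conn.execute(
    """SELECT id, ts FROM tracks
       WHERE object_class = 'pedestrian' AND zone = 'door'
         AND time(ts) BETWEEN '21:00:00' AND '23:00:00'""").fetchall()
print(rows)   # only the 21:30 pedestrian track matches
```

Indexing on `object_class`, `zone`, and `ts` would keep such queries fast even over a month of continuous activity.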