This PDF file contains the front matter associated with SPIE Proceedings Volume 8053, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
We describe a compact, multi-sensor design architecture capable of providing both spectral-polarimetric imaging and
adaptive matched filter target detection in real-time. The sensor suite supports airborne broad-area search missions using
multiple large-format, high-speed TDI scanning sensors. The technology approach leverages Micro-Electro-Mechanical System (MEMS) based spectral imaging systems and scanning TDI arrays originally developed for space-based remote
sensing. The MEMS spectrometer system can dynamically select and switch linear combinations of single or multiple
VNIR/SWIR spectral bands with 5 nm sampling resolution using a programmable MEMS mirror. The MEMS spectral filter is capable of providing high-quality spectral filtering across a large-format sensor with >1 MHz optical switching and update speeds. A dual-instrument sensor suite architecture called the "PRISM sensor" has been developed which is based
on this technology and provides simultaneous spectral-polarimetric imaging and matched filter target tracking with
minimal on-board computing requirements. We describe how this technology can simultaneously perform broad-area
imaging and target identification in near real-time with a simple threshold operation. Preliminary results are illustrated
as an additional layer of target-discriminating geospatial information that may be fused with geo-referenced imagery.
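As a hedged illustration of the matched-filter-plus-threshold idea the abstract describes (not the PRISM implementation, which is not given here), the sketch below applies the standard adaptive matched filter statistic to synthetic spectra; the band count, target signature, and threshold are invented:

```python
import numpy as np

def amf_scores(pixels, target, background):
    """Adaptive matched filter score for each pixel spectrum.

    pixels:     (N, B) spectra under test
    target:     (B,)   known target signature
    background: (M, B) spectra used to estimate mean and covariance
    """
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized
    s = target - mu
    x = pixels - mu
    return (x @ cov_inv @ s) ** 2 / (s @ cov_inv @ s)

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(500, 4))   # clutter training spectra
signature = np.array([2.0, 2.0, 2.0, 2.0])         # hypothetical target signature
scene = np.vstack([rng.normal(0.0, 1.0, size=(99, 4)),
                   signature[None, :]])            # last pixel is the target
scores = amf_scores(scene, signature, background)
detections = scores > 10.0                         # the "simple threshold operation"
```

Because the detector reduces to one dot product and one division per pixel after the covariance is inverted once, the threshold step is cheap enough to run in near real time.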
Indoor localization with sensing capabilities is the missing link for a Geospatial Information System and sensor web.
The sensor network is capable of environmental monitoring and geo-tagging sensor data. This paper presents a unique
algorithm which uses fusion of Radio Signal Strength Indicator (RSSI) and Time Difference of Arrival (TDOA) for centimeter-level-accurate indoor localization using wireless sensor network motes. The paper also proposes the integration of various environmental sensors with the wireless sensor network. The acquired sensor data can be geo-tagged with the translated global coordinates
and additional sensory metadata. With the use of semantic sensor web, this sensor information can be utilized in various
decision making scenarios for critical situations. The main goal of the paper is to use indoor localization assisted by sensor
fusion and semantic web for first responders in emergency scenarios.
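The RSSI/TDOA fusion algorithm itself is not reproduced in this abstract; as a rough sketch of how the two measurement types can be combined, the toy below folds RSSI-derived ranges and TDOA range differences into one least-squares cost and minimizes it by grid search. Anchor layout, weights, and noise are invented, and a real system would use a proper solver rather than a grid to reach centimeter accuracy:

```python
import numpy as np

ANCHORS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
C = 3.0e8  # propagation speed (m/s)

def ranges(p):
    """Distance from point p to every anchor."""
    return np.linalg.norm(ANCHORS - p, axis=1)

def fused_cost(p, rssi_dist, tdoa, w_rssi=0.1, w_tdoa=1.0):
    """Weighted least-squares cost combining both measurement types."""
    r = ranges(p)
    rssi_res = r - rssi_dist              # RSSI: coarse absolute ranges
    tdoa_res = (r[1:] - r[0]) - tdoa * C  # TDOA: range differences vs. anchor 0
    return w_rssi * np.sum(rssi_res ** 2) + w_tdoa * np.sum(tdoa_res ** 2)

def localize(rssi_dist, tdoa, step=0.05):
    """Minimize the fused cost by brute-force grid search over the room."""
    grid = np.arange(0.0, 10.0 + step, step)
    best, best_c = None, np.inf
    for x in grid:
        for y in grid:
            c = fused_cost(np.array([x, y]), rssi_dist, tdoa)
            if c < best_c:
                best, best_c = np.array([x, y]), c
    return best

true_p = np.array([3.2, 7.45])
r = ranges(true_p)
est = localize(rssi_dist=r + 0.3, tdoa=(r[1:] - r[0]) / C)  # biased RSSI, clean TDOA
```

Down-weighting the biased RSSI term lets the accurate TDOA differences dominate while the RSSI term resolves the ambiguity TDOA alone can suffer.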
This paper describes a novel real-time image and signal processing network, RONIN™, which facilitates the rapid
design and deployment of systems providing advanced geospatial surveillance and situational awareness capability.
RONIN™ is a distributed software architecture consisting of multiple agents or nodes, which can be configured to
implement a variety of state-of-the-art computer vision and signal processing algorithms. The nodes operate in an
asynchronous fashion and can run on a variety of hardware platforms, thus providing a great deal of scalability and
flexibility. Complex algorithmic configuration chains can be assembled using an intuitive graphical interface in a plug-and-
play manner. RONINTM has been successfully exploited for a number of applications, ranging from remote event
detection to complex multiple-camera real-time 3D object reconstruction. This paper describes the motivation behind the
creation of the network, the core design features, and presents details of an example application. Finally, the on-going
development of the network is discussed, which is focussed on dynamic network reconfiguration. This allows the network to adapt automatically to node or communications failure by intelligently re-routing network communications and through adaptive resource management.
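RONIN's internals are not public in this abstract; the toy below sketches only the general pattern it describes: independent processing nodes linked by queues, each running asynchronously, assembled into a chain. The node functions here are placeholders, not RONIN algorithms:

```python
import queue
import threading

class Node(threading.Thread):
    """A processing node: reads items from inbox, applies fn, writes to outbox."""
    def __init__(self, fn, inbox, outbox):
        super().__init__(daemon=True)
        self.fn, self.inbox, self.outbox = fn, inbox, outbox

    def run(self):
        while True:
            item = self.inbox.get()
            if item is None:            # sentinel: propagate shutdown downstream
                self.outbox.put(None)
                return
            self.outbox.put(self.fn(item))

# assemble a two-stage chain with placeholder "algorithms"
q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
nodes = [Node(lambda x: x * 2, q_in, q_mid),   # stand-in for preprocessing
         Node(lambda x: x + 1, q_mid, q_out)]  # stand-in for detection
for n in nodes:
    n.start()
for frame in [1, 2, 3]:
    q_in.put(frame)
q_in.put(None)
results = []
while (r := q_out.get()) is not None:
    results.append(r)
# results == [3, 5, 7]
```

Because each node only touches its own queues, stages can be swapped, re-routed, or moved to other hosts without changing their neighbours, which is the property a plug-and-play chain needs.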
As the availability of geospatial data increases, there is a growing need to match these datasets together. However, since
these datasets often vary in their origins and spatial accuracy, they frequently do not correspond well to each other, which creates multiple problems. To accurately align vectors with imagery, analysts currently either: 1) manually move the
vectors, 2) perform a labor-intensive spatial registration of vectors to imagery, 3) move imagery to vectors, or 4) redigitize
the vectors from scratch and transfer the attributes. All of these are time consuming and labor-intensive
operations. Automated matching and fusing vector datasets has been a subject of research for years, and strides are being
made. However, much less has been done with matching or fusing vector and raster data. While there are initial forays
into this research area, the approaches are not robust. The objective of this work is to design and build robust software
called MapSnap to conflate vector and image data in an automated/semi-automated manner. This paper reports the status
of the MapSnap project that includes: (i) the overall algorithmic approach and system architecture, (ii) a tiling approach
to deal with large datasets to tune MapSnap parameters, (iii) time comparison of MapSnap with re-digitizing the vectors
from scratch and transferring the attributes, and (iv) an accuracy comparison of MapSnap with manual adjustment of vectors.
The paper concludes with a discussion of future work, including the general problem of continuous and rapid updating of vector data and the fusion of vector data with other data.
Most defense and security applications involve various types of spatio-temporal data coming from multiple data-source layers. How to analyze the large amount of cross-layer geospatial-temporal data to get actionable insights is an increasingly important challenge for these applications. There is a great deal of research work in the areas of spatial data modeling and management, spatial query, and spatial data mining. However, applying these technologies to create cross-cutting analytics solutions is very time consuming and challenging. There is a lack of a framework and mechanism for plugging in reusable spatio-temporal analytics assets. In this paper, we discuss a spatio-temporal analytics toolkit that provides a common underlying information fusion infrastructure and a plug-in mechanism to support reusable cross-domain spatio-temporal analytics.
Today's high resolution remotely sensed images (<1m) pose several challenges which require solutions that go
beyond the traditional spectral based methodologies. With the rapid increase in the level of detail present in
these images, there is also an increase in the complexity. To deal with this complexity a consistent framework
and image representation is needed. An object-based scale-space representation is proposed. Principles of object-based design are explained and the application of these principles to image regions is introduced. Given an input
image, the scale-tree is automatically constructed using low-level information, starting with single pixels (as
objects) and ending with the root node indicating the complete image. The scale-tree is a hierarchical structure
where each level in the hierarchy differs from the next in the size of the objects/regions present at that level.
Hence, the scale-tree reflects the scale-space breakdown of the image. From another point of view, the scale-tree can also be seen as a collection of multiple segmentations with varying levels of detail, going from fine to
coarse. Synthetic and real high resolution satellite images were used to evaluate our image representation. The
goal of the proposed representation is to facilitate applications such as target/anomaly detection, image region
classification and change detection.
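The paper's scale-tree construction is not reproduced here; the 1-D toy below conveys only the bottom-up idea it describes: single pixels start as objects, the most similar adjacent regions are merged repeatedly, and each merge yields one fine-to-coarse level until the root covers the whole "image":

```python
def scale_levels(pixels):
    """Bottom-up hierarchy: one segmentation per merge, from pixels to root."""
    segments = [[p] for p in pixels]           # level 0: each pixel is an object
    levels = [[list(s) for s in segments]]
    while len(segments) > 1:
        means = [sum(s) / len(s) for s in segments]
        # merge the adjacent pair whose region means differ least
        i = min(range(len(segments) - 1),
                key=lambda k: abs(means[k + 1] - means[k]))
        segments[i:i + 2] = [segments[i] + segments[i + 1]]
        levels.append([list(s) for s in segments])
    return levels                              # last level: root = whole image

lv = scale_levels([1, 1, 9, 9])
# lv[0] is four single-pixel objects; lv[-1] is the root containing all pixels
```

Reading the returned list front to back gives exactly the "collection of multiple segmentations with varying levels of detail" the abstract describes.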
Airborne imaging sensing systems are becoming more prevalent and are producing an ever increasing volume of
data to process, exploit, and disseminate (PED). Successful PED of this data requires file format standardization
to aid in rapid exploitation, database query, and dissemination. The NITF format leverages the power of the
JPEG 2000 standard for exploitation preferred compression profiles and facilitates rapid dissemination of the
processed and exploited imagery to any decision maker or tactical warfighter over even the most constrained
bandwidth limitations via the JPEG 2000 Interactive Protocol (JPIP). The NITF standard provides recognizable format handling and a documented and regulated means of metadata provision to the community. Adoption
of this NITF standard for geographically corrected imagery facilitates data quality, flexibility, and the potential
for downstream GIS data fusion for any R&D sensor and promises the quickest transition to operational status.
This paper outlines a review of the PED effort exercising a Commercial-Off-The-Shelf (COTS) software tool-derived preprocessing architecture for the automated generation of orthorectified NITF products derived from the Airborne Cueing and Exploitation System Hyperspectral (ACES HY) sensor.
Automated moving object detection and tracking are becoming cornerstone technologies for the proliferation of video
and wide-area persistent surveillance systems. Emerging wide-area persistent surveillance platforms promise
increasingly long periods of surveillance and increasingly wide areas of coverage with increasingly high-detail
imagery over the entire area of interest, but the volume of data collected threatens to overwhelm our ability to
understand the implications of the data. This paper describes a standards-based metadata architecture to support
development of advanced automated movement detection and tracking capabilities - and ultimately "activity"
recognition - to help the human observer bring the data volume under control.
In this paper we present formats and delivery methods of Large Volume Streaming Data (LVSD) systems. LVSD
systems collect TBs of data per mission with aggregate camera sizes in the 100 Mpixel to several Gpixel range at
temporal rates of 2 - 60 Hz. We present options and recommendations for the different stages of LVSD data collection
and delivery, to include the raw (multi-camera) data, delivery of processed (stabilized mosaic) data, and delivery of user-defined
region of interest windows. Many LVSD systems use JPEG 2000 for the compression of raw and processed data.
We explore the use of the JPEG 2000 Interactive Protocol (JPIP) for interactive client/server delivery to thick-clients
(desktops and laptops) and MPEG-2 and H.264 to handheld thin-clients (tablets, cell phones). We also explore the use of
3D JPEG 2000 compression, defined in ISO 15444-2, for storage and delivery as well. The delivery of raw, processed,
and region of interest data requires different metadata delivery techniques and metadata content. Beyond the format and
delivery of data and metadata we discuss the requirements for a client/server protocol that provides data discovery and
retrieval. Finally, we look into the future as LVSD systems perform automated processing to produce "information"
from the original data. This information may include tracks of moving targets, changes in the background, snapshots of targets, fusion of multiple sensors, and information about "events" that have happened.
Automated motion image-based tracking is an increasingly important tool in Intelligence, Surveillance, and
Reconnaissance (ISR). Unfortunately, current tracking technology is not up to the performance levels needed to
deliver key subtasks in this arena. We postulate that the under-performance of automated trackers derives from the
under-exploitation of the rich sets of features related to the identification of items being tracked. To address this, we
propose a formulation of features that supports easy exchange and integration of new features. We believe this
approach will provide the foundations for a far wider and more effective exploration of potential features related to
tracking, and as a result, significantly better and more sustainable growth in tracker performance.
Geospatial Data Processing Algorithms and Techniques
Military forces and law enforcement agencies are facing new challenges for persistent surveillance as the area
of interest shifts towards urban environments. Some of the challenges include tracking vehicles and dismounts
within complex road networks, traffic patterns and building structures. Under these conditions, conventional
video tracking algorithms suffer from target occlusion, lost tracks, and stop-and-start motion. Furthermore, these
algorithms typically depend solely on pixel-based features to detect and locate potential targets, which are
computationally intensive and time consuming.
This research paper investigates the fusion of geographic information into video-based target tracking algorithms
for persistent surveillance. A geographic information system (GIS) has the capability to store attributes
about a target's surroundings - such as road direction and boundaries, intersections and speed limit - and can be
used as a decision-making tool in prediction and analysis. Fusing this prediction capability into conventional video-centric target tracking algorithms adds geographical context to the target feature space, improves handling of target occlusion, and reduces the search area for tracking. The GIS component specifically improves the performance of target tracking by minimizing the search area in which a target is likely to be located. We present the
results from our simulations to demonstrate the feasibility of the proposed technique with video collected from
a prototype persistent surveillance system. Our approach maintains compatibility with existing GIS databases
and provides an integrated solution for multi-source target tracking algorithms.
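As a hedged sketch of the geometric core of this idea (not the paper's algorithm), the toy below snaps a predicted target position onto a road segment and gates the tracker's search to the road corridor; road coordinates, widths, and radii are invented:

```python
import math

def project_to_segment(p, a, b):
    """Closest point to p on segment a-b (all 2-tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return (ax + t * dx, ay + t * dy)

def gated_search(pred, road_a, road_b, road_half_width, free_radius):
    """If the prediction lies near the road, search only the road corridor."""
    snap = project_to_segment(pred, road_a, road_b)
    off_road = math.dist(pred, snap)
    if off_road <= road_half_width:
        return snap, road_half_width      # constrained: narrow corridor
    return pred, free_radius              # off-network: fall back to full gate

center, r = gated_search(pred=(4.0, 1.0),
                         road_a=(0.0, 0.0), road_b=(10.0, 0.0),
                         road_half_width=3.0, free_radius=25.0)
# the prediction is 1.0 unit off the centerline, so the search radius
# shrinks from 25.0 to 3.0 around the snapped road point
```

The shrunken gate is what cuts the pixel-based detection workload: only the corridor, not the full uncertainty disc, has to be searched.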
Data registration is the foundational step for fusion applications such as change detection, data conflation, ATR, and
automated feature extraction. The efficacy of data fusion products can be limited by inadequate selection of the
transformation model, or characterization of uncertainty in the registration process. In this paper, three components of
image-to-image registration are investigated: 1) image correspondence via feature matching, 2) selection of a
transformation function, and 3) estimation of uncertainty. Experimental results are presented for photogrammetric versus
non-photogrammetric transfer of point features for four different sensor types and imaging geometries. The results
demonstrate that a photogrammetric transfer model is generally more accurate at point transfer. Moreover,
photogrammetric methods provide a reliable estimation of accuracy through the process of error propagation. Reliable
local uncertainty derived from the registration process is particularly desirable information to have for subsequent fusion
processes. To that end, uncertainty maps are generated to demonstrate global trends across the test images.
Recommendations for extending this methodology to non-image data types are provided.
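The photogrammetric model itself is not given in the abstract; the sketch below shows only the generic first-order error propagation such methods rely on, pushing a point covariance through a transformation via its Jacobian. The affine parameters are invented, and for an affine map the Jacobian is simply the constant matrix A:

```python
import numpy as np

A = np.array([[1.02, 0.05],
              [-0.04, 0.98]])   # linear part of a hypothetical affine registration
t = np.array([12.0, -7.5])      # translation part

def propagate(point, cov):
    """Transfer a 2-D point and its covariance through the affine map."""
    out = A @ point + t
    # first-order propagation: Sigma_out = J Sigma_in J^T
    return out, A @ cov @ A.T

p, sigma = propagate(np.array([100.0, 200.0]), np.diag([0.25, 0.25]))
# p is the transferred point; sigma is its propagated 2x2 uncertainty
```

Evaluating sigma at every transferred point is one way to build the per-location uncertainty maps the paper describes.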
Geospatial Information Systems (GIS) collect, integrate, store, edit, analyze, share, and display geographic information.
Naturally, GIS analysts rely on external data coming from disparate sensors to associate the sensor content (e.g.
imagery) with relational databases. Inherently, these GIS sensors present differences in their data structures, labelling,
ontologies, and resolution. Given different data structures, information may be lost in the transfer of information,
alignment, and association of related context, which yields uncertainty in the meaning of the conveyed information.
Ontology alignment typically consists of manual operations performed by users with different experiences and understandings, and little reporting is conducted on the quality of the mappings. To assist the International Organization for Standardization (ISO) in the development of information quality assessment, we propose an approach using information theory for semantic
uncertainty analysis. Information theory has been widely adopted in communications and provides uncertainty
assessment for quality of service (QOS) analysis. Quality of information (QOI) or Information Quality (IQ) definitions
for semantic assessment can be used to bridge the gap between ontology (semantic) uncertainty alignment and
information theory (symbolic) analysis. Utilizing a measure of semantic information loss, analysts can improve the
information fusion process, predict data needs, and appropriately understand the GIS product. This paper aims at
developing a semantic information loss measure based on information theory relating GIS sensor processing
uncertainties and GIS analyst syntactic associations. A maritime domain situational awareness example with waterway
semantic labels is shown to demonstrate semantic information loss.
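The paper's measure is not reproduced here; one natural information-theoretic loss, sketched below, is the conditional entropy H(X|Y) of the source labels given the mapped labels, which equals H(X) minus I(X;Y): the source information the mapping fails to preserve. The waterway-label alignment counts are invented:

```python
import math

def cond_entropy(joint):
    """H(X|Y) in bits from a dict {(source_label, mapped_label): count}."""
    total = sum(joint.values())
    p_y = {}
    for (_x, y), c in joint.items():
        p_y[y] = p_y.get(y, 0) + c
    h = 0.0
    for (_x, y), c in joint.items():
        h -= (c / total) * math.log2(c / p_y[y])
    return h

# toy alignment: two source labels collapse into one target label, so the
# one-bit distinction between them is lost on half of the records
joint = {("canal", "waterway"): 25,
         ("channel", "waterway"): 25,
         ("river", "river"): 50}
loss = cond_entropy(joint)   # 0.5 bits of source information not preserved
```

A loss of zero would mean the mapping is lossless (every source label is recoverable from its mapped label); larger values flag alignments an analyst should review.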
A method for calculating unbiased entropic estimates of multivariate associations between mixed data is given. Since
there is no assumption of unimodality of the distributions of the categorical and continuous-valued data, measures of
central dispersion are not appropriate for the quantification of association. Empirical estimates of entropic
associations are provided with respect to the partition entropy of a uniform binning interval and the cardinality of the
sensed data. The increased computational demand incurred by the appropriate generalized measure is mitigated by a
branch-and-bound algorithm for information-optimal attribute selection. The methodology is applied to a known data set used in a standard data-mining competition that features both sparse categorical and continuous-valued descriptors of a target, with promising results.
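As a hedged sketch of entropy estimation over uniformly binned data (the paper's exact estimator is not given in the abstract), the function below computes the plug-in entropy of binned continuous samples with the Miller-Madow bias correction, which offsets the plug-in estimator's downward bias of roughly (K-1)/(2N) for K occupied bins and N samples; the bin count is arbitrary:

```python
import math

def binned_entropy(values, n_bins):
    """Plug-in entropy (bits) of uniformly binned data, Miller-Madow corrected."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0                # guard against constant data
    counts = {}
    for v in values:
        b = min(int((v - lo) / width), n_bins - 1)   # clamp the max into last bin
        counts[b] = counts.get(b, 0) + 1
    n = len(values)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    k = len(counts)                                  # number of occupied bins
    return h + (k - 1) / (2 * n * math.log(2))       # bias correction, in bits

h = binned_entropy([0.1, 0.2, 0.9, 1.0], n_bins=2)
```

Because the correction depends only on K and N, it applies uniformly to binned continuous attributes and to categorical attributes, which is what a mixed-data association measure needs.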
The authors propose a metadata architecture for standards-based "interfaces" within the end-to-end detection-tracking-activity exploitation process. The approach is built on the standardization of the interfaces of five "sequential" stages of an image-based tracker: image conditioning, motion detection, kinetics-based tracklet extraction, feature-based
track development, and track network discovery. This architecture facilitates plug-and-play insertion and
replacement of algorithms and processes, encouraging developers to concentrate on their respective areas of expertise
and leading to advances in the state of the art in tracking. This paper describes the Video Motion Target Indicator (VMTI)
component, the second of the five components, of the architecture.
A processing framework for cognitive modeling to predict video interpretability is discussed. The architecture consists of spatio-temporal video preprocessing, metric computation, metric normalization, pooling of like metric groups with masking adjustments, multinomial logistic pooling of Minkowski-pooled groups of similar quality metrics, and estimation of a confidence interval on the final result.
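As a hedged sketch of the Minkowski pooling step the abstract names (the surrounding pipeline is not specified), the helper below collapses a group of metric values via the p-norm mean; the exponent and values are invented, and larger p increasingly emphasizes the largest (here, worst) values:

```python
def minkowski_pool(values, p):
    """Collapse a group of metric values via the p-norm mean."""
    return (sum(v ** p for v in values) / len(values)) ** (1.0 / p)

frame_metric = [0.2, 0.4, 0.9]            # one normalized metric across frames
mild = minkowski_pool(frame_metric, 1)    # p = 1 reduces to the plain mean
harsh = minkowski_pool(frame_metric, 6)   # larger p is dominated by large values
```

Choosing p per metric group lets the model mimic how severe distortions dominate a human observer's interpretability judgment, while the pooled group scores feed the downstream logistic stage.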