Conference 11756 Invited Panel Discussion: Joint Data Learning
AI techniques learn a model from a large available data set. These data sets are typically from a single modality (e.g., imagery), so the resulting model is based on a single modality as well. Likewise, multiple models may each be built for a common scenario (e.g., video together with natural language processing of text describing the situation). Such systems need robustness, efficiency, and explainability. A second modality can improve efficiency (e.g., through cueing), robustness (e.g., results are harder to fool, as by adversarial systems), and explainability, since evidence from different sources supports interpretation. The challenge is how to organize the data needed for joint data training and model building: for example, what is needed in terms of (1) a structure for indexing data as an object file, (2) recording of metadata for effective correlation, and (3) supporting models and analysis for model interpretability for users. A variety of questions remain to be discussed, explored, and analyzed for fusion-based AI tools.
Multisensor Fusion, Multitarget Tracking, and Resource Management I
In autonomous driving systems, distributed sensor fusion is widely used: each sensor has its own tracking system, and only the local tracks (LTs) are transmitted to the fusion center (FC). We consider the fusion of LTs taking into account all FC track-to-LT association hypotheses via probabilities in the proposed Hybrid Probabilistic Information Matrix Fusion (HPIMF) algorithm. In HPIMF, track association and fusion are carried out with probabilistic weightings rather than a single track association. Unlike track-to-track fusion (T2TF), one of the most commonly used approaches for distributed tracking systems, the associations considered in HPIMF are between the predicted FC state and the LTs from the local sensors. At each time, in an association event, at most one of the tracks within a local sensor's track list can be associated with the FC state. In real-world scenarios there can be large uncertainties and missed tracks due to sensor imperfections and sensor-target geometry. Consequently, the association might be unreliable, and fusion based on only a single association hypothesis could fail. Simulations of a realistic autonomous driving system show that HPIMF can successfully track a target of interest and is superior to T2TF, which relies on hard association decisions.
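As a minimal illustration of the probabilistic-weighting idea (not the actual HPIMF recursion, which fuses in information-matrix form), hypothesis-conditioned estimates can be blended by their association probabilities instead of committing to a single hard association; all numbers below are invented:

```python
import numpy as np

# Hypothetical association hypotheses: (probability, hypothesis-conditioned
# estimate of the fused FC state). In HPIMF these weights would come from
# the FC-track-to-LT association likelihoods.
hyps = [
    (0.7, np.array([10.1, 5.0])),
    (0.2, np.array([10.4, 4.8])),
    (0.1, np.array([9.8, 5.3])),
]

# Probability-weighted fusion over all hypotheses, rather than keeping
# only the most likely association as a hard-decision scheme would.
x_fused = sum(p * est for p, est in hyps)
print(x_fused)
```

A hard-decision fuser would output only the first hypothesis's estimate; the soft combination hedges against the 30% chance that the top association is wrong.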
This paper presents a method of determining whether two closely spaced objects observed by an electro-optical (EO) sensor are resolvable. The first goal of this work is to develop a method that determines whether data from an EO sensor are sufficient to estimate two sets of target parameters or whether a single target is more applicable. The second goal is to quantify the effectiveness of this method theoretically and confirm these assertions via Monte Carlo (MC) simulations. Work has been done previously on extracting measurements of single targets, as well as of two targets of possibly dissimilar intensity. The current work extends these works by providing a test to determine which method is best to employ. We consider point targets that deposit energy in the focal plane array (FPA) according to a Gaussian point spread function (PSF) with parameter σPSF. The resolution determination method is framed as a hypothesis test, with the null hypothesis representing a single set of target parameters to be estimated and the alternate hypothesis indicating that two sets of target parameters can be extracted. We derive the approximate type-I error and power of this test and present these data as a Receiver Operating Characteristic (ROC) curve for varying degrees of target separation. We also present the resolution probability versus target separation for varying target signal-to-noise ratio (SNR) differentials to compare this test with a commonly used approximation. Our simulated results show good agreement with the theoretical derivations, and we find that the test can nearly perfectly resolve these targets at separations above 0.8σPSF for SNR differentials up to 6 dB.
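A toy sketch of framing resolvability as a nested model comparison on a 1-D detector strip: the null model fits one Gaussian PSF, the alternative fits two. Positions are assumed known here so the amplitude fits stay linear (the paper estimates them jointly), and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_psf = 1.0
x = np.arange(-10.0, 10.0, 1.0)  # pixel centers on a 1-D strip

def psf(mu):
    """Gaussian PSF centered at mu, evaluated at the pixel centers."""
    return np.exp(-0.5 * ((x - mu) / sigma_psf) ** 2)

# scene: two targets separated by 1.5*sigma_psf with unequal intensity
sep = 1.5 * sigma_psf
y = 10 * psf(-sep / 2) + 6 * psf(+sep / 2) + rng.normal(0, 0.5, x.size)

def rss(basis):
    """Residual sum of squares after a linear least-squares amplitude fit."""
    A = np.column_stack(basis)
    amp, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ amp) ** 2))

rss0 = rss([psf(0.0)])                     # H0: one source, midway
rss1 = rss([psf(-sep / 2), psf(sep / 2)])  # H1: two sources
stat = rss0 - rss1                         # larger => favor two targets
print(f"test statistic: {stat:.2f}")
```

Thresholding `stat` against a value chosen for a desired type-I error rate yields the resolved/unresolved decision that the ROC analysis characterizes.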
The system considered in this paper operates with a feedback that is characterized by a gain and a desired final state (DFS), which is the main parameter of interest in the present study. The system is, however, subjected intermittently to stochastic inputs according to a Markov process. Since the system operates in two modes (under the feedback to the DFS and under a stochastic input), the Interacting Multiple Model (IMM) estimator is used. Two approaches are considered: (i) the DFS is a discrete-valued random variable (one of a finite number of possible states) with an a priori probability mass function (pmf), and (ii) the DFS is a continuous-valued random variable with an a priori probability density function (pdf). For Approach (i), we use a multiple IMM estimator (MIMM) that features one IMM for each of the possible DFS values. The a posteriori probability of the model for each IMM, i.e., of each DFS, is computed based on the likelihood function (LF) of the corresponding IMM. For Approach (ii), we design a single IMM to handle the unknown DFS to be estimated (mode M1) and the random inputs (mode M2). Simulation results explore several scenarios and investigate the degree of observability of this stochastic problem.
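The mode weighting at the core of the IMM can be sketched as a Markov prediction of the mode probabilities followed by a likelihood update; the transition matrix and likelihood values below are invented for illustration:

```python
import numpy as np

# Markov mode transition probabilities between the two modes
# (M1: feedback toward the DFS, M2: stochastic input) -- invented values
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])

mu = np.array([0.5, 0.5])    # prior mode probabilities
lam = np.array([2.4, 0.3])   # measurement likelihoods Lambda_j per mode

mu_pred = P.T @ mu           # Markov-predicted mode probabilities
mu_post = lam * mu_pred      # Bayes update with the mode likelihoods
mu_post /= mu_post.sum()     # normalize to a posterior pmf
print(mu_post)
```

In the full IMM, these posterior mode probabilities also drive the mixing of the mode-conditioned state estimates at the next cycle.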
This paper considers the problem of estimating the launch point (LP) of a thrusting object from a single fixed sensor’s 2-D angle-only measurements (azimuth and elevation). It is assumed that the target follows a mass ejection model and that the measurements become available only a few seconds after the launch time due to limited visibility. Previous works on this problem estimate the target’s state, which, for a passive sensor, requires a long batch of measurements, is sensitive to noise, and is ill-conditioned. In this paper, a polynomial fitting approach based on least squares is presented to estimate the LP without motion state estimation. We provide a statistical analysis to choose the optimal polynomial order, including evaluation of overfitting and underfitting. Next, we present Monte Carlo simulations to show the performance of the proposed approach and compare it with the much more complicated state-of-the-art technique that relies on state estimation. The proposed method is shown to be much simpler and more effective to implement in a real-time system than the state estimation methods.
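A hypothetical illustration of the fit-and-extrapolate idea: fit a polynomial to angle measurements via least squares and extrapolate back toward the unobserved launch time. The synthetic azimuth profile, noise level, and polynomial order are all invented; the paper selects the order via a statistical over/underfitting analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# measurements start a few seconds after launch (limited visibility)
t = np.linspace(3.0, 20.0, 50)
true_az = 0.8 + 0.05 * t - 0.001 * t**2          # synthetic azimuth (rad)
az = true_az + rng.normal(0.0, 1e-3, t.size)     # noisy measurements

# least-squares polynomial fit (order 2 chosen arbitrarily here)
coeffs = np.polyfit(t, az, deg=2)

# extrapolate the azimuth back to the assumed launch time t = 0
az_at_launch = np.polyval(coeffs, 0.0)
print(f"estimated azimuth at launch: {az_at_launch:.4f} rad")
```

Repeating the same fit on the elevation channel gives the angle pair that, together with terrain or geometric constraints, localizes the LP without ever estimating a full motion state.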
This brief work introduces the relatively new sliding innovation filter to the field of fault detection and diagnosis. This important area is part of the signal processing techniques widely used in industrial practice, telecommunications, optical systems, and robotics, to name a few. The filter overcomes robustness issues during faults caused by modeling uncertainties. This work explores the properties and quality of the filter outputs when applied to an electromechanical system. The results are compared with those of the well-known and well-studied Kalman filter.
In this brief work, a novel filtering technique that combines the newly developed sliding innovation filter with a multiple-model strategy is proposed. Introduced in 2020, the sliding innovation filter is a relatively new filter used for state and parameter estimation. Based on variable structure techniques, it shares the same principles as sliding mode observers. The filter is robust and stable under system modeling uncertainties. The proposed multiple-model-based sliding innovation filter is tested on an electrohydrostatic actuator (EHA), and the results are discussed.
Multisensor Fusion, Multitarget Tracking, and Resource Management II
The sliding innovation filter (SIF) is a newly developed filter that shares similar principles with sliding mode observers and variable structure techniques. The SIF is formulated as a predictor-corrector method that uses the innovation, or measurement error, as a switching hyperplane and forces the states to remain within a region of the state trajectory. In this brief paper, the SIF is reformulated as a two-pass smoother to reduce the effects of noise and improve the overall performance. The proposed method, known as the sliding innovation smoother (SIS), is applied to an aerospace flight surface actuator, and the results are compared to those of the original filter.
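A simplified sketch of the SIF corrector step described above, using one common formulation of the switching gain, K = H⁺ diag(sat(|innovation|/δ)); the system, boundary-layer widths, and measurements are illustrative, and this is the forward filter only, not the paper's two-pass smoother:

```python
import numpy as np

def sif_update(x_pred, z, H, delta):
    """One corrector step of a sliding innovation filter (simplified sketch).

    The innovation is saturated inside a boundary layer of width `delta`,
    so the correction switches toward the measurement surface without
    overshooting it.
    """
    e = z - H @ x_pred                          # innovation (measurement error)
    sat = np.clip(np.abs(e) / delta, 0.0, 1.0)  # saturation term per channel
    K = np.linalg.pinv(H) @ np.diag(sat)        # switching gain
    return x_pred + K @ e

# toy example: two states, both measured directly
H = np.eye(2)
x_pred = np.array([1.0, 0.0])
z = np.array([1.2, -0.1])
x_upd = sif_update(x_pred, z, H, delta=np.array([0.5, 0.5]))
print(x_upd)
```

Unlike a Kalman gain, this gain does not depend on an assumed noise covariance, which is the source of the SIF's robustness to modeling uncertainty.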
Drone classification based on radar return signal is an important task for public safety applications. Determining the make or class of a drone gives information about the potential intent of the UAV. We present a novel method for classifying commercially available drones based on their radar return signal, using a convolutional neural network. Our approach achieves 0.46 mean Average Precision (mAP) on a simulated dataset at 5 dB SNR.
Information Fusion Methodologies and Applications I
This paper presents an innovative image registration algorithm using the particle flow filter. The particle flow filter is an efficient Bayesian filter that uses particles to represent probability densities. It is not constrained to the highly restrictive unimodal, linear, and Gaussian assumptions of many other Bayesian filters. The particle flow filter algorithms were implemented in MATLAB for 2D rigid body point-set registration. Additionally, the particle filter method and iterative closest point algorithms were implemented for comparison. For the same alignment accuracy, the new particle flow filter approach was 169% faster than the particle filter for certain challenging problems. For the same alignment time, the particle flow filter reduced misalignment by as much as 25% over that of the particle filter. The particle flow filter achieved 100% alignment with enough particles, and reduced misalignment by as much as 75% over that of iterative closest point. These results demonstrate that image registration via the particle flow filter significantly outperforms the particle filter and iterative closest point algorithms in the presence of noise and a high degree of initial misalignment.
Information Fusion Methodologies and Applications II
A Signal of Interest (SOI) is a signal that has been recorded for further analysis, driven by mission requirements covering both known and anomalous signals. Identifying anomalies/SOIs relies on the system operator’s knowledge, which can be prone to human error. The objective of our project is to improve situational awareness by automating the identification of SOIs with Machine Learning/Artificial Intelligence (ML/AI) techniques. In this paper, we describe a prototype, developed and integrated into the tactical system, that streams live Radio Frequency (RF) data into our real-time Graphical User Interface (GUI) and implements an Artificial Neural Network (ANN) algorithm with the ability to predict potential anomalies/SOIs in real time.
Over the last decade, various defense and security agencies have focused on methods of data and information fusion across multiple domains (e.g., space, air, land, sea, undersea, cyber, and information). Researchers and practitioners in these communities are currently emphasizing the importance of information warfare, algorithmic warfare, joint all domain operations, and multi-source availability. A related development extending from the data analytics community is adversarial machine learning (AML), i.e., the study of attacking and defending machine learning algorithms. It is generally the case in AML for a single algorithm to be considered. However, AML research regarding multi-source data manipulation is less developed because it compounds the challenges typically addressed. That is, attacks must be perceived over numerous information streams and their effects mitigated accordingly, often across multiple algorithms. This challenge is further complicated in multi-domain applications characterized by distributed control wherein agents have distinct capabilities (e.g., people or technological tools); the AML approaches required for operator infusion, information fusion, and control diffusion likely vary across each actor. Noting these challenges, this manuscript reviews command and control constructs, surveys related literature, and explores opportunities for adversarial risk analysis, a decision-theoretic alternative to game theory, to address AML in multi-source command and control settings.
Risk-based security is a concept introduced to provide security checks without subjecting travelers to undifferentiated scrutiny, while maintaining the level of security of current checkpoint practices without compromising security standards. Furthermore, as a means of improving travelers’ experience at checkpoints, risk-based security is expected to reduce queueing and waiting times. Several projects have been funded by the European Commission to investigate the concept of risk-based security and to develop the means and technology required to implement it. This paper discusses and analyses the concept of risk-based security, the inherent trade-off among risk assessment, screening time, and level of security, and means to implement risk-based security based on anomaly detection using deep learning and artificial intelligence (AI) methods. This paper summarizes work that was carried out in the project FLYSEC [2] and continues in the projects TRESSPASS [3], D4FLY, and SAFETY4RAIL (see Acknowledgments), and has previously been published in [13], [7], [8].
Information Fusion Methodologies and Applications III
There is an increasing need for both governments and businesses to discover latent anomalous activities in unstructured, publicly available data produced by professional agencies and the general public. Over the past two decades, consumers have begun to use smart devices to both consume and generate a large volume of open-source text-based data, providing the opportunity for latent anomaly analysis. However, real-time data acquisition and the processing and interpretation of various types of unstructured data remain a great challenge. Recent efforts have focused on artificial intelligence / machine learning (AI/ML) solutions to accelerate the labor-intensive linear collection, exploitation, and dissemination analysis cycle and to enhance it with a data-driven rapid integration and correlation process for open-source data. This paper describes an Activity Based Intelligence framework for anomaly detection in open-source big data using AI/ML to perform semantic analysis. The proposed Anomaly Detection using Semantic Analysis Knowledge (ADUSAK) framework includes four layers: an input layer, a knowledge layer, a reasoning layer, and a graphical user interface (GUI)/output layer. The corresponding main technologies include information extraction, Knowledge Graph (KG) construction, semantic reasoning, and pattern discovery. Finally, ADUSAK was verified by performing emerging-events detection, fake-news detection, and suspicious network analysis. The generalized ADUSAK framework can be easily extended to a wide range of applications by adjusting the data collection, model construction, and event alerting.
Trajectory data has numerous commercial applications, e.g., location-based services, travel forecasting, health monitoring, land use analysis, urban planning, and robotics. However, traditional trajectory mining algorithms do not explain how and why the motion was generated, limiting their utility in GEOINT applications when data is unlabeled, noisy, and does not contain contextual layers. In this paper, we describe a methodology that analyzes spatiotemporal trajectory data to produce semantic labels. The methodology learns the behavior models that most likely generated the input trajectory data and uses these models to transfer labels across unlabeled, ambiguous tracks. Behavior models include both movers’ intent, encoded as motion reward functions, and behavior policy, encoded as the state-conditioned movement action distribution. We show that learned behavior models provide an efficient mechanism for relating noisy tracks, allowing accurate semi-supervised learning (>90% f-score over labeling outcomes) with just a few labeled examples per type of motion behavior. We further hypothesize that learned behavior models contain latent statistical and structural information that may be exploited to label trajectories in a completely unsupervised manner in the future, which will allow military analysts or civilian consumers to explain observed trajectory data, derive semantic motion-based features to improve object and region classification, and reason about motion changes in different contexts.
Teams of manned and unmanned active sensors can provide tactical military units, search and rescue teams, and emergency response units with timely information; however, limited numbers of these systems mean their tasking must be prioritized, and the information they provide needs to be synthesized to avoid overwhelming users. Automated methods can fuse a priori and real-time information to provide decision-makers with time-critical situational awareness and a basis for search prioritization and route planning. Previous work has shown how expected entropic information gain can be used as a measure of utility in motion planning, though in multi-target search scenarios not all information is equally valuable. This research investigates generating certainty grids for dynamic search prioritization using a time-dependent cell valuation that incorporates entropy as well as threat- and geography-specific importance of information relative to the mission. We compare two different approaches to calculating posterior probability and entropy: a Bayesian log odds method based on prior works on obstacle avoidance; and a Dempster-Shafer Theory approach using a plausibility measure. The resulting weighted certainty grid map is provided for dynamic search. We then demonstrate how this adaptive, integrated situational awareness approach performs in different simulated, small unit tactical scenarios.
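A minimal sketch of the Bayesian log-odds side of the comparison, as commonly used in occupancy-grid mapping: each cell fuses measurements additively in log-odds space, and its binary entropy quantifies the remaining information to be gained there. The sensor probability below is invented:

```python
import numpy as np

def logodds(p):
    """Convert a probability to log-odds form."""
    return np.log(p / (1.0 - p))

def update_cell(l_prior, p_meas):
    """Standard additive log-odds fusion of a new measurement into a cell."""
    return l_prior + logodds(p_meas)

def entropy(p):
    """Binary entropy of the cell's occupancy probability, in bits."""
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

l = logodds(0.5)              # uninformed prior (log-odds 0)
l = update_cell(l, 0.7)       # hypothetical detection with P = 0.7
p = 1.0 / (1.0 + np.exp(-l))  # back to probability
print(p, entropy(p))
```

In the valuation described above, this per-cell entropy would then be weighted by the time-dependent, threat- and geography-specific importance terms before prioritizing search.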
Information Fusion Methodologies and Applications IV
A recent publication entitled “The US Army in Multi-Domain Operations 2028” [1] stated that the current strategic environment is characterized by continuous competition involving “great powers”, particularly China and Russia, which pose challenges in all domains and leverage the “competition” space to achieve operational and strategic objectives. For example, diplomatic and economic actions, information warfare, and unconventional and conventional operations are being integrated to fracture alliances (e.g., NATO) and partnerships. In the transition from “competition” to conflict, space, cyber, electromagnetic, and information capabilities would be integrated to create standoff in order to separate friendly forces over time, space, and function. In response to the realization that information has significant impacts on national security (e.g., foreign manipulation of elections), Information was declared the seventh joint warfighting function. Information Operations is a subset of the Information function focused on the employment of military capabilities to change adversary behavior. Operating successfully requires the ability to characterize and assess the impact of various actions, messages, and events on actors and communities, and a mastery of the tools, techniques, and activities needed to affect the dimensions of the information environment (the individuals, organizations, and systems that collect, process, disseminate, or act on information) [2]. This paper introduces a Playbook concept to enable successful operations in the Information Environment (IE) based on the characterization of multi-domain information using the BEND framework [3], extended beyond social media to encompass Information Domains or Information Related Capabilities (IRCs). An example of Playbooks based on a historical scenario is provided. Exemplars of analytics to support IE characterization are included, along with a discussion of remaining research gaps.
Risk-based and automated security systems require discreet monitoring of passengers’ whereabouts in a terminal to allow timely detection of suspicious behaviors and prevention of malicious actions. In a series of two papers, Thomopoulos et al. introduced a methodology providing real-time risk assessment of airport passengers based on their trajectories. The proposed methodology implements a deep learning architecture. It is fully automated, reducing the workload of video surveillance operators and leading to less error-prone conclusions. Furthermore, the proposed methodology has been integrated with the OCULUS Command & Control (C2) System and the i-Crowd Simulator, a crowd simulation platform developed in the Integrated Systems Lab (ISL) of the Institute of Informatics and Telecommunications at NCSR “Demokritos.” In this paper we extend our previous work by introducing noise into both the training and testing data used for tracking passengers and detecting anomalies in their tracks. We also consider the case of missing data in both the training and testing data, in order to model a realistic tracking scenario in which passengers’ tracks have gaps from camera to camera due to transmission delays and/or data overflow. Extensive testing with the i-Crowd simulator demonstrates that the anomaly detection system is robust to both noisy and missing data, making it a very promising risk assessment scheme that can reliably be used for risk-based security under realistic operational conditions.
Dempster-Shafer theory (DST) of evidence is an effective approach for decision analysis, especially under high uncertainty. It is an evidence-based probabilistic reasoning technique that differs from traditional decision support methods by introducing belief functions not only on sets of propositions but also on all corresponding subsets, which provides the capability of distinguishing beliefs on propositions from potential uncertainties among them. However, a significant factor that determines the reliability of reasoning systems is the fairness that characterizes their processes and outcomes. In this paper, a modified fairness-by-design Dempster-Shafer reasoning system is proposed in which quantitative fairness metrics are taken into consideration within the algorithmic procedure. For each piece of evidence provided, a dedicated fairness estimation function determines whether the evidence is compliant with the predefined ethics/legal regulations of a given fairness framework. Each fairness estimation function acts as a doubt factor for the evidence: it reduces the belief value of the corresponding hypothesis and increases the uncertainty related to that hypothesis. In this way, unfairness limits the trustworthiness of the corresponding evidence and, as a result, weakens its contribution. The proposed solution is tested against a simulated queue surveillance use-case scenario, where the inputs of two CCTV cameras are used to infer malicious behavior of people in the queue. As a proof of concept, one of the two cameras introduces discrimination bias that violates the predefined fairness regulation. Results show that the modified DST system tolerates unfairness effectively while retaining algorithmic accuracy at a satisfactory level.
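One standard way to realize a "doubt factor" is Shafer discounting: scale each mass by a reliability score and move the remainder onto the full frame (total ignorance) before Dempster combination. The sketch below, on a two-hypothesis frame {benign, malicious}, uses invented masses and an invented fairness score, and is not the paper's exact algorithm:

```python
def discount(m, alpha):
    """Shafer discounting: scale masses by reliability alpha and move
    the remaining (1 - alpha) mass onto the full frame 'theta'."""
    d = {k: alpha * v for k, v in m.items()}
    d["theta"] = d.get("theta", 0.0) + (1.0 - alpha)
    return d

def combine(m1, m2):
    """Dempster's rule on the frame {'b', 'm', 'theta'} (theta = {b, m})."""
    def meet(a, b):
        if a == "theta":
            return b
        if b == "theta":
            return a
        return a if a == b else None  # None = empty intersection (conflict)
    out, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            c = meet(a, b)
            if c is None:
                conflict += va * vb
            else:
                out[c] = out.get(c, 0.0) + va * vb
    return {k: v / (1.0 - conflict) for k, v in out.items()}

cam1 = {"m": 0.6, "b": 0.2, "theta": 0.2}  # evidence for malicious
cam2 = {"m": 0.5, "b": 0.3, "theta": 0.2}  # biased camera's evidence
fairness = 0.4                             # low fairness score -> heavy discount
fused = combine(cam1, discount(cam2, fairness))
print(fused)
```

Discounting the biased camera shifts its mass toward ignorance, so its contribution to the fused belief is weakened rather than rejected outright.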
Prior to the Covid-19 pandemic, increased passenger flows at airports and the need for enhanced security measures against ever-increasing and more complex threats led to long security lines, increased waiting times, and often intrusive and disproportionate security measures that resulted in passenger dissatisfaction and escalating costs. The International Air Transport Association (IATA), the Airports Council International (ACI), and the respective industry had expressed their concern that the airport security model of the time was not sustainable in the long term. The vision of a seamless and continuous journey throughout the airport, with efficient allocation of security resources based on intelligent risk analysis, set the challenging objectives for the Smart Security of the airport of the future. Several projects, such as FLYSEC and TRESSPASS, have been funded by the European Commission to develop and demonstrate innovative, integrated, risk-based end-to-end airport security processes for passengers, while enabling a guided and streamlined procedure from landside to airside and into the boarding gates, offering for the first time an operationally validated innovative concept for end-to-end aviation security. Ironically, the Covid-19 pandemic brought air, as well as sea and land, travel to an almost complete standstill, which has, temporarily at least, muted the debate about alternative security protocols and processes for accommodating ever-increasing passenger flows more effectively. Despite the dramatically reduced passenger flows, risk-based security remains valid both as a research topic and as a security doctrine that can increase security efficacy.
This paper reports developments in risk-based security and updates the status of the OCULUS C2I system as a integrated platform for comprehensive analysis, testing, simulation and implementation of comprehensive risk-based security protocols, algorithms, and processes.
Geocoding information in 2D or 3D space in a systematic and dynamic way is a challenging problem, and navigating around content indoors, whether in physical or virtual space, is equally demanding. wayGoo is a 2D/3D geocoding platform that offers dynamic geocoding and navigation in both physical and virtual spaces. In a security application, one would like to connect these dynamic geocoding capabilities with the command and control functionalities required for monitoring and managing security environments. Integrating wayGoo with the OCULUS C2I system provides exactly this capability for dynamically monitoring, interacting with, and managing security spaces. In this paper, we present the integrated wayGoo-OCULUS C2I environment and demonstrate its functionalities through several use case scenarios.
Signal and Image Processing, and Information Fusion Applications I
With the rise of high-quality and affordable video cameras, it has never been easier to record and broadcast live amateur sports games. However, to maintain a decent viewing angle of the field and its players, one must continuously adjust the camera's orientation to follow the players. While several commercial products give sports teams equipment and software to record their games with higher quality and convenience, these technologies are usually proprietary and quite expensive. Therefore, we propose a solution that utilizes the existing lighting poles around a sports field for mounting cameras. In this way, sports teams and field managers can use low-cost hardware to record games. Additionally, this offers the opportunity for a permanent fixture of cameras around the field that is readily available whenever a game takes place. To this end, this paper investigates how to find the ideal placement of cameras on lighting poles to best capture a nearby field in the following two scenarios. (i) Single-camera scenario: when the properties of a pinhole camera such as focal distance, sensor dimensions, and image resolution are known, the camera's projection quadrilateral on the field can be calculated. Based on this, we optimize the camera's height and elevation angle to meet a desired ground sampling distance. (ii) Multi-camera scenario: when the properties of multiple cameras are known, we optimize camera placement either to capture the same field area from different views or to capture different areas of the field for greater coverage. To showcase the potential application of these optimization methods, we have implemented them in a web-based simulator that can run in almost any browser (source code available at https://github.com/figuedj1/SportsFilmingOptimization).
When a user inputs their camera and field specifications, they are not only given both a 2D and 3D view of the field and environment but also methods for optimizing each camera's orientation and placement.
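For a flavour of the single-camera calculation, the sketch below estimates the ground sampling distance at the image centre from a pinhole model. This is a hypothetical helper with assumed parameter values, not code from the simulator:

```python
import math

def center_gsd(height_m, depression_deg, focal_mm, sensor_w_mm, image_w_px):
    """Ground sampling distance (m/px) at the image centre for a pinhole
    camera mounted `height_m` up a pole and tilted down by `depression_deg`.
    Illustrative helper -- not taken from the paper's simulator."""
    pixel_mm = sensor_w_mm / image_w_px                    # physical pixel pitch
    slant_m = height_m / math.sin(math.radians(depression_deg))
    # similar triangles: ground extent of one pixel at the slant range
    return pixel_mm * slant_m / focal_mm

# e.g. a camera 12 m up a pole, tilted 30 degrees down (assumed values)
gsd = center_gsd(12.0, 30.0, focal_mm=8.0, sensor_w_mm=6.17, image_w_px=1920)
```

Sweeping `height_m` and `depression_deg` against a target GSD is the essence of the single-camera optimization; the full simulator additionally computes the projection quadrilateral for coverage.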
Wireless devices identify themselves using media access control (MAC) addresses, which can be easily intercepted and mimicked by an adversary. Mobile devices also have a unique physical fingerprint, represented by perturbations in the frequency of broadcast signals caused by differences in the manufacturing of their hardware components. This fingerprint is much more difficult to mimic. The short-time Fourier transform (STFT) is used to analyze how the frequency content of a signal changes over time, and may provide a better representation of mobile signals for detecting their unique fingerprint. In this paper, we collect wireless signals using the 802.11a/g protocol and show the effect on classification performance of applying the STFT while varying the choice of window length, augmenting the data with complex Gaussian noise, and concatenating STFTs of different frequency resolutions, achieving state-of-the-art performance of 99.94% accuracy in the process.
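As an illustration of the front end, a numpy-only magnitude STFT computed at two window lengths can be concatenated into one feature vector, mirroring the multi-resolution idea above. The 20 MHz sample rate and the synthetic tone are assumptions for the sketch:

```python
import numpy as np

def stft_mag(iq, win_len, hop):
    """Magnitude STFT of a complex IQ record (numpy-only sketch)."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(iq) - win_len) // hop
    frames = np.stack([iq[i*hop : i*hop + win_len] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.fft(frames, axis=1))       # (frames, freq bins)

fs = 20e6                                           # assumed 802.11a/g sample rate
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
iq = np.exp(2j*np.pi*1e6*t) + 0.01*(rng.standard_normal(4096)
                                    + 1j*rng.standard_normal(4096))

# two frequency resolutions concatenated into one feature vector,
# as in the multi-resolution experiment described above
feat = np.concatenate([stft_mag(iq, 64, 32).ravel(),
                       stft_mag(iq, 256, 128).ravel()])
```

The short window resolves fast transients (e.g. transmitter turn-on), the long window resolves fine carrier offsets; the classifier sees both at once.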
Computational neuroscience models can be used to understand neural dynamics in the brain and how these dynamics change with physiological and other conditions, such as aging. The approach we use in this work is energy landscape analysis based on resting-state fMRI data. The dataset consists of 70 subjects with normal cognitive function, of which 23 are young adults and 47 are old adults. In this analysis, disconnectivity graphs and activity patterns are generated using connectivity statistics among seven prominent brain networks. To study dynamic brain behaviors, we perform sliding-window studies on the dataset and observe the local minima of each window evolving in time. By varying the window shift from multiple seconds down to 1 second, we obtain statistics and evaluate the speed and activity-pattern holding time of individual and group subjects. We found that older subjects can hold brain states for a longer time but then jump to other dominant brain-state local minima with a large Hamming distance, whereas young subjects change dominant local minima more frequently but with a small Hamming distance of 1 or 2. In fact, when averaged over the full time course, old subjects have more stable brain-state local minima than young subjects. For both young and old subjects, the default mode network (DMN) and visual network (VIS) are coupled, but for young subjects the two networks turn on and off together and are strongly correlated. For old subjects, there is an extra dominant brain-state local minimum in which the DMN and attention network (ATN) are correlated with each other and anti-correlated with the VIS and sensory-motor networks (SMN). This state may suggest that old subjects are more capable than young subjects of focusing on internal brain models without being influenced by external visual and sensory factors.
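As a toy illustration of the pattern dynamics described above, binarising a multi-network time course into ±1 activity patterns and counting sign flips between consecutive time points yields the kind of Hamming-distance statistics discussed. This is a minimal sketch with synthetic data; the full analysis fits a pairwise maximum-entropy model and tracks local minima of its energy landscape:

```python
import numpy as np

def pattern_hops(ts, thresh=0.0):
    """Binarise a (time, networks) fMRI time course into +/-1 activity
    patterns and return the Hamming distance between consecutive
    patterns. Illustrative sketch, not the full energy-landscape
    pipeline."""
    s = np.where(ts >= thresh, 1, -1)          # (T, 7) spin patterns
    ham = (s[1:] != s[:-1]).sum(axis=1)        # network flips per transition
    return s, ham

rng = np.random.default_rng(0)
ts = rng.standard_normal((180, 7))             # synthetic: 7 networks, 180 s
patterns, hops = pattern_hops(ts)
# frequent small `hops` values (~1-2) would correspond to the short jumps
# reported for young subjects; rare large jumps to the old-subject pattern
```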
Brain connectivity biomarkers are powerful tools not only for identifying neuropsychiatric disorders in patients but also for validating treatment effectiveness. In this work, we used energy landscape techniques to analyze resting-state fMRI data collected from 107 healthy controls (HC) and 86 schizophrenia patients (SZ). Activity patterns and disconnectivity graphs were obtained from 264 ROIs and the 180-second fMRI time course of each subject. Statistics of individual and subgroup inter-network and intra-network connections of the Auditory Network (AUD), Attention Network (ATN), Default-Mode Network (DMN), Frontoparietal Network (FPN), Salience Network (SAN), Sensorimotor Network (SSM), and Visual Network (VIS) were analyzed. For inter-network results, we found that the DMN and ATN of SZ are strongly coupled, whereas HC exhibit a stable brain state in which the ATN, SAN, and FPN are coupled as a group and anti-correlated with the other coupled group of DMN, SSM, VIS, and AUD. For intra-network results, we found that in the FPN, controls have more flexibility, allowing the Inferior Frontal Gyrus to work independently with the Superior Temporal Gyrus, and that regions that process language and regions that process motor control and planning can sometimes be decoupled in SZ. In the SSM, some controls can achieve a brain state that separates voluntary and autopilot activities. In the VIS, controls can separate lower-level visual processing from working memory, motor planning, and guided coordination, whereas patients mix some of these together, suggesting a lack of self-awareness and self-constraint.
Signal and Image Processing, and Information Fusion Applications II
Many correctional facilities suffer from the smuggling of cell phones and other wireless devices within prison walls. In order to locate these devices for confiscation, we must be able to map intercepted signals to indoor locations within a radius of a few meters. We chose to use cell phones of varying models and multiple low-cost software-defined radios (SDRs) for this task. The different types of cell phones provide a more robust dataset for location fingerprinting due to the different transmitter hardware in each. Furthermore, the SDRs allow us to easily receive the raw IQ data from WiFi signals while being more cost-efficient for smaller facilities. This raw data is collected from a harsh prison-like environment in a grid pattern and associated with the locations where it was captured. An advanced machine learning network uses the raw signals as input and the locations as labels in order to map the signals to their respective locations. The accuracy of our system is then compared against prior works in this field and discussed. These studies often use values other than the raw IQ data, such as channel state information and received signal strength indicator. Therefore, we augment our original input with each of these values and measure their effect on the system's overall performance. The end result provides prisons with a tool capable of locating devices used in unauthorized zones for confiscation.
Responding to health crises requires accurate and timely situation awareness. Understanding the location of geographical risk factors could assist in preventing the spread of contagious diseases, and the system developed here, Covid ID, is an attempt to solve this problem through the crowd-sourcing of machine-learning, sensor-based, health-related detection reports. Specifically, Covid ID uses mobile-based Computer Vision and Machine Learning in a multi-faceted approach to understanding potential risks related to Mask Detection, Crowd Density Estimation, Social Distancing Analysis, and IR Fever Detection. Both visible-spectrum and LWIR images are used. Real-world results for all modules are presented, along with the developed Android application and supporting backend.
In times of health crises, disease situation awareness is critical to the prevention and containment of the disease. One indicator of the development of many contagious diseases is the presence of fever, and the proposed system, IRFIS, extends prior research into fever detection via infrared imaging in two key ways. First, the system utilizes a modern, machine-learning-based object detection model for detecting heads, supplanting traditional methods that relied upon shape matching. Second, IRFIS is capable of running on the Android mobile platform using a small, commercial-grade infrared camera. IRFIS's head detection model, when evaluated on a dataset of unseen images, achieved an AP of 96.7% at an IoU of 0.50 and an AR of 75.7% averaged over IoU values between 0.50 and 0.95. IRFIS calculates the target's maximum temperature in the detected head sub-image; real-world results are presented and avenues of future work are explored.
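The IoU thresholds behind the quoted AP and AR figures are the standard box-overlap criterion, which can be computed as follows (a generic sketch with made-up boxes, not IRFIS code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2);
    the overlap criterion behind AP/AR detection metrics."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# a half-overlapping detection scores IoU = 1/3, i.e. a miss at the
# 0.50 threshold used for the AP figure above
hit = iou((0, 0, 10, 10), (5, 0, 15, 10)) >= 0.50
```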
We consider the problem of accurately detecting signals from contraband WiFi devices. Source locations may be selected in a worst-case fashion from within an indoor structure, such as a correctional facility. The structure layout is known, but inaccessible prior to deployment, and only a small number of detectors are available for sensing these signals. Our approach treats this setting as a covering problem, where the aim is to achieve a high probability of detection at each of the grid points of the terrain. Unlike prior approaches, we employ (1) a variant of the maximum coverage problem, which allows us to account for aggregate coverage by several detectors, and (2) a state-of-the-art commercial wireless simulator to provide SINR measurements that inform our problem instances. This approach is formulated as a mathematical program to which additional constraints are added to limit the number of detectors. Solving the program produces a placement of detectors whose performance is then evaluated for classifier accuracy. We present preliminary results, combining both simulation data and real-world data to evaluate the performance of our approach against two competitors inspired by the literature.
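The covering viewpoint can be illustrated with the classic greedy approximation to maximum coverage. The paper solves a mathematical program exactly; the greedy sketch below, with toy coverage sets, is only meant to show the formulation:

```python
def greedy_max_coverage(cover_sets, k):
    """Greedy (1 - 1/e) approximation for placing k detectors.
    `cover_sets[i]` is the set of grid points that detector site i covers,
    e.g. the points whose simulated SINR clears the detection threshold.
    Sketch of the covering idea; not the paper's exact formulation."""
    covered, chosen = set(), []
    for _ in range(k):
        # pick the site adding the most not-yet-covered grid points
        best = max(cover_sets, key=lambda i: len(cover_sets[i] - covered))
        chosen.append(best)
        covered |= cover_sets[best]
    return chosen, covered

# toy instance: three candidate sites covering overlapping grid points
sites = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}}
picked, pts = greedy_max_coverage(sites, k=2)
```

Accounting for aggregate coverage means a grid point counts once no matter how many detectors see it, which is exactly what the set union captures.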
Existing target detection methods mostly focus on single targets. In remote sensing images, some natural or man-made targets often appear in groups or formations, such as ship formations. A ship formation not only contains the attribute information of each single ship but also has the spatial distribution characteristics of the formation. In this paper, a detection method for ship formations is proposed. The method comprises three stages: sub-target detection, formation extraction, and formation association. In the first stage, three features of the target (shape, gradient, and texture) are extracted by multi-feature fusion; on this basis, the sub-targets are detected by a support vector machine. In addition, the maximum symmetric surround and spectral residual models are used to remove possible interference such as ship-like reefs and clouds. In the second stage, agglomerative hierarchical clustering is adopted to obtain the ship formation information. Since the number of formations and the distribution of formation members are unknown, hierarchical clustering avoids having to select cluster centers and the number of categories. In the last stage, by analyzing the spatial distribution and attribute information of the ship formation, the topological features of the formation are extracted and reconstructed based on spectral graph partitioning. Finally, combining topological features and attribute information, ship formation detection is realized by formation association. Experiments conducted on a simulated data set show that this method can detect ship formations effectively in the presence of interference, and is faster and more accurate than traditional fuzzy inference.
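The formation-extraction stage can be illustrated with a threshold-based single-linkage grouping of detected ship centroids, which, like the hierarchical clustering above, needs no preset number of clusters. This is a numpy-only sketch; the distance threshold `max_gap` and the toy coordinates are assumptions:

```python
import numpy as np

def cluster_formations(pos, max_gap):
    """Single-linkage agglomerative grouping of detected ship centroids:
    ships closer than `max_gap` (directly or through a chain of ships)
    end up in one formation. Sketch only -- the paper's clustering also
    uses ship attribute information."""
    n = len(pos)
    labels = list(range(n))                    # each ship starts alone
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) <= max_gap:
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    return labels

# toy scene: three ships near the origin, a pair far away
ships = np.array([[0, 0], [1, 0], [0, 1], [50, 50], [51, 50]], float)
labels = cluster_formations(ships, max_gap=2.0)
```

Because merging is transitive, an elongated column of ships is kept as one formation even when its endpoints are far apart, which a fixed-center method like k-means would split.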
Photoacoustic microscopy with a large depth of focus is of great significance to biomedical research. The conventional optical-resolution photoacoustic microscope (OR-PAM) suffers from a limited depth of field (DoF), since the focused Gaussian beam it employs has only a narrow in-focus depth range, so few details in the depth direction can be revealed. Here, we developed a computed extended-depth-of-field method for photoacoustic microscopy using ratio-of-low-pass-pyramid fusion rules. A self-made optical-resolution photoacoustic microscope was used to obtain source images of the same sample at different focal positions. First, a ratio-of-low-pass pyramid is constructed for each source image. Second, a ratio-of-low-pass pyramid for the fused image is constructed by selecting values from corresponding nodes in the component pyramids. Finally, the fused image is recovered from its ratio-of-low-pass pyramid. The fused image is more informative than any single source image, and more details can be revealed due to the extended DoF. Simulations were performed to test the performance of our method, and images with different focal positions were used to verify its feasibility. Performance was analyzed by calculating entropy, average gradient, mean square error (MSE), and edge strength. The simulation results show that this method can double the depth of field of PAM without sacrificing lateral resolution, and in vivo imaging of zebrafish demonstrates the feasibility of our method.
Signal and Image Processing, and Information Fusion Applications III
Current perception systems often carry multimodal imagers and sensors such as 2D cameras and 3D LiDAR sensors. To fuse and utilize the data for downstream perception tasks, robust and accurate calibration of the multimodal sensor data is essential. We propose a novel deep-learning-driven technique (CalibDNN) for accurate calibration among multimodal sensors, specifically LiDAR-camera pairs. The key innovation of the proposed work is that it does not require any specific calibration targets or hardware assistance, and the entire processing is fully automatic with a single model and a single iteration. Comparisons among different methods and extensive experiments on different datasets demonstrate state-of-the-art performance.
This paper demonstrates the combination of lidar and passive polarimetric infrared imaging for object detection and classification. Lidar imaging characterizes reflective properties and provides high-resolution 3-D spatial information, while passive imaging offers faster imaging of large scenes. A cooperative imaging approach improves the imaging process by exploiting polarimetric features of hidden objects and cueing only the anomalous regions for further interrogation. Then, features from each sensor are combined for object classification. A demonstrator is assembled and utilized to evaluate the hybrid approach in an outdoor environment. The adaptive scanning technique reduces lidar scan time by 99%, locates the hidden object among the top areas cued, and outperforms single-modality classification accuracy by over 20%. The demonstration verifies that hybrid lidar and passive polarimetric imaging is applicable to the classification of objects hidden in a large scene.
The rapid development of new technologies employing laser, electron-beam, electroerosion, and other working processes requires new approaches to developing systems that control the operations being performed. However, these processes do not lend themselves to visual observation and do not allow the introduction of sensors into the working process zone, which suggests that vibroacoustic diagnostic methods should be used. The article discusses obtaining information from a vacuum chamber under electron-beam action on thin films of reinforcing coatings. It is shown that the parameters of the vibroacoustic signals accompanying the formation of new structures can be recorded with the help of flexible waveguides, in the form of a wire drawn from the vacuum chamber to a plate carrying an accelerometer. The article presents and discusses monitoring of the formation of intermetallic compounds under the influence of a pulsed electron beam on an aluminum plate covered with a thin film of a heat-resistant nickel alloy.
Identification of target molecules, based on spectrum-feature extraction by comparison of spectra, can be accomplished using signal templates having patterns associated with known materials. This study examines the concept of using IR spectra calculated using density functional theory (DFT) as signal templates. In principle, DFT-calculated IR spectra should provide reasonable templates for comparison with IR spectral measurements associated with different types of detector schemes and complex spectral-signature backgrounds. In practice, however, there exist artifacts due to computational errors and model assumptions in the case of DFT-calculated spectra, and artifacts due to measurement errors and experimental-design assumptions in the case of spectral measurements. Accordingly, the use of DFT-calculated spectra as signal templates must consider these artifacts. In this study, a case-study analysis of IR absorption spectra for a water contaminant of interest is presented, which demonstrates aspects of using DFT-calculated IR spectra to determine the presence of target molecules.
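The comparison step can be sketched as a cosine similarity between a measured spectrum and a calculated template sampled on a common wavenumber grid. Toy Gaussian bands stand in for real spectra here, and the artifact corrections the study discusses (frequency scaling, baselines, line broadening) are ignored:

```python
import numpy as np

def template_score(measured, template):
    """Cosine similarity between a measured IR spectrum and a DFT-computed
    template, both sampled on the same wavenumber grid. Minimal sketch of
    the comparison step only."""
    m = measured / np.linalg.norm(measured)
    t = template / np.linalg.norm(template)
    return float(m @ t)

grid = np.linspace(400, 4000, 1800)                  # wavenumbers, cm^-1
def band(center, width):                             # toy Gaussian band
    return np.exp(-0.5 * ((grid - center) / width) ** 2)

template = band(1700, 15) + 0.6 * band(2900, 30)     # stand-in DFT template
measured = (band(1705, 18) + 0.6 * band(2895, 32)    # slightly shifted bands
            + 0.02 * np.random.default_rng(1).standard_normal(grid.size))
score = template_score(measured, template)           # near 1 for a match
```

A high score flags a likely match; in practice the small band shifts modeled above are exactly the DFT artifacts that the matching procedure must tolerate.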
As a new non-destructive biomedical imaging modality, photoacoustic imaging combines the high contrast of optical imaging with the high penetration depth of ultrasonic imaging. It has developed rapidly in recent years, has been widely used in biomedical clinical diagnosis and volumetric imaging, and has attracted the attention of more and more researchers in the biomedical field. In biomedicine, image reconstruction must process a huge amount of acquired information, and how to compress these data without distortion has become an important research topic. In this paper, based on photoacoustic imaging technology and a compressed-sensing reconstruction algorithm, a virtual simulation platform for compressed-sensing photoacoustic tomography is constructed using the k-Wave simulation toolbox. Through this platform, a simulation model of photoacoustic propagation was established, and we analyzed the photoacoustic signals generated by the model. Image reconstruction is then completed using the compressed-sensing reconstruction algorithm. To test the performance of the platform, we reconstructed part of a blood-vessel network image on the simulation platform. The results show that the virtual simulation platform successfully realizes compressed-sensing photoacoustic tomography with a small amount of data but high reconstruction quality, which has practical significance and theoretical value for research on the application of compressed sensing in photoacoustic imaging.
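The reconstruction idea can be illustrated on a toy problem with iterative shrinkage-thresholding (ISTA), a basic compressed-sensing solver. The matrix sizes, sparsity level, and regularization weight are assumptions, and this is not the k-Wave pipeline:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding: recover a sparse x from
    undersampled measurements y = A x. Toy compressed-sensing solver,
    not the paper's reconstruction algorithm."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L                     # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -0.8, 0.5]            # 3-sparse "image"
x_hat = ista(A, A @ x_true, lam=0.02)
```

Even with 2.5 times fewer measurements than unknowns, the sparsity prior lets the solver pin down the signal, which is the sense in which compressed sensing reduces the data a photoacoustic reconstruction must acquire.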