This PDF file contains the front matter associated with SPIE Proceedings Volume 13058, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
We report favorable preliminary findings of work in progress bridging the Artificial Intelligence (AI) gap between bottom-up, data-driven Machine Learning (ML) and top-down, conceptually driven symbolic reasoning. Our overall goal is the automatic generation, maintenance, and utilization of explainable, parsimonious, plausibly causal, probably approximately correct, hybrid symbolic/numeric models of the world, the self, and other agents, for prediction, what-if (counterfactual) analysis, and control. Our earlier Evolutionary Learning with Information Theoretic Evaluation of Ensembles (ELITE2) techniques quantify the strengths of arbitrary multivariate nonlinear statistical dependencies before discovering the forms by which observed variables may drive others. We extend these techniques to apply Granger causality, expressed in terms of conditional Mutual Information (MI), to distinguish causal relationships and find their directions. Because MI can reflect one observable driving a second directly or via a mediator, two observables being driven by a common cause, and so on, we will apply Pearl causality, with its back-door and front-door adjustments and criteria, to untangle the causal graph. Initial efforts verified that our information theoretic indices detect causality in noise-corrupted data despite complex relationships among hidden variables with chaotic dynamics disturbed by process noise. The next step is to apply these information theoretic filters in Genetic Programming (GP) to reduce the population of discovered statistical dependencies to plausibly causal relationships, represented symbolically for use by a reasoning engine in a cognitive architecture. Success could bring broader generalization, using not just learned patterns but learned general principles, enabling AI/ML-based systems to autonomously navigate complex unknown environments and handle “black swans”.
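A minimal sketch of the Granger-style test described above: a plug-in conditional-MI estimator applied to discretized synthetic series in which X drives Y one step later but not vice versa. This is a toy illustration of the idea, not the ELITE2 implementation.

```python
import math, random
from collections import Counter

def cmi(samples):
    """Plug-in estimate of conditional mutual information I(A;B|C), in bits,
    from joint samples of (a, b, c)."""
    abc, ac, bc, c = Counter(samples), Counter(), Counter(), Counter()
    for a_, b_, c_ in samples:
        ac[(a_, c_)] += 1; bc[(b_, c_)] += 1; c[c_] += 1
    n = len(samples)
    return sum(k / n * math.log2(k * c[c_] / (ac[(a_, c_)] * bc[(b_, c_)]))
               for (a_, b_, c_), k in abc.items())

# Synthetic pair: Y copies X's previous value 90% of the time (plus noise).
random.seed(0)
T = 5000
x = [random.randint(0, 1) for _ in range(T)]
y = [0] + [x[t - 1] if random.random() < 0.9 else random.randint(0, 1)
           for t in range(1, T)]

forward = cmi([(y[t], x[t - 1], y[t - 1]) for t in range(1, T)])  # I(Y_t; X_{t-1} | Y_{t-1})
reverse = cmi([(x[t], y[t - 1], x[t - 1]) for t in range(1, T)])  # I(X_t; Y_{t-1} | X_{t-1})
```

The forward index comes out large (X Granger-causes Y) while the reverse stays near the finite-sample noise floor, which is how conditioning on the target's own past distinguishes the direction of the dependency.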
In an era of immense data generation, unlocking the full potential of Machine Learning (ML) hinges on overcoming the limitations posed by the scarcity of labeled data. In Computer Vision (CV) research, algorithm design must consider this shift and focus instead on the abundance of unlabeled imagery. In recent years, there has been a notable trend within the community toward Self-Supervised Learning (SSL) methods that can leverage this untapped data pool. ML practice promotes self-supervised pre-training for generalized feature extraction on a diverse unlabeled dataset followed by supervised transfer learning on a smaller set of labeled, application-specific images. This shift in learning methods elicits conversation about the importance of pre-training data composition for optimizing downstream performance. We evaluate models with varying measures of similarity between pre-training and transfer learning data compositions. Our findings indicate that front-end embeddings sufficiently generalize learned image features independent of data composition, leaving transfer learning to inject the majority of application-specific understanding into the model. Composition may be irrelevant in self-supervised pre-training, suggesting target data is a primary driver of application specificity. Thus, pre-training deep learning models with application-specific data, which is often difficult to acquire, is not necessary for reaching competitive downstream performance. The capability to pre-train on more accessible datasets invites more flexibility in practical deep learning.
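The pre-train-then-transfer protocol the abstract evaluates can be caricatured in a few lines: a frozen front-end embedding (standing in for the self-supervised pre-trained network, here just a fixed feature map) followed by a small head trained only on the labeled, application-specific data. All names and the toy task are illustrative assumptions, not the authors' experimental setup.

```python
import random
random.seed(1)

def frozen_embed(x):
    """Stand-in for a pre-trained, frozen front-end: a fixed feature map."""
    return [x[0], x[1], x[0] * x[1], 1.0]

# Small labeled, application-specific dataset (label = sign of x0 - x1).
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labeled = [(x, 1 if x[0] - x[1] > 0 else -1) for x in data]

# Transfer learning step: train ONLY the head; the embedding never changes.
w = [0.0] * 4
for _ in range(50):
    for x, y in labeled:
        f = frozen_embed(x)
        if y * sum(wi * fi for wi, fi in zip(w, f)) <= 0:
            w = [wi + y * fi for wi, fi in zip(w, f)]

acc = sum(1 for x, y in labeled
          if y * sum(wi * fi for wi, fi in zip(w, frozen_embed(x))) > 0) / len(labeled)
```

The point of the sketch mirrors the abstract's finding: when the frozen features are general enough, the head alone injects the application-specific understanding.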
Given a maze populated with different objects, one may task a robot with a sequential goal-completion task, e.g., 1) pick up a key, then 2) unlock the door, then 3) unlock the treasure chest. A typical Machine Learning (ML) solution would involve a monolithically trained Artificial Neural Network (ANN). However, if the sequence of goals or the goals themselves change, then the ANN must be significantly (or, at worst, completely) retrained. Instead of a monolithic ANN, a modular ML component would be 1) independently optimizable (task-agnostic) and 2) arbitrarily reconfigurable with other ML modules. This work describes a modular, hierarchical ML framework that integrates two emerging ML techniques: 1) Cognitive Map Learners (CML) and 2) Hyperdimensional Computing (HDC). A CML is a collection of three single-layer ANNs (matrices) collaboratively trained to learn the topology of an abstract graph. Here, two CMLs were constructed, one describing locations in 2D physical space and the other the relative distribution of objects found in this space. Each CML node state was encoded as a high-dimensional vector in order to utilize HDC, an ML algebra, for symbolic reasoning over these high-dimensional “symbol” vectors. In this way, each sub-goal above was described by algebraic equations of CML node states. Multiple, independently trained CMLs were subsequently assembled to navigate a maze and solve a sequential goal task. Critically, changes to these goals required only localized changes in the CML-HDC architecture, as opposed to a global ANN retraining scheme. This framework therefore enables a more traditional engineering approach to ML, akin to digital logic design.
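The HDC algebra underlying the sub-goal equations can be sketched with bipolar hypervectors: binding (elementwise multiplication) composes a role with a filler, and because bipolar binding is self-inverse, multiplying again by the role recovers the filler. The symbol names below (`key`, `door`, `has`) are hypothetical stand-ins for CML node-state vectors.

```python
import random
random.seed(2)

D = 2048                                   # hypervector dimensionality
rv   = lambda: [random.choice((-1, 1)) for _ in range(D)]
bind = lambda a, b: [x * y for x, y in zip(a, b)]            # role-filler binding
sim  = lambda a, b: sum(x * y for x, y in zip(a, b)) / D     # cosine-like similarity

key, door, has = rv(), rv(), rv()          # hypothetical "symbol" vectors

state = bind(has, key)                     # encode, e.g., "agent HAS key"
recovered = bind(state, has)               # unbind with the role vector
```

`recovered` equals `key` exactly (similarity 1.0), while its similarity to unrelated symbols like `door` is near zero, which is what makes algebraic queries over bundled goal states reliable in high dimensions.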
This paper introduces a pioneering approach for controlling a unilateral lower extremity exoskeleton designed for rehabilitation and enhancing the quality of life for individuals with neuromuscular weakness of the lower limbs. At the core of our methodology is the integration of Long Short-Term Memory (LSTM) networks with Proximal Policy Optimization (PPO) models, utilizing a deep reinforcement learning framework to interpret and predict user movement intentions in real time. By harnessing sensor fusion that combines surface electromyography (sEMG) and Inertial Measurement Units (IMU) from sensor arrays placed around the quadriceps and gastrocnemius muscles, our system employs an adaptive nonlinear sliding mode control with Pneumatic Artificial Muscles (PAMs), thereby directing the exoskeleton's movement and positioning. The LSTM network processes temporal sequences of sensor data to capture the dynamics of human motion, while the PPO model optimizes the control policy to ensure smooth and responsive movements aligned with the user's intentions. Focusing initially on basic maneuvers integral to Activities of Daily Living (ADL), our system demonstrates promising preliminary results in mimicking natural limb movements, laying the groundwork for future clinical applications. This paper specifically delves into the utilization of the LSTM-PPO framework for controlling an avatar prior to testing the exoskeleton, representing a significant step towards realizing a responsive and intuitive exoskeleton control system.
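The PPO half of the framework optimizes the standard clipped surrogate objective; one term of it, for a single (probability ratio, advantage) pair, can be computed as follows. This is the textbook PPO formula, not the authors' controller.

```python
def ppo_clip_term(ratio, advantage, eps=0.2):
    """One term of PPO's clipped surrogate objective (to be maximized):
    min(r*A, clip(r, 1-eps, 1+eps)*A). Clipping keeps each policy update small."""
    clipped_ratio = max(min(ratio, 1 + eps), 1 - eps)
    return min(ratio * advantage, clipped_ratio * advantage)
```

For a positive advantage the clip caps how much credit a large ratio can claim (e.g. ratio 1.5 is treated as 1.2); for a negative advantage the min takes the pessimistic branch, which is what keeps updates to the control policy conservative and the resulting exoskeleton motion smooth.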
This paper presents a pioneering first-generation dive-mask-integrated eye tracking system for underwater health and cognition monitoring. Building on this foundation, we are exploring its potential for enhancing human-machine teaming in low-visibility, low-communication scenarios. By harnessing eye metrics to inform decision field theory, we aim to revolutionize task allocation in extreme environments, prioritizing safety and efficiency.
The Label-Diffusion-LIDAR-Segmentation (LDLS) algorithm uses multi-modal data for enhanced inference of environmental categories. The algorithm segments the Red-Green-Blue (RGB) channels and maps the results to the LIDAR point cloud using matrix calculations to reduce noise. Recent research has developed custom optimization techniques using quantization to accelerate the 3D object detection using LDLS in robotic systems. These optimizations achieve a 3x speedup over the original algorithm, making it possible to deploy the algorithm in real-world applications. The optimizations include quantization for the segmentation inference as well as matrix optimizations for the label diffusion. We will present our results, compare them with the baseline, and discuss their significance in achieving real-time object detection in resource-constrained environments.
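The quantization side of the speedup can be illustrated with a uniform affine quantizer: floats are mapped to the int8 range and back, and the round-trip error is bounded by half a quantization step. This is a generic sketch of the technique, not the authors' optimized LDLS kernels.

```python
def quantize(xs):
    """Uniform affine quantization of floats to int8-range integers."""
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / 255 or 1.0                    # guard the all-equal case
    q = [round((x - lo) / scale) - 128 for x in xs]   # ints in [-128, 127]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [(v + 128) * scale + lo for v in q]

xs = [0.0, 0.5, 1.0, -1.0, 0.25]
q, scale, lo = quantize(xs)
restored = dequantize(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(xs, restored))
```

Running inference on the 8-bit representation is what buys the speedup; the bounded reconstruction error (at most `scale / 2` per element) is why segmentation quality survives it.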
The advancement of open architecture ecosystems is fundamentally dependent on the interoperability, scalability, and adaptability of their constituent elements. As Machine Learning (ML) systems become increasingly integral to these ecosystems, the need for a systematic approach to engineer, deploy, and re-engineer them grows. This paper presents a novel modeling approach based on recently published, formal, systems-theoretic models of learning systems. These models serve dual purposes: first, they give a theoretical grounding to standards that govern the architecture, functionality, and performance criteria for ML systems; second, they allow for requirements to be specified at various levels of abstraction to ensure the systems are intrinsically aligned with the overall objectives of the open architecture ecosystem they belong to. Through the proposed modeling approach, we demonstrate how the adoption of standardized models can significantly enhance interoperability between disparate machine learning systems and other architectural components. Further, we relate our framework to on-going efforts such as Open Neural Network Exchange (ONNX). We identify how our approach can be used to address limitations in government acquisition processes for ML systems. The proposed systems-theoretic framework provides a structured methodology that contributes to the foundational building blocks for open architecture ecosystems for ML systems, thereby advancing the state-of-the-art in complex system integration.
Large Language Models (LLMs) provide new capabilities to rapidly reform, regroup, and reskill for new missions and opportunities, and to respond to an ever-changing operational landscape. Agile contracts can enable a larger flow of value in new development contexts. These methods of engagement and partnership enable the establishment of high-performing teams through the forming, storming, norming, and performing stages, which then inform liberating structures that exceed traditional rigid hierarchical models or even established mission engineering methods. Generative AI based on LLMs, coupled with modern agile model-based engineering in design, allows for automated requirements decomposition trained in the lingua franca of the development team and translation to the dialects of other domain disciplines, with the business acumen afforded by proven approaches in industry. Cutting-edge AI automations to track and adapt knowledge, skills, and abilities across ever-changing jobs and roles will be illustrated using prevailing architecture frameworks, model-based systems engineering, simulation, and decision-making-assisted approaches to emergent objectives.
Recent progress in generative AI, including Large Language Models (LLMs) like ChatGPT, has opened up significant opportunities in fields ranging from natural language processing to knowledge discovery and data mining. However, there is also a growing awareness that these models can be prone to problems such as making information up, or ‘hallucinations’, and faulty reasoning on seemingly simple problems. Because of the popularity of models like ChatGPT, both academic scholars and citizen scientists have documented hallucinations of several different types and severities. Despite this body of work, a formal model for describing and representing these hallucinations (with relevant metadata) at a fine-grained level is still lacking. In this paper, we address this gap by presenting the Hallucination Ontology, or HALO, a formal, extensible ontology written in OWL that currently offers support for six different types of hallucinations known to arise in LLMs, along with support for provenance and experimental metadata. We also collect and publish a dataset containing hallucinations that we inductively gathered across multiple independent Web sources, and we show that HALO can be successfully used to model this dataset and answer competency questions.
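An ontology like HALO is ultimately queried as a set of subject-predicate-object triples. The sketch below shows the shape of such a store and a competency-question query over it; the class and property names (`halo:FactualHallucination`, `halo:generatedBy`, etc.) are hypothetical stand-ins for illustration, not HALO's published vocabulary.

```python
# Toy triple store: how an OWL ontology's instance data might look once loaded.
triples = {
    ("ex:h1", "rdf:type",           "halo:FactualHallucination"),
    ("ex:h1", "halo:generatedBy",   "ex:chatgpt"),
    ("ex:h1", "prov:wasQuotedFrom", "ex:webSource42"),
    ("ex:h2", "rdf:type",           "halo:ReasoningHallucination"),
    ("ex:h2", "halo:generatedBy",   "ex:chatgpt"),
}

def query(s=None, p=None, o=None):
    """Match triples against an (s, p, o) pattern; None acts as a wildcard."""
    return [(a, b, c) for a, b, c in triples
            if s in (None, a) and p in (None, b) and o in (None, c)]

# Competency question: which hallucination instances did a given model produce?
produced = {s for s, _, _ in query(p="halo:generatedBy", o="ex:chatgpt")}
```

A real deployment would use an RDF library and SPARQL, but the pattern-matching semantics are the same.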
We optimized and deployed the adaptive framework Virtuoso, which can maintain real-time object detection even under high-contention scenarios. The original Virtuoso framework uses an adaptive algorithm for the detection frame followed by a low-cost algorithm for the tracker frame, which uses down-sampled images to reduce computation. One of our optimizations includes detaching the single synchronous thread for detection and tracking into two parallel threads. This multi-threaded implementation allows computationally high-cost detection algorithms to be used while still maintaining real-time output from the tracker thread. Another optimization we developed uses multiple down-sampled images to initialize each tracker based on the size of the input box; the multiple down-sampled images allow each tracker to choose the optimal image size for the box it is tracking, rather than a single down-sampled image being used for all trackers.
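The detector/tracker decoupling can be sketched with a producer-consumer pair: a slow detection thread publishes results to a queue, while the fast tracker loop polls it without blocking and keeps producing output at its own rate. The timings and payloads below are illustrative assumptions, not Virtuoso's actual pipeline.

```python
import threading, queue, time

detections = queue.Queue()

def detector(frames):
    """Slow, high-cost detection runs in its own thread."""
    for f in frames:
        time.sleep(0.01)                    # stand-in for heavy inference
        detections.put(("boxes", f))

frames = list(range(5))
t = threading.Thread(target=detector, args=(frames,))
t.start()

tracked, latest = [], None
while len(tracked) < 20:                    # fast tracker loop keeps real-time pace
    try:
        latest = detections.get_nowait()    # refresh when a new detection lands
    except queue.Empty:
        pass                                # otherwise keep tracking with the last one
    tracked.append(latest)
    time.sleep(0.002)
t.join()
```

The tracker never waits on the detector, which is exactly how the split preserves real-time output while permitting a heavier detection model.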
The Subspace Learning Machine (SLM) has been a powerful tool for Machine Learning (ML), and it has been successfully applied to the task of image classification. Recently, a novel SLM method was proposed that (i) projects high-dimensional feature vectors into a 1D feature subspace and (ii) partitions it into two disjoint sets. SLM with soft partitioning (SLM/SP) extends this approach by learning an adaptive Soft Decision Tree (SDT) structure using local greedy subspace partitioning. After the stopping criteria are met for all child nodes and the tree structure is determined, the method updates all Projection Vectors (PVs) globally. This enables efficient training, high classification accuracy, and a small model size. The method is applied to experimental data to demonstrate its performance as a lightweight, high-performance classification approach.
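The core SLM step, projecting feature vectors onto a direction and splitting the resulting 1D subspace at the best threshold, can be sketched in a few lines (hard partitioning only; the soft-partitioning and global PV update of SLM/SP are not shown).

```python
def best_split(points, labels, direction):
    """Project 2D points onto `direction`, then scan candidate thresholds and
    return (most samples correctly separated, best threshold)."""
    proj = sorted(zip((x * direction[0] + y * direction[1] for x, y in points),
                      labels))
    best = (0, None)
    for i in range(1, len(proj)):
        thr = (proj[i - 1][0] + proj[i][0]) / 2        # midpoint between neighbors
        left  = [l for v, l in proj if v <  thr]
        right = [l for v, l in proj if v >= thr]
        correct = (max(left.count(0), left.count(1))
                   + max(right.count(0), right.count(1)))
        if correct > best[0]:
            best = (correct, thr)
    return best

pts    = [(0, 0), (1, 0), (0, 1), (3, 3), (4, 3), (3, 4)]
labels = [0, 0, 0, 1, 1, 1]
correct, thr = best_split(pts, labels, direction=(1, 1))
```

Here the projection direction (1, 1) cleanly separates the two clusters, so the scan finds a threshold classifying all six samples correctly; SLM's contribution is learning such directions rather than assuming them.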
In previous work we introduced our proposed architecture that connects a ‘Real’ and an ‘Imaginary’ Neural Network. The ‘Real’ portion is represented by exploiting Striatal Beat Frequencies in an EEG with the patented Single-Period Single-Frequency (SPSF) method, and the ‘Imaginary’ portion is represented by a convolutional neural network transformed into bi-directional associative memory matrices. We demonstrated that we could interconnect, i.e., bridge, the intermediate layers of two broken CNNs, both of which were trained for object detection, and still obtain a good prediction. In this work we use a dual-sensory CNN implementation of speech and object detection, and we incorporate Neural Decoding into the EEG SPSF method to emulate how to circumvent the broken neural networks in a human-computer interface situation.
In the rapidly evolving landscape of artificial intelligence, Large Language Models (LLMs) and Generative AI (GenAI) have emerged as front-runners in shaping the next generation of intelligent applications, where human-like data generation is necessary. While their capabilities have shown transformative potential in centralized computing environments, there is a growing shift towards decentralized edge AI models, where computations are orchestrated closer to data sources to provide immediate insights, faster response times, and localized intelligence without the overhead of cloud communication. For latency-critical applications like autonomous vehicle driving, GenAI at the edge is vital, allowing vehicles to instantly generate and adapt driving strategies based on ever-changing road conditions and traffic patterns. In this paper, we propose a latency-aware service placement approach designed for the seamless deployment of GenAI services on edge cloudlets. We represent a GenAI service as a Directed Acyclic Graph (DAG), where GenAI operations are the nodes and the dependencies between these operations are the edges. We propose an Ant Colony Optimization approach that guides the placement of GenAI services at the edge based on the capabilities of cloudlets and network conditions. Through experimental validation, we achieve notable GenAI performance at the edge with lower latency and efficient resource utilization. This advancement is expected to revolutionize and innovate in the field of GenAI, paving the way for more efficient and transformative applications at the edge.
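The DAG representation and the placement decision can be sketched as follows. A full ant colony search is beyond an abstract, so the sketch uses a greedy baseline (fastest cloudlet per operation) over a topological order; the pipeline, operation names, and latency figures are hypothetical.

```python
# op -> list of dependencies for a hypothetical GenAI pipeline DAG
dag = {"tokenize": [], "encode": ["tokenize"], "generate": ["encode"],
       "rank": ["encode"], "respond": ["generate", "rank"]}

def topo(dag):
    """Dependency-respecting execution order of the GenAI operations."""
    order, seen = [], set()
    def visit(n):
        if n not in seen:
            seen.add(n)
            for dep in dag[n]:
                visit(dep)
            order.append(n)
    for n in dag:
        visit(n)
    return order

# Hypothetical mean per-op latency (ms) offered by each cloudlet. An ACO would
# search placements under network conditions; here greedy picks the fastest.
cloudlets = {"edge-a": 5.0, "edge-b": 12.0}
order = topo(dag)
placement = {op: min(cloudlets, key=cloudlets.get) for op in order}
total_latency = sum(cloudlets[placement[op]] for op in order)
```

The value an ACO adds over this baseline is exploring placements where capacity limits or inter-cloudlet link delays make the per-op greedy choice globally suboptimal.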
In a 10 August 2023 memo to Senior Department of Defense (DoD) Leaders, the Deputy Secretary of Defense outlined the roles and responsibilities of the DoD's newly established Chief Digital and Artificial Intelligence Officer Generative Artificial Intelligence and Large Language Models Task Force, Task Force Lima. The AI and LLM Task Force is charged with focusing the DoD's exploration and responsible fielding of generative AI and LLM capabilities. While AI and LLMs have revolutionized natural language processing in commercial applications, significant concerns must be addressed before the technology is fully deployed within the DoD. This study will explore the current biases in training data, ethical violations, security breaches, potential misuse, and challenges with AI and LLM interpretability. Industry, academic, and government partnerships are needed to ensure a responsible and equitable deployment of LLMs that harnesses the full potential of these capabilities in a manner that is secure and well understood by the end-user community.
Decision Advantage is a goal in current and future military operations. Achieving such an advantage can be done by degrading adversaries’ decision-making ability through the imposition of complexity into the decision problems they have to solve. This paper describes mathematical techniques for quantifying decision complexity in Integrated Air Defense Systems (IADS). The methods are based on graph properties derived from the defender’s IADS System of Systems description and the attacker’s Course of Action (COA) plans. Multiple plans can be compared quantitatively with respect to the decision complexity they impose on the defender, using metrics that are semantically meaningful to planners. The metrics developed are able to expose subtle ways that COAs impose complexity on an adversary that may not be obvious to an operational planner at first glance.
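One simple graph-derived metric of the kind described, the total bits of choice a COA forces on the defender, summed over decision points, can be sketched as follows. The metric and the toy decision graphs are illustrative assumptions, not the paper's actual formulations.

```python
import math

# Hypothetical defender decision graphs: each node is a decision point and
# its edges are the options a given attacker COA forces the defender to weigh.
coa_a = {"detect": ["engage", "hand-off"],
         "engage": [], "hand-off": []}
coa_b = {"detect": ["engage", "hand-off", "reposition", "hold"],
         "engage": [], "hand-off": [], "reposition": [], "hold": []}

def branch_entropy(graph):
    """Sum of log2(number of options) over decision points with options:
    a crude measure of the choice complexity imposed on the defender."""
    return sum(math.log2(len(opts)) for opts in graph.values() if opts)

complexity_a = branch_entropy(coa_a)
complexity_b = branch_entropy(coa_b)
```

Under this metric, COA B (four options at the detect node) imposes twice the choice complexity of COA A (two options), giving planners a quantitative basis for comparing plans.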
AI-enabled capabilities are reaching the requisite level of maturity to be deployed in the real world. Yet, the ability of these systems to always make correct or safe decisions is a constant source of criticism and reluctance to use them. One way of addressing these concerns is to leverage AI control systems alongside and in support of human decisions, relying on the AI control system in safe situations while calling on a human co-decider for critical situations. Additionally, by leveraging an AI control system built specifically to assist in joint human/machine decisions, the opportunity naturally arises to then use human interactions to continuously improve the AI control system’s accuracy and robustness. We extend a methodology for Adversarial Explanations (AE) to state-of-the-art reinforcement learning frameworks, including MuZero. Multiple improvements to the base agent architecture are proposed. We demonstrate how this technology has two applications: for intelligent decision tools and to enhance training / learning frameworks. In a decision support context, adversarial explanations help a user make the correct decision by highlighting those contextual factors that would need to change for a different AI-recommended decision. As another benefit of adversarial explanations, we show that the learned AI control system demonstrates robustness against adversarial tampering. Additionally, we supplement AE by introducing Strategically Similar Autoencoders (SSAs) to help users identify and understand all salient factors being considered by the AI system. In a training / learning framework, this technology can improve both the AI’s decisions and explanations through human interaction. Finally, to identify when AI decisions would most benefit from human oversight, we tie this combined system to our prior art on statistically verified analyses of the criticality of decisions at any point in time.
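The essence of an adversarial explanation, "what contextual factors would need to change for a different AI-recommended decision," can be shown in closed form for a linear decision rule: the smallest L2 perturbation that crosses the boundary points along the weight vector. This is a linear-model toy illustrating the concept, not the authors' MuZero-scale method.

```python
def adversarial_explanation(w, b, x):
    """Smallest L2 change to x that flips the linear decision w.x + b >= 0.
    The per-feature perturbation itself highlights which factors must change."""
    margin = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm2 = sum(wi * wi for wi in w)
    step = -(margin / norm2) * 1.01       # overshoot slightly to cross the boundary
    return [xi + step * wi for wi, xi in zip(w, x)]

w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]                            # current decision: 2*1 - 0.5 = 1.5 >= 0
x_adv = adversarial_explanation(w, b, x)
flipped_margin = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
```

Reading the perturbation `x_adv - x` tells the user that feature 0 must fall (weight +2) and feature 1 must rise (weight -1), and in what proportion, for the recommendation to change; deep models require iterative search for the same quantity.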
Graph Neural Networks (GNN) were originally developed to infer relationships between objects in complex graph environments such as social networks. However, they have recently been applied to other domains which naturally support graph expression, such as hardware and software analysis. We propose to extend the application of GNNs to datasets which contain a temporal component, thus enabling GNN inference of event-driven situations involving the radio frequency (RF) spectrum. Post-battle analysis can train a GNN to identify individual subgraphs representing sequences of events. Trained GNNs can then be used in wartime to infer a larger situation as a series of subgraphs are identified.
Saltzer and Schroeder’s security principles define complete mediation as the verification of all access rights and authority. Conventional architectures focus on speed at all costs, using predictors, caches, out-of-order execution, speculative execution, etc. A new approach is required to overcome the limitations of conventional architectures: the clock speed differential between a microprocessor and memory, and the resulting self-imposed, never-ending cyber security problems. The Aberdeen Architecture uses the cache bank pipeline memory architecture from the Redstone Architecture to overcome some of the speed differential between a microprocessor and memory. The trusted computing base uses hardware state machine monitors (hardware-based nano-operating system kernels). The state machine monitors use register and memory tags to manage and track information flows during instruction execution. The Aberdeen Architecture tracks and monitors four information flows: data flow integrity, memory access flow integrity, control flow integrity, and instruction execution flow integrity. All information flows are data flow driven. The state machine monitors completely virtualize the execution pipeline. The Aberdeen Architecture achieves near-complete mediation for instruction execution. This paper focuses on data flow integrity and memory access flow integrity.
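The tag-propagation idea behind data-flow integrity can be sketched in software: every register carries a tag alongside its value, and each instruction joins the tags of its sources into its destination. The two-level trusted/untrusted lattice and register names here are hypothetical; the Aberdeen Architecture's monitors are hardware state machines, not Python.

```python
# Each register holds (value, tag); instructions propagate tags with data.
regs = {"r1": (42, "untrusted"), "r2": (7, "trusted"), "r3": (0, "trusted")}

def execute(op, dst, src_a, src_b):
    """Execute one ALU op while tracking data flow: the destination's tag is
    the join of the source tags (any untrusted input taints the result)."""
    va, ta = regs[src_a]
    vb, tb = regs[src_b]
    tag = "untrusted" if "untrusted" in (ta, tb) else "trusted"
    if op == "add":
        regs[dst] = (va + vb, tag)

execute("add", "r3", "r1", "r2")   # r3 inherits the untrusted taint from r1
value, tag = regs["r3"]
```

A monitor enforcing data-flow integrity would consult these tags before allowing a sensitive use (e.g., an untrusted value reaching a jump target), which is how near-complete mediation of instruction execution is approached.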
In this paper, we systematically investigate the use of delays to optimize throughput for the working Maximum-On-Ground (MOG) problem space. MOG optimization refers to the management of transport aircraft in and around an airfield; the working MOG refers to fulfilling the servicing requirements of the aircraft. Effective and efficient daily MOG management enables the U.S. Air Force (USAF) Air Mobility Command (AMC) to rapidly deploy and sustain equipment and personnel anywhere in the world. However, the seemingly solved problem can quickly grow out of hand when the number of interruptions exceeds a certain point; this is due to the combinatorial nature of the scheduling problem, where the order and the mission dependencies matter. The opportunistic delay optimization explores the trade-off space between efficiency (throughput maximization) and resilience to schedule disruptions.
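The efficiency-versus-resilience trade-off can be illustrated with a deliberately simple serial-servicing model: planned slack between aircraft lengthens the nominal schedule but absorbs a disruption instead of letting it cascade. The model below (one servicing spot, fixed service time, a single delayed aircraft) is an illustrative assumption, not the paper's formulation.

```python
def last_finish(service, gap, delays):
    """Single servicing spot: aircraft i is planned to start at i*(service+gap)
    but becomes ready delays[i] hours late; returns the last finish time."""
    t = 0.0
    for i, d in enumerate(delays):
        planned_start = i * (service + gap)
        t = max(t, planned_start + d) + service   # wait for the spot AND the aircraft
    return t

delays = [0, 3, 0, 0]   # a single 3-hour disruption to the second aircraft
lateness = {gap: last_finish(2, gap, delays) - last_finish(2, gap, [0, 0, 0, 0])
            for gap in (0, 1)}
```

With no slack (gap 0) the 3-hour disruption propagates in full to the end of the schedule; with one hour of slack per aircraft, the propagated lateness shrinks to one hour, purchased with a longer (lower-throughput) nominal schedule, which is exactly the trade-off space being explored.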
This abstract outlines two significant innovations in AI and cybersecurity education within the "Deep HoriXons" 3D virtual campus, addressing the urgent need for skilled professionals in these domains. First, the paper introduces "Deep HoriXons," an immersive 3D virtual learning environment designed to democratize and enhance the educational experience for AI and cybersecurity. This innovation is notable for its global accessibility and ability to simulate real-world scenarios, providing an interactive platform for experiential learning, which is a marked departure from traditional educational models. The second innovation discussed is the strategic integration of ChatGPT as a digital educator and tutor within this virtual environment. ChatGPT's role is pivotal in offering tailored, real-time educational support, making complex AI and cybersecurity concepts more accessible and engaging for learners. This application of ChatGPT is an innovation worth noting for its ability to adapt to individual learning styles, provide interactive scenario-based learning, and support a deeper understanding of technical subjects through dynamic, responsive interaction. Together, these innovations represent a significant advancement in the field of AI and cybersecurity education, addressing the critical talent shortage by making high-quality, interactive learning experiences accessible on a global scale. The paper highlights the importance of these innovations in creating a skilled workforce capable of tackling the evolving challenges in AI and cybersecurity, underscoring the need for ongoing research and development in this area.
When you think of encryption standards, you may think of the Data Encryption Standard, the Advanced Encryption Standard, or Elliptic Curve Cryptography. However, a newer approach, called homomorphic encryption, is being researched and put into use. Homomorphic encryption is a cryptographic technique that has the potential to significantly impact the field of Artificial Intelligence (AI). It allows data to be processed in encrypted form without first decrypting it, thus preserving privacy and security while still enabling meaningful computation. Homomorphic encryption can also be applied in federated learning, a decentralized approach to machine learning in which multiple parties collaborate to train a model without sharing their individual data directly. In this paper, we first discuss what homomorphic encryption is and then explore how it can be used to ensure that data remains encrypted during model updates and aggregation, enhancing privacy.
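To make the additive property concrete, here is a minimal sketch of the Paillier cryptosystem, one well-known additively homomorphic scheme. The tiny key sizes are for illustration only and are not drawn from the paper.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic, so
# E(m1) * E(m2) mod n^2 decrypts to (m1 + m2) mod n.
# Tiny primes for illustration only -- real deployments use >= 2048-bit keys.
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # modular inverse of lam mod n

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

c1, c2 = encrypt(41), encrypt(7)
assert decrypt((c1 * c2) % n2) == 48   # addition carried out under encryption
```

A federated learning aggregator could sum encrypted client updates this way and decrypt only the total, never an individual contribution.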
The Internet of Things (IoT) and other emerging ubiquitous technologies are supporting the rapid spread of smart systems, which has underlined the need for safe, open, and decentralized data storage solutions. With its inherent decentralization and immutability, blockchain presents itself as a potential answer to these requirements. However, the practicality of incorporating blockchain into real-time sensor data storage systems demands in-depth examination. While blockchain promises unmatched data security and auditability, some of its intrinsic qualities, namely scalability restrictions, transactional delays, and escalating storage demands, impede its seamless deployment in the high-frequency, voluminous data contexts typical of real-time sensors. This paper launches a methodical investigation into these difficulties, illuminating their underlying causes, likely effects, and potential countermeasures. In addition, we present a novel, pragmatic experimental setup and analysis of blockchain for smart system applications, with an extended discussion of the benefits and disadvantages of deploying blockchain-based solutions for smart system ecosystems.
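As a concrete illustration of the immutability property discussed above, here is a minimal hash-chained sensor log sketch; consensus, networking, and Merkle trees are omitted, and the block fields are assumptions rather than the paper's design.

```python
import hashlib
import json
import time

# Minimal hash-chained log illustrating blockchain-style immutability for
# sensor readings: each block commits to the previous block's hash.
def make_block(index, readings, prev_hash):
    block = {"index": index, "timestamp": time.time(),
             "readings": readings, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != prev["hash"]:
            return False  # a tampered block breaks every later link
        body = {k: v for k, v in curr.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if curr["hash"] != hashlib.sha256(payload).hexdigest():
            return False
    return True

genesis = make_block(0, [], "0" * 64)
chain = [genesis]
chain.append(make_block(1, [{"sensor": "temp", "value": 21.4}], genesis["hash"]))
assert chain_is_valid(chain)
chain[1]["readings"][0]["value"] = 99.9   # tamper with a stored reading
assert not chain_is_valid(chain)
```

The storage-growth problem the abstract raises is visible even here: every reading is retained forever, which is exactly what strains high-frequency sensor deployments.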
A number of research papers have used the architecture of adversarial neural networks to show that communication between two neural networks based on a synchronized input can be achieved, and that without knowledge of this synchronized information these systems cannot be breached. In this paper we evaluate these adversarial neural network architectures when a third party gains access to a partial secret key or a noisy secret key, or has knowledge of the loss function, the loss values themselves, or the activation functions used during training of the encryption layers. We explore the cryptanalysis side, focusing on the vulnerabilities a neural-network-based cryptography system can face. These findings can be used in the future to improve current neural-network-based cryptography architectures. We show that while the encryption key is necessary to decrypt messages in the neural network domain, adversarial neural networks can occasionally decrypt messages or raise a concern that requires further training.
ISAAC is a 3D-printed pneumatic spacecraft for attitude control system development in a 3-axis gimbal ring. This allows for simulated free-space movement of a cold gas thruster-controlled probe in a controlled test environment. The purpose of this open-source control platform is to allow students, professors, and researchers to test their control algorithms on real hardware in real-time. The end goal is to have a website allowing anyone to upload their code and watch it run via live stream. The spacecraft uses a pneumatic system to mimic cold gas thrusters by using compressed air as a means of propulsion. The delivery system uses solenoids to control the thrust, stabilizing the craft. The hardware is simple and consists of custom Arduino Printed Circuit Boards (PCBs), a Raspberry Pi, an Inertial Measurement Unit (IMU) for total orientation data, and two LiPo batteries. The craft is entirely 3D printed, including the mounts for the components, to be accessible for future research and upgrades. The attitude controller will be integrated into the website easycontrols.org, which will allow anyone interested, both students and researchers alike, to upload their Python control algorithm and watch it run on hardware in real-time. The website will have built-in functions and examples, allowing the user to create their algorithm easily. A proof of concept of this system has been the application of a sliding mode controller in one axis of the gimbal rings. Future work can include the application of more modern control methods for students and facilities to display and follow.
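The single-axis sliding mode proof of concept can be sketched as follows; the inertia, thruster torque, and surface gain are assumed values, not the platform's actual parameters.

```python
import math

# Single-axis attitude simulation with a bang-bang sliding mode controller,
# mirroring the one-axis proof of concept described above. On-off solenoid
# thrusters can only apply +T, -T, or zero torque.
I = 0.05            # moment of inertia about the gimbal axis, kg*m^2 (assumed)
T = 0.02            # thruster torque magnitude, N*m (assumed)
lam = 2.0           # sliding surface slope (assumed)
dt = 0.001
theta = math.radians(30.0)   # initial pointing error, rad
omega = 0.0                  # initial body rate, rad/s
for _ in range(20000):       # 20 s of simulated time
    s = omega + lam * theta  # sliding surface s = rate + lam * error
    u = -T if s > 0 else T if s < 0 else 0.0   # on-off thruster firing
    omega += (u / I) * dt
    theta += omega * dt
assert abs(math.degrees(theta)) < 1.0   # error driven near zero, with chatter
```

Once the state reaches the surface s = 0, the error decays roughly as e^(-lam*t), which is what makes the surface slope the main tuning knob.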
Zero Trust security is being adopted across companies and government organizations to continually verify cybersecurity requirements. This paper investigates architecture development methodologies that can produce a Zero Trust architecture implementing the missions of an enterprise. An enterprise architecture depends on an organization’s strategic priorities and should reflect the organization’s critical decisions. These decisions can be evaluated according to criteria such as interoperability, operations tempo, and cyber-resilience to failures. Zero Trust architectures must define alternatives tailored to missions. An enterprise architecture can then be developed that describes the context, operations, and resources associated with a strategic implementation decision. A multi-criteria decision-making method such as the Analytic Hierarchy Process can help guide the development and implementation of a Zero Trust strategy. Zero Trust criteria are defined according to quality attributes associated with the DoD Reference Architecture Pillars, and security solutions are evaluated against how well they meet these criteria.
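As a sketch of how the Analytic Hierarchy Process can rank such criteria, the following derives priority weights from a pairwise comparison matrix; the comparison values are illustrative assumptions, not DoD guidance.

```python
import math

# Analytic Hierarchy Process sketch: derive priority weights for three
# Zero Trust evaluation criteria from a pairwise comparison matrix.
criteria = ["interoperability", "operations tempo", "cyber-resilience"]
# A[i][j] = how much more important criterion i is than j (Saaty 1-9 scale);
# the judgments below are made up for illustration.
A = [[1.0,   3.0, 0.5],
     [1/3.0, 1.0, 0.25],
     [2.0,   4.0, 1.0]]

# Approximate the principal eigenvector by normalized geometric means of rows.
geo = [math.prod(row) ** (1 / len(row)) for row in A]
total = sum(geo)
weights = {c: g / total for c, g in zip(criteria, geo)}
assert abs(sum(weights.values()) - 1.0) < 1e-9
# The criterion favored in the pairwise judgments receives the top weight.
assert max(weights, key=weights.get) == "cyber-resilience"
```

With the weights in hand, candidate security solutions can be scored per criterion and ranked by weighted sum.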
Neuromorphic computing is of high importance in Artificial Intelligence (AI) and Machine Learning (ML) because it sidesteps challenges inherent to running neural-inspired computations on modern computing systems. Throughout the development history of neuromorphic computing, Compute-In-Memory (CIM) with emerging memory technologies, such as Resistive Random-Access Memory (RRAM), has offered advantages by performing tasks in place, in the memory itself, leading to significant improvements in architectural complexity, data throughput, area density, and energy efficiency. In this article, in-house research efforts in designing and applying innovative memristive circuitry for AI/ML-related workloads are showcased. Specifically, Multiply-and-Accumulate (MAC) operations and classification tasks can be performed on a crossbar array made of 1-transistor-1-RRAM (1T1R) cells. With the same circuit structure, flow-based Boolean arithmetic is made possible by directing the paths of current flow through the crossbar. Furthermore, high-precision operations for in-situ training can be realized with an enhanced crossbar array made of 6-transistor-1-RRAM (6T1R) cells alongside a bidirectional current control mechanism. Overall, our neuromorphic solutions, optimized for AI-enabled cognitive operations, offer faster, more robust, and more efficient decision-making to support future battlespaces.
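The in-memory MAC operation described above can be modeled in a few lines: by Ohm's and Kirchhoff's laws, each column current is the dot product of the input voltage vector and that column's conductances. The values below are illustrative assumptions.

```python
# Analog Multiply-and-Accumulate on a 1T1R crossbar, modeled with Ohm's and
# Kirchhoff's laws: column current I_j = sum_i V_i * G_ij. The voltage and
# conductance values are made up; real RRAM cells add noise and limited
# precision, which the 6T1R design above is meant to mitigate.
voltages = [0.2, 0.0, 0.1, 0.3]   # input vector applied to the rows (V)
conductances = [                  # G_ij in siemens; column j = one output
    [1e-4, 2e-4],
    [5e-5, 1e-4],
    [2e-4, 5e-5],
    [1e-4, 1e-4],
]
column_currents = [
    sum(v * row[j] for v, row in zip(voltages, conductances))
    for j in range(2)
]
# Each column current is a dot product computed "in place" in the array.
expected = [7e-5, 7.5e-5]
assert all(abs(i - e) < 1e-12 for i, e in zip(column_currents, expected))
```

This is why a crossbar performs a full matrix-vector product in one read step: every row-column pair multiplies and every column wire sums, simultaneously.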
The OpenMutt platform is a modular, robotic quadruped for use as a testbed for a variety of research opportunities to increase multidisciplinary research. The OpenMutt quadruped allows for a low-cost testbed for actuator drive design, biomimicry, and instrumentation. The current design is intended to be modular and facilitate different research disciplines with the usage of a robust 13:1 cycloidal actuator, modular feet, and multiple mounting points for the investigation of various sensing modalities and hardware packages.
In traditional classroom settings, spacecraft attitude dynamics and controls are typically presented through 2-D illustrations of complex 3-D dynamics. This often results in students finding it challenging to bridge the gap between theoretical physics and its practical, real-world applications. To address this challenge, our project aims to design, develop, and manufacture CubeSat controls testbeds. These testbeds are equipped with reaction wheels to enable autonomous attitude control system applications. Notably, each testbed will incorporate three distinct reaction wheels, each mounted orthogonally. This arrangement ensures precise attitude control in all three degrees of freedom. The versatility of these CubeSat testbeds allows users to explore and implement a broad range of control systems, from classical PID controllers, state-space control methods, adaptive controllers, and sliding mode control to more advanced techniques such as model predictive control and robust control methods. The platform can serve both as an educational tool for students and a research apparatus for professionals. The ultimate vision for the CubeSat Reaction Wheel Attitude Control Platform is its seamless integration into a dedicated website called Easy Controls. Here, users worldwide can upload their control algorithms. They can then view a live stream of their algorithm being tested and operationalized in real-time on the physical hardware. This platform not only demystifies spacecraft control dynamics for learners but also fosters a global community of innovators collaborating and refining their control algorithms.
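A minimal sketch of the classical PID option for one reaction wheel axis; the gains, inertia, and single-axis model are assumptions, not the testbed's actual parameters.

```python
# Discrete PID attitude controller for one reaction wheel axis, the simplest
# of the control families listed above. Gains and inertia are assumed.
I = 0.01                      # inertia about the wheel axis, kg*m^2
kp, ki, kd = 0.05, 0.001, 0.08
dt = 0.01
target = 0.0                  # desired attitude, rad
theta, omega = 0.5, 0.0       # initial attitude and body rate
integral = 0.0
prev_err = target - theta
for _ in range(6000):         # 60 s of simulated time
    err = target - theta
    integral += err * dt
    derivative = (err - prev_err) / dt
    torque = kp * err + ki * integral + kd * derivative
    prev_err = err
    omega += (torque / I) * dt   # wheel reaction torque acts on the body
    theta += omega * dt
assert abs(theta) < 0.05      # settled to within ~3 degrees
```

A user-uploaded algorithm for such a platform would replace only the torque computation line; the surrounding loop is the plant, which the hardware provides for real.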
Widespread misinformation in the digital age has emerged as a significant societal challenge with far-reaching implications. While concerns about the threats misinformation poses to the mental health of individuals have garnered attention, there remains a critical gap in our understanding of how misinformation uniquely affects the young generation, particularly those belonging to underrepresented groups. Emerging evidence suggests that underrepresented groups among the young generation, including marginalized communities, ethnic minorities, and socioeconomically disadvantaged individuals, often face heightened vulnerabilities to the harmful effects of misinformation. These groups encounter a unique intersection of social, economic, and cultural factors, exacerbating their susceptibility to false or harmful information. Understanding the differential impacts of misinformation within these communities is vital for creating targeted interventions and support mechanisms. With the long-term goal of offering a thorough understanding of the current state of knowledge in this critical area, this paper reports a preliminary literature review examining how false information about vaccines spreads on social media, fueling an “infodemic”, and revealing how anti-vaccine misinformation gets shared on social media and why people believe it. In addition, a small-scale case study is conducted based on a dataset collected by the team.
This study delves into the interconnected realms of Debates, Fake News, and Propaganda, with an emphasis on discerning prominent ideological underpinnings distinguishing Russian from English authors. Leveraging the advanced capabilities of Large Language Models (LLMs), particularly GPT-4, we process and analyze a large corpus of over 80,000 Wikipedia articles to unearth significant insights. Despite the inherent linguistic distinctions between Russian and English texts, our research highlights the adeptness of LLMs in bridging these variances. Our approach includes translation, question generation and answering, along with emotional analysis, to probe the gathered information. A ranking metric based on the emotional content is used to assess the impact of our approach. Furthermore, our research identifies important limitations within existing data resources for propaganda identification. To address these challenges and foster future research, we present a curated synthetic dataset designed to encompass a diverse spectrum of topics and achieve balance across various propaganda types.
Over the past three decades, information warfare has emerged as a critical component of international conflict. With the proliferation of technology and media networks, various stakeholders have exploited their capabilities to manipulate public perception. The ongoing Russo-Ukrainian conflict underscores the potency of these tactics, as they deepen societal divides and obstruct efforts at conflict resolution. Disinformation spans a wide gamut, from denials of human rights abuses to personal defamation. A primary challenge in countering disinformation lies in its tenacity, bolstered by confirmation bias—a tendency to dismiss evidence that contradicts existing beliefs. Against this backdrop, we delve into the potential of commercial satellites, which operate outside of state domains, as a means of countering falsehoods through textual and visual evidence. Utilizing an online survey targeting citizens in Russia, we assess the impact of disinformation campaigns on attitudes towards the war and its leaders and evaluate the efficacy of commercial satellites as a debunking instrument.
The rapid advancement of multimedia content editing software tools has made it increasingly easy for malicious actors to manipulate real-time multimedia data streams, encompassing audio and video. Among the notorious cybercrimes, replay attacks have gained widespread prevalence, necessitating the development of more efficient authentication methods for detection. A cutting-edge authentication technique leverages Electrical Network Frequency (ENF) signals embedded within multimedia content. ENF signals offer a range of advantageous attributes, including uniqueness, unpredictability, and total randomness, rendering them highly effective for detecting replay attacks. To counter potential attackers who may seek to deceive detection systems by embedding fake ENF signals, this study harnesses the growing accessibility of deep Convolutional Neural Networks (CNNs). These CNNs are not only deployable on platforms with limited computational resources, such as Single-Board Computers (SBCs), but they also exhibit the capacity to swiftly identify interference within a signal by learning distinctive spatio-temporal patterns. In this paper, we explore applying a Computationally Efficient Deep Learning Model (CEDM) as a powerful tool for rapidly detecting potential fabrications within ENF signals originating from diverse audio sources. Our experimental study validates the effectiveness of the proposed method.
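A sketch of the first step of ENF-based authentication, estimating the embedded mains component near its nominal frequency; the test signal and scan parameters are assumptions, and a real detector would track the estimate over time and flag discontinuities introduced by splicing or replay.

```python
import cmath
import math
import random

# ENF estimation sketch: locate the dominant spectral component near the
# nominal mains frequency (60 Hz in North America) in a short audio window.
fs = 1000                                   # sample rate, Hz (assumed)
N = 2000                                    # 2-second analysis window
random.seed(0)
audio = [0.05 * math.sin(2 * math.pi * 60.5 * n / fs)   # embedded ENF hum
         + 0.01 * random.gauss(0, 1)                    # background noise
         for n in range(N)]

def bin_magnitude(x, freq, fs):
    # Single-bin DFT at an arbitrary analysis frequency (Goertzel-style).
    w = cmath.exp(-2j * math.pi * freq / fs)
    return abs(sum(s * w ** n for n, s in enumerate(x)))

candidates = [59.0 + 0.1 * k for k in range(21)]   # scan 59.0-61.0 Hz
enf_estimate = max(candidates, key=lambda f: bin_magnitude(audio, f, fs))
assert abs(enf_estimate - 60.5) < 0.11   # recovers the embedded 60.5 Hz hum
```

The CNN-based detector described above learns to spot when this per-window estimate behaves inconsistently, rather than scanning candidate frequencies by hand.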
In an era characterized by the prolific generation of digital imagery through advanced artificial intelligence, the need for reliable methods to distinguish authentic photographs from AI-generated ones has become paramount. The ever-increasing ubiquity of AI-generated imagery, which seamlessly blends with authentic photographs, raises concerns about misinformation and trustworthiness. Authenticating these images has taken on critical significance in various domains, including journalism, forensic science, and social media. Traditional methods of image authentication often struggle to adapt to the increasingly sophisticated nature of AI-generated content. In this context, frequency domain analysis emerges as a promising avenue due to its effectiveness in uncovering subtle discrepancies and patterns that are less apparent in the spatial domain. Delving into the imperative task of imagery authentication, this paper introduces a novel Generative Adversarial Networks (GANs) based AI-generated Imagery Authentication (GANIA) method using frequency domain analysis. By exploiting the inherent differences in frequency spectra, GANIA uncovers unique signatures that are difficult to replicate, ensuring the integrity and authenticity of visual content. By training GANs on vast datasets of real images, we create AI-generated counterparts that closely mimic the characteristics of authentic photographs. This approach enables us to construct a challenging and realistic dataset, ideal for evaluating the efficacy of frequency domain analysis techniques in image authentication. Our work not only highlights the potential of frequency domain analysis for image authentication but also underscores the importance of adopting generative AI approaches in studying this critical topic. Through this innovative fusion of AI and frequency domain analysis, we contribute to advancing image forensics and preserving trust in visual information in an AI-driven world.
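A minimal sketch of the kind of frequency-domain cue such methods exploit: generator upsampling often leaves periodic artifacts that shift energy toward high spatial frequencies. The signals below are synthetic stand-ins, not the paper's datasets or method.

```python
import cmath
import math

# Frequency-domain cue sketch: compare the high-frequency energy ratio of a
# smooth scanline against one carrying a synthetic checkerboard-like artifact
# (stand-ins for a natural image row and a GAN-upsampled one).
def high_freq_ratio(signal):
    N = len(signal)
    spectrum = [abs(sum(s * cmath.exp(-2j * math.pi * k * n / N)
                        for n, s in enumerate(signal))) ** 2
                for k in range(N // 2 + 1)]
    high = sum(spectrum[N // 4:])   # energy in the upper half of the band
    return high / sum(spectrum)

N = 128
smooth = [n / N for n in range(N)]                        # gentle gradient
artifact = [v + 0.05 * (-1) ** n for n, v in enumerate(smooth)]
# The alternating artifact concentrates energy near the Nyquist frequency.
assert high_freq_ratio(artifact) > high_freq_ratio(smooth)
```

A practical detector would compute such spectral statistics in 2-D over many images and learn a decision boundary, rather than thresholding a single ratio.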
The ability to create and detect synthetic video is becoming critically important to scene understanding. Techniques for synthetic manipulation and augmentation of data increase diversity within available datasets, while not requiring laborious labeling efforts. That is, the ability to create synthetic video can enable augmentation of small realistic datasets on which to further train Artificial Intelligence and Machine Learning (AI/ML) algorithms. Thus, it may be desirable to add, remove, or modify vehicles in satellite and overhead video. In our previous work, we leveraged Generative Adversarial Networks (GANs) to transform cars into trucks (and vice versa) in static images. We utilized an attention-based masking approach that assists the network in transformation of the object and not background. In addition, we demonstrated the benefits of numerous data augmentation procedures, including presenting a new artificial dataset of vehicles from an aerial perspective and introducing novel augmentation techniques appropriate for our network architectures. This work extends the applied techniques from still imagery to video. We employ a few different architectures: (1) a fully dynamic 3D convolutional discriminator network with static generators, (2) a fully dynamic 3D convolutional discriminator and generator network, and (3) an architecture that computes "warp" between frames for input to a static generator. Additionally, to help enforce consistency, we experiment with an interframe classifier that verifies whether two frames belong to the same video sequence or not. We run experiments on a real-world dataset, presenting promising results in terms of FID, KID, and metrics developed from a classifier trained on our dataset.
As the landscape of global conflict evolves, the integration of Information Warfare (IW) strategies with emerging technologies like Artificial Intelligence (AI) becomes increasingly crucial for national security. This presentation, led by Terry Traylor, a seasoned military professional with expertise in IW and AI, aims to explore the synergies between these domains. Drawing from his experience as Deputy G39 for Information Warfare at MARSOC and his academic background in computer science focusing on machine learning, the talk will delve into the challenges and opportunities of incorporating AI into IW operations. Special attention will be given to the ethical considerations of AI in IW and how machine learning can be leveraged to counter disinformation campaigns effectively. The session will also discuss a case study involving a Phase II SBIR for USSOCOM, emphasizing the practical applications of these integrations in real-world scenarios. This presentation aims to provide a comprehensive understanding of how AI can augment IW strategies, offering a forward-looking perspective on national security.
Continuous Time Digital Signal Processing (CT-DSP) has the potential to be disruptive in four engineering disciplines: digital signal processing, control systems, compressive sensing, and spiking neural networks. In July 2022, a pipelined level crossing analog-to-digital architecture was published by Jungwirth and Crowe. In this paper, a real-time level crossing sampling interpolation algorithm is introduced. Digital Signal Processing (DSP) systems are treated as Linear Time-Invariant (LTI) systems, and the reconstruction operator is also LTI. This provides DSP with some important advantages: it benefits from mature linear system theory, mature Discrete Time (DT) systems theory, the ability to postpone the reconstruction operator until the final stage, and the well-understood Whittaker-Kotel'nikov-Shannon reconstruction. However, CT-DSP is not linear; the reconstruction is time-variant and complicated. Design of CT-DSP systems is more difficult than for DSP, but the justification for accepting this added difficulty is based on significant advantages in signal capture accuracy and in reduced power requirements. For DSP, the quantization noise floor is determined by Bennett's quantization error equation, and it remains fixed relative to the Analog-to-Digital Converter's (ADC) input range. However, the noise floor for CT-DSP is largely determined by the reconstruction algorithm and is not entirely dependent on the number of quantization levels. For example, Tsividis demonstrated ~100 dB Signal-to-Noise and Distortion ratio (SINAD) for a 16-level (4-bit equivalent) level crossing ADC, using offline signal reconstruction. This implies that CT-DSP’s SINAD does not significantly degrade for weak signals. In addition to the Tsividis revelation of the accuracy of these signals, several demonstrations of the advantages of CT-DSP have been reported. Zhao and Prodic demonstrated reduced lag and a 3x reduction in overshoot in the controller for a DC-DC buck-boost converter.
Qaisar and Hussain reported a 3x decrease in the number of sample points needed for accurate classification of arrhythmias using level crossing Electrocardiogram (ECG) signals. Alier et al. have demonstrated a 10x reduction in sample data when level crossing sampling is performed on audio speech waveforms. A novel real-time CT-DSP reconstruction algorithm is presented for the first time in this paper. The technique makes use of the aliased sinc (asinc) function to accomplish a compact, trigonometric spline interpolation. Although the technique is not strictly ideal, corrective measures have been included to maintain accuracy. It provides 20-40 dB SINAD improvement over comparable DSP systems, depending on the application. It is applicable to low-lag, real-time processing while allowing a trade-off between accuracy and computational complexity.
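Level-crossing sampling itself can be sketched in a few lines: a (time, level) pair is emitted only when the input crosses a quantization level, rather than at a fixed clock rate. The 16-interval quantizer mirrors the 4-bit equivalent converter discussed above; the 5 Hz test input is an assumption.

```python
import math

# Level-crossing sampling sketch: samples are event-driven, emitted only when
# the signal crosses one of the uniformly spaced quantization levels.
levels = [i / 8.0 - 1.0 for i in range(17)]   # 16 intervals spanning [-1, 1]
dt = 1e-5
samples = []
prev = math.sin(0.0)
for n in range(1, 20000):                     # 0.2 s = one 5 Hz cycle
    x = math.sin(2 * math.pi * 5 * n * dt)
    for lv in levels:
        if (prev - lv) * (x - lv) < 0:        # sign change => level crossed
            samples.append((n * dt, lv))
    prev = x
# Far fewer events than the 20,000 dense samples a clocked ADC would emit.
assert 0 < len(samples) < 200
```

This is the data reduction Qaisar, Hussain, and Alier et al. exploit; the hard part, which the paper addresses, is reconstructing a uniform signal from these non-uniform events in real time.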
Executive Order 14110, which emphasizes the safe, secure, and trustworthy development and use of artificial intelligence, mandates that research, development, and deployment be conducted with a strong emphasis on ethical practices. The United States Department of Defense (DoD) and numerous commercial industries require a robust methodology to test their AI/ML models, ensuring regulatory compliance while safeguarding the security and intellectual property of their models. Reverse Engineering and Vulnerability Elucidation of ALgorithms (REVEAL) is an invention designed to enable the reverse engineering and characterization of AI/ML algorithms without access to the underlying code base. With this type of “black-box assessment”, REVEAL will provide a method to enumerate the security and robustness of various AI-enabled systems. It will allow developers to improve models, provide new strategies for test and evaluation, and help ensure comprehensive adherence to governmental regulations.
This paper gives a bibliometric summary of the Unscented Kalman Filter (UKF) in AI-infused robotics, highlighting its role in unifying control and cognition. The study follows a systematic approach that includes literature collection from IEEE Xplore, Web of Science, and Google Scholar; rigorous screening and selection; and VOSviewer for a comprehensive bibliometric analysis. This analysis reports major trends, primary contributors, and central themes, highlighting the UKF’s pivotal role in improving robotic cognitive and control capacities. The study emphasizes the UKF’s widespread use across many fields of robotics, such as navigation and mapping, sensor fusion, and state estimation, illustrating its vital role in promoting robotic autonomy and intelligence. The integration of findings from the bibliometric analysis thus not only presents the current state of research but also identifies possible future research directions, highlighting the increasing unification of control theories and cognitive processes in robotics. This research adds to the body of knowledge by delivering a comprehensive map of UKF applications. As the UKF continues to penetrate AI-infused robotics, future robotic developments will rely on the deep fusion of control and cognition facilitated by the UKF and similar methods.
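For readers unfamiliar with the UKF's core mechanism, here is a scalar sketch of the unscented transform, which propagates a Gaussian through a nonlinearity via deterministic sigma points rather than linearization; parameter choices follow common heuristics.

```python
import math

# Scalar unscented transform, the core of the UKF: represent a Gaussian by
# 2n+1 sigma points, push them through the nonlinearity, and recombine.
mean, var = 2.0, 0.25
n = 1
alpha, beta, kappa = 1.0, 2.0, 3.0 - n   # kappa = 3 - n heuristic;
# beta enters only the covariance weights, which are omitted in this sketch.
lam = alpha**2 * (n + kappa) - n
spread = math.sqrt((n + lam) * var)
sigma_points = [mean, mean + spread, mean - spread]
wm = [lam / (n + lam), 1 / (2 * (n + lam)), 1 / (2 * (n + lam))]

f = lambda x: x * x                   # nonlinear measurement function
y_mean = sum(w * f(x) for w, x in zip(wm, sigma_points))
# For quadratic f the unscented mean is exact: E[x^2] = mean^2 + var = 4.25.
assert abs(y_mean - (mean**2 + var)) < 1e-9
```

A linearized (EKF-style) propagation would give f(mean) = 4.0 here and miss the variance contribution entirely, which is exactly the accuracy gap the UKF literature surveyed above emphasizes.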
In recent years, interest in Software-Defined Networking (SDN) has grown considerably. Many applications of traditional networking have been re-implemented in SDN environments in order to test the performance of the different network devices. In this paper, server Load Balancing (LB) based on SDN has been developed and tested to verify the effectiveness of load balancing within this new networking paradigm. In our implementation, we use a Ryu controller for controlling and managing the network devices, and two different LB algorithms have been implemented. We compare these two algorithms against a system without load balancing in a client-server setup, varying the number of servers and clients to characterize the performance of the SDN network.
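The abstract does not name the two LB algorithms, so the sketch below shows two common choices a controller app might implement, round-robin and least-connections, purely as an assumed illustration of the server-selection logic (the Ryu/OpenFlow plumbing that installs the corresponding flow rules is omitted). All server addresses and class names are hypothetical.

```python
from itertools import cycle

class RoundRobin:
    """Assign new flows to servers in a fixed cyclic order."""
    def __init__(self, servers):
        self._pool = cycle(servers)

    def pick(self):
        return next(self._pool)

class LeastConnections:
    """Assign each new flow to the server with the fewest active flows."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Called when a flow expires, freeing capacity on that server.
        self.active[server] -= 1

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
rr = RoundRobin(servers)
assignments = [rr.pick() for _ in range(6)]  # cycles through all three twice
```

In an SDN deployment the `pick` result would determine the rewrite action in the flow entry the controller pushes to the switch, so subsequent packets of the flow bypass the controller entirely.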
Air pollution, with its impacts on human health and the environment, is a growing global issue. In this article, we propose the implementation of a Multi-Interface Mobile Gateway (MIMG) with LPWAN technology in public transportation vehicles for monitoring air quality. The idea is to use a mobile monitoring system that can reduce the cost of classical fixed air-pollution and environmental monitoring stations. This approach addresses challenges such as data transfer, interference, and data pre-processing to reduce the amount of data sent to the remote data management center. We conducted a system emulation to evaluate several data-forwarding strategies and the overall traffic load generated by the mobile station on the network. Furthermore, the MIMG manages the use of the communication interfaces, applies data aggregation techniques to reduce the amount of data to be transmitted, and uses machine learning to enhance the accuracy of low-cost sensor readings. Our approach has significant applications in urban air quality management.
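Two of the gateway functions mentioned above, on-board aggregation and sensor correction, can be illustrated concretely. The sketch below is an assumed minimal version, not the paper's implementation: windowed aggregation compresses raw samples before LPWAN transmission, and a least-squares linear fit against a reference instrument stands in for the (unspecified) machine-learning calibration step. All function names and numbers are illustrative.

```python
import numpy as np

def aggregate(readings, window=10):
    """Summarize each window of raw samples as (mean, max),
    shrinking the payload sent to the remote data management center."""
    readings = np.asarray(readings, dtype=float)
    n = len(readings) // window * window  # drop any incomplete tail window
    chunks = readings[:n].reshape(-1, window)
    return np.column_stack([chunks.mean(axis=1), chunks.max(axis=1)])

def fit_calibration(raw, reference):
    """Least-squares linear correction of a low-cost sensor against a
    co-located reference instrument: reference ~= gain * raw + offset."""
    A = np.column_stack([raw, np.ones_like(raw)])
    (gain, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return gain, offset

# Illustrative calibration: the cheap sensor reads 2 units high.
raw = np.array([12.0, 14.0, 13.5, 15.0])
ref = np.array([10.0, 12.0, 11.5, 13.0])
gain, offset = fit_calibration(raw, ref)  # gain ~ 1.0, offset ~ -2.0
corrected = gain * raw + offset
```

With a window of 10 samples this reduces the transmitted volume fivefold (two summary values per window), which matters on duty-cycle-limited LPWAN links; a production gateway would likely use richer features and a learned model in place of the linear fit.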