Researchers are exploring how radio frequency (RF) sensors can enable new interfaces and smart environments that respond to human movement, with potential applications such as gesture recognition and smart home systems. Although several types of RF sensors exist, this study focuses on Wi-Fi signals. The researchers collected data using a Raspberry Pi running specialized software and analyzed it to determine whether different human activities could be identified. They made their data and code publicly available so that others can build on their work. The study found that Wi-Fi signals could identify activities with an accuracy of around 65%, suggesting that Wi-Fi has potential for indoor activity monitoring.
KEYWORDS: Signal attenuation, Neural networks, Fourier transforms, Data processing, Data conversion, Cell phones, Neurons, Mobile devices, Received signal strength, Computer engineering
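Below is a minimal sketch of the kind of Wi-Fi-based activity classification described above, assuming channel state information (CSI) has already been extracted on the Raspberry Pi into per-frame amplitude arrays. The file names, window length, and choice of a random-forest classifier are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: window the CSI amplitude stream, build simple per-subcarrier
# statistics, and estimate classification accuracy by cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

csi = np.load("csi_amplitude.npy")       # (n_frames, n_subcarriers), hypothetical file
labels = np.load("activity_labels.npy")  # one non-negative integer activity label per frame

WIN = 128                                # frames per window (assumption)
n_win = len(csi) // WIN
X = np.stack([
    np.concatenate([
        csi[i * WIN:(i + 1) * WIN].mean(axis=0),  # per-subcarrier mean amplitude
        csi[i * WIN:(i + 1) * WIN].std(axis=0),   # per-subcarrier variation captures motion
    ])
    for i in range(n_win)
])
y = np.array([np.bincount(labels[i * WIN:(i + 1) * WIN]).argmax() for i in range(n_win)])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```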
Mobile devices have distinct RF fingerprints, which appear as small variations in the frequency content of their transmitted signals. The Short-Time Fourier Transform (STFT) is a suitable technique for evaluating this frequency content and thus identifying the devices. In this paper, we take advantage of STFT processing and perform room-level location classification. Raw in-phase and quadrature (IQ) signals and channel state information (CSI) frames were collected from seven different cell phones. Data collection was performed at eight different locations on the same floor of our engineering building, which contains indoor hallways and rooms of different sizes. Three software-defined radios (SDRs) were placed at three different locations to receive signals simultaneously but independently. The IQ and CSI frames were concatenated to train a neural network: a Multi-Layer Perceptron (MLP) trained with the concatenated signals as inputs and their corresponding locations as labels. A challenging aspect is that our dataset does not contain the same number of samples per location; moreover, several locations have insufficient training data due to signal attenuation. An imbalanced learning method has been applied to this dataset to overcome this limitation and improve classification accuracy. The classification strategy is one-vs-rest binary classification, i.e., each individual location vs. all others. Using this approach, we obtain a mean accuracy of around 95%.
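As a rough illustration of the one-vs-rest strategy with imbalance handling described above, the sketch below trains a binary MLP per location on precomputed concatenated IQ/CSI feature vectors, using random oversampling as one concrete (assumed) imbalanced-learning method. The feature file names, network size, and oversampling choice are assumptions.

```python
# Hedged sketch of one-vs-rest room classification with oversampling for class imbalance.
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.load("iq_csi_features.npy")   # (n_samples, n_features), hypothetical file
y = np.load("location_labels.npy")   # integer location IDs, e.g. 0..7

accs = []
for loc in np.unique(y):
    y_bin = (y == loc).astype(int)   # individual location vs. all others
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y_bin, test_size=0.2, stratify=y_bin, random_state=0)

    # Oversample the minority class so the MLP sees a balanced training set.
    X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

    mlp = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500, random_state=0)
    mlp.fit(X_res, y_res)
    accs.append(accuracy_score(y_te, mlp.predict(X_te)))

print("mean one-vs-rest accuracy:", np.mean(accs))
```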
Many correctional facilities suffer from the smuggling of cell phones and other wireless devices within prison walls. To locate these devices for confiscation, we must be able to map intercepted signals to indoor locations within a radius of a few meters. We use cell phones of varying models and multiple low-cost software-defined radios (SDRs) for this task. The different cell phone models provide a more robust dataset for location fingerprinting because each contains different transmitter hardware. Furthermore, the SDRs allow us to easily receive raw IQ data from WiFi signals while remaining cost-efficient for smaller facilities. The raw data is collected in a grid pattern in a harsh, prison-like environment and associated with the location at which it was captured. A machine learning network uses the raw signals as inputs and locations as labels in order to map the signals to their respective locations. The accuracy of our system is then compared against prior works in this field and discussed. These studies often use values other than the raw IQ data, such as channel state information (CSI) and the received signal strength indicator (RSSI). Therefore, we augment our original input with each of these values and measure their effect on the system's overall performance. The end result provides prisons with a tool capable of locating devices used in unauthorized zones for confiscation.
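The input-augmentation experiment described above could be organized along the lines of the sketch below, which concatenates the raw IQ feature block with CSI and RSSI blocks under different configurations; the array names, shapes, and per-block normalization are assumptions for illustration.

```python
# Hedged sketch: build feature matrices for the IQ-only and augmented configurations.
import numpy as np

iq = np.load("raw_iq_features.npy")   # (n_samples, n_iq_features), hypothetical file
csi = np.load("csi_features.npy")     # (n_samples, n_csi_features), hypothetical file
rssi = np.load("rssi.npy")            # (n_samples, 1), hypothetical file

def build_input(use_csi=False, use_rssi=False):
    """Concatenate the selected feature blocks, z-scored so no block dominates by scale."""
    parts = [iq]
    if use_csi:
        parts.append(csi)
    if use_rssi:
        parts.append(rssi)
    parts = [(p - p.mean(axis=0)) / (p.std(axis=0) + 1e-8) for p in parts]
    return np.hstack(parts)

X_iq_only = build_input()
X_augmented = build_input(use_csi=True, use_rssi=True)
```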
Wireless devices identify themselves using media access control (MAC) addresses, which can be easily intercepted and mimicked by an adversary. Mobile devices also have a unique physical fingerprint, represented by perturbations in the frequency of broadcast signals caused by differences in the manufacturing of their hardware components; this fingerprint is much more difficult to mimic. The short-time Fourier transform (STFT) analyzes how the frequency content of a signal changes over time and may provide a better representation of mobile signals for detecting this unique fingerprint. In this paper, we collect wireless signals using the 802.11 a/g protocol and show the effect on classification performance of applying the STFT while varying the window length, augmenting the data with complex Gaussian noise, and concatenating STFTs of different frequency resolutions, achieving state-of-the-art performance of 99.94% accuracy.
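A minimal sketch of the preprocessing experiments mentioned above follows: computing STFT magnitudes of a complex IQ burst at several window lengths, optionally augmenting with circularly symmetric complex Gaussian noise, and concatenating the results. The window lengths, SNR level, and file name are assumptions rather than the paper's exact settings.

```python
# Hedged sketch: noise augmentation plus multi-resolution STFT features for IQ bursts.
import numpy as np
from scipy.signal import stft

def add_complex_gaussian_noise(iq, snr_db=20.0, rng=np.random.default_rng(0)):
    """Add circularly symmetric complex Gaussian noise at a target SNR (20 dB assumed)."""
    p_signal = np.mean(np.abs(iq) ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    noise = rng.normal(size=iq.shape) + 1j * rng.normal(size=iq.shape)
    return iq + np.sqrt(p_noise / 2) * noise

def multi_resolution_stft(iq, window_lengths=(64, 256)):
    """Concatenate |STFT| computed at several window lengths (frequency resolutions)."""
    features = []
    for n in window_lengths:
        _, _, Z = stft(iq, nperseg=n, return_onesided=False)  # two-sided for complex IQ
        features.append(np.abs(Z).ravel())
    return np.concatenate(features)

iq = np.load("wifi_burst_iq.npy")  # complex IQ samples, hypothetical file
features = multi_resolution_stft(add_complex_gaussian_noise(iq))
```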
We consider the problem of accurately detecting signals from contraband WiFi devices. Source locations may be selected in a worst-case fashion from within an indoor structure, such as a correctional facility. The structure layout is known but inaccessible prior to deployment, and only a small number of detectors are available for sensing these signals. Our approach treats this setting as a covering problem, where the aim is to achieve a high probability of detection at each grid point of the terrain. Unlike prior approaches, we employ (1) a variant of the maximum coverage problem, which allows us to account for aggregate coverage by several detectors, and (2) a state-of-the-art commercial wireless simulator that provides SINR measurements to inform our problem instances. The approach is formulated as a mathematical program to which additional constraints are added to limit the number of detectors. Solving the program produces a placement of detectors whose performance is then evaluated in terms of classifier accuracy. We present preliminary results, combining simulation data and real-world data to evaluate the performance of our approach against two competitors inspired by the literature.
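One way to write down a budgeted maximum-coverage program of the kind described above is sketched below with PuLP. Coverage is simplified here to a boolean test of simulated SINR against a detection threshold (the paper's variant additionally accounts for aggregate coverage by several detectors); the SINR matrix, threshold, and detector budget are assumptions.

```python
# Hedged sketch: budgeted maximum coverage for detector placement as an integer program.
import numpy as np
import pulp

sinr = np.load("simulated_sinr.npy")   # (n_candidate_sites, n_grid_points), hypothetical file
covers = sinr >= 10.0                  # grid point detectable from a site (10 dB assumed)
n_sites, n_points = covers.shape
BUDGET = 3                             # number of available detectors (assumption)

prob = pulp.LpProblem("detector_placement", pulp.LpMaximize)
x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(n_sites)]   # place detector at site i
z = [pulp.LpVariable(f"z_{j}", cat="Binary") for j in range(n_points)]  # grid point j is covered

prob += pulp.lpSum(z)            # objective: maximize the number of covered grid points
prob += pulp.lpSum(x) <= BUDGET  # constraint: limited number of detectors
for j in range(n_points):
    # a grid point counts as covered only if at least one selected site covers it
    prob += z[j] <= pulp.lpSum(x[i] for i in range(n_sites) if covers[i, j])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
placement = [i for i in range(n_sites) if x[i].value() > 0.5]
print("selected detector sites:", placement)
```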
This paper explores a multimodal deep learning network based on SqueezeSeg. We extend the standard SqueezeSeg architecture to enable camera and lidar fusion; the sensor processing method is termed pixel-block point-cloud fusion. Using co-registered camera and lidar sensors, the input section of the proposed network creates a feature vector by extracting a block of RGB pixels around each point-cloud point that falls within the camera's field of view. Essentially, each lidar point is paired with neighboring RGB data so that the feature extractor has more meaningful information from the image. This fusion method adds rich information from the camera data to enhance overall performance: the pixel blocks contribute not only object color to the lidar data but also local texture. The proposed pixel-block point-cloud fusion method yields better results than single-pixel fusion.
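The pixel-block pairing step could look roughly like the sketch below, which projects each lidar point into a co-registered camera image and attaches the surrounding block of RGB pixels to form the fused feature vector. The 3x4 projection matrix, block size, and handling of image borders are assumptions, not the paper's exact implementation.

```python
# Hedged sketch of pixel-block point-cloud fusion: pair each visible lidar point
# with a k x k RGB neighborhood around its image projection.
import numpy as np

def pixel_block_features(points_xyz, image_rgb, P, k=5):
    """Return an (N, 3 + 3*k*k) array of [x, y, z, flattened k x k RGB block]."""
    h, w, _ = image_rgb.shape
    half = k // 2
    # pad so blocks near the image border stay k x k
    padded = np.pad(image_rgb, ((half, half), (half, half), (0, 0)), mode="edge")

    ones = np.ones((points_xyz.shape[0], 1))
    proj = (P @ np.hstack([points_xyz, ones]).T).T   # assumed 3x4 camera projection matrix

    feats = []
    for (x, y, z), (u_h, v_h, s) in zip(points_xyz, proj):
        if s <= 0:                                   # point is behind the camera
            continue
        col, row = int(u_h / s), int(v_h / s)
        if not (0 <= col < w and 0 <= row < h):      # outside the camera's field of view
            continue
        block = padded[row:row + k, col:col + k, :]  # k x k RGB neighborhood (padded coords)
        feats.append(np.concatenate([[x, y, z], block.reshape(-1)]))
    return np.asarray(feats)
```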