This PDF file contains the front matter associated with SPIE Proceedings Volume 11533 including the Title Page, Copyright information, and Table of Contents.
Welcome to the Image and Signal Processing for Remote Sensing XXVI conference.
Atmospheric Corrections, Calibration, and Image Enhancement
Processing of remote sensing data is often based on the assumption that noise parameters in component images are known a priori. If this assumption is not valid, noise parameters must be estimated directly from noisy image patches. In this paper, two estimators, a model-based and a learning-based one, able to evaluate the noise standard deviation (SD) or variance and to predict the estimation accuracy for each image patch, are considered. The former is a maximum likelihood estimator (MLE) of the parameters of an anisotropic fractional Brownian motion (afBm) field, whilst the learning-based one is a convolutional neural network (CNN) trained on real-life images. Our goal is to compare their performance in two cases: pure afBm data and real-life images. It is shown that the learning-based approach turns out to be less effective for pure afBm data, since it produces a certain bias, whilst the model-based approach runs into problems for complex image patches in real-life images. Based on this analysis, we propose to use synthetic afBm data as an additional source of training data for learning-based methods of noise parameter estimation. By mixing real and synthetic data for training of the NoiseNet CNN, we were able to improve its performance in both domains. For afBm data, the NoiseNet bias was significantly reduced and its ability to predict the confidence of noise SD estimates improved. On the NED2012 database of real images, the modified NoiseNet reduces the estimation error of the signal-independent noise SD component by about 40% as compared to the original CNN version.
The Sentinel-2 mission is dedicated to land monitoring, emergency management and security. It serves for monitoring of land-cover change and biophysical variables related to agriculture and forestry. The mission is also used to monitor coastal and inland waters and is useful for risk and disaster mapping. The Sentinel-2 mission has been fully operational since June 2017, with a constellation of two polar-orbiting satellite units. Both Sentinel-2A and Sentinel-2B are equipped with the optical imaging sensor MSI (Multi-Spectral Instrument), which acquires optical data products with a spatial resolution of up to 10 m. Accurate atmospheric correction of satellite observations is a precondition for the development and delivery of high-quality applications. Therefore, the atmospheric correction processor Sen2Cor was developed with the objective of delivering land surface reflectance products. Sen2Cor is designed to process mono-temporal single-tile Level-1C products, providing a Level-2A surface (Bottom-of-Atmosphere) reflectance product together with Aerosol Optical Thickness (AOT) and Water Vapour (WV) estimation maps and a Scene Classification (SCL) map for further processing. The paper gives an overview of the Level-2A product content and up-to-date information about the data quality of the Level-2A products generated with Sen2Cor 2.8 in terms of cloud screening and atmospheric correction. In addition, the paper gives an outlook on the next updates of Sen2Cor and their impact on Level-2A data quality.
Satellite imagery provides information crucial for remote sensing applications. However, the images themselves can suffer from systematic and random artefacts which reduce the utility and accuracy of datasets. In particular, radiometric miscalibration due to temporal variation of the detector response may result in stripe noise. We report a method for suppressing striping in remote sensing images by means of a Fourier filter shaped like a super-Gaussian function. In comparison to both established ‘traditional’ and deep-learning-based destriping techniques, our method demonstrates superior destriping performance both for remote sensing images with native striping and for images with synthetically added stripes. Our method simultaneously meets the three criteria of fidelity, speed and flexibility, enabling an efficient improvement in the radiometric accuracy of images from a wide range of satellite sources.
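As a rough illustration of the filtering idea, the sketch below (our own simplification, not the authors' implementation) notches the stripe band of the 2D spectrum with a super-Gaussian profile while protecting the low-frequency region that carries scene content; the parameters order, width and keep are illustrative only.

import numpy as np

def supergaussian_destripe(img, order=3, width=1.5, keep=8):
    # Suppress vertical stripes: their energy concentrates on the k_y ~ 0 line
    # of the 2D spectrum. A super-Gaussian notch attenuates that line, while a
    # second super-Gaussian protects the |k_x| < keep block (scene content).
    rows, cols = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    ky = np.arange(rows) - rows // 2
    kx = np.arange(cols) - cols // 2
    KY, KX = np.meshgrid(ky, kx, indexing="ij")
    notch = np.exp(-np.abs(KY / width) ** (2 * order))
    protect = np.exp(-np.abs(KX / keep) ** (2 * order))
    H = 1.0 - notch * (1.0 - protect)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))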
This article considers the use of a multi-criteria smoothing method with adaptive parameter selection for various types of images. To improve a group of images, the work proposes staged processing of each multi-channel image. In the first step, an algorithm for changing the color space is applied, in which multiple adaptive compression of the range occurs based on a change in cluster size. This algorithm allows adaptive absorption of adjacent pixel regions through analysis of gradient histograms. This step performs primary localization and simplification of the image. In the next step, we search for areas of significance (maximum number of transitions or object complexity) and check the coincidence of these areas across the channels of a multi-channel image. Next, we perform image smoothing, using the data obtained at the previous processing stages as the filter mask. The parameters of the multi-criteria method depend on the value of a standard deviation coefficient and on the analysis area (object boundary, detailed section, or locally stationary region). At the final stage, we perform an image enhancement operation based on the α-rooting algorithm applied in the local areas defined in the first stages of the algorithm. All operations are performed for each image in all channels. The approach proposed in the article showed high efficiency and applicability to the processing of multichannel images. The method can be extended to other groups and types of sensors.
Landsat 8 data and Breaks For Additive Season and Trend (BFAST) were used in a region of central Portugal to detect forest clear-cuts and burnt areas. A total of 79 Landsat 8 images from 2013 to 2019 were downloaded for path/row 204/032, and the NDVI was calculated. The same processing was done for path/row 203/032 to create a denser time series in the overlapping area, which increased to 124 images. The output of the analysis is a binary map of change (i.e., forest loss) and no-change. A probabilistic accuracy assessment based on random stratified sampling was implemented with 100 random points per stratum. Each point was interpreted as either “no-change”, “clear-cut” or “burnt area” based on reference data. Furthermore, the date of change (if any) was determined. Results show an overall accuracy of 0.85±0.02 for the binary classification, with omission and commission errors for class “Change” of 0.30±0.02 and 0.19±0.02, respectively. Moreover, it is estimated that 32% of the forested area in path/row 204/032 went through at least one episode of clear-cut or fire in the period analyzed. The time lag between the date of change and its detection was about 2.5 months on average, decreasing to 1.5 months in the region with the denser time series. The results are promising, but BFAST is somewhat slow and hence some concerns remain about its efficiency in operational use.
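For reference, the NDVI input used above is the standard normalized difference of the red and near-infrared bands (bands 4 and 5 of the Landsat 8 OLI); a minimal sketch is given below. The BFAST analysis itself is typically run with the bfast R package and is not shown.

import numpy as np

def ndvi(red, nir, eps=1e-6):
    # NDVI = (NIR - Red) / (NIR + Red); cloud/no-data masking is assumed upstream
    red = red.astype(np.float32)
    nir = nir.astype(np.float32)
    return (nir - red) / (nir + red + eps)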
Thanks to the free availability of several Satellite Image Time Series (SITS) covering the Earth, it is now possible to monitor and analyse Land Covers (LC) and Land Cover Changes (LCC) over yearly or even longer time spans. Such applications are relevant in the context of Climate Change (CC), where the consequences of the changes can only be seen in the long term. Nevertheless, SITS from passive sensors suffer from problems related to atmospheric conditions that reduce the temporal resolution of the series. Several methods have been proposed in the literature to mitigate these problems, usually grouped under gap-filling or SITS-fitting methods. Such methods generally work with a single feature, be it a radiometric index or a spectral band. The use of multiple features is typically limited to a specific single LC class or satellite sensor, which limits its usage in LCC and CC studies. Thus, in this paper, we propose an approach that is automatic and independent of both LC and feature. We propose the use of Normalized Difference Indices (NDI) computed from the combination of all available spectral bands. The proposed approach uses a dropout upper-envelope strategy to reconstruct SITS trends based on a set of rules, and guarantees a smoother trend that remains close to the original data. The proposed approach has been applied over two regions (Amazonia and Saudi Arabia) in the period 2013-2017, and has been compared to other fitting methods: Cubic Splines and Univariate Splines. It has been further evaluated by detecting LCC with long-SITS methods such as Breaks For Additive Seasonal and Trend (BFAST). The preliminary results are promising, demonstrating the robustness of the approach across different LCs and across different features.
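As an illustration of the feature set, the sketch below computes a Normalized Difference Index image for every pair of spectral bands of a cube; band ordering and masking conventions are assumptions of this example.

import numpy as np
from itertools import combinations

def all_ndis(cube, eps=1e-6):
    # cube: (rows, cols, bands); returns one (b_i - b_j)/(b_i + b_j) image per band pair
    bands = cube.shape[-1]
    ndis = []
    for i, j in combinations(range(bands), 2):
        bi = cube[..., i].astype(np.float32)
        bj = cube[..., j].astype(np.float32)
        ndis.append((bi - bj) / (bi + bj + eps))
    return np.stack(ndis, axis=-1)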
Different from traditional daytime Remote Sensing (RS) observation data, Nighttime Light (NTL) RS images have shown great potential for Earth observation applications from a unique point of view. With the launch of China's new-generation Luojia1-01 (LJ1-01) NTL satellite, the acquisition of high-spatial-resolution, high-quality NTL imagery makes it possible to identify disaster events and their temporal change by using automatic Change Detection (CD) techniques. This is a strong complement to daytime remote sensing information. In this paper, we propose a multiple-feature-fusion CD approach for fire disaster event monitoring in multitemporal high-resolution LJ1-01 NTL images. Multiple texture features are fused by taking advantage of the Multivariate Alteration Detection (MAD) and Iteratively-Reweighted MAD (IR-MAD) algorithms, in order to improve the CD performance, which is limited when only the original single-band gray-level NTL images are used. Experimental results obtained on multitemporal LJ1-01 NTL images demonstrate the effectiveness of the proposed CD technique for the automatic and accurate extraction of the 2018 California Camp Fire disaster event. The proposed approach outperformed approaches relying only on the original gray-scale band and single texture features. This study explores the possibility and potential of using high-resolution NTL data for CD, in particular for effective emergency response and rescue in major disaster monitoring applications.
The main objective of this study is to investigate suitable approaches to monitor land infrastructure growth over a period of time using multiple modalities of remote sensing satellite images. Bi-temporal change detection methods are unable to indicate the continuous change occurring over a long period of time; thus, synthetic aperture radar (SAR) and multispectral satellite images of the same geographical region over the period 2015 to 2018 are obtained and analyzed. SAR data from Sentinel-1 and multispectral image data from Sentinel-2 and Landsat-8 are used. A statistical composite hypothesis technique is used for pixel-based change detection. The well-established likelihood ratio test (LRT) statistic is used for determining the pixel-wise change in a series of complex covariance matrices of multilooked polarimetric SAR data. In the case of multispectral images, the approach is to estimate a statistical model from a series of images acquired over a long period of time, assuming there is no considerable change during that period, and then to compare it with the multispectral image data obtained at a later time. The generalized likelihood ratio test (GLRT) is used to detect the target (changed pixel) against the probabilistic model estimated for the corresponding background clutter (non-changed pixels). To minimize errors due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. There are different challenges in both cases. SAR images have the advantage of being insensitive to atmospheric and light conditions, but they suffer from the speckle phenomenon. In the multispectral case, the challenge is to obtain a sufficiently large number of cloud-free acquisitions over the region of interest for multivariate distribution modelling. Due to imperfect modelling there will be a high probability of false alarm. Co-registration is also an important criterion in multitemporal image analysis.
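To make the multispectral branch concrete, the sketch below is a deliberately simplified GLRT under a single global multivariate Gaussian clutter model: the per-pixel-neighbourhood modelling and the 8-neighborhood co-registration guard described above are omitted, and the threshold comes from the chi-square distribution.

import numpy as np
from scipy.stats import chi2

def gaussian_glrt_change(stack, test_img, alpha=0.999):
    # stack: (T, rows, cols, bands) reference series assumed change-free;
    # test_img: (rows, cols, bands) later acquisition
    T, r, c, b = stack.shape
    X = stack.reshape(-1, b)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)
    icov = np.linalg.inv(cov)
    d = test_img.reshape(-1, b) - mu
    score = np.einsum("ij,jk,ik->i", d, icov, d).reshape(r, c)  # squared Mahalanobis distance
    thr = chi2.ppf(alpha, df=b)   # GLRT threshold under the Gaussian clutter model
    return score > thr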
Change detection (CD) benefits from the capability of deep-learning (DL) methods to exploit complex temporal behaviors in large amounts of data. Unsupervised CD DL methods are preferred since they do not require labeled data, and they typically use autoencoders (AE) or convolutional AEs (CAE). However, the features provided by the CAE hidden layers tend to degrade the geometrical information during the encoding. To mitigate this effect, we propose an unsupervised CD method exploiting a multilayer CAE trained with a hierarchical loss function. This loss function guarantees a better trade-off between noise reduction and preservation of geometrical details at each hidden layer of the CAE. In contrast to standard CAEs, the proposed loss function considers input/output specular pairs of multiple hidden layers. These layers are analyzed by considering encoder/decoder pairs that work at corresponding geometrical resolutions and show similar spatial-context information. Single-layer loss functions are defined by comparing the specular corresponding encoder and decoder pairs and are then aggregated to design a multilayer loss function. The proposed hierarchical loss function allows for layer-by-layer control of the training and improves the reconstruction quality of the hidden layers, better preserving the geometrical details while reducing noise. The CD is performed by processing bi-temporal remote sensing images with the CAE. A detail-preserving multi-scale CD process exploits the most informative features of the bi-temporal images to compute the change map. Preliminary experimental results obtained on a pair of multitemporal Landsat-8 images, acquired before and after a fire near Granada, Spain, on July 8th, 2015, are promising.
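A minimal sketch of what such a hierarchical loss could look like (our reading of the idea, with illustrative per-layer weights): each encoder feature map is compared with the decoder feature map of the same resolution, its specular pair, and the per-layer terms are summed.

import torch.nn.functional as F

def hierarchical_cae_loss(encoder_feats, decoder_feats, weights=None):
    # encoder_feats / decoder_feats: lists ordered so that index l pairs the
    # specular encoder/decoder layers working at the same geometrical resolution
    if weights is None:
        weights = [1.0] * len(encoder_feats)
    loss = 0.0
    for w, e, d in zip(weights, encoder_feats, decoder_feats):
        loss = loss + w * F.mse_loss(d, e)
    return loss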
Image registration is an essential preprocessing task in many applications of hyperspectral images capturing the Earth's surface. Maximally Stable Extremal Regions (MSER) is a feature-based method for image registration which extracts regions by thresholding the image at different grey levels. Its invariance to affine transformations makes it well suited for image registration. The method is usually employed in text detection and recognition as well as in the medical domain. Hyperspectral images contain spectral information that can be used for improving the image alignment. This article presents a first approach to a hyperspectral remote sensing image registration method based on MSER that efficiently exploits the information contained in the different spectral bands. The experimental results over four hyperspectral images show that the proposed method is promising, as it achieves a higher number of correct registration cases than other feature-based methods.
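For illustration, region extraction with MSER on individual bands can be sketched with OpenCV as below; the band indices are arbitrary examples, and the subsequent matching of region centroids between the two images and the affine transform estimation are not shown.

import cv2
import numpy as np

def mser_regions_per_band(hsi, bands=(10, 40, 80)):
    # hsi: (rows, cols, bands) hyperspectral cube
    mser = cv2.MSER_create()
    all_regions = {}
    for b in bands:
        band = hsi[..., b].astype(np.float32)
        gray = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        regions, _ = mser.detectRegions(gray)   # pixel lists of extremal regions
        all_regions[b] = regions
    return all_regions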
Change detection methods are frequently associated with wavelength-resolution synthetic aperture radar (SAR) images for foliage-penetrating (FOPEN) applications (e.g., the detection of concealed targets in forested areas), and have been a research topic of interest over the last decades. The challenge associated with the design of automated change detection techniques goes beyond performing the target detection; it is also related to clutter suppression aiming at a low false alarm rate (FAR). The problem of detecting targets and removing background content in SAR data can be treated as an unsupervised signal separation problem, usually referred to as blind source separation (BSS). Additionally, low-frequency wavelength-resolution SAR images can be considered to follow an additive separation model due to their backscatter characteristics. In this context, it is possible to explore robust principal component analysis (RPCA) as a source-separation method for problems in which the mixing model is additive and two-dimensional, as is the case for the SAR images of interest. This paper presents a change detection method for wavelength-resolution SAR images based on RPCA via principal component pursuit (PCP), considering the use of small image stacks to exploit the data diversity from measurements of different flight headings. The proposed method is evaluated using real data obtained from measurements of the ultrawideband (UWB) very high frequency (VHF) SAR system CARABAS II. The experimental results show that the proposed method can achieve high probability of detection (PD) values for a low FAR (i.e., a PD of 0.98 for a FAR of 0.41 objects per square kilometer). Finally, discussions regarding the use of RPCA in change detection methods and the diversity gains are provided in the paper.
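The core decomposition can be sketched with a standard inexact-ALM principal component pursuit, shown below with the usual default parameters (lambda = 1/sqrt(max dimension)); these are not necessarily the settings used in the paper. Each column of D would be one vectorized image of the stack, L the low-rank background and S the sparse component expected to contain the changes.

import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rpca_pcp(D, lam=None, mu=None, n_iter=200, tol=1e-7):
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    normD = np.linalg.norm(D, "fro")
    for _ in range(n_iter):
        # low-rank update by singular value thresholding
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        # sparse update by element-wise soft thresholding
        S = soft(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
        if np.linalg.norm(D - L - S, "fro") / normD < tol:
            break
    return L, S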
SAR raw data are usually oversampled and spectrally weighted along the Fourier-domain processing chain to avoid Gibbs effects around point targets. During the range and azimuth SAR focusing operations, the use of matched filters, whose frequency responses are smoothed by means of tapering functions such as Hamming, Taylor, or Kaiser windows, introduces a significant degree of spatial correlation of the noise in the single-look complex (SLC) 2D signal at the output of the SAR processor. In this work, we make use of an unsupervised procedure for the spatial decorrelation of fully developed speckle, originally developed for improving despeckling performance in the case of single-look SAR images [Lapini et al., 2014]. Here, the goal is to evaluate the impact of a preliminary spatial decorrelation of the noise on the accuracy of temporal change detection between two single-look images of the same scene taken at different times. In a realistic simulated scenario, we optionally introduce a spatial correlation of the noise in the synthetic complex data by means of a 2D separable Hamming window in the Fourier domain. Then, we remove such a correlation by using the whitening procedure, take the modulus of the SLC images, apply change detection algorithms suitable for detected data, and compare the geometric and radiometric accuracy of the estimated change maps for the three following cases: uncorrelated noise, correlated noise, and decorrelated noise. Several change detection methods are considered: from the simple log-ratio operator preceded by despeckling, to more advanced parametric or nonparametric methods based on the Kullback-Leibler divergence [Inglada and Mercier, 2007] or on the mean-shift clustering of the bivariate scatterplot [Aiazzi et al., 2013]. Simulation results show a consistent improvement in performance, notably in the geometric accuracy of the detected changes, but also in their local extent. The benefits of noise decorrelation are also noticeable in experiments carried out on real COSMO-SkyMed images.
Deep Learning for the Analysis of Multispectral Images
In the last decades, the amount of data obtained from electro-optical sensor systems has been steadily increasing in remote sensing (RS). Manual analysis of remote sensing images is a time-consuming task. Therefore, machine learning methods for detection and classification have become an appealing field in RS. In particular, the family of region-based convolutional neural networks (R-CNN) shows considerable success in different RS tasks. Advanced R-CNN methods are multistage approaches, in which objects are first detected and then classified, with an optional segmentation step. However, the detection performance of advanced R-CNN algorithms suffers in areas with noticeably varying object densities and scales. Advanced R-CNN architectures usually consist of a detector stage and multiple heads. In the detector stage, regions of interest (ROI) are proposed and filtered by a non-maximum suppression (NMS) layer. In an area with a high density of objects, a strictly adjusted NMS may lead to missed detections. In contrast, a low threshold value for NMS can cause multiple overlapping detections for large objects. To address this challenge, we present an approach that improves the results of object detection methods in scenes with varying object densities. To this end, we add an encoder-decoder-based density estimation network to our detector network to obtain the location of high-density areas. For these locations, an additional fine detection of objects is performed. To demonstrate the effectiveness of our approach, we evaluate our method on common crowd counting and object detection datasets.
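The NMS trade-off discussed above can be illustrated with a toy example (hypothetical boxes, using torchvision's nms): a strict IoU threshold removes the duplicate of the large object but also suppresses one of two genuinely distinct boxes in the dense area, while a loose threshold keeps the dense-area boxes at the price of duplicate detections of the large object.

import torch
from torchvision.ops import nms

boxes = torch.tensor([[0., 0., 100., 100.],      # large object, detection 1
                      [20., 20., 120., 120.],    # large object, duplicate detection
                      [200., 200., 220., 220.],  # small object A (dense area)
                      [208., 200., 228., 220.]]) # small object B (dense area)
scores = torch.tensor([0.90, 0.85, 0.80, 0.75])

keep_strict = nms(boxes, scores, iou_threshold=0.3)  # drops the duplicate AND object B
keep_loose = nms(boxes, scores, iou_threshold=0.7)   # keeps everything, incl. duplicate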
The number and location of oil wells reflect the status of oilfield development, which is important for policymakers given its impact on energy resources planning. More importantly, petroleum production poses a potential risk to the environment and public health due to its impact on local soil and water. With the advancement of satellite remote sensing and computer vision, there is emerging research interest in object detection using optical remote sensing images. The detection of oil wells from remote sensing images, however, remains a largely unexplored research area. Therefore, automatic detection of oil wells is explored in this paper, with the aim of helping policymakers with resources planning and environment monitoring. CNN (Convolutional Neural Network) based deep learning methods are able to learn distinctive high-level features efficiently, which addresses the challenges of object detection in remote sensing. In this paper, we explore frameworks to automatically detect oil wells from optical remote sensing images based on Faster R-CNN (Region-based Convolutional Neural Network). In order to evaluate our methods, we have built a dataset of oil wells named NEPU-OWOD V1.0 (Northeast Petroleum University - Oil Well Object Detection Version 1.0) based on high-resolution remote sensing images from Google Earth Imagery. The experimental results show a high precision of up to 92.4%, which demonstrates that our methods can detect oil wells from remote sensing images effectively.
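A common fine-tuning recipe for this kind of single-class detector, sketched here with torchvision (a generic template, not the authors' exact configuration; the data loader for NEPU-OWOD-style annotations is assumed to exist):

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# COCO-pretrained Faster R-CNN, re-headed for background + "oil well"
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 2
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# training step: the model takes a list of image tensors and a list of target
# dicts with "boxes" (N, 4) and "labels" (N,) and returns a dict of losses:
#   loss_dict = model(images, targets); loss = sum(loss_dict.values())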
The regular monitoring of agricultural areas is extremely important for mitigating food insecurity risks and for planning government interventions. In the literature, several deep learning algorithms have been recently proposed to perform land cover/land use classification by using multispectral optical images. However, most of the considered deep learning models, such as standard Convolutional Neural Networks (CNN), rely on mono-temporal images, focusing on spectral and textural features while discarding the temporal component, which is crucial for accurate crop type mapping. In this work, we exploit a Long Short Term Memory (LSTM) deep learning classification architecture to characterize agricultural area dynamics by using the multitemporal multispectral information provided by the Sentinel-2 multispectral sensor. Instead of fine-tuning a pre-trained network, the proposed architecture is trained from scratch in order to be tailored to the specific properties of long time series of Sentinel-2 multispectral images. To address the lack of a labeled training database, existing crop type maps available at the country level are used to generate a large set of weak reference data. First, the proposed method automatically extracts a large training dataset from existing crop type maps, by detecting those samples having the highest probability of being correctly classified. Then, the extracted weakly labeled samples are used to train the deep LSTM architecture on a time series of Sentinel-2 images acquired over an entire year. The preliminary results obtained demonstrate the effectiveness of the proposed approach, which is promising for large-scale application.
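A minimal from-scratch LSTM classifier of the kind described above could look as follows (PyTorch; layer sizes and the number of bands/dates are illustrative, not the paper's configuration):

import torch
import torch.nn as nn

class CropLSTM(nn.Module):
    # input: (batch, time_steps, n_bands), e.g. one year of Sentinel-2 acquisitions;
    # output: one crop-type label per pixel/parcel
    def __init__(self, n_bands=10, hidden=128, n_classes=12):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the last time step

# usage: logits = CropLSTM()(torch.randn(32, 30, 10))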
Generative Adversarial Networks (GAN) have been used for both image generation and image style translation. In this paper, we aim to apply GANs to multispectral satellite images. For image generation, we take advantage of the progressive GAN training methodology, which is purposely modified to generate multi-band 16-bit satellite images similar to a Sentinel-2 Level-1C product. The generated images closely imitate the spectral signatures of the terrain types they depict, as can be seen by comparing typical spectral profiles of synthetic and natural images. Furthermore, we consider the recent use of GAN architectures for transferring the style of images and apply them to perform land-cover transfer of satellite images. Specifically, we used an unpaired style transfer method to modify images that are dominated by vegetation land cover into images dominated by bare land cover, and vice versa. The land-cover transfer via GANs gives very promising results and the visual quality of the transferred images is satisfactory, showing that land-cover transfer is an easier task compared to GAN generation from scratch. In particular, results are good when the target domain is bare land, for which the visual quality of the transferred images is very good.
In the last decade, the number of forest fire events has been growing due to the rapid change of the Earth's climate. Hence, more automated firefighting actions have become necessary. Deep learning has produced interesting results for pixel-level smoke detection, but few systems have been proposed for fire flame detection. In this paper, a semantic segmentation approach using the Deeplabv3+ architecture for wildfire detection is proposed. The network uses the Deeplabv3 architecture as encoder, with Atrous Spatial Pyramid Pooling (ASPP), which encodes multi-scale information and boosts the network performance. The ASPP block concatenates a stack of parallel atrous convolutions with increasing rates, producing a multi-scale feature map that is further resized. The tests were performed on a public dataset, the Corsican fire dataset, which contains 1135 RGB images and 640 infrared pictures. The experiments were conducted on two customized datasets: one using the whole dataset with single-channel information (red and infrared), and another using only the RGB image set, which contains information coded in 3 channels. The dataset is unbalanced, which could induce high precision with very low sensitivity. Therefore, Dice similarity and Tversky loss functions, together with cross-entropy, are adopted to measure the performance. The capability of Deeplabv3+ was tested with two different backbones, ResNet-18 and ResNet-50, and compared to a very simple Convolutional Neural Network (CNN) architecture with dilated convolution. Four different metrics were used to evaluate the segmentation capability: Accuracy, mean Intersection over Union (IoU), Mean Boundary F1 (BF) Score, and Mean Dice coefficient. The experimental results demonstrate that Deeplabv3+ with a ResNet-50 backbone and a Dice or Tversky loss function can accurately detect fire flames; the results are very encouraging for further study using deep learning approaches.
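For reference, the two class-imbalance-aware losses mentioned above are reproduced below in a minimal PyTorch form (binary fire/background masks; the alpha/beta weights are illustrative):

import torch

def dice_loss(probs, target, eps=1e-6):
    # soft Dice loss; probs and target share the same shape, values in [0, 1]
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def tversky_loss(probs, target, alpha=0.3, beta=0.7, eps=1e-6):
    # alpha weights false positives, beta false negatives; beta > alpha penalizes
    # missed fire pixels more, the usual choice for strongly unbalanced classes
    tp = (probs * target).sum()
    fp = (probs * (1.0 - target)).sum()
    fn = ((1.0 - probs) * target).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)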
Deep Learning for the Analysis of Multi-/Hyperspectral Images
Standard deep-learning (DL) architectures do not optimize the use of the spatial and spectral information in multi-spectral images but often consider only one of the two components. Two-stream DL architectures split and process them separately; however, fusing the outputs of the two streams is a challenging task. 3D-CNNs process spatial and spectral information together at the cost of a large number of parameters. To overcome these limitations, we propose a novel DL data structure that re-organizes the spectral and spatial information in remote-sensing (RS) images and processes them together. Representing an RS image I as a data cube, we handle the spatial and spectral information by reducing the spectral bands from N to M, where M can drop to one. The spectral information is projected into the spatial dimensions and re-organized into B 2-dimensional blocks. The proposed approach analyzes the spectral information of each block by using 2-dimensional convolutional kernels of appropriate size and stride. The output represents the relationship between the spectral bands of the input image and preserves the spatial relationship between its neighboring pixels. The spatial relationships are analyzed by processing the output of the previous layer with standard 2D-CNNs. Experiments using Sentinel-2 and Landsat-8 images and the labels of the LUCAS database released in 2018 provide promising results.
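One possible reading of the re-organization step (our loose illustration only, not necessarily the authors' exact layout) is to write each pixel's N-band spectrum as a small 2D tile, so that 2D kernels with stride equal to the tile size see exactly one pixel's spectrum at a time:

import numpy as np

def spectrum_to_blocks(cube, block=4):
    # cube: (rows, cols, N) with N = block * block; returns a single-channel
    # (rows*block, cols*block) image in which each block x block tile holds one spectrum
    r, c, n = cube.shape
    assert n == block * block
    tiles = cube.reshape(r, c, block, block)
    return tiles.transpose(0, 2, 1, 3).reshape(r * block, c * block)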
Impressive progress in the technical characteristics of modern unmanned aerial vehicles (UAV) provides new opportunities for their exploitation in applications and missions that were impossible earlier. The growing applicability of UAVs is based on the high performance of modern computers and the latest advances in sensor data processing techniques.
In recent years, convolutional neural network (CNN) models have demonstrated state-of-the-art performance in many computer vision problems that previously seemed solvable only by a human. This study is aimed at developing deep learning techniques for UAV autonomous navigation in complex environments in obstacle-avoidance mode. Such navigation is required for cargo delivery or rescue missions in urban, industrial or forest environments where a global geo-positioning system can be unavailable.
To navigate in a complex environment, a UAV has to recognize the objects of the observed scene and to estimate the distance to possible obstacles. The proposed technique solves these tasks with a deep learning approach for image segmentation and depth map estimation from an image of the observed scene.
A convolutional neural network model is developed that is capable of predicting the depth map of the observed scene along with scene segmentation according to predefined object classes. The proposed neural network architecture is based on a generative adversarial model, with the generative part translating an input color image into an output voxel model. The aim of the discriminative part is to estimate how close the output is to real data and to penalize false outputs. Both the generative and discriminative parts are trained simultaneously on a specially prepared dataset.
Evaluation on the testing part of the prepared dataset has demonstrated the ability of the developed neural network model to segment unobserved complex scenes containing several objects and to estimate the depth map for these scenes. The proposed neural network architecture provides high generalization ability for new scenes.
The collaborative representation classifier (CRC) is an efficient classifier for hyperspectral imagery. It represents a testing sample using labeled ones, and the testing sample is assigned to the class whose labeled samples yield the minimum representation error. The CRC allows all samples to have an equal chance to participate in the representation by imposing an L2-norm minimization constraint. The solution has a closed form, offering computational convenience. Various techniques have been developed to further improve CRC-based classifiers, and the probabilistic collaborative representation-based classifier (ProCRC) is one such technique, enhancing CRC through the maximum likelihood concept that a testing sample belongs to multiple classes. By taking distance-weighted Tikhonov regularization into consideration, the probabilistic collaborative representation-based classifier with Tikhonov regularization (ProCRT) can enhance the performance of the original ProCRC. In this paper, a spatial regularization term is added to the objective function to incorporate spatial information, and the resulting spatial-aware ProCRC (SaProCRC) and spatial-aware ProCRT (SaProCRT) offer even better classification accuracy with comparable computational cost.
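The closed-form core of CRC (without the probabilistic or Tikhonov extensions discussed above) can be written in a few lines; X holds the labeled spectra as columns and lam is the regularization weight:

import numpy as np

def crc_classify(X, labels, y, lam=1e-2):
    # alpha = (X^T X + lam I)^(-1) X^T y, then assign the class with the
    # smallest class-wise reconstruction residual
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        res = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if res < best_res:
            best, best_res = c, res
    return best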
Deep Learning for the Analysis of Hyperspectral Images
A hyperspectral (HS) image is typically a stack of frames, where each frame represents the intensity of a different wavelength of light, so that each spatial pixel has a spectrum. In the classification of an HS image, each spectrum is classified pixel by pixel. In some real-time applications, the amount of HS image data causes performance challenges. These issues relate to the payload restrictions of the platforms (e.g., drones), to the available energy, and to the complexity of the machine learning models. In this study, we introduce the minimal learning machine (MLM) as a computationally cheap training and classification method for hyperspectral image classification. The MLM is a distance-based method that utilizes a mapping between input and output distances. Input distances are the distances between the training set and a subset R of it; output distances are the corresponding distances between the label values of the training set and those of the subset R. We propose a training point selection framework, which reduces the number of data points in R by selecting the points class by class, in the direction of the principal components of each class. We test the MLM's performance against four other classification machine learning methods (Random Forest, Artificial Neural Network, Support Vector Machine and Nearest Neighbours classifier) on three well-known hyperspectral data sets. As the main outcomes, we show how the performance is affected by the size of the subset R, and we compare the MLM with our subset selection method to the MLM with random selection. Results show that the MLM is a computationally efficient way to train on large training sets. The MLM reduces the complexity of the analysis and provides computational benefits over other models. The proposed framework offers tools that can improve the MLM's classification time and accuracy compared to the MLM with randomly picked training points.
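A hedged sketch of the MLM idea follows: input-space distances to the reference subset R are mapped linearly to output-space distances, and classification picks the class whose ideal output-distance pattern best matches the prediction (a simple stand-in for the full output-space localization step; the class-by-class, principal-component-based selection of R is assumed to happen elsewhere).

import numpy as np
from scipy.spatial.distance import cdist

def mlm_fit(X, y, R_idx):
    # X: (n, d) spectra, y: (n,) integer labels, R_idx: indices of the reference subset R
    R, yR = X[R_idx], y[R_idx]
    Dx = cdist(X, R)                                 # input-space distances
    onehot = np.eye(y.max() + 1)
    Dy = cdist(onehot[y], onehot[yR])                # output-space distances
    B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)      # linear distance mapping
    return R, yR, B

def mlm_predict(x, R, yR, B, n_classes):
    dy_hat = cdist(x[None, :], R) @ B                # predicted output distances
    ideal = cdist(np.eye(n_classes), np.eye(n_classes)[yR])
    return int(np.argmin(((dy_hat - ideal) ** 2).sum(axis=1)))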
Hyperspectral cluster analysis is a powerful instrument for land cover classification. It consists of grouping hyperspectral pixels based on a similarity measure that determines the affinity level between data points. Many of the existing clustering methods are not suitable for hyperspectral data, due mainly to the so-called curse of dimensionality. This motivates researchers to develop new clustering algorithms for dealing with high-dimensional data. Among these are the techniques based on Spectral Graph Theory (SGT). They regard objects as vertices and their pair-wise similarities as weighted edges, transforming the clustering problem into a graph partition task. Their properties make them well suited for datasets with arbitrary shape and high dimensionality. The current approach addresses the unsupervised classification of hyperspectral imagery employing Similarity Graphs (SG). To achieve this goal, a superpixel-based segmentation using the Simple Linear Iterative Clustering (SLIC) algorithm is executed. It takes the input data and groups pixels considering their image proximity and spectral similarity. Subsequently, the superpixels are converted into a Similarity Graph G = (V, E) with vertex set V = {V1, V2, ..., Vn}, where n represents the number of vertices. For this conversion, the Adjacency Matrix (AM) is constructed from the similarities between vertices. Then, the Laplacian Matrix (LM) is determined to embed the data points into a low-dimensional space. This embedding occurs after finding the eigenvalues and eigenvectors of the LM. At this point, the clustering algorithm groups relevant LM eigenvectors to generate the land cover map. Finally, a comparison between the classified maps and the results of directly applying the Hierarchical Agglomerative Clustering (HAC) algorithm on the corresponding superpixels is carried out. This analysis considers the correspondence of the results with reality and the magnitude of Cohen's Kappa coefficient. The proposed method uses two benchmark datasets to create land cover classification maps. The results show that the method is capable of accurately partitioning data points with a moderate overlapping level, where established algorithms such as HAC still experience difficulties.
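The graph-based steps applied to the superpixel mean spectra can be sketched as follows (SLIC superpixel extraction is assumed upstream; the Gaussian similarity and the sigma value are illustrative choices):

import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(features, n_clusters, sigma=1.0):
    # features: (n_superpixels, n_bands) mean spectra
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))               # adjacency matrix
    np.fill_diagonal(W, 0.0)
    deg = W.sum(axis=1)
    L = np.diag(deg) - W                               # Laplacian matrix
    Dm12 = np.diag(1.0 / np.sqrt(deg + 1e-12))
    Lsym = Dm12 @ L @ Dm12                             # normalized Laplacian
    _, vecs = eigh(Lsym)
    U = vecs[:, :n_clusters]                           # smallest eigenvectors = embedding
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)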
Classification of hyperspectral images is an important step of hyperspectral image interpretation. Different studies demonstrate that spatial features can provide complementary information for increasing the accuracy of hyperspectral image classification. In this study, we propose a method of spectral-spatial classification of hyperspectral images that is based on the use of specific multifractal features as the spatial features. The proposed method consists of the following steps. First, informative multifractal features are extracted from the first few principal components of the spectral features. For the construction of the multifractal features, in windows centered on each element of the principal component images, various 1D and 2D multifractal characteristics can be calculated using a generalized local-global multifractal image analysis, including our earlier introduced 2D multifractal characteristics of global scaling exponents. After that, the obtained multifractal features are stacked with the spectral features into high-dimensional feature vectors. Finally, the resulting high-dimensional vectors of spectral and multifractal features are classified by a support vector machine classifier. The multifractal characteristics used to construct the multifractal features have several advantages: they provide good textural separability of image objects, are invariant to image scaling and rotation, and are insensitive to image noise. The experiments performed on several widely known test hyperspectral images have demonstrated that the proposed method exhibits better performance than competitive methods of spectral-spatial classification of hyperspectral images, in terms of the overall accuracy and kappa statistic. In addition, it is shown that the introduced classification method can outperform some deep learning methods of hyperspectral image classification, which have attracted great interest in recent years. In particular, it was established that the proposed method can achieve good classification results compared to deep learning methods when small training samples are used. In the future, we will focus on developing methods for object-oriented classification of hyperspectral images based on the use of multifractal features. The study has been supported by the Ministry of Education and Science of the Russian Federation (Project No. МК-3477.2019.5) and by the Russian Foundation for Basic Research (Project No. 19-05-00330 А).
Deep autoencoders have recently been applied to the blind hyperspectral unmixing task to estimate endmembers and their corresponding abundances simultaneously. The objective of an autoencoder is to reconstruct an input data matrix in an unsupervised manner with an encoder network and a decoder network. For the purpose of spectral unmixing, the activations of the final layer of the encoder and the weights of the decoder form the abundances and endmember signatures, respectively; constraints, e.g., abundance non-negativity and abundance sum-to-one, can be imposed. In this paper, we present a novel regularization technique for autoencoder-based hyperspectral unmixing. The basic idea is the inclusion of a generative adversarial network (GAN) joint training objective to condition the decoder to generalize to unseen abundance mixtures. In addition to regularizing the endmember weights of the decoder, this approach has the benefit of explicitly modeling the prior distribution of hyperspectral pixels for a given scene as the abundance output of the generator. The benefit of the proposed strategy is evaluated on synthetic and real data sets, demonstrating that it can produce endmember estimates closer to the ground truth.
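For context, the basic unmixing autoencoder that the GAN objective regularizes can be sketched as below (PyTorch; layer sizes are illustrative, and the adversarial branch is not shown). The softmax enforces the abundance non-negativity and sum-to-one constraints, and the single linear decoder layer holds the endmember signatures as its weights.

import torch
import torch.nn as nn
import torch.nn.functional as F

class UnmixingAE(nn.Module):
    def __init__(self, n_bands, n_endmembers):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, n_endmembers))
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x):
        a = F.softmax(self.encoder(x), dim=-1)   # abundances (ANC + ASC via softmax)
        return self.decoder(a), a                # reconstruction, abundances

    def endmembers(self):
        # decoder weights as endmember signatures, clamped to stay non-negative
        return self.decoder.weight.data.clamp(min=0.0)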
Spectral unmixing often relies on a mixing model that is only an approximation. Artificial neural networks have the advantage of not requiring model knowledge. Additional advantages in the domain of spectral unmixing are the easy handling of spectral variability and the possibility to enforce the sum-to-one and non-negativity constraints. However, they need a large amount of representative training data to achieve good results. To overcome this problem, mainly in classification, augmentation strategies are widely used to increase the size of training datasets synthetically. Spectral unmixing can be considered a regression problem, where data augmentation is also feasible. One intuitive strategy is to generate spectra based on abundances that do not occur in the training dataset, while taking spectral variability into account. For the implementation of this approach, we use a convolutional neural network (CNN) whose input variables are extended by random values; this allows spectral variability to be taken into account. The random inputs are re-sampled for each data point in every epoch. During training, the CNN learns the mixing model and the characteristic spectral variability of the training dataset. Additional spectra can be generated afterwards for any given abundances to extend the original training dataset. Because the generative CNN minimizes the error between generated spectra and the corresponding ground truth over the whole dataset during training, the variance of the spectra based on the same abundances is lower than in the training data. We have investigated two approaches for improvement. One is to increase the variance of the random input variables when generating new spectra. In the second, the estimated covariance matrices are considered in the objective function. The presented method is evaluated with real data, which were captured in our image processing laboratory. We found that augmenting the training dataset with the presented strategy leads to an improvement in spectral unmixing of the test dataset.
Deep Learning for the Analysis of SAR, LiDAR, and Multisensor Data
In this research, we propose the use of an end-to-end deep learning simulation approach for assisting the design of LiDAR sensors. The results show that a rate of two million points per second is optimal for the point-cloud-based intersection classification task. A detection range of up to 100 meters corresponds to optimal classification performance. An upper field of view of 10 degrees and a lower field of view of 10 degrees are sufficient for intersection classification. A linear increase of classification accuracy from 10 to 70 channels is evident. The research bridges the gap between low-level LiDAR simulation and development and self-driving visual tasks, and is expected to find applications in improving self-driving performance and safety.
In recent years, the development of sensor technology in the area of hyperspectral imaging has been continuously moving forward. The result is a significant increase in availability, applications and, consequently, data volume. Compression is required to facilitate the transmission and storage of hyperspectral data sets. The high spectral correlation between adjacent bands allows for decorrelation approaches to compress the data with minimal loss of important information. Since it is not known a priori which feature is essential, the compression of hyperspectral data is a challenging task. In this paper, we introduce an approach to compress hyperspectral data using a Deep Autoencoder. An Autoencoder is an artificial neural network that first learns the important features from the data, and subsequently reconstructs the data from the reduced encoded representation. The evaluation is done by comparing the classification performance between the original and the reconstructed data. As a classifier we use the Adaptive Coherence Estimator to compare the spectral signatures. Performance is assessed by comparing the mean classification accuracy for a fixed false alarm rate. Additionally, the Signal to Noise Ratio and the spectral angle are used as metrics for evaluating the reconstruction performance. Airborne hyperspectral data were used in combination with simulated data, representing a linear mixture with different ratios of target and background spectra. Multiple target and background materials are tested to compare the performance. The selected data provide a representative set of target and background spectra to evaluate the compression method in relation to the detection limit. The compression rate is set to 4:1 and the reconstruction accuracy is investigated. Additionally, classification of noisy data is compared to the compression results to show the impact of information loss. If both results are similar, it can be deduced that the compression process is near-lossless.
In this paper, we investigate low-complexity encryption solutions to be embedded in the recently proposed CCSDS standard for lossless and near-lossless multispectral and hyperspectral image compression. The proposed approach is based on the randomization of selected components in the image compression pipeline, namely the sign of the prediction residuals and the fixed part of the Rice-Golomb codes, inspired by similar solutions adopted in video coding. Thanks to the adaptive nature of the CCSDS algorithm, even a simple randomization of the sign of the prediction residuals can provide a sufficient scrambling of the decoded image when the encryption key is not available. Results on the standard CCSDS test set show that the proposed technique uses on average only about 20% of the keystream compared to a conventional stream cipher, with a negligible increase in the rate of the encoder.
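The sign-randomization idea can be illustrated as follows (our own toy rendering, not the standard's syntax): each residual's sign is flipped according to one keystream bit, leaving magnitudes, and therefore the adaptive Rice-Golomb statistics, untouched. A seeded PRNG stands in for a proper stream cipher here.

import numpy as np

def encrypt_residual_signs(residuals, key, nonce=b"frame-0"):
    # key, nonce: bytes; residuals: integer array of prediction residuals
    seed = int.from_bytes(key + nonce, "big") % (2 ** 32)
    bits = np.random.default_rng(seed).integers(0, 2, size=residuals.shape)
    return residuals * (1 - 2 * bits)      # keystream bit 1 flips the sign

# decryption re-applies the same function with the same key/nonce,
# since flipping a sign twice restores it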
A distinguishing feature of the present time is the explosive growth in data volume, including digital images such as satellite remote sensing images. Processing, storing and transmitting a huge number of digital images over networks requires considerable resources in terms of memory, time and computational power. The importance of image compression is therefore growing, and novel compression techniques continue to be developed to satisfy new requirements, for instance with respect to security and privacy protection. Atomic wavelets and their generalizations can be a useful tool for this. They are constructed using atomic functions, which are compactly supported solutions of special functional differential equations. The discrete atomic transform (DAT), which computes the expansion coefficients of the function describing the source discrete data, is applied in discrete atomic compression (DAC). DAC can be applied to the compression of full-color digital images as well as single-component ones. In this paper, we investigate the efficiency of satellite image compression using DAC, measured by the compression ratio (CR), and the corresponding loss of quality measured by several quantitative criteria: maximum absolute deviation (MAD), root mean square (RMS) error and peak signal-to-noise ratio (PSNR). We show that DAC provides near-lossless compression, in the sense that the quality loss is minor according to the MAD metric. It is also shown that DAC achieves higher compression than JPEG at the same quality of the results.
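For reference, the quality criteria listed above can be computed as in the following small numpy sketch; the peak value of 255 assumes 8-bit imagery, and the paper's exact conventions may differ.

```python
import numpy as np

def mad(original: np.ndarray, restored: np.ndarray) -> float:
    """Maximum absolute deviation between original and decompressed images."""
    return float(np.max(np.abs(original.astype(float) - restored.astype(float))))

def rms(original: np.ndarray, restored: np.ndarray) -> float:
    """Root mean square error."""
    diff = original.astype(float) - restored.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(original: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (peak=255 assumes 8-bit data)."""
    return float(20 * np.log10(peak / rms(original, restored)))

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
    rec = np.clip(img.astype(int) + np.random.randint(-2, 3, img.shape), 0, 255)
    print(mad(img, rec), rms(img, rec), psnr(img, rec))
```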
The infrared observation sensors aboard remote sensing satellites play an important role in applications such as crop yield estimation, environmental protection, land resources survey and disaster monitoring. However, infrared remote sensing data usually have low spatial resolution due to hardware limitations. Improving the spatial resolution of infrared remote sensing data with super-resolution (SR) algorithms is therefore a cost-effective option. Deep learning methods have achieved great breakthroughs in the super-resolution of natural images. In this paper, we comparatively study five recently popular supervised-deep-learning-based single-image SR models for super-resolving infrared images: SRGAN, ESRGAN, LapSRN, RCAN and SRFBN. We first test the performance of models trained on natural images when applied to infrared remote sensing images to obtain a benchmark, and then fine-tune the SR models on infrared images from Landsat 8 in a transfer-learning manner. We evaluate all fine-tuned models on infrared images with three indicators: PSNR, SSIM and NIQE. The experimental results show that the SRFBN model achieves the best generalization ability and SR performance. We therefore suggest using SRFBN for super-resolution reconstruction of single infrared remote sensing images in applications.
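The transfer-learning step could look roughly like the PyTorch sketch below, in which a model pretrained on natural images is fine-tuned on infrared patches with a small learning rate; the tiny SRCNN-style network, the checkpoint path and all hyperparameters are placeholders, not any of the five models evaluated in the paper.

```python
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """Placeholder SR network (SRCNN-like); the paper's models are far larger."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 1, 5, padding=2))

    def forward(self, x):
        return self.body(x)

def fine_tune(model, lr_patches, hr_patches, epochs=10, lr=1e-5):
    """Fine-tune a pretrained model on (LR, HR) infrared patch pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # small LR for transfer learning
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(lr_patches), hr_patches)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    model = TinySR()
    # model.load_state_dict(torch.load("pretrained_natural.pth"))  # hypothetical checkpoint
    lr_batch = torch.rand(8, 1, 48, 48)  # bicubically upsampled LR infrared patches
    hr_batch = torch.rand(8, 1, 48, 48)  # corresponding HR patches
    fine_tune(model, lr_batch, hr_batch)
```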
Global mapping of forest aboveground biomass (AGB) is a challenging task and crucial for forest management and planning. Lidar data provide 3D information on forest stands and have been consistently identified as the state-of-the-art technology for monitoring forests. However, lidar data are expensive and their acquisition is spatially restricted to some forest areas. A reasonable solution for overcoming these restrictions is the use of spaceborne data. In recent years, the number of satellites carrying multispectral sensors has increased steadily, including open data sources like Sentinel-2 and small-satellite sources like the RapidEye and Dove constellations. In this work, we compared and evaluated different multispectral satellite data, namely Sentinel-2, RapidEye and Dove, on the basis of the available spectral, spatial and temporal information for modelling AGB. We also used airborne lidar data as the state of the art against which to compare the results from the multispectral data models. The experiments were performed under a common framework of variable elimination based on autocorrelation analysis and variable selection using the stepAIC (Akaike Information Criterion) algorithm. A multiple linear regression with leave-one-out cross validation (LOOCV) was used to perform p-value quartiling for the spectral information analysis, to generate LOOCV metrics for the temporal information analysis, and for modelling at spectral parity for the spatial information analysis. The results demonstrate a clear and extensive influence of spectral information from specific channels such as red-edge and SWIR for modelling AGB. The addition of temporal information also increases the precision and agreement of the multispectral AGB models. In contrast, the spatial information may not be relevant unless the datasets are at spectral parity.
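A simplified version of the variable-selection and validation framework might look like the sketch below: predictors that are highly intercorrelated are dropped, and a multiple linear regression is scored with leave-one-out cross validation using scikit-learn. The correlation threshold and the greedy elimination rule are illustrative assumptions, not the paper's stepAIC procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def drop_correlated(X: np.ndarray, names, threshold=0.9):
    """Greedily drop predictors whose pairwise |correlation| exceeds the threshold."""
    keep = []
    for j in range(X.shape[1]):
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < threshold for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

def loocv_rmse(X: np.ndarray, y: np.ndarray) -> float:
    """RMSE of a multiple linear regression under leave-one-out cross validation."""
    pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    return float(np.sqrt(np.mean((pred - y) ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 6))             # stand-in spectral/temporal predictors
    y = X[:, 0] * 2 + rng.normal(size=40)    # stand-in field-measured AGB
    Xs, kept = drop_correlated(X, [f"band{i}" for i in range(6)])
    print(kept, loocv_rmse(Xs, y))
```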
Microwave Remote Sensing: Data Processing and Applications
Spire Global, Inc. operates a large and rapidly growing constellation of CubeSats performing GNSS-based science and Earth observation. In a few short years, Spire has grown from a modest CubeSat kickstarter campaign to a paradigm-shifting provider of satellite data to NOAA, NASA, and other customers of Earth observations. Spire specializes in using science-quality observations of GNSS signals (e.g., GPS, GLONASS, Galileo, QZSS, etc.) to derive valuable information about the Earth environment. Currently, these observations include radio occultations to profile the neutral atmosphere with high accuracy and vertical resolution for applications such as NWP assimilation and climate monitoring, as well as to measure ionospheric slant total electron content and scintillation indices for space weather applications. As of May 2020, and after 21 deployments, Spire has over 80 3U CubeSats capable of performing a variety of GNSS science, with plans to grow the constellation to well over 100 operational and continuously replenished satellites. Beginning in 2018, Spire began an effort to design and build the first of many GNSS bistatic radar (or GNSS-R) missions for Earth observations for a variety of applications, including soil moisture measurement, wetlands and flood inundation mapping, sea surface roughness and winds, and sea ice characterization. Following an agile model of rapid, iterative satellite development that has been refined over a few years to produce radio occultation payloads optimized for operation on ultra-small 3U CubeSats, we adopted a very aggressive schedule to adapt the current Spire 3U bus and STRATOS GNSS science receiver to perform GNSS-R measurements, with a launch of satellites in December 2019 and plans for two more GNSS-R satellites to be launched in 2020. We will discuss the goals and accomplishments of the Spire GNSS-R mission, the design and operational modes of the first batches of Spire GNSS-R satellites, and plans for a full, operational constellation of GNSS-R satellites. The Spire GNSS-R effort also has a parallel path that is already harnessing existing orbiting Spire satellites used for radio occultation to additionally perform grazing-angle GNSS-R measurements for high-precision, phase-delay altimetry. This presentation will additionally discuss the unique experience of adapting the current constellation of radio occultation satellites to perform these new and valuable grazing-angle GNSS-R Earth observations. We will introduce the concept of phase-delay altimetry and its potential to estimate surface heights on the order of 10 cm using observations of coherent GNSS signals reflected from various Earth surfaces. We will also show sea ice products derived from these new observations. Finally, we will discuss Spire's potential to rapidly move these measurements from research to operations and to make them available as a new set of Earth observations.
Passive radiometers are well-known instruments used in the characterization of soil and sea surfaces and in remote sensing of the Earth's atmosphere from satellites or airplanes. The instrument described in this article is a dual-polarised superheterodyne radiometer operating around 93 GHz. It is placed on a structure to measure road surface conditions (ice, water or oil) in a laboratory-controlled environment. The radiometer measures the radiation reflected and emitted from the road surface (asphalt and concrete) and the background temperature in two orthogonal polarizations (H and V). The difference in the dielectric properties of ice, oil and water compared with the dry road surface makes it possible to distinguish them efficiently. This kind of technique can be used for road surface recognition in all weather conditions and does not require daylight or other sources of illumination. In this paper, calibration procedures and radiometric characterisations of the radiometer are studied in order to select the simplest suitable method for operating the radiometer. It was found that calibrating the radiometer with only one blackbody target, or using a table of gain and system noise temperature, is sufficiently accurate over a long time to distinguish dry surfaces from ice- or water-covered ones. The laboratory results show a large difference in brightness temperature between a road surface covered with ice, water or oil and a dry road surface. No ambiguities between those conditions exist, but potential limitations could arise, for example if the road surface roughness changes during a measurement. These promising results confirm the potential of using a radiometer for road safety and the automotive industry. The presented laboratory measurements are the first step towards the implementation of the instrument in a moving vehicle for alerting drivers ahead of unforeseen dangers.
This research focuses on registering the movements of several slopes located south-west of the Stara Planina mountain and on establishing their average annual values. Currently, at the national level, there are few studies targeted at operational monitoring of the investigated slopes. These objects are quite specific research targets, since such natural phenomena are often inaccessible or too dangerous to investigate by other means. On the other hand, the moving slopes cause damage to infrastructure such as roads, bridges and power lines. Their behavior is difficult to forecast, and for this reason they can be considered natural hazards. Precise data on individual slope movements are obtained by in-situ investigations such as geodetic acquisitions, terrestrial laser scanning and geological observations, which all require financial resources and human effort. For this reason, we used remotely sensed data from satellite-based SAR instruments, processed with the DInSAR method, in order to analyze the motion of a single slope and to establish a technique for the investigation of mountain slopes. An advantage of the selected method is the possibility to register the vertical movements of the whole slope with centimeter accuracy. The approach is based on the free access to SAR data and to the tools for their thematic processing provided by ESA. In this study, emphasis is put on how the obstacles encountered during the interferometric processing (e.g. presence of vegetation or topography) were overcome. From the downloaded set of SAR images covering the region, two multitemporal InSAR data series were created, from the ascending and descending orbits of the satellite. The results from the autumn-winter pairs exhibited good correlation with the expected displacements along the studied slope, which have a magnitude of 0.8 m.
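The core DInSAR relation behind such displacement estimates converts the unwrapped interferometric phase difference into line-of-sight motion. A minimal sketch, assuming a Sentinel-1-like C-band wavelength of about 5.6 cm and an already-unwrapped phase, is given below.

```python
import numpy as np

def los_displacement(delta_phi: np.ndarray, wavelength_m: float = 0.056) -> np.ndarray:
    """Convert unwrapped differential phase (radians) to line-of-sight displacement (metres).

    d_LOS = -(lambda / (4 * pi)) * delta_phi, with the usual convention that
    positive displacement is motion towards the sensor.
    """
    return -(wavelength_m / (4.0 * np.pi)) * delta_phi

if __name__ == "__main__":
    # One full fringe (2*pi) corresponds to about half a wavelength (~2.8 cm) of LOS motion.
    print(los_displacement(np.array([0.0, np.pi, 2 * np.pi])))
```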
Digital twinning maps physical models, sensor updates, running history and other data into a networked three-dimensional world. Remote sensing images combined with high-precision elevation data can reproduce the three-dimensional spatiotemporal background. Using historical remote sensing images as a reference, targets are detected, located, classified and recognized in real-time video data, their spatiotemporal behavior is extracted, spatiotemporal events are predicted, and the twin mapping of the spatiotemporal scenario is completed using simulation technology in virtual space. The spatiotemporal alignment of video and remote sensing images is achieved through image texture matching, feature point alignment and geometric transformation. In the absence of control points and obvious markers, the relative motion of the target and the sensor is used to perform simultaneous detection, positioning, tracking and recognition of the target. The twin fusion of satellite video, UAV video, surveillance video and remote sensing images is demonstrated through experiments, which helps to better understand spatiotemporal analysis and to support spatiotemporal control decisions.
Advances in unmanned aerial vehicles (UAVs) and optical sensors of various types provide new opportunities for collecting and processing remote sensing data of a new quality. Using UAVs to acquire high-resolution imagery makes it possible to produce a digital elevation model (DEM) of high quality and resolution. The new quality of the available DEMs allows small details of the land surface to be analyzed and valuable information about hidden archaeological content to be retrieved. Our study addresses the creation and analysis of large-scale, high-resolution DEMs for detecting traces of hidden ancient artefacts at archaeological sites. The survey acquiring the imagery for this study was carried out on the Taman Peninsula (Russia) as part of a Russian State Historical Museum expedition aimed at studying the Bosporan Kingdom (VI-I century BC). We present techniques for UAV imagery processing which provide improved accuracy of photogrammetric 3D measurements compared with standard photogrammetric image processing in commercial software. These approaches have been developed for the interpretation of terrain models and for predicting the possible spatial distribution of archaeological artefacts. The proposed techniques allow creating large-scale digital terrain models of archaeological sites which can serve for more reliable archaeological prediction and accurate geo-positioning of possible findings. We show that the developed techniques provide accurate, high-quality DEMs and serve as a useful tool for the analysis of archaeological sites and for prediction.
3D reconstruction from satellite stereo images is important for numerous applications such as 3D city modelling and urban mapping. Semi-Global Matching (SGM) is the state-of-the-art algorithm for dense image matching of satellite stereo images. However, the results of dense image matching are affected by many factors, such as occlusions (features present in only one image), radiometric differences, shadows, textureless surfaces and perspective distortions, which lead to a significant amount of outliers and missing data in the resulting disparity maps. This problem can be reduced if an image segmentation is available. This work explores the fusion of image segmentation with 3D reconstruction, not only to improve the 3D reconstruction results but also to provide a semantic label for each pixel of the image. Image segmentation is itself a challenging problem. Here, we leverage recent advances in deep learning to train a Convolutional Neural Network (CNN) for segmentation of satellite images into building, road, water and vegetation classes. The CNN architecture used here is based on the popular U-Net architecture for semantic segmentation. The data from Track 2 of the 2019 IEEE GRSS Data Fusion Contest are used for training the CNN. The trained network is used for semantic segmentation of the satellite images. These segmentation masks are used to refine the SGM-based disparity maps by filtering the disparities and filling voids only within the connected components of each segment, which helps to compute smoother disparity maps with fewer outliers and less missing data. The evaluation of the refined disparities and the image segmentation was done according to the GRSS semantic labeling, on the basis of IoU. Our network achieved an mIoU of 0.7612. Compared with the raw SGM disparities, the refined disparity maps were better.
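A simplified version of segmentation-guided refinement could be written as below: within each connected component of each semantic class, disparities far from the component median are treated as outliers and voids are filled with that median. The outlier rule and the use of the median are assumptions for illustration, not the exact filtering used by the authors.

```python
import numpy as np
from scipy import ndimage

def refine_disparity(disp: np.ndarray, seg: np.ndarray, max_dev: float = 3.0) -> np.ndarray:
    """Filter outliers and fill voids per connected component of each segment.

    disp: disparity map with np.nan marking voids/invalid matches.
    seg:  integer semantic label map of the same shape.
    """
    out = disp.copy()
    for cls in np.unique(seg):
        comp, n = ndimage.label(seg == cls)            # connected components of this class
        for i in range(1, n + 1):
            mask = comp == i
            vals = out[mask]
            valid = vals[~np.isnan(vals)]
            if valid.size == 0:
                continue
            med = np.median(valid)
            vals[np.abs(vals - med) > max_dev] = np.nan  # reject outliers
            vals[np.isnan(vals)] = med                   # fill voids with the median
            out[mask] = vals
    return out

if __name__ == "__main__":
    disp = np.array([[10.0, 11.0, np.nan], [10.5, 50.0, 10.2], [1.0, 1.1, np.nan]])
    seg = np.array([[0, 0, 0], [0, 0, 0], [1, 1, 1]])
    print(refine_disparity(disp, seg))
```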
Nowadays, satellite images are used in various governmental applications, such as urbanization and environmental monitoring. Spatial resolution has a crucial impact on the usability of remote sensing imagery. As such, increasing the spatial resolution of an image is an important pre-processing step that can improve the performance of various image processing tasks, such as segmentation. Once a satellite is launched, the more practical solution to improve the resolution of its captured images is to use Single Image Super Resolution (SISR) techniques. In recent years, Deep Convolutional Neural Networks (DCNNs) have been recognized as a highly effective tool to reconstruct a High Resolution (HR) image from its Low Resolution (LR) counterpart, which is an open problem due to the inherent difficulty of estimating the missing high-frequency components. The aim of this research paper is to design and implement a satellite image SISR algorithm that estimates high-frequency details by training DCNNs with respect to wavelet analysis. The goal is to improve the spatial resolution of multispectral remote sensing images captured by the DubaiSat-2 satellite. The accuracy of the proposed algorithm is assessed using several metrics, such as the Peak Signal-to-Noise Ratio (PSNR), the Wavelet-based Signal-to-Noise Ratio (WSNR) and the Structural Similarity Index Measure (SSIM).
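One plausible way to compute a wavelet-based signal-to-noise ratio of the kind mentioned above is to decompose the reference and reconstructed images with a 2D discrete wavelet transform and average subband-wise SNRs; the wavelet, number of levels and averaging rule below are assumptions, since the paper's exact WSNR definition is not given here.

```python
import numpy as np
import pywt

def band_snr(ref: np.ndarray, rec: np.ndarray) -> float:
    """SNR (dB) of one subband: signal power over error power."""
    err = ref - rec
    return 10 * np.log10(np.sum(ref ** 2) / max(np.sum(err ** 2), 1e-12))

def wsnr(ref: np.ndarray, rec: np.ndarray, wavelet: str = "haar", level: int = 2) -> float:
    """Average SNR over all subbands of a 2D wavelet decomposition."""
    cr = pywt.wavedec2(ref.astype(float), wavelet, level=level)
    cc = pywt.wavedec2(rec.astype(float), wavelet, level=level)
    snrs = [band_snr(cr[0], cc[0])]                      # approximation subband
    for (rh, rv, rd), (ch, cv, cd) in zip(cr[1:], cc[1:]):
        snrs += [band_snr(rh, ch), band_snr(rv, cv), band_snr(rd, cd)]
    return float(np.mean(snrs))

if __name__ == "__main__":
    hr = np.random.rand(64, 64)
    sr = hr + 0.01 * np.random.randn(64, 64)
    print(wsnr(hr, sr))
```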
This study presents a remote sensing application that uses time-series satellite images for monitoring solid waste disposal facilities (WDF). Solid waste management and monitoring is a critical issue for the metropolitan authorities of developed and developing countries, due to the appearance of spontaneous, unauthorized garbage dumps that negatively affect the ecological and epidemiological state of the environment. It is advisable to address this problem remotely, using remote sensing technologies and high- and medium-resolution information from spacecraft. We propose a method in which filtrate analysis and space image (SI) interpretation are carried out with the use of a DOT apparatus and, in particular, with the use of Wiener filtering. Wiener filtering is used as part of the proposed algorithm for computer simulation of the fractal-percolation process of filtrate through the underlying surface of the WDF; the filtering threshold is determined, and the correctness of the problem in the sense of Tikhonov is studied. The experiment is carried out on the example of an SI containing a WDF. An extension of the feature space is also modeled using stochastic geometry. The results obtained can serve as a basis for developing a methodology for assessing the effectiveness of measures to neutralize the filtrate on the underlying surface of the WDF and its leakage into the soil, using Earth remote sensing technologies. This methodology can be the subject of further research on the development of a medical and preventive expert system at the territorial level for the detection and neutralization of unauthorized WDFs on medium- and high-resolution space images.
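Wiener filtering of a grayscale space image can be done, for instance, with scipy's adaptive implementation as in the short sketch below; the window size and the synthetic test image are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import wiener

def wiener_denoise(image: np.ndarray, window: int = 5) -> np.ndarray:
    """Apply an adaptive (local-statistics) Wiener filter to a 2D image."""
    return wiener(image.astype(float), mysize=window)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.outer(np.hanning(128), np.hanning(128))   # synthetic smooth scene
    noisy = clean + 0.05 * rng.normal(size=clean.shape)
    filtered = wiener_denoise(noisy)
    # Mean squared error before and after filtering.
    print(float(np.mean((noisy - clean) ** 2)), float(np.mean((filtered - clean) ** 2)))
```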
The article provides an analysis of new maximum-length code structures obtained from the rows of quasi-orthogonal matrices. Maximum-length codes (m-sequences, codes constructed on the basis of Legendre symbols or quadratic residues, codes generated on the basis of Jacobi symbols, and others) have the potential to be widely used both in remote sensing radar systems and in noise-immune, high-speed communication systems as a replacement for the widely used Barker codes. The search for and study of new noise-resistant codes constructed on the basis of persymmetric circulants are considered. A comparison of the performance obtained with the new code sequences is given. The advantages of the obtained codes are discussed with respect to improved correlation characteristics, detectability, and noise immunity in the radio channels of distributed systems. Since these codes have potential applications in radar, not only the correlation properties (autocorrelation and periodic correlation function) are considered, as for communication systems, but also the ambiguity function, which takes into account the correlation of signals not only in time but also in frequency. The results show the possibility of using these codes in distributed location systems, not only for remote sensing of the Earth, but also for transmitting the generated signals to points of joint information processing.
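For binary +-1 code sequences of this kind, the periodic autocorrelation function can be computed efficiently via the FFT, as in the sketch below; the length-7 m-sequence generator uses a simple 3-stage LFSR with a primitive feedback polynomial and is only illustrative, not one of the new codes studied in the paper.

```python
import numpy as np

def periodic_autocorrelation(code: np.ndarray) -> np.ndarray:
    """Periodic (cyclic) autocorrelation of a +-1 sequence via the FFT."""
    spectrum = np.fft.fft(code)
    return np.real(np.fft.ifft(spectrum * np.conj(spectrum)))

def m_sequence_7() -> np.ndarray:
    """Length-7 m-sequence from a 3-stage LFSR, mapped to +-1 values."""
    state, bits = [1, 0, 0], []
    for _ in range(7):
        bits.append(state[-1])
        feedback = state[0] ^ state[-1]
        state = [feedback] + state[:-1]
    return 1 - 2 * np.array(bits)          # 0 -> +1, 1 -> -1

if __name__ == "__main__":
    code = m_sequence_7()
    # Ideal two-level PACF for an m-sequence: N at zero shift, -1 elsewhere.
    print(periodic_autocorrelation(code).round(6))
```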
The investigation of surface film pollution is extremely important for ocean ecology and for developing methods of ocean remote sensing. The goal of this work is an experimental study of the parameters of surfactant films on the sea surface in real conditions and of their effect on the measured radar return contrast in film slicks. The properties of oleic acid films in real conditions were studied under moderate winds in the Gorky Reservoir. Previously, the dependence of the elasticity and the surface tension coefficient of the oleic acid film (the parameters that determine the wave damping) on the surface of the distilled liquid was studied in detail in our laboratory. To study the properties of the film on the water surface under real conditions, a surfactant was sprayed onto the water surface, after which surface samples were taken using a net method. The film concentration and elasticity were then retrieved in the IAP laboratory. It is shown that the mean surface concentration of the film is several times higher than the concentration of a monomolecular layer of oleic acid. In different areas of the film slick, the concentration can vary by a factor of 2-3. The elasticity of the film formed by oleic acid on the water surface in real conditions is approximately two times smaller than the elasticity of the oleic acid film previously measured in laboratory conditions. The retrieved elasticity was used to explain the suppression of the X-band radar signal at VV polarization and an incidence angle of 60 degrees. To calculate the damping, a model was used that takes into account nonlinear sources of wind wave generation. Using the new elasticity value improves the agreement between the measured and calculated data.
The possibility of applying spectroscopic measurements in the implementation of multi-alternative automatic control of physical and physicochemical processes that are accompanied by electromagnetic radiation in the optical range is reviewed in this paper. Combustion processes are considered first of all: their optical spectra contain versatile and extensive spectroscopic information on the course of the process, which can be used to form the output vector of a multi-alternative automatic control system. The essence of the proposed method is that this information, obtained by measuring the spectrum at predetermined frequencies (wavelengths), forms an output vector that replaces the output vector of the multi-alternative automatic control system, which in known systems is usually formed by a set of sensors with a matching set of measuring devices. Thus, the system of measuring devices matched to the set of various sensors is replaced by a single optical spectral device. It is proposed to use grating and prism diffraction spectral devices for this purpose. Since the state vector is described by a discrete mathematical expression, the spectroscopic measurements should be presented as samples of the measured spectrum. Such samples can be obtained if the spectroscopic information is read out by linear CCD arrays. The specific operation of linear CCD arrays requires further development of the theory of spectrum measurement by optical diffraction spectral devices. From the available relations describing the formation of complex spectra in the output plane of a diffraction spectral device, i.e., on the sensitive surface of the linear CCD array's pixels, it is necessary to derive relations describing the properties of the signals that are the elements of the output vector. In this case, the properties of the resulting spectrum estimate vary from pixel to pixel. This requires adaptation and further development of the theory of spectral measurement as applied to the problems of multi-alternative automatic control, where the output vector is formed from the spectroscopic measurements. As a result of the performed research, a complex instantaneous spectrum of the optical radiation is formed in the output plane of the optical diffraction spectral device; its quadratic detection with subsequent time integration gives an estimate of the energy spectrum of the analyzed optical radiation with a Bartlett spectral window. Each pixel of the linear CCD array, in addition to photodetection, performs integration over the frequency (wavelength) range corresponding to its position and size along the frequency axis. An element of the output vector is therefore the estimate of the energy spectrum of the analyzed optical radiation in the vicinity of a pixel, averaged in frequency over that pixel's surface. The issues of reading out the spectroscopic information are significant within this problem and have led to the development of the architecture of the corresponding devices, since the result of measuring the optical spectrum is the output vector.
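The quadratic detection and time integration described above amounts to an energy-spectrum estimate with a Bartlett (triangular) spectral window; a numerical analogue, assuming a sampled time signal rather than an optical field, is sketched below with scipy.

```python
import numpy as np
from scipy.signal import periodogram

def bartlett_energy_spectrum(signal: np.ndarray, fs: float = 1.0):
    """Power spectrum estimate of a sampled signal using a Bartlett window."""
    freqs, pxx = periodogram(signal, fs=fs, window="bartlett")
    return freqs, pxx

if __name__ == "__main__":
    fs = 1000.0
    t = np.arange(0, 1.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
    freqs, pxx = bartlett_energy_spectrum(x, fs)
    print(freqs[np.argmax(pxx)])   # should be close to the 50 Hz line
```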
Sentinel-2 and Landsat satellites provide a huge amount of optical images with high spatial and temporal resolution. These dense Time Series (TS) of multispectral data are used for a wide range of applications enabling multi-temporal monitoring of physical phenomena. Nevertheless, one of the main challenges in their usage is related to missing information caused by cloud occlusions. In the literature, many cloud restoration approaches have been proposed. However, to properly recover missing information, sophisticated and usually computationally intensive techniques should be used. In this work, we consider the deep Long Short Term Memory (LSTM) classifier, which is very promising for classification of dense time series of images, and investigate its robustness to the presence of clouds without any cloud restoration. Indeed, this classifier has proven able to handle the presence of clouds. However, no work that extensively analyzes the robustness of LSTM to clouds can be found in the literature. In this study, we aim to quantitatively assess the capability of the network to handle different amounts of cloud coverage under different lengths of the TS. In greater detail, we analyze the effect of the cloud coverage on the classification maps produced by the LSTM by considering: (i) simulated cloud values, (ii) detected clouds represented by zero values, and (iii) images restored by simple linear temporal gap filling (i.e., the average of the spectral values acquired in the previous and following cloud-free images in the TS). The obtained results demonstrate that the capability of the LSTM to handle the cloud cover depends on: (i) the length of the TS, (ii) the position of the cloudy images in the TS, and (iii) the cloud representation values. For example, when clouds are restored with very simple and fast linear temporal gap filling, the map agreement between the cloud-free map and the cloudy map is 96%, even when 40% of the images in the TS are covered with clouds, regardless of their position.
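The simple linear temporal gap filling used as option (iii) can be sketched for a single pixel's time series as follows, assuming a boolean cloud mask and replacing each cloudy sample with the mean of the nearest previous and following cloud-free observations (edge cases fall back to the single available neighbour).

```python
import numpy as np

def fill_cloudy(values: np.ndarray, cloudy: np.ndarray) -> np.ndarray:
    """Replace cloudy samples with the average of the nearest clear neighbours in time.

    values: spectral values of one pixel over the time series.
    cloudy: boolean mask, True where the acquisition is cloud covered.
    """
    out = values.astype(float).copy()
    clear_idx = np.where(~cloudy)[0]
    for t in np.where(cloudy)[0]:
        prev = clear_idx[clear_idx < t]
        nxt = clear_idx[clear_idx > t]
        if prev.size and nxt.size:
            out[t] = 0.5 * (values[prev[-1]] + values[nxt[0]])
        elif prev.size:
            out[t] = values[prev[-1]]
        elif nxt.size:
            out[t] = values[nxt[0]]
    return out

if __name__ == "__main__":
    ndvi = np.array([0.2, 0.3, np.nan, np.nan, 0.6, 0.7])
    mask = np.isnan(ndvi)
    print(fill_cloudy(ndvi, mask))
```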
In this paper, a new feature (the AT feature) that measures the likelihood of an artificial target in a SAR image is proposed, based on the difference between the real and imaginary parts of the complex data of single-channel SAR imagery. Guided by this feature, a superpixel-level CFAR detection method (AS-CFAR) is proposed for vehicle targets. The method consists of three steps. First, a superpixel segmentation algorithm is used to pre-segment the SAR images. Then the AT feature is calculated from the SAR complex data to identify potential target superpixels and background superpixels. Finally, CFAR detection is carried out only for the potential target superpixels, and the background area is also selected using the AT feature, which maximally ensures the uniformity of the background area. Experiments on miniSAR data verified that the proposed method detects more target pixels in less time than other CFAR detectors.
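For context, a generic cell-averaging CFAR (not the paper's AS-CFAR, whose AT feature and superpixel guidance are specific to the complex SAR data) can be implemented on an amplitude image as follows; the guard and training window sizes and the threshold factor are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(amplitude: np.ndarray, train: int = 8, guard: int = 2, factor: float = 3.0):
    """Cell-averaging CFAR on a 2D amplitude image.

    The local clutter level is the mean over a (2*train+1)^2 window with the inner
    (2*guard+1)^2 guard window excluded; pixels above factor * clutter are detections.
    """
    img = amplitude.astype(float)
    big = 2 * train + 1
    small = 2 * guard + 1
    sum_big = uniform_filter(img, size=big) * big ** 2
    sum_small = uniform_filter(img, size=small) * small ** 2
    clutter = (sum_big - sum_small) / (big ** 2 - small ** 2)
    return img > factor * clutter

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scene = rng.rayleigh(1.0, size=(128, 128))   # speckle-like background
    scene[60:63, 60:63] += 20.0                  # bright point target
    print(int(ca_cfar(scene).sum()))             # number of detected pixels
```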
Satellite-borne remote sensing images are considered one of the most important data sources for lithological mapping due to their extensive geographical coverage at an efficient cost. The integration of optical and microwave satellite datasets increases lithological mapping accuracy. In this study, Geographic Object-Based Image Analysis (GEOBIA) was applied using the freely available European Space Agency (ESA) Sentinel-1 SAR data and Sentinel-2 optical imagery, fused with a digital elevation model (DEM) of 13 m spatial resolution generated by interferometry from two single look complex (SLC) Sentinel-1 (C-band) acquisitions, in conjunction with slope and two geomorphic indices, the Terrain Ruggedness Index (TRI) and the Terrain Position Index (TPI), to map lithology in the southern part of the Palaeozoic massif of Skhour Rehamna in Morocco. The fusion of the Sentinel-1 and Sentinel-2 datasets showed the highest accuracies, with an overall accuracy (OA) of 92.80% and a kappa coefficient of 0.89, compared with the layer stack of Sentinel-2 image bands with the first three minimum noise fraction (MNF) bands and the principal component bands (PC1, PC2 and PC6), which showed an OA of 91.50% and a kappa coefficient of 0.87. Given the results achieved in this study, the technique is useful for discriminating the general rock types that outcrop in semiarid regions.
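The reported accuracy measures can be reproduced from reference and classified labels as in this short scikit-learn sketch; the toy labels are of course placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

if __name__ == "__main__":
    # Toy reference vs. classified lithological labels (placeholders).
    reference = np.array([0, 0, 1, 1, 2, 2, 2, 3, 3, 3])
    predicted = np.array([0, 1, 1, 1, 2, 2, 3, 3, 3, 3])
    oa = accuracy_score(reference, predicted)        # overall accuracy
    kappa = cohen_kappa_score(reference, predicted)  # kappa coefficient
    print(f"OA = {oa:.2%}, kappa = {kappa:.2f}")
    print(confusion_matrix(reference, predicted))
```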
This work is devoted to studying the effect of soil pollution by heavy metals on the soil's electromagnetic response. Heavy metal ions are contained in the soil in the form of salts such as CuSO4, NiSO4, MnSO4, ZnS and Pb(NO3)2, some of which were used in the experiment. Samples of light loamy soil were collected in the Krasnoyarsk region, Eastern Siberia, Russia. The complex dielectric permittivity was measured in a coaxial section with an Agilent Technologies E8363B vector network analyzer in the frequency range 1-18 GHz. The soil samples had moisture contents W from 1% to 25%. The dependences of the dielectric constant on frequency are more pronounced for samples with Ni than for samples with Cu. Adding an impurity to the sample increases the amount of bound moisture: for a metal content in the soil of 165 mg/kg, Wb(Cu) = 0.1822 and Wb(Ni) = 0.1657, while for the sample without impurity Wb = 0.1138. The addition of an impurity changes the dependence of the dielectric constant on moisture, so the metal content can be estimated from the change in the dielectric constant with changing moisture. A method for detecting the content of heavy metals in soil using the dε'/dW(C) dependence is proposed. The dependences of the derivative of the dielectric constant with respect to moisture on the metal content in the soil were tested.
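The dε'/dW(C) dependence can be estimated numerically from measured permittivity-versus-moisture curves, for example with finite differences as in the sketch below; the sample values are illustrative, not the measured data.

```python
import numpy as np

def depsilon_dw(moisture: np.ndarray, eps_real: np.ndarray) -> np.ndarray:
    """Finite-difference estimate of the derivative of the real permittivity
    with respect to volumetric moisture W."""
    return np.gradient(eps_real, moisture)

if __name__ == "__main__":
    W = np.array([0.01, 0.05, 0.10, 0.15, 0.20, 0.25])      # moisture content
    eps_clean = np.array([3.0, 4.1, 6.0, 8.3, 10.9, 13.8])  # illustrative, no impurity
    eps_cu = np.array([3.0, 3.9, 5.5, 7.6, 10.0, 12.7])     # illustrative, with Cu salt
    # A systematic difference in slope would indicate the impurity content.
    print(depsilon_dw(W, eps_clean))
    print(depsilon_dw(W, eps_cu))
```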
There are many advantages to using unmanned aerial vehicles (UAVs) in remote sensing, but only when radiometrically corrected multispectral images are used. This study focuses on two techniques for obtaining a multispectral orthomosaic with suitable radiometric quality during a period of the day with minor variations in illumination and cloud cover. The first technique comprises a radiometric block adjustment combined with the empirical line method, whilst the second uses only the empirical line method. Field measurements with spectrometers were used to assess the techniques. The obtained results show that the radiometric block adjustment performed better when compared against the radiometric reference targets and their Hemispherical Conical Reflectance Factor (HCRF) calculated from the spectrometer. However, the root mean square error (RMSE), normalized root mean square error (NRMSE) and mean absolute percentage error (MAPE) were similar in both cases, showing that the two proposed workflows can generate multispectral mosaics with acceptable radiometric quality for a period in which illumination conditions are stable. Difference images for each band were produced, showing a stronger variation of pixels in the higher-slope region, which indicates that additional corrections beyond the empirical line method are needed in these situations.
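The empirical line step mentioned above fits, per band, a linear relation between image digital numbers over reference targets and their known reflectance; a minimal numpy sketch is given below, where the panel reflectances and DNs are made-up example values.

```python
import numpy as np

def empirical_line(dn_targets: np.ndarray, reflectance_targets: np.ndarray):
    """Fit reflectance = gain * DN + offset from reference targets (one band)."""
    gain, offset = np.polyfit(dn_targets, reflectance_targets, deg=1)
    return gain, offset

def apply_empirical_line(dn_image: np.ndarray, gain: float, offset: float) -> np.ndarray:
    """Convert a DN image of one band to surface reflectance."""
    return gain * dn_image.astype(float) + offset

if __name__ == "__main__":
    # Made-up dark, grey and bright calibration panels for one band.
    dn = np.array([2100.0, 14500.0, 30200.0])
    rho = np.array([0.03, 0.25, 0.55])
    g, o = empirical_line(dn, rho)
    band = np.random.randint(2000, 31000, size=(4, 4)).astype(float)
    print(apply_empirical_line(band, g, o))
```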