Change Detection (CD) approaches for hyperspectral images (HSIs) are mainly unsupervised and hierarchically extract the endmembers to determine the multiple change classes, but they require many parameters to be set manually. Recently, HSI CD has been approached with Deep Learning (DL) methods because of their capacity to learn change features automatically, but these require a huge amount of labeled data for weakly or fully supervised training, mostly perform binary CD only, and do not fully exploit the spectral information. Accordingly, we propose an unsupervised DL CD method that identifies multiple change classes in bi-temporal HSIs, inspired by a sparse autoencoder for spectral unmixing. The proposed method learns the endmembers of the unchanged class and of the various change classes by solving an unmixing problem with a Convolutional Autoencoder (CAE) trained in an unsupervised way on unlabeled patches sampled from the difference of the bi-temporal HSIs. The spectral unmixing problem is solved by applying three constraints to the CAE: a sparsity l21-norm constraint that forces the model to learn non-redundant information, a non-negativity constraint, and a sum-to-one constraint. After training, we process the difference image with the trained autoencoder to extract the abundance maps of the various change types, which are derived from the endmembers learned by the model during training. A Change Vector Analysis approach detects the changed areas, which are then clustered with an X-means approach using the change abundances to obtain a multi-class change map. We obtained promising results by testing the proposed method on bi-temporal Hyperion images acquired over Benton County, Washington, USA, in May 2004 and May 2007, and on bi-temporal PRISMA images acquired over an area close to Vienna in April 2020 and September 2021 that show changes in crop fields.
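The three constraints on the abundance representation can be sketched as follows. This is a minimal NumPy illustration, not the paper's CAE implementation: the function names, the clip-and-renormalize projection used here for the non-negativity and sum-to-one constraints, and the axis conventions are all assumptions.

```python
import numpy as np

def l21_norm(A):
    # Group-sparsity penalty on an (endmembers x pixels) abundance
    # matrix: sum over endmembers of the l2 norm of each row, which
    # pushes the model toward non-redundant endmembers.
    return np.sum(np.sqrt(np.sum(A ** 2, axis=1)))

def project_abundances(A):
    # Enforce non-negativity and sum-to-one per pixel (column) by
    # clipping to zero and renormalizing -- a simple stand-in for the
    # constraints applied to the CAE's abundance layer.
    A = np.clip(A, 0.0, None)
    s = A.sum(axis=0, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero for all-zero pixels
    return A / s
```

In an actual training loop the l21 term would be added to the reconstruction loss, while the projection (or an equivalent softmax-style layer) would constrain the abundance activations.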
Many Change Detection (CD) methods exploit bi-temporal multi-modal data derived from multiple sensors to find changes effectively. State-of-the-art CD methods define features in a domain common to the multi-modal data by normalizing the input images or by ad hoc feature extraction/selection methods. Deep Learning (DL) CD methods automatically learn features with a common domain during training, or adapt the features derived from multi-modal data. However, CD methods focusing on multi-sensor multi-frequency SAR data are still poorly investigated. We propose a DL CD method that exploits a Cycle Generative Adversarial Network (CycleGAN) to automatically learn and extract multi-scale feature maps in a domain common to the input multi-frequency multi-sensor SAR data. The feature maps are learned, during unsupervised training, by generators that aim to transform the input data domain into the target one while preserving the semantic information and aligning the feature domain. We process the multi-sensor multi-frequency SAR data with the trained generators to produce bi-temporal multi-scale feature maps that are compared to enhance changes. A standard-deviation-based feature selection keeps only the most informative comparisons and rejects those with poor change information. The multi-scale comparisons are then used for detail-preserving CD. Preliminary experiments conducted on bi-temporal SAR data acquired by COSMO-SkyMed and SAOCOM over the urban area of Milan, Italy, in January 2020 and August 2021 provided promising results.
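The standard-deviation-based selection step can be sketched as below. This is a hedged NumPy illustration under the assumption that the bi-temporal comparisons are ranked by their standard deviation and a fixed fraction is kept; the `keep_frac` knob and the ranking rule are illustrative, not taken from the paper.

```python
import numpy as np

def select_informative(diff_maps, keep_frac=0.5):
    # Rank bi-temporal feature-map comparisons by standard deviation
    # and keep the most informative fraction; low-variance maps are
    # assumed to carry little change information.
    stds = np.array([m.std() for m in diff_maps])
    k = max(1, int(round(keep_frac * len(diff_maps))))
    idx = np.argsort(stds)[::-1][:k]          # highest-std maps first
    return [diff_maps[i] for i in sorted(idx)]  # preserve original order
```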
Standard deep-learning (DL) architectures do not optimize the use of the spatial and spectral information in multi-spectral images, but often consider only one of the two components. Two-stream DL architectures split and process them separately; however, fusing the outputs of the two streams is a challenging task. 3D-CNNs process spatial and spectral information together, at the cost of a large number of parameters. To overcome these limitations, we propose a novel DL data structure that re-organizes the spectral and spatial information in remote-sensing (RS) images and processes them together. Representing an RS image I as a data cube, we handle the spatial and spectral information by reducing the number of spectral bands from N to M, where M can be as small as one. The spectral information is projected into the spatial dimensions and re-organized into B two-dimensional blocks. The proposed approach analyzes the spectral information of each block using 2-dimensional convolutional kernels of appropriate size and stride. The output represents the relationships between the spectral bands of the input image and preserves the spatial relationships between neighboring pixels. The spatial relationships are then analyzed by processing the output of the previous layer with standard 2D-CNNs. Experiments using images acquired by Sentinel-2 and Landsat-8 and the labels of the LUCAS database released in 2018 provide promising results.
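The re-organization of spectral bands into spatial blocks can be illustrated as follows. This is a minimal NumPy sketch assuming the special case M = 1 with N equal to the square of the block size, a simplification of the general reduction from N to M bands described above.

```python
import numpy as np

def spectral_to_spatial(cube, block=2):
    # Re-organize an H x W x N cube so that each pixel's N bands fill a
    # block x block spatial tile (this sketch requires N == block**2).
    # The result is a single-band (block*H) x (block*W) image whose
    # local tiles hold the spectral signature of the underlying pixel,
    # so 2D kernels of matching size and stride can read the spectra.
    H, W, N = cube.shape
    assert N == block * block
    tiles = cube.reshape(H, W, block, block)
    return tiles.transpose(0, 2, 1, 3).reshape(H * block, W * block)
```

Processing this output with a 2D convolution of kernel size and stride equal to `block` then analyzes one full spectral signature per step while keeping neighboring pixels adjacent, which matches the intent of the re-organization.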
Change detection (CD) benefits from the capability of deep-learning (DL) methods to exploit complex temporal behaviors in large amounts of data. Unsupervised DL CD methods are preferred since they do not require labeled data, and typically use autoencoders (AEs) or convolutional AEs (CAEs). However, the features provided by the CAE hidden layers tend to degrade the geometrical information during encoding. To mitigate this effect, we propose an unsupervised CD method exploiting a multilayer CAE trained with a hierarchical loss function. This loss function guarantees a better trade-off between noise reduction and preservation of geometrical details at each hidden layer of the CAE. In contrast to standard CAEs, the proposed loss function considers specular input/output pairs of multiple hidden layers. These layers are analyzed by considering encoder/decoder pairs that work at corresponding geometrical resolutions and show similar spatial-context information. Single-layer loss functions are defined by comparing the specular encoder/decoder pairs and then aggregated to design a multilayer loss function. The proposed hierarchical loss function allows for layer-by-layer control of the training and improves the reconstruction quality of the hidden layers, better preserving geometrical details while reducing noise. The CD is performed by processing bi-temporal remote sensing images with the CAE. A detail-preserving multi-scale CD process exploits the most informative features of the bi-temporal images to compute the change map. Preliminary experiments conducted on a pair of multitemporal Landsat-8 images acquired before and after the fire of July 8th, 2015, near Granada, Spain, provided promising results.
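The aggregation of single-layer losses into a multilayer loss can be sketched as below. This is a hedged NumPy illustration, not the paper's formulation: it uses per-layer MSE, assumes the decoder features are already ordered to match their specular encoder counterparts at the same resolution, and the uniform default weighting is an assumption.

```python
import numpy as np

def multilayer_loss(enc_feats, dec_feats, weights=None):
    # Hierarchical loss over specular encoder/decoder feature pairs:
    # each pair is compared at its own resolution with MSE, and the
    # per-layer terms are aggregated with (assumed) weights, giving
    # layer-by-layer control of the reconstruction quality.
    assert len(enc_feats) == len(dec_feats)
    weights = weights or [1.0] * len(enc_feats)
    per_layer = [np.mean((e - d) ** 2) for e, d in zip(enc_feats, dec_feats)]
    total = sum(w * l for w, l in zip(weights, per_layer))
    return total, per_layer
```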
Rapid identification of areas affected by changes is a challenging task in many remote sensing applications. Sentinel-1 (S1) images provided by the European Space Agency (ESA) can be used to monitor such situations thanks to their high temporal and spatial resolution and insensitivity to weather conditions. Although a number of deep-learning-based methods have been proposed in the literature for change detection (CD) in multi-temporal SAR images, most of them require labeled training data. Collecting sufficient labeled multi-temporal data is not trivial, whereas S1 provides abundant unlabeled data. To this end, we propose a solution for CD in multi-temporal S1 images based on unsupervised training of deep neural networks (DNNs). Unlabeled single-time image patches are used to train a multilayer convolutional autoencoder (CAE) in an unsupervised fashion by minimizing the reconstruction error between the reconstructed output and the input. The trained multilayer CAE is used to extract multi-scale features from both the pre- and post-change images, which are analyzed for CD. The multi-scale features are fused according to a detail-preserving scale-driven approach that allows us to generate change maps that preserve details. The experiments conducted on an S1 dataset from Brumadinho, Brazil, confirm the effectiveness of the proposed method.
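The unsupervised patch sampling that feeds the CAE training can be sketched as follows. This is a minimal NumPy illustration with an assumed patch size, patch count, and random-sampling scheme; the actual sampling strategy is not specified in the abstract.

```python
import numpy as np

def sample_patches(image, patch=8, n=16, seed=0):
    # Draw random single-time patches from an unlabeled image; such
    # patches train the multilayer CAE by minimizing the reconstruction
    # error between its output and the input (patch size, count, and
    # uniform sampling are assumptions of this sketch).
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    ys = rng.integers(0, H - patch + 1, n)
    xs = rng.integers(0, W - patch + 1, n)
    return np.stack([image[y:y + patch, x:x + patch]
                     for y, x in zip(ys, xs)])
```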