1. Introduction

Flooding events are the most common natural disaster worldwide, and their frequency may increase in the future due to global climate change;1 thus, flood monitoring is a national policy issue of increasing importance, requiring rapid access to accurate information that identifies changes induced by floods. Multitemporal remote sensing imagery has proven particularly useful in computer-assisted change-detection applications related to flood monitoring.2–4 To precisely extract flood extent information from multitemporal satellite imagery, it is necessary to carry out radiometric correction, which minimizes the unfavorable impact on change detection of radiometric differences caused by variations in imaging conditions. Two types of radiometric correction, absolute and relative, are commonly employed to normalize remote-sensing images for the comparison of multitemporal satellite images.5 Absolute radiometric correction extracts the absolute reflectance of scene targets at the time of data acquisition. Most methods for absolute radiometric correction require the input of simultaneous atmospheric conditions and sensor calibration parameters, which are difficult to acquire in many cases, especially when historical data are used for change-detection analysis.6 Relative radiometric normalization is preferred because it does not require in situ atmospheric data at the time of satellite overpass.7,8 This method involves normalizing the intensities of multitemporal images, band-by-band, to a reference image selected by the analyst.
Well-normalized images would appear as if they were acquired with the same sensor under atmospheric and illumination conditions similar to those of the reference image.9 In performing relative radiometric normalization, it is assumed that the relationship between the radiance obtained by the sensors at two different times can be approximated with linear functions.10 In this method, the critical issue is determining suitable time-invariant features that can be used as the basis for normalization.11,12 A variety of relative radiometric normalization methods, such as pseudoinvariant features (PIFs), dark and bright set, simple regression, no-change determined from scattergrams, histogram matching, and multivariate alteration detection (MAD), have been investigated extensively from theoretical and practical aspects over the last few decades.13–15 A new method based on PIFs, which includes automatic selection and optimization of PIFs for the radiometric normalization of multisensor images,16 has been introduced. A hierarchical regression method based on spectral difference has been proposed recently to reduce the radiation difference for multitemporal images; it extracts PIFs and optimizes the normalization parameters.17 To generate a mosaic image from multitemporal airborne thermal infrared images, a polynomial regression method was recently applied to improve the radiometric agreement between adjacent stitched images.18 A recent study used a parallelization method and iterative MAD to meet the demands of rapid radiometric normalization of remote sensing images for mosaicking.19 One of the most widely used methods is the MAD transformation because it is invariant to linear transformations of the original image intensities, which means that it is insensitive to differences in atmospheric conditions or sensor calibrations.
For this reason, it is considered more robust than traditional methods for change detection.20 The iteratively reweighted (IR)-MAD was proposed by Nielsen to improve the robustness of the MAD transformation through iterative updating of weights.21 The conceptual basis of IR-MAD is simply an iterative scheme that assigns high weights to pixels that exhibit little change over time. The chi-square distribution is used to model the probability of no-change at every pixel. These probabilities are then used as weights in each iteration. The IR-MAD procedure is superior to the ordinary MAD transformation in isolating the no-change pixels suitable for use in relative radiometric normalization, particularly for data sets that exhibit large seasonal changes. However, the efficiency and robustness of IR-MAD have not yet been verified on data sets that exhibit substantial changes in land cover due to floods. The objective of this study is to extract reliable PIFs from images acquired before and after flooding and to provide an accurate radiometric normalization of a series of bitemporal very high-resolution (VHR) satellite images for flood change detection. To accomplish this, an MAD-based method is presented that considers the influence of open water at the study sites in the computation of the covariance matrices of the MAD transform. The MAD method aims at finding a linear relationship between two sets of variables. The target image is therefore corrected by the linear relationship on the PIFs. If image pixels affected by the flood are extracted as PIFs, the linear correlation information on the PIFs becomes unreliable. To accurately extract the flood-induced change information, it is necessary to carry out radiometric corrections that minimize the unfavorable impact of radiometric differences caused by image pixels affected by the flood.
To address this issue, the current study uses the normalized difference water index (NDWI) to estimate open water features in the study site. In addition, the current study develops a weighting function that adaptively assigns weights to the pixels based on the NDWI difference value for the radiometric normalization of multitemporal images acquired before and after flooding.

2. Image Preparation

In this study, two bitemporal images acquired by the KOMPSAT-2 satellite over the city of N’djamena, Chad, and the Atbara River, Sudan, were used to evaluate the performance and feasibility of our methodology.22 The topography of the two regions is relatively flat and intersected by rivers. In Chad, flooding is a frequent consequence of heavy rainfall caused by tropical cyclones. Despite serious water shortages, flash floods caused by torrential rainfall and run-off are common in Sudan. Major floods in Chad and Sudan occurred due to heavy rainfall episodes on October 12, 2012, and August 2, 2013, respectively. The technical specifications of the KOMPSAT-2 datasets are described in Table 1. In general, a radiometric difference exists between bitemporal images acquired under different view directions at different solar hours. Such a radiometric difference is clearly observed in the vegetated area on the bottom left side of the bitemporal images of Chad in Fig. 1. There is also a large time difference between the bitemporal images for each site, as reported in Table 1.

Table 1. KOMPSAT-2 satellite data characteristics.
Bitemporal images from just before and after the flood event would be more appropriate for this application. Unfortunately, bitemporal images with short time differences could not be acquired in this experiment because such images are generally difficult to obtain due to cloud cover, the long revisit cycles of high-resolution data-gathering satellites, and the scarcity of data of appropriate quality. The images for each site exhibit a high proportion of changes due to significant flooding, as shown in Fig. 1. As indicated in Table 1, VHR satellites, such as KOMPSAT-2, provide a high spatial resolution panchromatic (PAN) image and a set of multispectral (MS) images with lower spatial resolution but higher spectral resolution. To take advantage of both high spatial and spectral resolution in the change-detection process, a pan-sharpening process is required that merges the PAN and MS images to create a set of MS images with both high spectral resolution and enhanced spatial resolution. In this study, the Gram–Schmidt adaptive (GSA) pan-sharpening method was used to generate single high-resolution MS images for both images of each study site. The GSA method provided by ENVI software comprises two steps: high-frequency spatial information is extracted from the PAN image and then injected into the resized MS images.23 The images were taken with different off-nadir look angles, as shown in Table 1. Therefore, it was necessary to georeference the datasets to a common coordinate system using an image registration technique. The bitemporal pan-sharpened images of each site were coregistered using the manual image-to-image registration module provided in the ENVI image processing software; the accuracy of the coregistration, evaluated using 10 checkpoints, was within a root mean square error of 0.5 pixels for each study site.

3. Methodology

The schematic diagram in Fig. 2 illustrates the concept and procedure of the proposed method.
Our procedure for relative radiometric normalization based on the MAD transformation is conducted in the following six steps. (1) Generate the NDWI difference between the two images taken before and after the flood event. (2) Compute the weight value using the proposed weighting function defined in Eq. (6), which uses the NDWI difference value and near-infrared (NIR) reflectance. (3) Apply the weight value to the bitemporal image to determine its mean and covariance matrices defined in Eq. (8). (4) Perform canonical correlation analysis (CCA) to construct the MAD variates. (5) Select pixels with no-change probability [Eq. (10)] exceeding a predefined threshold as PIFs. (6) Perform the orthogonal linear regression based on the selected PIFs to normalize the image taken after the flood band-by-band to the one taken before the flood. In this study, we compared our method with the IR-MAD method to evaluate its performance and feasibility. The quality of the normalized images generated from each method was evaluated in terms of the paired t-test and F-tests. The accuracy of flood change detection is also used as another way to examine the performance of the proposed method. Under the same conditions, change vector analysis (CVA)- and MAD-based change detections are applied to the normalized images obtained by each method.

3.1. MAD and IR-MAD Transformation

The MAD transformation is an orthogonal transformation based on the CCA between two groups of variables; the transformation identifies the linear combinations that provide a set of mutually orthogonal difference images (MAD components) of decreasing variance.20 Let us consider two N-band MS images, X and Y, acquired in the same geographical area at two different times. We can represent the observations in the different bands of the multispectral images as random vectors X = (X_1, ..., X_N)^T and Y = (Y_1, ..., Y_N)^T. The MAD transformation can be formulated as

MAD_i = U_i − V_i = a_i^T X − b_i^T Y, i = 1, ..., N, (1)

where U_i = a_i^T X and V_i = b_i^T Y are the canonical variates.
The MAD variates are the difference images of the corresponding canonical variates. The problem is to determine the linear combination coefficient vectors a_i and b_i. This can be achieved by minimizing the canonical correlation ρ_i = Corr(U_i, V_i), which is equivalent to maximizing the variance Var(U_i − V_i), subject to the constraints Var(U_i) = Var(V_i) = 1. Vectors a_i and b_i are found by solving the coupled generalized eigenvalue problems

Σ_XY Σ_YY^{−1} Σ_YX a = ρ² Σ_XX a,
Σ_YX Σ_XX^{−1} Σ_XY b = ρ² Σ_YY b, (2)

where the canonical correlations ρ_1 ≥ ρ_2 ≥ ... ≥ ρ_N are the square roots of the eigenvalues, and a_i and b_i are the pairs of eigenvectors. Σ_XX and Σ_YY are the covariance matrices of the two images, and Σ_XY is the interimage covariance matrix. The solution to the eigenvalue problem generates the new MS images U = (U_1, ..., U_N)^T and V = (V_1, ..., V_N)^T. Lower canonical correlations result in larger variances of the MAD variates. The weighting concept in IR-MAD is simply an iterative scheme that puts high weights on observations that exhibit little change over time. For each iteration, the observations are given weights determined by the chi-square distribution. Iterations are performed until the largest absolute change in the canonical correlations ρ_i becomes smaller than some preset small value.21

3.2. Determining Weights for the Weighted Covariance Matrices

From the above section, it can be seen that the MAD and IR-MAD transformations intrinsically project data containing the total difference between two images into uncorrelated components to detect the difference between them, subject to the constraint of maintaining the total difference information as much as possible. However, these methods lead to an incorrect projection of the MAD variates in data sets with a large number of pixels in the scene that change over time, such as flood disaster data sets.
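Before turning to the weighting, the CCA construction of Sec. 3.1 can be illustrated numerically. The following is a minimal NumPy/SciPy sketch under simplifying assumptions, not the authors' implementation: the images are flattened to (bands × pixels) arrays, the coupled generalized eigenproblems are solved with `scipy.linalg.eigh`, and chi-square no-change probabilities are derived from the standardized MAD variates. All function and variable names are our own.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.stats import chi2

def mad_transform(X, Y):
    """Sketch of the MAD transform for two co-registered N-band images.

    X, Y: arrays of shape (N_bands, n_pixels).
    Returns the MAD variates, the canonical correlations (decreasing),
    and the per-pixel no-change probabilities.
    """
    N = X.shape[0]
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    S = np.cov(np.vstack([Xc, Yc]))
    Sxx, Sxy = S[:N, :N], S[:N, N:]
    Syx, Syy = S[N:, :N], S[N:, N:]
    # Generalized symmetric eigenproblem for the X-side vectors a;
    # eigh normalizes so that A^T Sxx A = I (unit-variance canonical variates)
    vals, A = eigh(Sxy @ np.linalg.solve(Syy, Syx), Sxx)
    vals, A = vals[::-1], A[:, ::-1]            # sort by decreasing correlation
    rho = np.sqrt(np.clip(vals, 0.0, 1.0))
    # Paired Y-side vectors b, rescaled to unit variance of V
    B = np.linalg.solve(Syy, Syx @ A)
    B /= np.sqrt(np.sum(B * (Syy @ B), axis=0))
    # Flip signs so that corr(U_i, V_i) is positive
    B *= np.sign(np.sum(A * (Sxy @ B), axis=0))
    U, V = A.T @ Xc, B.T @ Yc
    mad = U - V
    # Chi-square statistic from standardized MAD variates -> no-change probability
    Z = np.sum((mad / mad.std(axis=1, keepdims=True)) ** 2, axis=0)
    pr = 1.0 - chi2.cdf(Z, df=N)
    return mad, rho, pr
```

Pixels with a high `pr` would be retained as no-change observations; in IR-MAD these probabilities are fed back as weights and the transform is iterated.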
The problem arises because the covariance matrix is greatly influenced by the image pixels affected by the flood; the PIF extraction result is therefore unreliable to a considerable degree.24 To circumvent this problem and achieve reliable extraction of PIFs in such data sets, we devised a weighting function for the robust estimation of the covariance matrices. The basic concept for calculating weights is to assign a small weight value to pixels over open water features when calculating the covariance matrices in Eq. (2), because these pixels have a high probability of belonging to the area affected by the flood. In this study, the NDWI is employed to generate weights according to the presence of an open water feature in the image pixel. The NDWI was developed to delineate open water features in remotely sensed digital imagery.25,26 There are two popular versions of NDWI, one using NIR and short-wave infrared (SWIR) bands proposed by Gao25 and the other using green and NIR bands proposed by McFeeters.26 In this study, we used the NDWI version proposed by McFeeters because the KOMPSAT-2 satellite does not provide SWIR band data. The NDWI uses reflectance information from the green and NIR spectral bands. The open water condition influences the interaction between these two spectral regions. The NDWI is expressed as

NDWI = (ρ_Green − ρ_NIR) / (ρ_Green + ρ_NIR), (3)

where ρ_Green is the spectral reflectance in the green region of the visible spectrum and ρ_NIR is the spectral reflectance in the NIR region. Water usually has higher green reflectance than NIR reflectance. As a result, the NDWI generally takes positive values, within the range (0, 1], in regions with open water. In this study, we propose a weighting function that allows different weights to be assigned according to the NDWI difference value and the NIR reflectance. The proposed weighting function consists of the product of two simple probability functions.
In the first function, which is designed using a Gaussian function, each pixel is weighted according to the difference in NDWI value. The function is defined as

w_1 = exp(−d² / 2σ²), (4)

where d is the difference in NDWI values before and after the flood at a pixel and σ is the standard deviation that determines the height and width of the bell-shaped curve. A large standard deviation creates a bell that is short and wide, while a small standard deviation creates a tall and narrow curve. This function assigns higher weights when the differences in the NDWI values are small. In the second function, which is designed using a logistic function, each pixel is weighted according to the NIR reflectance value of the image taken after the flood. The function is defined as

w_2 = 1 / [1 + exp(−k(x − x_0))], (5)

where x is the NIR reflectance value of the image taken after the flood event, k controls the steepness of the sigmoid curve, and x_0 is the midpoint of the sigmoid curve, at which the curvature changes from concave to convex. In this study, this midpoint was set to the third quartile of the NIR reflectance values in the NIR image, where the third quartile is the central value between the median and the highest value of the data set. This function assigns higher weights when the NIR reflectance is high. The final weighting function made with these two functions is defined as

w = w_1 × w_2. (6)

We performed the experiment using various values of the standard deviation and steepness to find the most acceptable values of these parameters. The best results were acquired when we set the standard deviation to 0.0001 and the steepness to 3 in the Chad dataset (site 1). The same parameters were applied to the Sudan dataset (site 2) to check whether these parameters are suitable for the extraction of PIFs. These weights enter the calculation of the mean and covariance matrix of the feature vectors during the MAD procedure. For example, the covariance matrix in Eq.
(2) is simply recalculated considering the weights as follows. The weighted mean of band X is

μ_X = Σ_{j=1}^{n} w_j X_j / Σ_{j=1}^{n} w_j, (7)

where μ_X is the weighted mean value of X and n is the total number of pixels in an image. The elements of the weighted covariance matrix are calculated as

σ_XY = Σ_{j=1}^{n} w_j (X_j − μ_X)(Y_j − μ_Y) / Σ_{j=1}^{n} w_j. (8)

If all the elements of the weight vector w are set to 1, the proposed method is identical to the original MAD transformation.

3.3. Relative Radiometric Normalization Using MAD Combined with NDWI

The most important step in the relative radiometric normalization is the determination of suitable PIFs because the normalization performance can vary depending on the quality and quantity of PIFs selected from the image. As described in Sec. 3.1, the MAD variates are uncorrelated (orthogonal) and invariant under affine transformations of the bitemporal images. This invariance can be exploited to determine PIFs suitable for relative radiometric normalization. To choose the PIFs using the MAD transformation, the random variable Z represents the sum of the squares of the standardized MAD variates,

Z = Σ_{i=1}^{N} (MAD_i / σ_{MAD_i})², (9)

where σ_{MAD_i} are the standard deviations of the MAD variates. Because no-change observations are normally distributed and uncorrelated, the realization z of Z should be chi-square distributed with N degrees of freedom. This allows us to define the no-change probability as

Pr(no change) = 1 − P_{χ²;N}(z), (10)

where P_{χ²;N} represents the chi-square cumulative distribution function with N degrees of freedom, so that 1 − P_{χ²;N}(z) is the probability that a sample drawn from the chi-square distribution could be that large or larger. A small z implies a high probability of no-change. The no-change probabilities of the pixels derived by this method can be used to determine suitable PIFs from the time-series images. To conduct the radiometric normalization, those pixels whose no-change probability exceeds a decision threshold are chosen as the PIFs.
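As an illustration of Sec. 3.2, the McFeeters NDWI, the Gaussian–logistic weighting, and the weighted mean and covariance could be sketched as follows. This is a simplified NumPy sketch; the small epsilon guard in the NDWI denominator and all function names are our additions, and the default parameter values are the ones reported for the Chad experiment.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (green - NIR) / (green + NIR), per pixel.
    The tiny epsilon avoids division by zero on dark pixels (our addition)."""
    return (green - nir) / (green + nir + 1e-12)

def flood_weights(ndwi_before, ndwi_after, nir_after, sigma=1e-4, k=3.0):
    """Weight in [0, 1]: Gaussian on the NDWI difference times a logistic
    on post-flood NIR reflectance, with midpoint at the NIR third quartile."""
    d = ndwi_after - ndwi_before
    w1 = np.exp(-d ** 2 / (2.0 * sigma ** 2))            # small NDWI change -> high weight
    x0 = np.percentile(nir_after, 75)                    # third quartile of NIR
    w2 = 1.0 / (1.0 + np.exp(-k * (nir_after - x0)))     # high NIR (dry land) -> high weight
    return w1 * w2

def weighted_mean_cov(X, w):
    """Weighted mean and covariance of bands X (N_bands, n_pixels)
    with per-pixel weights w; w = 1 recovers the ordinary statistics."""
    mu = (X * w).sum(axis=1) / w.sum()
    Xc = X - mu[:, None]
    cov = (w * Xc) @ Xc.T / w.sum()
    return mu, cov
```

The weighted covariance matrices would then replace the ordinary ones when solving the generalized eigenproblems of the MAD transform.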
In this study, the decision threshold was set to 0.99, which is the same threshold as in the original MAD method designed by Nielsen.21 The calibration parameters for radiometric normalization were determined using the orthogonal linear regression and the selected PIFs.24

4. Experimental Results and Discussion

4.1. Results of Relative Radiometric Normalization

To test the performance of the proposed method, a comparison with the IR-MAD method was conducted. The PIFs obtained from each method were used to perform an orthogonal regression for relative radiometric normalization. To compare the two methods, we visually inspected the exact geometrical positions of the PIFs selected by each method. The PIF extraction results obtained using the two methods are shown in Fig. 3, where the pixel positions of the PIFs are marked on the flooded images with green arrows for visual inspection. As shown in Fig. 3, the number of PIFs extracted using the IR-MAD method is smaller than that of the proposed method, and the majority of the PIFs extracted using the IR-MAD method are located in the area affected by flooding and are thus unsuitable for radiometric normalization. The proposed method produces relatively more PIFs in the nonflooded area, with more homogeneous surface characteristics, compared with the IR-MAD method. This improvement is due to using the proposed weighting method in calculating the covariance matrices for the MAD transformation. The selected PIFs were subsequently used to normalize the target image to the reference image using orthogonal linear regression; we designated the image taken before the flood event as the reference image and the image taken after the flood event as the target image to be normalized. Figures 4 and 5 show the results of orthogonal regression analysis using the PIFs obtained by each method for each site. As shown in Figs.
4 and 5, it is clear that the proposed method produced a better result than the IR-MAD method in terms of the linear relationship for both sites. Figure 6 shows the radiometrically normalized images generated from the orthogonal linear regression using the PIFs selected by each method. In the Chad scenes, the normalized image using the IR-MAD method provides a visually poor result, as shown in Fig. 6(a); there are clear radiometric distortions throughout all regions when compared with the proposed method. This difference is due to the quality of the extracted PIFs; most of the 150 PIFs extracted by the IR-MAD method are located in the area affected by flooding, as shown in Fig. 3(a), and are unreliable PIFs for radiometric normalization.

4.2. Discussion Using Statistical Evaluation Method

To quantify the comparison between the proposed and IR-MAD methods, the quality of the normalized images generated from each method was evaluated in terms of the paired t-test and F-tests for equal means and variances, respectively. In general, paired t-test values close to zero and F-test values close to one indicate good matches. In both statistical tests, for a significance level of 0.05, P-values close to one are desirable. The null hypothesis of equal mean and variance is accepted when the P-values are greater than a predefined significance level, which is traditionally set to 0.05.24 The statistical comparisons of hold-out test pixels for the normalized images obtained from the two methods for each site are listed in Tables 2–5. The hold-out test pixels are those that are not used in the estimation of the orthogonal regression parameters and are only used for the assessment of accuracy. Figure 7 shows the hold-out test pixels used to estimate the accuracy of each method for each site.

Table 2. Comparison of means and variances for 44 hold-out test pixels, with paired t-tests and F-tests for equal means and variances, and normalization using the IR-MAD method for the Chad scenes.
Table 3. Comparison of means and variances for 44 hold-out test pixels, with paired t-tests and F-tests for equal means and variances, and normalization using the proposed method for the Chad scenes.
Table 4. Comparison of means and variances for 326 hold-out test pixels, with paired t-tests and F-tests for equal means and variances, and normalization using the IR-MAD method for the Sudan scenes.
Table 5. Comparison of means and variances for 326 hold-out test pixels, with paired t-tests and F-tests for equal means and variances, and normalization using the proposed method for the Sudan scenes.
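The bandwise paired t-tests and F-tests reported in Tables 2–5 can be computed per band with SciPy. The sketch below uses our own naming, and the two-sided F-test p-value construction shown is one common convention, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def band_tests(ref, norm):
    """Paired t-test for equal means and F-test for equal variances
    on hold-out pixels of one band (1-D arrays of equal length)."""
    # Paired t-test on the per-pixel differences
    t_stat, t_p = stats.ttest_rel(ref, norm)
    # F statistic: ratio of sample variances
    f_stat = np.var(ref, ddof=1) / np.var(norm, ddof=1)
    dfn = dfd = len(ref) - 1
    # Two-sided p-value: twice the smaller tail probability
    f_p = 2.0 * min(stats.f.cdf(f_stat, dfn, dfd),
                    stats.f.sf(f_stat, dfn, dfd))
    return t_stat, t_p, f_stat, f_p
```

Running this for each spectral band of the reference and normalized hold-out pixels yields the entries of the tables; P-values above the 0.05 significance level indicate that equal means (or variances) cannot be rejected.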
Comparing the data for the Chad scenes (Tables 2 and 3), it is clear that the proposed method produced a better result than the IR-MAD method in both statistical tests; none of the P-values for the bandwise tests for equal means and variances are acceptable in the IR-MAD results. In the Sudan scenes, as shown in Tables 4 and 5, the proposed method also generated a better result than the IR-MAD method in both statistical tests; the difference between the reference and normalized means is much smaller than that of the IR-MAD method. These results indicate that the weighting scheme in the proposed method allowed a more precise identification of PIFs prior to the relative radiometric normalization for bitemporal images exhibiting a significant amount of change due to flooding.

4.3. Results of Change Detection

Because the significance of the statistical evaluation using the paired t-test and F-tests crucially depends on the test pixels, it is difficult to conclude, using these tests alone, that the proposed method is better than the IR-MAD method. Another approach to assessing the performance of the proposed method is to compare the accuracy of flood change detection using the proposed method with that using the IR-MAD method. Under the same conditions, changes are detected in the radiometrically normalized images produced from each method. Several change-detection methods have been proposed for remote sensing change detection.27–29 Among them, we used the CVA- and MAD-based change-detection methods for flood change detection. The MAD approach, originally designed by Nielsen et al.,20 can be used directly for change detection. The thresholds for deciding between change and no-change can be set in terms of the standard deviation about the mean for each MAD component. All pixels in an MAD component whose intensities lie within a chosen multiple of the standard deviation σ_MAD about the mean are labeled no-change pixels.
The final change pixels are obtained by applying a union operator on the change-detection results of each MAD component. The CVA is one of the simplest and most widely used change-detection methods in the literature.30 The CVA is applied to the radiometrically normalized image generated from each method. The basis for CVA is that a particular pixel with different values over time resides at substantially different locations in the feature space. The spectral change vector (SCV) is calculated from the vector difference of the spectral feature vectors associated with pairs of corresponding pixels in two images acquired at different times.31 The magnitude of the SCV is used to establish a simple criterion for identifying the changed area.32 Due to the properties of the magnitude operator, it is possible to assert that pixels showing a magnitude higher than a given threshold value are changed, while pixels showing a magnitude lower than the threshold value are unchanged.33 This method performed best using Landsat TM data in a comparative evaluation of change-detection techniques for detecting areas associated with flood events.34 A threshold indicating the changed area needs to be determined on the SCV magnitude image. Selecting an appropriate threshold value to identify change is difficult.35 Too high a threshold will exclude genuine areas of change, while too low a threshold will include many unchanged areas. In the literature, several threshold-selection methods have been proposed for identifying the threshold value that separates changed areas in bitemporal satellite images.36 Among them, we applied the expectation maximization (EM)-based thresholding method to the SCV magnitude image to assign each image pixel to one of two opposing classes: changed and unchanged areas. The EM-based threshold algorithm requires estimates of the statistical parameters of the classes, i.e., the class prior probabilities and class-conditional probabilities.
The estimated class-statistical parameters are then used with the Bayes decision rule for minimum error in the automatic determination of an optimal decision threshold.37 Figure 8 shows the change-detection results obtained by the MAD-based change-detection method for each site, whereas Fig. 9 shows the change-detection results using the CVA-based change-detection method for each site. In general, the results from both change-detection methods suggest that change has been overdetected, as shown in Figs. 8 and 9.

4.4. Discussion Using Change-Detection Accuracy

To evaluate and compare the proposed algorithm, reference images for each site were produced from the original image by manually digitizing the flooded areas,38 as shown in Fig. 10. In the construction of the reference image, we only considered the visually salient flooded area along the river; it is challenging to visually identify all changes in urban residential districts, and our focus was on changes due to floods. By comparing this reference image with the results from each change-detection method, we obtained a measure of detection accuracy. Comparisons between the reference and the results from each change-detection method were quantified using constructed error matrices; the commission error (CE), omission error (OE), and overall accuracy (OA) were calculated to assess the overall accuracy of each result image. The OA is the sum of the correctly classified pixels divided by the total number of reference pixels.39 Table 6 shows detailed quantitative results using the MAD-based change-detection method for both sites. When visually compared with the reference image for each site, there are many omissions in the results for both sites, but the OEs of the proposed method are slightly lower than those of the IR-MAD method, as shown in Table 6. From Table 6, it is also observed that the proposed method yields better accuracy in terms of OA for both sites.
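The CVA magnitude and the EM-based threshold selection described above can be illustrated with a small sketch. Here, the two-class mixture is fitted by a hand-rolled EM loop for a one-dimensional two-component Gaussian mixture, and the decision threshold is found by a numerical search for the point where the weighted class densities cross, rather than by the closed-form Bayes rule; the initialization choices and all names are our assumptions, not the authors' implementation.

```python
import numpy as np

def cva_magnitude(img1, img2):
    """Magnitude of the spectral change vector for bands-first arrays."""
    return np.sqrt(((img2 - img1) ** 2).sum(axis=0))

def em_threshold(mag, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to the flattened SCV
    magnitudes by EM, then return the threshold where the weighted
    class densities are equal (grid search between the two means)."""
    mag = np.asarray(mag, dtype=float).ravel()
    mu = np.percentile(mag, [10.0, 90.0])        # rough unchanged/changed init
    var = np.array([mag.var(), mag.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each pixel
        pdf = np.stack([pi[k] / np.sqrt(2.0 * np.pi * var[k])
                        * np.exp(-(mag - mu[k]) ** 2 / (2.0 * var[k]))
                        for k in range(2)])
        r = pdf / pdf.sum(axis=0, keepdims=True)
        # M-step: update priors, means, and variances
        nk = r.sum(axis=1)
        pi = nk / mag.size
        mu = (r * mag).sum(axis=1) / nk
        var = (r * (mag - mu[:, None]) ** 2).sum(axis=1) / nk
    lo, hi = np.sort(mu)
    grid = np.linspace(lo, hi, 1000)
    dens = [pi[k] / np.sqrt(2.0 * np.pi * var[k])
            * np.exp(-(grid - mu[k]) ** 2 / (2.0 * var[k])) for k in range(2)]
    return grid[np.argmin(np.abs(dens[0] - dens[1]))]
```

Pixels whose SCV magnitude exceeds the returned threshold would be labeled as changed; the grid search approximates the minimum-error Bayes boundary between the two fitted classes.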
Table 7 presents detailed quantitative results using the CVA-based change-detection method for both sites. The CVA-based method provided better visual and quantitative results than the MAD-based method for both sites.

Table 6. Accuracy assessment results of the MAD-based change-detection method for each site: OA, overall accuracy; CE, commission error; OE, omission error; F, flood (in pixels); and NF, no flood (in pixels).
Table 7. Accuracy assessment results of the CVA-based change-detection method for each site: OA, overall accuracy; CE, commission error; OE, omission error; F, flood (in pixels); and NF, no flood (in pixels).
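The OA, CE, and OE reported in Tables 6 and 7 follow directly from the error matrix. A minimal sketch for binary flood maps follows; the function name and the guards against empty classes are our additions.

```python
import numpy as np

def accuracy_report(pred, ref):
    """Overall accuracy, commission error, and omission error for a
    binary flood map `pred` versus a reference map `ref` (boolean arrays)."""
    tp = np.sum(pred & ref)      # flood detected and in reference
    fp = np.sum(pred & ~ref)     # flood detected but not in reference
    fn = np.sum(~pred & ref)     # reference flood that was missed
    tn = np.sum(~pred & ~ref)    # correctly detected no-flood
    oa = (tp + tn) / pred.size
    ce = fp / max(tp + fp, 1)    # fraction of detected flood that is wrong
    oe = fn / max(tp + fn, 1)    # fraction of reference flood that is missed
    return oa, ce, oe
```

Applied to each method's change map against the manually digitized reference, this yields the entries compared in the discussion.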
Results obtained for both sites with both the IR-MAD and proposed methods were consistent with actual flood changes. Upon close inspection of the change-detection results using the reference images (Fig. 10), the flood extent extracted using the IR-MAD method overestimated changes in comparison to the results from the proposed method. In the results from the IR-MAD method, more pixels were identified as flooded areas than were identified by visual inspection; there was also a considerable CE, particularly in the forest and permanent water body regions, relative to the proposed method. As shown in Table 7, the proposed method produced a better result than the IR-MAD method, with OAs of 83.62% and 75.91% for the two sites. The OA of the proposed method is higher by 1.8% and 12.6% for the two sites than the OA of the IR-MAD method. These results demonstrate the feasibility and effectiveness of employing the NDWI difference and NIR reflectance for the relative radiometric correction of remote sensing imagery to extract flooded areas. The proposed method has the advantage of not only extracting more precise PIFs but also performing well in differentiating the flooded area in comparison to the IR-MAD method. Our method is computationally efficient because it does not require an iterative procedure. However, it has the disadvantages of focusing only on flood-related issues and being sensitive to the NIR band, because it is designed based on the NIR band of the post-flood image. In future work, to increase the robustness of the proposed method, we will explore new strategies to fuse flood-related information from VHR synthetic aperture radar (SAR) images when SAR images of the same area acquired during and after the flood are available.

5. Conclusions

In this study, we presented a new approach combining the MAD transformation and open water features for the relative radiometric normalization of bitemporal VHR satellite imagery.
The proposed method constructs a weighting function based on differences in NDWI and NIR reflectance; this function is used to estimate the covariance matrix for the MAD transformation and reliably extract PIFs from images acquired at different times. The effectiveness of this approach was verified with experimental results showing the extraction of PIFs from two KOMPSAT-2 VHR satellite images acquired before and after flooding. To test the performance of the proposed approach, the results were compared with those of IR-MAD-based radiometric normalization techniques. Both statistical tests and the actual performance of flood change detection were evaluated, and the results demonstrated that the proposed method can extract PIFs suitable for use in relative radiometric normalization for flood change detection. Both the statistical paired t-test and F-test on hold-out test pixels also convincingly indicate that, for bitemporal scenes exhibiting a large amount of change due to flooding, the proposed method produces a better result than the IR-MAD method. Based on OA, in actual experiments of flood change detection using the MAD- and CVA-based methods, the proposed method also produced better results than the IR-MAD method. The CVA-based method produces better change-detection results than the MAD-based method for both sites. Using the CVA-based change-detection method, the OA achieved with the proposed method was 1.8% and 12.6% better than that obtained with the IR-MAD method for the two study sites. To improve the accuracy and effectiveness of the proposed method, our future research will focus on developing a strategy to utilize VHR SAR images.

Acknowledgments

This research was supported by a grant (18RDRP-B076564-05) from the Regional Development Research Program funded by the Ministry of Land, Infrastructure, and Transport of the Korean Government.

References

M. Morita,
“Quantification of increased flood risk due to global climate change for urban river management planning,”
Water Sci. Technol., 63
(12), 2967
–2974
(2011). https://doi.org/10.2166/wst.2011.172 Google Scholar
2. S. Ghoshal et al., "Channel and floodplain change analysis over a 100 year period: lower Yuba River, California," Remote Sens. 2(7), 1797–1825 (2010). https://doi.org/10.3390/rs2071797
3. H. Ban et al., "Flood monitoring using satellite-based RGB composite imagery and refractive index retrieval in visible and near-infrared bands," Remote Sens. 9(4), 313 (2017). https://doi.org/10.3390/rs9040313
4. S. Martinis and A. Twele, "A hierarchical spatio-temporal Markov model for improved flood mapping using multi-temporal X-band SAR data," Remote Sens. 2(9), 2240–2258 (2010). https://doi.org/10.3390/rs2092240
5. A. Davranche, G. Lefebvre and B. Poulin, "Radiometric normalization of SPOT-5 scenes: 6S atmospheric model versus pseudo-invariant features," Photogramm. Eng. Remote Sens. 75(6), 723–728 (2009). https://doi.org/10.14358/PERS.75.6.723
6. Y. Du, P. M. Teillet and J. Cihlar, "Radiometric normalization of multitemporal high-resolution satellite images with quality control for land cover change detection," Remote Sens. Environ. 82(1), 123–134 (2002). https://doi.org/10.1016/S0034-4257(02)00029-9
7. H. Olsson, "Reflectance calibration of thematic mapper data for forest change detection," Int. J. Remote Sens. 16(1), 81–96 (1995). https://doi.org/10.1080/01431169508954373
8. X. Yang and C. P. Lo, "Relative radiometric normalization performance for change detection from multi-date satellite images," Photogramm. Eng. Remote Sens. 66(8), 967–980 (2000).
9. M. M. Rahman et al., "An assessment of polynomial regression techniques for the relative radiometric normalization (RRN) of high-resolution multi-temporal thermal infrared (TIR) imagery," Remote Sens. 6(12), 11810–11828 (2014). https://doi.org/10.3390/rs61211810
10. D. S. Kim et al., "Automatic pseudo-invariant feature extraction for the relative radiometric normalization of Hyperion hyperspectral images," GISci. Remote Sens. 49(5), 755–773 (2012). https://doi.org/10.2747/1548-1603.49.5.755
11. A. C. J. Osmar et al., "Radiometric normalization of temporal images combining automatic detection of pseudo-invariant features from the distance and similarity spectral measures, density scatterplot analysis, and robust regression," Remote Sens. 5(6), 2763–2794 (2013). https://doi.org/10.3390/rs5062763
12. C. H. Lin, B. Y. Lin and Y. C. Chen, "Radiometric normalization and cloud detection of optical satellite images using invariant pixels," ISPRS J. Photogramm. Remote Sens. 106, 107–117 (2015). https://doi.org/10.1016/j.isprsjprs.2015.05.003
13. E. H. Helmer and B. A. Ruefenacht, "Comparison of radiometric normalization methods when filling cloud gaps in Landsat imagery," Can. J. Remote Sens. 33(4), 325–340 (2007). https://doi.org/10.5589/m07-028
14. G. Hong and Y. Zhang, "A comparative study on radiometric normalization using high resolution satellite images," Int. J. Remote Sens. 29(2), 425–438 (2008). https://doi.org/10.1080/01431160601086019
15. M. M. Rahman et al., "A comparison of four relative radiometric normalization (RRN) techniques for mosaicing H-res multi-temporal thermal infrared (TIR) flight-lines of a complex urban scene," ISPRS J. Photogramm. Remote Sens. 106, 82–94 (2015). https://doi.org/10.1016/j.isprsjprs.2015.05.002
16. H. Zhou et al., "A new model for the automatic relative radiometric normalization of multiple images with pseudo-invariant features," Int. J. Remote Sens. 37(19), 4554–4573 (2016). https://doi.org/10.1080/01431161.2016.1213922
17. C. Zhong, Q. Xu and B. Li, "Relative radiometric normalization for multitemporal remote sensing images by hierarchical regression," IEEE Geosci. Remote Sens. Lett. 13(2), 217–221 (2016). https://doi.org/10.1109/LGRS.2015.2506643
18. M. R. Mir et al., "An assessment of polynomial regression techniques for the relative radiometric normalization (RRN) of high-resolution multi-temporal airborne thermal infrared (TIR) imagery," Remote Sens. 6(12), 11810–11828 (2014). https://doi.org/10.3390/rs61211810
19. C. Chen et al., "Parallel relative radiometric normalization for remote sensing image mosaics," Comput. Geosci. 73, 28–36 (2014). https://doi.org/10.1016/j.cageo.2014.08.007
20. A. A. Nielsen, K. Conradsen and J. J. Simpson, "Multivariate alteration detection (MAD) and MAF postprocessing in multispectral, bitemporal image data: new approaches to change detection studies," Remote Sens. Environ. 64(1), 1–19 (1998). https://doi.org/10.1016/S0034-4257(97)00162-4
21. A. A. Nielsen, "The regularized iteratively reweighted MAD method for change detection in multi- and hyperspectral data," IEEE Trans. Image Process. 16(2), 463–478 (2007). https://doi.org/10.1109/TIP.2006.888195
22. Y. Byun, Y. Han and T. Chae, "Image fusion-based change detection for flood extent extraction using bi-temporal very high-resolution satellite images," Remote Sens. 7(8), 10347–10363 (2015). https://doi.org/10.3390/rs70810347
23. B. Aiazzi, S. Baronti and M. Selva, "Improving component substitution pansharpening through multivariate regression of MS+pan data," IEEE Trans. Geosci. Remote Sens. 45(10), 3230–3239 (2007). https://doi.org/10.1109/TGRS.2007.901007
24. M. J. Canty and A. A. Nielsen, "Automatic radiometric normalization of multitemporal satellite imagery with the iteratively re-weighted MAD transformation," Remote Sens. Environ. 112(3), 1025–1036 (2008). https://doi.org/10.1016/j.rse.2007.07.013
25. B. Gao, "NDWI—a normalized difference water index for remote sensing of vegetation liquid water from space," Remote Sens. Environ. 58(3), 257–266 (1996). https://doi.org/10.1016/S0034-4257(96)00067-3
26. S. K. McFeeters, "The use of the normalized difference water index (NDWI) in the delineation of open water features," Int. J. Remote Sens. 17(7), 1425–1432 (1996). https://doi.org/10.1080/01431169608948714
27. X. Chen, L. Vierling and D. Deering, "A simple and effective radiometric correction method to improve landscape change detection across sensors and across time," Remote Sens. Environ. 98(1), 63–79 (2005). https://doi.org/10.1016/j.rse.2005.05.021
28. M. Hussain et al., "Change detection from remotely sensed images: from pixel-based to object-based approaches," ISPRS J. Photogramm. Remote Sens. 80, 91–106 (2013). https://doi.org/10.1016/j.isprsjprs.2013.03.006
29. N. Longbotham et al., "Multi-modal change detection, application to the detection of flooded areas: outcome of the 2009–2010 data fusion contest," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 5(1), 331–342 (2012). https://doi.org/10.1109/JSTARS.2011.2179638
30. A. Singh, "Review article: digital change detection techniques using remotely-sensed data," Int. J. Remote Sens. 10(6), 989–1003 (1989). https://doi.org/10.1080/01431168908903939
31. R. D. Johnson and E. S. Kasischke, "Change vector analysis: a technique for the multispectral monitoring of land cover and condition," Int. J. Remote Sens. 19(3), 411–426 (1998). https://doi.org/10.1080/014311698216062
32. R. S. Dewi, W. Bijker and A. Stein, "Change vector analysis to monitor the changes in fuzzy shorelines," Remote Sens. 9(2), 147 (2017). https://doi.org/10.3390/rs9020147
33. F. Bovolo and L. Bruzzone, "A theoretical framework for unsupervised change detection based on change vector analysis in the polar domain," IEEE Trans. Geosci. Remote Sens. 45(1), 218–236 (2007). https://doi.org/10.1109/TGRS.2006.885408
34. A. S. Dhakal et al., "Detection of areas associated with flood and erosion caused by a heavy rainfall using multitemporal Landsat TM data," Photogramm. Eng. Remote Sens. 68(3), 233–239 (2002).
35. G. Xian and C. Homer, "Updating the 2001 national land cover database impervious surface products to 2006 using Landsat imagery change detection," Remote Sens. Environ. 114(8), 1676–1686 (2010). https://doi.org/10.1016/j.rse.2010.02.018
36. L. Bruzzone and D. F. Prieto, "Automatic analysis of the difference image for unsupervised change detection," IEEE Trans. Geosci. Remote Sens. 38(3), 1171–1182 (2000). https://doi.org/10.1109/36.843009
37. T. K. Moon, "The expectation-maximization algorithm," IEEE Signal Process. Mag. 13(6), 47–60 (1996). https://doi.org/10.1109/79.543975
38. Z. Zhang et al., "A 2010 update of national land use/cover database of China at 1:100000 scale using medium spatial resolution satellite images," Remote Sens. Environ. 149, 142–154 (2014). https://doi.org/10.1016/j.rse.2014.04.004
39. R. G. Congalton, "A review of assessing the accuracy of classifications of remotely sensed data," Remote Sens. Environ. 37(1), 35–46 (1991). https://doi.org/10.1016/0034-4257(91)90048-B
Biography

Younggi Byun received his MS degree and PhD in civil and environmental engineering from Seoul National University, Seoul, South Korea, in 2004 and 2011, respectively. His major research interests include UAV image processing, change detection, and multisensor image matching and fusion.

Dongyeob Han received his MS degree in civil and environmental engineering and his PhD in civil, urban, and geosystem engineering from Seoul National University, Seoul, South Korea, in 1998 and 2007, respectively. His research interests include UAV image processing, laser scanning data processing, and multisensor image registration.