
Multiscale Geometric Analysis Fusion-Based Unsupervised Change Detection in Remote Sensing Images via FLICM Model.

Liangliang Li, Hongbing Ma, Zhenhong Jia.

Abstract

Remote sensing image change detection is widely used in land use and natural disaster detection. In order to improve the accuracy of change detection, a robust change detection method based on nonsubsampled contourlet transform (NSCT) fusion and the fuzzy local information C-means clustering (FLICM) model is introduced in this paper. Firstly, the log-ratio and mean-ratio operators are used to generate two difference images (DIs); then, the NSCT fusion model is utilized to fuse the two difference images into one new DI. The fused DI can not only reflect the real change trend but also suppress the background. FLICM is then performed on the new DI to obtain the final change detection map. Four groups of homogeneous remote sensing images are selected for simulation experiments, and the experimental results demonstrate that the proposed homogeneous change detection method outperforms other state-of-the-art algorithms.


Keywords:  FLICM; NSCT; change detection; difference image; remote sensing image

Year:  2022        PMID: 35205585      PMCID: PMC8871418          DOI: 10.3390/e24020291

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

The application of remote sensing images is increasingly extensive in current research. These applications include image fusion [1,2,3,4,5,6], image classification [7,8,9,10,11], change detection [12,13,14,15,16,17], etc. In particular, remote sensing image change detection aims to identify the changed regions between images of the same scene acquired at two different times, and it plays a significant role in observing land use change, flood disasters, earthquakes, and fires. Many remote sensing image change detection methods have been proposed to detect the changed information, and these methods can be divided into two categories: supervised and unsupervised algorithms [18,19]. A supervised change detection method usually needs to train its classifier with labeled data, whose acquisition is time-consuming and costly. Compared with supervised methods, unsupervised methods do not need labeled reference images for training; in general, the multi-temporal remote sensing images we obtain do not come with reference images, which matches practical applications. Remote sensing image change detection mainly contains three steps: preprocessing (e.g., geometric registration or denoising); difference image generation; and analysis of the difference image to obtain the change detection map.

Thresholding-based, segmentation-based, and clustering-based methods are widely used unsupervised change detection approaches [15]. Among the thresholding-based methods, the Kittler-Illingworth minimum-error thresholding method [20], the Otsu method [21], and the likelihood ratio method [22] are used. Gong et al. [23] introduced a synthetic aperture radar (SAR) image change detection method based on a neighborhood-based ratio (NR) operator and the generalization of Kittler and Illingworth thresholding (GKIT) model. Xu et al. [24] proposed a SAR image change detection method using a modified neighborhood-based operator and an iterative Otsu model. Geetha et al. [25] proposed multi-temporal SAR image change detection using a Laplacian pyramid and the Otsu model. For the segmentation-based methods, Celik et al. [26] proposed a remote sensing image change detection method based on an undecimated discrete wavelet transform and the Chan–Vese segmentation model. The clustering-based methods are the most popular in image change detection; e.g., Celik et al. [27] introduced a remote sensing image change detection method using a principal component analysis and K-means clustering (PCAKM) model. Li et al. [28] proposed unsupervised SAR change detection using Gabor wavelets and fuzzy C-means clustering. Chen et al. [29] introduced the nonsubsampled contourlet transform-hidden Markov tree (NSCT-HMT) model and fuzzy local information C-means (FLICM) into remote sensing image change detection. The aforementioned change detection methods have made notable achievements in the field of remote sensing change detection.

In recent studies, deep learning methods have also been successfully applied to remote sensing change detection. These methods include the principal component analysis network (PCANet) [30], a channel weighting-based deep cascade network [31], convolutional-wavelet neural networks [32], a multiscale capsule network [33], transferred deep learning [34], deep pyramid feature learning networks [35], an attention-based deeply supervised network [36], etc. Because deep learning-based methods learn from training samples, the accuracy of their change detection results is relatively high. In this paper, we present a novel remote sensing image change detection method based on multiscale geometric analysis fusion and the FLICM model. Simulation experiments on four groups of remote sensing images verify the practicability and effectiveness of the proposed algorithm.

2. Methodology

This section introduces the proposed remote sensing image change detection method, and we assume that multi-temporal remote sensing images are registered. The main contents include the difference image (DI) calculated by log-ratio operator (LR) and mean-ratio operator (MR), respectively; the fused difference image generated by NSCT fusion; and the final change detection map computed by FLICM model. The structure of the proposed algorithm is shown in Figure 1.
Figure 1

The structure of the proposed remote sensing image change detection algorithm.

2.1. Multiscale Geometric Analysis

Multiscale geometric analysis includes the ridgelet, curvelet, contourlet, and shearlet transforms, etc. [37]. These transforms have been widely used in image processing, such as image denoising and image fusion. The nonsubsampled contourlet transform (NSCT) is an optimized version of the contourlet transform [38], and it is a translation-invariant, multiscale, and multidirectional transformation. NSCT is constructed from a nonsubsampled pyramid (NSP) and a nonsubsampled directional filter bank (NSDFB). Firstly, the NSP decomposes the input image into high-pass and low-pass parts; then, the NSDFB decomposes the high-frequency sub-band into multiple directional sub-bands, while the low-frequency part can be decomposed further in the same way. Liu et al. [39] introduced image fusion based on NSCT and a sparse representation model.
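A full NSCT implementation is not available in common Python libraries, but the nonsubsampled pyramid stage can be illustrated with a minimal sketch, assuming a Gaussian low-pass filter in place of the actual NSP filters and omitting the NSDFB directional stage:

```python
import numpy as np
from scipy import ndimage

def nsp_split(img, sigma=1.0):
    """One level of a nonsubsampled pyramid: low-pass by Gaussian
    smoothing (no downsampling, hence shift-invariant), high-pass
    as the residual. A stand-in for the NSP filters; the NSDFB
    directional stage is omitted in this sketch."""
    img = img.astype(float)
    low = ndimage.gaussian_filter(img, sigma)
    return low, img - low

def nsp_merge(low, high):
    """Exact inverse of nsp_split, since high = img - low."""
    return low + high
```

Because no subsampling is involved, the reconstruction is exact and the decomposition is shift-invariant, which is the property that distinguishes NSCT from the plain contourlet transform.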

2.2. Difference Image Generation

In the process of remote sensing image change detection, difference image (DI) generation is an important step. Given two registered and corrected remote sensing images X and Y, the difference images computed by the log-ratio operator (LR) [40] and mean-ratio operator (MR) [41] are described as follows:

D_LR = |log(Y / X)| = |log Y − log X|  (1)

D_MR = 1 − min(μ_X / μ_Y, μ_Y / μ_X)  (2)

where μ_X and μ_Y denote the local mean values of the remote sensing images X and Y, respectively. The background information in the log-ratio image is relatively flat, while the change area information reflected by the mean-ratio image is relatively consistent with the real change trend of the remote sensing image. Therefore, the log-ratio image and mean-ratio image can be integrated into one new difference image with complementary information. Compared with a single difference image computed by the log-ratio or mean-ratio operator, the fused difference image can not only reflect the real change trend but also suppress the background. In order to retain more useful information, we integrate the two difference images through the NSCT transform. The main steps of the NSCT-based fusion are as follows.

Step 1: The LR and MR images are decomposed by NSCT into low-frequency (LF) and high-frequency (HF) components, denoted LF_LR, HF_LR and LF_MR, HF_MR, respectively.

Step 2: Fuse the low- and high-frequency components using the average rule and the Gaussian weighted local area energy rule, respectively:

LF_F(i, j) = (LF_LR(i, j) + LF_MR(i, j)) / 2  (3)

HF_F(i, j) = HF_LR(i, j) if E_LR(i, j) ≥ E_MR(i, j); HF_MR(i, j) otherwise  (4)

where E(i, j) denotes the Gaussian weighted local area energy coefficient, computed by

E(i, j) = Σ_{m=−r..r} Σ_{n=−r..r} W(m, n) · HF(i + m, j + n)²  (5)

where W(m, n) denotes the element of the rotationally symmetric Gaussian low-pass filter of size (2r + 1) × (2r + 1) with standard deviation σ.

Step 3: The fused difference image is calculated by performing the inverse NSCT on the fused low-frequency LF_F and high-frequency HF_F components.

In this work, the NSCT decomposition level is one, yielding one low-frequency sub-band and two high-frequency sub-bands. This keeps the running time of the algorithm low while achieving a good fusion effect.
Subsequently, the fused difference image will be analyzed by the FLICM model.
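The LR and MR operators and the fusion rules above can be sketched as follows. This is a minimal NumPy/SciPy illustration, where an à trous-style Gaussian split stands in for the full NSCT decomposition and a small eps guards the logarithm and divisions:

```python
import numpy as np
from scipy import ndimage

EPS = 1e-6

def log_ratio(x, y):
    # LR operator: |log(Y / X)|; eps avoids log(0) and division by 0
    return np.abs(np.log((y + EPS) / (x + EPS)))

def mean_ratio(x, y, win=3):
    # MR operator: 1 - min(mu_X / mu_Y, mu_Y / mu_X) over local means
    mx = ndimage.uniform_filter(x.astype(float), win)
    my = ndimage.uniform_filter(y.astype(float), win)
    r = np.minimum(mx / (my + EPS), my / (mx + EPS))
    return 1.0 - r

def fuse(lr, mr, sigma=1.0):
    # Fusion in the spirit of Sec. 2.2: average the low-pass parts,
    # keep the high-pass coefficient with the larger Gaussian-weighted
    # local energy (a Gaussian split stands in for the true NSCT)
    def split(d):
        low = ndimage.gaussian_filter(d, sigma)
        return low, d - low
    l1, h1 = split(lr)
    l2, h2 = split(mr)
    low_f = 0.5 * (l1 + l2)
    e1 = ndimage.gaussian_filter(h1 ** 2, sigma)  # local energy of HF_LR
    e2 = ndimage.gaussian_filter(h2 ** 2, sigma)  # local energy of HF_MR
    high_f = np.where(e1 >= e2, h1, h2)
    return low_f + high_f
```

On two identical images both operators return values near zero, while a changed region yields large LR and MR responses that survive the fusion.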

2.3. FLICM Model

In the fuzzy local information C-means (FLICM) clustering model, the fuzzy factor G_ki is defined as follows [42]:

G_ki = Σ_{j ∈ N_i, j ≠ i} [1 / (d_ij + 1)] · (1 − u_kj)^m · ‖x_j − v_k‖²  (6)

where the ith pixel is the center of the local window N_i, the jth pixel ranges over the neighboring pixels falling into the window around the ith pixel, and d_ij is the spatial Euclidean distance between pixels i and j. v_k is the prototype of the center of cluster k, and u_kj is the fuzzy membership of the gray value of pixel j with respect to the kth cluster; ‖x_j − v_k‖ is the Euclidean distance between object x_j and cluster center v_k. With the fuzzy factor G_ki defined in Equation (6), the objective function of the FLICM model is calculated by

J_m = Σ_{i=1..N} Σ_{k=1..c} [ u_ki^m · ‖x_i − v_k‖² + G_ki ]  (7)

where u_ki and G_ki have the same meaning as in Equation (6), and N and c represent the numbers of data items and clusters, respectively; ‖x_i − v_k‖ is the Euclidean distance between object x_i and cluster center v_k. The membership u_ki and cluster center v_k are updated as follows:

u_ki = 1 / Σ_{j=1..c} [ (‖x_i − v_k‖² + G_ki) / (‖x_i − v_j‖² + G_ji) ]^{1/(m−1)}  (8)

v_k = Σ_{i=1..N} u_ki^m · x_i / Σ_{i=1..N} u_ki^m  (9)
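A compact FLICM sketch is given below. It assumes a 3×3 neighborhood with wrap-around borders for brevity (a real implementation would handle image borders explicitly), and the variable names mirror the fuzzy factor and update rules above:

```python
import numpy as np

def flicm(img, c=2, m=2.0, iters=30, seed=0):
    """Minimal FLICM clustering sketch for a 2-D gray image.
    3x3 neighborhood, wrap-around borders for brevity; returns
    hard labels taken from the final fuzzy memberships."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    x = img.astype(float).ravel()
    n = x.size
    u = rng.random((c, n))
    u /= u.sum(axis=0)                        # random fuzzy partition
    offs = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)]            # 3x3 window, center excluded
    for _ in range(iters):
        um = u ** m
        v = um @ x / um.sum(axis=1)           # cluster centers v_k
        dist = (x[None, :] - v[:, None]) ** 2 # ||x_i - v_k||^2
        D = dist.reshape(c, h, w)
        U = u.reshape(c, h, w)
        G = np.zeros((c, h, w))               # fuzzy factor G_ki
        for di, dj in offs:
            d_ij = np.hypot(di, dj)           # spatial distance to neighbor
            u_j = np.roll(U, (-di, -dj), axis=(1, 2))  # u_kj of neighbor j
            d_j = np.roll(D, (-di, -dj), axis=(1, 2))  # ||x_j - v_k||^2
            G += (1.0 / (d_ij + 1.0)) * (1.0 - u_j) ** m * d_j
        a = dist + G.reshape(c, n) + 1e-12    # ||x_i - v_k||^2 + G_ki
        inv = a ** (-1.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)             # membership update
    return u.argmax(axis=0).reshape(h, w)
```

On a clean two-valued image the two clusters separate the regions, and the neighborhood term makes the labeling robust to isolated noisy pixels without any tuning parameter, which is the main appeal of FLICM over plain FCM.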

3. Experimental Results and Discussion

In this section, two groups of SAR images and two groups of optical images are used in the simulation experiments. In order to evaluate the detection accuracy of the proposed algorithm more precisely, both subjective and objective evaluations are adopted. Several state-of-the-art change detection methods are compared, such as PCAKM [27], Gabor wavelet and two-level clustering (GaborTLC) [28], LMT [43], PCANet [30], NRELM [44], neighborhood-based ratio and collaborative representation (NRCR) [45], and convolutional-wavelet neural networks (CWNN) [32]. Meanwhile, the false negative (FN) [32], false positive (FP) [32], overall error (OE) [32], percentage correct classification (PCC) [32], kappa coefficient (KC) [32,46,47], and F1-score (F1) [18] are used as the objective evaluation metrics. Figure 2, Figure 3, Figure 4 and Figure 5 show the remote sensing images used in the experiments.
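The six criteria can be computed from the confusion counts of a binary change map against the reference. The sketch below uses the standard definitions (OE = FN + FP, PCC as overall accuracy, KC from expected chance agreement, F1 over the changed class):

```python
import numpy as np

def change_metrics(pred, ref):
    """FN, FP, OE, PCC, KC, F1 for binary change maps (1 = changed)."""
    pred = pred.astype(bool).ravel()
    ref = ref.astype(bool).ravel()
    tp = int(np.sum(pred & ref))    # changed, detected
    tn = int(np.sum(~pred & ~ref))  # unchanged, not detected
    fp = int(np.sum(pred & ~ref))   # unchanged, wrongly detected
    fn = int(np.sum(~pred & ref))   # changed, missed
    n = pred.size
    oe = fn + fp
    pcc = (tp + tn) / n
    # expected chance agreement for the kappa coefficient
    pre = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    kc = (pcc - pre) / (1 - pre)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return dict(FN=fn, FP=fp, OE=oe, PCC=pcc, KC=kc, F1=f1)
```

In the tables that follow, PCC, KC, and F1 are reported as percentages (multiplied by 100).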
Figure 2

Ottawa data set. (a) Image acquired in May 1997; (b) image acquired in August 1997; (c) reference.

Figure 3

Wenchuan data set. (a) Image acquired on 3 March 2008; (b) image acquired on 16 June 2008; (c) reference.

Figure 4

Mexico data set. (a) Image acquired in April 2000; (b) image acquired in May 2005; (c) reference.

Figure 5

Yambulla data set. (a) Image acquired on 1 October 2015; (b) image acquired on 6 February 2016; (c) reference.

3.1. Experimental Data

The first data set used in the experiments is the Ottawa data set, with a size of 290 × 350 pixels. The original images were obtained in May and August 1997, respectively, as shown in Figure 2a,b; the corresponding ground-truth image is depicted in Figure 2c. The second is the Wenchuan data set, with a size of 442 × 301, obtained by ESA/ASAR on 3 March 2008 and 16 June 2008, respectively, as shown in Figure 3a,b; the corresponding reference image is shown in Figure 3c. The third is the Mexico data set of optical images, with a size of 512 × 512, captured in April 2000 and May 2005, respectively. The two original images and the reference image are depicted in Figure 4. The fourth, the Yambulla data set, consists of two optical images with a size of 500 × 500 pixels (as shown in Figure 5); they were acquired on 1 October 2015 and 6 February 2016 over the area of the Yambulla State Forest (Australia). More details of the data sets are summarized in Table 1.
Table 1

The description of the four data sets used in the experiment.

Scenario  Location (Data Set)   Dates                             Event       Size       Satellite   Sensor Type
1         Ottawa, Canada        May 1997 / August 1997            Flood       290 × 350  Radarsat-1  SAR
2         Wenchuan, China       3 March 2008 / 16 June 2008       Earthquake  442 × 301  Radarsat-2  SAR
3         Mexico                April 2000 / May 2005             Fire        512 × 512  Landsat-7   Optical
4         Yambulla, Australia   1 October 2015 / 6 February 2016  Bushfire    500 × 500  Landsat-8   Optical

3.2. Analysis of the Difference Image

In this subsection, we discuss the difference images generated by the different methods and the change detection results generated by the FLICM model. Figure 6 shows the difference images computed by the log-ratio operator (LR), the mean-ratio operator (MR), and NSCT fusion, respectively.
Figure 6

The difference images with different methods. (a) Log-ratio operator; (b) mean-ratio operator; (c) NSCT fusion.

The performance of the difference images (DIs) computed by the LR, MR, and NSCT fusion models is evaluated by the empirical receiver operating characteristic (ROC) curves (as shown in Figure 7), which plot the true positive rate (TPR) versus the false positive rate (FPR). Moreover, two quantitative criteria derived from the ROC curve are calculated: the area under the curve (AUC) [48] and the diagonal distance (Ddist) [48]; the corresponding values are listed in Table 2. For both metrics, a larger value indicates better detection. From Table 2, we can see that the NSCT fusion model performs better than the LR and MR operators.
Figure 7

The ROC curves of operators generated DIs. (a) Ottawa; (b) Wenchuan; (c) Mexico; (d) Yambulla.

Table 2

The quantitative criteria AUC and Ddist of different operators on remote sensing image data sets.

Method   Ottawa           Wenchuan         Mexico           Yambulla
         AUC     Ddist    AUC     Ddist    AUC     Ddist    AUC     Ddist
LR       0.9573  1.2829   0.9618  1.2701   0.9877  1.3467   0.9954  1.3815
MR       0.9969  1.3828   0.9665  1.2953   0.9937  1.3689   0.9987  1.3980
NSCT     0.9980  1.3857   0.9729  1.3063   0.9938  1.3681   0.9990  1.3986
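Both criteria can be computed directly from a difference image and the reference map. The sketch below builds the empirical ROC curve by sweeping a threshold over the sorted DI values; Ddist is taken here as the curve's maximum Euclidean distance from the (FPR, TPR) = (1, 0) corner, an assumption consistent with the √2 ≈ 1.414 upper bound suggested by Table 2 (see [48] for the exact definition):

```python
import numpy as np

def roc_points(di, ref):
    """Empirical ROC: sweep a threshold over the DI values
    (ref: 1 = changed) and return FPR, TPR arrays."""
    order = np.argsort(-di.ravel())          # descending DI values
    r = ref.ravel()[order].astype(bool)
    tp = np.cumsum(r)                        # true positives so far
    fp = np.cumsum(~r)                       # false positives so far
    tpr = np.concatenate(([0.0], tp / max(tp[-1], 1)))
    fpr = np.concatenate(([0.0], fp / max(fp[-1], 1)))
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve (trapezoidal rule)."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

def ddist(fpr, tpr):
    """Maximum distance from the (1, 0) corner; sqrt(2) is perfect."""
    return float(np.max(np.hypot(fpr - 1.0, tpr)))
```

A DI that ranks every changed pixel above every unchanged one gives AUC = 1 and Ddist = √2, matching the upper ends of Table 2.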
Figure 8 shows the change detection results of the difference images with the FLICM model on the Ottawa data set, and the corresponding metric values are given in Table 3. Figure 8a has a high missed-detection rate; in other words, its FN value is too large. Figure 8b has a high false-detection rate, and its FP is large. Figure 8c is the best change detection result, with the highest PCC, KC, and F1 values; at the same time, it produces balanced FN and FP values and has the lowest OE. This shows that the fused difference image computed by the proposed method yields a better result than the single LR and MR images.
Figure 8

The change detection results with FLICM model. (a) LR_FLICM; (b) MR_FLICM; (c) NSCT_FLICM.

Table 3

The objective evaluations of change detection on Ottawa in Figure 8.

Method       FN    FP   OE    PCC (%)  KC (%)  F1 (%)
LR_FLICM     2588  224  2812  97.23    88.93   90.54
MR_FLICM     340   896  1236  98.78    95.49   96.21
NSCT_FLICM   658   366  1024  98.99    96.18   96.78
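As a quick arithmetic check (an illustration, not part of the method), the PCC entries in Table 3 follow directly from the OE entries and the Ottawa image size of 290 × 350 = 101,500 pixels via PCC = (1 − OE/N) × 100:

```python
# Sanity check of Table 3: PCC = (1 - OE / N) * 100 for the
# Ottawa data set (N = 290 * 350 pixels)
n = 290 * 350
table3 = {"LR_FLICM": (2812, 97.23),
          "MR_FLICM": (1236, 98.78),
          "NSCT_FLICM": (1024, 98.99)}
for name, (oe, pcc) in table3.items():
    assert round(100 * (1 - oe / n), 2) == pcc, name
```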

3.3. Experimental Comparison

The change detection results generated by the proposed remote sensing image change detection algorithm, as well as seven comparative approaches, are depicted in Figure 9, Figure 10, Figure 11 and Figure 12 and Table 4, Table 5, Table 6 and Table 7.
Figure 9

The results of different methods on Ottawa data set. (a) PCAKM; (b) GaborTLC; (c) LMT; (d) PCANet; (e) NRELM; (f) NRCR; (g) CWNN; (h) proposed method; (i) reference.

Figure 10

The results of different methods on Wenchuan data set. (a) PCAKM; (b) GaborTLC; (c) LMT; (d) PCANet; (e) NRELM; (f) NRCR; (g) CWNN; (h) proposed method; (i) reference.

Figure 11

The results of different methods on Mexico data set. (a) PCAKM; (b) GaborTLC; (c) LMT; (d) PCANet; (e) NRELM; (f) NRCR; (g) CWNN; (h) proposed method; (i) reference.

Figure 12

The results of different methods on Yambulla data set. (a) PCAKM; (b) GaborTLC; (c) LMT; (d) PCANet; (e) NRELM; (f) NRCR; (g) CWNN; (h) proposed method; (i) reference.

Table 4

The objective evaluations of change detection on Ottawa in Figure 9.

Method    FN    FP    OE    PCC (%)  KC (%)  F1 (%)
PCAKM     1901  582   2483  97.55    90.49   91.93
GaborTLC  2531  253   2784  97.26    89.07   90.66
LMT       5266  23    5289  94.79    77.43   80.31
PCANet    1011  839   1850  98.18    93.12   94.21
NRELM     1157  578   1735  98.29    93.48   94.50
NRCR      739   1900  2639  97.40    90.51   92.07
CWNN      399   1208  1607  98.42    94.17   95.12
Proposed  658   366   1024  98.99    96.18   96.78
Table 5

The objective evaluations of change detection on Wenchuan in Figure 10.

Method    FN    FP    OE     PCC (%)  KC (%)  F1 (%)
PCAKM     7111  939   8050   93.95    76.27   79.73
GaborTLC  8155  688   8843   93.35    73.27   76.98
LMT       9333  635   9968   92.51    69.11   73.19
PCANet    5284  1437  6721   94.95    81.04   84.01
NRELM     6492  873   7365   94.46    78.52   81.71
NRCR      7638  713   8351   93.72    75.02   78.56
CWNN      9720  578   10298  92.26    67.80   71.97
Proposed  3612  2117  5729   95.69    84.51   87.09
Table 6

The objective evaluations of change detection on Mexico in Figure 11.

Method    FN    FP    OE    PCC (%)  KC (%)  F1 (%)
PCAKM     5543  759   6302  97.60    85.11   86.42
GaborTLC  8515  296   8811  96.64    77.73   79.49
LMT       5855  640   6495  97.52    84.53   85.87
PCANet    4946  713   5659  97.84    86.77   87.95
NRELM     3702  943   4645  98.23    89.43   90.41
NRCR      3734  1252  4986  98.10    88.72   89.76
CWNN      4491  1053  5544  97.89    87.23   88.39
Proposed  3316  1223  4539  98.27    89.80   90.75
Table 7

The objective evaluations of change detection on Yambulla in Figure 12.

Method    FN    FP   OE    PCC (%)  KC (%)  F1 (%)
PCAKM     2956  116  3072  98.77    92.86   93.54
GaborTLC  6105  34   6139  97.54    84.83   86.15
LMT       4571  60   4631  98.15    88.90   89.91
PCANet    3979  134  4113  98.35    90.27   91.17
NRELM     7325  33   7358  97.06    81.37   82.93
NRCR      6348  31   6379  97.45    84.16   85.53
CWNN      2629  153  2782  98.89    93.58   94.20
Proposed  1782  227  2009  99.20    95.44   95.89
Figure 9 shows the change maps on the Ottawa data set. From the results, it can be seen that the LMT method gives the worst performance, with the highest FN value. The PCAKM and GaborTLC methods suffer from missed detections and lose some detail information. The NRCR method produces more false detections, exhibiting many isolated spots and the highest FP value. The PCANet and NRELM algorithms perform similarly, but both still miss some changed pixels and have high FN values. The visual result obtained by the CWNN technique is better than those of the six methods above, although it still contains some false detections and a high FP value. The proposed change detection model achieves the best performance among all compared approaches, and its change map is closest to the reference image. Table 4 lists the FN, FP, OE, PCC, KC, and F1 values of the different change detection algorithms on the Ottawa data set. The proposed method achieves the best OE, PCC, KC, and F1 values, which is consistent with the visual results.

Figure 10 shows the change maps on the Wenchuan data set, and the corresponding quantitative evaluation is given in Table 5. It can be observed that the PCAKM, GaborTLC, LMT, PCANet, NRELM, NRCR, and CWNN methods all suffer from missed detections and have high FN values; the FN of the CWNN model is the highest. The result obtained by the proposed method is the best: it produces balanced FN and FP values and matches the reference image most closely. From Table 5, we can conclude that the FN, OE, PCC, KC, and F1 values achieved by the proposed technique are the best, while the best FP value is obtained by the CWNN model. KC is a more comprehensive evaluation metric, and the KC value of the proposed method is 8.24%, 11.24%, 15.40%, 3.47%, 5.99%, 9.49%, and 16.71% higher than those of PCAKM, GaborTLC, LMT, PCANet, NRELM, NRCR, and CWNN, respectively.

Figure 11 and Table 6 give the results on the Mexico data set. From the results, it can be seen that the PCAKM, GaborTLC, LMT, and PCANet approaches show high missed detection with high FN values, and the FN of GaborTLC is the highest. The NRELM, NRCR, and CWNN techniques perform better than the aforementioned four methods. The result generated by the proposed technique is visually the best among the compared methods. From the data in Table 6, the FN, OE, PCC, KC, and F1 values of our method are the best, while the best FP value is achieved by the GaborTLC method. The KC value of the proposed algorithm is 4.69%, 12.07%, 5.27%, 3.03%, 0.37%, 1.08%, and 2.57% higher than those of PCAKM, GaborTLC, LMT, PCANet, NRELM, NRCR, and CWNN, respectively. The qualitative and quantitative evaluations of this group of experiments are consistent.

Figure 12 depicts the change maps on the Yambulla data set. From the results, it can be seen that the GaborTLC, LMT, NRELM, and NRCR techniques suppress noise and reduce the false detection rate, but they show high missed detection. The PCAKM, PCANet, and CWNN methods perform better, yet their missed detection rates are still high. Compared with the other seven algorithms, the change map generated by our method is the best, with the lowest missed detection rate. From the data in Table 7, the FN, OE, PCC, KC, and F1 values achieved by the proposed method are the best. The KC value of the proposed method is 2.58%, 10.61%, 6.54%, 5.17%, 14.07%, 11.28%, and 1.86% higher than those of PCAKM, GaborTLC, LMT, PCANet, NRELM, NRCR, and CWNN, respectively. The qualitative and quantitative evaluations of this group of data are consistent, which demonstrates the superiority of our algorithm.

To verify the effectiveness and superiority of the proposed algorithm further, we average the results over the four groups of remote sensing images, as shown in Table 8. The metric distributions of each data set and each comparison algorithm are shown in Figure 13, with the average values given in the legend. From Table 8, we can see that the FN, OE, PCC, KC, and F1 scores of the proposed method are the best, which objectively confirms the effectiveness of the proposed algorithm.
Table 8

The average objective evaluations of change detection on the four data sets.

Method    FN    FP   OE    PCC (%)  KC (%)  F1 (%)
PCAKM     4378  599  4977  96.97    86.18   87.91
GaborTLC  6327  318  6644  96.20    81.22   83.32
LMT       6256  340  6596  95.74    79.99   82.32
PCANet    3805  781  4586  97.33    87.80   89.33
NRELM     4669  607  5276  97.01    85.70   87.39
NRCR      4615  974  5589  96.67    84.60   86.48
CWNN      4310  748  5058  96.86    85.69   87.42
Proposed  2342  983  3325  98.04    91.48   92.63
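The entries of Table 8 are per-data-set means of Tables 4, 5, 6 and 7 and can be checked directly; as an illustration for the proposed method:

```python
# Proposed method's FN / FP / OE on Ottawa, Wenchuan, Mexico,
# Yambulla (Tables 4-7), averaged as in Table 8
fn = [658, 3612, 3316, 1782]
fp = [366, 2117, 1223, 227]
oe = [1024, 5729, 4539, 2009]
print(round(sum(fn) / 4), round(sum(fp) / 4), round(sum(oe) / 4))
# prints: 2342 983 3325 (matching the Table 8 row)
```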
Figure 13

Objective performance of the methods on different data sets. (a) FN; (b) FP; (c) OE; (d) PCC; (e) KC; (f) F1.

4. Conclusions

In this paper, a novel remote sensing image change detection method based on NSCT fusion and the FLICM model is proposed. The background information in the log-ratio image is relatively flat, while the change area information reflected by the mean-ratio image is relatively consistent with the real change trend of the remote sensing image. Therefore, the log-ratio image and mean-ratio image can be integrated into one new difference image with complementary information. Based on this analysis, the difference images generated by the log-ratio and mean-ratio operators are fused by the NSCT model to obtain the fused difference image. Then, the FLICM model is used to generate the final change detection map. We carried out simulation experiments on four groups of remote sensing images, and the results verify the effectiveness of our algorithm through qualitative and quantitative comparisons with other algorithms. Our method can be effectively applied to land cover, flood, earthquake, and forest fire monitoring. In our experiments, we only verified change detection on homogeneous remote sensing images; in future work, we will explore and improve the proposed algorithm for change detection in heterogeneous remote sensing images.