
Modeling of texture quantification and image classification for change prediction due to COVID lockdown using Skysat and Planetscope imagery.

Amit Kumar Shakya1, Ayushman Ramola1, Anurag Vidyarthi2.   

Abstract

This research work models two methods together to provide maximum information about a study area. The quantification of image texture is performed using the "grey level co-occurrence matrix (GLCM)" technique. Image-classification-based "object-based change detection (OBCD)" methods are used to visually represent the transformation developed in the study area. Pre-COVID and post-COVID (during lockdown) panchromatic images of Connaught Place, New Delhi, are investigated in this research work to develop a model for the study area. Texture classification of the study area is performed based on visual texture features for eight distances and four orientations. Six different image classification methodologies are used for mapping the study area: "Parallelepiped classification (PC)," "Minimum distance classification (MDC)," "Maximum likelihood classification (MLC)," "Spectral angle mapper (SAM)," "Spectral information divergence (SID)," and "Support vector machine (SVM)." GLCM calculations have revealed a pattern in the texture features contrast, correlation, ASM, and IDM. Maximum classification accuracies of 83.68% and 73.65% are obtained for the pre-COVID and post-COVID image data through the MLC technique. Finally, a model is presented to analyze before and after COVID images to obtain complete information about the study area numerically and visually.
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2021.


Keywords:  Grey level co-occurrence matrix; Image classification; Object-based change detection; Texture quantification

Year:  2021        PMID: 34458559      PMCID: PMC8384559          DOI: 10.1007/s40808-021-01258-6

Source DB:  PubMed          Journal:  Model Earth Syst Environ


Introduction

Estimating changes developed in land use/land cover is a hot research area these days. Researchers investigate changes in the land pattern through satellite data, microsatellite data, drone data, unmanned aerial vehicle (UAV) data, terrain analysis, etc. (Chen et al. 2018). Several space agencies have conducted a series of successful space exploration missions, like the celebrated Apollo mission (Papanastassiou and Wasserburg 1971), the Hubble mission (Baker et al. 2020), the Voyager mission (Cohen and Rymer 2020), the Cassini-Huygens mission (Sotin et al. 2021), and the Chandra mission (Tomsick et al. 2021) of the National Aeronautics and Space Administration (NASA). Aryabhata (Damle et al. 1976), Chandrayaan 2 carried by GSLV Mark 3 (Chandrashekar 2016), Mangalyaan (Haider and Pandya 2015), launching 104 satellites in a single attempt (Muraleedharan et al. 2019), etc., are prominent successful space missions conducted by the Indian Space Research Organization (ISRO). These space missions demonstrate the capability of each space agency in space exploration, and through them the agencies generate extensive data to analyze specific situations or to save records for future analysis (Mathieu et al. 2017). The data used in this research work is a perfect example of this scenario: the pre-COVID image of the study area was captured for general purposes, whereas the post-COVID image was captured specifically to study the consequences of the lockdown. Thus, the combination of pre-COVID and post-COVID data becomes a great scenario for remote sensing professionals, scientists, and researchers to explore. Nowadays, small aircraft-like devices, popularly known as "drones," are also used for data collection and day-to-day purposes (Otto et al. 2018). These devices are operated by a human expert or some onboard computer device (Jiang et al. 2020).
They are also used for several applications related to medical diagnostics, defense, transportation, film making, scientific research, firefighting, emergency services, etc. (Kerle et al. 2020). UAVs are now introduced in satellite mapping of areas affected by landslides (Niethammer et al. 2012), crop damage assessment caused by natural phenomena (Maimaitijiang et al. 2020), mapping disputed territory (a defense application) (Li et al. 2020), model development of terrain, etc. Today, the world is afflicted by the novel coronavirus (Nascimento et al. 2020; Wang et al. 2020a, b, c). In this situation, the UAV has found some new application areas, like spraying disinfectant, scanning body temperature, broadcasting messages at extremely dangerous COVID hotspots, cargo delivery, codes, connectivity, mapping, etc. (d’Italie 2020). Thus, besides satellites, UAVs, drones, high-resolution optical cameras, etc. are some of the primary sources through which high-quality imagery can be obtained. Figure 1 pictorially represents the application areas of drones. It can be observed that in the coming future, many earth exploration activities will be performed with the assistance of drones.
Fig. 1

Application areas of drones (d’Italie 2020)

In satellite remote sensing, change detection methodologies are broadly classified into two categories (Woodcock et al. 2020), i.e., "pixel-based change detection (PBCD)" and "object-based change detection (OBCD)" (Hussain et al. 2013). Pre-classification techniques provide information about the study area in binary (change/no change) format. Another popular approach is post-classification comparison, through which information about the study area is obtained by analysing the difference developed in the classification classes. Through post-classification techniques, when the pre- and post-images of an event are classified, the comparison among them is performed by analyzing the same category in both images. PBCD and OBCD methodologies are further categorized into several techniques, presented in Table 1.
Table 1

Classification of various PBCD and OBCD techniques

PBCD and OBCD change detection techniques

Pre-classification change detection, PBCD techniques (binary information about changes):
- Image differencing (ID) (Seydi et al. 2020; Arefin et al. 2020)
- Image ratio (IR) (Zhu et al. 2020; Kalaiselvi and Gomathi 2020)
- Image regression (IReg) (Zhao et al. 2021; Zhang et al. 2020)
- Vegetation index differencing (VID) (Polykretis et al. 2020)
- Change vector analysis (CVA) (Du et al. 2020)
- Principal component analysis (PCA) (Schwartz et al. 2020)

Post-classification change detection, PBCD techniques (detailed information about changes):
- Composite or multi-date classification (Venugopal 2020)
- Machine learning (Pati et al. 2020)
- GIS-based (Wang et al. 2020a, b, c)
- Texture analysis (Hajeb et al. 2020)
- Fuzzy change detection (Liu et al. 2020)
- Multi-sensor data fusion (Wang et al. 2020a, b, c)
- Deep learning (Khelifi and Mignotte 2020)

Post-classification change detection, OBCD techniques (detailed information about changes):
- Direct object change detection (DOCD) (Saha et al. 2021)
- Classified object change detection (COCD) (Shi et al. 2020)
- Multi-temporal object change detection (MOCD) (Eid et al. 2020)
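As a concrete illustration of the simplest PBCD entry in Table 1, image differencing subtracts two co-registered images pixel by pixel and thresholds the result into a binary change/no-change map. The following minimal NumPy sketch uses hypothetical toy arrays and an illustrative threshold, not values from this study:

```python
import numpy as np

def image_differencing(pre, post, threshold):
    """Binary PBCD change map: True where |post - pre| > threshold."""
    diff = np.abs(post.astype(np.int32) - pre.astype(np.int32))
    return diff > threshold

# Toy 3x3 "pre" and "post" images (hypothetical grey values).
pre = np.array([[10, 10, 10],
                [10, 50, 10],
                [10, 10, 10]], dtype=np.uint8)
post = np.array([[10, 10, 10],
                 [10, 200, 10],
                 [12, 10, 10]], dtype=np.uint8)

change_map = image_differencing(pre, post, threshold=30)
print(change_map.sum())  # only the centre pixel exceeds the threshold
```

Note that the threshold choice drives the trade-off between missed changes and false alarms, which is exactly the limitation that motivates the post-classification and object-based techniques listed in the table.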
Some notable work done in satellite remote sensing by the fusion of two algorithms, techniques, or classification schemes is presented in these reviews. Garg and Dhiman (2021) proposed a fusion of "grey level co-occurrence matrix (GLCM)" features and "local binary pattern (LBP)" features to develop a novel "content-based image retrieval (CBIR)" system. They used three different classification approaches in their experiment, i.e., support vector machine (SVM), decision tree (DT) algorithm, and K-nearest neighbourhood, and concluded that their proposed algorithm performs better, with superior recall, precision, and accuracy. Iqbal et al. (2021) used GLCM features fused with a "machine learning (ML)" approach to obtain similarity in different crop fields. The investigation was performed on UAV-based low-altitude high-resolution data, and they obtained phenomenal results with the merger of these two techniques, increasing the overall accuracy of their developed system. Caballero et al. (2020) obtained band imagery to differentiate between onion and sunflower crops; in their classification technique, they used a combination of two classification approaches and thereby obtained high "overall accuracy (OA)" and "Kappa coefficient (K)" values while differentiating the onion crop from the sunflower crop. Singh and Singh (2020) used SCATSAT-1 data to distinguish "multi-year ice" and "first-year ice" of the arctic region using "maximum likelihood classification (MLC)," obtaining high overall classification accuracy in their experiment. Rimal et al. (2020) used Landsat imagery of the Kathmandu valley of Nepal between 1988 and 2016 to compare the efficiency of object-based and pixel-based image classification algorithms. The experimental results obtained from their investigation suggest that the object-based approach performs better than the pixel-based classification algorithm.
Thus, scientists and researchers are working to develop new methodologies by combining two or more techniques to obtain maximum accuracy and complete information from image classification and feature quantification. In this research work, a model is presented employing a combination of PBCD (texture-analysis-based GLCM) and OBCD (classified object change detection, COCD) techniques by analyzing the pre-COVID and post-COVID (during lockdown) panchromatic images of Connaught Place, New Delhi, India. The pixel-based texture analysis technique is used for texture classification and quantification of the study area. GLCM provides information about the statistical and spectral behavior of the image pixels through mathematical analysis. The quantification of the features for pre-COVID and post-COVID images produces a new relationship among the features. Histogram signature plotting represents the changes in the frequency of intensity values of the study area. The OBCD technique provides information about the study area in a different pattern. In this technique, for the pre-COVID and post-COVID images, "regions of interest (ROIs)" are selected by allotting pixel values to the ROIs. These behave as "regions" based on which classification of the study area is performed. Another set of ROIs is also created, which assists in the accuracy assessment. The advantage of the OBCD technique over the PBCD technique lies in the "visual point of view." In this classification, there is also the possibility to compare only a "particular class," leaving the remaining classes. The article is divided into six separate sections. "Background of PBCD (GLCM) and OBCD techniques" provides detailed background information about the PBCD and OBCD techniques. "Background of Skysat satellite program and details of the study area" presents a brief report on the Skysat satellite program and the study area. "Experimental results" offers details regarding the experimental results of texture quantification and image classification.
“Discussion” presents discussions and outcomes from the proposed research work. Finally, “Conclusion” offers concluding remarks on the research work.

Background of PBCD (GLCM) and OBCD techniques

GLCM-based texture classification technique

Texture is an essential aspect of gathering information from remote sensing images. Through texture analysis, spectral as well as spatial information of the study area is obtained. The technique is extensively used in various remote sensing applications. Haralick et al. (1973) invented GLCM and presented a set of "fourteen" different features to classify image texture. Later, a lot of work was done on these features; Gotlieb and Kreyszig (1990) organized the fourteen features into a set of four different categories. Visual texture features are considered most important in remote sensing applications because they directly impact human visual perception. The visual texture features include contrast, correlation, angular second moment (ASM), and inverse difference moment (IDM) (Haralick et al. 1973). The GLCM formation from an input image is presented in Fig. 2. The location of the pixel positions in the input image is illustrated in Fig. 2a. The input image is presented in Fig. 2b. The GLCM of the input image is shown in Fig. 2c. The normalized GLCM is presented in Fig. 2d.
Fig. 2

a Pixel position of the input image, b input image, c GLCM of the input image, and d normalized representation of the GLCM

GLCM calculation for any input image depends on two critical parameters: "distance" and "angle of orientation." The distance represents the space between the "pixel of interest" and the "neighboring pixel." This distance can be varied to obtain different values of the texture features, starting from d = 1, 2, 3, and so on. The orientation angle represents the direction of the variation of the texture features for a given distance. The orientation angle of an image can vary from 0° to 315°. This situation can be understood from Fig. 3, where different combinations of distances and orientations from a "pixel of interest (POI)" are presented.
Fig. 3

POI pixel, distance, and orientation arrangement from the center image pixel

Let us assume an image with $N_x$ resolution cells in the "horizontal direction" and $N_y$ resolution cells in the "vertical direction." The grey tones appearing in the image are quantized to $N_g$ levels. Then the "horizontal spatial domain," "vertical spatial domain," and the "set of quantized grey levels" are expressed by $L_x = \{1, 2, \ldots, N_x\}$, $L_y = \{1, 2, \ldots, N_y\}$, and $G = \{1, 2, \ldots, N_g\}$. The set $L_y \times L_x$ is the set of resolution cells, and the grey tone in each resolution cell is thus expressed by the image function $I : L_y \times L_x \to G$. The co-occurrence matrices for the angles quantized from 0° to 315° are expressed by Eqs. (1)–(8); for example, for the angle 0° and distance $d$, $P(i, j, d, 0^{\circ}) = \#\{((k, l), (m, n)) : k - m = 0, |l - n| = d, I(k, l) = i, I(m, n) = j\}$, where $I(k, l)$ represents the image pixels. Texture features developed by Haralick (1973) other than the visual texture features are presented in Eqs. (9)–(18). These features are based on information theory, statistical measures, and information measures of correlation. For the GLCM features based on "information theory," in particular entropy, $k$ and $l$ denote elements of the "row" and "column," respectively, of the co-occurrence matrix, and $p(k, l)$ represents the "probability of the co-occurrence matrix" corresponding to $(k, l)$; similarly, $p_x(k)$ and $p_y(l)$ represent the marginal probabilities of the co-occurrence matrix corresponding to row $k$ and column $l$. For the GLCM features based on the "information measure of correlation," $HX$ and $HY$ are the entropies of $p_x$ and $p_y$. The GLCM features representing visual texture features are presented in Table 2, which explains their mathematical notation, range, and a discussion of these features.
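The co-occurrence counting described above can be sketched in a few lines. The following illustrative Python function builds a symmetric, normalized GLCM for distance d at the 0° orientation; the toy image and its three grey levels are hypothetical values chosen for demonstration:

```python
import numpy as np

def glcm(image, levels, d=1):
    """Grey-level co-occurrence matrix for distance d at 0 degrees,
    counted symmetrically, then normalized to probabilities p(k, l)."""
    P = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols - d):
            i, j = image[r, c], image[r, c + d]
            P[i, j] += 1
            P[j, i] += 1  # symmetric counting of the pixel pair
    return P / P.sum()

# Hypothetical 3x3 image quantized to 3 grey levels.
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 0]], dtype=int)
P = glcm(img, levels=3)
print(round(P.sum(), 6))  # the normalized matrix sums to 1.0
```

The other seven orientations of Eqs. (1)–(8) differ only in the offset applied between the pixel of interest and its neighbour.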
Table 2

Discussion of the visual texture features

1. IDM: $f(k, l) = \sum_{k}\sum_{l} \frac{1}{1 + (k - l)^{2}}\, p(k, l)$. This parameter measures the closeness of the distribution between a pixel of interest and neighboring pixels. It is the measure of the similarity of the GLCM elements along the GLCM diagonal. The normalized range of homogeneity is [0, 1].

2. ASM: $f(k, l) = \sum_{k}\sum_{l} \{p(k, l)\}^{2}$. It is a measure of the sum of squared elements in the GLCM. This parameter is also known as "energy." The normalized range of "energy" is [0, 1].

3. Correlation: $f(k, l) = \frac{\sum_{k}\sum_{l} (k\,l)\, p(k, l) - \mu_{k}\mu_{l}}{\sigma_{k}\sigma_{l}}$. It is the measure of the "joint probability occurrence" of the specified pixel pairs. The normalized range of the correlation is [-1, 1], but a good correlation value lies in the positive direction.

4. Contrast: $f(k, l) = \sum_{n=0}^{n_{g}-1} n^{2} \left\{ \sum_{k=1}^{n_{g}} \sum_{l=1}^{n_{g}} p(k, l) \right\}$, with $|k - l| = n$. It is the measure of the local variations in the GLCM. The normalized range of contrast is [0, 1]. $n_{g}$ is the number of "distinct grey levels" in the image, and the difference in the grey levels is $|k - l|$; if $|k - l|$ is zero everywhere, the image is a constant image with no variation.
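The four visual features of Table 2 can be computed directly from a normalized co-occurrence matrix. The sketch below is a minimal NumPy illustration; the uniform 2x2 matrix is a hypothetical example, not data from this study:

```python
import numpy as np

def visual_features(P):
    """Contrast, correlation, ASM and IDM of a normalized GLCM P."""
    levels = P.shape[0]
    k, l = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    mu_k, mu_l = (k * P).sum(), (l * P).sum()
    sigma_k = np.sqrt(((k - mu_k) ** 2 * P).sum())
    sigma_l = np.sqrt(((l - mu_l) ** 2 * P).sum())
    contrast = ((k - l) ** 2 * P).sum()
    correlation = ((k - mu_k) * (l - mu_l) * P).sum() / (sigma_k * sigma_l)
    asm = (P ** 2).sum()
    idm = (P / (1.0 + (k - l) ** 2)).sum()
    return contrast, correlation, asm, idm

# Hypothetical uniform 2x2 normalized co-occurrence matrix.
P = np.array([[0.25, 0.25],
              [0.25, 0.25]])
print(visual_features(P))
```

For this uniform matrix, all pixel pairs are equally likely, so the correlation is zero and ASM takes its minimum for a 2x2 matrix.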
The pixel-based GLCM technique has several advantages and shortcomings, listed as follows.

Advantages of GLCM

In the GLCM-based change detection technique, spectral and spatial information of the study area is obtained. GLCM offers two different procedures for the quantification of the image pixels: first, the selection of a window of fixed dimensions that is moved over the complete image; second, numerical quantification of the total image pixels, on the basis of which the texture features can be quantified. GLCM can offer information about the image features in four categories, based on human visual perception, statistical measures, entropy measures, and correlation information. The future scope of GLCM can be understood from the fact that earlier it could provide information only on two-dimensional (2D) surfaces; today, researchers and scientists have developed a procedure to calculate the GLCM across three dimensions, known as 3D GLCM. GLCM can calculate the pixel brightness of the image through different combinations of the image pixels. GLCM is used in remote sensing applications, but today the technique is also used in earth scattering data analysis to predict "Earthquake" and "Tsunami" possibilities. An essential advantage of GLCM is that its features can be obtained for a "single orientation and distance" as well as for a "combination of orientations and distances."

Shortcoming of GLCM

Computation of the GLCM is a time-consuming process. The main problem during GLCM calculation is the computational cost of the pixel-to-pixel combinations of the image. This computational issue can be overcome by using GLCM with the Sobel operator.

Background of the image classification technique

Image classification techniques are used to classify an image into several small objects or classes. These objects can be classified as soil, urban, agriculture, plants, trees, water, etc. When an image is segregated using an image classification technique, all the essential areas of the image can be classified into objects or classes. The areas of the image selected as objects depend upon the type of study; e.g., Fig. 4a consists of a band-fused "Phased Array type L-band Synthetic Aperture Radar (PALSAR)" image of the Roorkee region of Uttarakhand, India. This image is classified using three different classification techniques into four classes, i.e., bare soil (black color), water (blue color), urban (red color), and agriculture (green color). Thus, if anyone wishes to study only two classes, water and agriculture, their task is completed by Fig. 4d. If a study of three categories is required, they can opt for Fig. 4c, and information about four classes can be obtained through Fig. 4b. Thus, performing an image classification and creating the number of objects depends entirely upon the application.
Fig. 4

a Band fused data, b classified image with four prominent class, c classified image with three prominent class, and d classified image with two prominent class (Data Courtesy: Japanese Aerospace Exploration Agency, JAXA)

A comparison can be made when the pre- and post-images related to any natural phenomenon are classified based on several classes. This type of image classification approach falls under object-based change detection (OBCD) (Zhang et al. 2018). When the comparison is made between two images using OBCD methodologies, they are compared based on standard classification measures, i.e., "user accuracy (UA) (Tong and Feng 2020)," "producer accuracy (PA) (Tong and Feng 2020)," "commission error (CE) (Agariga et al. 2021)," "omission error (OE) (Agariga et al. 2021)," "overall accuracy (OA) (Tong and Feng 2020)," and "kappa coefficient (K) (Tong and Feng 2020)." Some of the prominent image classification techniques using object formation to classify an image are "Maximum likelihood classification (MLC)" (Soni et al. 2021), "Spectral angle mapper (SAM)" (Wang et al. 2021), "Support vector machine (SVM)" (Leonga et al. 2021), "Minimum distance classification (MDC)" (Nie et al. 2021), "Parallelepiped classification (PC)" (Kundu et al. 2021), and "Spectral information divergence (SID)" (Hunt 2021). Brief details of these classification techniques and their methodologies are presented in these reviews.

Parallelepiped classification (PC)

PC uses a "decision rule" to classify multispectral and hyperspectral image data. The image data boundaries are created through an "n-dimensional parallelepiped" in the image data space. Figure 5 represents the classification of the image data through parallelepiped classification. Here, image pixels are classified into four different classes, i.e., Class A, Class B, Class C, and unclassified pixels. While performing these classifications, pixels of one class may get merged with another class's pixels unintentionally. In this classification scheme, the user estimates the minimum and maximum pixel values corresponding to each band, or a range expressed in terms of a "standard deviation" on either side of the "mean" of each feature. These values determine the scope of the parallelepiped classification, i.e., for Band A the range of the class is $\mu_A - \sigma_A$ to $\mu_A + \sigma_A$, and for Band B the range of the class is $\mu_B - \sigma_B$ to $\mu_B + \sigma_B$, where $\sigma$ is the standard deviation of the image band.
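The mean-plus/minus-standard-deviation decision rule described above can be sketched as follows; the class statistics, pixels, and one-sigma box width are hypothetical illustrative values:

```python
import numpy as np

def parallelepiped_classify(pixels, means, stds):
    """Assign each pixel to the first class whose mean +/- std box
    contains it in every band; -1 marks unclassified pixels."""
    labels = np.full(len(pixels), -1)
    for idx, x in enumerate(pixels):
        for c, (mu, sd) in enumerate(zip(means, stds)):
            if np.all(np.abs(x - mu) <= sd):
                labels[idx] = c
                break  # first matching box wins
    return labels

# Hypothetical two-band statistics for two classes.
means = [np.array([10.0, 10.0]), np.array([50.0, 50.0])]
stds = [np.array([5.0, 5.0]), np.array([5.0, 5.0])]
pixels = np.array([[12.0, 9.0], [48.0, 53.0], [30.0, 30.0]])
print(parallelepiped_classify(pixels, means, stds))
```

The third pixel falls inside neither box and stays unclassified, which illustrates the shortcoming discussed below.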
Fig. 5

Pixel allotment to a particular class according to mean value


Advantage of parallelepiped classification

This technique performs fast image classification. It is suitable for non-normal distributions and can be applied over a limited land cover area.

Shortcoming of parallelepiped classification

Overlapping of classified classes is allowed in this classification technique, resulting in less accurate results. In this classification technique, not all pixels are classified. One of the main problems with this algorithm arises when pixels are spectrally far apart from the signature mean, which affects the pixel classification.

Minimum distance classification (MDC)

The MDC technique is employed to classify "unknown image data" into the separate object classes presented in Fig. 6. This approach's main objective is to minimize the distance between the different object classes and the unknown image data. In other words, the distance is considered the parameter of similarity, so the minimum distance between two observation classes corresponds to the maximum similarity. The minimum distance can be calculated with the assistance of Eq. (19), $d(x, \mu_k) = \sqrt{\sum_{i} (x_i - \mu_{k,i})^2}$, where $x$ is defined as the test pixel and $\mu_k$ represents the mean value of the classified object class $k$.
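A minimal sketch of the MDC rule, assigning each pixel to the class with the nearest mean vector under the Euclidean distance of Eq. (19); the class means and pixels are hypothetical illustrative values:

```python
import numpy as np

def minimum_distance_classify(pixels, means):
    """Assign each pixel to the class whose mean vector is nearest
    in Euclidean distance (Eq. 19)."""
    # Shape (n_pixels, n_classes): distance from every pixel to every mean.
    d = np.linalg.norm(pixels[:, None, :] - np.asarray(means)[None, :, :],
                       axis=2)
    return d.argmin(axis=1)

means = [np.array([10.0, 10.0]), np.array([50.0, 50.0])]
pixels = np.array([[12.0, 9.0], [48.0, 53.0], [30.0, 31.0]])
print(minimum_distance_classify(pixels, means))
```

Unlike the parallelepiped rule, every pixel receives a label here, since some mean is always nearest.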
Fig. 6

Unknown data allotment to a particular class


Advantage of minimum distance classification

All the regions of the n-dimensional space get classified under this classification scheme. The main advantage of this scheme is that no overlapping of the image classes occurs during the image classification.

Shortcoming of minimum distance classification

In this classification scheme, spectral variability is assumed to be the same in all directions, which can produce false results.

Maximum likelihood classification (MLC)

This classification technique assumes that an individual class's statistics in the respective band are normally distributed, as represented in Fig. 7. The methodology calculates the "probability of a particular pixel" belonging to a specific category, and the pixel is assigned to the object class having the "maximum probability" or "maximum likelihood."
Fig. 7

Pixel allotment in “Maximum likelihood image classification”

The mathematical relationship for establishing the maximum likelihood between the image pixels is expressed by Eq. (20), $L_k(x) = (2\pi)^{-n/2} \left| \Sigma_k \right|^{-1/2} \exp\left\{-\frac{1}{2}(x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k)\right\}$, where $x$ represents the band image data, $L_k(x)$ is the expression of the "likelihood" of the pixel $x$ belonging to class $k$, $\mu_k$ represents the "mean vector" of class $k$, and $\Sigma_k$ is the representation of the "variance-covariance matrix."
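Assuming the standard multivariate-Gaussian form of the likelihood, the MLC decision rule can be sketched in log space as follows (the class statistics and pixels are hypothetical illustrative values):

```python
import numpy as np

def mlc_classify(pixels, means, covs):
    """Assign each pixel to the class with maximum Gaussian
    log-likelihood; constant terms common to all classes are dropped."""
    scores = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        diff = pixels - mu
        # Mahalanobis term (x - mu)^T Sigma^-1 (x - mu) per pixel.
        maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores.append(-0.5 * (logdet + maha))
    return np.argmax(scores, axis=0)

means = [np.array([10.0, 10.0]), np.array([50.0, 50.0])]
covs = [np.eye(2) * 25.0, np.eye(2) * 25.0]
pixels = np.array([[12.0, 9.0], [48.0, 53.0]])
print(mlc_classify(pixels, means, covs))
```

With equal diagonal covariances, this rule reduces to minimum-distance classification; the covariance term is what lets MLC separate classes with different spectral spreads.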

Advantage of maximum likelihood classification

This scheme is assumed to be the most sophisticated as under this image classification scheme, good separation between different classes is obtained.

Shortcoming of maximum likelihood classification

Accuracy assessment requires intensive training of the dataset to describe the covariance and mean structure of the classes.
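The Gaussian decision rule of Eq. (20) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the class means, covariances, and pixels are hypothetical values.

```python
import numpy as np

def max_likelihood_classify(pixels, means, covs):
    """Assign each pixel to the class with the highest Gaussian
    log-likelihood, using per-class mean vectors and covariance matrices."""
    scores = []
    for k in range(len(means)):
        diff = pixels - means[k]                          # (n, d)
        inv = np.linalg.inv(covs[k])
        maha = np.einsum('ni,ij,nj->n', diff, inv, diff)  # Mahalanobis term
        logdet = np.log(np.linalg.det(covs[k]))
        scores.append(-0.5 * (maha + logdet))             # constant term dropped
    return np.argmax(np.stack(scores, axis=1), axis=1)

# hypothetical two-class statistics in a two-band space
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
px = np.array([[0.5, -0.2], [4.8, 5.1]])
print(max_likelihood_classify(px, means, covs))  # -> [0 1]
```

The need to estimate a full covariance matrix per class is exactly why this method demands intensive training data, as noted above.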

Spectral angle mapper (SAM)

In this classification technique, the created object’s spectrum is compared with an already known reference spectrum. The classification result is an image giving the best match for each individual image pixel. The similarity is analyzed in terms of the angle between vectors originating from the origin; the length of a vector usually represents reflection intensity, as illustrated in Fig. 8.
Fig. 8

Representation of the spectral angle mapper corresponding to “Band A” and “Band B”

The spectral angle describes the difference between the spectra of Band A and Band B. An image is classified into several classes by evaluating the angle formed between the reference spectrum and the object’s spectrum. The angle between the two vectors is obtained from their cosine and is expressed by Eq. (21), where alpha is the angle between the two vectors, the summation runs over the total number of spectral bands, and the remaining terms denote the target pixel spectrum and the reference spectrum.

Advantage of spectral angle mapper

This method is considered a user-friendly and quick way to map the spectral similarity of image spectra against a reference spectrum. It produces good classification results even in the presence of scaling noise.

Shortcoming of spectral angle mapper

This technique ignores vector magnitude, so it fails to discriminate between spectra that differ only in intensity in many instances.
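Equation (21) reduces to an arccosine of a normalized dot product, which makes both the advantage (insensitivity to scaling) and the shortcoming (blindness to magnitude) easy to see. A minimal sketch, with made-up spectra:

```python
import numpy as np

def spectral_angle(target, reference):
    """Spectral angle (radians) between two spectra, as in Eq. (21):
    alpha = arccos( t.r / (|t| |r|) )."""
    cos_a = np.dot(target, reference) / (
        np.linalg.norm(target) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

ref = np.array([0.2, 0.4, 0.6])
bright = 2.5 * ref                  # same material under stronger illumination
print(spectral_angle(bright, ref))  # ~0: SAM is insensitive to scaling
```

Since `bright` differs from `ref` only in magnitude, the angle is essentially zero: the two would be mapped to the same class even though their intensities differ.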

Spectral information divergence (SID)

SID uses a “spectral classification approach” to compare image pixels with the reference spectrum. The tool used for the comparison is a divergence measure: a small divergence indicates similar pixels, and pixels with divergence values above a predefined threshold are left unclassified under this approach. End-member spectra can be extracted directly from an image. The method computes “spectral similarity” based on the divergence between the probability distributions of two spectra. Assume a reference spectrum and a test spectrum: the distribution of the reference spectrum is expressed by Eq. (22), the distribution of the test spectrum by Eq. (23), and the SID between the reference and test spectra by Eq. (24). Figure 9 represents the stacking of bands from Band 1 to Band 8 to develop a multiband image model using the SID technique.
Fig. 9

Band fusion in the spectral information divergence


Advantage of spectral information divergence

SID measures the amount of deviation by analyzing the probabilistic behavior of the pixels’ spectral signatures. The comparison is grounded in information theory, which is considered more effective in retaining spectral properties; through this methodology, the spectral similarity between two image pixels can also be measured.

Shortcoming of spectral information divergence

SID is considered an efficient image processing technique; its critical drawback is that variation in light intensity changes the output results and thus affects the classification.
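The divergence of Eqs. (22)–(24) can be sketched as a symmetric KL-type measure between spectra normalized into probability distributions. This is an illustrative sketch under that assumption, not the authors' code; the spectra are made-up values.

```python
import numpy as np

def spectral_information_divergence(reference, test, eps=1e-12):
    """SID between two spectra: each spectrum is normalised into a
    probability distribution (Eqs. 22-23), then the two relative
    entropies are summed (Eq. 24)."""
    p = reference / reference.sum()   # reference distribution
    q = test / test.sum()             # test distribution
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)))
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)))
    return kl_pq + kl_qp

a = np.array([0.2, 0.4, 0.6])
print(spectral_information_divergence(a, 3.0 * a))   # ~0: identical shapes
print(spectral_information_divergence(a, np.array([0.6, 0.4, 0.2])) > 0)
```

Because each spectrum is first normalized, uniform brightness changes cancel, but any change in the *shape* of the spectrum (e.g. caused by varying light conditions across bands) shifts the distributions and hence the output, which is the shortcoming noted above.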

Support vector machine (SVM)

The “support vector machine (SVM)” classification technique is based on “supervised learning for data analysis and study.” This machine-learning technique supports both image classification and regression analysis, and follows the kernel principle to perform linear and nonlinear regression. In this algorithm, two hyperplanes defined by “support vector 1” and “support vector 2” separate the classes, and the pixels to be classified are situated on either side of the hyperplane, as represented in Fig. 10. This classification algorithm was developed by “Hava Siegelmann” and “Vladimir Vapnik” (Tiwari et al. 2021). It was initially developed for computer vision and pattern recognition but was later used in satellite remote sensing and image processing applications.
Fig. 10

Hyperplane diagram for support vector machine

The hypothesis function is expressed by Eq. (25). Thus the pixels “above the hyperplane” are classified as +1, and the pixels “below the hyperplane” are classified as −1.

Advantage of the SVM

SVM performs effective classification in high-dimensional spaces compared with the nearest-neighbor algorithm, and remains effective in cases where the number of dimensions exceeds the number of samples. SVM is a versatile technique in which different kernel functions can be specified for the decision function. The methodology works most efficiently when the separation margin between the classes is evident.

Shortcoming of SVM

Several parameters, such as the kernel, the regularization parameter, and the gamma function, need to be set correctly in the classification approach to obtain better classification results, so several parameters must be tuned simultaneously. SVM does not directly provide probability estimates; these require a “fivefold cross-validation process” to compute. A good classification result is difficult to obtain through this methodology when the dataset contains extreme noise.
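The decision rule of Eq. (25) can be sketched directly. In this illustrative sketch the weight vector `w` and bias `b` are assumed to have already been learned from the support vectors; the hyperplane and pixel values below are hypothetical, not from the study area.

```python
import numpy as np

def svm_decide(pixels, w, b):
    """Eq. (25) decision rule: sign(w.x + b);
    +1 for pixels above the hyperplane, -1 for pixels below it."""
    return np.where(pixels @ w + b >= 0, 1, -1)

# hypothetical separating hyperplane: x0 + x1 - 10 = 0
w, b = np.array([1.0, 1.0]), -10.0
px = np.array([[8.0, 8.0], [1.0, 2.0]])
print(svm_decide(px, w, b))  # -> [ 1 -1]
```

In practice the margin (and hence `w`, `b`) is found by the SVM optimizer, and the kernel, regularization, and gamma parameters mentioned above control that fit.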

Background of Skysat satellite program and details of the study area

In today’s world, several state governments and private industries compete to extract massive and vital information from the Earth. Several government space agencies are actively researching information about the origin of the novel coronavirus and its impact on the daily activity of ordinary human beings. Currently, six different space agencies, the “European Space Agency (ESA) (Wörner 1975),” “National Aeronautics and Space Administration (NASA) (Dunbar 1958),” “Japan Aerospace Exploration Agency (JAXA) (Yamakawa 2003),” “Russian Federal Space Agency (Roscosmos) (Government 1992),” “China National Space Administration (CNSA) (Kejian 1993)” and “Indian Space Research Organisation (ISRO) (Sarabhai 1969),” are working in the field of satellite launch and recovery, each with its own launch capacity. Besides these, several other government-sponsored space agencies, like the “Canadian Space Agency (François-Philippe 1989),” “UK Space Agency (Annett 2010),” “Australian Space Agency (Palermo 2018),” etc., are actively working in the field of Earth exploration and remote sensing. Some prominent private players in space exploration include “SpaceX (Musk 2002),” “Boeing (Calhoun 1916),” “Sierra Nevada Corporation (Corporation 1963),” “Orbital (Thompson 1982),” etc. Skysat is a commercial microsatellite program of Skybox Imaging. These Earth-observation satellites were developed to collect high-resolution multispectral and panchromatic images of the Earth's surface. The constellation consists of 21 satellites dedicated to Earth imaging, owned by the private firm Planet. The first satellite, Skysat-1, was launched on 21 November 2013 (Marshall et al. 2010), and on 18 August 2020 Skysat 19–21 were launched (Marshall et al. 2010). The orbital type of Skysat 1–15 is sun-synchronous, whereas that of Skysat 16–21 is non-sun-synchronous.
Skysat 1–2 have an orbital altitude of 600 km, Skysat 3–15 of 500 km, and Skysat 16–18 of 400 km. The sensors installed on these satellite systems operate at spectral bandwidths of blue 450–515 nm, green 515–595 nm, red 605–695 nm, NIR 740–900 nm, and panchromatic 450–900 nm (Marshall et al. 2010). In this research work, two panchromatic images, pre-COVID and post-COVID (during lockdown), are obtained from the Skysat image database under the “research and training program” (Marshall et al. 2010). The pre-COVID and post-COVID images of Connaught Place, New Delhi, investigated in this research work are represented in Fig. 11d, e; they were acquired on 30 April 2019 and 14 April 2020, respectively (Marshall et al. 2010). The Connaught Place study area is popularly known as “Rajiv Chowk” (Hazarika et al. 2015). It is the leading financial and commercial center of the national capital of India, located at the heart of the capital at a latitude and longitude of 28°37′58″ N and 77°13′11″ E (Hazarika et al. 2015). The area of Connaught Place is 2.36 km2. Rapid urbanization in Connaught Place has increased energy consumption and traffic density. Moreover, the pollution level of Connaught Place is the highest in the National Capital Region (Shukla et al. 2020); the pollutant concentration has even touched 999 μg/m3 during the worst periods (Mukherjee et al. 2020). The temperature of Connaught Place rises to 45 °C during the summer months of April–June and falls to 8 °C during the winter months of December and January. All these features significantly affect image classification and texture quantification.
Fig. 11

Earthly location of the Connaught Place, New Delhi (Study area)


Experimental results

Change estimation through the GLCM-based PBCD technique

In this investigation, three different band images of the study area are fused with the layer-stacking technique available in ENVI 5.2. The layer-stacking approach generates a band-fused image of the study area containing all the essential bands, to which the GLCM method is then applied. Figure 12a–c present three single-band pre-COVID images of the study area, and Fig. 12d presents the band-fused image obtained by stacking them.
Fig. 12

Pre-COVID image of the study area: a–c single-band images, d stacked (band-fused) image

The band-fused image of the study area is converted to a grey-level image, represented in Fig. 13, to identify the changes in the study area. Through the PBCD approach one can identify changes only as a whole; no separate object-based information is provided by PBCD. Figure 13a represents the grey-level representation of the band-fused pre-COVID image of the study area. The histogram signature plot of the band-fused image, illustrated in Fig. 13b, represents the frequency distribution of the intensity values of the image pixels. The visual texture features for the pre-COVID image are quantified and presented in Table 3; the feature values are calculated over the total image pixels. Texture features are quantified for four orientation angles, i.e. 0°, 45°, 90° and 135°, and eight different distances, dis = 1 to dis = 8. The texture features are then averaged to make the “GLCM direction independent.”
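The quantification step above can be sketched in plain NumPy: build a co-occurrence matrix for one distance/orientation offset, then derive contrast, correlation, ASM and IDM from it. This is a minimal sketch on a toy 4×4 image, not the authors' ENVI-based pipeline.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Symmetric, normalised grey-level co-occurrence matrix for offset (dx, dy)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < h and 0 <= j2 < w:
                g[img[i, j], img[i2, j2]] += 1
                g[img[i2, j2], img[i, j]] += 1  # count symmetrically
    return g / g.sum()

def glcm_features(p):
    """Contrast, correlation, ASM and IDM of a normalised GLCM p."""
    lv = p.shape[0]
    i, j = np.indices((lv, lv))
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    contrast = ((i - j) ** 2 * p).sum()
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)
    asm = (p ** 2).sum()
    idm = (p / (1 + (i - j) ** 2)).sum()
    return contrast, correlation, asm, idm

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, dx=1, dy=0, levels=4)   # distance 1, orientation 0 degrees
print(glcm_features(p))
```

Repeating the call over dis = 1…8 and the four offsets corresponding to 0°, 45°, 90° and 135°, then averaging each feature over the angles, reproduces the direction-independent procedure described above.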
Fig. 13

Pre-COVID image. a Gray level representation. b Histogram signature plot

Table 3

Quantification of the features for pre-COVID image

GLCM feature | Angular orientation (°) | dis = 1 | dis = 2 | dis = 3 | dis = 4 | dis = 5 | dis = 6 | dis = 7 | dis = 8 | Sum | Average
Contrast | 0 | 0.4527 | 0.8610 | 1.1757 | 1.4287 | 1.6420 | 1.8276 | 1.9975 | 2.1562 | 11.5414 | 1.4426
Correlation | 0 | 0.9042 | 0.8177 | 0.7511 | 0.6975 | 0.6524 | 0.6131 | 0.5772 | 0.5436 | 5.5568 | 0.6946
ASM | 0 | 0.1028 | 0.0806 | 0.0715 | 0.0655 | 0.0611 | 0.0578 | 0.0552 | 0.0530 | 0.5475 | 0.0684
IDM | 0 | 0.8252 | 0.7537 | 0.7171 | 0.6907 | 0.6699 | 0.6532 | 0.6390 | 0.6269 | 5.5757 | 0.6969
Contrast | 45 | 0.7352 | 1.2436 | 1.6188 | 1.9020 | 2.1416 | 2.3552 | 2.3569 | 2.7146 | 15.0679 | 1.8834
Correlation | 45 | 0.8443 | 0.7367 | 0.6573 | 0.5973 | 0.5466 | 0.5014 | 0.4608 | 0.4254 | 4.7698 | 0.5962
ASM | 45 | 0.0856 | 0.0708 | 0.0627 | 0.0577 | 0.0542 | 0.0515 | 0.0494 | 0.0478 | 0.4797 | 0.0599
IDM | 45 | 0.7724 | 0.7136 | 0.6763 | 0.6514 | 0.6324 | 0.6166 | 0.6033 | 0.5924 | 5.2584 | 0.6573
Contrast | 90 | 0.5065 | 0.9858 | 1.3112 | 1.5822 | 1.8095 | 1.9985 | 2.1675 | 2.3210 | 12.6822 | 1.5852
Correlation | 90 | 0.8928 | 0.7913 | 0.7224 | 0.6650 | 0.6168 | 0.5768 | 0.5410 | 0.5085 | 5.3146 | 0.6643
ASM | 90 | 0.0976 | 0.0769 | 0.0689 | 0.0632 | 0.0592 | 0.0568 | 0.0538 | 0.0519 | 0.5283 | 0.0660
IDM | 90 | 0.8121 | 0.7401 | 0.7058 | 0.6789 | 0.6588 | 0.6431 | 0.6300 | 0.6190 | 5.4878 | 0.6859
Contrast | 135 | 0.6374 | 1.1029 | 1.4891 | 1.7935 | 2.0462 | 2.2635 | 2.4556 | 2.6266 | 14.4148 | 1.8018
Correlation | 135 | 0.8651 | 0.7665 | 0.6847 | 0.6203 | 0.5668 | 0.5208 | 0.4801 | 0.4440 | 4.9483 | 0.6185
ASM | 135 | 0.0889 | 0.0730 | 0.0641 | 0.0586 | 0.0548 | 0.0519 | 0.0498 | 0.0481 | 0.4892 | 0.0611
IDM | 135 | 0.7845 | 0.7237 | 0.6839 | 0.6567 | 0.6360 | 0.6195 | 0.6061 | 0.5948 | 5.3052 | 0.6631
The texture features contrast, correlation, ASM, and IDM are plotted to obtain their specific behavior. At 0°, across the eight distances dis = 1 to dis = 8, it has been observed that contrast increases sharply with distance. The correlation feature presents a sharp decline in value. The energy (ASM) feature also shows a reduction in value, but its rate of decline is very slow. Finally, the IDM feature presents a sharp decline similar to that of the correlation feature, as represented in Fig. 14a.
Fig. 14

Comparison of the texture features for pre-COVID image at different distances. a Pre-COVID (0°). b Pre-COVID (45°). c Pre-COVID (90°). d Pre-COVID (135°)

Similarly, the feature behavior corresponding to 45° is presented in Fig. 14b, to 90° in Fig. 14c, and to 135° in Fig. 14d. It has been observed that the features follow the same pattern in the quantified values for the different orientation angles. Thus this analysis reveals new information about the GLCM: the approach can also be used for pattern recognition. The GLCM is mainly considered an approach for texture classification, but it provides information about the image texture as a whole; specific details about any classified object cannot be identified from it. Similarly, the post-COVID image of the study area is explored, and texture-feature quantification and plotting are performed to obtain new information. The post-COVID image analysis of the study area is presented in Fig. 15. Figure 15a–c represent three single-band images of the study area, which are fused to obtain a multiband image. A general assumption about the post-COVID image is that low or no traffic will be observed on the streets, and, due to the continuous lockdown, that the air quality will undoubtedly improve; it is therefore interesting to observe the behavior of the texture features for the post-COVID image. The layer-stacking technique available in ENVI 5.2 is adopted to perform the band fusion, and Fig. 15d represents the band-fused post-COVID image of the study area.
Fig. 15

Post-COVID image of the study area: a–c single-band images, d stacked (band-fused) image

The grey-level representation of the post-COVID study area is presented in Fig. 16. Figure 16a represents the post-COVID image of the study area, which visually looks similar to the pre-COVID image; no significant change can be identified visually. The difference in the study area can be obtained through quantification of the texture features, and the developed modifications appear in the histogram signature plot. The histogram signature plot in Fig. 16b visually represents the changing pattern of the surface of the study area. The quantification of the texture features for the post-COVID image is presented in Table 4.
Fig. 16

Post-COVID image. a Gray level representation. b Histogram signature plot

Table 4

Quantification of the features for post-COVID image

GLCM feature | Angular orientation (°) | dis = 1 | dis = 2 | dis = 3 | dis = 4 | dis = 5 | dis = 6 | dis = 7 | dis = 8 | Sum | Average
Contrast | 0 | 0.4527 | 0.8610 | 1.1757 | 1.4287 | 1.6420 | 1.8276 | 1.9975 | 2.1562 | 11.5414 | 1.4426
Correlation | 0 | 0.9042 | 0.8177 | 0.7511 | 0.6975 | 0.6524 | 0.6131 | 0.5772 | 0.5436 | 5.5568 | 0.6946
ASM | 0 | 0.1028 | 0.0806 | 0.0715 | 0.0655 | 0.0611 | 0.0578 | 0.0552 | 0.0530 | 0.5475 | 0.0684
IDM | 0 | 0.8252 | 0.7537 | 0.7171 | 0.6907 | 0.6699 | 0.6532 | 0.6390 | 0.6269 | 5.5757 | 0.6969
Contrast | 45 | 0.7352 | 1.2436 | 1.6188 | 1.9020 | 2.1416 | 2.3552 | 2.3569 | 2.7146 | 15.0679 | 1.8834
Correlation | 45 | 0.8443 | 0.7367 | 0.6573 | 0.5973 | 0.5466 | 0.5014 | 0.4608 | 0.4254 | 4.7698 | 0.5962
ASM | 45 | 0.0856 | 0.0708 | 0.0627 | 0.0577 | 0.0542 | 0.0515 | 0.0494 | 0.0478 | 0.4797 | 0.0599
IDM | 45 | 0.7724 | 0.7136 | 0.6763 | 0.6514 | 0.6324 | 0.6166 | 0.6033 | 0.5924 | 5.2584 | 0.6573
Contrast | 90 | 0.5065 | 0.9858 | 1.3112 | 1.5822 | 1.8095 | 1.9985 | 2.1675 | 2.3210 | 12.6822 | 1.5852
Correlation | 90 | 0.8928 | 0.7913 | 0.7224 | 0.6650 | 0.6168 | 0.5768 | 0.5410 | 0.5085 | 5.3146 | 0.6643
ASM | 90 | 0.0976 | 0.0769 | 0.0689 | 0.0632 | 0.0592 | 0.0568 | 0.0538 | 0.0519 | 0.5283 | 0.0660
IDM | 90 | 0.8121 | 0.7401 | 0.7058 | 0.6789 | 0.6588 | 0.6431 | 0.6300 | 0.6190 | 5.4878 | 0.6859
Contrast | 135 | 0.6374 | 1.1029 | 1.4891 | 1.7935 | 2.0462 | 2.2635 | 2.4556 | 2.6266 | 14.4148 | 1.8018
Correlation | 135 | 0.8651 | 0.7665 | 0.6847 | 0.6203 | 0.5668 | 0.5208 | 0.4801 | 0.4440 | 4.9483 | 0.6185
ASM | 135 | 0.0889 | 0.0730 | 0.0641 | 0.0586 | 0.0548 | 0.0519 | 0.0498 | 0.0481 | 0.4892 | 0.0611
IDM | 135 | 0.7845 | 0.7237 | 0.6839 | 0.6567 | 0.6360 | 0.6195 | 0.6061 | 0.5948 | 5.3052 | 0.6631
The texture features for the post-COVID image appear nearly identical to those of the pre-COVID image, as shown in Fig. 17. The texture feature contrast shows an increasing pattern for 0°, 45°, 90° and 135°. Correlation shows a decreasing pattern for all four orientations, and its rate of decrease is high compared with the remaining texture features. The ASM feature shows the smallest rate of decrease, and the IDM feature also shows a decreasing pattern for all four orientations. It has been observed that when the GLCM is used to compare two multiband images, the change developed between the two images may possibly be identified visually. Histogram signature plotting is one method to determine the occurrence of changes, but the most important procedure to detect the developed changes is the comparison of the average texture features. As discussed earlier, the features are averaged to make the GLCM direction independent. Figure 18a compares the texture features for the pre-COVID and post-COVID images. Contrast obtains the highest peak value: it is highest for the post-COVID image at 0°, 45° and 135°, whereas at 90° contrast is higher for the pre-COVID image. Correlation obtains a higher value for the pre-COVID image at all orientations, while ASM and IDM obtain higher values for the post-COVID image at all orientations. Thus, the texture features obtain higher values for the post-COVID image in most cases. Figure 18b represents the change in the texture features along the positive or negative end.
On five occasions, the difference of the texture features between the pre-COVID and post-COVID images is positive, and on eleven occasions it is negative. This also suggests that the post-COVID image texture features have obtained higher values than those of the pre-COVID image.
Fig. 17

Comparison of the texture features for post-COVID image at different distances. a Post-COVID (0°). b Post-COVID (45°). c Post-COVID (90°). d Post-COVID (135°)

Fig. 18

a Change in the texture features corresponding to pre-COVID and post-COVID image. b Change pattern analysis of the feature values


Change analysis through object-based image classification techniques

While performing image classification through object-based techniques, a particular area is classified into several objects. These objects are geographical features of the study area such as water, land, soil, trees, etc. Some prominent terms associated with object-based image classification are the commission error (CE), omission error (OE), user accuracy (UA), producer accuracy (PA), overall accuracy (OA), and the Kappa coefficient (K). The procedure to calculate these classification measures can be understood through the example confusion matrix represented in Fig. 19. Here a confusion matrix with an arbitrary image classification is assumed; the random image is classified into four different classes: water, vegetation, urban, and soil.
Fig. 19

Test data for the image classification

Now the different parameters corresponding to the image classification are computed as follows. The commission error (CE) is generated when pixels of another class get wrongly introduced into the class under observation; the CE for the classified categories is expressed by Eqs. (26)–(29). The omission error (OE) refers to classified pixels that are accidentally omitted from the classes under investigation; the OE for the classified classes is expressed by Eqs. (30)–(33). User accuracy (UA) is defined as the “accuracy” from the perspective of the “map user,” not the “map maker”; the UA for the classified classes is expressed by Eqs. (34)–(38). Producer accuracy (PA) is defined as the “accuracy” from the perspective of the “map maker,” not the “map user”; the PA for the classified classes is expressed by Eqs. (39)–(43). Overall accuracy (OA) can be understood as the ratio of the “correctly classified pixels” to the “total number of pixels” present in the image; it is expressed by Eq. (44) and is calculated accordingly for the confusion matrix presented in Fig. 19. Finally, the Kappa coefficient (K) is calculated to assess classification accuracy. It measures how well the classification of the study-area dataset is performed, and its range is −1 to +1. Mathematically it is expressed by Eq. (45), where the “observed agreement” among the classification raters, assumed identical to the overall accuracy, is compared with the “theoretical probability of chance agreement.” While performing image classification using K, the following assumptions are kept in mind: if K lies close to −1, the image classification is assumed to be poor; if K lies close to 0, the classification is random; and if K lies close to +1, the classification is assumed to be significantly realistic and close to accurate.
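The accuracy measures above can be computed directly from any confusion matrix. The sketch below uses a hypothetical two-class matrix, not the Fig. 19 data, with rows taken as classified (map) classes and columns as reference classes.

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, per-class user's/producer's accuracy and Kappa
    from a confusion matrix (rows: classified classes, cols: reference)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    diag = np.diag(cm)
    oa = diag.sum() / total                    # overall accuracy, Eq. (44)
    ua = diag / cm.sum(axis=1)                 # user's accuracy = 1 - commission error
    pa = diag / cm.sum(axis=0)                 # producer's accuracy = 1 - omission error
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)               # Kappa coefficient, Eq. (45)
    return oa, ua, pa, kappa

# hypothetical 2-class confusion matrix
cm = [[45, 5],
      [10, 40]]
oa, ua, pa, kappa = accuracy_metrics(cm)
print(round(oa, 3), round(kappa, 3))  # -> 0.85 0.7
```

Here the observed agreement (0.85) exceeds the chance agreement (0.5), giving K = 0.7, i.e. a classification well toward the "close to +1" end of the scale described above.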
In early 2020, coronavirus was assumed to have spread across the entire world. In the United States, during early January 2020, coronavirus was considered to have only limited spread (Jorden et al. 2020). In China, cases related to the novel coronavirus began to be reported from late 2019 (Xu et al. 2020). Issues related to coronavirus were highlighted in early 2020, and the government of South Africa imposed a lockdown by March 2020 (Atangana and Araz 2020). Figure 20 presents the geographical locations of some prominent places before and during the lockdown. Here, the pre-lockdown images represent the normal day-to-day activities of the people. A sudden stop in everyday activities is reported during the lockdown, i.e., no people on the roads, empty roads, no industrial activities, etc.
Fig. 20

Pre- and post-COVID Skysat images of various locations. Pre-COVID: a China, Beijing (12 April 2020); c Sudan, Omdurman (23 April 2020); e Tyson Foods, Washington, USA (30 April 2020); g South Africa, Johannesburg (27 September 2019). Post-COVID: b China, Beijing (12 April 2020); d Sudan, Omdurman (23 April 2020); f Tyson Foods, Washington, USA (30 April 2020); h South Africa, Johannesburg (27 September 2019)

The pre- and post-COVID study-area images are classified using six different image classification techniques, i.e., PC, MDC, MLC, SAM, SID, and SVM. The image classification is performed on six different classes, i.e., buildings, trees, roads, grasslands, metro, and cars. Two different sets of pixels are created for classifying the images: set 1 contains the image pixels used for training, and set 2 the image pixels used for accuracy assessment. Table 5 presents details of the pixel counts collected for training and accuracy assessment of the image data. Pre-COVID classification results of the study area are shown in Fig. 21.
Table 5

Pixel counts for training and accuracy assessment of study area

S. no.  Class      Pre-COVID training pixels  Pre-COVID accuracy pixels  Post-COVID training pixels  Post-COVID accuracy pixels
1       Building   1471                       1611                       1458                        1544
2       Tree       1090                       1469                       1107                        1484
3       Road       1540                       1095                       1678                        1578
4       Grassland  1748                       1723                       1478                        1536
5       Metro      1898                       2183                       1788                        1849
6       Car        1026                       1024                       1395                        1748
Fig. 21

Pre-COVID image classification. a “Parallelepiped classification”, b “Minimum distance classification”, c “Maximum likelihood classification”, d “Spectral angle mapper”, e “Spectral Information Divergence”, f “Support Vector Machine”

The image classification features for the different classification schemes are tabulated in Table 6.
Table 6

Classification features for the pre-COVID classified image

Scheme                             Class      UA (%)   PA (%)   OE (%)   CE (%)   Overall accuracy          Kappa coefficient
Parallelepiped classification      Building    31.50    92.55     7.45    68.50   (3841/6428) = 59.7542%    0.3068
                                   Tree        63.98   100.00     0.00    36.02
                                   Road        34.74    51.14    48.86    65.26
                                   Grassland    0.00     0.00   100.00   100.00
                                   Metro       73.04    14.52    85.48    26.96
                                   Car        100.00     0.39    99.61     0.00
Minimum distance classification    Building    76.97    75.04    24.96    23.03   (4821/6428) = 75.0000%    0.6853
                                   Tree        99.44    96.32     3.68     0.56
                                   Road        53.32    99.54     0.46    46.68
                                   Grassland   79.59    90.20     9.80    20.41
                                   Metro       45.83     1.97    98.03    54.17
                                   Car         48.54    24.11    75.89    51.46
Maximum likelihood classification  Building    75.41    82.65    17.35    24.59   (4149/6428) = 64.5457%    0.7885
                                   Tree       100.00   100.00     0.00     0.00
                                   Road        88.97    93.81     6.19    11.03
                                   Grassland   91.59    75.55    24.45     8.41
                                   Metro       51.77    45.61    54.39    48.23
                                   Car         53.48    30.47    69.53    46.52
Spectral angle mapper              Building    42.02    58.85    41.15    57.98   (3508/6428) = 54.5737%    0.2791
                                   Tree       100.00    19.61    80.39     0.00
                                   Road        54.29    17.35    82.65    45.71
                                   Grassland   71.96    96.81     3.19    28.04
                                   Metro        0.00     0.00   100.00   100.00
                                   Car         25.17    40.43    59.57    74.83
Spectral information divergence    Building    28.21    33.46    66.54    76.79   (1494/6428) = 23.2420%    0.0154
                                   Tree        81.36    12.19    87.81    18.64
                                   Road        35.09    21.92    78.08    64.91
                                   Grassland   11.28     8.94    91.06    88.72
                                   Metro       16.51    10.58    89.42    83.49
                                   Car          5.22    14.75    85.25    94.78
Support vector machine             Building    66.92    65.05    34.95    33.08   (1332/6428) = 20.7218%    0.5131
                                   Tree        98.94    89.38    10.62     1.06
                                   Road        37.31    95.98     4.02    62.69
                                   Grassland   98.46    96.40     3.60     1.54
                                   Metro        0.00     0.00   100.00   100.00
                                   Car         16.23    25.29    74.71    83.77
The image classification of the post-COVID study area is performed and presented in Fig. 22. The image classification features for the different classification schemes obtained for the post-COVID image are tabulated in Table 7.
Fig. 22

Post-COVID image classification. a “Parallelepiped classification”, b “Minimum distance classification”, c “Maximum likelihood classification”, d “Spectral angle mapper”, e “Spectral Information Divergence”, f “Support Vector Machine”

Table 7

Classification features for the post-COVID classified image

Scheme                             Class      UA (%)   PA (%)   OE (%)   CE (%)   Overall accuracy          Kappa coefficient
Parallelepiped classification      Building    25.96    99.45     0.55    74.04   (4350/7727) = 56.2961%    0.4749
                                   Tree        98.09    99.55     0.45     1.91
                                   Road        86.29    99.39     0.61    13.71
                                   Grassland    0.00     0.00   100.00   100.00
                                   Metro      100.00    16.60    83.40     0.00
                                   Car          0.00     0.00   100.00   100.00
Minimum distance classification    Building    66.45    76.84    23.16    33.55   (5189/7727) = 67.1541%    0.6013
                                   Tree        99.90    78.15    21.85     0.10
                                   Road        72.97    99.94     0.06    27.03
                                   Grassland   93.21    80.86    19.14     6.79
                                   Metro       31.73    12.04    87.96    68.27
                                   Car          0.75    33.33    66.67    99.25
Maximum likelihood classification  Building    66.30    77.57    22.43    33.70   (5691/7727) = 73.6508%    0.6746
                                   Tree        96.71    98.66     1.34     3.29
                                   Road        85.25    99.02     0.98    14.75
                                   Grassland   81.07    78.11    21.98    18.93
                                   Metro       48.53    27.51    72.49    51.47
                                   Car          0.48     9.52    90.48    99.52
Spectral angle mapper              Building    45.84    23.45    76.55    54.16   (3479/7727) = 45.0239%    0.3298
                                   Tree        89.47    36.44    63.56    10.53
                                   Road        68.54    45.84    54.16    31.46
                                   Grassland   54.45    36.54    63.46    45.55
                                   Metro       78.84    28.54    71.46    21.16
                                   Car         68.54    34.45    65.55    31.46
Spectral information divergence    Building    26.95    60.48    39.52    73.05   (2260/7727) = 29.2481%    0.1735
                                   Tree        94.11    63.16    36.84     5.89
                                   Road         8.05     3.86    96.14    91.95
                                   Grassland   33.94    35.63    64.37    66.06
                                   Metro       81.82     2.42    97.58    18.18
                                   Car          1.48    57.14    42.86    98.52
Support vector machine             Building    31.49    93.47     6.53    68.51   (2758/7727) = 35.6930%    0.2285
                                   Tree        41.15    96.87     3.13    58.85
                                   Road        32.99    27.05    72.95    67.01
                                   Grassland    0.00     0.00   100.00     0.00
                                   Metro        0.00     0.00   100.00     0.00
                                   Car          0.00     0.00   100.00     0.00
The classification of the pre-COVID and post-COVID images through the different object-based classification techniques visually presents the change in the study area. For the pre-COVID image, the MLC scheme has shown the most satisfactory results both visually and numerically, and for the post-COVID image the MLC scheme has again produced the most reliable results; in both cases, most of the classification classes have appeared in the MLC output. In the PC technique, the building, tree, and road classes are most dominant, while the other classes remain inactive during the classification. In the pre-COVID image classified through the MDC technique, four classes are prevalent, i.e., trees, cars, roads, and buildings, whereas three classes are dominant in the post-COVID classified image, i.e., trees, roads, and buildings. In the pre-COVID image classified through the SVM technique, four classes are most prevalent, i.e., building, grassland, tree, and road, whereas only three classes are dominant in the post-COVID classified image, i.e., building, tree, and road. The SID technique has produced the most abrupt classification results: in the pre-COVID image the class car is most dominant, whereas in the post-COVID image the class building is dominant, and overall this scheme has produced the worst classification results. Finally, through the SAM classification scheme, buildings, trees, and roads are the most prevalent classes in the pre-COVID classified image, while in the post-COVID image trees, cars, and roads appear as the most dominant classes.
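The tabulated figures can also be compared programmatically. A small sketch that ranks the six schemes by kappa coefficient using the post-COVID values from Table 7 (the dictionary below simply restates those table values; ties are broken by overall accuracy):

```python
# Summary of Table 7 (post-COVID): scheme -> (overall accuracy %, kappa coefficient)
post_covid = {
    "PC":  (56.2961, 0.4749),
    "MDC": (67.1541, 0.6013),
    "MLC": (73.6508, 0.6746),
    "SAM": (45.0239, 0.3298),
    "SID": (29.2481, 0.1735),
    "SVM": (35.6930, 0.2285),
}

# Rank schemes by kappa (descending), breaking ties with overall accuracy
ranking = sorted(post_covid, key=lambda s: (post_covid[s][1], post_covid[s][0]),
                 reverse=True)
best, worst = ranking[0], ranking[-1]
```

Ranking by kappa reproduces the conclusion drawn above: MLC comes out on top and SID at the bottom for the post-COVID image.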
The linear (degree 1) fit between the kappa coefficient (K) and the overall accuracy (OA) is presented in Fig. 23 for the pre-COVID and post-COVID images. The fit suggests that the overall classification accuracy is directly proportional to K. One important conclusion derived from this experiment is that MLC has emerged as the most reasonable classification scheme, with superior accuracy and kappa coefficient.
Fig. 23

Linear (degree 1) relationship between kappa coefficient and overall accuracy. a Pre-COVID. b Post-COVID

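The degree-1 fit of Fig. 23 can be reproduced with an ordinary least-squares line. A sketch using the pre-COVID OA and kappa values from Table 6 (the exact fit statistics reported for Fig. 23 are not assumed here):

```python
import numpy as np

# Overall accuracy (%) and kappa for the six pre-COVID schemes, in the order
# PC, MDC, MLC, SAM, SID, SVM (values from Table 6)
oa    = np.array([59.7542, 75.0000, 64.5457, 54.5737, 23.2420, 20.7218])
kappa = np.array([0.3068, 0.6853, 0.7885, 0.2791, 0.0154, 0.5131])

# Degree-1 (linear) least-squares fit: OA ~ slope * kappa + intercept
slope, intercept = np.polyfit(kappa, oa, 1)
```

A positive slope is what the "directly proportional" observation in the text amounts to; the scatter of the table values determines how tight that linear relationship actually is.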

Discussion

In this research work, an innovative combination of the GLCM and OBCD techniques is presented. GLCM has emerged as a creative technique that captures the "statistical and spectral arrangement" of the image pixels; in common words, GLCM offers information about the changes occurring inside the image, i.e., about the spectral and spatial arrangement of its pixels. The OBCD techniques have emerged as an ideal method to visually represent the changes developed in the study area. Here, users can create an object (class) of interest and compare the pre- and post-image specifications based on the same object. Through the OBCD technique, not only can classes be created, but the accuracy of the classification can also be obtained. Thus it is observed that both the GLCM approach and the OBCD techniques provide useful information related to image classification; whether one or both techniques are used depends upon the application. In this research, a fusion methodology of the GLCM and OBCD techniques is presented to extract maximum information from the study area. Finally, based on the experimental results, a model is developed to extract the full report of the study area, shown in Fig. 24. It is also expected that the proposed model will work efficiently with other types of images, including multispectral and hyperspectral images.
Fig. 24

Proposed GLCM and OBCD fusion model

The presented model for image analysis includes a GLCM-based approach, which provides complete image information about the "spectral and spatial arrangement." In the OBCD process, the classification methodology with superior accuracy and maximum kappa coefficient (K) is selected. Thus all the features obtained from GLCM and OBCD together assist in understanding an event. The texture classification of the study area presented in this work provides information about the specific changes caused by the COVID lockdown; these changes are represented in the histogram signature plots of the pre-COVID and post-COVID images. The GLCM features have also presented a pattern of statistical variation, in which contrast has shown an exponential increase in its feature values, whereas correlation, energy, and homogeneity (IDM) have shown a fall in their feature values. The role of the OBCD technique in change identification is remarkable. Selecting an appropriate classification algorithm depends upon several factors, such as the study area. If the spatial resolution of the image is low (the study area has a smaller number of image pixels), the user has a clear view of the classification classes; in this case, good classification accuracy, and likewise a high K value, is expected from the classification approaches. If, instead, the study area has high resolution with a large number of image pixels, selecting an appropriate number of pixels for a particular class is difficult, so a lower overall classification accuracy along with a lower K will be obtained. The OBCD technique is quite effective in representing the visual difference of the developed changes: all the classified objects are visible and easily distinguishable through the OBCD schemes. Thus the fusion of the GLCM and OBCD techniques is desirable for cases where both the internal and external information of the study area is required.
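For readers who want to reproduce the texture side of the model, below is a minimal NumPy sketch of a symmetric, normalised GLCM and the four features used in this work (contrast, correlation, ASM, IDM). The toy 4-level image and the single offset are illustrative only, not the paper's data or its eight distances and four orientations:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Symmetric, normalised grey-level co-occurrence matrix for offset (dx, dy)."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                g[img[y, x], img[y2, x2]] += 1
                g[img[y2, x2], img[y, x]] += 1  # symmetric counting
    return g / g.sum()

def glcm_features(p):
    """The four Haralick features quantified in the paper: contrast, correlation, ASM, IDM."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    contrast = np.sum(p * (i - j) ** 2)
    asm = np.sum(p ** 2)                      # angular second moment
    idm = np.sum(p / (1 + (i - j) ** 2))      # inverse difference moment (homogeneity)
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    correlation = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
    return {"contrast": contrast, "correlation": correlation, "ASM": asm, "IDM": idm}

# Toy 4-level image; offset (dx=1, dy=0) corresponds to distance 1 at 0 degrees
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
feats = glcm_features(glcm(img, dx=1, dy=0, levels=4))
```

Sweeping `(dx, dy)` over several distances and the four standard orientations (0°, 45°, 90°, 135°) yields the kind of distance-orientation feature pattern analysed in this work.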

Conclusion

This research work presents two different methodologies fused to obtain maximum information from the data of interest. First, quantification of texture features based on the "grey level co-occurrence matrix (GLCM)" technique is performed. In the second step, image classification based on "object-based change detection (OBCD)" methods visually represents the transformation developed in the study area due to the COVID lockdown. Pre-COVID and post-COVID (during lockdown) panchromatic images of Connaught Place, New Delhi, are analyzed in this research work to develop an accurate model for the study area. Texture classification of the images is performed based on visual texture features for eight distances and four orientations. Six different image classification methodologies are used for classifying the study area: "Parallelepiped classification (PC)," "Minimum distance classification (MDC)," "Maximum likelihood classification (MLC)," "Spectral angle mapper (SAM)," "Spectral information divergence (SID)," and "Support vector machine (SVM)." GLCM feature quantification has provided a novel pattern of texture variations, i.e., in contrast, correlation, ASM, and IDM. The OBCD-based techniques have provided a maximum classification accuracy of 83.68% and 73.65% for the pre-COVID and post-COVID image data, respectively. Finally, a model is presented based on the above investigation for analyzing before and after COVID images. The model follows a two-step methodology with a final fusion of the obtained information to produce complete information about the study area, numerically and visually.
Related articles (9 in total)

1.  Source identification and metallic profiles of size-segregated particulate matters at various sites in Delhi.

Authors:  Naba Hazarika; V K Jain; Arun Srivastava
Journal:  Environ Monit Assess       Date:  2015-08-29       Impact factor: 2.513

2.  Parameter importance assessment improves efficacy of machine learning methods for predicting snow avalanche sites in Leh-Manali Highway, India.

Authors:  Anuj Tiwari; Arun G; Bramha Dutt Vishwakarma
Journal:  Sci Total Environ       Date:  2021-06-29       Impact factor: 7.963

3.  Mathematical model of COVID-19 spread in Turkey and South Africa: theory, methods, and applications.

Authors:  Abdon Atangana; Seda İğret Araz
Journal:  Adv Differ Equ       Date:  2020-11-25

4.  Evidence for Limited Early Spread of COVID-19 Within the United States, January-February 2020.

Authors:  Michelle A Jorden; Sarah L Rudman; Elsa Villarino; Stacey Hoferka; Megan T Patel; Kelley Bemis; Cristal R Simmons; Megan Jespersen; Jenna Iberg Johnson; Elizabeth Mytty; Katherine D Arends; Justin J Henderson; Robert W Mathes; Charlene X Weng; Jeffrey Duchin; Jennifer Lenahan; Natasha Close; Trevor Bedford; Michael Boeckh; Helen Y Chu; Janet A Englund; Michael Famulare; Deborah A Nickerson; Mark J Rieder; Jay Shendure; Lea M Starita
Journal:  MMWR Morb Mortal Wkly Rep       Date:  2020-06-05       Impact factor: 17.586

5.  Possible environmental effects on the spread of COVID-19 in China.

Authors:  Hao Xu; Chonghuai Yan; Qingyan Fu; Kai Xiao; Yamei Yu; Deming Han; Wenhua Wang; Jinping Cheng
Journal:  Sci Total Environ       Date:  2020-05-07       Impact factor: 7.963

6.  Gray level co-occurrence matrix (GLCM) texture based crop classification using low altitude remote sensing platforms.

Authors:  Naveed Iqbal; Rafia Mumtaz; Uferah Shafi; Syed Mohammad Hassan Zaidi
Journal:  PeerJ Comput Sci       Date:  2021-05-19

7.  Novel Coronavirus Infection (COVID-19) in Humans: A Scoping Review and Meta-Analysis.

Authors:  Israel Júnior Borges do Nascimento; Nensi Cacic; Hebatullah Mohamed Abdulazeem; Thilo Caspar von Groote; Umesh Jayarajah; Ishanka Weerasekara; Meisam Abdar Esfahani; Vinicius Tassoni Civile; Ana Marusic; Ana Jeroncic; Nelson Carvas Junior; Tina Poklepovic Pericic; Irena Zakarija-Grkovic; Silvana Mangeon Meirelles Guimarães; Nicola Luigi Bragazzi; Maria Bjorklund; Ahmad Sofi-Mahmudi; Mohammad Altujjar; Maoyi Tian; Diana Maria Cespedes Arcani; Dónal P O'Mathúna; Milena Soriano Marcolino
Journal:  J Clin Med       Date:  2020-03-30       Impact factor: 4.241

8.  Clinical manifestations and evidence of neurological involvement in 2019 novel coronavirus SARS-CoV-2: a systematic review and meta-analysis.

Authors:  Lei Wang; Yin Shen; Man Li; Haoyu Chuang; Youfan Ye; Hongyang Zhao; Haijun Wang
Journal:  J Neurol       Date:  2020-06-11       Impact factor: 4.849
