
Retinal OCT Texture Analysis for Differentiating Healthy Controls from Multiple Sclerosis (MS) with/without Optic Neuritis.

Hamidreza Dehghan Tazarjani1, Zahra Amini1, Rahele Kafieh1, Fereshteh Ashtari2, Erfan Sadeghi3.   

Abstract

Multiple sclerosis (MS) is an inflammatory disease that damages the myelin sheath in the central nervous system, i.e., the brain, spinal cord, and optic nerves. Optic neuritis (ON) is one of the most prevalent ocular manifestations of MS. The current diagnostic protocol for MS relies on MRI, but newer modalities such as Optical Coherence Tomography (OCT) are of growing interest for early detection and progression analysis. OCT reveals the effects of MS on the Central Nervous System (CNS) through cross-sectional images of the neural retinal layers. Previous OCT studies mostly focused on the thickness of retinal layers; however, texture features also appear to carry relevant information. In this research, we introduce a new pipeline that constructs layer-stacked (LS) images, each containing data from one specific layer. A variety of texture features are then extracted from the LS images to differentiate healthy controls from MS cases with and without ON. Furthermore, the texture extraction methods are tailored to this application. After a broad survey of available texture analysis methods, a collection of powerful features is assembled in this paper. As a preliminary study, this paper demonstrates the ability of such features to discriminate HC from MS (ON and None-ON) cases. Our findings show that texture features are effective in diagnosing MS. Moreover, adding conventional thickness values to the texture features considerably improves discrimination for most of the target groups, including HC vs. MS, HC vs. MS-None-ON, and HC vs. MS-ON.
Copyright © 2021 Hamidreza Dehghan Tazarjani et al.


Year:  2021        PMID: 34337030      PMCID: PMC8298144          DOI: 10.1155/2021/5579018

Source DB:  PubMed          Journal:  Biomed Res Int            Impact factor:   3.411


1. Introduction

Multiple sclerosis (MS) is an inflammatory disease that damages the myelin sheath in the central nervous system, i.e., the brain, spinal cord, and optic nerves. The disease causes the immune system to attack one or more proteins of the myelin structure, disrupting the ability of the nervous system to communicate and thereby producing many physical signs and symptoms [1]. Patients with MS show neurological symptoms including disorders of the autonomic, visual, motor, and sensory nervous systems [2]. Optic Neuritis (ON) is a common ocular condition in which inflammation or demyelination damages the optic nerve, the bundle of nerve fibers that transmits visual information from the eye to the brain. Signs and symptoms of ON can be the first indication of MS, or they can occur later in its course. Not everyone who experiences ON goes on to develop further symptoms of MS, but a significant proportion does [3]. The current diagnostic protocol for MS is Magnetic Resonance Imaging (MRI); however, researchers are looking for complementary methods to overcome MRI limitations such as high cost, late-stage diagnosis, and findings caused by aging rather than MS [4]. The effects of MS on the Central Nervous System (CNS) make the retinal nerve fiber layer (RNFL) a suitable candidate for imaging instead of brain MRI: the thickness of the RNFL can be used to assess damage in the CNS. The RNFL, however, is only one of the main retinal layers; the role of the remaining layers in MS is not exactly known and needs more investigation. Optical Coherence Tomography (OCT) is a noninvasive imaging modality that acquires cross-sectional images of biological tissues. Retinal OCT provides information on many eye diseases such as macular degeneration, glaucoma, and diabetic retinopathy and helps ophthalmologists diagnose and treat such diseases in a timely manner [5-7].
Parisi et al.'s 1998 study on the diagnosis of MS using retinal OCT was the first work in this field [8]. They investigated whether there is a relationship between RNFL thickness and visual pathway function in patients with MS. Since then, a great deal of research has examined thickness changes in different retinal layers and their possible use for diagnosing MS. Petzold et al. in 2017 prepared a survey covering this topic, reviewing 110 articles from 1991 to 2016 and providing a good overview of the subject [9]. In recent years, much attention in retinal OCT image processing has been paid to extracting and using texture features of the layers, yet these features have not been widely used for MS diagnosis and only a few works address the issue. As an example, Varga et al. in 2015 investigated differences in the texture descriptors and optical properties of retinal tissue layers in patients with MS and evaluated their usefulness in detecting neurodegenerative changes through OCT image segmentation [10]. In image processing and machine vision, texture refers to the amount, type, and spatial distribution of pixel intensities throughout an image [11]; researchers have defined it as follows: "A texture area in an image can be constructed with an irregular and varied spatial distribution of the intensity of the brightness or color" [12]. Four general categories of texture features are usually used: statistical, structural, signal processing-based, and model-based [11]. In this study, we examine the texture of OCT images, hypothesizing that textural changes in the layers may precede thickness changes.
It seems that the deterioration of axons in the retinal nerve fiber layer and textural changes in the layers can be detected by noninvasive OCT, making it a possible complementary diagnostic tool, in addition to existing methods, for early detection of relapsing MS-ON and MS-None-ON [13]. An overview of the literature on texture features in OCT images follows. In 2007, Baroni et al. investigated the possibility of discriminating retinal OCT image layers using Grey-Level Cooccurrence Matrix (GLCM) feature extraction [14]. In 2014, Anantrasirichai et al. presented a new method for extracting the texture of retinal OCT images in glaucoma [15]. In 2018, Sawyer et al. examined the use of texture analysis to classify ovarian OCT images [16]. In 2019, Nunes et al. used texture analysis of OCT data to define new biomarkers for MS, albeit on only one specific retinal layer [17]. The rest of this paper is organized as follows. The proposed method for texture extraction of retinal OCT layers is described in Section 2. The performance of the method is evaluated and discussed in Section 3. Finally, Section 4 presents the conclusions of the study.

2. Material and Method

2.1. Database

The data in this study were obtained with a Spectralis Heidelberg HRA+OCT device at Faiz Hospital and Sadra Ophthalmology Center, Isfahan, Iran. Each B-scan is 496 × 480 pixels. Data for some subjects contain 19 B-scans and for others 25 B-scans. The OCT data include 36 healthy control (HC) eyes and 39 eyes from patients with MS (20 eyes with no history of ON (MS-None-ON) and 19 eyes with a history of ON (MS-ON)). The HC and patient populations are approximately matched for gender and age. A summary data flow diagram is presented in Figure 1.
Figure 1

Detailed structure of the data.

2.2. Algorithm Flow

The workflow of the proposed method is shown in Figure 2. The first step is the preprocessing block, in which retinal delineation [18] is used to extract the layers. In the second block, layer-stacked (LS) images are created by stacking each specific layer from all B-scans of one subject. The third block masks the images before they enter the feature extraction block, where five different groups of texture features are computed. Next, the most effective features for distinguishing the HC, MS-ON, and MS-None-ON populations from the retinal OCT layers around the fovea are selected based on p-values. Finally, in the last step, classification between the HC and abnormal populations is performed. Each block of the proposed algorithm flow is elaborated below.
Figure 2

Algorithm flow of the proposed method.

A sample output of the preprocessing block is shown in Figure 3. To construct layer-stacked images, note that the data for each subject consist of a number of B-scans, and each B-scan contains 10 layers whose locations are obtained in the preprocessing step. Accordingly, we construct 10 layer-stacked images by cutting and stacking each individual layer from all B-scans of one subject (Figures 4 and 5). A sample of layer-stacked images is shown in Figure 6.
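The cut-and-stack construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the boundary format (an (n_layers + 1) × W array of row indices per B-scan) is an assumption, since the paper delegates delineation to [18].

```python
import numpy as np

def build_ls_images(bscans, boundaries, n_layers=10):
    """Construct one layer-stacked (LS) image per retinal layer.

    bscans     : list of 2-D grey-level arrays, all B-scans of one subject
    boundaries : per B-scan, an (n_layers + 1, W) integer array of boundary
                 rows (hypothetical format; the paper uses the delineation
                 of [18])
    """
    ls_images = []
    for layer in range(n_layers):
        strips = []
        for img, bnd in zip(bscans, boundaries):
            top, bot = bnd[layer], bnd[layer + 1]
            h = int((bot - top).max())              # thickest point of the layer
            strip = np.zeros((h, img.shape[1]), img.dtype)
            for col in range(img.shape[1]):         # cut the layer column by column
                strip[: bot[col] - top[col], col] = img[top[col]:bot[col], col]
            strips.append(strip)
        h_max = max(s.shape[0] for s in strips)     # pad strips to a common height,
        strips = [np.pad(s, ((0, h_max - s.shape[0]), (0, 0))) for s in strips]
        ls_images.append(np.vstack(strips))         # then stack them vertically
    return ls_images
```

The zero padding below each strip is what later motivates the masking step: those synthetic pixels must not leak into the texture statistics.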
Figure 3

Interretinal layers (a sample output of preprocessing block).

Figure 4

Sample of individual layers in one B-scan.

Figure 5

Construction process for layer-stacked (LS) images.

Figure 6

Layer-stacked (LS) images corresponding to each retinal layer. (a) First layers of all B-scans. (b) Second layers of all B-scans. (c) Third layers of all B-scans. (d) Fourth layers of all B-scans. (e) Fifth layers of all B-scans. (f) Sixth layers of all B-scans. (g) Seventh layers of all B-scans. (h) Eighth layers of all B-scans. (i) Ninth layers of all B-scans. (j) Tenth layers of all B-scans.

During texture calculation, boundary points in layer-stacked images exhibit synthetic contrast, which may mislead the feature extraction methods and produce incorrect, outlier values. To solve this problem, an eliminating mask is developed to ignore pixels located on both sides of each individual layer, and feature extraction is then performed on the masked layer-stacked images. The features used in our work are GLCM, Local Binary Pattern (LBP), Local Directional Pattern (LDP), Local Optimal Oriented Pattern (LOOP), and fractal dimension features. Finally, the discriminant features are fed into Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) classifiers to differentiate between HC and MS cases.
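A minimal sketch of such an eliminating mask, assuming the synthetic contrast is confined to a fixed margin of rows at the top and bottom of each strip; the `margin` width is a hypothetical parameter, not specified in the paper:

```python
import numpy as np

def mask_boundaries(ls_image, margin=1):
    """Zero out pixels within `margin` rows of the strip borders in an
    LS image (a sketch; `margin` is a hypothetical parameter).
    """
    fg = ls_image > 0
    keep = fg.copy()
    for _ in range(margin):                 # vertical erosion of the foreground
        up = np.vstack([np.zeros((1, fg.shape[1]), bool), keep[:-1]])
        down = np.vstack([keep[1:], np.zeros((1, fg.shape[1]), bool)])
        keep = keep & up & down
    return np.where(keep, ls_image, 0)
```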

2.3. Texture Feature Extraction

Investigating texture features is an efficient way to characterize various image properties such as structure, orientation, roughness, smoothness, and regularity. To extract features from the masked layer-stacked images, we apply two categories of texture features: original and modified.

2.3.1. Original Features

Different texture analysis methods are utilized in this research and elaborated in the next subsections. A set of features is then extracted according to Table 1.
Table 1

List of utilized features.

Texture analysis method: GLCM
- Energy: provides the sum of squared elements in the GLCM; takes values between 0 and 1
- Entropy: measure of randomness used to characterize the texture of an image
- Contrast: measure of the intensity contrast between a pixel and its neighbor over the whole image
- Homogeneity: measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal
- Correlation: measure of how correlated a pixel is to its neighbor over the whole image
- Sum of squares: measures the dispersion (with regard to the mean) of the grey-level distribution
- Cluster shade and cluster prominence: characterize the tendency of clustering of the pixels in the region of interest
- Dissimilarity: measure of the distance between pairs of pixels in the region of interest
- Autocorrelation: represents the degree of similarity between a given series and a lagged version of it
- Sum average: measures the mean of the grey-level sum distribution of the image
- Sum entropy: measures the disorder related to the grey-level sum distribution of the image
- Sum variance: measures the dispersion (with regard to the mean) of the grey-level sum distribution of the image
- Inverse difference: measure of the local homogeneity of an image
- Difference variance: measures the dispersion (with regard to the mean) of the grey-level difference distribution of the image
- Difference entropy: measures the disorder related to the grey-level difference distribution of the image
- Maximum probability: measures the maximum likelihood of producing the pixels of interest
- IMC1 and IMC2: measures of the dependency between two random variables

Texture analysis method: LBP, LDP, and LOOP
- Mean and standard deviation: mean and standard deviation of the histograms
- Dynamic range: ratio between the largest and smallest values
- Kurtosis: measure of the "tailedness" of the probability distribution
- Skewness: measure of the asymmetry of the probability distribution

Texture analysis method: fractal dimension
- Mean and standard deviation: mean and standard deviation of the box-counting method
(1) Grey-Level Cooccurrence Matrix. The GLCM describes the spatial relationship between intensity levels by counting transitions between grey levels i and j at a particular displacement distance d and a particular angle θ [15]. Here, we use 256 quantization levels, a distance of one pixel, and four distinct orientations (0, 45, 90, and 135 degrees); pixel pairs whose orientations differ by 180 degrees are treated as identical.

(2) Local Binary Pattern. LBP is a texture description method introduced in 1990 [19]. It compares the intensity of each pixel with its neighbors and determines the output value as

LBP_P(x_c, y_c) = Σ_{n=0}^{P-1} s(i_n - i_c) · 2^n,  s(x) = 1 if x ≥ 0, else 0,  (1)

where P is the number of neighboring points (here, P = 8), i_n is the intensity of the n-th neighboring point, and i_c is the intensity of the central point.

(3) Local Directional Pattern. LDP is a noise-robust modification of LBP that computes directional components for each pixel with Kirsch kernels, providing a measure of the strength of intensity variation in those directions [20]. For each central pixel located at (x, y), the eight rotated versions of the Kirsch edge detector are applied to the neighboring pixels, yielding eight responses m_n, n = 0, 1, ..., 7. With m_k denoting the k-th highest Kirsch activation, all neighboring pixels with a Kirsch response higher than m_k are assigned 1 and the others 0; the LDP value for the pixel (x, y) is then

LDP_k(x, y) = Σ_{n=0}^{7} s(m_n - m_k) · 2^n.  (2)

(4) Local Optimal Oriented Pattern. LOOP is a nonlinear combination of LBP and LDP that overcomes their individual problems while maintaining the strengths of each. Compared to LDP, LOOP assigns an exponential weight w_n to each neighboring pixel, where w_n is a digit between 0 and 7 given by the rank of the magnitude of m_n among the eight Kirsch mask outputs [21].
The LOOP value at (x, y) is then given by

LOOP(x, y) = Σ_{n=0}^{7} s(i_n - i_c) · 2^{w_n}.  (3)

(5) Fractal Analysis. Images with self-similarity characteristics are called fractal. Box-counting analysis is an appropriate method of fractal dimension estimation for images with or without self-similarity [22]. The fractal dimension is given by

D = log N / log r,  (4)

where N is the number of boxes that cover the pattern and r is the magnification, i.e., the inverse of the box size. A higher slope means the object is more fractal: reducing the box size reveals more complexity. A lower slope means the object is closer to a straight line, i.e., less fractal, and the amount of detail does not increase rapidly with increasing magnification.
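Equations (1) and (4) can be illustrated with two short sketches. Both helpers (`lbp_8`, `box_counting_dimension`) are hypothetical, not the authors' implementation; the neighbor ordering in the LBP sketch is one arbitrary convention, and the box-counting sketch assumes a square binary image whose side is a power of two.

```python
import numpy as np

def lbp_8(image):
    """Plain 8-neighbour LBP (P = 8, radius 1), following equation (1):
    LBP(x_c, y_c) = sum_n s(i_n - i_c) * 2^n.  Border pixels are skipped.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros(image.shape, dtype=np.uint8)
    centre = image[1:-1, 1:-1]
    for n, (dr, dc) in enumerate(offsets):
        neighbour = image[1 + dr: image.shape[0] - 1 + dr,
                          1 + dc: image.shape[1] - 1 + dc]
        out[1:-1, 1:-1] |= ((neighbour >= centre).astype(np.uint8) << n)
    return out

def box_counting_dimension(binary):
    """Estimate D as the slope of log N versus log(1/box size), per
    equation (4), over a dyadic range of box sizes.
    """
    side = binary.shape[0]
    sizes = [side // (2 ** k) for k in range(1, int(np.log2(side)))]
    counts = []
    for s in sizes:
        n = side // s
        blocks = binary[:n * s, :n * s].reshape(n, s, n, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())  # boxes covering the pattern
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]
```

For a fully filled binary image, every box is occupied at every scale, so the estimated dimension approaches 2, which is a quick sanity check for the slope computation.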

2.3.2. Modified Features

Inserting zero values by masking the layer-stacked images (third block in Figure 2) causes an unwanted strip artifact. To solve this problem, we modify the outputs of the abovementioned texture analysis methods so as to extract more accurate features. A list of the abbreviations used in this paper and their explanations is given in Table 2.
Table 2

A list of used abbreviations and their explanations.

Abbreviation | Explanation
MS | Multiple sclerosis
ON | Optic neuritis
OCT | Optical coherence tomography
CNS | Central nervous system
LSI | Layer-stacked images
MS-ON | Multiple sclerosis with optic neuritis
MS-None-ON | Multiple sclerosis without optic neuritis
MRI | Magnetic resonance imaging
RNFL | Retinal nerve fiber layer
GLCM | Grey-level cooccurrence matrix
HC | Healthy control
LBP | Local binary pattern
LDP | Local directional pattern
LOOP | Local optimal oriented pattern
SVM | Support vector machine
LDA | Linear discriminant analysis
FD | Fractal dimension
LS image | Layer-stacked image
For the GLCM, the first row and column of the output matrix (which represent the unwanted zero pixels) are eliminated. The GLCM features listed in Table 1 can then be calculated from the normalized matrix, where element [i, j] is generated by counting the number of times a pixel with value i is adjacent to a pixel with value j and then dividing the entire matrix by the total number of such comparisons. Each entry is therefore the probability that a pixel with value i is found adjacent to a pixel with value j. In the feature definitions, μ_x, μ_y, σ_x, and σ_y are the means and standard deviations of the partial probability density functions p_x and p_y; x and y are the row and column coordinates of an entry in the cooccurrence matrix; and p_{x+y}(i) is the probability of the cooccurrence matrix coordinates summing to x + y. HX and HY are the entropies of p_x and p_y, and HXY, HXY1, and HXY2 are the entropy terms used in the IMC1 and IMC2 features.

For the LBP, LDP, and LOOP methods, the features in Table 1 are extracted from the histogram of the output. To remove the same strip artifact, the first bin of the histogram (which represents the unwanted zero pixels) is eliminated; five statistical features (mean, standard deviation, dynamic range, kurtosis, and skewness) are then extracted. The last category of texture analysis is fractal analysis. Here, we remove the black background above the layer-stacked images before performing the masking step, and the mean and standard deviation of the fractal dimensions of each image are reported.
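The modified GLCM described above (dropping the first row and column, which collect the masked zero pixels, and renormalizing) might be sketched as follows. `masked_glcm` is a hypothetical helper for a single displacement, assuming an integer-valued image with values below `levels`:

```python
import numpy as np

def masked_glcm(image, dr=0, dc=1, levels=256):
    """Symmetric GLCM for one displacement (dr, dc), with the first
    row/column (grey level 0, i.e. the masked-out pixels) removed and
    the matrix renormalised, per the modification described above.
    """
    glcm = np.zeros((levels, levels), dtype=np.float64)
    # paired views of the image shifted by the displacement
    a = image[max(dr, 0): image.shape[0] + min(dr, 0),
              max(dc, 0): image.shape[1] + min(dc, 0)]
    b = image[max(-dr, 0): image.shape[0] + min(-dr, 0),
              max(-dc, 0): image.shape[1] + min(-dc, 0)]
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    glcm += glcm.T                       # treat opposite directions as identical
    glcm = glcm[1:, 1:]                  # drop level 0: the masked pixels
    return glcm / glcm.sum()             # renormalise to probabilities
```

Once the zero row and column are gone, the masked background contributes nothing to energy, entropy, contrast, and the other GLCM features of Table 1.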

2.4. Feature Selection and Classification

To handle the curse-of-dimensionality problem caused by the small number of available samples relative to the large number of calculated features, the more significant features are selected based on a t-test with Bonferroni correction. The Bonferroni correction is an adjustment applied to p-values when several dependent or independent statistical tests are performed simultaneously on a single data set: the critical p-value (α) is divided by the number of comparisons being made. Here, since the majority of the features are extracted from the GLCM at four different angles, the nominal significance level (p < 0.005) is divided by 4 and p < 0.001 is taken as the cut-off. After this adjustment for multiple comparisons, features with p < 0.001 are selected as significant. Two classification models, SVM and LDA, are then used to differentiate between four group pairs: HC vs. MS, HC vs. MS-ON, HC vs. MS-None-ON, and MS-ON vs. MS-None-ON. Ten-fold cross-validation is used to evaluate the accuracy of each classification model.
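The selection step can be sketched as below, assuming per-group feature matrices are available; `select_features` is a hypothetical helper, and the `n_tests = 4` default mirrors the division by the four GLCM angles described above:

```python
import numpy as np
from scipy import stats

def select_features(X_a, X_b, alpha=0.005, n_tests=4):
    """Keep feature columns whose two-sample t-test p-value survives the
    Bonferroni-adjusted cut-off alpha / n_tests (0.005 / 4, i.e. the
    p < 0.001 level used in the text).

    X_a, X_b : (samples x features) matrices for the two groups.
    Returns the indices of the selected columns and all p-values.
    """
    _, p = stats.ttest_ind(X_a, X_b, axis=0)   # column-wise two-sample t-test
    keep = p < alpha / n_tests
    return np.where(keep)[0], p
```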

3. Result

3.1. Feature Analysis

To evaluate the statistical significance of the extracted features, a t-test is used to identify which features show significant differences between healthy and MS (ON and None-ON) cases (Table 3). The p-values indicate rejection of the null hypothesis at the 5% significance level, considering the Bonferroni correction (p < 0.001). The frequencies of the significant selected features for each retinal layer are presented in Table 4.
Table 3

Evaluation of statistical significance of the extracted features, before feature selection. The t-test was used to identify which features show significant differences between healthy and MS (ON and None-ON) cases. The p values indicate the test rejection of the null hypothesis at 5% significance level, considering the Bonferroni correction (p value < 0.001).

Feature (layer) | p (HC vs. MS-None-ON) | p (HC vs. MS-ON) | p (MS-ON vs. MS-None-ON) | p (HC vs. MS)
Autocorrelation (2) | <0.001 | <0.001 | 0.966 | <0.001
Autocorrelation (3) | <0.001 | 0.101 | 0.092 | 0.003
Autocorrelation (4) | <0.001 | 0.026 | 0.222 | 0.008
Cluster prominence (3) | <0.001 | 0.086 | 0.033 | <0.001
Cluster prominence (4) | 0.002 | 0.057 | 0.525 | <0.001
Cluster shade (2) | 0.038 | <0.001 | 0.209 | 0.044
Cluster shade (3) | <0.001 | 0.025 | 0.054 | <0.001
Cluster shade (4) | <0.001 | 0.015 | 0.298 | <0.001
Contrast (1) | 0.341 | <0.001 | 0.058 | 0.012
Contrast (2) | <0.001 | <0.001 | 0.944 | <0.001
Contrast (3) | <0.001 | 0.005 | 0.150 | 0.001
Contrast (4) | <0.001 | 0.001 | 0.398 | <0.001
Correlation (1) | <0.001 | <0.001 | 0.999 | 0.927
Correlation (2) | <0.001 | 0.004 | 0.029 | 0.546
Difference entropy (2) | <0.001 | <0.001 | 0.929 | 0.951
Difference entropy (3) | <0.001 | <0.001 | 0.513 | 0.677
Difference entropy (4) | <0.001 | <0.001 | 0.546 | 0.263
Difference entropy (5) | <0.001 | 0.004 | 0.413 | 0.733
Difference variance (2) | <0.001 | <0.001 | 0.999 | 0.001
Difference variance (3) | <0.001 | 0.001 | 0.169 | <0.001
Difference variance (4) | <0.001 | <0.001 | 0.523 | <0.001
Dissimilarity (2) | <0.001 | <0.001 | 0.790 | 0.010
Dissimilarity (3) | <0.001 | 0.001 | 0.271 | 0.030
Dissimilarity (4) | <0.001 | 0.001 | 0.380 | 0.013
Dissimilarity (5) | <0.001 | 0.035 | 0.274 | 0.302
Energy (2) | <0.001 | <0.001 | 0.999 | <0.001
Energy (3) | <0.001 | <0.001 | 0.921 | <0.001
Energy (4) | <0.001 | 0.001 | 0.528 | 0.861
Energy (5) | <0.001 | 0.019 | 0.329 | 0.008
Entropy (2) | <0.001 | <0.001 | 0.896 | 0.903
Entropy (3) | <0.001 | <0.001 | 0.446 | 0.765
Entropy (4) | <0.001 | <0.001 | 0.453 | 0.274
Entropy (5) | <0.001 | 0.029 | 0.222 | 0.871
Homogeneity (2) | <0.001 | <0.001 | 0.942 | 0.900
Homogeneity (3) | <0.001 | <0.001 | 0.681 | 0.690
Homogeneity (4) | <0.001 | <0.001 | 0.504 | 0.201
Homogeneity (5) | <0.001 | 0.006 | 0.396 | 0.680
IMC1 (2) | <0.001 | <0.001 | 0.862 | 0.017
IMC2 (9) | 0.004 | <0.001 | 0.619 | 0.043
Inverse difference moment normalized (1) | 0.319 | <0.001 | 0.063 | 0.011
Inverse difference moment normalized (2) | <0.001 | <0.001 | 0.920 | <0.001
Inverse difference moment normalized (3) | <0.001 | 0.006 | 0.154 | 0.001
Inverse difference moment normalized (4) | <0.001 | 0.001 | 0.381 | <0.001
Maximum probability (2) | <0.001 | <0.001 | 1.000 | <0.001
Maximum probability (3) | <0.001 | <0.001 | 0.876 | <0.001
Maximum probability (4) | <0.001 | 0.068 | 0.293 | 0.430
Maximum probability (5) | <0.001 | 0.062 | 0.222 | 0.647
Sum average (2) | <0.001 | <0.001 | 0.814 | 0.005
Sum average (3) | <0.001 | 0.013 | 0.194 | 0.028
Sum average (4) | <0.001 | 0.006 | 0.383 | 0.048
Sum entropy (2) | <0.001 | <0.001 | 0.953 | 0.708
Sum entropy (3) | <0.001 | 0.002 | 0.402 | 0.938
Sum entropy (4) | <0.001 | 0.001 | 0.514 | 0.384
Sum entropy (5) | <0.001 | 0.064 | 0.182 | 0.806
Sum of squares (2) | <0.001 | <0.001 | 0.528 | 0.001
Sum of squares (3) | <0.001 | 0.014 | 0.092 | <0.001
Sum of squares (4) | <0.001 | 0.003 | 0.262 | <0.001
Sum variance (2) | <0.001 | <0.001 | 0.400 | 0.002
Sum variance (3) | <0.001 | 0.016 | 0.087 | <0.001
Sum variance (4) | <0.001 | 0.004 | 0.248 | <0.001
LOOP Std. (2) | <0.001 | <0.001 | 0.964 | 0.013
LOOP Std. (3) | <0.001 | <0.001 | 0.202 | 0.021
LOOP Std. (4) | <0.001 | <0.001 | 0.800 | 0.218
LOOP Std. (5) | <0.001 | <0.001 | 0.830 | 0.019
LOOP Std. (6) | 0.014 | <0.001 | 0.091 | 0.548
LOOP Std. (7) | <0.001 | <0.001 | 0.576 | 0.425
LOOP Std. (8) | <0.001 | <0.001 | 0.463 | 0.316
Fractal mean (1) | <0.001 | <0.001 | 0.832 | 0.051
Fractal mean (4) | <0.001 | <0.001 | 0.799 | 0.955
Fractal mean (5) | <0.001 | <0.001 | 0.862 | 0.896
Fractal mean (8) | <0.001 | <0.001 | 0.339 | 0.785
Fractal mean (9) | <0.001 | <0.001 | 0.964 | 0.710
Fractal mean (10) | <0.001 | <0.001 | 0.514 | 0.408
Fractal Std. (5) | <0.001 | <0.001 | 0.744 | 0.014
Fractal Std. (8) | 0.113 | 0.001 | 0.276 | <0.001
Fractal Std. (9) | <0.001 | <0.001 | 0.751 | <0.001
Fractal Std. (10) | <0.001 | <0.001 | 1.000 | <0.001
LBP mean (2) | 0.001 | 0.003 | 0.981 | <0.001
LBP mean (3) | 0.001 | <0.001 | 0.950 | 0.017
LBP mean (4) | <0.001 | <0.001 | 0.127 | 0.024
LBP mean (5) | <0.001 | <0.001 | 0.353 | 0.061
LBP mean (6) | <0.001 | <0.001 | 0.941 | 0.335
LBP mean (7) | <0.001 | <0.001 | 0.908 | <0.001
LBP mean (9) | <0.001 | <0.001 | 0.251 | 0.068
LBP Std. (4) | <0.001 | <0.001 | 0.409 | 0.319
LBP Std. (5) | <0.001 | <0.001 | 0.456 | 0.645
LBP Std. (6) | <0.001 | 0.001 | 0.905 | 0.234
LBP Std. (7) | <0.001 | <0.001 | 0.205 | 0.116
LBP Std. (9) | <0.001 | <0.001 | 0.417 | 0.150
LBP dynamic range (3) | 0.525 | 0.002 | 0.088 | <0.001
LBP dynamic range (5) | <0.001 | 0.004 | 0.567 | 0.164
LBP dynamic range (6) | <0.001 | 0.006 | 0.675 | <0.001
LBP kurtosis (2) | <0.001 | <0.001 | 0.804 | 0.968
LBP kurtosis (3) | <0.001 | <0.001 | 0.941 | 0.048
LBP kurtosis (4) | 0.001 | <0.001 | 0.805 | 0.001
LDP mean (2) | 0.459 | 0.659 | 0.961 | <0.001
LDP mean (4) | <0.001 | <0.001 | 0.564 | 0.067
LDP mean (5) | <0.001 | <0.001 | 0.555 | 0.035
LDP mean (6) | <0.001 | <0.001 | 0.121 | 0.799
LDP mean (7) | <0.001 | <0.001 | 0.883 | 0.047
LDP mean (8) | <0.001 | <0.001 | 0.808 | 0.055
LDP mean (9) | <0.001 | <0.001 | 0.219 | 0.037
LDP mean (10) | <0.001 | <0.001 | 0.834 | 0.782
LDP skewness (2) | <0.001 | <0.001 | 0.813 | 0.530
LDP skewness (3) | <0.001 | <0.001 | 0.930 | 0.085
LDP skewness (4) | 0.004 | <0.001 | 0.633 | 0.002
LDP Std. (2) | <0.001 | <0.001 | 0.276 | 0.018
LDP Std. (4) | <0.001 | <0.001 | 0.652 | 0.083
LDP Std. (5) | <0.001 | <0.001 | 0.882 | 0.025
LDP Std. (6) | 0.005 | <0.001 | 0.093 | 0.597
LDP Std. (7) | <0.001 | <0.001 | 0.554 | 0.310
LDP Std. (8) | <0.001 | <0.001 | 0.989 | 0.870
LDP Std. (9) | <0.001 | <0.001 | 0.629 | 0.882
LDP Std. (10) | <0.001 | <0.001 | 0.901 | 0.792
LDP dynamic range (1) | 0.188 | 0.219 | 0.999 | <0.001
LDP kurtosis (2) | <0.001 | <0.001 | 0.998 | <0.001
LDP kurtosis (3) | <0.001 | <0.001 | 1.000 | <0.001
LDP kurtosis (4) | <0.001 | <0.001 | 0.801 | 0.466
LDP kurtosis (5) | <0.001 | 0.003 | 0.781 | 0.001
LOOP mean (2) | 0.459 | 0.659 | 0.961 | <0.001
LOOP mean (4) | <0.001 | <0.001 | 0.564 | 0.067
LOOP mean (5) | <0.001 | <0.001 | 0.555 | 0.035
LOOP mean (6) | <0.001 | <0.001 | 0.121 | 0.799
LOOP mean (7) | <0.001 | <0.001 | 0.883 | 0.047
LOOP mean (8) | <0.001 | <0.001 | 0.808 | 0.055
LOOP mean (9) | <0.001 | <0.001 | 0.219 | 0.037
LOOP mean (10) | <0.001 | <0.001 | 0.834 | 0.782
LOOP skewness (2) | <0.001 | <0.001 | 0.998 | <0.001
LOOP skewness (3) | <0.001 | <0.001 | 1.000 | <0.001
LOOP skewness (4) | <0.001 | <0.001 | 0.818 | 0.641
LOOP skewness (5) | <0.001 | 0.002 | 0.769 | 0.001
LOOP Std. (9) | <0.001 | <0.001 | 0.538 | 0.390
LOOP Std. (10) | <0.001 | <0.001 | 0.786 | 0.799
Table 4

Frequency of significant selected features for each retinal layer.

Layer | HC vs. MS | HC vs. MS-ON | HC vs. MS-None-ON | MS-ON vs. MS-None-ON
1 | 1 | 4 | 2 | 0
2 | 10 | 22 | 22 | 0
3 | 10 | 11 | 21 | 0
4 | 7 | 15 | 24 | 0
5 | 0 | 8 | 18 | 0
6 | 1 | 5 | 5 | 0
7 | 1 | 6 | 6 | 0
8 | 1 | 5 | 5 | 0
9 | 1 | 9 | 8 | 0
10 | 1 | 5 | 7 | 0

Total | 33 | 90 | 117 | 0

3.2. Classification Result

According to Tables 3 and 4, the 15 significant features common to the three groups (HC, MS-ON, and MS-None-ON) are selected as input to each classifier; no significant feature is found for the last comparison (MS-ON vs. MS-None-ON). The classification step is then performed to discriminate between four target pairs: HC vs. MS, HC vs. MS-None-ON, HC vs. MS-ON, and MS-ON vs. MS-None-ON. The accuracies obtained by our classifiers for the different groups are shown in Table 5.
Table 5

The accuracy of texture features, thickness, and combination of texture features and thicknesses.

Method | Classifier | HC vs. MS | HC vs. MS-ON | HC vs. MS-None-ON | MS-ON vs. MS-None-ON
Texture features | SVM | 85.3 | 83.6 | 78.6 | 64.1
Texture features | LDA | 72.0 | 74.6 | 64.3 | 48.8
Thicknesses | SVM | 84.0 | 81.8 | 90.0 | 89.7
Thicknesses | LDA | 64.0 | 69.1 | 73.3 | 82.1
Texture features & thicknesses | SVM | 96.0 | 87.3 | 96.4 | 82.0
Texture features & thicknesses | LDA | 100 | 98.2 | 96.5 | 56.4
In addition, to prepare a fair comparison with previous studies, we also test the performance of the two classification models using thickness features as input. As mentioned above, utilizing texture features for our intended goal is novel, and previous research relied only on thickness as the discriminant feature. The thickness features are therefore calculated as the average distance between two consecutive boundaries, yielding 10 thickness values from the 11 retinal layer boundaries; this thickness feature vector is also fed to each classifier. In summary, the following sets of information are used as input to each of the two classifiers:

(I) the 15 common texture features selected by the t-test with Bonferroni correction;
(II) the thickness features;
(III) the combination of (I) and (II).

As Table 5 shows, in cases I and II, SVM outperforms LDA. Analyzing the effect of texture and thickness features separately, the best accuracy for the groups HC vs. MS and HC vs. MS-ON is obtained with texture features and the SVM classifier, whereas for HC vs. MS-None-ON and MS-ON vs. MS-None-ON, thickness features and the SVM classifier give the best accuracy. Notably, in case III, with the combination of texture and thickness features as classifier input, the results improve considerably, and the LDA classifier outperforms SVM in most conditions.
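The three input cases and the 10-fold cross-validation might be sketched as follows with scikit-learn. The default SVM and LDA settings here are assumptions, since the paper does not report classifier hyperparameters, and `evaluate` is an illustrative helper, not the authors' code:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def evaluate(texture, thickness, labels):
    """Mean 10-fold CV accuracy for the three inputs described above:
    (I) texture features alone, (II) thickness alone, and (III) their
    concatenation.  Feature matrices are (samples x features).
    """
    inputs = {"texture": texture,
              "thickness": thickness,
              "combined": np.hstack([texture, thickness])}   # case III
    results = {}
    for name, X in inputs.items():
        for clf_name, clf in (("SVM", SVC()),
                              ("LDA", LinearDiscriminantAnalysis())):
            results[(name, clf_name)] = cross_val_score(clf, X, labels, cv=10).mean()
    return results
```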

4. Conclusion

There is no single specific test for MS. Instead, a diagnosis of MS often relies on ruling out other conditions that might produce similar signs and symptoms, known as a differential diagnosis. Blood tests, spinal tap (lumbar puncture), evoked potential tests, and MRI are the conventional MS diagnosis methods; the first MR images of MS were produced in the early 1980s [23]. In most people with relapsing-remitting MS, the diagnosis is fairly straightforward, based on a pattern of symptoms consistent with the disease and confirmed by brain imaging such as MRI; diagnosis can be more difficult in patients with unusual symptoms or progressive disease. MRI-based methods have indeed been the most successful techniques for estimating CNS damage to date, although it is becoming increasingly clear that, thanks to direct visualization of retinal axons, OCT is an extremely sensitive method for imaging neurodegeneration in MS patients. Studies using OCT image analysis show thickness reduction in the retinal layers of MS patients with and without a history of ON [4, 9, 24-28]. Hence, OCT is suggested as an important tool for monitoring MS and as a complement to MRI-based diagnosis techniques [29-32]. However, as mentioned above, the majority of previous works rely on thickness analysis of the retinal layers. Here, by combining thickness and texture information from the retinal layers, we prepared a more comprehensive analysis of OCT imaging performance in the diagnosis of MS with or without ON. Indeed, texture analysis is a novel strategy for studying intrinsic changes in retinal layers during neurodegenerative diseases, and MS, one of the most prominent neurodegenerative disorders, is investigated in this research. After a broad survey of available texture analysis methods, a collection of powerful features is assembled in this paper.
As a preliminary study, this paper shows the ability of such features to discriminate HC from MS (ON and None-ON) cases. Even with simple classification methods, texture features diagnose MS cases (versus HC) with accuracies of 85.3% and 72% using the SVM and LDA classifiers, respectively. Another valuable point is that adding conventional thickness values to the texture features improves discrimination for most of the target groups, including HC vs. MS, HC vs. MS-None-ON, and HC vs. MS-ON. It should be noted that the results for the last group (MS-ON vs. MS-None-ON) are generally weaker than for the other groups due to the lack of significant discriminant texture features for this group. Furthermore, the findings show that some layers, such as layers 2, 3, and 4, carry more texture information useful for separating HC from MS cases. This finding can be a starting point for further investigation in this area.