
Classification of Transgenic Mice by Retinal Imaging Using SVMs.

Farrukh Sayeed1, K Rafeeq Ahmed2, M S Vinmathi3, A Indira Priyadarsini4, Charles Babu Gundupalli5, Vikas Tripathi6, Wesam Shishah7, Venkatesa Prabhu Sundramurthy8.   

Abstract

Alzheimer's disease is a neurological disorder characterized by the accumulation of amyloid-β (Aβ) in the brain. However, accurate detection of this disease is a challenging task, since the pathological changes in the brain are difficult to identify. In this paper, the changes associated with retinal imaging for Alzheimer's disease are classified into two classes: wild-type (WT) and transgenic mice model (TMM). For testing, optical coherence tomography (OCT) images are classified into these two groups. The classification is implemented by support vector machines (SVMs), with the optimum kernel selected using a genetic algorithm. Among the several kernel functions of the SVM, the radial basis function kernel provides the best classification result. To enable effective classification using the SVM, texture features of the retinal images are extracted and selected. The classification of transgenic mice reached an overall accuracy of 92% and a precision of 91%.
Copyright © 2022 Farrukh Sayeed et al.


Year:  2022        PMID: 35814547      PMCID: PMC9259271          DOI: 10.1155/2022/9063880

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

The most common form of disability is neurodegenerative disease [1, 2]. Because Alzheimer's disease has such a long development period, patients can benefit from frequent testing and receive early treatment. However, due to their high cost and limited availability, current clinical diagnostic imaging techniques do not match the specific needs of screening methods [3, 4]. In this study, we made it a priority to assess the retina, particularly the retinal vasculature, as a potential means of performing dementia assessments in chronic Alzheimer's conditions. Inflammatory alterations may begin 20+ years before neurological dysfunction manifests, and by the time neurotoxic effects appear, cerebral deterioration has already progressed gradually. The Alzheimer's Society, the National Institutes of Health, and the Global Advisory Committee on AD have suggested a study paradigm based on a set of confirmed indicators, connected to both kinds of abnormalities, that serve as proxies for AD in order to identify AD in living persons [5-7]. Flexible, scalable neural nets were used throughout that process, which obtained an overall accuracy of 82.44 percent using data from the UK Biobank and included a saliency analysis of the pipeline's interpretability in addition to the classifier shown in Figure 1. The detection of transgenic mice is carried out from the input fundus image, but the existing approaches have a high false detection rate, which degrades the accuracy of the system. Additionally, the following problems arise in the optimal detection of classes:
Figure 1

Typical flow for transgenic mice.

Difficulty in feature differentiation: the detection of transgenic mice is based on various features such as texture, color, and intensity, but differentiating these minute features from one another is a hard task that degrades accurate disease computation.

Class overlapping: the class of the input image is also determined by the existing approaches, but the limited set of training data for each severity level results in a class imbalance problem that affects classification accuracy.

Improper preprocessing: the use of conventional preprocessing and contrast enhancement techniques in the existing approaches makes it difficult to identify the features against the background.

The major objective of this study is to provide precise classification between WT and TMM and to compute the disease accuracy in a precise manner. This objective is achieved by fulfilling the following subobjectives:

To minimize the level of artifacts in the input image by performing effective preprocessing of the image
To maximize the precise identification of features from the preprocessed image by enhancing the contrast level
To effectively classify the images into two classes based on the extraction of significant features
To determine the features related to the disease, based on the variation in the intensity of the features, for the purpose of diagnosis

2. Related Work

In [8], the authors investigated alterations in the optic disc linked with Alzheimer's disease, using the retina as a window into the central and peripheral nervous system. Optical coherence tomography was used to analyse the retinas of transgenic mouse models (TMM) and wild-type (WT) mice of Alzheimer's disease, and support vector machines with the radial basis function kernel were used to categorize the retinas into TMM and WT classes. At the age of four months, predictions were over 80% accurate, and at the age of eight months, they were over 90% accurate. In line with these results, feature extraction from the acquired fundus images shows a much more diverse retinal architecture in the mouse models at the age of eight months. In [9], coregistered angle-resolved low-coherence interferometry (a/LCI) and optical coherence tomography (OCT) were used to obtain light scattering data from the retinas of triple transgenic Alzheimer's disease (3xTg-AD) mice and wild-type (WT) age-matched controls. Visual guidance and segmentation depths supplied by OCT B-scans were used to obtain angle-resolved scattering data from the peripheral nerve layer, outer papillary overlay, and endodermal epithelium. When comparing in vivo AD mouse retinas to WT controls, OCT imaging revealed a substantial weakening of the nerve fibre layer. The a/LCI scattering measures offered additional information that helps to differentiate AD mice by quantifying tissue heterogeneity. Compared to the WT mice, the AD mice's eyes demonstrated an increased range of values in motor neuron layer interferometric strength. In [10], the authors describe the relationship between retinal image characteristics and cerebral amyloid-β (Aβ) load, in the hopes of establishing a benign method for predicting Aβ deposits in Alzheimer's illness.
Moreover, a substantial variation in textural patterns across retinal capillaries and their neighbouring areas was detected in Aβ+ participants when compared to Aβ− individuals. Using the collected characteristics, classifiers were trained to classify new individuals. With an efficiency of 85 percent, the classification can distinguish Aβ+ patients from Aβ− patients.

3. Proposed Work

This section presents the description of the proposed model for the classification of transgenic mice using SVMs.

3.1. Preprocessing

For enhancing the information for the disease diagnosis system, it is necessary to apply the following preprocessing steps:

Artifact removal: blurriness, poor edges, and uneven illumination are called artifacts; they are removed using the nonlinear diffusion filtering algorithm, which eliminates all kinds of artifacts and ensures image quality in terms of illumination correction and edge preservation.

Contrast enhancement: low contrast is one of the important issues in image classification. In this work, we treat contrast enhancement as an optimization problem whose intention is to optimize the pixel values based on the contrast level of the input image.

Image normalization: normalization adjusts the pixel intensity or RGB color values of the retina images, which increases the quality of the acquired fundus images by decreasing equipment noise and other undesired noise in the retina images. Following this, the misrepresentations and fluctuations that occur in the retina images because of inexact image acquisition are recognized. Throughout normalization, the learned image is transformed into predetermined values. Image normalization is a preprocessing technique that maps the given inputs into an expected output range, which is useful for prediction and forecasting purposes; since there are large variations among the inputs, normalizing the values brings them closer together. Some existing techniques used for image normalization are as follows:

Min-max normalization
Z-score normalization
Decimal scaling

Figure 2 describes the proposed work. In the following, these normalization techniques are described in detail.
Figure 2

Proposed work.

Min-max normalization: this technique provides a linear transformation of the original data values and is known as the min-max normalization technique. It uses a predefined boundary for the specific retina images. The min-max normalization for the proposed technique is estimated as follows:

v′ = C + (v − A)(D − C)/(B − A),

where v′ represents the min-max normalized value, the predefined boundary is [C, D], and [A, B] is the range of the original values; matching the ranges of A and B against one another is used for result validation.

In general, unstructured data can be normalized using Z-score normalization, which is represented as follows:

V = (E − Ē)/σ(E),

where V is the Z-score normalized value of the input, E represents the value in row E of the ith column, and Ē is the mean value of the inputs. This technique uses five rows such as X, Y, Z, U, and V for different columns of "N" for each row, in which the Z-score technique is applied to each row to compute the normalized values. If the standard deviation of a row is equal to zero, then all values for that row are set to zero. It also gives a range of values between 0 and 1.

In the decimal scaling technique, the range is between −1 and 1. Decimal scaling for image normalization is computed by the following equation:

v′ = v/10^j,

where v′ represents the scaled value, v represents the original value, and j is the smallest integer such that max(|v′|) < 1.

The above-mentioned techniques can be useful for discussing the normalization values. The combination of the above three techniques helps in producing the result, that is, improved min-max decimal with Z-normalization. The proposed retina image normalization technique is an advanced and effective normalization technique that accepts various types of input images and produces outputs in the range of 0 to 1.
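For illustration, the three existing normalization techniques can be sketched in Python. This is a minimal NumPy sketch; the function names and the target range [C, D] = [0, 1] are our own choices, not from the paper.

```python
import numpy as np

def min_max_normalize(img, c=0.0, d=1.0):
    """Linearly rescale pixel values from [A, B] into the target range [c, d]."""
    a, b = img.min(), img.max()
    return c + (img - a) * (d - c) / (b - a)

def z_score_normalize(img):
    """Center to zero mean and scale to unit standard deviation."""
    return (img - img.mean()) / img.std()

def decimal_scaling_normalize(img):
    """Divide by 10^j, where j is the smallest integer with max(|v|/10^j) < 1."""
    j = int(np.ceil(np.log10(np.abs(img).max() + 1)))
    return img / (10 ** j)

img = np.array([[0, 64], [128, 255]], dtype=float)
mm = min_max_normalize(img)            # values in [0, 1]
ds = decimal_scaling_normalize(img)    # values in (-1, 1)
zs = z_score_normalize(img)            # zero mean, unit variance
```

Each function operates on the whole image at once, which matches the pixel-based scaling the paper attributes to its normalization step.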
The normalization technique can also take the average value as a threshold and then normalize or replace the pixel values on either side of it using the mean and standard deviation. Compared with the min-max, Z-score, and decimal scaling techniques for image normalization, the proposed advanced technique produces a more effective result. The proposed technique for image normalization offers the following advantages over the existing methods:

Suited to any volume of dataset (large, small, or medium size)
Individual pixel-based scaling and transformation are possible
Makes the data size independent
Sets the range of the normalized values between 0 and 1
Easy to apply to whole numerical data values

The proposed innovative normalization technique is expressed mathematically in terms of X, the particular element of the data; N, the number of digits in the element X; A, the pixel element for the first digit of X; and Y, the scaled value between 0 and 1. The proposed model is applicable to input lengths covering the full range of integer types. This technique differs from the existing normalization approaches as follows:

Changes the data from unstructured to structured
Serves the purpose of formulation/scaling
All the inputs are numerical data only

Low light enhancement: recent low-light enhancement methods are not assured to work in all low-light environments. A new method for low-light enhancement should therefore focus on the following:

Enhance the efficiency and robustness of the low-light image enhancement algorithm, since the previous methods are insufficient to meet the needs of current applications. The method should be able to adjust to different types of images at different scales to produce an extraordinary result.
Minimize the complexity (time and space) of the overall computations relative to the available methods. This satisfies practical applications, and real-time images must also be supported. Most of the existing techniques involve longer operations and hence take more processing time, and they still lead to two problems: detail ambiguity and color deviation.
Establish a higher quality of image evaluation in which image-information recovery and color recovery functions are used for adjusting the low-light enhancement.

To address these issues, multiscale Retinex theory is proposed at this step. It is a color restoration method that processes the image for further quality enhancement using the single-scale or multiscale Retinex method. The algorithm is applied to the three color channels R, G, and B separately; thus the original image is split into a number of channels, which avoids the color distortion issue. For each channel, the color recovery factor C is computed, which captures the proportional relationship between the R, G, and B channels. This is mathematically expressed as follows:

C_i(x, y) = F(I_i(x, y) / Σ_j I_j(x, y)),

where F represents the function for mapping the color values, and the performance of the best color intensity values for restoration and recovery guides the choice of mapping function, for which a logarithm is used; the computation of color recovery with α and β as variables of the logarithmic mapping is rewritten as follows:

C_i(x, y) = β·log(α·I_i(x, y)) − β·log(Σ_j I_j(x, y)).

The algorithm exploits the merits of the convolution operation using Gaussian computations; the multiscale processing, that is, small, medium, and large patch ranges, yields good ideal effects. The performance of color restoration is improved using the color recovery factor values, since they are updated over successive iterations.
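The multiscale Retinex step with color restoration can be sketched as follows. This is a minimal illustration, assuming SciPy's `gaussian_filter` for the Gaussian convolution; the scale values and the α, β constants are common illustrative defaults, not values given in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma):
    """log(I) - log(Gaussian-blurred I) for one color channel."""
    blurred = gaussian_filter(channel, sigma)
    return np.log1p(channel) - np.log1p(blurred)

def msrcr(image, sigmas=(15, 80, 250), alpha=125.0, beta=46.0):
    """Multiscale Retinex with color restoration, applied per R/G/B channel.

    The color recovery factor weights each channel by its share of the
    total intensity, counteracting color distortion across channels.
    """
    image = image.astype(float) + 1.0  # avoid log(0)
    # Average the single-scale Retinex outputs over small/medium/large scales.
    msr = np.mean([np.stack([single_scale_retinex(image[..., c], s)
                             for c in range(3)], axis=-1)
                   for s in sigmas], axis=0)
    # Color recovery factor: beta * (log(alpha * I_c) - log(sum_c I_c)).
    color = beta * (np.log(alpha * image) -
                    np.log(image.sum(axis=-1, keepdims=True)))
    return color * msr

out = msrcr(np.random.rand(32, 32, 3) * 255)
```

The three sigmas correspond to the small, medium, and large patch ranges mentioned above; averaging them is the standard multiscale combination.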

3.2. Feature Extraction

The gray level co-occurrence matrix (GLCM) captures numerical features of a texture using spatial relations of similar gray tones. Table 1 lists the features derivable from a normalized co-occurrence matrix.
Table 1

Statistical GLCM features (22 in total, arranged in two columns of feature names).

Feature name | Feature name
Autocorrelation | Sum of squares
Contrast | Sum average
Correlation 1 & 2 | Sum variance
Cluster prominence | Sum entropy
Cluster shade | Difference variance
Dissimilarity | Difference entropy
Energy | Information measure of correlation 1 & 2
Entropy | Inverse difference normalized (IDN)
Homogeneity 1 & 2 | Inverse difference moment normalized (IDMN)
Maximum probability |
Energy: measures the uniformity (or orderliness) of the gray level distribution of the image; range = [0, 1]. Homogeneity: measures the smoothness (homogeneity) of the gray level distribution of the image; range = [0, 1]. Contrast: gives a measure of the intensity contrast between a pixel and its neighbor over the whole image. Tables 2 and 3 list the texture and shape features in full.
Table 2

List of features.

S.No | Feature | Formula | Description
1 | Autocorrelation | Σ_{i,j} (i·j) p(i, j) | It measures the coarseness of an image and evaluates the linear spatial relationships between texture primitives.
2 | Contrast | Σ_{i,j} |i − j|² p(i, j) | Represents the amount of local gray level variation in an image; a high value of this parameter may indicate the presence of edges, noise, or wrinkled textures in the image.
3 | Correlation 1 | Σ_{i,j} (i − μx)(j − μy) p(i, j)/(σx·σy) | Gives a measure of how correlated a pixel is to its neighbor over the whole image.
4 | Correlation 2 | [Σ_{i,j} (i·j) p(i, j) − μx·μy]/(σx·σy) | Gives a measure of gray level linear dependence between the pixels at the specified positions relative to each other.
5 | Cluster shade | Σ_{i,j} (i + j − μx − μy)³ p(i, j) | Cluster shade and cluster prominence are measures of the skewness of the matrix, in other words the lack of symmetry.
6 | Cluster prominence | Σ_{i,j} (i + j − μx − μy)⁴ p(i, j) | Gives a measure of local intensity variation.
7 | Dissimilarity | Σ_{i,j} |i − j| p(i, j) | The dissimilarity measure belongs to the contrast group of texture metrics; gives a measure of dissimilarity.
8 | Energy | Σ_{i,j} p(i, j)² | Measures the uniformity (or orderliness) of the gray level distribution of the image; images with a smaller number of gray levels have larger uniformity.
9 | Entropy | −Σ_{i,j} p(i, j) log(p(i, j)) | Inhomogeneous images have a low entropy, while a homogeneous scene has high entropy.
10 | Homogeneity 1 | Σ_{i,j} p(i, j)/(1 + |i − j|) | Gives a value that measures the closeness of the distribution of elements in the GLCM to the GLCM diagonal.
11 | Homogeneity 2 | Σ_{i,j} p(i, j)/(1 + (i − j)²) | Measures the smoothness (homogeneity) of the gray level distribution of the image; it is inversely correlated with contrast: if contrast is small, homogeneity is usually large.
12 | Maximum probability | max_{i,j} p(i, j) | Gives a measure of the maximum frequency of occurrence of pixel pairs.
13 | Sum of squares: variance | Σ_{i,j} (i − μ)² p(i, j) | Measures the dispersion (with regard to the mean) of the gray level distribution.
14 | Sum average | Σ_{i=2}^{2Ng} i·p_{x+y}(i) | Measures the mean of the gray level sum distribution of the image.
15 | Sum variance | Σ_{i=2}^{2Ng} (i − [Σ_{i=2}^{2Ng} i·p_{x+y}(i)])² p_{x+y}(i) | Measures the dispersion (with regard to the mean) of the gray level sum distribution of the image.
16 | Sum entropy | −Σ_{i=2}^{2Ng} p_{x+y}(i) log{p_{x+y}(i)} | Measures the disorder related to the gray level sum distribution of the image.
17 | Difference variance | Σ_{i=0}^{Ng−1} (i − μ_{x−y})² p_{x−y}(i) | Measures the dispersion (with regard to the mean) of the gray level difference distribution of the image.
18 | Difference entropy | −Σ_{i=0}^{Ng−1} p_{x−y}(i) log{p_{x−y}(i)} | Measures the disorder related to the gray level difference distribution of the image.
19 | Information measure of correlation 1 | (HXY − HXY1)/max{HX, HY} | H is the entropy; HXY1 = −Σ_{i,j} p(i, j) log(px(i)·py(j)).
20 | Information measure of correlation 2 | (1 − e^{−2(HXY2 − HXY)})^{1/2} | HXY2 = −Σ_{i,j} px(i)·py(j) log(px(i)·py(j)).
21 | Inverse difference normalized (IDN) | Σ_{i,j} p(i, j)/(1 + |i − j|/N) | IDMN and IDN measure image homogeneity, assuming larger values for smaller gray tone differences in pair elements. They are more sensitive to the presence of near-diagonal elements in the GLCM and take their maximum value when all elements in the image are the same.
22 | Inverse difference moment normalized (IDMN) | Σ_{i,j} p(i, j)/(1 + (i − j)²/N²) |
Table 3

List of shape features.

Sl.No | Feature | Formula | Description
1 | Circularity | C = 4π·Area/Perimeter² | A measure of roundness or circularity (area-to-perimeter ratio), obtained as the ratio of the area of an object to the area of a circle with the same convex perimeter: 1 for a circular object and <1 or >1 for an object that departs from circularity.
2 | Eccentricity | E = short axis length/long axis length | Eccentricity is the ratio of the length of the short (minor) axis to the length of the long (major) axis of an object. Range: 0 to 1.
3 | Orientation | θ = (1/2)·tan⁻¹(2μ11/(μ20 − μ02)) | The orientation is the angle between the horizontal line and the major axis. It indicates the overall direction of the shape. Range: −90° to 90°.
The contrast feature has range [0, (size(GLCM, 1) − 1)²].
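For illustration, the GLCM construction and a few of the features in Table 2 can be sketched directly in NumPy. This is a simplified sketch for a single horizontal offset and a small number of gray levels; the function names are ours, not from the paper.

```python
import numpy as np

def glcm(image, levels, dr=0, dc=1):
    """Normalized gray-level co-occurrence matrix for one offset (dr, dc)."""
    p = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            p[image[r, c], image[r + dr, c + dc]] += 1
    return p / p.sum()

def glcm_features(p):
    """A few of the Table 2 features, computed from a normalized GLCM p."""
    i, j = np.indices(p.shape)
    return {
        "energy":      float(np.sum(p ** 2)),
        "contrast":    float(np.sum(((i - j) ** 2) * p)),
        "homogeneity": float(np.sum(p / (1.0 + np.abs(i - j)))),
        "entropy":     float(-np.sum(p[p > 0] * np.log(p[p > 0]))),
    }

# Tiny 3-level example image.
img = np.array([[0, 0, 1],
                [0, 0, 1],
                [2, 2, 2]])
feats = glcm_features(glcm(img, levels=3))
```

The remaining Table 2 features follow the same pattern: each is a weighted sum over the normalized matrix p(i, j).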

3.3. Classification

For the classification of retinal images into two classes, WT and TMM, SVMs are used, and the optimum kernel function is selected from a set of kernel functions for the classification. Figure 3 shows a pictorial representation of the classification using SVMs.
Figure 3

SVM for classification of transgenic mice.
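The kernel selection step can be sketched as follows. The paper selects the kernel with a genetic algorithm; as a simplified stand-in, this sketch searches the same kernel set exhaustively, assuming scikit-learn and synthetic stand-in features (22 features, matching the GLCM feature count).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the extracted texture features (two classes: WT vs TMM).
X, y = make_classification(n_samples=400, n_features=22, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Exhaustive search over the candidate kernels; the paper uses a genetic
# algorithm for this selection, which a grid search only approximates.
search = GridSearchCV(SVC(),
                      {"kernel": ["linear", "poly", "rbf", "sigmoid"],
                       "C": [0.1, 1, 10]},
                      cv=5)
search.fit(X_tr, y_tr)
best_kernel = search.best_params_["kernel"]
test_accuracy = search.score(X_te, y_te)
```

On real data, the cross-validated score plays the role of the genetic algorithm's fitness function for each candidate kernel.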

4. Experimental Results and Discussion

This proposed work is mainly implemented to provide precise classification between WT and TMM and to compute the disease accuracy in a precise manner. The proposed work undergoes preprocessing and feature extraction. Preprocessing is performed to minimize the level of artifacts in the input image, and enhancement of the contrast level is performed to improve the precise identification of features from the preprocessed image. Then, feature extraction is implemented to classify the images into two classes based on the extraction of significant features in an effective manner. In this section, the performance of the proposed model is evaluated on the set of images in the dataset shown in Figure 4. Table 4 describes the confusion matrix for the two classes using four kinds of counts. The definition of each count is given below, and classifier performance is shown in Table 5.
Figure 4

(a) and (b) Retinal images; (c) and (d) OCT images.

Table 4

Confusion matrix.

Predicted class \ True class | Positive | Negative | Total
Positive | TP (18) | FP (613) | TP + FP (631)
Negative | FN (6) | TN (1643) | FN + TN (1649)
Total | TP + FN (24) | FP + TN (2256) | 2280
Table 5

Classifier performance.

Classifier | Accuracy (%) | Sensitivity (%) | Specificity (%)
Decision tree | 96 | 94 | 97
Neural network | 74 | 98 | 73
Random forest | 98 | 65 | 98
SVM | 99 | 98 | 99
True positive (TP) is the number of candidates correctly identified as TMM.
False positive (FP) is the number of candidates incorrectly identified as TMM.
True negative (TN) is the number of candidates correctly identified as non-TMM.
False negative (FN) is the number of candidates incorrectly identified as non-TMM.
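These definitions translate directly into the standard evaluation metrics; as a worked example, the sketch below applies them to the counts in Table 4.

```python
# Confusion-matrix counts from Table 4.
TP, FP, FN, TN = 18, 613, 6, 1643

accuracy    = (TP + TN) / (TP + FP + FN + TN)
sensitivity = TP / (TP + FN)   # true-positive rate (recall)
specificity = TN / (TN + FP)   # true-negative rate
precision   = TP / (TP + FP)   # positive predictive value

print(f"accuracy={accuracy:.3f}, sensitivity={sensitivity:.3f}, "
      f"specificity={specificity:.3f}, precision={precision:.3f}")
```

The same four formulas yield the accuracy, sensitivity, and specificity columns of Table 5 when applied to each classifier's own confusion matrix.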

5. Conclusion

Alzheimer's disease is a progressive neurodegenerative illness defined by the presence of amyloid-β (Aβ) in the brain. Nevertheless, because the degenerative changes of the brain are complicated to classify, precise detection of this condition is a difficult process. The abnormalities in retinal fundus images for Alzheimer's disease are divided into two categories in this paper: wild-type (WT) and transgenic mice model (TMM). Optical coherence tomography (OCT) images are utilised to classify the subjects into the two categories for assessment. SVMs are used to classify the data, with the best kernel selected via a genetic algorithm. The RBF kernel function outperforms the other SVM kernel functions in terms of accuracy. The textural properties of retinal fundus images are used to achieve an efficient categorization utilising the SVM. The overall accuracy reached 92%, with 91% precision, for the classification of transgenic mice.

1.  Amyloid precursor protein and tau transgenic models of Alzheimer's disease: insights from the past and directions for the future.

Authors:  Naruhiko Sahara; Jada Lewis
Journal:  Future Neurol       Date:  2010-05-01

2.  Angiographic features of transgenic mice with increased expression of human serine protease HTRA1 in retinal pigment epithelium.

Authors:  Sandeep Kumar; Zachary Berriochoa; Balamurali K Ambati; Yingbin Fu
Journal:  Invest Ophthalmol Vis Sci       Date:  2014-05-22

3.  Non-invasive optical imaging of retinal Aβ plaques using curcumin loaded polymeric micelles in APPswe/PS1ΔE9 transgenic mice for the diagnosis of Alzheimer's disease.

Authors:  Fidelis Chibhabha; Yaqi Yang; Kuang Ying; Fujie Jia; Qin Zhang; Shahid Ullah; Zibin Liang; Muke Xie; Feng Li
Journal:  J Mater Chem B       Date:  2020-08-26

4.  From Brain Disease to Brain Health: Primary Prevention of Alzheimer's Disease and Related Disorders in a Health System Using an Electronic Medical Record-Based Approach.

Authors:  A M Fosnacht; S Patel; C Yucus; A Pham; E Rasmussen; R Frigerio; S Walters; D Maraganore
Journal:  J Prev Alzheimers Dis       Date:  2017

5.  Multimodal Coherent Imaging of Retinal Biomarkers of Alzheimer's Disease in a Mouse Model.

Authors:  Ge Song; Zachary A Steelman; Stella Finkelstein; Ziyun Yang; Ludovic Martin; Kengyeh K Chu; Sina Farsiu; Vadim Y Arshavsky; Adam Wax
Journal:  Sci Rep       Date:  2020-05-13

6.  Microglial Activation in the Retina of a Triple-Transgenic Alzheimer's Disease Mouse Model (3xTg-AD).

Authors:  Elena Salobrar-García; Ana C Rodrigues-Neves; Ana I Ramírez; Rosa de Hoz; José A Fernández-Albarral; Inés López-Cuenca; José M Ramírez; António F Ambrósio; Juan J Salazar
Journal:  Int J Mol Sci       Date:  2020-01-27

7.  Modular machine learning for Alzheimer's disease classification from retinal vasculature.

Authors:  Jianqiao Tian; Glenn Smith; Han Guo; Boya Liu; Zehua Pan; Zijie Wang; Shuangyu Xiong; Ruogu Fang
Journal:  Sci Rep       Date:  2021-01-08

8.  Vascular retinal biomarkers improves the detection of the likely cerebral amyloid status from hyperspectral retinal images.

Authors:  Sayed Mehran Sharafi; Jean-Philippe Sylvestre; Claudia Chevrefils; Jean-Paul Soucy; Sylvain Beaulieu; Tharick A Pascoal; Jean Daniel Arbour; Marc-André Rhéaume; Alain Robillard; Céline Chayer; Pedro Rosa-Neto; Sulantha S Mathotaarachchi; Ziad S Nasreddine; Serge Gauthier; Frédéric Lesage
Journal:  Alzheimers Dement (N Y)       Date:  2019-10-14
