
Analysis of visual appearance of retinal nerve fibers in high resolution fundus images: a study on normal subjects.

Radim Kolar1, Ralf P Tornow2, Robert Laemmer2, Jan Odstrcilik1, Markus A Mayer3, Jiri Gazarek4, Jiri Jan4, Tomas Kubena5, Pavel Cernosek5.   

Abstract

The retinal ganglion cell axons are an important part of the visual system and can be directly observed with a fundus camera. Together, they form the retinal nerve fiber layer (RNFL) inside the retina. This paper describes the results of a texture analysis of the RNFL in color fundus photographs and compares these results with quantitative measurements of RNFL thickness obtained from optical coherence tomography in normal subjects. It is shown that the local mean value, standard deviation, and Shannon entropy extracted from the green and blue channels of fundus images are correlated with the corresponding RNFL thickness. The correlation coefficients reached 0.694, 0.547, and 0.512 for the respective features, measured on 439 retinal positions in the peripapillary area from 23 eyes of 15 normal subjects.


Year:  2013        PMID: 24454526      PMCID: PMC3888693          DOI: 10.1155/2013/134543

Source DB:  PubMed          Journal:  Comput Math Methods Med        ISSN: 1748-670X            Impact factor:   2.238


1. Introduction

The examination of the retina with an ophthalmoscope or a fundus camera (analog or digital) has been successfully used in the diagnosis of many retinal and eye diseases [1]. Besides the optic disc, macula, and retinal vascular tree, the retinal nerve fiber layer (RNFL) can also be observed, particularly in red-free light as proposed by Kulwant [2]. This layer creates a stripy texture pattern that indicates the presence of nerve fibers. There has been an effort to analyze this layer in fundus images, which may improve glaucoma diagnosis. Table 1 summarizes several important papers in which RNFL analysis in fundus photography (analog or digital) has been described using different approaches. One of the fundamental papers was published in 1984 by Airaksinen et al. [3], who described a method for evaluating RNFL quality around the optic disc using a scoring system. In 1996, a comprehensive survey of visual RNFL analysis in fundus images with respect to age and optic disc damage was presented by Jonas and Dichtl [4]. A simple texture analysis for the detection of severe RNFL defects was described by Yogesan et al. (1998) [5] and tested on a set of 10 low-resolution digitized fundus photographs. Tuulonen et al. [6] described a microtexture analysis of the RNFL in gray-level digitized photographs; local texture properties based on brightness differences were computed and used as input for classification among glaucomatous, normal, and ocular hypertensive eyes. In our earlier paper [7] we described a fractal-based texture analysis method for the RNFL and its application to the classification of RNFL defects. Markov random fields have also been used for a similar purpose, with a simple, subjective comparison against data from optical coherence tomography (OCT) [8], as have directional spectral analysis and structural texture analysis [9]. An attempt at early glaucoma diagnosis is described in [10], where Gabor filters were used to detect wider RNFL defects.
Table 1

Short summary of papers describing different approaches to the evaluation of the RNFL in fundus images (DCFI stands for digital colour fundus images).

Hoyt et al. (1973), [11]
Method: The first subjective attempt to utilize fundus cameras for glaucoma detection by evaluating the visual appearance of the RNFL; comparison with perimetric findings.
Data: A small number of black-and-white photographs.
Results: Funduscopic signs of the RNFL pattern provide the earliest objective evidence of nerve fiber layer atrophy in the retina.

Lundstrom and Eklundh (1980), [12]
Method: Subjective visual evaluation of changes in RNFL pattern intensity using fundus photographs.
Data: A small number of black-and-white photographs.
Results: Found that consecutive changes in RNFL pattern intensity are connected to the progression of glaucoma.

Airaksinen et al. (1984), [3]
Method: Subjective scoring of the visual RNFL appearance in fundus photographs.
Data: Black-and-white photographs (84 normal, 58 glaucomatous).
Results: Confirmed the dependence between changes in the RNFL pattern and glaucoma progression in fundus photographs.

Peli (1988), [13]
Method: Semiautomatic analysis of RNFL texture based on intensity information.
Data: Digitized black-and-white photographs (5 normal, 5 glaucomatous, and 5 glaucoma suspects).
Results: Additional confirmation of the changes in RNFL intensity caused by glaucomatous atrophy.

Yogesan et al. (1998), [5]
Method: Automatic texture analysis of the RNFL based on gray-level run length matrices.
Data: Digitized fundus photographs of 648 × 560 pixels (5 normal, 5 glaucomatous).
Results: Promising results for large focal wedge-shaped RNFL losses well outlined by surrounding healthy nerve fiber bundles; diffuse RNFL losses could not be detected.

Tuulonen et al. (2000), [6]
Method: Semiautomatic microtexture analysis of the RNFL pattern.
Data: Digitized fundus photographs of 1280 × 1024 pixels (7 normal, 9 glaucomatous, and 8 glaucoma suspects).
Results: Showed that changes in the microtexture of the RNFL pattern are related to glaucomatous damage; limited by the small sample size.

Oliva et al. (2007), [14]
Method: Semiautomatic texture analysis based on RNFL pattern intensity; comparison with OCT measurement.
Data: DCFI of 2256 × 2032 pixels (9 normal, 9 glaucomatous).
Results: The correlation between the intensity-related parameters extracted from fundus images and the RNFL thickness measured by OCT was only 0.424.

Kolář and Jan (2008), [7]
Method: Automatic texture analysis of the RNFL based on fractal dimensions.
Data: DCFI of 3504 × 2336 pixels (14 normal, 16 glaucomatous).
Results: A local fractal coefficient was used as a feature for detecting glaucomatous eyes; robust estimation of this coefficient was problematic.

Muramatsu et al. (2010), [10]
Method: Automatic approach using Gabor filters to enhance regions with RNFL pattern, followed by clustering of these regions for glaucoma detection.
Data: DCFI of 768 × 768 pixels (81 normal, 81 glaucomatous).
Results: Suitable only for detecting focal and wider RNFL losses expressed by significant intensity changes.

Odstrcilik et al. (2010), [8]
Method: Automatic texture analysis of the RNFL based on Markov random fields.
Data: DCFI of 3504 × 2336 pixels (18 normal, 10 glaucomatous).
Results: The features' ability to differentiate between healthy and glaucomatous cases was validated using OCT RNFL thickness measurement.

Prageeth et al. (2011), [15]
Method: Automatic texture analysis using only intensity information about RNFL presence.
Data: DCFI of 768 × 576 pixels (300 normal, 529 glaucomatous).
Results: Intensity criteria were used; detection of substantial RNFL atrophy.

Acharya et al. (2011), [16]
Method: Automatic analysis of RNFL texture using higher-order spectra, run length, and co-occurrence matrices.
Data: DCFI of 560 × 720 pixels (30 normal, 30 glaucomatous).
Results: Specificity for detecting glaucomatous eyes above 91%; the article does not explain thoroughly how the features were extracted or in which image area they were computed.

Jan et al. (2012), [9]
Method: Automatic RNFL texture analysis based on a combination of intensity, edge representation, and Fourier spectral analysis.
Data: DCFI of 3504 × 2336 pixels (8 normal, 4 glaucomatous).
Results: The ability of the proposed features to classify RNFL defects was shown via comparison with OCT, though only in a heuristic manner.
Despite these applications, it is still not clear what the correlation between texture-analysis parameters and RNFL thickness is. Independent of the particular method, texture parameters (features) describe the visual appearance of the texture and offer a tool for qualitative and semiquantitative inspection of RNFL thickness. This paper describes a statistically based texture analysis of the RNFL in high-resolution color fundus images of normal subjects and its correlation with the RNFL thickness obtained by optical coherence tomography in the same subjects. The statistical basis makes the texture parameters easy to interpret, and it is hypothesized that this analysis can be predictive and can support glaucoma diagnosis. Although red-free photographs might be more appropriate for texture analysis, we used color fundus images because they are widely available, inexpensive, and easy to acquire. In early glaucoma, RNFL thinning precedes optic disc damage and visual field loss, so the RNFL can be used as a sensitive indicator of structural damage [17]. Recent papers, for example [18], indicate that RNFL thickness measured by OCT can be used for diagnostic support in different stages of glaucoma [19], particularly in the early stage, where RNFL thickness decreases dramatically.

The principle of the proposed method is shown in Figure 1, and the paper is organized as follows. Section 2.1 briefly describes the acquisition devices and the obtained images. Texture analysis of the fundus images is described in Section 2.2 and RNFL segmentation in OCT B-scans in Section 2.3. Section 2.4 describes the multimodal registration needed for the comparison of the modalities. The results are discussed in Section 3, and the paper finishes with concluding remarks in Section 4.
Figure 1

Flowchart of the proposed approach for RNFL visual appearance analysis. fROI stands for a region of interest in the fundus image and tROI for a region of interest in the RNFL thickness map. See Section 2 for a detailed description of each block.

2. Method

2.1. Data Acquisition

Color fundus images were taken with a digital nonmydriatic fundus camera (Canon CR-1 with a Canon EOS 40D body; 3888 × 2592 pixels, 45° field of view) on normal subjects without any suspected retinal or eye disease. 23 color images (eyes) from 15 subjects, taken on nondilated eyes in RAW (CR2) format, were used for the presented analysis. Special care was taken during image acquisition; only sharp images were considered for the analysis. For each analyzed eye, OCT volume scans were also acquired using a spectral-domain OCT (Spectralis OCT, Heidelberg Engineering). Infrared reflection images (scanning laser ophthalmoscope, SLO) and OCT cross-sectional B-scan images of the dual-laser scanning system were acquired simultaneously. Between 61 and 121 B-scans per eye were acquired, corresponding to B-scan spacings from 124.3 μm down to 63.1 μm (30° field of view). An example of the positions of the B-scans on the retinal surface is shown in Figure 3(a), together with the simultaneously acquired SLO image.
Figure 3

Spectralis SLO and OCT images. (a) SLO image; the blue lines represent the positions of the B-scans on the retinal surface (61 B-scans with 124.3 μm spacing). (b) B-scan image with segmentation lines after manual correction: internal limiting membrane above, outer nerve fiber layer boundary below.

2.2. Texture Analysis of RNFL in Fundus Images

We applied basic and advanced texture analysis methods in our previous work [7, 8, 20–22]. Statistically based methods are a basic tool for texture characterization and are also promising for RNFL texture analysis. There are three main classes of these methods: those based on 1st-order statistics, 2nd-order statistics, and higher-order statistics. Here, we applied first-order statistics, which depend only on individual pixel values and not on the interaction between pixels. The main reason for this simple choice is that the interpretation of the parameters is straightforward and gives a basic view of the texture properties and their visual appearance. This statistic includes five parameters (features): mean, standard deviation, kurtosis, skewness, and Shannon entropy (as defined in information theory). They are calculated from the intensity probability distribution, which is estimated from the histogram of the analyzed image region. The definitions and descriptions of these parameters can be found elsewhere [23]; here we present only the summarizing equations in Table 2.
Table 2

Definitions of the first-order features used for analysis.

Mean: μ = Σ_{g=0}^{G−1} g·H(g), where H(g) = n_g/N is the probability of gray level g estimated from the histogram; g = 0, 1, 2, …, G − 1, G is the number of gray levels, N is the number of pixels in the analyzed region, and n_g is the number of pixels with value g. The statistical moment of n-th order is μ_n = Σ_{g=0}^{G−1} (g − μ)^n·H(g).
Standard deviation: σ = √(μ_2) = √( Σ_{g=0}^{G−1} (g − μ)²·H(g) )
Shannon entropy: E = −Σ_{g=0}^{G−1} H(g)·log₂ H(g)
Skewness: γ_1 = μ_3 / μ_2^{3/2}
Kurtosis: γ_2 = μ_4 / μ_2² − 3
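For concreteness, the five features of Table 2 can be computed directly from the histogram of a grayscale ROI; a minimal numpy sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def first_order_features(roi, G=256):
    """First-order features of Table 2 from an integer-valued grayscale ROI."""
    g = np.arange(G)
    H = np.bincount(roi.ravel(), minlength=G) / roi.size   # H(g) = n_g / N
    mu = np.sum(g * H)                                     # mean
    mu_n = lambda n: np.sum((g - mu) ** n * H)             # n-th central moment
    sigma = np.sqrt(mu_n(2))                               # standard deviation
    nz = H[H > 0]                                          # skip empty bins (0*log0 := 0)
    entropy = -np.sum(nz * np.log2(nz))                    # Shannon entropy
    skewness = mu_n(3) / mu_n(2) ** 1.5
    kurtosis = mu_n(4) / mu_n(2) ** 2 - 3                  # excess kurtosis
    return mu, sigma, entropy, skewness, kurtosis
```

Applied to a 41 × 41 ROI from one of the R, G, B, or GB channel images, this yields the five features per channel used in the correlation analysis.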
The color fundus images were preprocessed in three steps. In the first step, we reconstructed the RGB image from the RAW data into TIFF format with linear gamma correction using the DCRAW software [24]. This step is important because it yields a linear relation between the image intensity and the intensity reflected from the retinal structures. The second step removes nonuniform illumination and increases contrast. Several methods were tested (e.g., [25, 26]) with the aim of increasing the correlation between image features and RNFL thickness; finally, contrast-limited adaptive histogram equalization (CLAHE) was used [27]. This method locally enhances the contrast on small tiles, so that the histogram of each output region has an approximately uniform distribution. The tile size was experimentally set to 20 × 20 pixels, although we observed that this size is not critical. Neighboring tiles are then interpolated to eliminate boundary artifacts. This approach was applied to each color channel separately. In the third step, four grayscale images were generated for subsequent analysis: the red (R), green (G), and blue (B) channels were used separately, and a grayscale image computed as the mean of the green and blue channels was also generated (GB image). The motivation for this last step comes from the optical properties of the green-blue filter usually used for red-free fundus imaging. This green-blue combination also corresponds to the absorption spectrum of rhodopsin, with its maximum around 500 nm. The data for the texture analysis were obtained by manual selection of small regions of interest (ROIs) around the optic disc (Figure 2), including the nasal, temporal, inferior, and superior areas. The positions of the ROIs correspond to various widths of the RNFL, given by retinal physiology [28], to cover a large range of RNFL thickness.
The ROI size was chosen as 41 × 41 pixels, a compromise between the ability of the features to characterize the texture locally and the need to select a sufficient number of ROIs without blood vessels. The ROIs are located in the close surroundings of the optic disc (approximately within two optic disc diameters) and were carefully selected to exclude blood vessels and capillaries, removing their influence on the texture analysis. There are around 20 ROIs per image, and the total number of ROIs used for texture analysis is 439. These ROIs were defined in the R, G, B, and GB channels, and the statistical features described above were computed from each ROI. This yields 20 features (5 features per channel), which are analyzed further below.
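The third preprocessing step (channel combination) and the ROI cropping can be sketched as follows, assuming the illumination-corrected RGB image is already available as an array; the function names and the cropping helper are illustrative:

```python
import numpy as np

def gb_channel(rgb):
    """GB image: pixel-wise mean of the green and blue channels.
    Assumes CLAHE illumination correction was already applied per channel."""
    return (rgb[..., 1].astype(float) + rgb[..., 2].astype(float)) / 2.0

def extract_roi(image, center_row, center_col, size=41):
    """Crop a size x size ROI centred on a manually chosen vessel-free position."""
    h = size // 2
    return image[center_row - h:center_row + h + 1,
                 center_col - h:center_col + h + 1]
```

Each manually placed 41 × 41 ROI is then passed to the feature computation to obtain the per-channel statistics.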
Figure 2

Fundus image (GB channel only) from our dataset with the ROIs selected for analysis. These regions were manually placed away from the blood vessels so that the vessels do not influence the texture features.

One remark should be made here. Each subset of these samples comes from the same image, which implies their statistical dependence. Nevertheless, each ROI can be considered a representation of the retinal structure at an independent position with a different RNFL thickness, so the ROIs are treated as statistically independent.

2.3. Segmentation in OCT Data

The OCT volume data were processed in a semiautomatic way. In the first step, the inner limiting membrane (ILM) and the outer nerve fiber layer boundary (ONFL) were automatically segmented; the parameters of the automated RNFL segmentation algorithm published in [29] were adapted for use on OCT volume scans. The algorithm can be summarized as follows: the retinal pigment epithelium (RPE) and the ILM are detected by edge detection taking the second derivative into account; after denoising the image with complex diffusion, the ONFL is found by an energy-minimization approach that takes the gradient as well as local and global smoothness constraints into account. The B-scans of the volume were segmented sequentially. This yielded segmentation errors in a few cases, particularly in B-scans crossing the optic disc. In the second step, all segmentation errors were corrected manually using a nonparameterized curve (free line). A Windows build of the segmentation software, called OCTSEG (optical coherence tomography segmentation and evaluation GUI), can be downloaded from http://www5.informatik.uni-erlangen.de/research/software; it serves many OCT-related image processing purposes, such as segmentation of the retinal layers and blood vessels and visualization of the results. An example of the segmented ILM and ONFL is shown in Figure 3(b). This semiautomatic segmentation results in an RNFL thickness image, reconstructed from the segmented B-scans. To ensure that the thickness image has the same pixel size as the SLO image, an interpolation technique must be used (bilinear or spline interpolation is acceptable for our task [30]). Because the B-scan positions are known, the thicknesses can be mapped onto the SLO image (see Figure 4(a)); this is utilized in the multimodal registration described in the next section.
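Reconstructing the dense thickness image from the segmented B-scans amounts to interpolating between the sparse B-scan positions. A sketch using simple linear interpolation along the scan axis (the paper permits bilinear or spline interpolation [30]); array names are illustrative:

```python
import numpy as np

def dense_thickness_map(profiles, scan_rows, n_rows):
    """Interpolate per-B-scan RNFL thickness profiles (one row per B-scan)
    to every row of the SLO grid by linear interpolation between scans.

    profiles:  (n_scans, width) thickness values in micrometres
    scan_rows: ascending row index of each B-scan on the SLO image
    n_rows:    number of rows of the target (SLO-sized) thickness image
    """
    profiles = np.asarray(profiles, dtype=float)
    rows = np.arange(n_rows)
    out = np.empty((n_rows, profiles.shape[1]))
    for col in range(profiles.shape[1]):
        # 1-D interpolation down each column between the known scan rows
        out[:, col] = np.interp(rows, scan_rows, profiles[:, col])
    return out
```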
Figure 4

An example of the manually segmented RNFL mapped on the SLO image (a) and the green channel of fundus image (b). The colormap is scaled in μm and the area around the optic disc has been removed because it does not contain the RNFL.

2.4. SLO to GB Image Registration

To compare the RNFL thickness map with the texture in the fundus images, image registration has to be performed. This bimodal registration (SLO to GB fundus image) can be automatic (e.g., [31, 32]) or manual. Here we used registration based on manually selected landmarks positioned at bifurcation points of the blood vessel tree. At least 12 landmarks were selected, distributed as uniformly as possible throughout the images (Figure 5(a)); they are used to estimate the parameters of the spatial transformation. Two kinds of spatial transformation are most often used in retinal applications: affine and second-order polynomial. The authors of [33] proved the validity of the quadratic transformation model for the curved retina, which is applicable particularly to images with a large field of view. We tested this quadratic transformation together with the affine transformation, and the quadratic model gave more precise results [34].
Figure 5

(a) Manually selected corresponding landmarks in SLO and fundus GB image. (b) Chessboard image from registered GB image. (c) Registered GB image.

The 12-parameter second-order polynomial transformation model is described by [34]

x′ = a₁ + a₂x + a₃y + a₄x² + a₅xy + a₆y²,
y′ = a₇ + a₈x + a₉y + a₁₀x² + a₁₁xy + a₁₂y².

Here, (x, y) denotes the coordinates of a landmark in the floating image (the image that will be aligned to the reference image) and (x′, y′) are the coordinates of this landmark after transformation into the coordinate system of the reference image. The image registration is defined as the minimization of the sum of squared differences (energy function ℰ) between the coordinates of corresponding landmarks in the reference image, (X, Y), and in the transformed floating image, (x′, y′):

ℰ = Σ_{i=1}^{N} [ (X_i − x′_i)² + (Y_i − y′_i)² ],

where N is the number of manually selected landmarks. Substituting the transformation model leads to

ℰ(a) = Σ_{i=1}^{N} [ (X_i − a₁ − a₂x_i − a₃y_i − a₄x_i² − a₅x_iy_i − a₆y_i²)² + (Y_i − a₇ − a₈x_i − a₉y_i − a₁₀x_i² − a₁₁x_iy_i − a₁₂y_i²)² ].

The energy ℰ is minimized with respect to the entries of the transformation coefficient vector a. This leads to a set of linear equations, which can be easily solved by Gauss elimination [35]. An example of the registration result is shown in Figure 5, together with the manually selected landmarks and a chessboard image. This processing was applied to each image pair (SLO and GB image) in our dataset. The registration enables straightforward mapping of the thickness image onto the fundus image, shown in Figure 4(b) together with the SLO image. The next step is the analysis of the texture features versus the RNFL thickness.
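Because the energy is linear in the coefficients, the minimization reduces to an ordinary least-squares problem; a sketch using a quadratic basis (term ordering and names are our choice, equivalent to solving the normal equations by Gauss elimination):

```python
import numpy as np

def _quadratic_basis(pts):
    """Design matrix [1, x, y, x^2, xy, y^2] for a set of 2-D points."""
    x, y = np.asarray(pts, float).T
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def fit_quadratic_transform(src, dst):
    """Least-squares fit of the 12-parameter second-order polynomial model
    mapping landmarks src=(x, y) of the floating image onto dst=(X, Y)
    of the reference image. Returns a (6, 2) coefficient matrix."""
    coeffs, *_ = np.linalg.lstsq(_quadratic_basis(src),
                                 np.asarray(dst, float), rcond=None)
    return coeffs

def apply_quadratic_transform(coeffs, pts):
    """Map points with a fitted coefficient matrix."""
    return _quadratic_basis(pts) @ coeffs
```

With at least six well-spread landmarks the fit is determined; the paper's twelve or more landmarks overdetermine it, and the least-squares solution minimizes ℰ.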

3. Results and Discussion

The result of the processing described so far is a set of small ROIs in the fundus images (fROI) and the corresponding ROIs in the thickness map (tROI). As mentioned, the fROI size is 41 × 41 pixels, chosen to span a sufficiently large region of RNFL striation; the maximum fROI size was limited by the blood vessels and other anatomical structures in the retinal image. At the tROI position (determined by the fROI position), the thickness was estimated as the mean value within the 7 × 7 central window. This tROI size is equivalent to 0.0066 mm².

3.1. Correlation Analysis

The first step of the correlation analysis focuses on the correlation between each feature and thickness. Spearman's rank correlation coefficients R_S were calculated between each feature and the corresponding RNFL thickness for the dataset of ROIs in each fundus image. The R_S values and basic statistics are summarized in Table 3. The correlation with thickness is lowest for all R-channel features. The other channels show higher Spearman's correlations, particularly the features from the GB channel (with P value < 0.05). Features computed from this channel are also preferable from another point of view: they have a low inter-image standard deviation of R_S and the highest minimum and maximum correlations.
Table 3

The table summarizes Spearman's correlation coefficients computed from the samples in each image. The mean, standard deviation, minimum, and maximum values are presented together with the mean P value. The features (mean μ, standard deviation σ, and Shannon entropy E) were estimated in the different channels (R, G, B, and GB).

Feature | R_S mean | R_S st. dev. | R_S min | R_S max | Mean P value
μ_R | 0.461 | 0.193 | 0.161 | 0.726 | 0.114
σ_R | 0.344 | 0.258 | 0.037 | 0.811 | 0.301
E_R | 0.212 | 0.249 | −0.205 | 0.583 | 0.387
μ_G | 0.758 | 0.088 | 0.621 | 0.867 | 0.001
σ_G | 0.706 | 0.110 | 0.563 | 0.873 | 0.002
E_G | 0.646 | 0.104 | 0.492 | 0.830 | 0.006
μ_B | 0.750 | 0.116 | 0.516 | 0.874 | 0.003
σ_B | 0.702 | 0.107 | 0.549 | 0.872 | 0.002
E_B | 0.566 | 0.241 | −0.015 | 0.848 | 0.110
μ_GB | 0.765 | 0.099 | 0.590 | 0.874 | 0.001
σ_GB | 0.708 | 0.108 | 0.559 | 0.869 | 0.002
E_GB | 0.657 | 0.096 | 0.531 | 0.844 | 0.004
Spearman's correlation coefficients were also computed between the individual features and the corresponding RNFL thickness over the whole dataset of ROIs at once. These values are summarized in Table 4. Correlation values higher than 0.5 are obtained for most features from the G, B, and GB channels. Scatter plots of the features versus thickness are shown in Figure 6. The data show rather high variance; nevertheless, the dependence of the feature values on RNFL thickness is obvious. A linear fit is shown for illustration.
Table 4

Spearman's correlation coefficients between considered features and RNFL thickness for the whole dataset; P value < 0.01.

Feature | μ_R | σ_R | E_R | μ_G | σ_G | E_G | μ_B | σ_B | E_B | μ_GB | σ_GB | E_GB
R_S | 0.383 | 0.156 | 0.103 | 0.681 | 0.532 | 0.491 | 0.667 | 0.501 | 0.352 | 0.694 | 0.547 | 0.512
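The coefficients in Tables 3 and 4 are rank correlations: Spearman's R_S is simply the Pearson correlation of the ranks. A small self-contained sketch (in practice a library routine such as scipy.stats.spearmanr would be used):

```python
import numpy as np

def spearman_r(x, y):
    """Spearman's rank correlation coefficient.
    Ranks are averaged over tied values."""
    def ranks(v):
        v = np.asarray(v, float)
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        for val in np.unique(v):          # average ranks of ties
            mask = v == val
            r[mask] = r[mask].mean()
        return r
    # Pearson correlation of the two rank vectors
    return np.corrcoef(ranks(x), ranks(y))[0, 1]
```

Unlike the Pearson coefficient, this measure is insensitive to the (possibly nonlinear) monotone shape of the feature-thickness relation visible in the scatter plots.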
Figure 6

Scatter plots for the three features (μ, σ, and E) versus RNFL thickness for the different channels (R, G, B, and GB).

Each feature from the R channel has a relatively low correlation (<0.5), probably caused by light reflections from deeper retinal structures; this channel is therefore not suitable for RNFL texture analysis. Moreover, the light reflections within the red spectral band are relatively strong, and the reflected intensity can saturate the R channel of the sensor. These results indicate that the G, B, and GB channels are the most suitable for texture analysis. The correlation coefficients of the individual features are slightly higher for the GB channel than for the single G and B channels. However, the correlation between corresponding features was also investigated, and, as expected, there is a strong linear correlation between the same feature computed from the GB, G, or B channel (>0.86, P value < 0.01). We therefore use only the GB channel in the further analysis. Another reason to prefer the GB channel is connected with fundus camera acquisition: the appearance of the RNFL striation in the G or B channel depends on the properties of the CMOS/CCD sensor of the fundus camera. Combining the green and blue channels decreases this dependence, because it merges the spectral characteristics of the green and blue filters (which can differ between manufacturers), and it is therefore more practical.

3.2. Regression Analysis

Multivariate nonlinear regression analysis was applied to create a statistical model, with μ_GB and σ_GB as predictors and RNFL thickness t as the response. We used a second-order model, which is appropriate given the dependence of the individual features on the thickness values:

t = β₁ + β₂μ_GB + β₃σ_GB + β₄μ_GB² + β₅μ_GB·σ_GB + β₆σ_GB²,

where β is the vector of fitting coefficients. The nonlinear regression function nlinfit implemented in Matlab R2007b was used. The results are shown graphically in Figure 7 and the estimated values are summarized in Table 5. The model was fitted on normalized data so that the influence of the individual coefficients can be compared. The strongest linear dependence is on μ_GB; σ_GB has a similar influence in its linear and quadratic terms.
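Although the model is quadratic in the predictors, it is linear in β, so it can equally be fitted by ordinary least squares; a sketch with an assumed term ordering (the paper used Matlab's nlinfit, and the ordering of the six terms is our reading of Table 5):

```python
import numpy as np

def fit_second_order(mu, sigma, thickness):
    """Fit t ~ b1 + b2*mu + b3*sigma + b4*mu^2 + b5*mu*sigma + b6*sigma^2
    by ordinary least squares. Returns (beta, mean absolute error)."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    t = np.asarray(thickness, float)
    X = np.column_stack([np.ones_like(mu), mu, sigma,
                         mu ** 2, mu * sigma, sigma ** 2])
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)
    mae = np.mean(np.abs(X @ beta - t))       # in-sample mean absolute error
    return beta, mae
```

Fitting on normalized predictors, as in the paper, makes the magnitudes of the coefficients directly comparable.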
Figure 7

Graphical result of the multivariate regression analysis using the second-order polynomial model.

Table 5

The table shows the model coefficients, MAE (mean absolute error), and MCI (mean half width confidence interval).

β₁ | β₂ | β₃ | β₄ | β₅ | β₆
80.53 | 24.40 | −3.87 | 3.30 | 0.29 | −3.41

MAE = 15.59 μm, MCI = 4.44, R² = 0.531
This basic analysis shows that several basic statistical features are correlated with the RNFL thickness measured quantitatively by OCT. An example of 8 selected fROIs (from the GB channel) with the corresponding feature values and RNFL thicknesses is shown in Table 6. With increasing RNFL thickness, the texture changes from random to more organized, which is well captured by the μ_GB, σ_GB, and E_GB values. The gray-level mean has a straightforward interpretation: the reflected light intensity depends on the RNFL thickness. The standard deviation describes the "magnitude" of the gray-level spatial variation of the nerve fibers independently of the illumination. The Shannon entropy quantifies the shape of the intensity probability density function estimated by the histogram: a homogeneous region without RNFL has a narrow histogram concentrated in a few gray levels and therefore a low entropy, whereas the stripy RNFL pattern spreads the intensities over more gray levels, giving a higher entropy. Skewness and kurtosis also describe the shape of the probability density function, but in a different way that is not significant in this case.
Table 6

Several selected fROIs are shown together with RNFL thickness and texture features computed from corresponding fROIs.

The regression model was used to estimate the thickness-estimation error within each eye. The relative error of the thickness estimate was computed for each sample, and the median value was determined for each eye separately. This median error ranges from 11.6% to 23.8%, with a mean of 16.9% and a standard deviation of 2.9%. The number of tested regions per retinal image ranges from 15 to 23. This level of within-eye error and variance is promising, considering that only two basic texture features, the mean and the standard deviation, are used. The mean error also corresponds to the MAE of the regression model on the whole dataset, which indicates unbiased estimates of within-eye thickness. Nevertheless, more advanced texture analysis methods are expected to enable a more precise regression model.
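The within-eye error statistic above can be reproduced from the measured and model-predicted thicknesses of one eye; a one-function sketch with illustrative names:

```python
import numpy as np

def within_eye_error(measured, predicted):
    """Median relative thickness-estimation error (in percent) for one eye,
    computed over all of its tested ROIs."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    rel = np.abs(predicted - measured) / measured * 100.0
    return np.median(rel)
```

Computing this per eye and then averaging over eyes gives the reported mean within-eye error.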

4. Conclusion

This study on healthy subjects shows that basic local intensity analysis of the nerve fibers in fundus photographs is related to RNFL thickness. The local reflected intensity in the green-blue spectral band depends on the RNFL thickness, as do the local standard deviation and Shannon entropy, which describe the probability density function of the region intensities. The correlation between RNFL thickness and the analyzed parameters is above 0.5; these values are also influenced by noise in the fundus images, inter-subject variability, and inaccuracies in the RNFL segmentation. The results indicate that when physicians analyze a fundus image, the local intensity variation along the nerve fiber bundles is connected to RNFL thickness. A nonlinear statistical model was built using multivariate nonlinear regression, with a mean absolute error of 15.59 μm; this model offers the possibility of a rough estimation of RNFL thickness from texture features. In conclusion, two remarks should be emphasized. First, only high-quality, high-resolution fundus images were used in this study; this is a prerequisite for successful texture analysis. The second remark concerns the RAW format: all images were acquired in RAW format and converted to a lossless image format with linear gamma correction. If a nonlinear gamma function is used, the feature values will show a different dependence on RNFL thickness, which may influence the texture features and the visual appearance of the RNFL observed by physicians in the fundus intensity image. The texture analysis of the nerve fiber layer in fundus images thus seems to be a promising tool, which can be used for screening purposes and added as an additional feature to a fundus-photography-based screening protocol (e.g., the glaucoma risk index presented by Bock et al. [36]). The possibility and usefulness of automatic texture analysis in images of glaucoma patients will be investigated in a next step.
References indexed in this database (16 in total; first 10 shown)

1. Interpolation revisited. P Thévenaz; T Blu; M Unser. IEEE Trans Med Imaging, 2000-07.

2. Computer measurement of retinal nerve fiber layer striations. E Peli; T R Hedges III; B Schwartz. Appl Opt, 1989-03-15.

3. Digital imaging and microtexture analysis of the nerve fiber layer. A Tuulonen; H Alanko; P Hyytinen; J Veijola; T Seppänen; P J Airaksinen. J Glaucoma, 2000-02.

4. Detection of retinal nerve fiber layer defects on retinal fundus images for early diagnosis of glaucoma. Chisako Muramatsu; Yoshinori Hayashi; Akira Sawada; Yuji Hatanaka; Takeshi Hara; Tetsuya Yamamoto; Hiroshi Fujita. J Biomed Opt, 2010 Jan-Feb.

5. Automated diagnosis of glaucoma using texture and higher order spectra features. U Rajendra Acharya; Sumeet Dua; Xian Du; Vinitha Sree S; Chua Kuang Chua. IEEE Trans Inf Technol Biomed, 2011-02-24.

6. Evaluation of the retinal nerve fiber layer (review). J B Jonas; A Dichtl. Surv Ophthalmol, 1996 Mar-Apr.

7. Fundoscopy of nerve fiber layer defects in glaucoma. W F Hoyt; L Frisén; N M Newman. Invest Ophthalmol, 1973-11.

8. Evaluation of retinal nerve fiber layer, optic nerve head, and macular thickness measurements for glaucoma detection using optical coherence tomography. Felipe A Medeiros; Linda M Zangwill; Christopher Bowd; Roberto M Vessani; Remo Susanna; Robert N Weinreb. Am J Ophthalmol, 2005-01.

9. Diffuse and localized nerve fiber loss in glaucoma. P J Airaksinen; S M Drance; G R Douglas; D K Mawson; H Nieminen. Am J Ophthalmol, 1984-11.

10. Retinal nerve fiber layer segmentation on FD-OCT scans of normal subjects and glaucoma patients. Markus A Mayer; Joachim Hornegger; Christian Y Mardin; Ralf P Tornow. Biomed Opt Express, 2010-11-08.
