Literature DB >> 32518709

Factors in Color Fundus Photographs That Can Be Used by Humans to Determine Sex of Individuals.

Takehiro Yamashita1, Ryo Asaoka2, Hiroto Terasaki1, Hiroshi Murata2, Minoru Tanaka1, Kumiko Nakao1, Taiji Sakamoto1.   

Abstract

Purpose: Artificial intelligence (AI) can identify the sex of an individual from color fundus photographs (CFPs). However, the mechanism(s) involved in this identification has not been determined. This study was conducted to determine the information in CFPs that can be used to determine the sex of an individual.
Methods: Prospective observational cross-sectional study of 112 eyes of 112 healthy volunteers. The following characteristics of CFPs were analyzed: the color of peripapillary area expressed by the mean values of red, green, and blue intensities, and the tessellation expressed by the tessellation fundus index (TFI). The optic disc ovality ratio, papillomacular angle, retinal artery trajectory, and retinal vessel angles were also quantified. Their differences between the sexes were assessed by Mann-Whitney U tests. Regularized binomial logistic regression was used to select the decisive factors. In addition, its discriminative performance was evaluated through the leave-one-out cross validation.
Results: The mean age of the 76 men and 36 women was 25.8 years. The regularized binomial logistic regression selected the peripapillary temporal green and blue intensities, temporal TFI, supratemporal TFI, optic disc ovality ratio, artery trajectory, and supratemporal retinal artery angle as the variables of the optimal model for sex. With this approach, the discrimination accuracy rate was 77.9%.
Conclusions: Human-assessed characteristics of CFPs are useful in investigating the new theme proposed by AI, the sex of an individual.
Translational Relevance: This is the first report to approach the thinking process of AI by humans, and it can be a new approach to medical AI research. Copyright 2020 The Authors.

Keywords:  color fundus photographs; sex differences; sex identification

Year:  2020        PMID: 32518709      PMCID: PMC7255626          DOI: 10.1167/tvst.9.2.4

Source DB:  PubMed          Journal:  Transl Vis Sci Technol        ISSN: 2164-2591            Impact factor:   3.283


Introduction

Artificial intelligence (AI), in particular deep learning, has become one of the most studied topics in science. In ophthalmology, AI is now about to enter the clinical phase for the diagnosis and prognosis of diseases. In the field of AI in ophthalmology, there are some new findings assumed not to have been possible before AI. One of the most unexpected findings was the ability of AI to identify the sex of an individual from the characteristics of that individual's ocular color fundus photographs (CFPs). The report by Poplin et al. showed that the accuracy rate reached as high as 97%. However, because of the mechanisms of deep learning, it is not possible to identify which clinical parameters were used by the machine to discriminate the sex of the individual whose CFPs were being analyzed. To the best of our knowledge, there has not been a study that evaluated the usefulness of the characteristics of CFPs in determining the sex of the individual whose CFPs were being analyzed (Fig. 1).
Figure 1.

Representative fundus photographs of a man (left) and a woman (right). The fundus of the man has a reddish color (left), whereas that of the woman is bluish to greenish (right). The supratemporal artery is located closer to the macula in the woman's eye (right) than in the man's eye (left).

Even though the overall approach may be new and its conclusion seems feasible, careful consideration is still needed when applying AI in the medical field. For example, AI can show the relationship between two phenomena but cannot differentiate cause from result. A well-known illustration of this discrepancy: because no patients are recorded in a region with no hospitals, AI can conclude that the presence of a hospital is the cause of the disease. Thus, it is important to investigate the bases for the conclusions provided by AI using clinically established parameters, especially in the medical field. One method to resolve this problem is tracking the process of the AI to change a "black box AI" into an "explainable AI," such as the heatmap in the report by Poplin et al. Another way is to validate the results independently using clinical parameters known by humans. We began this project to determine whether we can approach the conclusion of AI using only known parameters, with the theme of distinguishing the sex from ocular CFPs. Thus far, there have been numerous studies using various parameters of ocular function and biometry, and the parameters used in this study were derived from them. If these factors are useful for distinguishing the sex, we may be able to understand the conclusions by Poplin et al. We found that a combination of known clinical parameters in the CFPs is useful in identifying the sex of the individual whose CFPs were being analyzed. Most important, this is the first study of a novel diagnostic model using known clinical parameters that can solve a new theme provided by AI in ophthalmology.

Methods

The study was approved by the Ethics Committee of Kagoshima University Hospital, and it was registered with the University Hospital Medical Network clinical trials registry. The registration title was "Morphological analysis of the optic disc and the retinal nerve fiber in myopic eyes" and the registration number was UMIN000006040. A detailed protocol is available at https://upload.umin.ac.jp/cgi-open-bin/ctr/ctr.cgi?function=brows&action=brows&type=summary&recptno=R00000715. Data in this manuscript were also used in our other studies. All of the procedures conformed to the tenets of the Declaration of Helsinki. Written informed consent was obtained from all subjects after an explanation of the procedures being used.

Subjects

This was a cross-sectional, prospective observational study. A total of 133 eyes of 133 volunteers were enrolled between November 1, 2010, and February 29, 2012. Volunteers with no known eye diseases, as determined by examining their medical charts, were studied, and only the data from the right eyes were analyzed. The eligibility criteria were: age ≥20 years but <40 years; eyes normal by slit-lamp biomicroscopy, ophthalmoscopy, and OCT; best-corrected visual acuity ≤0.1 logarithm of the minimum angle of resolution units; and intraocular pressure ≤21 mm Hg. The exclusion criteria were: eyes with known ocular diseases such as glaucoma, staphyloma, and optic disc anomalies; presence of systemic diseases such as hypertension and diabetes; presence of visual field defects; and history of refractive or intraocular surgery. Seven eyes were excluded because of an ocular disease or prior ocular surgery: 3 eyes because of superior segmental optic hypoplasia, 1 eye because of glaucoma, and 3 eyes because of laser-assisted in situ keratomileusis. Fourteen other eyes were excluded because of difficulty in measuring the fundus parameters. In the end, the data of 112 right eyes of 112 individuals (76 men and 36 women) were used for the statistical analyses. The axial lengths and refractive errors were measured as in our earlier studies.

Angles of Supratemporal and Infratemporal Retinal Arteries and Veins, Location of Papillomacular Position, and Degree of Optic Disc Ovality

The CFPs and the OCT images were taken by a fundus camera (Topcon 3D OCT-1000 Mark II). The angle between the supratemporal (ST) or infratemporal (IT) major retinal artery and the temporal horizontal line was measured by placing a 3.4-mm green circle centered on the optic disc and locating the intersection of the ST or IT major retinal artery (RA) with the circle. The angle between the ST or IT major retinal vein (RV) and the temporal horizontal line was measured by the same method. We named these the ST and IT retinal artery (ST-RA and IT-RA) angles and the ST and IT retinal vein (ST-RV and IT-RV) angles. The papillomacular position is the angle formed by a horizontal line and the line connecting the optic disc center and the fovea in the CFPs (Fig. 2A). The ovality ratio was determined on the CFPs as described in detail previously. The maximum and minimum disc diameters were measured by a single observer. We defined the vertical axis of the disc as the longest diameter that was less than 45° from the geometric vertical axis, and the horizontal axis as the longest diameter that was more than 45° from the geometric vertical axis. The degree of ovality, the ovality ratio, was determined by dividing the maximum by the minimum disc diameter (Fig. 2B).
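Once the disc center and the point where a vessel crosses the 3.4-mm circle are known, the angle measurement reduces to plane trigonometry. A minimal sketch (the coordinates and the helper name are ours, purely hypothetical, not from the paper):

```python
import math

def vessel_angle(disc_center, crossing_point):
    """Angle (degrees) between the temporal horizontal line through the
    disc center and the line to where a vessel crosses the 3.4-mm circle.
    Coordinates assume x increases temporally and y increases upward."""
    dx = crossing_point[0] - disc_center[0]
    dy = crossing_point[1] - disc_center[1]
    # abs(dy) so supra- and infratemporal vessels both give positive angles
    return math.degrees(math.atan2(abs(dy), dx))

# Hypothetical pixel coordinates for a right eye (temporal = +x):
disc = (100.0, 100.0)
supratemporal_crossing = (120.0, 140.0)   # above the horizontal line
print(round(vessel_angle(disc, supratemporal_crossing), 1))  # 63.4
```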
Figure 2.

Method of quantifying retinal vessel angles and (A) papillomacular position and (B) ovality ratio. Red double arrows point to the supra- and infratemporal retinal artery angles. Blue double arrows point to the supra- and infratemporal retinal vein angles. White double arrow is papillomacular position. The ovality ratio was determined by dividing the maximum by the minimum disc diameters.


Measurement of Retinal Artery Trajectory

The curvature of the RA trajectory was quantified by fitting it to a second-degree polynomial equation as described in detail elsewhere. The RA and the center of the optic disc were identified in the CFPs. The fovea-to-disc axis in the CFPs was rotated to a vertical position. At least 20 points on the ST-RA and IT-RA were marked on the CFPs. The x and y coordinates of each mark were then determined automatically by the ImageJ program (ImageJ version 1.47, National Institutes of Health, Bethesda, MD; available at: http://imagej.nih.gov/ij/). The x and y coordinates in the CFPs were then converted to a new set of coordinates with the center of the disc as the origin. Finally, the converted coordinate data were fit to a second-degree polynomial equation, y = ax²/100 + bx + c, with the curve-fitting program of ImageJ, where a, b, and c are constants calculated by the curve-fitting program. Under these conditions, a larger "a" will make the curve steeper and will bring the arms of the curve closer to the fovea. Thus, the a constant was used as the curvature of the RA trajectory (Fig. 3A).
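The trajectory fit described above can be sketched with numpy instead of ImageJ (assuming the parameterization y = ax²/100 + bx + c with a as the curvature constant; the point coordinates below are synthetic, not measured data):

```python
import numpy as np

# Hypothetical marked artery points, origin already moved to the disc
# center and the fovea-disc axis rotated vertical, generated from a
# known quadratic so the fit can be checked.
x = np.array([-50, -40, -30, -20, -10, 0, 10, 20, 30, 40, 50], dtype=float)
true_a, true_b, true_c = 2.0, 0.1, 5.0
y = true_a * x**2 / 100 + true_b * x + true_c

# np.polyfit returns coefficients of y = p0*x**2 + p1*x + p2, so the
# curvature constant "a" of the paper's parameterization is p0 * 100.
p = np.polyfit(x, y, 2)
a = p[0] * 100
print(round(a, 3))  # 2.0
```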
Figure 3.

Method of quantifying (A) retinal artery trajectory and (B) fundus color. The coordinate data (yellow dots in the fundus photograph) are fit to a second-degree polynomial equation, y = ax²/100 + bx + c, with the curve-fitting program of ImageJ (lower graph). The a constant was used as the degree of curvature of the retinal artery trajectory. ImageJ software was used to measure the mean intensities of red (R), green (G), and blue (B) (right three graphs) within the eight peripapillary circles (yellow circles in the fundus photograph). The tessellation fundus index (TFI) is calculated by the following formula: TFI = R/(R + G + B) in each of the eight locations.


Measurement of Red, Green, and Blue Intensity in Eight Peripapillary Locations and Calculation of Tessellation Fundus Index

Using the same CFPs, the ImageJ software was used to calculate the mean intensities of red (R), green (G), and blue (B) within each measurement area. This was followed by the construction of histograms of the number of red, green, and blue pixels in each circular area. The tessellation fundus index (TFI) calculation algorithm was as described in detail in earlier studies, whose findings showed that the peripapillary location of the tessellations varied greatly. The TFI was calculated from the mean red intensity (R), the mean green intensity (G), and the mean blue intensity (B) as follows: TFI = R/(R + G + B), using the mean red-green-blue intensities in each of the eight locations. The measurement areas were determined as follows: first, the center of the lateral circle, with a diameter of 48 pixels, was placed on the line between the macula and the center of the optic nerve head, contacting the lateral margin of the optic nerve head, as in Figure 3B. Then the neighboring circles were placed in contact with each other.
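The TFI computation can be sketched as follows (a minimal Python/numpy version run on a synthetic uniform-color patch; the actual measurements used ImageJ on real CFPs, and the function names are ours):

```python
import numpy as np

def mean_rgb_in_circle(img, center, diameter=48):
    """Mean R, G, B within a circular region of a fundus image array
    (H x W x 3, channels in RGB order); center is (x, y) in pixels."""
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - center[0])**2 + (yy - center[1])**2 <= (diameter / 2)**2
    return img[mask].mean(axis=0)

def tessellation_fundus_index(img, center, diameter=48):
    """TFI = R / (R + G + B) over the circular measurement area."""
    r, g, b = mean_rgb_in_circle(img, center, diameter)
    return r / (r + g + b)

# Synthetic patch with uniform color (R=140, G=90, B=40), so
# TFI = 140 / (140 + 90 + 40) ≈ 0.519 wherever the circle is placed.
img = np.zeros((200, 200, 3), dtype=float)
img[..., 0], img[..., 1], img[..., 2] = 140, 90, 40
print(round(tessellation_fundus_index(img, (100, 100)), 3))  # 0.519
```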

Statistical Analyses

The sex difference of each fundus parameter was assessed by the Mann-Whitney U test. Then, the odds ratio of each fundus parameter for sex was evaluated using univariate binomial logistic regression. Next, the optimal model for sex was determined by using regularized binomial logistic regression. It is widely acknowledged that ordinary statistical models, such as linear or binomial logistic regression, may be overfitted to the original sample, especially when the number of predictor variables is large. The least absolute shrinkage and selection operator is a method proposed by Tibshirani et al. in which these problems in linear/logistic modeling are mitigated by applying a shrinkage method so that the sum of the absolute values of the regression coefficients is regularized. This method has been used in many different fields, from the analysis of human perception to genetic analysis, and we have recently shown the usefulness of this approach in glaucoma. More specifically, in the case of L2 regularized binomial logistic regression (Ridge binomial logistic regression), the penalized version of the log-likelihood function to be maximized is

Σ_{i=1}^{n} { y_i·(x_i β) − log(1 + exp(x_i β)) } − λ·Σ_{j=1}^{p} β_j²,

where x_i is the i-th row of a matrix of n observations with p predictors, y_i is the i-th binary outcome (sex), β is the column vector of the regression coefficients, and λ represents the penalty applied. Of note, this is identical to ordinary binomial logistic regression when λ equals zero. Unlike deep learning, it is possible to directly observe the effect of the selected parameters in the optimal model. Next, the diagnostic performance of the Ridge binomial logistic regression approach was evaluated using the leave-one-out cross-validation method. In the leave-one-out cross-validation, a single observation from the original sample was used as the validation data, and the remaining observations (111 subjects) were used as the training data.
This procedure was repeated such that each observation in the sample was used once as the validation data (112 iterations). The diagnostic accuracy was evaluated by using the area under the receiver operating characteristic curve (AROC). All statistical analyses were performed with SPSS statistics 19 for Windows (SPSS Inc., IBM, Somers, NY) and the statistical programming language R (ver. 3.1.3, The R Foundation for Statistical Computing, Vienna, Austria).
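The Ridge logistic regression with leave-one-out cross-validation described above can be sketched in Python with scikit-learn (synthetic data stand in for the 112 subjects and their fundus parameters; note that scikit-learn's C is the inverse of the penalty λ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the study data: 112 subjects, 7 fundus parameters,
# and a binary "sex" label generated from a linear signal plus noise.
rng = np.random.default_rng(0)
n, p = 112, 7
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = (X @ beta + rng.normal(size=n) > 0).astype(int)

# L2 (Ridge) penalized binomial logistic regression; C = 1 / lambda.
model = LogisticRegression(penalty="l2", C=1.0, solver="lbfgs")

# Leave-one-out cross-validation: each subject is the validation set once.
scores = np.empty(n)
for train_idx, test_idx in LeaveOneOut().split(X):
    model.fit(X[train_idx], y[train_idx])
    scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]

print(round(roc_auc_score(y, scores), 3))   # cross-validated AROC
```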

Results

The demographics of the participants are shown in Table 1. The mean age was 25.8 years, and there were 76 men and 36 women.
Table 1.

Participants' Data

Sex (Eyes)                       | Men, 76 Eyes (Mean ± SD) | Women, 36 Eyes (Mean ± SD) | P Value
Age (years)                      | 26.1 ± 3.8               | 25.1 ± 4.1                 | 0.075
Spherical equivalent (diopters)  | −4.69 ± 3.53             | −4.76 ± 3.00               | 0.806
Axial length (mm)                | 25.68 ± 1.38             | 24.82 ± 1.44               | 0.006

SD, standard deviation.

Mann-Whitney analysis showed that the ovality ratio, the ST-RA, and the ST-RV of the men were significantly larger than those of the women. The green and blue intensities of the women were significantly higher than those of the men, except for the infranasal and inferior green intensities. All of the TFIs in the men were significantly higher than those of the women (Table 2).
Table 2.

Sex Difference of Ocular Fundus Parameters Used for Analysis

Fundus Parameter | Men, 76 Eyes (Mean ± SD) | Women, 36 Eyes (Mean ± SD) | P Value | Significantly Higher Group
Papillomacular position (°) | 5.68 ± 3.63 | 4.95 ± 3.22 | 0.297 |
Retinal artery trajectory | 0.45 ± 0.10 | 0.52 ± 0.17 | 0.071 |
Ovality ratio | 0.91 ± 0.12 | 0.86 ± 0.10 | 0.039 | Men
Retinal artery angle (°), supratemporal | 64.2 ± 12.0 | 57.8 ± 11.0 | 0.01 | Men
Retinal artery angle (°), infratemporal | 67.6 ± 13.0 | 63.8 ± 17.1 | 0.244 |
Retinal vein angle (°), supratemporal | 70.2 ± 13.5 | 60.9 ± 12.9 | 0.017 | Men
Retinal vein angle (°), infratemporal | 70.2 ± 16.5 | 70.2 ± 13.8 | 0.888 |
Red intensity, temporal | 139.5 ± 30.8 | 149.2 ± 24.3 | 0.078 |
Red intensity, supratemporal | 148.5 ± 27.9 | 154.7 ± 23.5 | 0.345 |
Red intensity, superior | 144.3 ± 25.7 | 145.6 ± 23.3 | 0.945 |
Red intensity, supranasal | 137.0 ± 24.8 | 138.8 ± 23.5 | 0.911 |
Red intensity, nasal | 137.0 ± 23.1 | 138.0 ± 25.4 | 0.930 |
Red intensity, infranasal | 139.2 ± 24.2 | 138.3 ± 27.2 | 0.741 |
Red intensity, inferior | 151.3 ± 27.4 | 147.9 ± 30.2 | 0.399 |
Red intensity, infratemporal | 155.8 ± 29.5 | 158.6 ± 28.7 | 0.827 |
Green intensity, temporal | 91.4 ± 19.3 | 105.6 ± 15.4 | <0.001 | Women
Green intensity, supratemporal | 101.7 ± 18.5 | 112.6 ± 13.7 | <0.001 | Women
Green intensity, superior | 99.2 ± 17.3 | 105.9 ± 14.7 | 0.022 | Women
Green intensity, supranasal | 89.2 ± 17.2 | 95.0 ± 15.9 | 0.041 | Women
Green intensity, nasal | 84.5 ± 13.4 | 89.7 ± 14.4 | 0.046 | Women
Green intensity, infranasal | 86.7 ± 15.2 | 91.9 ± 17.5 | 0.087 |
Green intensity, inferior | 100.4 ± 18.3 | 104.3 ± 19.2 | 0.188 |
Green intensity, infratemporal | 104.8 ± 19.7 | 113.0 ± 18.8 | 0.010 | Women
Blue intensity, temporal | 40.3 ± 12.6 | 51.9 ± 11.4 | <0.001 | Women
Blue intensity, supratemporal | 50.5 ± 13.6 | 60.5 ± 10.6 | <0.001 | Women
Blue intensity, superior | 50.2 ± 13.9 | 56.0 ± 10.7 | 0.019 | Women
Blue intensity, supranasal | 41.7 ± 13.5 | 47.2 ± 11.1 | 0.016 | Women
Blue intensity, nasal | 35.5 ± 10.2 | 40.4 ± 9.5 | 0.005 | Women
Blue intensity, infranasal | 39.2 ± 12.3 | 44.6 ± 12.1 | 0.020 | Women
Blue intensity, inferior | 50.8 ± 15.1 | 55.9 ± 13.9 | 0.048 | Women
Blue intensity, infratemporal | 53.5 ± 14.8 | 61.4 ± 13.7 | 0.002 | Women
Tessellation fundus index, temporal | 0.52 ± 0.03 | 0.49 ± 0.03 | <0.001 | Men
Tessellation fundus index, supratemporal | 0.50 ± 0.03 | 0.47 ± 0.02 | <0.001 | Men
Tessellation fundus index, superior | 0.49 ± 0.03 | 0.47 ± 0.02 | <0.001 | Men
Tessellation fundus index, supranasal | 0.51 ± 0.03 | 0.50 ± 0.02 | 0.002 | Men
Tessellation fundus index, nasal | 0.53 ± 0.03 | 0.52 ± 0.02 | <0.001 | Men
Tessellation fundus index, infranasal | 0.51 ± 0.04 | 0.51 ± 0.03 | <0.001 | Men
Tessellation fundus index, inferior | 0.50 ± 0.05 | 0.48 ± 0.04 | 0.003 | Men
Tessellation fundus index, infratemporal | 0.50 ± 0.04 | 0.48 ± 0.03 | <0.001 | Men

SD, standard deviation.

Table 3 shows the odds ratio of each fundus parameter calculated by univariate binomial logistic regression. The ovality ratio, ST-RA, and ST-RV of the men were significantly larger than those of the women. The retinal artery trajectory; the temporal, supratemporal, and infratemporal green intensities; and all of the blue intensities except the inferior blue intensity were significantly higher in the women than in the men. All of the TFIs in the men were significantly higher than those of the women.
Table 3.

The OR of Ocular Fundus Parameters

Fundus Parameter | OR | 95% CI | P Value
Papillomacular position (PMP) | 1.06 | 0.95 to 1.19 | 0.30
Retinal artery trajectory | 0.014 | 0.00035 to 0.31 | 0.012
Ovality ratio | 1.04 | 1.003 to 1.09 | 0.0449
Retinal artery angle, supratemporal | 1.05 | 1.01 to 1.09 | 0.011
Retinal artery angle, infratemporal | 1.02 | 0.99 to 1.05 | 0.19
Retinal vein angle, supratemporal | 1.04 | 1.005 to 1.07 | 0.029
Retinal vein angle, infratemporal | 0.99997 | 0.97 to 1.02 | 0.998
Red intensity, temporal | 0.99 | 0.97 to 1.0 | 0.11
Red intensity, supratemporal | 0.99 | 0.97 to 1.006 | 0.25
Red intensity, superior | 0.998 | 0.98 to 1.01 | 0.79
Red intensity, supranasal | 0.997 | 0.98 to 1.01 | 0.722
Red intensity, nasal | 0.998 | 0.98 to 1.02 | 0.84
Red intensity, infranasal | 1.002 | 0.99 to 1.02 | 0.85
Red intensity, inferior | 1.004 | 0.99 to 1.02 | 0.55
Red intensity, infratemporal | 0.997 | 0.98 to 1.01 | 0.64
Green intensity, temporal | 0.94 | 0.91 to 0.97 | 0.00040
Green intensity, supratemporal | 0.95 | 0.91 to 0.98 | 0.0033
Green intensity, superior | 0.97 | 0.94 to 0.998 | 0.053
Green intensity, supranasal | 0.98 | 0.95 to 1.002 | 0.091
Green intensity, nasal | 0.97 | 0.94 to 1.001 | 0.073
Green intensity, infranasal | 0.98 | 0.95 to 1.005 | 0.12
Green intensity, inferior | 0.99 | 0.96 to 1.01 | 0.30
Green intensity, infratemporal | 0.97 | 0.95 to 0.997 | 0.048
Blue intensity, temporal | 0.91 | 0.87 to 0.95 | <0.001
Blue intensity, supratemporal | 0.92 | 0.87 to 0.96 | <0.001
Blue intensity, superior | 0.96 | 0.93 to 0.995 | 0.034
Blue intensity, supranasal | 0.96 | 0.93 to 0.997 | 0.040
Blue intensity, nasal | 0.95 | 0.90 to 0.99 | 0.021
Blue intensity, infranasal | 0.96 | 0.93 to 0.995 | 0.035
Blue intensity, inferior | 0.97 | 0.94 to 1.003 | 0.094
Blue intensity, infratemporal | 0.95 | 0.92 to 0.99 | 0.011
Tessellation fundus index, temporal | 1.5 | 1.3 to 1.9 | <0.001
Tessellation fundus index, supratemporal | 1.6 | 1.3 to 2.0 | <0.001
Tessellation fundus index, superior | 1.4 | 1.1 to 1.7 | 0.0011
Tessellation fundus index, supranasal | 1.3 | 1.1 to 1.5 | 0.0043
Tessellation fundus index, nasal | 1.3 | 1.1 to 1.6 | 0.0017
Tessellation fundus index, infranasal | 1.3 | 1.1 to 1.5 | 0.0021
Tessellation fundus index, inferior | 1.2 | 1.1 to 1.4 | 0.0050
Tessellation fundus index, infratemporal | 1.3 | 1.1 to 1.6 | 0.0029

Odds ratios are expressed per 1.0% increase for the ovality ratio and the tessellation fundus index, and per 1.0-unit increase for the other variables.

CI, confidence interval; OR, odds ratio.

The optimal model for the male sex obtained using the Ridge binomial logistic regression was −1.27 − 0.018 × temporal G − 0.00057 × temporal B + 14.9 × temporal TFI + 14.8 × supratemporal TFI + 0.41 × ovality ratio − 1.13 × artery trajectory + 0.014 × ST-RA. The AROC value obtained using the Ridge binomial logistic regression with leave-one-out cross-validation was 77.9% (P < 0.001, DeLong's method; Fig. 4).
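The reported optimal model can be applied directly: the linear predictor is passed through the logistic function to give a score for male sex. A sketch (the coefficients are from the model above; the input values are hypothetical, taken from the sex-specific means in Table 2, and the variable scaling the authors used is not fully specified here, so only the relative comparison between profiles is meaningful):

```python
import math

def p_male(temporal_g, temporal_b, temporal_tfi, supratemporal_tfi,
           ovality_ratio, artery_trajectory, st_ra):
    """Logistic transform of the paper's Ridge linear predictor."""
    z = (-1.27
         - 0.018 * temporal_g
         - 0.00057 * temporal_b
         + 14.9 * temporal_tfi
         + 14.8 * supratemporal_tfi
         + 0.41 * ovality_ratio
         - 1.13 * artery_trajectory
         + 0.014 * st_ra)
    return 1.0 / (1.0 + math.exp(-z))

# Sex-specific mean values from Table 2, in the order: temporal G,
# temporal B, temporal TFI, supratemporal TFI, ovality ratio,
# artery trajectory, ST-RA (°).
men = p_male(91.4, 40.3, 0.52, 0.50, 0.91, 0.45, 64.2)
women = p_male(105.6, 51.9, 0.49, 0.47, 0.86, 0.52, 57.8)
print(men > women)  # True: the male-typical profile scores higher
```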
Figure 4.

The receiver operating characteristic curve with the Ridge binomial logistic regression. The area under the receiver operating characteristic curve was 77.9% (P < 0.001, DeLong's method). AROC, area under the receiver operating characteristic curve.


Discussion

The purpose of this study was to determine the factors that can be used to distinguish the sex of an individual just from the different components of CFPs. The major problem with AI, such as deep learning, is that the analytic process used to reach the conclusion is not known. Particularly in the medical field, even if the conclusion seems correct, it would have limited application to patients if the assumptions cannot be clinically understood. Thus, understanding the thinking process is no less important than the conclusion itself in medicine. We did not intend to determine or trace the exact thinking process of AI. On the contrary, we attempted to approach the AI-proposed conclusion using only known clinical parameters. Specifically, we collected quantitative data obtained from CFPs, such as the color of the fundus, the retinal artery angle, and others. The results showed that we could distinguish the sex of each eye with an accuracy of 77.9%. The advantage of the present approach is that each factor can be discussed to explain the thinking process, which cannot be done with a black box AI.

First, the male fundi had higher TFI values than the female fundi, indicating that the male fundus looks more red-colored. Indeed, higher values of the temporal TFI and supratemporal TFI were suggestive of a male fundus, as suggested by the optimal model with Ridge binomial logistic regression. The red color of the ocular fundus is supposed to reflect the color of the large choroidal vessels. Because men have a thinner choroid, the choroidal vessels are easily observed in the CFPs, which makes the male fundus more reddish in color. A large epidemiological study showed that men have higher TFI values than women. In contrast, a more blue- or green-colored ocular fundus was suggestive of female subjects in this study, as suggested by the optimal model with Ridge binomial logistic regression.
It is already known that a thick retina appears bluish or greenish in ocular CFPs. Jonas et al. reported that men had a thicker central fovea than women, but this difference was not observed in other retinal regions. Furthermore, an eye with a shorter axial length tends to have a thicker retina. Indeed, in this study, the men had significantly longer axial lengths than the women (P = 0.0069). It is therefore understandable that the color of the ocular fundus was one of the significant factors differentiating the sexes.

Second, by the optimal model, the CFPs of female eyes tended to have a larger retinal artery trajectory and a smaller ST-RA than those of male eyes. These results indicate that the temporal retinal arteries in female eyes were located closer to the macular region than in male eyes. In an earlier study on the shape of the eye, at the same axial length, women had smaller circumferential equators than men. Thus, women had more rugby ball-shaped eyes than men, in which the long axis is the anteroposterior axis. In these eyes, it is likely that the temporal retinal arteries will be located closer to the macula and the optic disc will be more tilted, showing an oval-shaped optic disc. These facts are consistent with the present findings.

These results may be useful for determining the cause of diseases with large sex differences. For example, a macular hole occurs more often in women than in men. Generally, the findings obtained from CFPs would be related to the shape of the eyeball. Considering the tangential tractional force of the vitreous on the macula, it is understandable that a rugby ball-shaped eye would be more associated with a macular hole. At the same time, a rugby ball-shaped eye is more frequent in women. AI may detect these "hidden relationships" from the CFPs. We also suggest that conventional methods of cognition and quantification for interpreting the validity of future AI judgments will be important.
With this Ridge binomial logistic regression method, it was most advantageous to analyze seven parameters when distinguishing the sex of the individual with good accuracy. Thus, there was not necessarily a single prominent factor, or a few, distinguishing men from women among the present factors. Rather, men and women were identified comprehensively by multiple factors in the CFPs. It is understandable that, when there is no strong factor that stands out, it would be difficult for human eyes to collect or recognize these features. There are other methods in machine learning, such as the random forest and the support vector machine. We also evaluated the discrimination ability of these methods using the same fundus parameters through leave-one-out cross-validation; however, no significant improvement in the AROC value was observed compared with the Ridge logistic regression used here (AROC = 79.1%, P = 0.76 with the random forest, and AROC = 74.0%, P = 0.22 with the support vector machine [data not shown]). These AROC values were considerably lower than that in the study by Poplin et al. (97%). These results suggest that other, unknown parameters may enable better discriminative ability, or that deep learning is more advantageous than other machine learning methods such as Ridge binomial logistic regression, the random forest, and the support vector machine. In addition, the current method requires the manual extraction of multiple features by human graders, whereas deep learning is fully automated. Nonetheless, this does not discredit the merit of our study, because the purpose of the current study was to investigate whether known clinical parameters are useful in discriminating the sex of an individual from the parameters of CFPs. This study has limitations.
One limitation was that the study population was made up of young Japanese volunteers, who are among the most myopic groups in the world. More specifically, the vast majority of the 112 eyes had a refractive error (spherical equivalent) of less than −0.5 D, and only the remaining 12 eyes had a refractive error of ≥−0.5 D. Thus, our results describe the characteristics of young myopic eyes, and they might not necessarily hold for older, other ethnic, or nonmyopic populations. A large epidemiological study needs to be conducted to further validate the current results, especially for other populations. Another limitation is the time of the measurement. It takes about 10 minutes per image for an expert to measure all of the fundus parameters. A semiautomated measurement program will be needed to investigate this issue in a large epidemiological study.

In conclusion, the results showed that it is possible to identify the sex of an individual by analyzing the CFPs of the individual. Our results indicate that the mean TFIs, the ovality ratio, and the ST-RA angle in men were significant factors for making this identification. The green and blue intensities of the fundus around the optic disc were also important factors. Thus, a new technique of AI is being instituted in ophthalmology, and its use should make it possible to diagnose more efficiently and easily. However, the results of this study indicate that the thinking process of humans will be needed to complement the AI findings.
References (40 in total)

1.  Quantitative analysis of myopic chorioretinal degeneration using a novel computer software program.

Authors:  Kumari Neelam; Rebecca Y K Chew; Martin H K Kwan; Chee Chew Yip; Kah-Guan Au Eong
Journal:  Int Ophthalmol       Date:  2012-04-06       Impact factor: 2.031

2.  Relationship Between Location of Retinal Nerve Fiber Layer Defect and Curvature of Retinal Artery Trajectory in Eyes With Normal Tension Glaucoma.

Authors:  Takehiro Yamashita; Koji Nitta; Shozo Sonoda; Kazuhisa Sugiyama; Taiji Sakamoto
Journal:  Invest Ophthalmol Vis Sci       Date:  2015-09-01       Impact factor: 4.799

3.  Optic disk ovality as an index of tilt and its relationship to myopia and perimetry.

Authors:  Eugene Tay; Steve K Seah; Siew-Pang Chan; Albert T H Lim; Sek-Jin Chew; Paul J Foster; Tin Aung
Journal:  Am J Ophthalmol       Date:  2005-02       Impact factor: 5.258

4.  Location of Ocular Tessellations in Japanese: Population-Based Kumejima Study.

Authors:  Takehiro Yamashita; Aiko Iwase; Yuya Kii; Hiroshi Sakai; Hiroto Terasaki; Taiji Sakamoto; Makoto Araie
Journal:  Invest Ophthalmol Vis Sci       Date:  2018-10-01       Impact factor: 4.799

5.  Quantification of retinal nerve fiber and retinal artery trajectories using second-order polynomial equation and its association with axial length.

Authors:  Takehiro Yamashita; Taiji Sakamoto; Hiroto Terasaki; Minoru Tanaka; Yuya Kii; Kumiko Nakao
Journal:  Invest Ophthalmol Vis Sci       Date:  2014-07-29       Impact factor: 4.799

6.  Using Deep Learning and Transfer Learning to Accurately Diagnose Early-Onset Glaucoma From Macular Optical Coherence Tomography Images.

Authors:  Ryo Asaoka; Hiroshi Murata; Kazunori Hirasawa; Yuri Fujino; Masato Matsuura; Atsuya Miki; Takashi Kanamoto; Yoko Ikeda; Kazuhiko Mori; Aiko Iwase; Nobuyuki Shoji; Kenji Inoue; Junkichi Yamagami; Makoto Araie
Journal:  Am J Ophthalmol       Date:  2018-10-12       Impact factor: 5.258

7.  Retinal nerve fibre layer photography with a wide angle fundus camera.

Authors:  P J Airaksinen; H Nieminen; E Mustonen
Journal:  Acta Ophthalmol (Copenh)       Date:  1982-06

8.  Retinal Thickness and Axial Length.

Authors:  Jost B Jonas; Liang Xu; Wen Bin Wei; Zhe Pan; Hua Yang; Leonard Holbach; Songhomitra Panda-Jonas; Ya Xing Wang
Journal:  Invest Ophthalmol Vis Sci       Date:  2016-04       Impact factor: 4.799

9.  Wider retinal artery trajectories in eyes with macular hole than in fellow eyes of patients with unilateral idiopathic macular hole.

Authors:  Naoya Yoshihara; Taiji Sakamoto; Takehiro Yamashita; Toshifumi Yamashita; Keita Yamakiri; Shozo Sonoda; Tatsuro Ishibashi
Journal:  PLoS One       Date:  2015-04-13       Impact factor: 3.240

10.  Location of Tessellations in Ocular Fundus and Their Associations with Optic Disc Tilt, Optic Disc Area, and Axial Length in Young Healthy Eyes.

Authors:  Hiroto Terasaki; Takehiro Yamashita; Naoya Yoshihara; Yuya Kii; Minoru Tanaka; Kumiko Nakao; Taiji Sakamoto
Journal:  PLoS One       Date:  2016-06-08       Impact factor: 3.240

Cited By (5 in total)

Review 1.  Artificial Intelligence in Predicting Systemic Parameters and Diseases From Ophthalmic Imaging.

Authors:  Bjorn Kaijun Betzler; Tyler Hyungtaek Rim; Charumathi Sabanayagam; Ching-Yu Cheng
Journal:  Front Digit Health       Date:  2022-05-26

2.  Predicting sex from retinal fundus photographs using automated deep learning.

Authors:  Edward Korot; Nikolas Pontikos; Xiaoxuan Liu; Siegfried K Wagner; Livia Faes; Josef Huemer; Konstantinos Balaskas; Alastair K Denniston; Anthony Khawaja; Pearse A Keane
Journal:  Sci Rep       Date:  2021-05-13       Impact factor: 4.379

3.  Systematic Comparison of Heatmapping Techniques in Deep Learning in the Context of Diabetic Retinopathy Lesion Detection.

Authors:  Toon Van Craenendonck; Bart Elen; Nele Gerrits; Patrick De Boever
Journal:  Transl Vis Sci Technol       Date:  2020-12-29       Impact factor: 3.283

4.  Factors in Color Fundus Photographs That Can Be Used by Humans to Determine Sex of Individuals.

Authors:  Simon Dieck; Miguel Ibarra; Ismail Moghul; Ming Wai Yeung; Jean Tori Pantel; Sarah Thiele; Maximilian Pfau; Monika Fleckenstein; Nikolas Pontikos; Peter M Krawitz
Journal:  Transl Vis Sci Technol       Date:  2020-06-05       Impact factor: 3.283

5.  Author Response: Factors in Color Fundus Photographs That Can Be Used by Humans to Determine Sex of Individuals.

Authors:  Taiji Sakamoto
Journal:  Transl Vis Sci Technol       Date:  2020-06-05       Impact factor: 3.283

