Sangeeta Biswas, Md Iqbal Aziz Khan, Md Tanvir Hossain, Angkan Biswas, Takayoshi Nakai, Johan Rohdin.
Abstract
Color fundus photographs are the most common type of image used for the automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems, the green channel is most commonly used. However, no conclusion can be drawn from the previous works regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
Keywords: color fundus photographs; deep neural network; detection of retinal diseases; segmentation of retinal landmarks
Year: 2022 PMID: 35888063 PMCID: PMC9321111 DOI: 10.3390/life12070973
Source DB: PubMed Journal: Life (Basel) ISSN: 2075-1729
Figure 1. Sensors used in fundus cameras: (a) the commonly used single-layered sensor coated with a color filter array having a Bayer pattern and (b) the less commonly used three-layered direct imaging sensor. R: Red, G: Green, B: Blue.
Figure 2. A color fundus photograph. The retinal landmarks, i.e., the optic disc, macula, and central retinal blood vessels, are visible on the circular, colored foreground, surrounded by a dark background. Source of image: publicly available DRIVE data set, image file: 21_training.tif.
Figure 3. Pros and cons of different color channels. First column, i.e., (a,e,i,m): RGB fundus photographs; second column, i.e., (b,f,j,n): red channel images; third column, i.e., (c,g,k,o): green channel images; fourth column, i.e., (d,h,l,p): blue channel images. Choroidal blood vessels are clearly visible in the red channel, as shown inside the red box in (b). Lens flares are more visible in the blue channel, as shown inside the blue box in (d). Areas affected by atrophy and diabetic retinopathy are more clearly visible in the green channel, as shown inside the green boxes in (g,k). As shown inside the blue box in (l), the blue channel is prone to underexposure. The red channel is prone to overexposure, as shown inside the red box in (m). Source of fundus photographs: (a) PALM/PALM-Training400/H0025.jpg, (e) PALM/PALM-Training400/P0010.jpg, (i) UoA_DR/94/94.jpg, and (m) CHASE_DB1/images/Image_11L.jpg.
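The four single-channel inputs discussed throughout this work (R, G, B, and the grayscale weighted summation Gr) can be derived from an RGB array in a few lines. A minimal sketch follows; the synthetic 4×4 patch and the ITU-R BT.601 luma weights are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical 4x4 RGB patch standing in for a fundus photograph;
# real images would be loaded from, e.g., the DRIVE data set.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 0] = 200  # red: bright, prone to overexposure
rgb[..., 1] = 120  # green: typically the best vessel/lesion contrast
rgb[..., 2] = 30   # blue: dark, prone to underexposure

# Single-channel images, as fed to the color-specific U-Nets.
red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# "Gr": grayscale weighted summation of red, green, and blue.
# BT.601 luma weights are a common convention (assumption).
gray = (0.299 * red + 0.587 * green + 0.114 * blue).astype(np.uint8)
```

Each of the five variants (RGB, Gr, R, G, B) then serves as a separate input representation for training.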
Distribution of color channels used in previous works for the automatic detection of retinal diseases and the segmentation of retinal landmarks and atrophy. NN: Neural network-based approaches, Non-NN: Non-neural network-based approaches.
| Color | Disease Detection, Non-NN: Total (42) | Q1 (30) | Q2 (12) | Disease Detection, NN: Total (35) | Q1 (28) | Q2 (7) | Segmentation, Non-NN: Total (77) | Q1 (56) | Q2 (21) | Segmentation, NN: Total (37) | Q1 (28) | Q2 (9) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RGB | 18 | 9 | 9 | 29 | 24 | 5 | 14 | 10 | 4 | 28 | 22 | 6 |
| R | 7 | 5 | 2 | 2 | 1 | 1 | 15 | 9 | 6 | 0 | 0 | 0 |
| G | 22 | 11 | 11 | 4 | 2 | 2 | 59 | 43 | 16 | 10 | 8 | 2 |
| B | 3 | 3 | 0 | 1 | 1 | 0 | 8 | 7 | 1 | 0 | 0 | 0 |
| Gr | 6 | 3 | 3 | 5 | 4 | 1 | 7 | 5 | 2 | 3 | 0 | 3 |
Color channel used in non-neural network (Non-NN) based previous works for automatically detecting diseases in the retina. DR: Diabetic Retinopathy, AMD: Age-related Macular Degeneration, DME: Diabetic Macular Edema, R: Red, G: Green, B: Blue, Gr: Grayscale weighted summation of Red, Green and Blue.
| Year | Glaucoma: Reference | Color | AMD & DME: Reference | Color | DR: Reference | Color |
|---|---|---|---|---|---|---|
| 2000 | Hipwell [ | G, B | ||||
| 2002 | Walter [ | G | ||||
| 2004 | Klein [ | RGB | ||||
| 2007 | Scott [ | RGB | ||||
| 2008 | Kose [ | RGB | Abramoff [ | RGB | ||
| Gangnon [ | RGB | |||||
| 2010 | Bock [ | G | Kose [ | Gr | ||
| Muramatsu [ | R, G | |||||
| 2011 | Joshi [ | R | Agurto [ | G | Fadzil [ | RGB |
| 2012 | Mookiah [ | Gr | Hijazi [ | RGB | ||
| Deepak [ | RGB, G | |||||
| 2013 | Akram [ | RGB | ||||
| Oh [ | RGB | |||||
| 2014 | Fuente-Arriaga [ | R, G | Akram [ | RGB | ||
| Noronha [ | RGB | Mookiah [ | G | Casanova [ | RGB | |
| 2015 | Issac [ | R, G | Mookiah [ | R, G | Jaya [ | RGB |
| Oh [ | G, Gr | |||||
| 2016 | Singh [ | G, Gr | Acharya [ | G | Bhaskaranand [ | RGB |
| Phan [ | G | |||||
| Wang [ | RGB | |||||
| 2017 | Acharya [ | Gr | Acharya [ | G | Leontidis [ | RGB |
| Maheshwari [ | R, G, B, Gr | |||||
| Maheshwari [ | G | |||||
| 2018 | Saha [ | G, RGB | ||||
| 2020 | Colomer [ | G | ||||
Color channel used in neural network (NN) based previous works for automatically detecting diseases in the retina. DR: Diabetic Retinopathy, AMD: Age-related Macular Degeneration, DME: Diabetic Macular Edema, Gr: Grayscale weighted summation of Red, Green and Blue, R: Red, G: Green, B: Blue.
| Year | Glaucoma: Reference | Color | AMD & DME: Reference | Color | DR: Reference | Color |
|---|---|---|---|---|---|---|
| 1996 | Gardner [ | RGB | ||||
| 2009 | Nayak [ | R, G | ||||
| 2014 | Ganesan [ | Gr | ||||
| 2015 | Mookiah [ | G | ||||
| 2016 | Asoka [ | Gr | Abramoff [ | RGB | ||
| Gulshan [ | RGB | |||||
| 2017 | Zilly [ | G, Gr | Burlina [ | RGB | Abbas [ | RGB |
| Ting [ | RGB | Burlina [ | RGB | Gargeya [ | RGB | |
| Quellec [ | RGB | |||||
| 2018 | Ferreira [ | RGB, Gr | Grassmann [ | RGB | Khojasteh [ | RGB |
| Raghavendra [ | RGB | Burlina [ | RGB | Lam [ | RGB | |
| Li [ | RGB | |||||
| Fu [ | RGB | |||||
| Liu [ | RGB | |||||
| 2019 | Liu [ | R, G, B, Gr | Keel [ | RGB | Li [ | RGB |
| Diaz-Pinto [ | RGB | Peng [ | RGB | Zeng [ | RGB | |
| Matsuba [ | RGB | Raman [ | RGB | |||
| 2020 | Singh [ | RGB | ||||
| Gonzalez-Gonzalo [ | RGB | |||||
| 2021 | Gheisari [ | RGB | ||||
Color channel used in non-neural network (Non-NN) based previous works for segmenting retinal landmarks. OD: Optic Disc, CRBVs: Central Retinal Blood Vessels, Gr: Grayscale weighted summation of Red, Green and Blue, R: Red, G: Green, B: Blue.
| Year | OD: Reference | Color | Macula/Fovea: Reference | Color | CRBVs: Reference | Color |
|---|---|---|---|---|---|---|
| 1989 | Chaudhuri [ | G | ||||
| 1999 | Sinthanayothin [ | RGB | ||||
| 2000 | Hoover [ | RGB | ||||
| 2004 | Lowell [ | Gr | Li [ | RGB | ||
| 2006 | Soares [ | G | ||||
| 2007 | Xu [ | RGB | Niemeijer [ | G | Ricci [ | G |
| Abramoff [ | R, G, B | Tobin [ | G | |||
| 2008 | Youssif [ | RGB | ||||
| 2009 | Niemeijer [ | G | Cinsdikici [ | G | ||
| 2010 | Welfer [ | G | ||||
| Aquino [ | R, G | |||||
| Zhu [ | RGB | |||||
| 2011 | Lu [ | R, G | Welfer [ | G | Cheung [ | RGB |
| Kose [ | RGB | |||||
| You [ | G | |||||
| 2012 | Bankhead [ | G | ||||
| Qureshi [ | G | Fraz [ | G | |||
| Fraz [ | G | |||||
| Li [ | RGB | |||||
| Lin [ | G | |||||
| Moghimirad [ | G | |||||
| 2013 | Morales [ | Gr | Chin [ | RGB | Akram [ | G |
| Gegundez [ | G | Badsha [ | Gr | |||
| Budai [ | G | |||||
| Fathi [ | G | |||||
| Fraz [ | G | |||||
| Nayebifar [ | G, B | |||||
| Nguyen [ | G | |||||
| Wang [ | G | |||||
| 2014 | Giachetti [ | G, Gr | Kao [ | G | Bekkers [ | G |
| Aquino [ | R, G | Cheng [ | G | |||
| 2015 | Miri [ | R, G, B | Dai [ | G | ||
| Mary [ | R | Hassanien [ | G | |||
| Harangi [ | RGB, G | Imani [ | G | |||
| Lazar [ | G | |||||
| Roychowdhury [ | G | |||||
| 2016 | Mittapalli [ | RGB | Medhi [ | R | Aslani [ | G |
| Roychowdhury [ | G | Onal [ | Gr | Bahadarkhan [ | G | |
| Sarathi [ | R, G | Christodoulidis [ | G | |||
| Orlando [ | G | |||||
| 2018 | Ramani [ | G | Khan [ | G | ||
| Chalakkal [ | RGB | Xia [ | G | |||
| 2019 | Thakur [ | Gr | Khawaja [ | G | ||
| Naqvi [ | R, G | Wang [ | RGB | |||
| 2020 | Dharmawan [ | R, G, B | Carmona [ | G | Saroj [ | Gr |
| Guo [ | G | Zhang [ | G | |||
| Zhou [ | G | |||||
| 2021 | Kim [ | G | ||||
Color channel used in neural network (NN) based previous works for segmenting retinal landmarks. OD: Optic Disc, CRBVs: Central Retinal Blood Vessels, Gr: Grayscale weighted summation of Red, Green and Blue, R: Red, G: Green, B: Blue.
| Year | OD: Reference | Color | Macula/Fovea: Reference | Color | CRBVs: Reference | Color |
|---|---|---|---|---|---|---|
| 2011 | Marin [ | G | ||||
| 2015 | Wang [ | G | ||||
| 2016 | Liskowski [ | G | ||||
| 2017 | Barkana [ | G | ||||
| Mo [ | RGB | |||||
| 2018 | Fu [ | RGB | Al-Bander [ | Gr | Guo [ | G |
| Guo [ | RGB | |||||
| Hu [ | RGB | |||||
| Jiang [ | RGB | |||||
| Oliveira [ | G | |||||
| Sangeethaa [ | G | |||||
| 2019 | Wang [ | RGB, Gr | Jebaseeli [ | G | ||
| Chakravarty [ | RGB | Lian [ | RGB | |||
| Gu [ | RGB | Noh [ | RGB | |||
| Tan [ | RGB | Wang [ | Gr | |||
| Jiang [ | RGB | |||||
| 2020 | Gao [ | RGB | Feng [ | G | ||
| Jin [ | RGB | Tamim [ | G | |||
| Sreng [ | RGB | |||||
| Bian [ | RGB | |||||
| Almubarak [ | RGB | |||||
| Tian [ | RGB | |||||
| Zhang [ | RGB | |||||
| Xie [ | RGB | |||||
| 2021 | Bengani [ | RGB | Hasan [ | RGB | Gegundez-Arias [ | RGB |
| Veena [ | RGB | |||||
| Wang [ | RGB | |||||
Color channel used for automatically detecting atrophy in the retina. R: Red, G: Green, B: Blue.
| Year | Non-NN: Reference | Color | NN: Reference | Color |
|---|---|---|---|---|
| 2011 | Lu [ | R, B | ||
| 2012 | Cheng [ | R, G, B | ||
| Lu [ | R, B | |||
| 2018 | Septiarini [ | R, G | ||
| 2020 | Li [ | R, G, B | Chai [ | RGB |
| Son [ | RGB | |||
| 2021 | Sharma [ | RGB | ||
Data sets used in our experiments.
| Data Set | Height × Width | Field-of-View | Fundus Camera | Number of Images |
|---|---|---|---|---|
| CHASE_DB1 | | | Nidek NM-200-D | 28 |
| DRIVE | | | Canon CR5-NM 3CCD | 40 |
| HRF | | | Canon CR-1 | 45 |
| IDRiD | | | Kowa VX-10 | 81 |
| PALM | | | Zeiss VISUCAM 500 NM | 400 |
| STARE | | | TopCon TRV-50 | 20 |
| UoA-DR | | | Zeiss VISUCAM 500 | 200 |
Training, validation and test sets used in our experiments.
| Segmentation of | Data Set | Number of Images in | ||
|---|---|---|---|---|
| Training Set | Validation Set | Test Set | ||
| CRBVs | CHASE_DB1 | 7 | 5 | 16 |
| DRIVE | 10 | 8 | 22 | |
| HRF | 11 | 9 | 25 | |
| STARE | 5 | 4 | 11 | |
| UoA-DR | 50 | 40 | 110 | |
| Optic Disc | IDRiD | 20 | 16 | 45 |
| PALM | 100 | 80 | 220 | |
| UoA-DR | 50 | 40 | 110 | |
| Macula | PALM | 100 | 80 | 220 |
| UoA-DR | 50 | 40 | 110 | |
| Atrophy | PALM | 100 | 80 | 220 |
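Since the data sets above come in different resolutions while the U-Net expects a fixed 256×256 input, each image must be resized before training. The paper does not state its interpolation method; the dependency-free nearest-neighbor sketch below is only an illustration of the resizing step.

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int = 256, out_w: int = 256) -> np.ndarray:
    """Nearest-neighbor resize to the U-Net's 256x256 input resolution.
    Works for both (H, W) masks and (H, W, C) images."""
    rows = (np.arange(out_h) * img.shape[0]) // out_h  # source row per output row
    cols = (np.arange(out_w) * img.shape[1]) // out_w  # source column per output column
    return img[rows][:, cols]

# Example: shrink a photograph-sized array to the network input size.
resized = resize_nearest(np.zeros((960, 999, 3), dtype=np.uint8))
```

In practice a library resampler (e.g., bilinear interpolation) would likely be preferred for the photographs, while nearest-neighbor is the standard choice for binary masks, since it keeps them binary.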
Figure 4. There is a noticeable overlap between the histograms of the foreground and the background in the blue channel. The histograms overlap slightly in the green channel. In the red channel, the histograms do not overlap and are easily separable. Therefore, by setting pixels below the threshold value to 0 and pixels above the threshold to 255, we can easily generate the background mask from the red channel image. Source of fundus photograph: STARE data set, image file: im0139.ppm.
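The red-channel thresholding described for Figure 4 can be sketched in a few lines. The threshold of 40 below is an illustrative assumption; in practice it could be chosen from the valley between the two well-separated histogram peaks.

```python
import numpy as np

def background_mask(red_channel: np.ndarray, threshold: int = 40) -> np.ndarray:
    """Binary background mask from the red channel: pixels below the
    threshold become 0 (background), the rest 255 (foreground).
    The default threshold of 40 is an assumption for illustration."""
    return np.where(red_channel < threshold, 0, 255).astype(np.uint8)

# Synthetic red channel: dark border around a bright foreground region.
red = np.full((8, 8), 10, dtype=np.uint8)
red[2:6, 2:6] = 180
mask = background_mask(red)
```

Because the foreground and background histograms do not overlap in the red channel, the result is insensitive to the exact threshold chosen within the gap.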
Architecture of our U-Net. #Params: Number of parameters.
| Layer | Output Shape | # Params |
|---|---|---|
| Input | (256, 256, 1) | 0 |
| Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 160 |
| Dropout (0.1) | (256, 256, 16) | 0 |
| Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU, name = | (256, 256, 16) | 2320 |
| Convolution (strides = (2, 2), filters = 16, kernel = (3, 3), activation = ELU) | (128, 128, 16) | 2320 |
| Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 4640 |
| Dropout (0.1) | (128, 128, 32) | 0 |
| Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU, name = | (128, 128, 32) | 9248 |
| Convolution (strides = (2, 2), filters = 32, kernel = (3, 3), activation = ELU) | (64, 64, 32) | 9248 |
| Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 18,496 |
| Dropout (0.2) | (64, 64, 64) | 0 |
| Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU, name = | (64, 64, 64) | 36,928 |
| Convolution (strides = (2, 2), filters = 64, kernel = (3, 3), activation = ELU) | (32, 32, 64) | 36,928 |
| Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 73,856 |
| Dropout (0.2) | (32, 32, 128) | 0 |
| Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU, name = | (32, 32, 128) | 147,584 |
| Convolution (strides = (2, 2), filters = 128, kernel = (3, 3), activation = ELU) | (16, 16, 128) | 147,584 |
| Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 295,168 |
| Dropout (0.3) | (16, 16, 256) | 0 |
| Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU, name = | (16, 16, 256) | 590,080 |
| Convolution (strides = (2, 2), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080 |
| Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080 |
| Dropout (0.3) | (8, 8, 256) | 0 |
| Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (8, 8, 256) | 590,080 |
| Transposed Convolution (strides = (2, 2), filters = 256, kernel = (2, 2), activation = ELU, name = | (16, 16, 256) | 262,400 |
| Concatenation ( | (16, 16, 512) | 0 |
| Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 1,179,904 |
| Dropout (0.3) | (16, 16, 256) | 0 |
| Convolution (strides = (1, 1), filters = 256, kernel = (3, 3), activation = ELU) | (16, 16, 256) | 590,080 |
| Transposed Convolution (strides = (2, 2), filters = 128, kernel = (2, 2), activation = ELU, name = | (32, 32, 128) | 131,200 |
| Concatenation ( | (32, 32, 256) | 0 |
| Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 295,040 |
| Dropout (0.2) | (32, 32, 128) | 0 |
| Convolution (strides = (1, 1), filters = 128, kernel = (3, 3), activation = ELU) | (32, 32, 128) | 147,584 |
| Transposed Convolution (strides = (2, 2), filters = 64, kernel = (2, 2), activation = ELU, name = | (64, 64, 64) | 32,832 |
| Concatenation ( | (64, 64, 128) | 0 |
| Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 73,792 |
| Dropout (0.2) | (64, 64, 64) | 0 |
| Convolution (strides = (1, 1), filters = 64, kernel = (3, 3), activation = ELU) | (64, 64, 64) | 36,928 |
| Transposed Convolution (strides = (2, 2), filters = 32, kernel = (2, 2), activation = ELU, name = | (128, 128, 32) | 8224 |
| Concatenation ( | (128, 128, 64) | 0 |
| Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 18,464 |
| Dropout (0.1) | (128, 128, 32) | 0 |
| Convolution (strides = (1, 1), filters = 32, kernel = (3, 3), activation = ELU) | (128, 128, 32) | 9248 |
| Transposed Convolution (strides = (2, 2), filters = 16, kernel = (2, 2), activation = ELU, name = | (256, 256, 16) | 2064 |
| Concatenation ( | (256, 256, 32) | 0 |
| Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 4624 |
| Dropout (0.1) | (256, 256, 16) | 0 |
| Convolution (strides = (1, 1), filters = 16, kernel = (3, 3), activation = ELU) | (256, 256, 16) | 2320 |
| Convolution (strides = (1, 1), filters = 1, kernel = (1, 1), activation = Sigmoid, name = Output) | (256, 256, 1) | 17 |
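The per-layer parameter counts in the table follow the standard formula for a biased 2-D (or transposed) convolution. A quick sanity check against the first encoder convolution, the deepest transposed convolution, and the output layer:

```python
def conv2d_params(kernel_hw, in_channels, filters):
    """Trainable parameters of a biased 2-D (or transposed) convolution:
    kernel_h * kernel_w * in_channels * filters + filters (the bias)."""
    kh, kw = kernel_hw
    return kh * kw * in_channels * filters + filters

# First encoder convolution: 3x3 kernel, 1 input channel, 16 filters.
first = conv2d_params((3, 3), 1, 16)        # 160, as in the table
# Deepest transposed convolution: 2x2 kernel, 256 -> 256 channels.
deepest = conv2d_params((2, 2), 256, 256)   # 262,400, as in the table
# Output layer: 1x1 kernel, 16 -> 1 channel.
output = conv2d_params((1, 1), 16, 1)       # 17, as in the table
```

The same formula reproduces every count in the table once the concatenation layers' doubled channel dimensions are taken into account.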
Performance (mean ± standard deviation) of U-Nets using different color channels for segmenting the optic disc.
| Color | Dataset | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| RGB | IDRiD | 0.897 ± 0.018 | 0.877 ± 0.010 | 0.940 ± 0.005 | 0.896 ± 0.003 |
| PALM | 0.859 ± 0.009 | 0.862 ± 0.013 | 0.933 ± 0.006 | 0.873 ± 0.003 | |
| UoA_DR | 0.914 ± 0.012 | 0.868 ± 0.006 | 0.936 ± 0.003 | 0.895 ± 0.004 | |
| Gray | IDRiD | 0.868 ± 0.020 | 0.902 ± 0.016 | 0.952 ± 0.007 | 0.892 ± 0.004 |
| PALM | 0.758 ± 0.020 | 0.737 ± 0.025 | 0.870 ± 0.011 | 0.788 ± 0.009 | |
| UoA_DR | 0.907 ± 0.007 | 0.840 ± 0.005 | 0.923 ± 0.002 | 0.876 ± 0.008 | |
| Red | IDRiD | 0.892 ± 0.006 | 0.872 ± 0.008 | 0.936 ± 0.004 | 0.892 ± 0.004 |
| PALM | 0.798 ± 0.004 | 0.824 ± 0.012 | 0.912 ± 0.006 | 0.837 ± 0.003 | |
| UoA_DR | 0.900 ± 0.007 | 0.854 ± 0.006 | 0.928 ± 0.003 | 0.885 ± 0.003 | |
| Green | IDRiD | 0.837 ± 0.023 | 0.906 ± 0.009 | 0.953 ± 0.004 | 0.882 ± 0.008 |
| PALM | 0.708 ± 0.012 | 0.718 ± 0.013 | 0.859 ± 0.006 | 0.771 ± 0.004 | |
| UoA_DR | 0.895 ± 0.009 | 0.821 ± 0.010 | 0.912 ± 0.005 | 0.869 ± 0.006 | |
| Blue | IDRiD | 0.810 ± 0.038 | 0.715 ± 0.011 | 0.858 ± 0.005 | 0.799 ± 0.010 |
| PALM | 0.662 ± 0.032 | 0.692 ± 0.019 | 0.845 ± 0.009 | 0.748 ± 0.008 | |
| UoA_DR | 0.873 ± 0.012 | 0.800 ± 0.009 | 0.901 ± 0.004 | 0.851 ± 0.002 |
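The MIoU column in the tables above can be computed from binary masks as shown below. Averaging the IoU over the foreground and background classes is an assumption about how MIoU is defined here.

```python
import numpy as np

def mean_iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """Mean intersection-over-union across the two classes of binary
    masks (background = 0, foreground = 1)."""
    ious = []
    for cls in (0, 1):
        p, r = pred == cls, ref == cls
        union = np.logical_or(p, r).sum()
        inter = np.logical_and(p, r).sum()
        ious.append(inter / union if union else 1.0)  # empty class: perfect
    return float(np.mean(ious))

# Toy example: one of two predicted foreground pixels matches the reference.
pred = np.array([[1, 1], [0, 0]])
ref = np.array([[1, 0], [0, 0]])
```

For this toy pair, the foreground IoU is 1/2 and the background IoU is 2/3, giving an MIoU of 7/12.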
Performance (mean ± standard deviation) of U-Nets using different color channels for segmenting CRBVs.
| Color | Dataset | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| RGB | CHASE_DB1 | 0.795 ± 0.005 | 0.638 ± 0.004 | 0.840 ± 0.002 | 0.696 ± 0.018 |
| DRIVE | 0.851 ± 0.007 | 0.519 ± 0.009 | 0.781 ± 0.004 | 0.696 ± 0.013 | |
| HRF | 0.730 ± 0.017 | 0.633 ± 0.007 | 0.838 ± 0.005 | 0.651 ± 0.021 | |
| STARE | 0.822 ± 0.009 | 0.488 ± 0.010 | 0.766 ± 0.006 | 0.654 ± 0.011 | |
| UoA_DR | 0.373 ± 0.003 | 0.341 ± 0.008 | 0.669 ± 0.005 | 0.556 ± 0.004 | |
| Gray | CHASE_DB1 | 0.757 ± 0.019 | 0.635 ± 0.016 | 0.834 ± 0.009 | 0.648 ± 0.040 |
| DRIVE | 0.864 ± 0.014 | 0.529 ± 0.014 | 0.786 ± 0.008 | 0.673 ± 0.032 | |
| HRF | 0.721 ± 0.032 | 0.617 ± 0.008 | 0.825 ± 0.005 | 0.605 ± 0.038 | |
| STARE | 0.810 ± 0.021 | 0.522 ± 0.022 | 0.784 ± 0.011 | 0.619 ± 0.031 | |
| UoA_DR | 0.373 ± 0.007 | 0.298 ± 0.022 | 0.648 ± 0.012 | 0.540 ± 0.009 | |
| Red | CHASE_DB1 | 0.507 ± 0.018 | 0.412 ± 0.007 | 0.703 ± 0.005 | 0.602 ± 0.001 |
| DRIVE | 0.713 ± 0.026 | 0.391 ± 0.016 | 0.705 ± 0.010 | 0.637 ± 0.005 | |
| HRF | 0.535 ± 0.027 | 0.349 ± 0.014 | 0.680 ± 0.008 | 0.581 ± 0.004 | |
| STARE | 0.646 ± 0.040 | 0.271 ± 0.011 | 0.649 ± 0.008 | 0.563 ± 0.005 | |
| UoA_DR | 0.304 ± 0.011 | 0.254 ± 0.012 | 0.621 ± 0.006 | 0.539 ± 0.002 | |
| Green | CHASE_DB1 | 0.781 ± 0.017 | 0.676 ± 0.021 | 0.858 ± 0.007 | 0.691 ± 0.059 |
| DRIVE | 0.862 ± 0.011 | 0.541 ± 0.026 | 0.794 ± 0.012 | 0.703 ± 0.047 | |
| HRF | 0.754 ± 0.018 | 0.662 ± 0.020 | 0.856 ± 0.008 | 0.647 ± 0.077 | |
| STARE | 0.829 ± 0.018 | 0.558 ± 0.028 | 0.806 ± 0.011 | 0.662 ± 0.052 | |
| UoA_DR | 0.384 ± 0.007 | 0.326 ± 0.023 | 0.662 ± 0.012 | 0.552 ± 0.011 | |
| Blue | CHASE_DB1 | 0.581 ± 0.024 | 0.504 ± 0.023 | 0.751 ± 0.010 | 0.638 ± 0.004 |
| DRIVE | 0.771 ± 0.016 | 0.449 ± 0.015 | 0.736 ± 0.008 | 0.657 ± 0.007 | |
| HRF | 0.473 ± 0.016 | 0.279 ± 0.016 | 0.633 ± 0.007 | 0.558 ± 0.004 | |
| STARE | 0.446 ± 0.014 | 0.242 ± 0.018 | 0.608 ± 0.007 | 0.535 ± 0.003 | |
| UoA_DR | 0.316 ± 0.010 | 0.271 ± 0.015 | 0.630 ± 0.007 | 0.540 ± 0.002 |
Performance (mean ± standard deviation) of U-Nets using different color channels for segmenting the macula.
| Color | Dataset | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| RGB | PALM | 0.732 ± 0.016 | 0.649 ± 0.029 | 0.825 ± 0.014 | 0.753 ± 0.009 |
| UoA_DR | 0.804 ± 0.027 | 0.713 ± 0.043 | 0.858 ± 0.021 | 0.794 ± 0.012 | |
| Gray | PALM | 0.712 ± 0.024 | 0.638 ± 0.016 | 0.819 ± 0.007 | 0.744 ± 0.003 |
| UoA_DR | 0.811 ± 0.017 | 0.712 ± 0.018 | 0.858 ± 0.008 | 0.796 ± 0.005 | |
| Red | PALM | 0.719 ± 0.013 | 0.648 ± 0.015 | 0.823 ± 0.007 | 0.749 ± 0.005 |
| UoA_DR | 0.768 ± 0.006 | 0.726 ± 0.013 | 0.863 ± 0.006 | 0.790 ± 0.003 | |
| Green | PALM | 0.685 ± 0.020 | 0.641 ± 0.004 | 0.820 ± 0.002 | 0.739 ± 0.005 |
| UoA_DR | 0.791 ± 0.013 | 0.693 ± 0.011 | 0.848 ± 0.005 | 0.783 ± 0.005 | |
| Blue | PALM | 0.676 ± 0.020 | 0.637 ± 0.019 | 0.817 ± 0.009 | 0.734 ± 0.002 |
| UoA_DR | 0.801 ± 0.035 | 0.649 ± 0.013 | 0.826 ± 0.006 | 0.769 ± 0.012 |
Performance (mean ± standard deviation) of U-Nets using different color channels for segmenting atrophy.
| Color | Dataset | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| RGB | PALM | 0.719 ± 0.033 | 0.638 ± 0.030 | 0.814 ± 0.014 | 0.707 ± 0.019 |
| Gray | PALM | 0.630 ± 0.021 | 0.571 ± 0.025 | 0.777 ± 0.012 | 0.658 ± 0.039 |
| Red | PALM | 0.514 ± 0.010 | 0.430 ± 0.029 | 0.705 ± 0.013 | 0.596 ± 0.015 |
| Green | PALM | 0.695 ± 0.009 | 0.627 ± 0.032 | 0.808 ± 0.015 | 0.714 ± 0.011 |
| Blue | PALM | 0.711 ± 0.015 | 0.578 ± 0.016 | 0.785 ± 0.008 | 0.687 ± 0.018 |
Number of cases where a U-Net correctly marks the OD and macula in the generated masks. N: Total number of fundus photographs in the test set.
| Segmentation for | N | Number of Cases in | ||||
|---|---|---|---|---|---|---|
| RGB | Gray | Red | Green | Blue | ||
| Optic Disc (OD) | 375 | 329 | 324 | 316 | 303 | 297 |
| Macula | 330 | 270 | 265 | 271 | 265 | 267 |
Figure 5. Failure case of OD segmentation. (a) RGB image overlaid by the reference OD mask; (b–f) RGB, grayscale, red channel, green channel, and blue channel images, respectively, each overlaid by its inaccurately predicted OD mask. Source of image: PALM/P0159.jpg.
Figure 6. Failure case of macula segmentation. (a) RGB image overlaid by the reference macula mask; (b–f) RGB, grayscale, red channel, green channel, and blue channel images, respectively, each overlaid by its inaccurately predicted macula mask. Source of image: PALM/P0159.jpg.
Number of cases where a U-Net marks multiple places as the OD or macula in the generated masks. N: Total number of fundus photographs in the test set.
| Segmentation for | N | Number of Cases in | ||||
|---|---|---|---|---|---|---|
| RGB | Gray | Red | Green | Blue | ||
| Optic Disc (OD) | 375 | 29 | 26 | 43 | 46 | 43 |
| Macula | 330 | 17 | 25 | 14 | 17 | 14 |
Figure 7. Examples of masks generated by the color-specific U-Nets for segmenting the CRBVs. The reference mask and the generated masks are shown in the first and third rows, and the corresponding images overlaid by those masks are shown in the second and fourth rows. (a) Reference mask and (d) RGB fundus photograph overlaid by the reference mask; (b) mask generated by the U-Net trained on RGB fundus photographs and (e) RGB image overlaid by the mask in (b); (c) mask generated by the U-Net trained on grayscale fundus photographs and (f) grayscale image overlaid by the mask in (c); (g) mask generated by the U-Net trained on red channel fundus photographs and (j) red channel image overlaid by the mask in (g); (h) mask generated by the U-Net trained on green channel fundus photographs and (k) green channel image overlaid by the mask in (h); (i) mask generated by the U-Net trained on blue channel fundus photographs and (l) blue channel image overlaid by the mask in (i). Source of image: CHASE_DB1/Image_14R.jpg.
Number of inappropriately exposed fundus photographs. N: Total number of RGB fundus photographs in the test set of a specific data set.
| Data Set | N | Overexposed: Gray | Red | Green | Blue | Underexposed: Gray | Red | Green | Blue |
|---|---|---|---|---|---|---|---|---|---|
| CHASE_DB1 | 28 | 0 | 10 | 0 | 13 | 0 | 4 | 0 | 3 |
| DRIVE | 40 | 0 | 12 | 0 | 0 | 0 | 1 | 0 | 3 |
| HRF | 45 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| IDRiD | 81 | 0 | 2 | 0 | 6 | 0 | 0 | 0 | 23 |
| PALM | 400 | 0 | 0 | 1 | 40 | 0 | 0 | 2 | 121 |
| STARE | 20 | 0 | 2 | 0 | 10 | 0 | 0 | 0 | 4 |
| UoA-DR | 200 | 0 | 0 | 0 | 22 | 0 | 0 | 0 | 88 |
Figure 8. Example of an overexposed red channel and an underexposed blue channel of a retinal image. The first row shows the different channels of a fundus photograph, and the second row shows their corresponding histograms. Histograms of inappropriately exposed images are highly skewed and have low entropy. Source of image: CHASE_DB1/Image_11R.jpg.
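The low-entropy property noted for Figure 8 suggests a simple automatic check for inappropriate exposure: compute the Shannon entropy of a channel's intensity histogram. The sketch below illustrates this; any cut-off for what counts as "low" entropy would be an application-specific assumption.

```python
import numpy as np

def histogram_entropy(channel: np.ndarray) -> float:
    """Shannon entropy (in bits) of an 8-bit channel's intensity histogram.
    Inappropriately exposed channels have highly skewed histograms and
    therefore low entropy, so a low value can flag over-/underexposure."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A fully saturated (overexposed) channel collapses to a single bin and yields an entropy near zero, whereas a well-exposed channel spreads mass across many bins and yields a high value.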
Performance of our approach for generating background masks.
| Data Set | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|
| DRIVE | 0.997 | 0.997 | 0.996 | 0.995 |
| HRF | 1.000 | 1.000 | 1.000 | 1.000 |
Distribution of provided binary and non-binary masks for segmenting CRBVs, optic discs, macula and retinal atrophy. n: total number of provided masks, m: number of provided binary masks.
| Segmentation Type | CHASE_DB1: n | m | DRIVE: n | m | HRF: n | m | IDRiD: n | m | PALM: n | m | STARE: n | m | UoA-DR: n | m |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CRBVs | 28 | 0 | 40 | 0 | 45 | 0 | 0 | 0 | 0 | 0 | 40 | 0 | 200 | 200 |
| Optic Disc | 0 | 0 | 0 | 0 | 0 | 0 | 81 | 0 | 400 | 0 | 0 | 0 | 200 | 200 |
| Macula | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Retinal Atrophy | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 311 | 0 | 0 | 0 | 0 | 0 |
Effect of different amounts of training data on the performance (mean ± standard deviation) of U-Nets trained using different color channels for segmenting CRBVs. Note that CLAHE is applied in the pre-processing stage.
| CHASE_DB1: Training Split | Color | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| 25% Training, | RGB | 0.569 ± 0.203 | 0.448 ± 0.041 | 0.729 ± 0.059 | 0.537 ± 0.046 |
| GRAY | 0.615 ± 0.081 | 0.412 ± 0.041 | 0.735 ± 0.024 | 0.503 ± 0.051 | |
| RED | 0.230 ± 0.030 | 0.332 ± 0.053 | 0.613 ± 0.010 | 0.474 ± 0.006 | |
| GREEN | 0.782 ± 0.026 | 0.526 ± 0.020 | 0.792 ± 0.007 | 0.606 ± 0.045 | |
| BLUE | 0.451 ± 0.114 | 0.370 ± 0.018 | 0.683 ± 0.032 | 0.485 ± 0.008 | |
| 25% Training, | RGB | 0.571 ± 0.207 | 0.441 ± 0.045 | 0.724 ± 0.062 | 0.538 ± 0.048 |
| GRAY | 0.624 ± 0.080 | 0.407 ± 0.036 | 0.731 ± 0.023 | 0.502 ± 0.050 | |
| RED | 0.244 ± 0.037 | 0.342 ± 0.046 | 0.619 ± 0.009 | 0.474 ± 0.007 | |
| GREEN | 0.791 ± 0.026 | 0.515 ± 0.022 | 0.787 ± 0.008 | 0.602 ± 0.044 | |
| BLUE | 0.449 ± 0.116 | 0.362 ± 0.017 | 0.677 ± 0.033 | 0.484 ± 0.008 | |
| 55% Training, | RGB | 0.816 ± 0.012 | 0.541 ± 0.024 | 0.784 ± 0.012 | 0.684 ± 0.018 |
| GRAY | 0.803 ± 0.002 | 0.515 ± 0.026 | 0.775 ± 0.010 | 0.671 ± 0.016 | |
| RED | 0.389 ± 0.039 | 0.363 ± 0.027 | 0.680 ± 0.021 | 0.504 ± 0.028 | |
| GREEN | 0.838 ± 0.005 | 0.583 ± 0.017 | 0.806 ± 0.009 | 0.687 ± 0.038 | |
| BLUE | 0.648 ± 0.019 | 0.383 ± 0.012 | 0.698 ± 0.006 | 0.601 ± 0.010 | |
| DRIVE: Training Split | Color | Precision | Recall | AUC | MIoU |
| 25% Training, | RGB | 0.796 ± 0.036 | 0.443 ± 0.065 | 0.749 ± 0.028 | 0.622 ± 0.072 |
| GRAY | 0.835 ± 0.016 | 0.419 ± 0.022 | 0.739 ± 0.009 | 0.590 ± 0.066 | |
| RED | 0.362 ± 0.098 | 0.342 ± 0.072 | 0.628 ± 0.015 | 0.476 ± 0.007 | |
| GREEN | 0.846 ± 0.010 | 0.463 ± 0.025 | 0.758 ± 0.009 | 0.671 ± 0.027 | |
| BLUE | 0.537 ± 0.078 | 0.297 ± 0.028 | 0.660 ± 0.022 | 0.512 ± 0.026 | |
| 25% Training, | RGB | 0.839 ± 0.035 | 0.442 ± 0.068 | 0.749 ± 0.030 | 0.626 ± 0.073 |
| GRAY | 0.874 ± 0.018 | 0.413 ± 0.023 | 0.737 ± 0.009 | 0.592 ± 0.068 | |
| RED | 0.400 ± 0.108 | 0.352 ± 0.073 | 0.637 ± 0.014 | 0.476 ± 0.009 | |
| GREEN | 0.896 ± 0.009 | 0.462 ± 0.025 | 0.760 ± 0.009 | 0.676 ± 0.028 | |
| BLUE | 0.575 ± 0.080 | 0.300 ± 0.024 | 0.663 ± 0.020 | 0.512 ± 0.027 | |
| 55% Training, | RGB | 0.896 ± 0.005 | 0.539 ± 0.010 | 0.787 ± 0.006 | 0.732 ± 0.014 |
| GRAY | 0.895 ± 0.004 | 0.528 ± 0.012 | 0.781 ± 0.005 | 0.731 ± 0.006 | |
| RED | 0.660 ± 0.085 | 0.316 ± 0.037 | 0.674 ± 0.017 | 0.520 ± 0.038 | |
| GREEN | 0.904 ± 0.003 | 0.533 ± 0.008 | 0.786 ± 0.003 | 0.718 ± 0.024 | |
| BLUE | 0.783 ± 0.042 | 0.386 ± 0.044 | 0.705 ± 0.021 | 0.645 ± 0.037 | |
| HRF: Training Split | Color | Precision | Recall | AUC | MIoU |
| 25% Training, | RGB | 0.792 ± 0.006 | 0.537 ± 0.021 | 0.799 ± 0.013 | 0.597 ± 0.024 |
| GRAY | 0.776 ± 0.004 | 0.497 ± 0.017 | 0.781 ± 0.011 | 0.579 ± 0.025 | |
| RED | 0.204 ± 0.024 | 0.258 ± 0.017 | 0.591 ± 0.014 | 0.467 ± 0.002 | |
| GREEN | 0.821 ± 0.013 | 0.578 ± 0.012 | 0.824 ± 0.006 | 0.624 ± 0.037 | |
| BLUE | 0.155 ± 0.002 | 0.361 ± 0.010 | 0.580 ± 0.001 | 0.482 ± 0.008 | |
| 25% Training, | RGB | 0.759 ± 0.006 | 0.535 ± 0.023 | 0.797 ± 0.014 | 0.593 ± 0.023 |
| GRAY | 0.741 ± 0.005 | 0.503 ± 0.017 | 0.782 ± 0.011 | 0.576 ± 0.025 | |
| RED | 0.197 ± 0.021 | 0.245 ± 0.017 | 0.586 ± 0.013 | 0.467 ± 0.002 | |
| GREEN | 0.794 ± 0.016 | 0.581 ± 0.013 | 0.824 ± 0.006 | 0.619 ± 0.036 | |
| BLUE | 0.149 ± 0.004 | 0.368 ± 0.013 | 0.578 ± 0.002 | 0.480 ± 0.007 | |
| 55% Training, | RGB | 0.781 ± 0.008 | 0.608 ± 0.005 | 0.824 ± 0.004 | 0.693 ± 0.013 |
| GRAY | 0.768 ± 0.010 | 0.573 ± 0.017 | 0.807 ± 0.009 | 0.677 ± 0.022 | |
| RED | 0.512 ± 0.009 | 0.271 ± 0.021 | 0.641 ± 0.013 | 0.536 ± 0.011 | |
| GREEN | 0.788 ± 0.006 | 0.647 ± 0.009 | 0.846 ± 0.003 | 0.674 ± 0.060 | |
| BLUE | 0.274 ± 0.110 | 0.341 ± 0.047 | 0.620 ± 0.032 | 0.500 ± 0.019 | |
| STARE: Training Split | Color | Precision | Recall | AUC | MIoU |
| 25% Training, | RGB | 0.556 ± 0.204 | 0.300 ± 0.073 | 0.659 ± 0.073 | 0.478 ± 0.008 |
| GRAY | 0.619 ± 0.050 | 0.283 ± 0.058 | 0.680 ± 0.033 | 0.478 ± 0.017 | |
| RED | 0.148 ± 0.003 | 0.222 ± 0.033 | 0.516 ± 0.009 | 0.468 ± 0.000 | |
| GREEN | 0.600 ± 0.242 | 0.351 ± 0.030 | 0.680 ± 0.082 | 0.483 ± 0.019 | |
| BLUE | 0.167 ± 0.036 | 0.145 ± 0.034 | 0.518 ± 0.021 | 0.469 ± 0.001 | |
| 25% Training, | RGB | 0.531 ± 0.195 | 0.334 ± 0.082 | 0.672 ± 0.082 | 0.482 ± 0.009 |
| GRAY | 0.607 ± 0.055 | 0.314 ± 0.066 | 0.691 ± 0.039 | 0.483 ± 0.020 | |
| RED | 0.143 ± 0.003 | 0.231 ± 0.038 | 0.512 ± 0.011 | 0.471 ± 0.000 | |
| GREEN | 0.587 ± 0.243 | 0.376 ± 0.048 | 0.688 ± 0.092 | 0.488 ± 0.024 | |
| BLUE | 0.164 ± 0.032 | 0.142 ± 0.039 | 0.517 ± 0.020 | 0.472 ± 0.001 | |
| 55% Training, | RGB | 0.756 ± 0.014 | 0.448 ± 0.031 | 0.749 ± 0.015 | 0.610 ± 0.038 |
| GRAY | 0.748 ± 0.010 | 0.504 ± 0.026 | 0.770 ± 0.010 | 0.656 ± 0.017 | |
| RED | 0.181 ± 0.020 | 0.293 ± 0.069 | 0.558 ± 0.008 | 0.474 ± 0.006 | |
| GREEN | 0.749 ± 0.013 | 0.550 ± 0.025 | 0.795 ± 0.012 | 0.659 ± 0.038 | |
| BLUE | 0.163 ± 0.007 | 0.324 ± 0.059 | 0.547 ± 0.006 | 0.469 ± 0.004 | |
| UoA_DR: Training Split | Color | Precision | Recall | AUC | MIoU |
| 25% Training, | RGB | 0.320 ± 0.011 | 0.398 ± 0.008 | 0.699 ± 0.006 | 0.541 ± 0.015 |
| GRAY | 0.315 ± 0.011 | 0.353 ± 0.016 | 0.675 ± 0.007 | 0.526 ± 0.017 | |
| RED | 0.203 ± 0.013 | 0.260 ± 0.016 | 0.614 ± 0.006 | 0.516 ± 0.005 | |
| GREEN | 0.332 ± 0.007 | 0.415 ± 0.018 | 0.705 ± 0.009 | 0.534 ± 0.014 | |
| BLUE | 0.237 ± 0.012 | 0.260 ± 0.008 | 0.620 ± 0.007 | 0.526 ± 0.006 | |
| 25% Training, | RGB | 0.313 ± 0.011 | 0.395 ± 0.008 | 0.697 ± 0.005 | 0.540 ± 0.015 |
| GRAY | 0.306 ± 0.011 | 0.350 ± 0.016 | 0.673 ± 0.008 | 0.524 ± 0.017 | |
| RED | 0.201 ± 0.013 | 0.259 ± 0.015 | 0.614 ± 0.006 | 0.516 ± 0.005 | |
| GREEN | 0.326 ± 0.007 | 0.412 ± 0.017 | 0.704 ± 0.009 | 0.532 ± 0.014 | |
| BLUE | 0.232 ± 0.011 | 0.257 ± 0.007 | 0.618 ± 0.006 | 0.524 ± 0.005 | |
| 55% Training, | RGB | 0.333 ± 0.005 | 0.445 ± 0.012 | 0.717 ± 0.004 | 0.557 ± 0.007 |
| GRAY | 0.330 ± 0.003 | 0.413 ± 0.014 | 0.700 ± 0.006 | 0.559 ± 0.004 | |
| RED | 0.289 ± 0.011 | 0.299 ± 0.007 | 0.641 ± 0.004 | 0.543 ± 0.003 | |
| GREEN | 0.335 ± 0.002 | 0.470 ± 0.010 | 0.728 ± 0.004 | 0.564 ± 0.004 | |
| BLUE | 0.281 ± 0.012 | 0.280 ± 0.013 | 0.630 ± 0.006 | 0.540 ± 0.004 | |
Performance (mean ± standard deviation) of U-Nets trained using different color channels for segmenting CRBVs when CLAHE is not applied to the retinal images in the pre-processing stage. Note that 55% of the data was used for training, whereas 25% was used for validation and 25% for testing.
| Database | Color | Precision | Recall | AUC | MIoU |
|---|---|---|---|---|---|
| CHASE_DB1 | RGB | 0.676 ± 0.057 | 0.419 ± 0.037 | 0.727 ± 0.020 | 0.576 ± 0.051 |
| GRAY | 0.629 ± 0.078 | 0.406 ± 0.052 | 0.714 ± 0.025 | 0.570 ± 0.060 | |
| RED | 0.217 ± 0.012 | 0.353 ± 0.026 | 0.611 ± 0.006 | 0.476 ± 0.009 | |
| GREEN | 0.802 ± 0.017 | 0.530 ± 0.019 | 0.781 ± 0.009 | 0.672 ± 0.023 | |
| BLUE | 0.589 ± 0.023 | 0.373 ± 0.016 | 0.690 ± 0.006 | 0.556 ± 0.050 | |
| DRIVE | RGB | 0.856 ± 0.024 | 0.470 ± 0.017 | 0.750 ± 0.010 | 0.693 ± 0.011 |
| GRAY | 0.855 ± 0.021 | 0.464 ± 0.030 | 0.746 ± 0.015 | 0.693 ± 0.024 | |
| RED | 0.297 ± 0.009 | 0.376 ± 0.017 | 0.619 ± 0.003 | 0.472 ± 0.010 | |
| GREEN | 0.886 ± 0.006 | 0.509 ± 0.010 | 0.771 ± 0.005 | 0.722 ± 0.004 | |
| BLUE | 0.504 ± 0.171 | 0.331 ± 0.043 | 0.642 ± 0.031 | 0.551 ± 0.071 | |
| HRF | RGB | 0.757 ± 0.014 | 0.533 ± 0.023 | 0.784 ± 0.010 | 0.664 ± 0.026 |
| GRAY | 0.730 ± 0.010 | 0.520 ± 0.011 | 0.776 ± 0.006 | 0.655 ± 0.011 | |
| RED | 0.164 ± 0.002 | 0.311 ± 0.010 | 0.577 ± 0.001 | 0.483 ± 0.005 | |
| GREEN | 0.791 ± 0.007 | 0.603 ± 0.008 | 0.820 ± 0.003 | 0.705 ± 0.008 | |
| BLUE | 0.153 ± 0.004 | 0.347 ± 0.022 | 0.576 ± 0.003 | 0.476 ± 0.006 | |
| STARE | RGB | 0.579 ± 0.077 | 0.348 ± 0.030 | 0.696 ± 0.020 | 0.497 ± 0.023 |
| GRAY | 0.379 ± 0.146 | 0.312 ± 0.067 | 0.624 ± 0.041 | 0.487 ± 0.032 | |
| RED | 0.157 ± 0.004 | 0.444 ± 0.055 | 0.558 ± 0.010 | 0.456 ± 0.017 | |
| GREEN | 0.592 ± 0.085 | 0.442 ± 0.021 | 0.742 ± 0.010 | 0.517 ± 0.033 | |
| BLUE | 0.164 ± 0.005 | 0.327 ± 0.056 | 0.546 ± 0.013 | 0.474 ± 0.003 | |
| UoA_DR | RGB | 0.323 ± 0.003 | 0.411 ± 0.004 | 0.699 ± 0.002 | 0.555 ± 0.004 |
| GRAY | 0.319 ± 0.003 | 0.372 ± 0.019 | 0.679 ± 0.009 | 0.556 ± 0.005 | |
| RED | 0.238 ± 0.017 | 0.220 ± 0.014 | 0.598 ± 0.008 | 0.522 ± 0.005 | |
| GREEN | 0.328 ± 0.009 | 0.438 ± 0.019 | 0.713 ± 0.008 | 0.563 ± 0.004 | |
| BLUE | 0.262 ± 0.012 | 0.261 ± 0.008 | 0.619 ± 0.004 | 0.535 ± 0.002 |
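The CLAHE pre-processing compared in the two tables above can be sketched in a simplified, single-tile form. Real CLAHE (e.g., OpenCV's createCLAHE) applies this per tile of a grid with bilinear blending between tiles; the version below only illustrates the contrast-limiting idea on a whole channel and is an assumption, not the paper's exact pipeline.

```python
import numpy as np

def clahe_single_tile(channel: np.ndarray, clip_fraction: float = 0.01) -> np.ndarray:
    """Simplified, single-tile contrast-limited histogram equalization
    for an 8-bit channel. The clip fraction of 1% is an assumption."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    limit = clip_fraction * channel.size
    excess = np.maximum(hist - limit, 0).sum()
    # Clip tall histogram bins and redistribute the excess uniformly,
    # which limits how much any intensity range can be stretched.
    hist = np.minimum(hist, limit) + excess / 256.0
    cdf = np.cumsum(hist)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[channel]

# A low-contrast ramp (intensities 100..119) gets its range stretched.
ramp = np.tile(np.arange(100, 120, dtype=np.uint8), (20, 1))
enhanced = clahe_single_tile(ramp)
```

The clipping step is what distinguishes CLAHE from plain histogram equalization: it prevents near-uniform regions (such as the dark background of a fundus photograph) from having their noise amplified.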