| Literature DB >> 34353324 |
Shenming Hu1, Xinze Luan2, Hong Wu3, Xiaoting Wang2, Chunhong Yan4, Jingying Wang3, Guantong Liu4, Wei He5.
Abstract
PURPOSE: A real-time automatic cataract-grading algorithm based on cataract video is proposed.
Keywords: Automatic cataract grading; Deep learning; YOLOv3
Year: 2021 PMID: 34353324 PMCID: PMC8340478 DOI: 10.1186/s12938-021-00906-3
Source DB: PubMed Journal: Biomed Eng Online ISSN: 1475-925X Impact factor: 2.819
Fig. 1 The image and usage scenario of the slit lamp
Fig. 2 a–d show the four random collection methods of the eye lens, respectively, used to reduce the impact on the ACCV method of video context correlation caused by different shooting methods
Fig. 3 Example images of the light knife cutting into the lens area. a–c are diagrams in which the slit image is outside the pupil; in d, within the red frame the light knife can be considered to have entered the pupil area, while within the orange frame it is considered to be outside the pupil
Fig. 4 Sample of lens image classification marked by hospital doctors
Fig. 5 The overall flowchart of the ACCV method
ACCV algorithm description
| Algorithm description |
|---|
| S1. Input a video file collected with the mobile-phone slit lamp. Send each frame to YOLOv3 to identify whether it contains lens-section information. If not, continue identifying; if it does, go to the next step |
| S2. After the lens section is identified, check whether the next frame is also at the lens position, to eliminate misjudgment. If two consecutive frames are in the pupil, go to the next step; if the first frame is identified by YOLOv3 as in the pupil but the second is not, or neither frame is in the pupil, keep sending frames to YOLOv3 for recognition |
| S3. Once two consecutive frames are judged to be in the pupil, convert the detected area to the YCrCb space. Take the Cb component to obtain ValueCb, count the pixels with ValueCb greater than the average AverCb to obtain NUM_Cb, normalize it, and send it into the differential ReLU activation function to unify the different input ranges, yielding NOR_NUM_Cb |
| S4. Judge whether NOR_NUM_Cb is zero. If it is zero, the demarcated area is still in the pupil and the lens-section view can continue to be acquired. If it is 1, the demarcated area is no longer in the pupil; resend the frames to YOLOv3 to judge whether the pupil area is entered again |
| S5. Crop the pupil area recognized by YOLOv3 from the original image and classify the resulting image data set with a deep-learning DenseNet model (a code sketch of the full pipeline follows this table) |
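As a rough illustration of steps S1–S5, the following Python sketch strings the stages together. It assumes OpenCV and NumPy; `detect_pupil_box` and `grade_with_densenet` are hypothetical placeholders for the trained YOLOv3 and DenseNet models, and the hard 0.5 gate used in place of the paper's differential ReLU is an assumption, not the authors' implementation.

```python
import cv2
import numpy as np

def detect_pupil_box(frame):
    """S1: placeholder for YOLOv3 inference; should return (x, y, w, h)
    of the lens-section (pupil) region, or None if it is not present."""
    return None  # plug in the trained detector here

def grade_with_densenet(crop):
    """S5: placeholder for DenseNet inference on the cropped lens section."""
    return 0     # plug in the trained classifier here

def cb_says_outside(crop):
    """S3/S4: count Cb pixels above their mean; return 1 if the slit light
    is judged to have left the pupil, 0 if it is still inside."""
    ycrcb = cv2.cvtColor(crop, cv2.COLOR_BGR2YCrCb)
    cb = ycrcb[:, :, 2].astype(np.float32)
    num_cb = np.count_nonzero(cb > cb.mean())        # NUM_Cb
    nor_num_cb = num_cb / cb.size                    # normalisation to [0, 1]
    # The hard 0/1 gate below stands in for the paper's differential ReLU;
    # the 0.5 threshold is an assumption for illustration.
    return 0 if nor_num_cb < 0.5 else 1

def accv_pipeline(video_path):
    cap = cv2.VideoCapture(video_path)
    grades, confirmed, box = [], 0, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if confirmed < 2:
            det = detect_pupil_box(frame)            # S1: YOLOv3 detection
            if det is None:
                confirmed = 0                        # S2: reset on a miss
                continue
            box, confirmed = det, confirmed + 1      # S2: need two hits in a row
            if confirmed < 2:
                continue
        x, y, w, h = box
        crop = frame[y:y + h, x:x + w]
        if cb_says_outside(crop) == 1:               # S4: slit light left the pupil
            confirmed, box = 0, None                 # hand control back to YOLOv3
            continue
        grades.append(grade_with_densenet(crop))     # S5: DenseNet grading
    cap.release()
    return grades
```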
Fig. 6 Diagram of the basic principles of YOLOv3
Fig. 7 Binarized maps of each color component thresholded above its average gray level; the change in the Cb space corresponds most closely to whether the slit light is in the pupil
Fig. 8 Confusion matrices of ACCV and the comparison algorithms
Comparison of the evaluation metrics of VGG-19, Inception-v3, ResNet-50, MobileNet, Xception and ACCV
| Method | Accuracy | Sensitivity | Specificity | Precision | F1 |
|---|---|---|---|---|---|
| ACCV | 0.9400 | 0.9200 | 0.9600 | 0.9580 | 0.9388 |
| MobileNet | 0.8800 | 0.8200 | 0.9400 | 0.9318 | 0.8723 |
| VGG-19 | 0.8700 | 0.7600 | 0.9800 | 0.9744 | 0.8539 |
| Inception-v3 | 0.8100 | 0.6600 | 0.9600 | 0.9429 | 0.7765 |
| ResNet-50 | 0.8600 | 0.8000 | 0.9200 | 0.9091 | 0.8511 |
| Xception | 0.8600 | 0.8200 | 0.9000 | 0.8943 | 0.8542 |
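As a quick check on the table's internal consistency, the F1 column follows from the Precision and Sensitivity (recall) columns via F1 = 2·Precision·Recall / (Precision + Recall). A minimal Python snippet using the rounded ACCV row:

```python
# Sanity check of the table: F1 = 2 * Precision * Recall / (Precision + Recall),
# where Recall is the Sensitivity column. Inputs are the rounded ACCV values.
precision, sensitivity = 0.9580, 0.9200
f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(round(f1, 4))  # 0.9386, matching the reported 0.9388 up to input rounding
```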
Fig. 9 ROC curves and AUC values. a DenseNet, the classification model in ACCV; b MobileNet, the classification model; c Inception-v3, the classification model; d ResNet-50, the classification model; e VGG-19, the classification model; f Xception, the classification model
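For completeness, a minimal sketch of how ROC curves and AUC values like those in Fig. 9 are commonly computed, assuming scikit-learn is available; `y_true` and `y_score` below are hypothetical stand-ins for the test labels and predicted probabilities, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Hypothetical stand-in data: y_true are binary cataract labels for test images,
# y_score are a classifier's predicted probabilities for the positive class.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=100), 0.0, 1.0)

fpr, tpr, _ = roc_curve(y_true, y_score)   # ROC operating points
print("AUC =", round(auc(fpr, tpr), 3))    # area under the ROC curve
```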