Xiaohui Du, Lin Liu, Xiangzhou Wang, Guangming Ni, Jing Zhang, Ruqian Hao, Juanxiu Liu, Yong Liu.
Abstract
The analysis of fecal components is important for clinical diagnosis. The main examination involves counting red blood cells (RBCs), white blood cells (WBCs), and molds under the microscope. With the development of machine vision, several vision-based detection schemes have been proposed; however, each detects only a single target type, with low detection efficiency and low accuracy. We propose a deep learning algorithm to identify the visible components of fecal microscopic images. The algorithm mainly comprises region proposal and candidate recognition. For segmentation, we propose a morphology extraction algorithm that operates against complex backgrounds. For candidate recognition, we propose a new convolutional neural network (CNN) architecture based on Inception-v3 and principal component analysis (PCA). This method achieves a high average precision of 90.7%, better than other mainstream CNN models. Finally, the detected targets are output as rectangle-marked regions of the image. The total detection time for one image is roughly 1200 ms. The algorithm proposed in the present paper can be integrated into an automatic fecal detection system.
Keywords: cell object detection; deep learning; fecal microscopic images; image recognition; pattern recognition
Year: 2019 PMID: 30872411 PMCID: PMC6449518 DOI: 10.1042/BSR20182100
Source DB: PubMed Journal: Biosci Rep ISSN: 0144-8463 Impact factor: 3.840
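The pipeline described in the abstract has two stages: morphological region proposal, then CNN classification of each candidate. As a rough illustration of the first stage only, the sketch below uses a plain threshold plus connected-component labelling from scipy; this is an assumed stand-in, not the authors' morphology extraction algorithm, which is designed for complex backgrounds.

```python
import numpy as np
from scipy import ndimage

def propose_candidates(gray, thresh=0.5, min_area=20):
    """Illustrative region proposal: threshold + connected components.

    Generic stand-in for the paper's morphology extraction step.
    Returns candidate boxes as (row, col, height, width).
    """
    mask = gray < thresh                 # assume cells are darker than background
    labels, _ = ndimage.label(mask)      # label connected components
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w >= min_area:            # drop tiny impurities
            boxes.append((sl[0].start, sl[1].start, h, w))
    return boxes

# Toy image: two dark blobs on a bright background
img = np.ones((64, 64))
img[10:20, 10:22] = 0.1
img[40:48, 30:45] = 0.2
print(propose_candidates(img))  # → [(10, 10, 10, 12), (40, 30, 8, 15)]
```

In the paper, each such candidate crop would then be passed to the PCA-Inception classifier; the thresholding here is only a placeholder for that much more robust segmentation.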
Figure 1. The sample pre-processing and capturing optical system
Figure 2. Cell candidates extracted
(A) Red blood cells; (B) White blood cells; (C) Molds; (D) Impurities.
Figure 3. Flow chart of the object segmentation algorithm
Figure 4. Cell detection structure with the CNN model
Figure 5. Structure of Inception-v3
Figure 6. PCA-Inception training model
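The record does not say where PCA enters the PCA-Inception model of Figure 6. As a generic illustration of combining the two ideas, PCA can reduce the dimensionality of pooled CNN feature vectors before classification; the sketch below is a plain numpy PCA via SVD, with the 2048-dimensional feature size (Inception-v3's pooled output) and the choice of 64 components as assumptions.

```python
import numpy as np

def pca_reduce(feats, k):
    """Project feature vectors onto their top-k principal components."""
    centered = feats - feats.mean(axis=0)
    # SVD of the centered data matrix: rows of vt are principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

# Toy stand-in for Inception-v3 pooled features (200 samples x 2048 dims)
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 2048))
reduced = pca_reduce(feats, 64)
print(reduced.shape)  # → (200, 64)
```

A classifier head trained on the reduced features is one plausible reading of "PCA-Inception", but the actual placement of PCA in the authors' architecture is described only in the full paper.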
Segmentation results for five different samples

| ID | Algorithm | Targets (ground truth) | Candidates | Missing | Time consumed (ms) |
|---|---|---|---|---|---|
| 1 | 2A | 6 | 19 | 0 | 534.746 |
| 1 | 2B | 6 | 22 | 0 | 3858.38 |
| 2 | 2A | 5 | 16 | 0 | 986.883 |
| 2 | 2B | 5 | 16 | 2 | 3462.25 |
| 3 | 2A | 10 | 91 | 0 | 838.002 |
| 3 | 2B | 10 | 105 | 0 | 4131.89 |
| 4 | 2A | 13 | 93 | 0 | 627.661 |
| 4 | 2B | 13 | 98 | 2 | 3832.61 |
| 5 | 2A | 11 | 119 | 0 | 982.831 |
| 5 | 2B | 11 | 106 | 1 | 3797.69 |

2A: results of the object segmentation algorithm; 2B: results of SS.
Segmentation result statistics over 89665 different images

| Algorithm | Total targets (manual annotation) | Total missing | Average candidates per image | Average time consumed (ms) |
|---|---|---|---|---|
| 2A | 15818 | 210 | 65.41 | 648.808 |
| 2B | 15818 | 739 | 70.35 | 3916.31 |

2A: results of the object segmentation algorithm; 2B: results of SS.
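From the statistics above, the miss rates of the two proposal methods follow directly (assuming both rows share the same 15818 manually annotated targets):

```python
# Miss rate = missed targets / total annotated targets (counts from the table)
total = 15818
missing = {"2A": 210, "2B": 739}
rates = {alg: m / total for alg, m in missing.items()}
for alg, r in rates.items():
    print(f"{alg}: miss rate {r:.2%}")
# 2A misses about 1.33% of targets, SS (2B) about 4.67%
```

So the proposed segmentation both misses fewer targets and, per the table, runs roughly six times faster per image than SS.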
Figure 7. Comparison of the proposed method and SS: (A) proposed method; (B) SS
Figure 8. Image recognition result: the blue boxes represent molds, while the cyan box is a WBC
Recognition results of the PCA-Inception model

| | RBCs | WBCs | Molds | Total |
|---|---|---|---|---|
| Annotated number | 761 | 693 | 1055 | 2509 |
| True positives | 728 | 611 | 983 | 2322 |
| Precision | 92.9% | 88.8% | 90.3% | Ave: 90.7% |
| Recall | 95.7% | 88.2% | 93.2% | Ave: 92.5% |
| F1-score | 94.3% | 88.5% | 91.7% | Ave: 91.6% |
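The percentage rows in the table are consistent with precision, recall, and F1-score: recall equals true positives over annotated targets, and F1 is the harmonic mean of precision and recall. Since false-positive counts are not in the record, precision is taken from the table rather than recomputed:

```python
classes = {
    # class: (annotated targets, true positives, precision from the table)
    "RBCs":  (761, 728, 0.929),
    "WBCs":  (693, 611, 0.888),
    "Molds": (1055, 983, 0.903),
}
for name, (n_ann, tp, prec) in classes.items():
    rec = tp / n_ann                    # recall = TP / annotated targets
    f1 = 2 * prec * rec / (prec + rec)  # harmonic mean of precision and recall
    print(f"{name}: recall {rec:.1%}, F1 {f1:.1%}")
# RBCs: recall 95.7%, F1 94.3%
# WBCs: recall 88.2%, F1 88.5%
# Molds: recall 93.2%, F1 91.7%
```

The recomputed values reproduce the table's recall and F1 rows to one decimal place.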
Comparison of the recognition results of several models

| | VGG-19 | Inception-v3 | Inception-v4 | Inception-Resnet-v2 | PCA-Inception-v3 |
|---|---|---|---|---|---|
| Average precision | 83.7% | 89.6% | 89.2% | 89.8% | 90.7% |
| Average recall | 86.2% | 90.1% | 90.8% | 90.4% | 92.5% |
| Average F1-score | 84.9% | 89.8% | 90.0% | 90.1% | 91.6% |