Hang Yu, Feng Yang, Sivaramakrishnan Rajaraman, Ilker Ersoy, Golnaz Moallem, Mahdieh Poostchi, Kannappan Palaniappan, Sameer Antani, Richard J Maude, Stefan Jaeger.
Abstract
BACKGROUND: Light microscopy is often used for malaria diagnosis in the field. However, it is time-consuming, and the quality of the results depends heavily on the skill of the microscopist. Automating malaria light microscopy is a promising solution, but it remains a challenge and an active area of research. Current tools are often expensive and involve sophisticated hardware components, which makes them hard to deploy in resource-limited areas.
Keywords: Automated light microscopy; Convolutional neural network; Machine learning; Malaria; Smartphone application
Year: 2020 PMID: 33176716 PMCID: PMC7656677 DOI: 10.1186/s12879-020-05453-1
Source DB: PubMed Journal: BMC Infect Dis ISSN: 1471-2334 Impact factor: 3.090
Fig. 1 System setup for Malaria Screener. During the (semi-)automated* screening process, the body of the smartphone is attached to an adapter. The adapter holds the phone and aligns its camera with the eyepiece of the microscope. * The system is semi-automated in that the user needs to move the slide manually to search for an ideal field of view while capturing smear images
Fig. 2Diagram of the application software architecture and interfaces
Fig. 3 Diagram of the parasite detection module for a thin smear input. The original image is first segmented using a watershed algorithm to extract single-cell patches. These cell patches are then classified by a customized CNN model, which has been pre-trained using the TensorFlow framework and deployed on the smartphone with TensorFlow Lite
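The segment-then-classify pipeline described in Fig. 3 can be sketched in a few lines. This is a simplified stand-in, not the app's actual code: plain thresholding plus connected-component labeling replaces the watershed step, and the function name and parameters (`extract_cell_patches`, `threshold`, `patch_size`) are illustrative. In the real module, each returned patch would then be fed to the TensorFlow Lite CNN classifier.

```python
import numpy as np
from scipy import ndimage


def extract_cell_patches(image, threshold=0.5, patch_size=32):
    """Segment a grayscale smear image and return fixed-size single-cell patches.

    Simplified stand-in for the watershed segmentation in Fig. 3:
    thresholding + connected-component labeling instead of watershed.
    """
    mask = image > threshold
    labels, _ = ndimage.label(mask)          # label each connected blob (candidate cell)
    half = patch_size // 2
    padded = np.pad(image, half, mode="constant")  # pad so border cells get full patches
    patches = []
    for obj in ndimage.find_objects(labels):
        # Center a fixed-size window on each detected component.
        cy = (obj[0].start + obj[0].stop) // 2 + half
        cx = (obj[1].start + obj[1].stop) // 2 + half
        patches.append(padded[cy - half:cy + half, cx - half:cx + half])
    return patches
```

Each patch is a uniform `patch_size` x `patch_size` crop, which matches the fixed input shape a CNN classifier expects.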
Fig. 5a UI screens during a slide screening session. b The workflow of a session
Fig. 4 Diagram of the local SQLite database. PK: primary key. Each line connecting two tables indicates a one-to-many relationship between them; for example, the Patient table has a one-to-many relationship with the Slide table, meaning one patient can have multiple slides. Fields marked with an asterisk (*) are either mandatory user inputs or automatically generated data; other fields are optional user inputs. Further fields record the name of the slide preparer; the name of the user performing the screening; app outputs and manual counts for thin smears (RBC counts, infected RBC counts, manual RBC counts, manual infected RBC counts); and app outputs and manual counts for thick smears (parasite counts, WBC counts, manual parasite counts, manual WBC counts)
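The one-to-many Patient-to-Slide relationship from Fig. 4 can be sketched with Python's built-in `sqlite3` module. The table and column names below are illustrative, not the app's actual schema; the point is the foreign key that lets one patient row own multiple slide rows.

```python
import sqlite3

# In-memory database standing in for the app's local SQLite store.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("""
    CREATE TABLE Patient (
        patient_id INTEGER PRIMARY KEY,  -- PK
        name       TEXT                  -- optional user input
    )""")
conn.execute("""
    CREATE TABLE Slide (
        slide_id   INTEGER PRIMARY KEY,  -- PK
        patient_id INTEGER NOT NULL REFERENCES Patient(patient_id),
        preparer   TEXT                  -- name of the slide preparer
    )""")

# One patient, two slides: the one-to-many relationship in action.
conn.execute("INSERT INTO Patient VALUES (1, 'demo patient')")
conn.execute("INSERT INTO Slide VALUES (1, 1, 'preparer A')")
conn.execute("INSERT INTO Slide VALUES (2, 1, 'preparer B')")

n_slides = conn.execute(
    "SELECT COUNT(*) FROM Slide WHERE patient_id = 1").fetchone()[0]
```

With `PRAGMA foreign_keys = ON`, SQLite also rejects slides that reference a nonexistent patient, which keeps the hierarchy in Fig. 4 consistent.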
System mean performance on five folds for thick smears
| | Accuracy | AUC | Sensitivity | Specificity | Precision |
|---|---|---|---|---|---|
| Patch-level | 96.89% | 98.48% | 90.82% | 97.43% | 74.84% |
| Patient-level | 78.00% | 84.90% | 79.33% | 74.00% | 90.42% |
Classification module mean performance on five folds for thin smears, compared to the state of the art
| | Accuracy | AUC | Sensitivity | Specificity | F1-score |
|---|---|---|---|---|---|
| Proposed module (patch-level) | | | | | |
| Proposed module (patient-level) | 95.9% | 99.1% | 94.7% | 97.2% | 95.9% |
| Gopakumar et al. (2018) | 97.7% | – | 97.1% | 98.5% | – |
| Bibin, Nair & Punitha (2017) | 96.3% | – | 97.6% | 95.9% | – |
| Dong et al. (2017) | 98.1% | – | – | – | – |
| Liang et al. (2017) | 97.3% | – | 96.9% | 97.7% | – |
| Das et al. (2013) | 84.0% | – | 68.9% | – | |
| Ross et al. (2006) | 73.0% | – | 85.0% | – | – |
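The metrics reported in the two tables above are all derived from a binary confusion matrix. A minimal sketch (the function name and argument names are illustrative):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts.

    tp/fp/tn/fn: true positives, false positives, true negatives,
    false negatives (e.g. infected vs. uninfected cell patches).
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1
```

Note the trade-off visible in the thick-smear table: patch-level precision can be low (74.84%) even when accuracy is high, because uninfected patches vastly outnumber infected ones, so even a small false-positive rate yields many false positives relative to true positives.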