| Literature DB >> 30144349 |
Anna-Maija Penttinen, Ilmari Parkkinen, Sami Blom, Jaakko Kopra, Jaan-Olle Andressoo, Kari Pitkänen, Merja H Voutilainen, Mart Saarma, Mikko Airavaara.
Abstract
Unbiased estimates of neuron numbers in the substantia nigra are crucial for experimental Parkinson's disease models and gene-function studies. Unbiased stereological counting with optical fractionation is well established but extremely laborious and time-consuming. Advances in neural networks and deep learning have opened a new way to teach computers to count neurons: a learning-based programming paradigm enables a computer to learn from the data, making an automated cell counting method possible. The advantages of computerized counting are reproducibility, elimination of human error and fast, high-capacity analysis. We implemented whole-slide digital imaging and deep convolutional neural networks (CNN) to count substantia nigra dopamine neurons. We compared the results of the developed method against independent manual counts by human observers and validated the CNN algorithm against previously published data in rats and mice, in which tyrosine hydroxylase (TH)-immunoreactive neurons were counted using unbiased stereology. The developed CNN algorithm and the fully cloud-embedded Aiforia™ platform provide robust and fast analysis of dopamine neurons in the rat and mouse substantia nigra.
Keywords: artificial intelligence; cloud-based analysis; digital imaging; midbrain; stereology
Year: 2018 PMID: 30144349 PMCID: PMC6585833 DOI: 10.1111/ejn.14129
Source DB: PubMed Journal: Eur J Neurosci ISSN: 0953-816X Impact factor: 3.386
Figure 1. Workflow and validation of the Aiforia™ platform. (a) Schematic diagram of the workflow combining whole-slide scanning, cloud-based image processing, and the Aiforia™ platform to count TH-positive neurons in the SNpc. Circles represent detected neuronal somas. (b) Representative image of the analysed area and CNN performance; scale bar is 100 μm. CNN, convolutional neural network. (c) The algorithm was validated by comparing its results to manual counts obtained by human observers in specific regions of the rat midbrain; R² = 0.95, y = 0.95x. (d) The algorithm was next tested against rat samples previously analysed with StereoInvestigator (Runeberg-Roos et al., 2016). The data are shown as Left (L; lesion side)/Right (R; intact side) ratios; R² = 0.81. (e) The algorithm was tested against mouse samples previously analysed with StereoInvestigator (Kumar et al., 2015). The data are shown as Left (L; lesion side)/Right (R; intact side) ratios; R² = 0.87. (f) 96 consecutive 40-micron-thick brain sections were analysed to obtain the total number of TH-positive neurons in the rat SN. The analysed area is marked in each section. [Colour figure can be viewed at wileyonlinelibrary.com]
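The validation in panels (c–e) amounts to fitting a line through the origin (y = bx) between CNN counts and the reference counts and reporting R². A minimal sketch of that computation, using hypothetical paired counts rather than the paper's data:

```python
# Sketch of the panel (c-e) validation: regress CNN counts against
# manual/stereology counts with a through-origin fit y = b*x and report R^2.
# The paired counts below are hypothetical, not taken from the paper.

def fit_through_origin(x, y):
    """Least-squares slope through the origin and coefficient of determination."""
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    y_mean = sum(y) / len(y)
    ss_res = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - y_mean) ** 2 for yi in y)
    return b, 1 - ss_res / ss_tot

manual = [120, 250, 310, 480, 540]  # hypothetical manual counts per region
cnn    = [115, 240, 300, 470, 520]  # hypothetical CNN counts for the same regions
slope, r2 = fit_through_origin(manual, cnn)
print(f"y = {slope:.2f}x, R^2 = {r2:.2f}")
```

A slope near 1 with high R² indicates the CNN neither systematically over- nor under-counts relative to the reference method.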
Formulas for counting precision, recall and F1‐score for the CNN algorithm
| Metrics |
|---|
| Precision = TP/(TP + FP) |
| Recall = TP/(TP + FN) |
| F1‐score = 2*Precision*Recall/(Precision + Recall) |
FP, false positive; FN, false negative; TP, true positive; TN, true negative.
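The three metrics above follow directly from raw detection tallies. A minimal sketch, with illustrative TP/FP/FN counts that are not from the paper:

```python
# Precision, recall and F1-score from raw detection tallies, per the
# formulas in the table above. The counts here are illustrative only.

def precision(tp: int, fp: int) -> float:
    """Fraction of CNN detections that are real neurons."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of real neurons that the CNN detected."""
    return tp / (tp + fn)

def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

tp, fp, fn = 880, 115, 120  # hypothetical tallies
p, r = precision(tp, fp), recall(tp, fn)
print(f"precision={p:.3f} recall={r:.3f} F1={f1_score(p, r):.3f}")
```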
The results for counting precision, recall and F1‐score of the CNN algorithm versus human observers
| Metric | Score (95% confidence interval) |
|---|---|
| Precision | 88.5% (85.5–91.4%) |
| Recall | 87.8% (84.9–90.7%) |
| F1‐score | 88.2% (85.3–91.0%) |
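As a consistency check, the reported F1-score can be recomputed from the reported precision and recall; the small deviation from the table's 88.2% reflects rounding, since the paper computes F1 from raw TP/FP/FN counts:

```python
# Recompute F1 from the reported point estimates of precision and recall.
p, r = 0.885, 0.878  # reported precision and recall
f1 = 2 * p * r / (p + r)
print(f"F1 = {f1:.1%}")  # ~88.1%, matching the reported 88.2% within rounding
```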
Estimates of total number of TH+ cells in 6‐OHDA lesioned rats and mice with StereoInvestigator (SI) and the CNN algorithm
| Group | CNN lesioned | CNN intact | SI lesioned | SI intact |
|---|---|---|---|---|
| Rats (vehicle treated) | 6,052 ± 1,633 | 21,443 ± 743 | 6,000 ± 1,636 | 21,276 ± 762 |
| Mice (wildtype, vehicle treated) | 1,428 ± 180 | 2,119 ± 95 | 1,139 ± 147 | 2,017 ± 100 |
In the CNN analysis, the obtained cell numbers were multiplied by six (rat: sections collected at six-section intervals, a total of nine sections analysed) or by two (mouse: sections analysed at three planes using the medial terminal nucleus of the accessory optic tract as an anatomical landmark; Kumar et al., 2015). N = 11 in all groups. Data are expressed as mean ± SEM.
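The extrapolation described above is a simple section-sampling estimate: per-section CNN counts are summed and multiplied by the sampling factor. A minimal sketch, using hypothetical per-section counts:

```python
# Section-sampling extrapolation as described above: sum the counts from the
# analysed sections and multiply by the sampling factor (6 for rat, every
# sixth section; 2 for mouse). The per-section counts here are hypothetical.

RAT_SAMPLING_FACTOR = 6
MOUSE_SAMPLING_FACTOR = 2

def estimate_total(section_counts, sampling_factor):
    """Estimate the total TH+ neuron number from sampled sections."""
    return sum(section_counts) * sampling_factor

rat_sections = [310, 405, 480, 520, 495, 450, 380, 290, 210]  # 9 hypothetical sections
print(estimate_total(rat_sections, RAT_SAMPLING_FACTOR))
```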