Leonie L Zeune1,2, Sanne de Wit1, A M Sofie Berghuis3, Maarten J IJzerman3, Leon W M M Terstappen1, Christoph Brune2.
Abstract
For using counts of circulating tumor cells (CTCs) in the clinic to aid a physician's decision, its reported values will need to be accurate and comparable between institutions. Many technologies have become available to enumerate and characterize CTCs, thereby showing a large range of reported values. Here we introduce an Open Source CTC scoring tool to enable comparison of different reviewers and facilitate the reach of a consensus on assigning objects as CTCs. One hundred images generated from two different platforms were used to assess concordance between 15 reviewers and an expert panel. Large differences were observed between reviewers in assigning objects as CTCs urging the need for computer recognition of CTCs. A demonstration of a deep learning approach on the 100 images showed the promise of this technique for future CTC enumeration.Entities:
Keywords: ACCEPT; Agreement; CTC; consensus; deep learning; definition; experts; ground truth; reviewers; scoring
Year: 2018 PMID: 30246927 PMCID: PMC6585854 DOI: 10.1002/cyto.a.23576
Source DB: PubMed Journal: Cytometry A ISSN: 1552-4922 Impact factor: 4.355
Figure 1. Screenshot of the ACCEPT CTC scoring tool showing a thumbnail gallery of all fluorescent channels for each presented cell, with four answers and three plots presenting measurement information for the respective cell to aid in the decision. After an answer is selected, the program automatically proceeds to the next cell.
Figure 2. Results of the CTC scoring by 15 reviewers, summarized as the average reviewer, followed by the results of the expert panel consisting of four expert reviewers and the deep learning automated CTC scoring (upper panel). The average reviewer scores of all 100 cells are presented in three scatter plots using several parameters: the mean intensity of the signal detected in all three channels (DAPI, CK, and CD45), the size and roundness of the CK signal, and the overlay between the DAPI and CK signals (lower panel).
Overview of agreement on 100 cells between (A) Average Reviewer and Expert Panel, (B) Average Reviewer and Deep Learning, and (C) Expert Panel and Deep Learning, summarizing scores as a “CTC” class and “Not a CTC” class.
A. Average reviewers vs. expert panel (agreement: 80%, κ = 0.60)

| Average reviewers | Expert panel: Not a CTC | Expert panel: CTC |
|---|---|---|
| Not a CTC | 50 | 19 |
| CTC | 1 | 30 |

B. Average reviewers vs. deep learning (agreement: 84%, κ = 0.64)

| Average reviewers | Deep learning: Not a CTC | Deep learning: CTC |
|---|---|---|
| Not a CTC | 58 | 11 |
| CTC | 5 | 26 |

C. Expert panel vs. deep learning (agreement: 76%, κ = 0.52)

| Expert panel | Deep learning: Not a CTC | Deep learning: CTC |
|---|---|---|
| Not a CTC | 45 | 6 |
| CTC | 18 | 31 |
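The agreement percentages and Cohen's κ values above can be recomputed directly from the 2 × 2 counts. The sketch below (the `cohens_kappa` helper is illustrative, not from the paper) reproduces the reported κ for panels A and C; the published κ for panel B may have been derived from the unrounded per-reviewer scores rather than the rounded counts shown here.

```python
def cohens_kappa(m):
    """Cohen's kappa for a 2x2 agreement matrix m = [[a, b], [c, d]] of counts."""
    n = sum(sum(row) for row in m)
    observed = (m[0][0] + m[1][1]) / n                            # observed agreement
    row = [sum(r) / n for r in m]                                 # marginals, rater 1
    col = [sum(m[i][j] for i in range(2)) / n for j in range(2)]  # marginals, rater 2
    expected = sum(row[k] * col[k] for k in range(2))             # chance agreement
    return (observed - expected) / (1 - expected)

# Panel A: average reviewers (rows) vs. expert panel (columns)
kappa_a = cohens_kappa([[50, 19], [1, 30]])
# Panel C: expert panel (rows) vs. deep learning (columns)
kappa_c = cohens_kappa([[45, 6], [18, 31]])
print(round(kappa_a, 2), round(kappa_c, 2))  # → 0.6 0.52
```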