A. Mencattini, D. Di Giuseppe, M. C. Comes, P. Casti, F. Corsi, F. R. Bertani, L. Ghibelli, L. Businaro, C. Di Natale, M. C. Parrini, E. Martinelli.
Abstract
We describe a novel method to achieve a universal, massive, and fully automated analysis of cell motility behaviours, starting from time-lapse microscopy images. The approach was inspired by recent successes in the application of machine learning to style recognition in paintings and to artistic style transfer. The originality of the method relies (i) on the generation of atlases from collections of single-cell trajectories, in order to visually encode the multiple descriptors of cell motility, and (ii) on the application of a pre-trained Deep Learning Convolutional Neural Network architecture to extract from this visual atlas relevant features to be used for classification tasks. Validation tests were conducted on two different cell motility scenarios: 1) immune cells co-cultured with breast cancer cells in 3D biomimetic gels in organ-on-chip devices, upon treatment with an immunotherapy drug; 2) clustered prostate cancer cells in Petri dishes, upon treatment with a chemotherapy drug. For each scenario, single-cell trajectories are classified very accurately according to the presence or absence of the drugs. This original approach demonstrates the existence of universal features in cell motility (a so-called "motility style"), which are identified by the DL approach with the rationale of discovering the unknown message carried by cell trajectories.
Year: 2020 PMID: 32376840 PMCID: PMC7203117 DOI: 10.1038/s41598-020-64246-3
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Schematic representation of the proposed method. (A) Time-lapse microscopy is used to acquire the video sequence of cells moving in a Petri dish or in an OOC platform. (B) Cells are localized and tracked through the video sequence. (C) For each cell (or cluster of cells) of interest, an atlas of trajectories is collected for the different biological conditions under consideration. (D) Through a pre-trained Deep Learning architecture, i.e., AlexNet, the tool provides a feature representation of each atlas in order to perform experiment classification through a predefined taxonomy (e.g., drug vs. no-drug).
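As an illustration of step (C), atlas generation can be viewed as rasterizing each tracked sequence of (x, y) positions into a black-and-white tile and arranging the tiles into a grid. The following is a minimal NumPy sketch, not the authors' implementation; `rasterize_track` and `build_atlas` are hypothetical helper names, and the tile size and grid layout are assumptions.

```python
import numpy as np

def rasterize_track(track, size=64):
    """Render one (N, 2) array of (x, y) positions as a binary image tile.

    The trajectory is normalized so its bounding box fills the tile,
    mimicking the black-and-white trajectory pictures of the atlas.
    """
    track = np.asarray(track, dtype=float)
    img = np.zeros((size, size), dtype=np.uint8)
    span = np.ptp(track, axis=0)
    span[span == 0] = 1.0                       # avoid division by zero
    norm = (track - track.min(axis=0)) / span   # map coordinates into [0, 1]
    cols = np.clip((norm[:, 0] * (size - 1)).round().astype(int), 0, size - 1)
    rows = np.clip((norm[:, 1] * (size - 1)).round().astype(int), 0, size - 1)
    img[rows, cols] = 1
    return img

def build_atlas(tracks, tiles_per_row=4, size=64):
    """Tile many single-track images into one atlas image (grid layout)."""
    n_rows = -(-len(tracks) // tiles_per_row)   # ceiling division
    atlas = np.zeros((n_rows * size, tiles_per_row * size), dtype=np.uint8)
    for i, track in enumerate(tracks):
        r, c = divmod(i, tiles_per_row)
        atlas[r * size:(r + 1) * size, c * size:(c + 1) * size] = \
            rasterize_track(track, size)
    return atlas
```

The resulting atlas image could then be fed to a pre-trained CNN (AlexNet in the paper) for feature extraction, as in step (D).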
Figure 2. Four examples of trajectories of immune cells around a selected cancer cell: (a,b) treated cancer cells and (c,d) untreated cancer cells, for Case Study 1. Colours are used only for the sake of track visualization.
Figure 3. Four examples of trajectories of a cancer cell within a cluster: (a,b) treated and (c,d) untreated conditions, for Case Study 2. Colours are used only for the sake of track visualization.
Description of the two atlas compositions.

Case Study 1 atlas composition (EXPERIMENT 1: VID7–VID9; EXPERIMENT 2: VID10–VID12):

| | VID7 − | VID7 + | VID8 − | VID8 + | VID9 − | VID9 + | VID10 − | VID10 + | VID11 − | VID11 + | VID12 − | VID12 + |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| N. tumor cells | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 |
| N. tracks | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 |

Case Study 2 atlas composition:

| Video | VID1 | VID3 | VID5 | VID6 | | | |
|---|---|---|---|---|---|---|---|
| Treatment | + | − | − | + | + | + | + |
| N. clusters | 9 | 5 | 9 | 9 | 8 | 8 | 11 |
| N. tracks | 39 | 45 | 154 | 31 | 40 | 30 | 60 |

VID stands for video. In each table, − and + denote the untreated and treated conditions, respectively. Bolded videos indicate the testing partition for each case study.
Classification accuracy values obtained for the two case studies (Case Study 1, Case Study 2).
| Accuracy values | Single-cell analysis | Tumor-cell microenvironment analysis (majority voting) | Video-level analysis (majority voting) |
|---|---|---|---|
| **Case Study 1** | | | |
| Training | | | |
| Test | 68% (67%–69%) | 91% (86%–96%) | 100% |
| **Case Study 2** | | | |
| Training | | | |
| Test | 82% (78%–85%) | 92% (88%–95%) | 100% |
Three consensus levels have been considered. The first column reports single-cell analysis with no consensus; the second reports tumor-cell microenvironment analysis, where consensus by majority voting is performed over all immune-cell tracks in the neighbourhood of the same cancer cell (Case Study 1) or over all cancer-cell tracks belonging to the same cluster (Case Study 2); the third reports majority voting performed at the video level by combining all tracks within the same video. Balanced accuracy is used throughout to account for sample imbalance. The values within brackets are the results obtained in each turn of the two-fold testing procedure; the average accuracy values are also reported.
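The majority-voting consensus used at the microenvironment and video levels can be sketched in a few lines. `majority_vote` is a hypothetical helper name, and the "drug" / "no-drug" labels follow the taxonomy mentioned in Figure 1; this is an illustrative sketch, not the authors' code.

```python
from collections import Counter

def majority_vote(track_labels):
    """Consensus label from per-track predictions.

    Votes are aggregated over all tracks sharing the same cancer cell
    (microenvironment level) or the same video (video level).
    """
    counts = Counter(track_labels)
    return counts.most_common(1)[0][0]

# Per-track predictions around one cancer cell:
tracks = ["drug", "drug", "no-drug", "drug", "no-drug"]
print(majority_vote(tracks))  # prints "drug" (3 votes out of 5)
```

Because each cancer cell (or video) pools many tracks, occasional single-track misclassifications are outvoted, which is consistent with the accuracy increasing from the single-cell to the video level in the table above.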
Classification accuracy values obtained for the two case studies (Case Study 1, Case Study 2) by applying METHOD A (time-series analysis by a 1D Deep Learning strategy, as described in [17]) and METHOD B (classifiers using basic trajectory features).
| Accuracy values | Single-cell analysis | Tumor-cell microenvironment analysis (majority voting) | Video-level analysis (majority voting) |
|---|---|---|---|
| **Case Study 1** | | | |
| METHOD A | 59% (55%–62%) | 59% (55%–62%) | 59% (50%–67%) |
| METHOD B | 64% (60%–70%) | 79% (70%–87%) | 75% (67%–83%) |
| **Case Study 2** | | | |
| METHOD A | 55% (54%–56%) | 55% (46%–63%) | 71% (67%–75%) |
| METHOD B | 65% (63%–66%) | 70% (68%–72%) | 71% (67%–75%) |
The three consensus levels of Table 2 have been considered. The first column reports single-cell classification results with no consensus; the second reports tumor-cell microenvironment analysis, where consensus by majority voting is performed over all immune-cell tracks in the neighbourhood of the same cancer cell (Case Study 1) or over all cancer-cell tracks belonging to the same cluster (Case Study 2); the third reports majority voting performed at the video level by combining the predictions of all tracks within the same video. Balanced accuracy is used throughout to account for sample imbalance. The values within brackets are the results obtained in each turn of the two-fold validation procedure; the average accuracy values are also reported.
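Balanced accuracy, used in both tables to account for class imbalance, is the mean of the per-class recalls. A minimal sketch (a hypothetical re-implementation for illustration, not the authors' code):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls: robust when one class has
    many more samples than the other."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(classes)
```

For example, with four "treated" tracks all classified correctly and two "untreated" tracks of which one is misclassified, plain accuracy is 5/6 ≈ 83%, while balanced accuracy is (1.0 + 0.5)/2 = 75%, reflecting the weaker minority-class performance.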
Figure 4. Distribution of track length for (a) Case Study 1 and (b) Case Study 2. The dashed red line indicates the observation time: (a) 72 hr and (b) 6 hr.
Figure 5. Examples of atlases of trajectory pictures for Case Study 2. (a) Cluster of untreated cancer cells. (b) Cluster of cancer cells treated with 50 μM etoposide. Colours are used only for the sake of visualization, since the trajectory pictures are represented as black-and-white images.