Shori Nishimoto, Yuta Tokuoka, Takahiro G Yamada, Noriko F Hiroi, Akira Funahashi.
Abstract
Image-based deep learning systems, such as convolutional neural networks (CNNs), have recently been applied to cell classification, producing impressive results; however, the application of CNNs has been confined to classifying the current cell state from an image. Here, we focused on cell movement, where the current and/or past cell shape can influence the future direction of movement. We demonstrate that CNNs prospectively predicted the future direction of cell movement with high accuracy from a single image patch of a cell at a certain time. Furthermore, by visualizing the image features learned by the CNNs, we could identify morphological features, e.g., the protrusions and trailing edge, that have been experimentally reported to determine the direction of cell movement. Our results indicate that CNNs have the potential to predict the future direction of cell movement from current cell shape and can be used to automatically identify the morphological features that influence future cell movement.
Year: 2019 PMID: 31483827 PMCID: PMC6726366 DOI: 10.1371/journal.pone.0221245
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Number of image patches per moving direction.
| Cell Type | Moving Direction | train | validation | test | total |
|---|---|---|---|---|---|
| NIH/3T3 | upper right | 107 | 36 | 36 | 179 |
| | upper left | 112 | 38 | 38 | 188 |
| | lower left | 145 | 49 | 48 | 242 |
| | lower right | 105 | 36 | 35 | 176 |
| U373 | upper right | 99 | 33 | 33 | 165 |
| | upper left | 151 | 51 | 51 | 253 |
| | lower left | 57 | 20 | 19 | 96 |
| | lower right | 168 | 57 | 56 | 281 |
| hTERT-RPE1 | upper right | 57 | 20 | 19 | 96 |
| | upper left | 438 | 146 | 147 | 731 |
| | lower left | 162 | 55 | 54 | 271 |
| | lower right | 141 | 47 | 47 | 235 |
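Within each moving direction, the counts correspond to an approximately 3:1:1 (60%/20%/20%) train/validation/test split. As an illustration only (the splitting procedure, function name, and rounding are assumptions inferred from the table), a minimal Python sketch of such a per-direction split:

```python
import random

def split_patches(patch_paths, seed=0, ratios=(0.6, 0.2, 0.2)):
    """Shuffle one direction's image patches and split them roughly
    3:1:1 into train/validation/test, as in the table above."""
    rng = random.Random(seed)
    paths = list(patch_paths)
    rng.shuffle(paths)
    n_train = round(len(paths) * ratios[0])
    n_val = round(len(paths) * ratios[1])
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

# Example: the 179 upper-right NIH/3T3 patches -> 107/36/36.
train, val, test = split_patches(range(179))
print(len(train), len(val), len(test))
```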
Fig 1. Architecture of CNN models for predicting the future direction of cell movement.
In the flowchart, an image patch of an NIH/3T3 cell annotated as moving to the upper right (Input) is presented to a CNN model. The input is processed through a series of alternating convolutional layers (orange) and max-pooling layers (yellow). In the convolutional layers, the activation images illustrate feature maps extracted from the sample image patch (Input). The red boxes and lines illustrate the connections within the CNN model. After repeated processing through convolutional and max-pooling layers, fully connected layers produce the prediction (green). The network output (Output) represents the probability distribution over the four moving directions.
CNN model-relevant hyperparameters.
| Layer | Type | Description |
|---|---|---|
| 1 | Convolution | Filter size = 5 × 5, Number of filters = 8, Stride size = 1 |
| 2 | Convolution | Filter size = 5 × 5, Number of filters = 32, Stride size = 1 |
| 3 | Max-pooling | Filter size = 2 × 2, Stride size = 2 |
| 4 | Convolution | Filter size = 3 × 3, Number of filters = 40, Stride size = 1 |
| 5 | Convolution | Filter size = 5 × 5, Number of filters = 32, Stride size = 1 |
| 6 | Max-pooling | Filter size = 2 × 2, Stride size = 2 |
| 7 | Convolution | Filter size = 5 × 5, Number of filters = 48, Stride size = 1 |
| 8 | Convolution | Filter size = 5 × 5, Number of filters = 64, Stride size = 1 |
| 9 | Max-pooling | Filter size = 2 × 2, Stride size = 2 |
| 10 | Convolution | Filter size = 5 × 5, Number of filters = 64, Stride size = 1 |
| 11 | Convolution | Filter size = 5 × 5, Number of filters = 72, Stride size = 1 |
| 12 | Max-pooling | Filter size = 2 × 2, Stride size = 2 |
| 13 | Fully connected | Number of neurons = 1000 |
| 14 | Fully connected | Number of neurons = 4 |
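For concreteness, a minimal PyTorch sketch of this 14-layer stack. The table does not specify padding, activation functions, or the input patch size, so size-preserving padding, ReLU activations, a single-channel input, and the 128 × 128 patch in the usage example are assumptions; the class name is illustrative.

```python
import torch
import torch.nn as nn

class MovementCNN(nn.Module):
    """Sketch of the 14-layer CNN in the hyperparameter table."""

    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, stride=1, padding=2), nn.ReLU(),    # 1
            nn.Conv2d(8, 32, 5, stride=1, padding=2), nn.ReLU(),   # 2
            nn.MaxPool2d(2, stride=2),                             # 3
            nn.Conv2d(32, 40, 3, stride=1, padding=1), nn.ReLU(),  # 4
            nn.Conv2d(40, 32, 5, stride=1, padding=2), nn.ReLU(),  # 5
            nn.MaxPool2d(2, stride=2),                             # 6
            nn.Conv2d(32, 48, 5, stride=1, padding=2), nn.ReLU(),  # 7
            nn.Conv2d(48, 64, 5, stride=1, padding=2), nn.ReLU(),  # 8
            nn.MaxPool2d(2, stride=2),                             # 9
            nn.Conv2d(64, 64, 5, stride=1, padding=2), nn.ReLU(),  # 10
            nn.Conv2d(64, 72, 5, stride=1, padding=2), nn.ReLU(),  # 11
            nn.MaxPool2d(2, stride=2),                             # 12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(1000), nn.ReLU(),  # 13 (input size inferred lazily)
            nn.Linear(1000, n_classes),      # 14: one logit per moving direction
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One dummy grayscale patch -> four logits; a softmax (e.g., inside a
# cross-entropy loss) turns these into the distribution over directions.
logits = MovementCNN()(torch.zeros(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 4])
```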
Fig 2. Visualized image features learned by the CNN models.
(A) NIH/3T3 dataset. (B) U373 dataset. (C) hTERT-RPE1 dataset. For each moving direction, each group of images shows an exemplary result (i.e., that for a correctly predicted test image patch). The upper row of each group comprises, from left to right, the frame corresponding to the input image patch, the frame imaged midway between the left and right frames, and the frame at which the moving direction was annotated (scale bars, 20 μm). The time under each frame shows the elapsed time since the leftmost frame was imaged. The blue bounding box indicates the area corresponding to the input image patch. The red dot indicates the position of the cell obtained by manual tracking. The red line indicates the trajectory of cell movement starting from the position of the cell at 0 min. The lower row of each group comprises, from left to right, the input image patch, the local features visualized by guided backpropagation (GBP) for the three feature maps with the highest maximum activations, and the heatmap of pixel-wise relevance calculated by deep Taylor decomposition (DTD).
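As an illustration of the GBP step, a sketch in PyTorch that backpropagates from the predicted class score; note that the paper applies GBP to individual high-activation feature maps, so starting from the output logit (and reusing the MovementCNN sketch above) is a simplification, not the authors' exact procedure.

```python
import torch
import torch.nn as nn

def guided_backprop(model, patch, class_idx):
    """Pixel-level saliency via guided backpropagation: at every ReLU,
    the backward pass additionally discards negative gradients, so only
    positive evidence for the chosen class reaches the input pixels."""
    def clamp_negative(module, grad_input, grad_output):
        # ReLU's backward already zeroes positions whose forward input
        # was negative; clamping also removes negative gradients (GBP).
        return (torch.clamp(grad_input[0], min=0.0),)

    handles = [m.register_full_backward_hook(clamp_negative)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    patch = patch.clone().requires_grad_(True)
    model(patch)[0, class_idx].backward()
    for h in handles:
        h.remove()
    return patch.grad.squeeze(0)

# Example with the MovementCNN sketch above and a dummy patch:
# saliency = guided_backprop(MovementCNN(), torch.rand(1, 1, 128, 128), 0)
```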
Fig 3. Annotation of the moving direction.
(A) Exemplary time-lapse images of a migrating cell (scale bars, 20 μm). The time under each frame shows the elapsed time since the leftmost frame (the annotation target) was imaged. Δt is the time at which the net displacement first exceeded the average diameter of NIH/3T3 cells. The red dot indicates the position of the cell obtained by manual tracking. The red line indicates the trajectory of cell movement starting from the position of the cell at 0 min. The radius of the cyan circle is the average diameter of NIH/3T3 cells. (B) Annotation of one of the four moving directions. The moving direction was annotated according to the signs of the cell displacement (Δx, Δy) at time Δt, as shown in the figure. The red line and the cyan circle are the same as those in the frame at Δt min in (A).
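For concreteness, a minimal Python sketch of this annotation rule; the function name, the sign convention (here y increasing upward, whereas image coordinates often increase downward), and the handling of displacements lying exactly on an axis are assumptions not specified in the caption.

```python
import math

def annotate_direction(track, mean_diameter):
    """Annotate a manually tracked trajectory with one of the four
    moving directions, following Fig 3: walk along the track until the
    net displacement from the first position exceeds the average cell
    diameter, then label by the quadrant of (dx, dy).

    `track` is a sequence of (x, y) positions; returns None if the
    cell never moves farther than `mean_diameter`.
    """
    x0, y0 = track[0]
    for x, y in track[1:]:
        dx, dy = x - x0, y - y0
        if math.hypot(dx, dy) > mean_diameter:  # left the cyan circle
            if dx >= 0:
                return "upper right" if dy >= 0 else "lower right"
            return "upper left" if dy >= 0 else "lower left"
    return None
```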