| Literature DB >> 35974325 |
Judah Zammit1, Daryl L X Fung1, Qian Liu1,2, Carson Kai-Sang Leung1, Pingzhao Hu3,4,5.
Abstract
BACKGROUND: A recurring problem in image segmentation is a lack of labelled data. This problem is especially acute in the segmentation of lung computed tomography (CT) of patients with Coronavirus Disease 2019 (COVID-19). The reason for this is simple: the disease has not been prevalent long enough to generate a great number of labels. Semi-supervised learning promises a way to learn from data that is unlabelled and has seen tremendous advancements in recent years. However, due to the complexity of its label space, those advancements cannot be applied to image segmentation. That being said, it is this same complexity that makes it extremely expensive to obtain pixel-level labels, making semi-supervised learning all the more appealing. This study seeks to bridge this gap by proposing a novel model that utilizes the image segmentation abilities of deep convolution networks and the semi-supervised learning abilities of generative models for chest CT images of patients with the COVID-19.Entities:
Keywords: COVID-19; Computed tomography; Convolutional network; Image segmentation; Semi-supervised learning
Year: 2022 PMID: 35974325 PMCID: PMC9381397 DOI: 10.1186/s12859-022-04878-6
Source DB: PubMed Journal: BMC Bioinformatics ISSN: 1471-2105 Impact factor: 3.307
Fig. 1 Before (top) and after (bottom) data pre-processing
Quantitative results of ground-glass opacity (GGO), consolidation (CON), background, and the overall average on the test dataset
| Lesion | Method | IoU | F1 | Recall | Precision |
|---|---|---|---|---|---|
| GGO | U-Net | 0.391 ± 0.280 | 0.499 ± 0.320 | 0.608 ± 0.358 | 0.470 ± 0.326 |
| GGO | SegNet | 0.004 ± 0.027 | 0.007 ± 0.044 | 0.012 ± 0.087 | 0.009 ± 0.071 |
| GGO | StitchNet | 0.358 ± 0.257 | 0.471 ± 0.303 | 0.517 ± 0.331 | 0.489 ± 0.328 |
| CON | U-Net | 0.404 ± 0.331 | 0.490 ± 0.368 | 0.616 ± 0.378 | 0.485 ± 0.380 |
| CON | SegNet | 0.021 ± 0.113 | 0.027 ± 0.137 | 0.057 ± 0.227 | 0.021 ± 0.114 |
| CON | StitchNet | 0.318 ± 0.315 | 0.397 ± 0.361 | 0.539 ± 0.411 | 0.387 ± 0.369 |
| Background | U-Net | 0.983 ± 0.023 | 0.992 ± 0.012 | 0.987 ± 0.020 | 0.996 ± 0.006 |
| Background | SegNet | 0.970 ± 0.044 | 0.984 ± 0.024 | 0.999 ± 0.009 | 0.971 ± 0.043 |
| Background | StitchNet | 0.985 ± 0.021 | 0.992 ± 0.011 | 0.992 ± 0.011 | 0.993 ± 0.014 |
| Overall | U-Net | 0.593 | 0.660 | 0.737 | 0.650 |
| Overall | SegNet | 0.332 | 0.339 | 0.356 | 0.334 |
| Overall | StitchNet | 0.554 | 0.620 | 0.683 | 0.623 |
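The IoU, F1, Recall, and Precision columns above follow their standard definitions over true/false positives and negatives. A minimal sketch of how such per-lesion scores could be computed from binary masks (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def lesion_metrics(pred, target):
    """Compute IoU, F1, recall, and precision for one binary lesion mask.

    pred, target: arrays of the same shape; nonzero entries mark lesion pixels.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # true positives
    fp = np.logical_and(pred, ~target).sum()  # false positives
    fn = np.logical_and(~pred, target).sum()  # false negatives
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"IoU": iou, "F1": f1, "Recall": recall, "Precision": precision}

# Toy example on 3x3 masks: 2 true positives, 1 false positive, 1 false negative
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
target = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
m = lesion_metrics(pred, target)
```

The per-lesion cells in the table report the mean ± standard deviation of such scores over the test scans; the Overall rows average across the three classes.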
Fig. 2 Visual comparison of the segmentation results, where the green and blue labels indicate GGO and consolidation, respectively
Fig. 3 A comparison between the LVAE and our SVAE
Fig. 4 A description of the building blocks of the SVAE
The dimensionality of the five latent variables
| Level | Stochastic (z) dimensions | Deterministic (d) dimensions |
|---|---|---|
| 0 | (352,352,1) | NA |
| 1 | (176,176,1) | (176,176,32) |
| 2 | (88,88,1) | (88,88,64) |
| 3 | (44,44,1) | (44,44,128) |
| 4 | (22,22,1) | (22,22,256) |
| 5 | (11,11,1) | (11,11,512) |
Level 0 denotes the input image x
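The table follows a regular pattern: each level halves the 352×352 spatial resolution, while the deterministic channel count doubles from 32. A sketch that reproduces the table's shapes (the function name and defaults are illustrative, not from the paper):

```python
def level_dims(level, size=352, base_channels=32):
    """Return the (stochastic, deterministic) shapes at a given level.

    Level 0 is the input image x, which has no deterministic counterpart.
    """
    s = size // (2 ** level)                 # spatial side halves per level
    if level == 0:
        return (s, s, 1), None
    return (s, s, 1), (s, s, base_channels * 2 ** (level - 1))

for lvl in range(6):
    print(lvl, *level_dims(lvl))
```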
Fig. 5 An illustration of the mappings between the SVAE's variables
Fig. 6 Visualization of the data and StitchNet's outputs. For segmentations, ground-glass opacity is shown in green, consolidation in blue, and healthy tissue in black
Fig. 7 Hierarchical graphical models. Latent, partially observed, and observed variables are shown as clear, half-filled, and filled nodes, respectively. Arrows and diamond nodes represent functional mappings