| Literature DB >> 35365743 |
Ina Vernikouskaya1, Hans-Peter Müller2, Dominik Felbel1, Francesco Roselli2,3, Albert C Ludolph2,3, Jan Kassubek2,3, Volker Rasche4,5.
Abstract
The objective of this study was to automate the discrimination and quantification of human abdominal body fat compartments into subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) from T1-weighted MRI using encoder-decoder convolutional neural networks (CNN) and to apply the algorithm to a diseased patient sample, i.e., patients with amyotrophic lateral sclerosis (ALS). One hundred and fifty-five participants (74 patients with ALS and 81 healthy controls) were split into training (50%), validation (6%), and test (44%) data sets. SAT and VAT volumes were determined by a novel automated CNN-based algorithm of U-Net-like architecture and compared against an established semi-automatic assessment protocol as the reference. The Dice coefficients between the CNN-predicted masks and the reference segmentation were 0.87 ± 0.04 for SAT and 0.64 ± 0.17 for VAT in the control group, and 0.87 ± 0.08 for SAT and 0.68 ± 0.15 for VAT in the ALS group. The significantly increased VAT/SAT ratio in the ALS group in comparison to controls confirmed previous results. In summary, the CNN approach of U-Net-like architecture for automated segmentation of abdominal adipose tissue substantially facilitates data processing and offers the opportunity to automatically discriminate abdominal SAT and VAT compartments. Within the research field of neurodegenerative disorders with body composition alterations such as ALS, the unbiased analysis of body fat components might pave the way for these parameters as a potential biological marker or a secondary read-out for clinical trials.
Year: 2022 PMID: 35365743 PMCID: PMC8976026 DOI: 10.1038/s41598-022-09518-w
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Average network performance in validation and test data sets. VAT—visceral adipose tissue; SAT—subcutaneous adipose tissue.
| Group | Fat compartment | Metric | Validation | Test |
|---|---|---|---|---|
| ALS patients | VAT | Dice | 0.56 ± 0.09 | 0.68 ± 0.15 |
| | | Pixel error (%) | 4.11 ± 1.31 | 3.11 ± 1.29 |
| | SAT | Dice | 0.87 ± 0.06 | 0.87 ± 0.08 |
| | | Pixel error (%) | 2.07 ± 0.80 | 1.55 ± 0.63 |
| Controls | VAT | Dice | 0.60 ± 0.10 | 0.64 ± 0.17 |
| | | Pixel error (%) | 0.79 ± 0.48 | 2.41 ± 1.67 |
| | SAT | Dice | 0.87 ± 0.03 | 0.87 ± 0.04 |
| | | Pixel error (%) | 0.69 ± 0.08 | 2.11 ± 1.37 |
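The record does not spell out how the two metrics above were computed; a minimal sketch under common definitions (Dice = 2|A∩B| / (|A| + |B|); pixel error = percentage of pixels whose label disagrees between the predicted and reference binary masks):

```python
import numpy as np

def dice(pred, ref):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def pixel_error(pred, ref):
    """Percentage of pixels whose label differs between the two masks."""
    return 100.0 * np.mean(pred.astype(bool) != ref.astype(bool))

# toy 8x8 masks: two 4x4 squares overlapping in a 3x3 region (9 pixels)
ref = np.zeros((8, 8), dtype=bool); ref[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True
print(dice(pred, ref))        # 2*9 / (16+16) = 0.5625
print(pixel_error(pred, ref)) # 14 mislabeled of 64 pixels = 21.875 %
```

The example masks and metric definitions are illustrative, not taken from the paper.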
Figure 1. Prediction results on a test dataset from a control. (a) Original transversal single-plane MRI image. (b) Reference segmentation mask. (c) Predicted label map. (d) Difference image between reference and predicted segmentation, with arrows indicating major differences in the prediction of VAT in the hip bones. (e) Predicted segmentation mask overlaid on the original MR image.
Figure 2. Prediction results on randomly selected test cases. Multi-slice prediction on a dataset from a control (a) and from an ALS patient (b).
Figure 3. 3D rendering of multi-slice predictions from a control (a) and an ALS patient (b).
Figure 4. Correlations between volumes of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) calculated from the reference segmentation vs. predicted by the U-Net, in the control group (upper panels) and in the ALS group (lower panels).
Figure 5. Ratio of visceral adipose tissue (VAT) to subcutaneous adipose tissue (SAT) for 34 ALS patients vs. 34 controls (test sample data). Error bars are the standard error of the mean; *p < 0.01.
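The group comparison in Figure 5 rests on per-subject VAT/SAT ratios summarized by mean ± standard error of the mean. A minimal sketch of that computation, using randomly generated placeholder volumes (the actual per-subject data are not in this record):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical per-subject fat volumes in litres, illustrative only
vat = rng.uniform(0.5, 3.0, size=34)   # visceral adipose tissue
sat = rng.uniform(2.0, 8.0, size=34)   # subcutaneous adipose tissue

ratio = vat / sat                               # per-subject VAT/SAT ratio
sem = ratio.std(ddof=1) / np.sqrt(len(ratio))   # standard error of the mean
print(f"VAT/SAT = {ratio.mean():.3f} ± {sem:.3f} (SEM)")
```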
Clinical characterization of subject groups. FVC—forced vital capacity.
| | ALS patients | Controls | p value |
|---|---|---|---|
| m/f | 50/24 | 42/39 | 0.05 |
| Age/years | 60 ± 13; median 62; range 22–81 | 60 ± 13; median 58; range 26–88 | 0.19 |
| BMI/kg/m² | 26 ± 4; median 24; range 15–32 | 24 ± 4; median 25; range 19–40 | 0.01 |
| ALS-FRS-R | 37 ± 8 | – | – |
| Slope (ALS-FRS-R)/year | −10 ± 10 | – | – |
| Disease duration/years | 2.1 ± 1.6 | – | – |
| Onset (spinal/bulbar) | 50/14 | – | – |
| FVC/% | 65 ± 14 | – | – |
| Age at onset/years | 58 ± 12 | – | – |
Figure 6. Architecture of the proposed encoder-decoder convolutional neural network. It takes an image of 384 × 384 pixel resolution and processes it through several convolutional, pooling, transposed-convolutional, and concatenation layers before the final pixelwise semantic segmentation is performed with a softmax activation in the last classification layer.
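The record gives the input resolution (384 × 384) and the layer types, but not the depth or filter counts of the network. A minimal U-Net-like sketch in PyTorch with illustrative (not the paper's) channel sizes, three output classes assumed as background/SAT/VAT, and the softmax classification head described above:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions with ReLU, spatial size preserved by padding
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Encoder-decoder with skip connections; depth and channel counts
    are illustrative placeholders, not the architecture from the paper."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.bottom = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 upsampled + 32 skip channels
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, n_classes, 1)  # pixelwise classification

    def forward(self, x):
        e1 = self.enc1(x)               # 384 x 384
        e2 = self.enc2(self.pool(e1))   # 192 x 192
        b = self.bottom(self.pool(e2))  # 96 x 96
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        # softmax over the class dimension gives a per-pixel label map
        return torch.softmax(self.head(d1), dim=1)

x = torch.randn(1, 1, 384, 384)  # one single-channel T1-weighted slice
y = MiniUNet()(x)
print(y.shape)  # torch.Size([1, 3, 384, 384])
```

The transposed convolutions double the spatial resolution at each decoder stage so that the concatenated encoder features (skip connections) match in size, mirroring the figure's description.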