| Literature DB >> 35773276 |
Alexander Studier-Fischer, Silvia Seidlitz, Jan Sellner, Berkin Özdemir, Manuel Wiesenfarth, Leonardo Ayala, Jan Odenthal, Samuel Knödler, Karl Friedrich Kowalewski, Caelan Max Haney, Isabella Camplisson, Maximilian Dietrich, Karsten Schmidt, Gabriel Alexander Salg, Hannes Götz Kenngott, Tim Julian Adler, Nicholas Schreck, Annette Kopp-Schneider, Klaus Maier-Hein, Lena Maier-Hein, Beat Peter Müller-Stich, Felix Nickel.
Abstract
Visual discrimination of tissue during surgery can be challenging, since different tissues appear similar to the human eye. Hyperspectral imaging (HSI) removes this limitation by associating each pixel with high-dimensional spectral information. While previous work has shown its general potential to discriminate tissue, clinical translation has been limited due to the method's current lack of robustness and generalizability. Specifically, the scientific community lacks a comprehensive spectral tissue atlas, and it is unknown whether variability in spectral reflectance is primarily explained by tissue type rather than by the recorded individual or the specific acquisition conditions. The contribution of this work is threefold: (1) Based on an annotated medical HSI data set (9059 images from 46 pigs), we present a tissue atlas featuring spectral fingerprints of 20 different porcine organs and tissue types. (2) Using the principle of mixed model analysis, we show that the greatest source of variability related to HSI images is the organ under observation. (3) We show that HSI-based fully automatic tissue differentiation of 20 organ classes with deep neural networks is possible with high accuracy (> 95%). We conclude from our study that automatic tissue discrimination based on HSI data is feasible and could thus aid in intraoperative decision-making and pave the way for context-aware computer-assisted surgery systems and autonomous robotics.
Year: 2022 PMID: 35773276 PMCID: PMC9247052 DOI: 10.1038/s41598-022-15040-w
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1. Tissue atlas comprising spectral fingerprints of 20 organs and specific tissue types. Stomach (A = 39; n = 849), jejunum (A = 44; n = 1546), colon (A = 39; n = 1330), liver (A = 41; n = 1454), gallbladder (A = 28; n = 526), pancreas (A = 31; n = 530), kidney (A = 42; n = 568), spleen (A = 41; n = 1353), bladder (A = 32; n = 779), omentum (A = 23; n = 570), lung (A = 19; n = 652), heart (A = 19; n = 629), cartilage (A = 15; n = 586), bone (A = 14; n = 537), skin (A = 43; n = 2158), muscle (A = 15; n = 560), peritoneum (A = 28; n = 2042), vena cava (A = 15; n = 353), kidney with Gerota’s fascia (A = 18; n = 393), bile fluid (A = 13; n = 362). A indicates the number of animals; n indicates the total number of measurements. Graphs depict mean reflectance (ℓ1-normalized at pixel level) of individual pigs (gray) as well as the overall mean (blue) ± 1 standard deviation (SD) (black), with wavelengths from 500 to 1000 nm on the x-axis and reflectance in arbitrary units on the y-axis.
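The per-organ fingerprints in Figure 1 are built from ℓ1-normalized pixel spectra. As a rough illustration, the sketch below shows how such a summary could be computed with NumPy; the array layout, the band count, and all function and variable names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def l1_normalize(spectra):
    """Normalize each pixel spectrum so that its bands sum to 1 (l1 norm).

    spectra: array of shape (n_pixels, n_bands) with raw reflectance values.
    """
    norms = spectra.sum(axis=1, keepdims=True)
    return spectra / np.clip(norms, 1e-12, None)  # guard against empty pixels

def organ_fingerprint(spectra_per_pig):
    """Per-pig mean spectra plus overall mean and SD, as plotted in Figure 1.

    spectra_per_pig: list of (n_pixels_i, n_bands) arrays, one entry per pig.
    """
    pig_means = np.stack([l1_normalize(s).mean(axis=0) for s in spectra_per_pig])
    return pig_means, pig_means.mean(axis=0), pig_means.std(axis=0)

# Synthetic example: 3 pigs, 100 bands covering 500-1000 nm (assumed sampling)
wavelengths = np.linspace(500, 1000, 100)
spectra_per_pig = [np.random.rand(50, 100) for _ in range(3)]
pig_means, overall_mean, overall_sd = organ_fingerprint(spectra_per_pig)
```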
Figure 2. Visualization of spectral similarity using t-distributed Stochastic Neighbor Embedding (t-SNE) as a non-linear dimensionality reduction tool on the ℓ1-normalized data; one point represents the median spectrum within one region of interest (ROI) of one organ in one image of one pig. Organs such as spleen and liver form isolated clusters, while others such as jejunum overlap with the rest.
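The embedding in Figure 2 can be reproduced in spirit with scikit-learn's TSNE applied to the ROI-median spectra. A minimal sketch, assuming one ℓ1-normalized median spectrum per ROI; the perplexity and other settings are illustrative defaults, not the paper's reported configuration.

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical input: one median spectrum per ROI (rows), with matching
# organ labels used only for coloring the scatter plot afterwards.
median_spectra = np.random.rand(500, 100)     # (n_rois, n_bands), synthetic
organ_labels = np.random.randint(0, 20, 500)  # 20 organ classes

embedding = TSNE(
    n_components=2,   # project to a 2-D map
    perplexity=30.0,  # illustrative default, not the paper's setting
    init="pca",
    random_state=0,
).fit_transform(median_spectra)  # -> shape (n_rois, 2)
```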
Figure 3. Sources of variation of hyperspectral data: proportion of variability in reflectance explained by each factor using linear mixed models. Factors include “organ”, “pig”, “angle”, “image” and “repetition”. For each recorded wavelength, an independent linear mixed model was fitted with fixed effects for the factors “organ” and “angle” as well as random effects for “pig” and “image”. Variation across repetitions was given by the residual variation. The greater the proportion of variability for “organ”, the more reflectance can be seen as organ-characteristic. Shaded areas depict 95% (pointwise) confidence intervals based on parametric bootstrapping. The numbers represent the median across wavelengths.
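A per-wavelength mixed model of the kind described above can be expressed with statsmodels. The sketch below is one plausible encoding, assuming a long-format table with one row per repeated measurement; the exact random-effects structure used by the authors (e.g. whether image is nested within pig) is not specified here and is an assumption.

```python
import statsmodels.formula.api as smf

def fit_wavelength_model(df):
    """Fit one linear mixed model for a single wavelength.

    df: long-format DataFrame with columns "refl" (l1-normalized reflectance
    at this wavelength), "organ", "angle", "pig" and "image".
    """
    model = smf.mixedlm(
        "refl ~ C(organ) + C(angle)",          # fixed effects
        data=df,
        groups="pig",                          # random intercept per pig
        vc_formula={"image": "0 + C(image)"},  # variance component per image
    )
    result = model.fit()
    # result.cov_re holds the pig variance, result.vcomp the image variance,
    # and result.scale the residual variance (variation across repetitions).
    return result
```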
Figure 4. Sources of variation of hyperspectral data stratified by organ: explained variation analysis using linear mixed models. For each organ and wavelength, an independent linear mixed model was fitted with fixed effects for “angle” and random effects for “pig” and “image”. Variation across repetitions is given by the residual standard deviation. Shaded areas depict 95% (pointwise) confidence intervals based on parametric bootstrapping. The numbers on each subplot represent the median across wavelengths.
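For the stratified analysis, the same kind of model can simply be refitted within each organ's subset of the data, dropping the organ term. Continuing the previous sketch (same hypothetical table df and import):

```python
# Per-organ variant for Figure 4: no "organ" fixed effect, one fit per organ.
for organ, sub in df.groupby("organ"):
    result = smf.mixedlm(
        "refl ~ C(angle)",                     # only "angle" remains fixed
        data=sub,
        groups="pig",
        vc_formula={"image": "0 + C(image)"},
    ).fit()
```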
Figure 5. Results of deep learning-based organ classification. (a) Confusion matrix generated for a hold-out test set comprising 9895 annotations from 5293 images of 8 pigs that were not part of the training data. Confusion matrices were calculated and column-wise normalized (i.e. divided by the column sum) per pig based on the absolute number of (mis-)classified annotations. These normalized confusion matrices were averaged across pigs while ignoring non-existent entries (e.g. due to missing organs for one pig). Each value in the matrix thus depicts the average fraction of annotations labeled as the column class and predicted as the row class. Numbers in brackets depict the standard deviation across pigs. Zero values are omitted from the confusion matrix to improve readability. Since multiple organs can appear in the same image, the number of annotations exceeds the number of images. (b) Exemplary image with multiple organ annotations by an expert. (c) Organs classified through deep learning.
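The column-wise normalization and pig-level averaging described in (a) can be written compactly with NumPy. A sketch, assuming per-pig matrices of absolute annotation counts with rows as predictions and columns as true labels (all names illustrative):

```python
import numpy as np

def normalize_columns(counts):
    """Divide each column by its sum; empty columns (missing organs for a
    given pig) become NaN so they are ignored when averaging across pigs."""
    col_sums = counts.sum(axis=0, keepdims=True).astype(float)
    col_sums[col_sums == 0] = np.nan
    return counts / col_sums

# Synthetic stand-in: 8 test pigs, 20 x 20 matrices of annotation counts
per_pig_counts = [np.random.randint(0, 50, (20, 20)) for _ in range(8)]
normalized = np.stack([normalize_columns(c) for c in per_pig_counts])
mean_cm = np.nanmean(normalized, axis=0)  # values shown in Figure 5a
std_cm = np.nanstd(normalized, axis=0)    # bracketed standard deviations
```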
Figure 6. Hyperspectral camera system. (a) Visualization of a three-dimensional hyperspectral datacube with x and y as spatial dimensions and z as the spectral dimension. The recorded reflectance information of one pixel is visualized as an example. (b) TIVITA® Tissue camera system.
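In code, such a datacube is simply a three-dimensional array. A sketch, assuming the 500-1000 nm range shown in Figure 1 is sampled in 100 bands; the spatial resolution below is likewise an illustrative assumption.

```python
import numpy as np

# Datacube as in Figure 6a: two spatial axes (y, x) and one spectral axis (z).
height, width, n_bands = 480, 640, 100          # assumed resolution
cube = np.random.rand(height, width, n_bands)   # synthetic reflectance cube
wavelengths = np.linspace(500, 1000, n_bands)   # nm, matching Figure 1's axis

# The reflectance of a single pixel is a 1-D spectrum along the z axis:
pixel_spectrum = cube[240, 320, :]              # shape: (n_bands,)
```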