| Literature DB >> 35925509 |
Niclas Bockelmann, Daniel Schetelig, Denise Kesslau, Steffen Buschschlüter, Floris Ernst, Matteo Mario Bonsanto.
Abstract
PURPOSE: During brain tumor surgery, care must be taken to accurately differentiate between tumorous and healthy tissue, as inadvertent resection of functional brain areas can have severe consequences. Since visual assessment can be difficult during tissue resection, neurosurgeons have to rely on the mechanical perception of tissue, which is itself inherently challenging. A commonly used instrument for tumor resection is the ultrasonic aspirator, whose system behavior already depends on tissue properties. Using data recorded during tissue fragmentation, machine learning-based tissue differentiation with an ultrasonic aspirator is investigated for the first time.
Keywords: Convolutional neural network; Machine learning; Tactile sensor; Tissue differentiation; Ultrasonic aspirator
Year: 2022 PMID: 35925509 PMCID: PMC9463293 DOI: 10.1007/s11548-022-02713-0
Source DB: PubMed Journal: Int J Comput Assist Radiol Surg ISSN: 1861-6410 Impact factor: 3.421
Fig. 1 Conceptual representation of tissue differentiation using an ultrasonic aspirator. Electrical data that are recorded during tissue interaction with an ultrasonic aspirator are processed with machine learning algorithms to infer information about tissue properties
Fig. 2 Tissue model and data acquisition process. An example of the artificial tissue model is shown in (a); a schematic representation of the data acquisition process is shown in (b). Tissue ablation and data acquisition are performed with a CNC machine that traverses the ultrasonic aspirator in lanes across the artificial tissue model
Fig. 3 Distribution of measurement points over the two classes "very soft" and "soft". Each differently colored box indicates data from one recording
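Since Fig. 3 groups the measurement points by recording, an evaluation would plausibly keep whole recordings together when forming cross-validation folds. The snippet below is only an illustrative sketch of such a recording-wise split using scikit-learn's GroupKFold; the array sizes, number of recordings, and number of splits are placeholders, not values from the paper, and the paper's actual split procedure is not spelled out in this excerpt.

```python
# Illustrative sketch (not the authors' pipeline): recording-wise
# cross-validation folds, so samples from one recording never appear
# in both the training and the test split.
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.random.rand(120, 4)                    # placeholder feature matrix
y = np.repeat([0, 1], 60)                     # "very soft" vs. "soft" labels
recording_id = np.repeat(np.arange(12), 10)   # 12 hypothetical recordings

for train_idx, test_idx in GroupKFold(n_splits=4).split(X, y, groups=recording_id):
    # no recording contributes data to both sides of the split
    assert set(recording_id[train_idx]).isdisjoint(recording_id[test_idx])
```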
Table 3 Hyperparameters of the neural network architecture
| Layer | Layer type | Size | Nonlinearity |
|---|---|---|---|
| AE—hidden encoder | FC | 32 | ReLU |
| AE—latent space | FC | 9 | None |
| AE—hidden decoder | FC | 32 | ReLU |
| AE—output | FC | n | None |
| Clf—hidden | FC | 4 | ReLU |
| Clf—output | FC | 1 | None |
n number of input features, AE autoencoder, Clf classifier, FC fully connected
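To make Table 3 concrete, the following is a minimal PyTorch sketch of a fully connected autoencoder with an attached classifier matching the listed layer sizes. It assumes the classifier operates on the 9-dimensional latent code and that the single output neuron is a class logit; neither the framework nor these wiring details are stated in the table, so treat this as an illustration rather than the authors' implementation.

```python
# Sketch of the fully connected autoencoder + classifier from Table 3,
# assuming PyTorch and n input features.
import torch
import torch.nn as nn

class AEClassifier(nn.Module):
    def __init__(self, n: int):
        super().__init__()
        # AE: n -> 32 -> 9 (latent) -> 32 -> n
        self.encoder = nn.Sequential(
            nn.Linear(n, 32), nn.ReLU(),   # hidden encoder
            nn.Linear(32, 9),              # latent space, no nonlinearity
        )
        self.decoder = nn.Sequential(
            nn.Linear(9, 32), nn.ReLU(),   # hidden decoder
            nn.Linear(32, n),              # reconstruction, no nonlinearity
        )
        # Classifier on the latent code: 9 -> 4 -> 1 (logit)
        self.classifier = nn.Sequential(
            nn.Linear(9, 4), nn.ReLU(),
            nn.Linear(4, 1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

# Example: batch of 8 samples with n = 4 input features (n is set by the data)
model = AEClassifier(n=4)
recon, logit = model(torch.randn(8, 4))
```

During training, the decoder output would typically be compared with the input via a reconstruction loss and the classifier logit via a binary cross-entropy-with-logits loss; this pairing is a common choice, not something specified in the table.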
Table 4 Hyperparameters of the convolutional neural network architecture
| Layer | Layer type | # Kernels | Kernel size | Stride | Nonlinearity |
|---|---|---|---|---|---|
| Conv In | Conv | 16 | 7 | 1 | ReLU |
| ResBlock 1 | Conv | 32 | 7 | 2 | ReLU |
| | Conv | 32 | 1 | 2 | ReLU after addition |
| | Conv | 32 | 7 | 1 | ReLU after addition |
| ResBlock 2 | Conv | 32 | 7 | 2 | ReLU |
| | Conv | 32 | 1 | 2 | ReLU after addition |
| | Conv | 32 | 7 | 1 | ReLU after addition |
| ResBlock 3 | Conv | 64 | 7 | 2 | ReLU |
| | Conv | 64 | 1 | 2 | ReLU after addition |
| | Conv | 64 | 7 | 1 | ReLU after addition |
| Pooling | Adaptive Avg. Pool | – | – | – | – |
| Classifier | FC | 2 | – | – | None |
Every Conv layer is followed by a batch normalization. The outputs of the last two Conv layers within a ResBlock are added and subsequently processed by ReLU
Conv convolution, FC fully connected, ResBlock residual block
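One possible reading of Table 4 as a 1D residual network is sketched below in PyTorch: within each ResBlock the two 7-wide convolutions form the main path, the 1-wide strided convolution acts as a projection shortcut from the block input, and the two branch outputs are added before the ReLU, with batch normalization after every convolution. The padding values, the single-channel input, and the window length are assumptions for the sketch, not details from the paper.

```python
# Sketch (not the authors' code) of the 1D residual CNN from Table 4,
# assuming PyTorch and a single-channel time-series input.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """Two 7-wide convolutions on the main path, a 1-wide strided projection
    on the shortcut; the two branch outputs are added and passed through ReLU
    (every convolution is followed by batch normalization)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(out_ch), nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, kernel_size=7, stride=1, padding=3),
            nn.BatchNorm1d(out_ch),
        )
        self.shortcut = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=1, stride=2),
            nn.BatchNorm1d(out_ch),
        )

    def forward(self, x):
        return torch.relu(self.main(x) + self.shortcut(x))

class AspiratorCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.stem = nn.Sequential(                      # "Conv In" row
            nn.Conv1d(1, 16, kernel_size=7, stride=1, padding=3),
            nn.BatchNorm1d(16), nn.ReLU(),
        )
        self.blocks = nn.Sequential(
            ResBlock1d(16, 32), ResBlock1d(32, 32), ResBlock1d(32, 64),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(64, n_classes)              # two output logits

    def forward(self, x):                 # x: (batch, 1, time)
        h = self.pool(self.blocks(self.stem(x)))
        return self.fc(h.flatten(1))

model = AspiratorCNN()
logits = model(torch.randn(8, 1, 256))    # hypothetical window length of 256
```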
Fig. 4 Network architectures. Detailed information about the hyperparameters can be found in Tables 3 and 4 in the appendix. AE autoencoder, Conv convolution, Clf classifier, ResBlock residual block
Fig. 5 Preprocessing filters. The effect of preprocessing on the voltage signal during the contact phase is shown in (a); the frequency response of the preprocessing filters is shown in (b). Raw signal with noise in blue, band-stop-filtered signal and frequency response in orange, low-pass-filtered signal and frequency response in green
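As a rough illustration of the two preprocessing variants in Fig. 5, the snippet below applies a low-pass and a band-stop Butterworth filter to a toy signal with SciPy. The sampling rate, filter order, and cut-off/stop-band frequencies are placeholders; the paper's actual filter parameters are not given in this excerpt.

```python
# Sketch of the two preprocessing variants from Fig. 5, assuming SciPy;
# all frequencies below are hypothetical, not values from the paper.
import numpy as np
from scipy import signal

fs = 1000.0                      # hypothetical sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
raw = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)  # toy voltage signal

# Low-pass: keep only the slow component of the voltage signal
b_lp, a_lp = signal.butter(4, 20, btype="lowpass", fs=fs)
low_passed = signal.filtfilt(b_lp, a_lp, raw)

# Band-stop: suppress a narrow noise band, keep the rest of the spectrum
b_bs, a_bs = signal.butter(4, [45, 55], btype="bandstop", fs=fs)
band_stopped = signal.filtfilt(b_bs, a_bs, raw)
```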
Results of different classifiers using raw input without preprocessing
| Classifier | ACC | PPV | TPR | AUROC |
|---|---|---|---|---|
| RF | 0.688 (0.206) | 0.692 (0.203) | 0.700 (0.212) | 0.790 (0.220) |
| NN | 0.682 (0.181) | 0.687 (0.175) | 0.689 (0.182) | 0.735 (0.211) |
| CNN | 0.720 (0.122) | 0.735 (0.104) | 0.773 (0.097) | 0.778 (0.224) |
Metrics provided as mean (standard deviation)
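The reported metrics could be computed per cross-validation fold along the following lines, here with scikit-learn and assuming binary predictions thresholded at 0.5; whether the paper averages metrics per class or uses a different threshold is not stated in this excerpt, so the snippet is only a sketch. The table entries then correspond to the mean (standard deviation) of these per-fold scores.

```python
# Sketch of per-fold metric computation (ACC, PPV, TPR, AUROC) with
# scikit-learn; labels and scores below are toy placeholders.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, roc_auc_score)

def fold_metrics(y_true, y_score, threshold=0.5):
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "PPV": precision_score(y_true, y_pred),   # positive predictive value
        "TPR": recall_score(y_true, y_pred),      # true positive rate
        "AUROC": roc_auc_score(y_true, y_score),
    }

# Aggregate per-fold scores as mean (standard deviation)
folds = [fold_metrics([0, 1, 1, 0], [0.2, 0.8, 0.6, 0.4]),
         fold_metrics([0, 1, 0, 1], [0.3, 0.7, 0.6, 0.9])]
summary = {k: (np.mean([f[k] for f in folds]), np.std([f[k] for f in folds]))
           for k in folds[0]}
```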
Fig. 6 Training and validation loss of the NN and CNN on raw data for an exemplary fold
Results of the classifiers with different preprocessing filters
| Classifier | Filter | ACC | PPV | TPR | AUROC |
|---|---|---|---|---|---|
| RF | Low-pass | 0.788 (0.238) | 0.795 (0.229) | 0.805 (0.232) | 0.881 (0.170) |
| RF | Band-stop | 0.658 (0.201) | 0.664 (0.199) | 0.681 (0.214) | 0.748 (0.245) |
| NN | Low-pass | 0.900 (0.096) | 0.902 (0.092) | 0.918 (0.070) | 0.935 (0.090) |
| NN | Band-stop | 0.703 (0.199) | 0.710 (0.190) | 0.714 (0.197) | 0.754 (0.232) |
| CNN | Low-pass | 0.828 (0.192) | 0.838 (0.173) | 0.849 (0.159) | 0.838 (0.291) |
| CNN | Band-stop | 0.799 (0.146) | 0.801 (0.144) | 0.816 (0.147) | 0.844 (0.227) |
Metrics provided as mean (standard deviation)