| Literature DB >> 35495095 |
Mengjie Shi1, Tianrui Zhao1, Simeon J West2, Adrien E Desjardins3,4, Tom Vercauteren1, Wenfeng Xia1.
Abstract
Photoacoustic imaging has shown great potential for guiding minimally invasive procedures by accurately identifying critical tissue targets and invasive medical devices (such as metallic needles). The use of light-emitting diodes (LEDs) as excitation light sources accelerates its clinical translation owing to their high affordability and portability. However, needle visibility in LED-based photoacoustic imaging is compromised, primarily because of the low optical fluence of LEDs. In this work, we propose a deep learning framework based on U-Net to improve the visibility of clinical metallic needles with an LED-based photoacoustic and ultrasound imaging system. To address the difficulty of capturing ground truth for real data and the poor realism of purely simulated data, this framework included the generation of semi-synthetic training datasets combining simulated data representing features from the needles with in vivo measurements for the tissue background. The trained neural network was evaluated with needle insertions into blood-vessel-mimicking phantoms, pork joint tissue ex vivo, and measurements on human volunteers. Compared to conventional reconstruction, this deep learning-based framework substantially improved needle visibility in photoacoustic imaging in vivo by suppressing background noise and image artefacts, achieving 5.8- and 4.5-fold improvements in terms of signal-to-noise ratio and the modified Hausdorff distance, respectively. Thus, the proposed framework could help reduce complications during percutaneous needle insertions by accurate identification of clinical needles in photoacoustic imaging.
Keywords: Deep learning; Light emitting diodes; Minimally invasive procedures; Needle visibility; Photoacoustic imaging
Year: 2022 PMID: 35495095 PMCID: PMC9048160 DOI: 10.1016/j.pacs.2022.100351
Source DB: PubMed Journal: Photoacoustics ISSN: 2213-5979
Fig. 1 Flowchart illustration of the process of semi-synthetic training dataset generation. Top row: acquisition of sensor data from human finger vasculature in vivo as background. Bottom row: synthetic radio-frequency (RF) sensor data generation from a simulated needle.
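Fig. 1 describes building semi-synthetic training data by combining in vivo background sensor data with synthetic RF data from a simulated needle. The exact mixing scheme is not given in this record; a minimal sketch in the RF (pre-beamforming) domain, assuming simple amplitude normalisation and additive superposition, might look like this (the `needle_scale` parameter and the random amplitude range are illustrative assumptions):

```python
import numpy as np

def make_semisynthetic_frame(needle_rf, background_rf, needle_scale=1.0, rng=None):
    """Overlay a simulated needle RF frame onto an in vivo background RF frame.

    needle_rf, background_rf : arrays of shape (n_channels, n_samples)
    needle_scale             : hypothetical factor controlling the needle's relative
                               amplitude; the paper's exact mixing scheme is not
                               specified in this record.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Normalise both frames to comparable amplitude ranges
    needle = needle_rf / (np.abs(needle_rf).max() + 1e-12)
    background = background_rf / (np.abs(background_rf).max() + 1e-12)
    # Randomise the needle amplitude to diversify the training set (assumption)
    scale = needle_scale * rng.uniform(0.5, 1.5)
    return background + scale * needle
```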
Fig. 2 Architecture of the proposed network for improving needle visibility in photoacoustic imaging.
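The record does not specify the network's depth, channel counts, or loss function. As a hedged illustration only, a generic small U-Net of the kind the caption refers to could be sketched in PyTorch as follows:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with batch norm and ReLU, as in a standard U-Net stage
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    """Generic two-level encoder-decoder U-Net; the depth, channel counts, and
    training loss used in the paper are not given in this record."""
    def __init__(self, c_in=1, c_out=1, base=32):
        super().__init__()
        self.enc1 = conv_block(c_in, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, c_out, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections concatenate encoder features with upsampled decoder features
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```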
Fig. 3 Photoacoustic imaging with needle insertions into a blood-vessel-mimicking phantom, with conventional reconstruction, U-Net enhancement, and U-Net enhancement with post-processing.
Quantitative evaluation of the trained neural network using blood-vessel-mimicking phantoms. These performance metrics are expressed as means ± standard deviations from 20 measurements acquired from different phantoms and needle positions.
| Metrics | Conventional reconstruction | U-Net enhancement | U-Net enhancement with post-processing |
|---|---|---|---|
| SNR | 8.7 ± 2.3 | – | – |
| MHD | 63.2 ± 15.9 | – | – |
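SNR and the modified Hausdorff distance (MHD) are the two metrics reported in the tables and the abstract. Their exact definitions are not reproduced in this record; a minimal sketch assuming a common SNR definition (mean needle-region amplitude over the standard deviation of a background region) and the Dubuisson–Jain modified Hausdorff distance is shown below:

```python
import numpy as np

def snr(image, needle_mask, background_mask):
    """SNR as mean signal in the needle region over the standard deviation of a
    background region. This is one common definition; the paper's exact formula
    may differ."""
    signal = image[needle_mask].mean()
    noise = image[background_mask].std()
    return signal / noise

def modified_hausdorff(points_a, points_b):
    """Modified Hausdorff distance (Dubuisson & Jain, 1994) between two point sets,
    e.g. pixel coordinates of the detected needle and of a reference needle axis."""
    points_a = np.asarray(points_a, dtype=float)
    points_b = np.asarray(points_b, dtype=float)
    # Pairwise Euclidean distances between the two point sets
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1).mean()   # mean distance from A to its nearest neighbour in B
    d_ba = d.min(axis=0).mean()   # mean distance from B to its nearest neighbour in A
    return max(d_ab, d_ba)
```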
Fig. 4 Photoacoustic imaging with needle insertions into ex vivo tissue, with conventional reconstruction, U-Net enhancement, and U-Net enhancement with post-processing.
Quantitative evaluation of the trained neural network using ex vivo needle images. These performance metrics are expressed as means ± standard deviations from 20 measurements acquired from different spatial locations of the ex vivo tissue and needle positions.
| Metrics | Conventional reconstruction | U-Net enhancement | U-Net enhancement with post-processing |
|---|---|---|---|
| SNR | 91.3 ± 47.3 | – | – |
| MHD | 28.7 ± 16.3 | 6.3 ± 9.1 | – |
Fig. 5 Photoacoustic (PA) imaging with needle insertions into human fingers in vivo, with conventional reconstruction, U-Net enhancement, and the standard Hough transform. Signals from the skin surface are indicated by solid triangular arrows, and signals that may originate from digital arteries are indicated by hollow triangular arrows. The outcomes of U-Net enhancement and the standard Hough transform are denoted by green lines in the PA and ultrasound (US) overlays. (a)–(d) are from a reconstructed PA image sequence recorded in real time during needle insertion.
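The standard Hough transform used as a baseline in Fig. 5 detects the needle as the dominant straight line in the reconstructed PA image. A minimal sketch with OpenCV is given below; the edge-detection and accumulator thresholds are illustrative assumptions, not the authors' settings:

```python
import cv2
import numpy as np

def detect_needle_line(pa_image, canny_low=50, canny_high=150, hough_thresh=80):
    """Detect the most prominent straight line in a reconstructed PA image with the
    standard Hough transform; thresholds here are illustrative only."""
    # Rescale the PA image to 8-bit for edge detection
    img = cv2.normalize(pa_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(img, canny_low, canny_high)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, hough_thresh)
    if lines is None:
        return None  # no line found in this frame
    rho, theta = lines[0][0]  # strongest accumulator peak
    return rho, theta
```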
Quantitative evaluation of the trained neural network using in vivo needle images. These performance metrics are expressed as means ± standard deviations from 20 measurements acquired at different time points during the insertion.
| Metrics | CR | U-Net enhancement | U-Net enhancement with post-processing | SHT |
|---|---|---|---|---|
| SNR | – | – | – | – |
| MHD | 87.7 ± 24.4 | – | – | – |
CR: conventional reconstruction. SHT: standard Hough transform.
Quantitative performance of the proposed model on three in vivo PA video sequences.
| | Sequence 1 | Sequence 2 | Sequence 3 |
|---|---|---|---|
| Frames with needle | 92/128 | 53/128 | 99/128 |
| Frames without needle | 36/128 | 75/128 | 29/128 |
| Needle missed | 0 | 5 | 3 |
| True positives | 92 | 48 | 96 |
| False positives | 0 | 1 | 1 |
| True positive rate (%) | 100 | 90.6 | 97.0 |
| False positive rate (%) | 0 | 1.3 | 3.4 |
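The true positive and false positive rates in the table above follow directly from the per-sequence frame counts. A small sketch of the arithmetic, using Sequence 2 as a worked example:

```python
def rates(frames_with_needle, frames_without_needle, missed, false_positives):
    """Derive detection rates from the per-sequence counts reported in the table."""
    true_positives = frames_with_needle - missed
    tpr = 100.0 * true_positives / frames_with_needle
    fpr = 100.0 * false_positives / frames_without_needle
    return tpr, fpr

# Sequence 2: 53 needle frames, 75 needle-free frames, 5 missed, 1 false positive
print(rates(53, 75, 5, 1))  # -> approximately (90.6, 1.3), matching the table
```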