Literature DB >> 34584840

Modeling combined ultrasound and photoacoustic imaging: Simulations aiding device development and artificial intelligence.

Sumit Agrawal1, Thaarakh Suresh1,2, Ankit Garikipati3, Ajay Dangi1, Sri-Rajasekhar Kothapalli1,4.   

Abstract

Combined ultrasound and photoacoustic (USPA) imaging has attracted interest for several pre-clinical and clinical applications due to its ability to simultaneously display structural, functional, and molecular information of deep biological tissue in real time. However, the depth- and wavelength-dependent optical attenuation and the unknown optical and acoustic heterogeneities limit USPA imaging performance in deep tissue regions. Novel instrumentation, image reconstruction, and artificial intelligence (AI) methods are currently being investigated to overcome these limitations and improve USPA image quality. Effective implementation of these approaches requires a reliable USPA simulation tool capable of generating US-based anatomical and PA-based molecular contrasts of deep biological tissue. Here, we developed a hybrid USPA simulation platform by integrating finite element models of light (NIRFast) and ultrasound (k-Wave) propagation for co-simulation of B-mode US and PA images. The platform allows optimization of different design parameters for USPA devices, such as the aperture size and frequency of both light and ultrasound detector arrays. For designing tissue-realistic digital phantoms, a dictionary-based function has been added to k-Wave to generate various levels of ultrasound speckle contrast. The feasibility of modeling US imaging combined with optical fluence dependent multispectral PA imaging is demonstrated using homogeneous as well as heterogeneous tissue phantoms mimicking human organs (e.g., prostate and finger). In addition, we also demonstrate the potential of the simulation platform to generate large scale application-specific training and test datasets for AI enhanced USPA imaging. The complete USPA simulation codes together with the supplementary user guides have been posted to an open-source repository (https://github.com/KothapalliLabPSU/US-PA_simulation_codes).
© 2021 Published by Elsevier GmbH.


Keywords:  Artificial intelligence; Deep learning; NIRFast; Photoacoustic imaging; Simulations; Spectral unmixing; Ultrasound imaging; k-Wave

Year:  2021        PMID: 34584840      PMCID: PMC8452892          DOI: 10.1016/j.pacs.2021.100304

Source DB:  PubMed          Journal:  Photoacoustics        ISSN: 2213-5979


Introduction

Photoacoustic imaging (PAI) has gained significant attention from the biomedical research community by displaying rich molecular optical absorption contrast images of deep tissue with higher spatial resolution than is possible with pure optical imaging technologies [1]. In PAI, light-absorbing tissue chromophores (e.g., hemoglobin and melanin) undergo thermoelastic expansion and generate broadband ultrasound waves, i.e., photoacoustic waves. These waves propagate through the tissue and are detected by ultrasound (US) transducers placed outside the body to form 3-D photoacoustic (PA) images with rich molecular contrast. In PAI, the spatial resolution and the penetration depth in the photon diffusion regime (> mm) are scalable with the ultrasound frequency, and the temporal resolution is mostly limited by the laser pulse-repetition frequency. Multi-wavelength PAI further provides functional (e.g., oxygen saturation) and molecular-specific (e.g., melanin) information important for diagnosing diseased tissue [2,3]. The ease of integrating PAI capabilities into clinical US systems has further allowed the demonstration of dual-modality USPA devices capable of simultaneously displaying, in real time, anatomical and molecular information in pre-clinical as well as clinical research [4,5]. In recent years, many clinical applications of handheld USPA devices have emerged [6,7], such as imaging of the human breast [[8], [9], [10]], prostate [11], ovaries [12], muscular dystrophy [13], melanoma metastasis [14], inflammatory arthritis [15], thyroid [16], Crohn's disease [17], and human vasculature using portable LED arrays [18]. However, USPA imaging commonly suffers from depth- and wavelength-dependent optical and acoustic attenuation, which affects the visibility of deep tissue targets [19,20]. Moreover, the unknown optical and acoustic heterogeneities complicate the estimation of optical fluence needed for quantitative PAI [[21], [22], [23]].
These heterogeneities also introduce reflection artifacts [24,25]. Together, these factors limit USPA imaging performance, especially for deep tissue clinical applications. To overcome these limitations, novel USPA imaging hardware [[26], [27], [28], [29]], image reconstruction [30,31], and deep learning methods [[32], [33], [34], [35]] are actively being investigated. These approaches would benefit from reliable simulation of USPA device performance for a given clinical application through realistic modeling of the optical and acoustic properties of the biological tissue of interest and optimization of the optical and acoustic design parameters of the device. For example, such model-based simulations are needed to generate large datasets for training AI models to overcome problems (e.g., artifact identification and removal) frequently encountered in USPA imaging. Computational models can help reduce the total cost and time involved in device development and validation through phantom, animal, and clinical studies. The credibility of such simulation models needs to be assessed through parametric verification and validation studies against experiments. Towards this goal, several PAI simulation approaches have been proposed to optimize PAI device geometries and quantitative image reconstruction [[36], [37], [38], [39], [40], [41], [42]]. These simulations have predominantly been reported for photoacoustic computed tomography geometries that involve rotating single-element transducers or sparsely distributed ultrasound transducer elements [36,37]. However, despite rapid progress in the development and clinical translation of dual-modality USPA devices, to the best of our knowledge no simulation studies have been reported for modeling dual-modality B-mode US and PA imaging of a realistic heterogeneous tissue medium. Recently, PAI simulations using the k-Wave toolbox have become popular [[37], [38], [39], [40], [41], [42]].
Since k-Wave models only acoustic wave propagation, the majority of these PAI simulations assumed uniform optical fluence across tissue depth [33,42], while some studies employed light transport models such as NIRFast [38] or Monte Carlo [39] to estimate the light fluence inside a homogeneous tissue medium, convert the fluence into an initial pressure distribution, and then perform photoacoustic reconstruction using k-Wave. Mastanduno et al. integrated the k-Wave and NIRFast toolboxes for quantitative photoacoustic computed tomography reconstruction using a circular ultrasound detector geometry [38]. Nima et al. [39] employed Monte Carlo simulations to estimate the optical fluence in homogeneous tissue phantoms and reconstructed B-mode PA images using an ultrasound linear array geometry in the k-Wave toolbox. Jacques [40] presented 3D Monte Carlo simulations of light transport and a MATLAB-based acoustic toolbox for predicting the PA signal received by a single US detector. Although Monte Carlo simulations provide more accurate fluence estimation across all tissue depths, they are computationally expensive and slow [43]. Current PAI simulations lack the following important features of real-time USPA imaging: 1) B-mode US images displaying structural information of the tissue phantom; 2) depth- and wavelength-dependent multispectral PAI to delineate the molecular information of the tissue; and 3) modeling of tissue optical and acoustic heterogeneity to overcome artifacts and improve quantification accuracy. Overcoming these challenges, we recently integrated a finite-element model (NIRFast toolbox [44]) that uses the optical diffusion approximation to estimate the optical fluence in a deep tissue medium with the k-Wave toolbox for acoustic modeling, and demonstrated the feasibility of dual-modality USPA imaging simulations [45] in homogeneous phantoms.
Expanding on this conference proceedings paper, here we present the feasibility of our dual-modality USPA simulation platform to provide B-mode US-based anatomical images and PA-based molecular contrast images in a realistic heterogeneous tissue background. To assess the credibility of our simulation models, we performed parametric verification and validation studies against experiments. We demonstrate the capabilities of our platform in modeling and imaging different complex phantoms: 1) USPA imaging of heterogeneous prostate tissue, 2) USPA imaging of a human finger cross-section, and 3) multispectral PA imaging of overlapping absorbing targets. To accomplish this, we developed and added a new dictionary-based function to the k-Wave toolbox for generating tissue-realistic acoustically heterogeneous phantoms, helpful for simulating the required ultrasound speckle contrast levels for a given mean acoustic impedance of a tissue region. In addition, we demonstrate the applicability of the USPA simulation platform for generating massive datasets to aid data-driven AI algorithms in 1) locating deep tissue PAI targets in a highly optically scattering background, 2) photoacoustic spectral unmixing of chromophores using multispectral PA simulation data, and 3) PA reflection artifact reduction. The rest of the paper is organized as follows: Section II presents the simulation geometry, including the ultrasound and optical properties of the tissue phantoms, and the workflow of the hybrid USPA imaging simulations. Section III presents parametric verification and validation study results and discussions. Section IV concludes the work with insights into its future scope.

Methodology

In this section, we start with a description of the simulation geometry and the different device configurations. We then describe the optical properties of the medium and the light-absorbing (target) objects used for the simulations presented in this work. A schematic representation and workflow of our simulation platform for generating dual-modality US and PA images is also presented. The bulk optical properties of the medium and the associated photon transport mean free path determine the fraction of photons (optical fluence) present at each location; this fluence is computed using an open-source software package, the NIRFast toolbox [44]. The fluence map is then converted to a pressure distribution, which is propagated and detected using an open-source MATLAB-based software platform, the k-Wave toolbox [41]. In contrast, the simulation of B-mode ultrasound image formation is rather straightforward using the k-Wave toolbox alone. Here, ultrasound imaging is performed using the same ultrasound transducer array that is used for photoacoustic detection.

Simulation geometry

Here we introduce the phantom geometry and the key design parameters for the light source and ultrasound transducer arrays employed in the USPA imaging device modeled in our simulations. Fig. 1 shows the three device configurations used for studying the effect of the aperture size and frequency of the light source as well as the ultrasound transducer on USPA imaging performance. The phantom geometry is designed as a rectangular two-dimensional (2D) grid of 100 mm width and 60 mm depth. The grid spacing is 0.2 mm and the total number of nodes in the grid is 150,801. The linear ultrasound transducer and light source arrays are both kept at the bottom of the tissue grid, centered at the (x, z) = (0, 0) origin of the grid. In all simulations, the ultrasound transducer array is centered at 1 MHz frequency with a fractional bandwidth of 80 %, unless otherwise specified. Fig. 1(a) shows the first USPA device configuration consisting of a 64-element ultrasound transducer array spread from -6.4 mm to 6.4 mm (total of 12.8 mm), and a 40 mm long light source from -20 mm to 20 mm, both along the x-direction. Fig. 1(b) shows the second device configuration consisting of a 128-element transducer array spread from -12.8 mm to 12.8 mm (total of 25.6 mm), and a 40 mm long light source. Fig. 1(c) shows the third device with a 20 mm long light source and a 128-element transducer array.
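The grid and array geometry above can be verified with a short numeric sketch (Python here for illustration; the released codes are MATLAB, and all variable names below are ours):

```python
import numpy as np

# 2D phantom grid: 100 mm wide, 60 mm deep, 0.2 mm spacing
width_mm, depth_mm, dx_mm = 100.0, 60.0, 0.2

# Number of nodes along each axis (inclusive of both edges)
nx = int(round(width_mm / dx_mm)) + 1   # 501
nz = int(round(depth_mm / dx_mm)) + 1   # 301
n_nodes = nx * nz                       # 150,801 nodes, as stated in the text

# Element centers of a 64-element array with 0.2 mm pitch, centered at
# x = 0: aperture edges land at -6.4 mm and +6.4 mm (12.8 mm total)
pitch_mm, n_elem = 0.2, 64
x_elem = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch_mm
```

Doubling the element count to 128 doubles the aperture to 25.6 mm, matching the second and third configurations.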
Fig. 1

Phantom geometries for three USPA device configurations used in parametric simulation studies with varying ultrasound transducer array and light apertures. Transducer array: yellow; light source: orange; scale: mm.


Optical properties of tissue-mimicking phantom

The optical properties of the tissue phantoms determine the amount of light absorbed or scattered at each grid position. In our simulations, we used both homogeneous and heterogeneous backgrounds mimicking human prostate tissue. Light-absorbing molecules such as oxy-hemoglobin (HbO2), deoxy-hemoglobin (Hb), and the exogenous contrast agent indocyanine green (ICG) are used as molecular targets placed in the tissue background. In Table 1, we list the absorption coefficients of the three molecular targets and the tissue background, taken from [46] and [47]. The reduced scattering coefficients (μ′s) for the tissue background and the molecular targets (HbO2, Hb, and ICG) are calculated using Eq. 1:

μ′s(λ) = a (λ / 800 nm)^(−b)     (1)

where a (mm−1) is the value of the reduced scattering coefficient at 800 nm, λ is the wavelength at which the scattering coefficient is being calculated, and b is the scattering power [46].
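The scattering power law of Eq. 1 can be sketched as follows (Python for illustration; the values of a and b are illustrative, not necessarily those used in the paper):

```python
import numpy as np

# Eq. 1: reduced scattering power law, with `a` defined at 800 nm.
# a = 1.03 mm^-1 and b = 1.5 are illustrative values.
def reduced_scattering(wavelength_nm, a=1.03, b=1.5):
    """Return mu_s' (mm^-1) at `wavelength_nm`."""
    return a * (wavelength_nm / 800.0) ** (-b)

mu_s_800 = reduced_scattering(800.0)  # equals `a` by construction
```

Because the exponent is negative, μ′s decreases monotonically with wavelength, consistent with the background values in Table 1.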
Table 1

Absorption and reduced scattering coefficients (in mm−1) of molecular targets (HbO2, Hb, and ICG), background tissue used in simulations.

λ (nm)   HbO2: μa / μ′s       Hb: μa / μ′s        ICG: μa / μ′s       Background: μa / μ′s
750      0.2600 / 1.04E-5     0.7800 / 1.04E-5    0.0982 / 1.04E-5    0.0013 / 1.1368
775      0.3400 / 1.02E-5     0.6200 / 1.02E-5    0.1654 / 1.02E-5    0.0011 / 1.0805
800      0.4250 / 1.00E-5     0.4200 / 1.00E-5    0.1257 / 1.00E-5    0.0011 / 1.0287
825      0.4750 / 9.79E-5     0.3760 / 9.79E-5    0.0258 / 9.79E-5    0.0013 / 0.9808
850      0.5300 / 9.60E-5     0.3760 / 9.60E-5    0.0067 / 9.60E-5    0.0013 / 0.9360
875      0.5930 / 9.42E-5     0.3760 / 9.42E-5    0.0003 / 9.42E-5    0.0018 / 0.8953
900      0.6250 / 9.25E-5     0.4210 / 9.25E-5    0.0000 / 9.25E-5    0.0041 / 0.8571

Fluence calculation

The first step in the generation of a PA image involves the forward propagation of light into the tissue phantom and the calculation of the optical fluence. We used an open-source software package, NIRFast [44], which solves the light diffusion equation and calculates the light fluence, ϕ, at each grid position of the 2D phantom. Fig. 2 shows that the optical fluence reaches deeper tissue depths for a light aperture size of 40 mm (Fig. 2a) as compared to 20 mm (Fig. 2b), inside the homogeneous tissue background described in Table 1 at 800 nm wavelength. Fig. 2c plots the wavelength-dependent optical fluence attenuation as a function of depth inside the homogeneous tissue background, calculated for a 40 mm light aperture size. These plots were used for compensating the wavelength-dependent fluence attenuation as a function of depth in all our photoacoustic imaging simulations [48].
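As a rough one-dimensional sanity check of the depth-dependent fluence decay that NIRFast computes in full 2D, the effective attenuation of the diffusion approximation can be sketched as follows (Python; a broad beam and homogeneous medium are assumed, and the function names are ours):

```python
import numpy as np

# Diffusion-approximation effective attenuation coefficient (mm^-1)
def effective_attenuation(mu_a, mu_s_prime):
    return np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

# Broad-beam 1D fluence falloff: phi0 * exp(-mu_eff * z)
def fluence_1d(depth_mm, mu_a, mu_s_prime, phi0=1.0):
    return phi0 * np.exp(-effective_attenuation(mu_a, mu_s_prime) * depth_mm)

# Background tissue at 800 nm (Table 1): mu_a = 0.0011, mu_s' = 1.0287 mm^-1
phi_25mm = fluence_1d(25.0, 0.0011, 1.0287)
```

This captures the qualitative trend of Fig. 2c (exponential decay with depth), though the 2D finite-element solve additionally accounts for the finite light aperture.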
Fig. 2

Optical fluence calculations inside the tissue medium. (a, b) Fluence maps generated by a 40 mm, and a 20 mm aperture size light source, respectively. (c) Wavelength dependent attenuation of optical fluence as a function of depth. Scale: mm, colorbar: dB.


Flowchart for US and PA image generation

This section presents the systematic workflow (Fig. 3) of the USPA simulation platform for generating US and PA images. We designed a combined acoustic and optical phantom consisting of nine circular targets of radius 0.25 mm, shown in Fig. 3a. For the US phantom, the acoustic impedances are set to 1.5 and 1.7 MRayl for the background and the circular targets, respectively [49]. For the optical phantom, six of the nine targets (2, 3, 5, 6, 8, and 9, marked with black circles) are defined as light-absorbing vascular targets having a higher absorption coefficient of 0.425 mm−1 (typical blood absorption at 800 nm), compared to the other three targets (1, 4, and 7, marked with orange circles) having an absorption coefficient of 0.0011 mm−1 matching the tissue background [46].
Fig. 3

(a-f) Flowchart presenting key steps involved in the USPA simulations. US, PA and coregistered US + PA images of a homogeneous phantom consisting of nine-circular targets obtained from (g-i) simulations, and (j-l) experiments. Scale: mm, colorbar: dB.

For the US image generation, the forward and backward simulations involve the propagation of ultrasound waves from the transducer elements into the tissue medium and back, respectively, using the open-source ultrasound simulation platform k-Wave toolbox [41] (Fig. 3(d, e)). The received time-dependent pressure data at each transducer location is then fed to our custom-developed US beamforming toolbox (Fig. 3f), which uses the standard delay-and-sum approach [11] with fixed transmit and dynamic receive focusing, to reconstruct B-mode US images, as shown in Fig. 3g. The first simulation step for PA image generation involves the forward propagation of light in the discretized 2D optical phantom grid with pre-defined optical properties using NIRFast [44] (Fig. 3b). It uses a finite element method to solve the radiative transport equation under the diffusion approximation and calculates the light fluence, ϕ(r,λ), at each grid position. This generates the 2D optical fluence map, as shown in Fig. 3c, for a given wavelength (e.g., 800 nm in this case). The fluence map is then converted to the initial pressure map, P, using Eq. 2:

P(r,λ) = Γ μa(r,λ) ϕ(r,λ)     (2)

where Γ is the Grüneisen parameter, a measure of the conversion efficiency from light absorption to pressure. A fixed homogeneous value of 0.2 for Γ was assumed in all our studies [50]. ϕ(r,λ) and μa(r,λ) are the optical fluence and the absorption coefficient at position r and wavelength λ [51]. This equation provides the value of the initial pressure at each grid location of the tissue phantom.
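The fluence-to-pressure conversion of Eq. 2 is an element-wise product over the grid and can be sketched as follows (Python for illustration; array shapes and names are ours):

```python
import numpy as np

# Eq. 2: P(r) = Gamma * mu_a(r) * phi(r), element-wise over the grid
def initial_pressure(fluence, mu_a, gamma=0.2):
    return gamma * mu_a * fluence

fluence = np.ones((301, 501))         # uniform unit fluence, for illustration
mu_a = np.full((301, 501), 0.0011)    # background absorption at 800 nm
mu_a[150, 250] = 0.425                # one blood-like target pixel
p0 = initial_pressure(fluence, mu_a)  # initial pressure map fed to k-Wave
```

In practice the fluence array comes from the NIRFast solve, so the target pixel's pressure is additionally weighted by the depth-dependent fluence at its location.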
The detection path for the PA simulation involves the propagation of the generated photoacoustic pressure waves from their respective grid positions inside the tissue phantom to the positions of all ultrasound transducer elements using the k-Wave toolbox [41] (Fig. 3e). This solves the following acoustic wave equation [51]:

∇²p(r,t) − (1/v²) ∂²p(r,t)/∂t² = −(β/C) ∂H(r,t)/∂t

where v is the acoustic speed in the medium, β is the isobaric volume expansion coefficient, C is the isobaric specific heat, and H(r,t) is the amount of thermal energy converted to pressure at position r and time t. The pressure from each grid position is measured as a function of time at each transducer element location in the form of time-dependent sensor voltage data, also termed radio frequency (RF) channel data. We then apply our custom-developed receive-only delay-and-sum beamforming toolbox (Fig. 3f) to the RF channel data [11] to reconstruct the B-mode PA image, as shown in Fig. 3h. Due to the difference in acoustic impedance between each circular target and the background, all nine targets generate ultrasound echoes and are imaged in the US mode (Fig. 3g), whereas only the six light-absorbing vascular targets (represented as black dots, simulating blood vessels) are detected in the PA image (Fig. 3h). In addition to the US and PA images, the simulation platform also displays the co-registered US + PA image (Fig. 3i). In our simulation studies, the speed of sound in the homogeneous tissue background was assumed to be 1480 m/s and the frequency (f) dependent acoustic attenuation inside the tissue was taken as 0.75 f^1.5 dB/cm/MHz^1.5 [[49], [50], [51]]. In k-Wave, the Courant-Friedrichs-Lewy (CFL) number, defined as the ratio of the distance a wave travels in one time step to the grid spacing [41], was set to the default value of 0.3. A 20-voxel-thick perfectly matched layer surrounded the tissue grid.
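The receive-only delay-and-sum reconstruction described above can be sketched as follows (Python for illustration; the paper's beamforming toolbox is MATLAB-based, and this toy version omits apodization, interpolation, and envelope detection):

```python
import numpy as np

# Receive-only delay-and-sum for PA channel data (one-way time of flight).
# rf: (n_samples, n_elem) array; returns a (len(z), len(x)) image.
def das_beamform(rf, x_elem_mm, fs_hz, c_mps, x_px_mm, z_px_mm):
    n_samples, n_elem = rf.shape
    image = np.zeros((len(z_px_mm), len(x_px_mm)))
    for iz, z in enumerate(z_px_mm):
        for ix, x in enumerate(x_px_mm):
            # distance (and hence delay) from the pixel to each element
            dist_m = np.sqrt((x_elem_mm - x) ** 2 + z ** 2) * 1e-3
            idx = np.round(dist_m / c_mps * fs_hz).astype(int)
            valid = idx < n_samples
            image[iz, ix] = rf[idx[valid], np.arange(n_elem)[valid]].sum()
    return image

# Toy check: an echo from a point ~10 mm from a 3-element array at fs = 20 MHz
rf = np.zeros((300, 3))
rf[135, :] = 1.0  # 10 mm / 1480 m/s * 20 MHz ~ sample 135
img = das_beamform(rf, np.array([-0.2, 0.0, 0.2]), 20e6, 1480.0,
                   np.array([0.0]), np.array([9.0, 10.0, 11.0]))
```

The delayed channel samples add coherently only at the pixel matching the true source depth, which is the essence of dynamic receive focusing.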
In PAI simulations, the fluence map generation (using NIRFast) over the defined 200 μm resolution grid took ∼2 min, and the pressure propagation to gather the raw photoacoustic signals (using k-Wave) took ∼30 s. On the other hand, US simulations, including both forward excitation and ultrasonic detection over the same 200 μm grid (using k-Wave), took ∼2 min per A-line, and a total of 101 scan lines were used to acquire one B-mode US image. All computations were performed on a Xeon processor with 128 GB RAM and a 16 GB NVIDIA Titan XP GPU. To substantiate the imaging capabilities of our simulations, we performed experiments on a tissue-mimicking homogeneous intralipid phantom using a custom-designed USPA device. The device integrates a linear 64-element CMUT array with a fiber optic light guide connected to a tunable OPO laser (Opotek Inc., 10-Hz repetition rate, 5-ns pulse width, 680- to 950-nm wavelength range). Similar to the simulated phantom, the experimental phantom consisted of nine fishing-wire targets (mimicking the simulated circular targets), of which seven were painted black to mimic the light-absorbing vascular targets. The USPA data were acquired and reconstructed using a PC-based multi-channel US data acquisition system (Verasonics, Inc.) [11]. Beamformed US, PA (at 800 nm), and coregistered US + PA images are shown in Fig. 3(j–l). These results confirm that our simulation platform is capable of closely modeling USPA devices under realistic experimental conditions. The PA simulation for a single wavelength described in the flowchart can be extended to generate multi-wavelength PA images for obtaining spectrally unmixed images of different molecular targets, as demonstrated in the subsequent sections.

Results and discussions

This section presents validation experiments conducted to further evaluate the performance of the hybrid USPA simulation platform. Sections III-A and III-B present parametric studies on the effects of the aperture sizes of the ultrasound transducer and the light source, and of the center frequency of the ultrasound transducer, on USPA imaging performance. Section III-C presents the multispectral PA imaging capabilities of the platform for delineating different molecular targets. Section III-D describes a dictionary-based function built on k-Wave for generating tissue-realistic acoustic phantoms. Section III-E demonstrates modeling of different complex phantoms for deep tissue imaging applications. Section III-F presents the feasibility of the simulation platform in aiding deep-learning-enhanced PAI.

Effect of ultrasound transducer and light array size

With a fixed 40 mm aperture size of the light source, we studied the effect of the US transducer array size by employing i) a 64-element linear US array of 12.8 mm length (Fig. 1a), and ii) a 128-element linear US array of 25.6 mm length (Fig. 1b), both with 0.2 mm pitch. We used the same tissue phantom with nine circular targets as described in Section II-D. Fig. 4(a–c) and Fig. 4(d–f) show the US, PA, and coregistered US + PA simulated imaging results for the 64-element and 128-element transducers, respectively. The quantified spatial resolutions of targets #2 and #8, provided in Table 2, demonstrate that the lateral resolution and the visibility of deeper targets (depth of imaging) improve with the aperture size of the US transducer.
Fig. 4

Parametric studies validating the effect of the US transducer and the light source apertures on the USPA imaging performance. Simulation results with (a-c) 40 mm light and 64-element US, (d-f) 40 mm light and 128-element US, and (g-i) 20 mm light and 128-element US array; scale: mm, colorbar: dB.

Table 2

Spatial resolution for homogeneous case studies.

                           Config.1 (Fig. 1a)   Config.2 (Fig. 1b)   Config.3 (Fig. 1c)
Lateral resolution (mm)
  US, target-2             1.300                1.200                1.200
  US, target-8             1.780                1.720                1.720
  PA, target-2             3.520                1.401                1.440
  PA, target-8             3.750                1.450                1.625
Axial resolution (mm)
  US, target-2             0.570                0.570                0.570
  US, target-8             0.570                0.570                0.570
  PA, target-2             0.295                0.312                0.320
  PA, target-8             0.267                0.241                0.241
In PAI, the strength of the PA signal is directly proportional to the local optical fluence, which in turn increases with the light aperture size. Using the USPA simulation platform, we validated the effect of the light aperture on PAI performance. For a fixed 128-element ultrasound transducer array, when we reduced the aperture size of the light illumination from 40 mm (Fig. 1b) to 20 mm (Fig. 1c), the visibility of deeper photoacoustic targets (close to 5 cm) was reduced (Fig. 4e and Fig. 4h) due to the reduced optical fluence. Table 3 presents the peak PA signal at the target region, the mean noise surrounding the target region, and the calculated signal-to-noise ratio (SNR) values for shallow (target-2) and deep (target-9) targets. The signal strength for target-9 decreased by about 15 %, from 0.9022 to 0.7756, when changing the light aperture size from 40 mm to 20 mm. Further, using a smaller US transducer aperture (Config.1), the signal strength for target-9 dropped to 0.5040, confirming that the visibility of deep tissue targets is affected by the size of both the light aperture and the US transducer. However, no significant change in the spatial resolution was observed with the change in the size of the light source (Table 2), when comparing the configurations of Fig. 1b and c.
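The SNR values in Table 3 follow directly from the peak signal and mean noise; a minimal sketch (Python; the exact signal and noise region definitions are ours):

```python
import numpy as np

# SNR in dB from peak target amplitude and mean background noise amplitude
def snr_db(peak_signal, mean_noise):
    return 20.0 * np.log10(peak_signal / mean_noise)

# e.g. target-2, Config.1: 0.9322 (signal) vs. 0.0231 (noise) -> ~32.1 dB
snr_t2_c1 = snr_db(0.9322, 0.0231)
```

The result agrees with the corresponding Table 3 entry to within rounding of the reported signal and noise values.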
Table 3

Signal, noise and signal to noise ratio (SNR) for homogeneous case studies.

                Config.1 (Fig. 1a)   Config.2 (Fig. 1b)   Config.3 (Fig. 1c)
Signal (a.u.)
  target-2      0.9322               1.8261               1.8070
  target-9      0.5040               0.9022               0.7756
Noise (a.u.)
  target-2      0.0231               0.0465               0.0404
  target-9      0.0771               0.1183               0.1033
SNR (dB)
  target-2      32.1282              31.8805              33.0199
  target-9      16.3038              17.6467              17.5144

Effect of center frequency of ultrasound transducer

In this subsection, we study the effect of changing the center frequency of the ultrasound transducer array on PAI performance using the tissue phantom with nine circular targets described in Section II-D. The aperture sizes of the light source and the transducer array are fixed, and the center frequency of the transducer is varied among 1 MHz, 2 MHz, and 5 MHz. The calculated lateral and axial resolutions (LR and AR in Fig. 5a-c) for the 2nd target, using half the distance between the 90 % and 10 % points of the maximum PA amplitude (in the line spread functions shown in Fig. 5d and e), show that the spatial resolution improves with increasing center frequency of the US transducer array. These results also confirm that the visibility of deeper targets (5 cm) becomes weaker with increasing transducer center frequency, due to the depth- and frequency-dependent acoustic attenuation.
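The 90 %-to-10 % resolution metric described above can be sketched as follows (Python for illustration; a Gaussian line spread function stands in for the profiles of Fig. 5d-e, and the function name is ours):

```python
import numpy as np

# Half the distance between the 90% and 10% points on the falling side
# of a normalized line spread function (LSF)
def edge_resolution(x_mm, profile):
    p = profile / profile.max()
    peak = int(np.argmax(p))
    # first samples at/below 90% and 10% beyond the peak
    x90 = x_mm[peak + int(np.argmax(p[peak:] <= 0.9))]
    x10 = x_mm[peak + int(np.argmax(p[peak:] <= 0.1))]
    return 0.5 * abs(x10 - x90)

# Gaussian LSFs: a narrower LSF (higher frequency) gives a finer resolution
x = np.linspace(-5.0, 5.0, 2001)
res_narrow = edge_resolution(x, np.exp(-x**2 / (2 * 0.5**2)))
res_wide = edge_resolution(x, np.exp(-x**2 / (2 * 1.0**2)))
```

A narrower LSF, as produced by a higher transducer center frequency, yields a smaller (better) resolution value, matching the trend in Fig. 5.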
Fig. 5

The effect of change in the center frequency of the US transducer on PAI performance. (a-c) PA images of the phantom consisting of nine-circular targets, obtained with the transducers centered at 1 MHz, 2 MHz, and 5 MHz frequency, respectively. (d, e) PA amplitudes of the 2nd wire target in images (a-c), plotted in lateral and axial directions, respectively. Lateral resolution (LR) and axial resolution (AR) in mm. Scale: mm, colorbar: dB.


Multispectral photoacoustic imaging and spectral unmixing

In this subsection, we present multispectral photoacoustic imaging and spectral unmixing results for the homogeneous tissue phantom embedded with HbO2, ICG, and Hb molecules at two different depths. Since the optical absorption of these targets is wavelength dependent, the generated pressure and the resulting photoacoustic image contrast change with wavelength. Fig. 6a shows the positions of the six molecular targets: HbO2 (x = -15 mm), ICG (x = 0 mm), and Hb (x = 15 mm), each of radius 0.25 mm, placed at 25 mm and 40 mm depth inside the homogeneous tissue background. PA images at seven wavelengths (750 nm–900 nm, with an interval of 25 nm) were acquired using a 128-element US transducer and a 40 mm light source. Five representative PA images are shown in Fig. 6(b–f). The PA intensity plots, generated by quantifying the PA intensity of these targets in the multi-wavelength PA images (Fig. 6g), closely match the respective standard spectral plots of the three molecular targets (Fig. 6h). For example, the standard crossover of the Hb and HbO2 spectra around 800 nm, and the peak response of ICG around 780 nm, can be observed in these plots. We further applied a linear spectral unmixing algorithm that computes a non-negative solution to a linear least squares problem [52] to the simulated multi-wavelength PAI data and obtained the unmixed images of HbO2, Hb, and ICG (Fig. 6(i–k)). These results demonstrate the feasibility of the hybrid USPA simulation platform to accurately simulate spectroscopic PA imaging of tissue chromophores.
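The linear spectral unmixing step can be sketched as follows. The paper solves a non-negative least-squares problem; as a numpy-only stand-in, this sketch uses an ordinary least-squares solve with negative coefficients clipped, with μa values from Table 1 as the spectra (a simplification: unit fluence is assumed, so PA amplitudes track μa directly):

```python
import numpy as np

# Spectral matrix: rows = wavelengths (750/800/850 nm), columns = chromophores
A = np.array([  # columns: HbO2, Hb, ICG (mu_a from Table 1, mm^-1)
    [0.2600, 0.7800, 0.0982],   # 750 nm
    [0.4250, 0.4200, 0.1257],   # 800 nm
    [0.5300, 0.3760, 0.0067],   # 850 nm
])

# Multi-wavelength PA amplitudes of a pure-HbO2 pixel (unit fluence assumed)
pa_pixel = A @ np.array([1.0, 0.0, 0.0])

# Least-squares unmixing; negatives clipped as a crude stand-in for the
# non-negative solver (e.g. MATLAB's lsqnonneg) used in practice
c, *_ = np.linalg.lstsq(A, pa_pixel, rcond=None)
c = np.clip(c, 0.0, None)
```

Applying the same per-pixel solve across the image stack yields the unmixed chromophore maps of Fig. 6(i-k); in real data, fluence compensation must precede this step.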
Fig. 6

Multi-spectral PAI and spectral unmixing results with the USPA simulation platform. (a) Tissue phantom consisting of HbO2, ICG, and Hb molecules at two different depths. (b-f) PA images of the phantom obtained at five optical wavelengths. (g) Plots of mean PA intensities and (h) plots of optical absorption coefficients as a function of wavelength for HbO2, ICG, and Hb. (i-k) corresponding spectrally unmixed images; scale: mm, colorbar: dB.


Dictionary-based approach for acoustic phantom generation

As mentioned in the k-Wave documentation [41], ultrasound image contrast is generated at two different levels. At the macroscopic scale, the difference in bulk acoustic impedance between different tissue types, such as bone vs. fat, leads to visible structural contrast differences and edges in the ultrasound image. To manifest such macroscopic contrast differences in simulations, segmented tissue regions of a heterogeneous acoustic digital phantom are assigned the bulk impedance (mean acoustic speed × mean density) values of the respective tissues. At the microscopic scale, minor variations in the acoustic impedances of cells, extracellular matrix, and the rest of the tissue microenvironment lead to scattering and subsequent interference of acoustic waves of varying frequencies and phases, generating the speckle pattern that is characteristic of an ultrasound image. In the current k-Wave toolbox, these microscopic variations are manifested by modulating the mean acoustic impedance of a given tissue region with random Gaussian noise. However, the level of speckle contrast is highly dependent on the degree of this Gaussian modulation of the impedance, which has not previously been studied in detail. In a realistic scenario, a typical ultrasound image comprises different levels of speckle contrast or texture. These levels are representative of the tissue heterogeneity at the microscopic level for individual tissue regions. Therefore, for simulating realistic ultrasound images, it is important to develop a model-based acoustic phantom mimicking the corresponding macroscopic and microscopic acoustic impedance variations. Towards this goal, we have developed a dictionary-based approach built on the k-Wave toolbox. Fig. 7 shows the simulated ultrasound images (cropped to the target region) for 12 different speckle contrast levels with varying tissue acoustic heterogeneity. The level of tissue heterogeneity is indicated on each panel in %.
The background texture was kept constant at 2 % across all the simulations. As shown, as the percentage acoustic heterogeneity of the target tissue increases, the overall brightness of the target texture also increases. Users can hand-pick acoustic heterogeneity levels, i.e., the class in %, from this dictionary to generate the required tissue texture. A MATLAB script (phantom_acoustic_characteristics.m) for creating acoustic phantoms with varying levels of heterogeneity and a detailed user guide (US_simulation_guide.docx) are both available in the GitHub repository under the (US_codes) folder.
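The per-region Gaussian modulation described above can be sketched in a few lines. The following Python/NumPy snippet is an illustrative stand-in for the repository's MATLAB script (all function and variable names here are hypothetical): each labeled region gets its mean property value (macroscopic contrast) modulated by zero-mean Gaussian noise whose standard deviation is the chosen heterogeneity percentage (microscopic contrast, producing speckle).

```python
import numpy as np

def heterogeneous_map(labels, mean_values, heterogeneity_pct, seed=None):
    """Build a property map (e.g., acoustic density) from a labeled phantom.

    labels            : 2-D integer array of tissue-region labels
    mean_values       : dict mapping label -> mean property value (macroscopic contrast)
    heterogeneity_pct : dict mapping label -> Gaussian modulation in percent
                        (microscopic contrast that produces speckle)
    """
    rng = np.random.default_rng(seed)
    out = np.zeros(labels.shape, dtype=float)
    for lab, mean in mean_values.items():
        mask = labels == lab
        sigma = mean * heterogeneity_pct[lab] / 100.0
        out[mask] = mean + rng.normal(0.0, sigma, size=mask.sum())
    return out

# Example: circular target (label 1) in a background (label 0), as in Fig. 7:
# 2 % background texture, user-selected target level from the "dictionary"
labels = np.zeros((200, 200), dtype=int)
yy, xx = np.mgrid[:200, :200]
labels[(yy - 100) ** 2 + (xx - 100) ** 2 < 40 ** 2] = 1
density = heterogeneous_map(labels, {0: 1000.0, 1: 1058.0}, {0: 2.0, 1: 6.0}, seed=0)
```

The higher modulation percentage in the target region yields a stronger scattering texture there, mirroring the dictionary levels of Fig. 7.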
Fig. 7

A dictionary-based approach for generating varying ultrasound speckle contrast for different tissue regions. (a-l) Show different levels of ultrasound contrast in the circular target region with a constant background speckle contrast of 2%.


Modeling of complex phantoms for deep tissue imaging applications

Application-1: modeling USPA imaging of heterogeneous prostate tissue

The capability of our simulation platform to image heterogeneous tissue is demonstrated by developing an in silico human prostate phantom and simulating TransRectal-USPA (TRUSPA) imaging of the prostate using a 128-element linear US array and a 40 mm light aperture. A 600 × 600 pixel grayscale bitmap image was created with bladder, prostate, soft-tissue, and vasculature regions. This image was converted to a 60 mm × 60 mm phantom with a grid size of 100 μm in MATLAB, as shown in Fig. 8a. The acoustic and optical properties of the different tissue regions inside the prostate phantom were defined as per the literature. Fig. 8(b, c) shows the acoustic impedance and absorption coefficient maps of the phantom. The acoustic impedances of the bladder, prostate, and soft tissue were defined as 1.57, 1.60, and 1.63 MRayl, respectively [53]. The absorption coefficients of the blood vasculature, prostate, surrounding soft tissue, and bladder regions were defined as 0.425, 0.03, 0.02, and 0.01 mm−1, respectively, at 800 nm [54].
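The bitmap-to-property-map step can be sketched as follows, assuming each region in the bitmap is drawn with a distinct gray level (the gray values below are hypothetical choices for illustration; the impedance and absorption numbers are those quoted above):

```python
import numpy as np

# Hypothetical gray levels used when authoring the bitmap; the actual values
# depend on how the phantom image was drawn.
REGIONS = {
    "bladder":  {"gray": 50,  "Z_MRayl": 1.57, "mu_a_mm": 0.01},
    "prostate": {"gray": 100, "Z_MRayl": 1.60, "mu_a_mm": 0.03},
    "soft":     {"gray": 150, "Z_MRayl": 1.63, "mu_a_mm": 0.02},
    "vessel":   {"gray": 200, "Z_MRayl": 1.60, "mu_a_mm": 0.425},
}

def bitmap_to_maps(bitmap):
    """Convert a labeled grayscale bitmap into impedance and absorption maps."""
    Z = np.zeros(bitmap.shape)
    mu_a = np.zeros(bitmap.shape)
    for props in REGIONS.values():
        mask = bitmap == props["gray"]
        Z[mask] = props["Z_MRayl"]
        mu_a[mask] = props["mu_a_mm"]
    return Z, mu_a

# Toy 600 x 600 bitmap standing in for the drawn prostate geometry
bmp = np.full((600, 600), 150, dtype=int)   # soft-tissue background
bmp[100:250, 200:400] = 50                  # bladder
bmp[260:450, 180:420] = 100                 # prostate
bmp[340:360, 290:310] = 200                 # a vessel inside the prostate
Z_map, mua_map = bitmap_to_maps(bmp)
```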
Fig. 8

Simulating transrectal US and PA imaging of human prostate. (a) Bitmap image of a designed heterogeneous prostate phantom mesh and corresponding (b) acoustic impedance map and (c) optical absorption coefficient map with arrows pointing to the prostatic vasculature. (d-f) Simulated US, PA, and coregistered US + PA images of the in silico prostate phantom. (g-i) Experimental results for in vivo TRUSPA imaging of human prostate. Bladder (B), Prostate (P) and rectal soft tissue (R). Scale: mm, colorbar: dB.

Further, to define a realistic acoustic phantom, we surveyed the literature on transrectal prostate ultrasound images and confirmed the brightness/contrast levels of the three main tissue types [11]. The bladder is usually the darkest region, soft tissue is the brightest, and the prostate tissue contrast lies between the two. Therefore, using the dictionary-based approach described in the previous section, we chose heterogeneity levels of 0.1 % for the bladder, 0.5 % for the prostate tissue, and 1.0 % for the background soft-tissue region. The simulated US image (Fig. 8d) clearly displays the anatomical information of the prostate, the bladder (hypoechoic region above the prostate), and the surrounding tissue regions. Moreover, the resulting ultrasound texture closely matched the realistic transrectal ultrasound images reported in the literature. In contrast, the simulated PA image (Fig. 8e) maps the optical absorption contrast of the prostatic vasculature. The coregistered US + PA image in Fig. 8f shows the overlaid anatomical and molecular optical contrast of the in silico human prostate phantom. Fig. 8(g–i) show in vivo transrectal imaging of a human prostate acquired with a TRUSPA device integrating a 64-element linear CMUT array and a fiber optic light guide, as described in section II-D [11]. Human experiments were approved by the IRB of Stanford University [11].
These experimental results are in close agreement with the above simulation results: the US image shows the structure of the prostate and the surrounding regions, and the PA image shows the vasculature of the prostate and surrounding regions.

Application-2: modeling USPA imaging of human finger cross-section

Here, we provide another example demonstrating the capability of our simulation platform to image complex tissue phantoms. While the prostate phantom in the previous example contained mainly soft-tissue properties, the in silico human finger cross-section phantom developed here mimics both soft tissue and bone. As shown in Fig. 9a, a 400 × 400 pixel grayscale phantom image, corresponding to a 40 mm square grid with 0.1 mm resolution, was created with water as the background imaging medium. To mimic a typical human finger cross-section, we adopted the middle phalanx of a middle finger with an approximate overall thickness of 16 mm; the outer skin layer is approximately 0.3 mm and the bone is ∼8 mm across. Five blood vessel targets of different sizes and shapes are located around the tissue region. A higher percent volume of melanosomes (13 %) was chosen for the top layer of skin compared to the bottom layer (1.2 %), as the finger is not uniformly toned and is usually darker on the dorsal side. The optical absorption coefficient, reduced scattering coefficient, speed of sound, and acoustic density values for the different tissue regions used in this simulation are: [Bone: 0.02 mm−1, 0.15 mm−1, 3000 m/s, 2000 kg/m3], [Melanin: 5.76 mm−1, 3.98 mm−1, 1645 m/s, 1150 kg/m3], [Soft tissue: 0.01 mm−1, 1.20 mm−1, 1540 m/s, 1058 kg/m3], [Blood: 0.43 mm−1, 1.61 mm−1, 1575 m/s, 1055 kg/m3], and [Water: 0.02 mm−1, 1.10 mm−1, 1480 m/s, 1000 kg/m3]. Further, we surveyed the literature for B-mode US images of human finger cross-sections and studied the brightness and speckle contrast levels of the three main tissue types [55]. The central bone is usually visible as a hypoechoic region surrounded by a brighter soft-tissue layer, and the top skin layer has the brightest edge contrast. Therefore, based on our dictionary-based approach in section III-D, we chose heterogeneity levels of 0.1 % for the bone, 0.6 % for the soft tissue, and 1.0 % for the top skin layer.
The anechoic water medium was assigned a heterogeneity value of 0.05 %. We then performed USPA simulations of the finger phantom using a 128-element linear US array (4 MHz center frequency and 80 % bandwidth) and a 40 mm light source. Fig. 9b presents the simulated B-mode US image, whose brightness and speckle texture nearly match realistic human finger B-mode US images seen in the literature [55]. Fig. 9c presents the simulated PA image of the finger phantom, revealing the optical absorption contrast of the blood vessel targets as well as of the absorbing skin layer consisting of melanin chromophores. The PA image also shows significant reflection artifacts generated by the bone surrounding the vessels: the photoacoustic waves originating from the skin layer and the vessels inside the soft tissue are reflected by the higher-impedance bone structure. This demonstrates the capability of the USPA simulation platform to model the effect of acoustic heterogeneity when imaging realistic tissue. The coregistered US + PA image in Fig. 9d shows the overlaid anatomical and optical contrasts of the in silico human finger cross-section phantom. Fig. 9e shows a finger imaging experimental setup using a commercial LED-based US and PA imaging system, following the protocols discussed in [18]. Fig. 9(f–h) show in vivo USPA imaging results of a human finger cross-section. These experimental results are in close agreement with the corresponding simulation results presented in Fig. 9(a–d), with the US images showing matching structural contrast and the PA image mapping local vascular contrast. The significant reflection artifacts generated by the finger bone (white arrows) can also be seen in the PA images of both the experimental and simulation results. In the next section (Application 3, Fig. 13), we will demonstrate that these reflection artifacts can be corrected by AI trained with the simulated finger imaging USPA datasets.
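For reference, the tissue properties listed above can be organized as a small lookup table. The following sketch (the dict layout and function name are illustrative, not the repository code) also computes the bone vs. soft-tissue acoustic impedance mismatch that drives the reflection artifacts discussed above:

```python
# Tissue properties for the in silico finger phantom, taken from the text:
# absorption coefficient (mm^-1), reduced scattering (mm^-1),
# speed of sound (m/s), density (kg/m^3).
FINGER_TISSUES = {
    "bone":        {"mu_a": 0.02, "mu_s_prime": 0.15, "c": 3000.0, "rho": 2000.0},
    "melanin":     {"mu_a": 5.76, "mu_s_prime": 3.98, "c": 1645.0, "rho": 1150.0},
    "soft_tissue": {"mu_a": 0.01, "mu_s_prime": 1.20, "c": 1540.0, "rho": 1058.0},
    "blood":       {"mu_a": 0.43, "mu_s_prime": 1.61, "c": 1575.0, "rho": 1055.0},
    "water":       {"mu_a": 0.02, "mu_s_prime": 1.10, "c": 1480.0, "rho": 1000.0},
}

def acoustic_impedance_MRayl(tissue):
    """Bulk impedance Z = rho * c, in MRayl (1 MRayl = 1e6 kg m^-2 s^-1)."""
    t = FINGER_TISSUES[tissue]
    return t["rho"] * t["c"] / 1e6

# The large impedance step between bone and soft tissue reflects the
# photoacoustic waves and produces the artifacts seen in Fig. 9.
mismatch = acoustic_impedance_MRayl("bone") - acoustic_impedance_MRayl("soft_tissue")
```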
Fig. 9

Comparison between simulated and experimental results of USPA imaging of human finger cross-section. (a) Phantom geometry showing a typical cross-sectional view of human finger. (b-d) Simulated B-mode ultrasound (US), photoacoustic (PA) and coregistered US + PA image of the in silico human finger phantom. (e) USPA imaging experimental setup for in vivo human finger imaging. (f-h) Experimental results for in vivo human finger cross-section imaging. Scale: mm.

Fig. 13

PA simulation aided reflection artifact reduction using deep learning. (a) Picture of human finger immersed in water tank for PA imaging using a commercial LED-PAI system. (b) Acquired PA image with reflection artifacts (pointed with white arrow). (c) Output PA image obtained with deep learning (U-Net) approach. Scale: mm.


Application-3: modeling multispectral PA imaging of overlapping absorbing targets

In a realistic scenario, it is common for two or more chromophores to be mixed in an unpredictable ratio at the same voxel location. In such complex situations, it is difficult to reliably unmix the molecular composition of each chromophore. Here we study the capability of our simulation platform to unmix overlapping absorbing targets at the same location. Fig. 10a shows the phantom geometry used for this simulation: a 40 mm square grid of 0.1 mm resolution, a 128-element linear US array with 1 MHz center frequency and 0.2 mm pitch, and a 40 mm long near-infrared source for multiwavelength tissue illumination. The phantom background is a tissue with the acoustic and wavelength-dependent optical properties listed in Table 1. A 0.4 mm diameter mixed blood target, consisting of 60 % HbO2 and 40 % Hb for an oxygen saturation (sO2) of 60 %, was positioned at the center of the grid (Fig. 10a).
Fig. 10

Multispectral photoacoustic (PA) simulation of overlapping absorbing target. (a) Phantom geometry showing a mixed blood target with 60 % HbO2 and 40 % Hb, leading to oxygen saturation (sO2) value of 60 %, located at the center of the grid. (b-g) Simulated zoomed-in PA images of the phantom at 750 nm to 875 nm wavelength with an interval of 25 nm. Unmixed maps of Hb (h) and HbO2(i) using the linear spectral unmixing approach. (j) Calculated sO2 map using (h) and (i). Scale: mm.

Simulated B-mode PA images of this phantom at six wavelengths from 750 nm to 875 nm with an interval of 25 nm are shown in Fig. 10b-g. The mixing of the two hemoglobin chromophores in this example leads to nearly uniform PA intensity across all six wavelength images. Fig. 10h and i show the unmixed Hb and HbO2 maps obtained with linear spectral unmixing. To obtain the sO2 map, we divided the abundance of HbO2 by the total hemoglobin content (HbO2 + Hb) at each pixel. The resulting sO2 map is shown in Fig. 10j, with an estimated sO2 value of 69.75 %. The discrepancy between the estimated sO2 (69.75 %) and the actual sO2 (60 %) can be attributed to inaccuracies of the linear unmixing approach; this can be addressed with AI models as described below.
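The linear unmixing and sO2 computation described above can be sketched as a per-pixel least-squares fit. In this sketch the extinction values are illustrative placeholders, not the actual spectra used in the paper, and the noise-free pixel is constructed so the fit recovers the true mixture:

```python
import numpy as np

# Illustrative (not literature-exact) extinction spectra of HbO2 and Hb at the
# six wavelengths 750-875 nm; rows = wavelengths, columns = [HbO2, Hb].
E = np.array([
    [0.60, 1.40],   # 750 nm
    [0.70, 1.20],   # 775 nm
    [0.80, 0.80],   # 800 nm
    [0.95, 0.70],   # 825 nm
    [1.05, 0.65],   # 850 nm
    [1.15, 0.60],   # 875 nm
])

def unmix_sO2(pa_spectra):
    """Least-squares linear unmixing of per-pixel multiwavelength PA spectra.

    pa_spectra: (n_wavelengths, n_pixels) array of PA amplitudes.
    Returns (C_HbO2, C_Hb, sO2) per pixel.
    """
    C, *_ = np.linalg.lstsq(E, pa_spectra, rcond=None)
    c_hbo2, c_hb = np.clip(C[0], 0, None), np.clip(C[1], 0, None)
    so2 = c_hbo2 / (c_hbo2 + c_hb + 1e-12)
    return c_hbo2, c_hb, so2

# A pixel mixed 60 % HbO2 / 40 % Hb, as in Fig. 10, with ideal uniform fluence:
true_c = np.array([[0.6], [0.4]])
pixel = E @ true_c
_, _, so2 = unmix_sO2(pixel)
```

In the simulated images the fluence is depth- and wavelength-dependent rather than ideal, which is one source of the sO2 estimation error reported above.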

USPA simulation aided artificial intelligence (AI) for PAI

Artificial intelligence networks using deep learning and machine learning approaches are being widely investigated for medical image analysis and diagnosis, including real-time image segmentation and disease classification [56]. In addition to architectural advancements, domain-enriched, high-fidelity learning is required to develop and translate reliable AI approaches in healthcare research [57]. In recent years, AI has been actively studied for various PAI applications [[58], [59], [60], [61]]. However, with PAI still in the early stages of clinical translation, the scarcity of clinical PAI data remains a major challenge in optimally training AI models for a given task. Most commonly, readily available acoustic simulations have been employed to generate the required PAI training datasets, assuming uniform optical fluence inside the tissue medium [33]. As such, these studies did not model realistic experimental PAI, where the optical fluence strongly depends on the tissue optical and acoustic properties, imaging depth, and excitation wavelength. As demonstrated in the previous sections, our model-based USPA simulations account for both depth- and wavelength-dependent optical scattering and can generate co-simulated US and multispectral PA imaging datasets for in silico tissue phantoms mimicking realistic heterogeneous tissue environments. We present the following studies to demonstrate the applicability of the USPA simulation datasets for AI-enhanced PAI.

Application-1: USPA simulation aided deep learning approach for photoacoustic target detection in deep tissue

We recently reported an encoder-decoder based convolutional neural network (CNN) to identify the origin of photoacoustic wavefronts in a deep-tissue scattering medium [62]. The network was trained with 16,240 model-based simulated PA images generated by the simulation platform presented here, and tested on both simulated and experimental PAI data acquired under various background optical scattering conditions. These results demonstrated that photoacoustic targets up to 55 mm deep can be localized with a high accuracy of ∼20 μm, which can be attributed to the faithful modeling of optical scattering in the training PAI dataset generated by the USPA simulation platform. Here, we demonstrate the applicability of this approach to multi-target detection by training the network with simulated PAI datasets of multiple PA targets buried in strong background optical scattering noise. The network performance was validated using an experimental test dataset consisting of three PA targets placed inside an intralipid phantom with a reduced scattering coefficient of 20 cm−1 (Fig. 11a). Due to the heavy noise, the SNR and visibility of the deepest PA target (at 33 mm) in the conventional beamformed image are poor (Fig. 11b). To compensate, conventional methods [41] generally apply a time gain compensation (TGC) correction assuming uniform acoustic attenuation along the tissue depth. However, because the noise is amplified along with the signal, TGC approaches fail to boost the overall SNR. Addressing this requires a model capable of learning the noise patterns at varying tissue depths. As shown in Fig. 11c, with the network trained on PAI datasets of varying optical scattering noise conditions generated by the USPA simulation platform, we could precisely localize all three PA targets.
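The conventional TGC correction mentioned above can be sketched as a depth-dependent exponential gain under the uniform-attenuation assumption. The attenuation coefficient, center frequency, and sound speed below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def time_gain_compensation(rf, fs_hz, c=1540.0, alpha_db_cm_mhz=0.5, f_mhz=4.0):
    """Depth-dependent gain assuming uniform acoustic attenuation.

    rf    : (n_samples, n_channels) beamformed RF/envelope data
    fs_hz : sampling rate; depth of sample i is c * i / (2 * fs) (pulse-echo)
    The gain grows exponentially with depth, so noise at depth is amplified
    too -- which is why TGC alone fails to boost SNR for deep targets.
    """
    n = rf.shape[0]
    depth_cm = c * np.arange(n) / (2.0 * fs_hz) * 100.0
    gain_db = alpha_db_cm_mhz * f_mhz * 2.0 * depth_cm   # two-way attenuation
    gain = 10.0 ** (gain_db / 20.0)
    return rf * gain[:, None]

# Example: constant-amplitude data gets a monotonically increasing gain ramp
out = time_gain_compensation(np.ones((4000, 1)), fs_hz=40e6)
```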
Fig. 11

USPA simulations enabled deep-learning enhanced PAI. (a) Zoomed view of acquired raw PA data consisting of three PA targets situated at 13 mm, 23 mm, and 33 mm depth in a tissue medium with a reduced scattering coefficient of 20 cm−1. (b) Conventional beamformed B-mode PA image. (c) Deep-learning enhanced PA image output. Scale: mm.


Application-2: Multispectral PAI simulations aided unsupervised photoacoustic spectral unmixing

Recently, we presented an end-to-end unsupervised modified independent component analysis (ICA) approach for delineating molecular information in PAI [63], using training and test multispectral PAI datasets (60 images of 401 × 401 pixels each were used for training) obtained from the USPA simulation platform. Our approach outperformed standard linear spectral unmixing when tested on simulated as well as experimental datasets. The ICA model learned the non-uniform spectral variations from the simulated multiwavelength PA training dataset (similar to Fig. 6) and was shown to unmix the respective molecular information in the following two cases: (i) an experimental 1.5 % agarose phantom consisting of three 0.5 mm outer diameter tubes filled with oxygenated blood “o” (90 % sO2), ICG (65 μM), and deoxygenated blood “d” (50 % sO2), embedded 15 mm deep (Fig. 12a); and (ii) in vivo multiwavelength PA data acquired over a mouse with a subcutaneous prostate tumor, 5 min after intravenous injection of 50 μl ICG (Fig. 12j). These experimental data were acquired using the TRUSPA device described in section III-E. Animal experiments were approved by the Administrative Panel on Laboratory Animal Care of Stanford University [11].
Fig. 12

USPA simulation aided unsupervised PA spectral unmixing for (a) a tissue mimicking phantom with tubes filled with HbO2 “o” (90 % sO2), ICG “I” and Hb “d” (50 % sO2). (b, c) PA images of the phantom shown at 750 nm and 850 nm. Corresponding unmixed results for HbO2, Hb and ICG using linear (d-f) and unsupervised ICA approaches (g-i). Estimated values of sO2 are marked in %. (j) Picture of a mouse bearing subcutaneous tumor. (k, l) PA images of the mouse tumor shown at 750 nm and 850 nm, acquired 5 min after intravenous ICG injection. Corresponding unmixed results for HbO2, Hb and ICG using linear (m-o) and unsupervised ICA approach (p-r). Improved detection of Hb is highlighted in (n, q) using white arrows. Scale: mm.

Fig. 12b, c and k, l show two representative PA images out of the six wavelengths (750 nm–875 nm with 25 nm interval) used for unmixing the phantom and the mouse data, respectively. The spectrally unmixed maps of HbO2, Hb, and ICG obtained using linear unmixing and the trained ICA model are presented in Fig. 12(d–i) and 12(m–r). In both cases, the trained ICA model outperformed linear unmixing in estimating the Hb and sO2 concentrations (marked on Fig. 12d and g) for the phantom and the mouse. For example, tumor regions are expected to be hypoxic (low sO2) with higher Hb concentration. These results demonstrate that the multispectral PAI datasets generated by our simulations can adequately train AI models with the required spectral knowledge of tissue chromophores and subsequently help unmix the molecular information in realistic in vivo environments.
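The general ICA idea can be illustrated with a minimal NumPy implementation of symmetric FastICA with a tanh nonlinearity. This is a generic sketch, not the modified ICA of [63]; the mixing matrix and source maps below are synthetic stand-ins for chromophore abundance maps:

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity) for spectral unmixing.

    X : (n_channels, n_pixels) multiwavelength PA data, one row per wavelength.
    Returns estimated independent components, shape (n_channels, n_pixels).
    """
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])
    Z = (E @ np.diag(1.0 / np.sqrt(d)) @ E.T) @ X
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((Z.shape[0], Z.shape[0]))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # FastICA fixed-point update: w <- E[Z g(w'Z)] - E[g'(w'Z)] w
        W = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W')^(-1/2) W
        d2, E2 = np.linalg.eigh(W @ W.T)
        W = E2 @ np.diag(1.0 / np.sqrt(d2)) @ E2.T @ W
    return W @ Z

# Two synthetic non-Gaussian "chromophore maps" mixed by an illustrative
# 2 x 2 "spectral" matrix, then recovered blindly (up to sign/permutation).
rng = np.random.default_rng(1)
S_true = rng.uniform(0, 1, size=(2, 4000))
A = np.array([[1.0, 0.4], [0.3, 1.0]])
S_est = fast_ica(A @ S_true)
```

ICA recovers sources only up to sign, scale, and permutation, so unmixed chromophome maps are typically matched to Hb/HbO2/ICG afterwards using their known spectral signatures.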

Application-3: PA simulations aided deep learning approach for reflection artifact reduction

Here, we present another application of our simulation platform: deep learning based PA image enhancement. Following the PA image generation of the human finger cross-section presented in section III-E, we generated a database of unique finger phantoms, yielding 1800 PA images with variations in skin tone, skin thickness, number, size, and shape of blood vessels, tissue thickness, finger width, and bone size [64]. With this dataset covering these variations, we trained a U-Net architecture with each sample consisting of one ground truth image (simulated with an acoustically homogeneous finger phantom devoid of reflection artifacts) and one train/test PA image (simulated with an acoustically heterogeneous finger phantom containing reflection artifacts). The trained U-Net was then tested on experimental PA data acquired over a human finger using an LED-based USPA imaging system (AcousticX, Cyberdyne Inc., Ibaraki, Japan), as shown in Fig. 13a. The experiment was conducted under the internal imaging protocol of Cyberdyne Inc. (Rotterdam, The Netherlands) for healthy-volunteer imaging studies [64]. The acquired PA image, shown in Fig. 13b, consists of signals from five vascular targets, the skin layer, and reflection artifacts (white arrow) visible around the bone region. The cleaned output PA image from the U-Net model (Fig. 13c) shows not only enhanced PA contrast from the vascular targets but also removal of the reflection artifacts around the bone region. These results demonstrate that our simulation model was able to generate realistic datasets that helped train a deep neural network to reduce reflection artifacts in in vivo PA images.
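The paired-data construction described above (heterogeneous input vs. acoustically homogeneous ground truth) can be sketched by resetting every non-water region's acoustic properties to soft tissue, which removes the impedance mismatches and hence the reflection artifacts in the ground-truth simulation. The labels, values, and function name below are illustrative, not the repository code:

```python
import numpy as np

# Label convention for the finger phantom (illustrative)
WATER, SOFT, BONE, SKIN, VESSEL = 0, 1, 2, 3, 4
SOUND_SPEED = {WATER: 1480.0, SOFT: 1540.0, BONE: 3000.0, SKIN: 1645.0, VESSEL: 1575.0}
DENSITY     = {WATER: 1000.0, SOFT: 1058.0, BONE: 2000.0, SKIN: 1150.0, VESSEL: 1055.0}

def acoustic_maps(labels, homogeneous=False):
    """Build (sound-speed, density) maps for one training pair.

    homogeneous=True replaces every non-water label with soft-tissue acoustics,
    so the resulting simulation is free of reflection artifacts and serves as
    the U-Net ground truth; homogeneous=False gives the artifact-laden input.
    """
    c = np.zeros(labels.shape)
    rho = np.zeros(labels.shape)
    for lab in SOUND_SPEED:
        use = SOFT if (homogeneous and lab != WATER) else lab
        c[labels == lab] = SOUND_SPEED[use]
        rho[labels == lab] = DENSITY[use]
    return c, rho

# Toy 400 x 400 label map standing in for one generated finger phantom
labels = np.full((400, 400), WATER, dtype=int)
labels[150:250, 150:250] = SOFT
labels[180:220, 180:220] = BONE
c_het, _ = acoustic_maps(labels)                    # input simulation
c_hom, _ = acoustic_maps(labels, homogeneous=True)  # ground-truth simulation
```

The optical maps (and hence the initial pressure sources) stay identical between the two simulations, so the only difference in the resulting PA image pair is the acoustically generated artifacts.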

Conclusion

This paper presented and validated a hybrid USPA numerical simulation approach that adequately simulates multispectral PAI as well as US imaging of deep (up to 60 mm) homogeneous and heterogeneous biological tissue. The simulation platform integrates two open-source toolboxes: i) the NIRFast toolbox for forward light propagation and calculation of the optical fluence at each tissue grid location; and ii) the k-Wave toolbox for ultrasound propagation and detection. The platform models dual-modality USPA imaging devices and generates (i) realistic B-mode US images featuring anatomical information of the targeted tissue; (ii) multispectral PA images displaying functional and molecular information based on the optical absorption contrast of light-absorbing chromophores such as oxy- and deoxyhemoglobin and ICG; and (iii) co-registered US + PA images with overlaid anatomical and molecular contrasts revealing the origin of the PA molecular contrast against the background of ultrasound-based structural information. Extensive parametric studies demonstrated that the USPA simulations can practically model the effect of key design parameters, such as the size of the US transducer array, the light source aperture, and the frequency of the US transducer, on the dual-modality US and PA imaging performance. In addition, the capabilities of the USPA simulation platform to accurately map the spectral profiles of deep-tissue molecular PA targets, including overlapping absorbing targets, and to obtain the respective unmixed molecular information have been demonstrated. Furthermore, the feasibility of USPA imaging of heterogeneous tissue was demonstrated using complex in silico phantoms mimicking a human prostate with soft-tissue properties and a human finger consisting of both soft tissue and bone.
To help design such tissue-realistic digital phantoms with suitable acoustic properties, a dictionary-based function was developed and integrated into the k-Wave toolbox to generate various ultrasound speckle contrast levels for a given mean acoustic impedance of a tissue region. This approach assigns an estimated percentage variation in acoustic impedance to each pixel in the digital phantom. The ability to modulate acoustic properties at the microscopic level produced correspondingly realistic ultrasound speckle contrast in the US images. This in turn also allowed generation of realistic PA images, as the photoacoustic waves propagated through the acoustically heterogeneous tissue phantom; as a result, PA images of the finger phantom showed reflection artifacts due to the acoustic impedance mismatch between the bone and the surrounding soft tissue containing blood vessels. In addition to modeling USPA device performance in different scenarios, this paper also presented the applicability of the simulated US and PA datasets for training and testing different AI models for enhanced PAI. The potential of USPA domain-enriched learning was demonstrated by testing AI models on experimental datasets for (i) localizing deep-tissue vascular targets buried in strong optical scattering noise, (ii) unmixing Hb, HbO2, and ICG molecules inside deep tissue phantoms as well as in in vivo mouse tumor models, and (iii) reducing photoacoustic reflection artifacts in in vivo human finger imaging data. In summary, the presented USPA simulation platform provides a powerful tool for optimizing the performance of dual-modality USPA imaging devices for various pre-clinical and clinical applications.
More importantly, the capability of the platform to model application-specific complex heterogeneous tissue phantoms and generate corresponding US and PA training datasets at large scale opens the door for emerging AI applications in the dual-modality ultrasound and photoacoustic imaging fields. Future work involves 3-D simulations and validation studies on different organs (beyond the prostate and finger imaging demonstrated here) mimicking realistic optical and acoustic heterogeneities, artifacts, shadow effects, and system noise.

Declaration of Competing Interest

The authors declare that there are no conflicts of interest.