Literature DB >> 35696442

Design and execution of a verification, validation, and uncertainty quantification plan for a numerical model of left ventricular flow after LVAD implantation.

Alfonso Santiago1,2, Constantine Butakoff2, Beatriz Eguzkitza1, Richard A Gray3, Karen May-Newman4, Pras Pathmanathan3, Vi Vu4, Mariano Vázquez1,2.   

Abstract

BACKGROUND: Left ventricular assist devices (LVADs) are implantable pumps that act as a life support therapy for patients with severe heart failure. Despite improving the survival rate, LVAD therapy can carry major complications. Particularly, the flow distortion introduced by the LVAD in the left ventricle (LV) may induce thrombus formation. While previous works have used numerical models to study the impact of multiple variables in the intra-LV stagnation regions, a comprehensive validation analysis has never been executed. The main goal of this work is to present a model of the LV-LVAD system and to design and follow a verification, validation and uncertainty quantification (VVUQ) plan based on the ASME V&V40 and V&V20 standards to ensure credible predictions.
METHODS: The experiment used to validate the simulation is the SDSU cardiac simulator, a bench mock-up of the cardiovascular system that allows mimicking multiple operation conditions for the heart-LVAD system. The numerical model is based on Alya, the BSC's in-house platform for numerical modelling. Alya solves the Navier-Stokes equations with an Arbitrary Lagrangian-Eulerian (ALE) formulation in a deformable ventricle and includes pressure-driven valves, a 0D Windkessel model for the arterial output and an LVAD boundary condition modelled through a dynamic pressure-flow performance curve. The designed VVUQ plan involves: (a) a risk analysis and the associated credibility goals; (b) a verification stage to ensure correctness in the numerical solution procedure; (c) a sensitivity analysis to quantify the impact of the inputs on the four quantities of interest (QoIs): average aortic root flow, maximum aortic root flow, average LVAD flow, and maximum LVAD flow; (d) an uncertainty quantification using six validation experiments that include extreme operating conditions.
RESULTS: Numerical code verification tests ensured correctness of the solution procedure, and numerical calculation verification showed a grid convergence index GCI95% < 3.3%. The total Sobol indices obtained during the sensitivity analysis demonstrated that the ejection fraction, the heart rate, and the pump performance curve coefficients are the most impactful inputs for the analysed QoIs. The Minkowski norm is used as the validation metric for the uncertainty quantification; it shows that the midpoint cases have more accurate results than the extreme cases. The total computational cost of the simulations was above 100 [core-years], executed over a span of around three weeks on the MareNostrum IV supercomputer.
CONCLUSIONS: This work details a novel numerical model for the LV-LVAD system, supported by the design and execution of a VVUQ plan created following recognised international standards. We present a methodology demonstrating that stringent VVUQ according to ASME standards is feasible but computationally expensive.

Year:  2022        PMID: 35696442      PMCID: PMC9232142          DOI: 10.1371/journal.pcbi.1010141

Source DB:  PubMed          Journal:  PLoS Comput Biol        ISSN: 1553-734X            Impact factor:   4.779


Introduction

Over 5 million people suffer from heart failure (HF) in the U.S. alone, with ∼1 million new cases diagnosed annually [1]. Heart transplant is the recommended treatment for the 10% of these patients in Stage D condition [2]. Despite this, only around 2000 organs become available for transplant yearly [3], sufficient for only 0.4% of these patients. The limited organ availability is making left ventricular assist device (LVAD) therapy a leading treatment for the remaining 99.6% of patients, with a ∼90% chance of 1-year survival [4]. Mortality and HF status following LVAD implantation are primarily associated with inefficient unloading of the left ventricle, persistence of right ventricular dysfunction [5], and stroke [6]. Optimization of LVAD speed is routinely performed for patients post-implant with a ramp study: transthoracic echocardiography measurements of cardiac geometry and function are made while LVAD speed is slowly increased over a wide range [7]. The final pump speed is selected by balancing overall cardiac output, efficiency of left ventricle (LV) unloading, and preservation of flow pulsatility [5]. Several variables are assessed from standard ultrasound views, including LV end-diastolic dimension, LV end-systolic diameter, frequency of Aortic Valve (AoV) opening, degree of valve regurgitation, right ventricle (RV) systolic pressure, blood pressure, and heart rate (HR) at each speed setting. In addition, LVAD pump power, pulsatility index and flow are recorded. For the Thoratec HeartMate II, the ramp speed protocol starts at a speed of 8k[rpm] and increases by 400[rpm] increments every 2 minutes until a speed of 12k[rpm] is reached. As LVAD speed is increased, the LV volume decreases, as do the frequency of AoV opening and flow pulsatility. Excessive LV unloading at higher LVAD speeds increases the demand on the right heart, causing tricuspid regurgitation and possibly producing suction events, which disrupt the flow into the LVAD inflow cannula.
The clinical practice for LVAD speed selection first ensures that the hemodynamics are compatible with life, e.g. a mean arterial pressure greater than 65 mmHg [7] and a minimum cardiac index of 3.7×10−5[m3/(s⋅m2)] (2.2[L/min/m2]) of body surface area (BSA) [5]. To optimize LV unloading, the interventricular septum should not bow towards either the left or right. If these conditions are met, the LVAD speed is selected that achieves intermittent AoV opening while maintaining no more than mild mitral regurgitation or aortic insufficiency [8]. De novo AoV insufficiency development in LVAD patients is linked to lack of AoV opening. Interestingly, AoV insufficiency occurred in the majority of LVAD patients (66%) whose AoVs remained closed during support, but rarely (8%) in those whose AoVs opened regularly [7]. A patient-specific LVAD speed calibration is important for ensuring appropriate cardiovascular support and minimizing the frequency of adverse events related to long-term support. However, the ramp echo study is not performed routinely after the first month post-implant, due to the unjustified expense and inconvenience. A computational tool that predicts cardiac output and aortic valve opening for the subject's characteristics could reduce the ramp study requirements as well as contribute to speed adjustments required over the long term, even supporting a speed adjustment paradigm that contributes to recovery. As LVADs are a life support therapy that carries a high risk for the patient, the Food and Drug Administration (FDA) ranks them in the most exhaustive level of control (Class III) before approving their commercialisation. Therefore, using computational models to predict the behaviour of these devices or to guide design decisions should be accompanied by a stringent validation process to ensure credible results. Validating computational models is a challenge in itself, but recently published guidelines tackle this problem.
While scientific computing has undergone extraordinary increases in sophistication, a fundamental disconnect exists between simulations and practical applications. While most simulations are deterministic, engineering applications have many sources of uncertainty, such as subject variability, initial conditions or system surroundings. Furthermore, the numerical model itself can introduce large uncertainties due to the assumptions and the numerical approximations employed [9]. Without forthrightly estimating the total uncertainty in a prediction, decision makers will be ill advised. To address this issue, multiple standards for industrial guidance have been published, such as the American Society of Mechanical Engineers (ASME) V&V codes [10-12]. Extensive modelling studies dealing with multiple LVAD factors like ventricular size [13], cannula implantation position [14], implantation depth [15-17] or angulation [18] exist, but none of them provide credibility evidence as suggested in the recent ASME V&V40 [11], nor are any guided by ASME V&V20 [12], which was developed over 10 years ago for demonstrating credibility of computational fluid dynamics (CFD) models. The reason for this is that such a validation requires a thorough comparison of the simulation results against bench or animal experiment measurements and hundreds of executions of the numerical model, which involves a large computational cost. To our knowledge, this is the first paper describing such a comprehensive computational LVAD setup and using ASME verification, validation and uncertainty quantification (VVUQ) standards to design and execute a VVUQ plan. While our final goal is to predict intra-LV stagnation biomarkers, this manuscript is focused on the credibility assessment of the numerical model.
The contributions of this manuscript combine: (a) a deformable ventricle numerical model using a unidirectional fluid-structure interaction (FSI) and a 0D aortic impedance model (as in [16]); (b) a novel pressure-driven valve model for the mitral and aortic valves; (c) a dynamic pressure-flow (also called H-Q) performance curve for the LVAD boundary condition; (d) a VVUQ plan designed and executed following the ASME V&V40 and V&V20 standards [11, 12]; and (e) a set of validation metrics that quantify the differences between the simulation and the experiment and describe the aortic valve flow and total cardiac output.

Note on the nomenclature

The words “model” and “experiment” may refer to any simplified representation of a more complex system (e.g. an animal, bench or numerical model/experiment). For the sake of brevity, we refer to the bench model/experiment simply as “experiment” and to the numerical model/experiment as “simulation”, except where stated otherwise.

Methods

Description of the benchtop model

The experiments were performed with the San Diego State University (SDSU) cardiac simulator (CS), shown in Fig 1. This CS is a mock circulation loop of the heart and the circulatory system with an apically implanted LVAD (Abbott HeartMate II) that has been reported previously in [19, 20]. It involves a transparent model of the dilated LV based on an idealised geometry, immersed in a water-filled tank and connected to an external circulatory loop mimicking the systemic circulation. The tank is fully watertight, so when the piston pump generates negative pressure, the LV expands to the end diastolic volume (EDV). The LV is manufactured from platinum-cured silicone rubber (Young’s modulus E = 6.2×105[Pa] at 100% elongation and ultimate tensile strength of P = 5.52×106[Pa] at 400% elongation). Porcine valves were used in both the aortic position (26[mm] Medtronic 305 Cinch) and the mitral position (25[mm] 6625 Carpentier Edwards Bioprosthesis). Tygon tubing (16[mm] diameter) replaced the HeartMate II outflow graft and was connected to the ascending aorta at a 90[°] angle approximately 15[mm] distal to the aortic root. The circulating fluid was a viscosity-matched blood analogue consisting of 40[%] glycerol (viscosity of μ = 3.72×10−3[Pa⋅s] at 20[°C]) and saline [21]. Constant mitral pressure is achieved by using a large open reservoir for the left atrium (LA), maintaining the LA pressure constant throughout the studies. A fluid circuit composed of partially clamped tubing and compliance chambers is used to physically represent and tune the systemic circulation. This circuit can be mathematically represented by a 3-element Windkessel model with parameters R1, R2, and C, following the method in [22]. This lumped representation of the circulatory system allows characterising the outlet boundary condition in the numerical model.
Fig 1

Leftmost side: Schematic of the experiment setup.

LA: left atrium, MV: mitral valve, AV: aortic valve, Ao: aorta. Flow and pressure sensors are indicated in blue and red respectively. The body lumped system is afterwards characterised with a three-element Windkessel model with parameters R1, R2 and C. Center: variables extracted from the benchtop experiment to create the simulation and address the uncertainty quantification (UQ). Rightmost side: schematic of the simulation, including the lumped models for the boundary conditions. The measured LA pressure is imposed at the mitral valve. The measured R1, R2 and C are used in a three-element Windkessel boundary condition at the aortic valve output. The measured H-Q curve is used as a dynamic boundary condition at the LVAD outflow. The benchtop piston dynamics are used as the boundary condition for the deformable LV geometry. During diastole the silicone ventricle dilates, allowing filling with fluid from the atrial chamber. During systole the ventricle is compressed, expelling fluid through the aortic valve. Throughout this process the LVAD continuously extracts fluid from the LV. If the LVAD speed is high enough, the aortic valve remains hemodynamically closed during systole. The total aortic flowrate Qtotal is measured with a Transonic TS410 20PXL clamp-on flow meter (resolution: 8.3×10−7[m3/s] (50[ml/min]), maximum zero offset: 5×10−9[m3/s] (0.3[ml/min]), absolute accuracy: 10[%]). The LVAD flowrate QVAD is measured with a Transonic TS410 10PXL flow meter (resolution: 1.6×10−7[m3/s] (10[ml/min]), maximum zero offset: 1.0×10−9[m3/s] (0.06[ml/min]), absolute accuracy: 10[%]). The aortic valve flow is calculated as QAoV = Qtotal − QVAD. Pressures in the LV and the aortic root are measured with two Icumed TranspacIV sensors (sensitivity: 666.65[Pa] (5[mmHg]) ±1%, zero offset: 3333.3[Pa] (25[mmHg])).
Signals are amplified and collected with an ADInstruments PowerLab DAQ device (sampling frequency: 200[Hz], impedance: 1[MΩ]@1[pF], resolution: 16[bit]) and processed with ADInstruments LabChart version 1.8.7 [23]. Two beating modes and three pump speeds are combined into six validation experiments (described ahead in section Validation points and ranges). The beating mode 22[%]@68.42[bpm] has EF = 22[%] and HR = 68.42[bpm], with end systolic volume (ESV) = 180.0×10−6[m3] (180[cm3]) and EDV = 230.0×10−6[m3] (230[cm3]). The beating mode 17[%]@61.18[bpm] has EF = 17[%] and HR = 61.18[bpm], with ESV = 180.0×10−6[m3] (180[cm3]) and EDV = 216.86×10−6[m3] (216.86[cm3]). The ejection fraction (EF), the EDV, and the ESV are linked by the EF equation: EF = (EDV − ESV)/EDV. As the ESV is fixed by the silicone ventricle volume, there is a direct correlation between EF and EDV; from now on, we characterise each case as a function of the EF only. Both beating modes correspond to a New York Heart Association (NYHA) Class IV HF patient [24]. The pump speeds used for the validation points are 0[rpm], 8k[rpm] and 11k[rpm]. To avoid backflow, the LVAD outflow conduit was clamped in the 0[rpm] experiments, so the system mimics a pre-LVAD baseline rather than a heart with an implanted LVAD turned off. The pressure-flow (also called H-Q) performance curves are experimentally retrieved for each pump speed (see S4 Data) and used afterwards as input for the simulation. Approximating a pump pressure-flow curve with a quadratic fit is common engineering practice. The SDSU-CS has been widely used in academia and industry, and its reproducibility addressed, not only for LVADs [19, 25–30], but also for valves [23, 31, 32] and combined devices [20, 33]. In this work we use retrospective experimental data for the validation with UQ.
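The EF relation above can be checked against the two beating modes; a minimal sketch using the ESV and EDV values reported for each mode:

```python
def ejection_fraction(edv_m3: float, esv_m3: float) -> float:
    """EF = (EDV - ESV) / EDV, with volumes in cubic metres."""
    return (edv_m3 - esv_m3) / edv_m3

# Beating mode 22%@68.42bpm: EDV = 230 cm^3, ESV = 180 cm^3
ef_1 = ejection_fraction(230.0e-6, 180.0e-6)   # ~0.22
# Beating mode 17%@61.18bpm: EDV = 216.86 cm^3, ESV = 180 cm^3
ef_2 = ejection_fraction(216.86e-6, 180.0e-6)  # ~0.17
```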

Description of the numerical model

Overall simulation pipeline

The computational domain is created from the exact same computer geometry used to manufacture the silicone ventricle (refer to Fig 1). The FSI problem requires the construction of two meshes. The solid mechanics mesh is created directly from the original geometry, containing at least four linear tetrahedra across the wall thickness, for a total of 200k elements. The CFD mesh was created by closing the solid domain and extruding the inlets and outlets to ensure flow development. The geometry is discretised including a boundary layer valid for Re < 4000[−] [34], using linear tetrahedra, pyramids and prisms. The final mesh has 1.6M elements. For the time discretisation, a first-order trapezoidal rule with a time step of 0.00428[s] was used in every case, matching the experimental sampling period. The FSI problem can be tackled with a unidirectional or a bidirectional approach [35]. In the former, the solid problem unilaterally deforms the fluid mesh; in the latter, an iterative process is required to balance the internal forces of the solid problem with the surface pressure of the fluid problem. To obtain a computationally inexpensive and accurate way of deforming the ventricle, a unidirectional FSI approach is used to deform the LV. This same approach is used in [16], where a pressure imposed on the external solid domain deforms the CFD domain between the ESV and the EDV. In this unidirectional FSI approach the solid domain is used exclusively to impose the boundary deformation and velocity on the fluid domain; no force is fed back to the solid problem, as it would be in an iterative bidirectional FSI formulation [35]. We justify this choice from the working principle of the experimental set-up (section Description of the benchtop model): the piston forces volume changes in the silicone ventricle independently of the internal ventricle pressure.
Once the simulation pipeline is completed, the input files are modified to work as a template. This template is used by Dakota server (DARE) (described in section DARE) for the sensitivity analysis (SA) and the UQ analysis.

Description of the solver

Here we briefly describe the numerical model used, highlighting only the novel components. The incompressible, Newtonian fluid is modelled with the Navier-Stokes equations in Alya, the Barcelona Supercomputing Center (BSC) in-house tool for simulations [36]. The Navier-Stokes equations are solved using an Arbitrary Lagrangian-Eulerian (ALE) formulation, allowing the fluid domain to deform:

ρ ∂v/∂t + ρ ((v − w)⋅∇)v − ∇⋅(2μ ∇ˢv) + ∇p = f,    ∇⋅v = 0,

where μ is the dynamic viscosity of the fluid, ρ the density, v the velocity, p the mechanical pressure, f the volumetric force term, and w the domain velocity. As the fluid domain deforms due to the imposed boundary displacements, the deformation of the inner nodes has to be computed; for this, we use the technique proposed in [37]. The numerical model is based on the finite element method (FEM), using the algebraic subgrid scale (ASGS) stabilisation as in [38]. To solve this system efficiently on supercomputers, a split approach is used [39]: the Schur complement is obtained and solved with an Orthomin(1) algorithm [40] preconditioned with a weak Uzawa operator. The momentum equation is solved twice using the generalized minimal residual method (GMRES) with Krylov dimension 100 and a diagonal preconditioner. For the continuity equation a deflated conjugate gradient algorithm is used. Solid mechanics is modelled via the linear momentum balance [41], using a neo-Hookean formulation to represent the platinum-cured silicone [42]. As described in section Description of the benchtop model, the solid material bulk modulus is K = 1×106[Pa] and its shear modulus is G = 2×105[Pa] (corresponding to a Poisson ratio of ν = 0.4[−]). The implicit formulation is solved using a total Lagrangian formulation, with Newton's method as the nonlinear solver and GMRES for the linear systems. A complete description of the fluid-electro-mechanical model can be found in [43].
The referenced work shows the governing equations and the solution strategy for cardiac electrophysiology, mechanical deformation, and the ventricular hemodynamics. The cited manuscript also describes the strategy for both bidirectional couplings present in the three physics problem.

Initial and boundary conditions

The mitral inlet has a constant pressure PLA. The aortic outlet has a 0D Windkessel model with three components: a resistance R1 connected in series with a parallel RC impedance formed by R2 and C [22]. On the deforming ventricular walls, as well as on the rest of the moving fluid domain boundary, the domain-deformation velocity is imposed, i.e. v = w on the deformed boundary, where w is the boundary velocity. The LVAD outlet has a specific type of boundary condition that is thoroughly described in section LVAD boundary condition. The rest of the boundaries have v = 0. As for the initial condition on the unknowns, the initial fluid velocity is v∣t=0 = 0[m/s].
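The behaviour of the 0D aortic outlet can be illustrated with a minimal three-element Windkessel integrator. This is a sketch, not the Alya implementation: a forward Euler step of C dp_c/dt = q − p_c/R2 with proximal pressure p = R1·q + p_c, and the parameter values and constant inflow are hypothetical placeholders, not the calibrated experimental values:

```python
def windkessel3_pressure(q, dt, r1, r2, c, p_c0=0.0):
    """Proximal pressure of a 3-element Windkessel for a flow series q.
    p = R1*q + p_c,  C dp_c/dt = q - p_c/R2  (forward Euler)."""
    p_c = p_c0
    p_out = []
    for qi in q:
        p_out.append(r1 * qi + p_c)
        p_c += dt * (qi - p_c / r2) / c
    return p_out

# Hypothetical SI-unit parameters; a constant inflow drives the capacitor
# pressure towards q*R2, so the proximal pressure tends to q*(R1 + R2).
p = windkessel3_pressure(q=[1e-4] * 200000, dt=0.001,
                         r1=1e7, r2=1e8, c=1e-8)
```

With these numbers the time constant R2·C is 1 s, so the 200 s of simulated time shown is enough to reach the steady value.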

Valve modelling

Mitral and aortic valves are modelled through a pressure-driven porous layer in the valvular region. This porous medium adds an isotropic force to the right-hand side of the momentum equation of the form −P I v, where P is the material porosity and I the identity matrix. This strategy provides a numerical scheme that is robust against the potentially ill-conditioned state of confined fluid. To ensure a smooth valve state change under potentially abrupt changes in the transvalvular pressure, the porosity is driven through a hyperbolic tangent of the transvalvular pressure drop, parameterised by Pmax, the maximum possible porosity; s, the slope of the curve; Δp, the transvalvular pressure drop; and a reference pressure gradient. In practice, P ≈ Pmax if the valve is closed and P ≈ 0 if the valve is open. To avoid spurious valve opening and/or closing due to transient peaks in the transvalvular pressure gradient, the measured pressures are filtered with a median filter.
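The smooth opening law can be sketched as follows. The exact functional form, sign convention and parameter values are assumptions for illustration; only the qualitative behaviour (porosity ≈ Pmax when closed, ≈ 0 when open, with a tanh transition) is fixed by the description above:

```python
import math

def valve_porosity(dp, p_max, slope, dp_ref=0.0):
    """Pressure-driven porosity (illustrative form): ~p_max when the
    transvalvular pressure drop dp is well below dp_ref (valve closed),
    ~0 when well above it (valve open), with a smooth tanh transition."""
    return 0.5 * p_max * (1.0 - math.tanh(slope * (dp - dp_ref)))

# Far below the reference the valve is closed (porosity ~ p_max),
# far above it the valve is open (porosity ~ 0).
closed = valve_porosity(-1e4, p_max=1e6, slope=1e-3)
opened = valve_porosity(+1e4, p_max=1e6, slope=1e-3)
```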

LVAD boundary condition

Pump performance can be characterised with pressure-flow curves for each speed of the pump's rotor. These pressure-flow curves (also called H-Q curves) relate the pressure difference between the pump inlet and outlet to the flow the pump can provide at that speed. Let ΔpVAD = Pout − Pin be the pressure difference between the outlet and the inlet of the pump and QVAD the flow through the pump. The pressure-flow relationship can be approximated with a quadratic equation:

ΔpVAD = aVAD + bVAD QVAD + cVAD QVAD²,    (3)

so for each ΔpVAD there is a single QVAD and vice versa. This relation can be used as a boundary condition, imposing the flowrate corresponding to the computed pressure difference ΔpVAD. With this, the LVAD boundary condition keeps the flowrate constrained by Eq 3. The method used to calculate the variable ranges for the pump model inputs is explained in the S4 Data. The fitting coefficients with their errors are shown in Table 1.
Table 1

Fitting coefficients (a, b, and c) and fitting errors ϵ of the H-Q performance curves.

The pump speed is measured in [rpm]. The units are a[Pa], b[Pa ⋅ s/m3], and c[Pa ⋅ s2/m6].

Pump speed | aVAD ± ϵ [Pa] | bVAD ± ϵ [Pa⋅s/m3] | cVAD [Pa⋅s2/m6]
0k | 0.0 ± 0.0 | 0.0 ± 0.0 | 0.0
8k | 1.17×104 ± 1.44×102 | −7.72×107 ± 2.85×106 | 0.0
11k | 2.17×104 ± 4.89×102 | −9.02×107 ± 6.15×106 | 0.0

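Since the fitted curves in Table 1 have cVAD = 0, imposing the boundary condition reduces to inverting a linear relation for QVAD at each computed pressure difference. A minimal sketch (the chosen ΔpVAD value is an arbitrary illustration, not a measured operating point):

```python
def q_vad(dp, a, b):
    """Pump flow from the H-Q fit dp = a + b*q (Table 1 fits have c = 0).
    dp in Pa, a in Pa, b in Pa*s/m^3; returns q in m^3/s."""
    return (dp - a) / b

# 8k rpm coefficients from Table 1: a = 1.17e4 Pa, b = -7.72e7 Pa*s/m^3.
# An illustrative pressure difference of 5 kPa gives a positive flow.
q = q_vad(dp=5.0e3, a=1.17e4, b=-7.72e7)  # ~8.7e-5 m^3/s (~5.2 L/min)
```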

DARE

Executing the SA and the UQ during the VVUQ process requires generating thousands of inputs for the simulation code, submitting the jobs, processing the simulation results, and extracting the quantities of interest (QoIs) from the physical field results. For this, we created DARE. DARE is an automation tool that works coupled with Sandia's Dakota (version 6.12) [44, 45] and automatically encodes, submits and retrieves jobs on any high performance computing (HPC) infrastructure (Fig 2). Dakota characterises and samples model inputs for multiple analysis types such as SA, UQ or optimisation. The Dakota+DARE pair runs on a computer external to the HPC machine for as long as the analysis under execution may last (up to several weeks in this work). DARE receives Dakota's chosen inputs, processes the simulation templates with an encoder, submits the job to the supercomputer queue, waits for the jobs to finish and processes the final results to feed Dakota with the obtained outputs. The combination of Dakota's restart capabilities with DARE's failure-capture capabilities makes this a robust framework for the required analyses.
Fig 2

Scheme of DARE building blocks.
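The encode step of the loop above can be caricatured as follows. The template keys, parameter names and values are hypothetical placeholders (the actual Alya input files and DARE/Dakota interface are not shown here), and job submission and polling are deliberately omitted:

```python
from string import Template

# Hypothetical stand-in for an Alya case-file template processed by DARE
CASE_TEMPLATE = Template(
    "heart_rate = $hr\nejection_fraction = $ef\npump_speed = $rpm\n")

def encode_case(params: dict) -> str:
    """Fill the input template with one parameter set chosen by Dakota."""
    return CASE_TEMPLATE.substitute(params)

def run_study(parameter_sets):
    """Encode each case; submission to the HPC queue and result
    retrieval (the rest of the DARE loop) are omitted in this sketch."""
    return [encode_case(p) for p in parameter_sets]

cases = run_study([{"hr": 68.42, "ef": 0.22, "rpm": 8000},
                   {"hr": 61.18, "ef": 0.17, "rpm": 11000}])
```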

Design of the VVUQ plan through V&V40 risk-based credibility assessment

We performed credibility assessment by following the ASME V&V 40 [11] standard. The standard provides a framework for assessing the relevance and adequacy of the completed VVUQ activities for medical devices. Applying the standard requires a set of preliminary steps to determine the required level of credibility for the model. These preliminary steps are to identify: (1) the question of interest, this is the question the tool will find an answer to; (2) the context of use (CoU), this is the specific role and scope of the computational model; (3) the QoI, these are the simulation outputs relevant for the CoU; (4) the model influence, this is the contribution of the model in making a decision; (5) the decision consequence, this is the possibility that incorrect model results might lead to patient harm; and (6) model risk, which is based on model influence and decision consequence. Once these items are identified, goals for the credibility evidence can be defined and the VVUQ plan designed.

Question of Interest

For an apically implanted LVAD, does the selected pump speed produce: (a) complete aortic valve opening (QAoV > 5×10−6[m3/s] (0.3[L/min])); and (b) a cardiac output compatible with life (Qtotal > 7×10−5[m3/s] (4.2[L/min])) for a range of HR and EF covering a HF patient population?

Context of Use (CoU)

The heart-LVAD computational model may be used by design engineers to assist in the preclinical development of LVADs, by characterising aortic root, LVAD and intra-LV flows for a given pump speed. The goal of the heart-LVAD computational model is to provide a computational replica of a benchtop experiment for quantitative analyses in parametric explorations. The heart-LVAD computational model is by no means a replacement for animal experiments or clinical trials, but augments the totality of evidence.

Quantities of Interest (QoI)

These are the simulation outputs relevant to the CoU. For completeness, here we repeat that the QoIs are the maximum and average flows through the outlet boundaries (LVAD flow QVAD and aortic root flow QAo).

Model influence

Although the numerical tests will augment the evidence provided by the bench tests to aid design, they do not qualify the safety of the device; animal testing and clinical trials are still required during the regulatory submission to prove safety and efficacy. Therefore the computational model influence can be categorised as low (see Table 2).
Table 2

Risk map.

Adapted from [46].

Model influence \ Decision consequence | low | med | high
high | 3 | 4 | 5
med | 2 | 3 | 4
low | 1 | 2 | 3

Decision consequence

While the CoU specifies the usage of the model for design iterations, the results could be used to make indirect decisions that affect patients' health. If the model fails to make accurate predictions for the question of interest, it could advise an operating condition that produces either: (a) low cardiac output, or (b) a permanently closed aortic valve. These might lead to thromboembolic events, aortic regurgitation or death. Therefore the decision consequence is categorised as high (see Table 2).

Risk assessment

As the model influence has been categorised as “low” and the decision consequence as “high”, the LV-LVAD model is categorised with a risk of 3 on the 1–5 scale from Table 2, therefore requiring medium-level goals in the VVUQ plan.

Translation of model risks into credibility goals

The ASME V&V40 standard [11] defines 13 credibility factors (some containing sub-factors) that break down the assessment of the VVUQ activities. Once the risk associated with the modelling tool has been determined, the next stage of the V&V40 pipeline is defining a ranking (gradation) for each factor, sorted by increasing level of investigation, and then selecting a credibility goal for each factor based on the model risk. Table 3 lists the 13 credibility factors and sub-factors. Gradations for each factor are provided in the S1 Data. For most credibility factors, the gradation proposed in the V&V40 standard is used. Table 3 summarizes the maximum possible score in the gradation, the targeted goal and the achieved score. The targeted goal also includes the description required to achieve that score. Per V&V40, goals were chosen so that model credibility is generally commensurate with model risk. Therefore, for most factors, a medium level or higher goal was chosen. The rationale behind the chosen goals is provided in the S2 Data.
Table 3

ASME V&V40 credibility factors [11] analysed on the risk-based assessment.

The table shows the maximum possible score (“Max.” column), the desired goal (“Goal” column) and the obtained score (“Obt.” column). The goal column also includes the description of the activity to achieve that gradation.

Aspect | Max. | Goal | Obt.
1. Verification (Sec. Verification)
1.1. Code
1.1.1. Software quality assurance (SQA) | C | B [SQA procedures are specified and documented.] | C
1.1.2. Numerical code verification | D | C [The numerical solution is compared to an exact solution.] | D
1.2. Calculation
1.2.1. Discretisation error | C | B [Convergence analyses are performed, obtaining stable behaviours.] | C
1.2.2. Numerical solver error | C | B [Solver parameters are based on values from a previously verified model.] | B
1.2.3. User error | D | B [Key inputs and outputs were verified by the practitioner.] | C
2. Validation (Secs. Sensitivity analysis and Validation with uncertainty quantification)
2.1. Computational model
2.1.1. Model form | C | B [Influence of some assumptions is explored.] | B
2.1.2. Model inputs
2.1.2.1. Quantification of sensitivities | C | B [A SA of the expected key parameters is performed.] | B
2.1.2.2. Quantification of uncertainties | D | B [UQ is executed on expected key inputs but not propagated to the QoIs.] | C
2.2. Comparator
2.2.1. Test samples
2.2.1.1. Quantity of test samples | C | A [A single sample is used.] | A
2.2.1.2. Range of characteristics of test samples | D | A [A single test condition is examined.] | A
2.2.1.3. Measurements of test samples | C | B [One or more key characteristics are measured.] | C
2.2.1.4. Uncertainty of test sample measurements | C | A [Characteristics uncertainty is not addressed.] | A
2.2.2. Test conditions
2.2.2.1. Quantity of test conditions | B | B [Multiple test conditions.] | B
2.2.2.2. Range of test conditions | D | B [Test conditions representing a range of conditions near the nominal range are examined.] | C
2.2.2.3. Measurements of test conditions | C | B [One or more key test conditions are measured.] | B
2.2.2.4. Uncertainty of test condition measurements | C | B [UQ of the test conditions incorporated instrument accuracy only.] | B
2.3. Assessment
2.3.1. Equivalence | C | B [The types of all inputs are similar, but ranges are not equivalent.] | C
2.3.2. Output comparison
2.3.2.1. Quantity | B | B [Multiple outputs were compared.] | B
2.3.2.2. Equivalency of output parameters | C | B [Most types of outputs are similar.] | C
2.3.2.3. Rigour of output comparison | C | B [Comparison was performed by arithmetic difference.] | C
2.3.2.4. Agreement of output comparison | C | B [The level of agreement is satisfactory for some key comparisons.] | B
3. Applicability (Sec. Discussion on the V&V40 credibility factors: Achieved score)
3.1. Relevance of the quantities of interest | C | B [A subset of the QoIs are identical to those for the CoU.] | C
3.2. Relevance of the validation activities to the CoU | D | B [There is partial overlap between the ranges of the validation points and the CoU.] | C

ASME V&V40 credibility factors [11] analysed on the risk-based assessment.


Design and goals of the VVUQ plan

This section explains the VVUQ activities carried out to achieve the credibility goals defined. The VVUQ plan has been designed following [9, 11, 12] and comprises four steps:

Provide verification evidence: SQA practices should be followed to ensure reproducibility and traceability. Numerical code verification (NCV) is mandatory to ensure correctness in the coding of the models, and numerical calculation verification is mandatory to ensure a sufficient spatial discretisation of the problem.

Execute a sensitivity analysis in the operating range: A non-linear global SA within the operating range of the cases should be executed to (a) understand the impact of each input on the QoIs, and (b) safely reduce the number of input variables for the UQ through Pearson's ρ and Sobol indices analyses. Goal (b) has a direct impact on the UQ, as it reduces its computational cost.

Perform validation with uncertainty quantification: The reduced input model obtained from the SA is used to execute the UQ analysis. At least a middle point and the extreme cases of the operation envelope should be investigated. A comparison of the QoIs' distributions is required, including a validation metric that allows quantitatively comparing the results between validation points and against other similar works or future projects.

Adequacy assessment: Evaluate whether the simulation credibility evidence is sufficient to safely answer the question of interest.

Code and calculation verification

Code and calculation verification tests provide evidence of the correct translation of the governing equations into the numerical solution procedure. They are enclosed by SQA, which provides the means to monitor the software engineering processes and ensure traceability of changes. Numerical code verification tests provide a metric of the correctness of the governing equations implementation; these tests generally compare the solution obtained by the simulation code with a known analytical solution. Numerical calculation verification is intended to bound the error introduced by the numerical discretisation; it involves comparing results on increasingly refined discretisations to estimate the numerical error in the final simulations. For this manuscript, we chose to execute stationary, dynamic, 2D, and 3D numerical code verification tests to ensure the correctness of the governing equations implementation. Finally, increasingly complex numerical calculation verification tests are shown that allow bounding the discretisation error.

Sensitivity analysis

A SA [47] is a statistical tool that quantifies the impact of each input variable on each QoI of the model. It is helpful as it ranks the input variables by their contribution to the variation of the model output. On the one hand, identifying the less relevant inputs allows reducing the dimensionality of the problem, as these inputs can be safely omitted to decrease the computational cost during UQ. On the other hand, reducing the experimental uncertainty of the most relevant inputs is critical to obtain accurate model predictions: a large uncertainty in a highly impactful input will produce a large uncertainty in the model output.

The SA is carried out by first sampling the input values using latin hypercube sampling (LHS) for all the shared input variables. These samples are then used to execute independent CFD simulations. In this work we execute two types of SA: (i) local SA via Pearson coefficients [48], and (ii) global SA through total Sobol indices [49], relying on a 5th-order polynomial chaos expansion (PCE) [50]. While computing both Pearson coefficients and Sobol indices may seem redundant, the former is simpler to understand and implement than the latter, which eases the task of reproducing and comparing the manuscript results. Both analyses rely on 500 samples. Pearson's coefficient analysis is a first-order approach that provides insight into the model behaviour with an accessible and straightforward method. However, it measures only the linear association between inputs and outputs, and it is valid only under Pearson's assumptions [51] of linear and homoscedastic data with no multivariate outliers. For complex data distributions, Sobol indices are a better-suited method that quantifies the importance of each input while accounting for complex factors such as nonlinearities, input interactions, and sample dispersion.

Sobol's global SA relies on high-order integrals to accurately calculate the indices, and these integrals require a relatively large number of samples to be evaluated. In this manuscript, the original 500 samples are used to fit a PCE emulator, which is afterwards used to obtain the 5000 samples required for the Sobol integrals. The PCE polynomial order has been chosen to balance computing time and fitting accuracy. Further description of the Sobol indices and the PCE method can be found in [49, 50].
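The two-stage SA can be sketched with numpy/scipy. The algebraic response below is a hypothetical stand-in for the CFD model (the real QoIs require full Alya runs), the input ranges are loosely based on Table 4, and for brevity the total Sobol indices are estimated with a direct Saltelli/Jansen pick-and-freeze estimator on the cheap model rather than through the PCE emulator used in the manuscript:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical algebraic response standing in for one QoI of the CFD model.
# Inputs: EF [-], HR [bpm], and a rescaled A_VAD coefficient (illustrative).
def model(x):
    ef, hr, a_vad = x[..., 0], x[..., 1], x[..., 2]
    return ef * hr + 0.5 * a_vad**2   # nonlinear, with an interaction term

lo = np.array([0.10, 40.0, 0.25])
hi = np.array([0.35, 120.0, 10.0])

# (i) Local SA: Pearson's rho over 500 LHS samples
X = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(500), lo, hi)
Y = model(X)
pearson = [np.corrcoef(X[:, j], Y)[0, 1] for j in range(3)]

# (ii) Global SA: total Sobol indices via the Jansen pick-and-freeze
# estimator with 5000 samples (the manuscript samples a fitted PCE instead).
N = 5000
A = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(N), lo, hi)
B = qmc.scale(qmc.LatinHypercube(d=3, seed=2).random(N), lo, hi)
fA = model(A)
varY = fA.var()
S_total = []
for j in range(3):
    ABj = A.copy()
    ABj[:, j] = B[:, j]                       # resample only input j
    S_total.append(0.5 * np.mean((fA - model(ABj)) ** 2) / varY)
```

In this toy response the quadratic term dominates the output variance, so the third input receives the largest total Sobol index while its Pearson ρ alone would understate the interaction between EF and HR.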

Uncertainty quantification

Validation involves measuring the difference between two sources of predictions: the experiment and the simulation. These two sources are subject to different types of uncertainties that should be identified as part of a UQ analysis. The measuring instruments in the experiment introduce the measurement error, while user error is introduced by experimentalist variability. In the experiment in this manuscript, each execution contains multiple beats. Therefore, even for the same set of inputs, the measuring instrument error (see section Description of the benchtop model) produces a dispersion in the QoIs that requires the measurement error to be quantified. As we use retrospective experimental data not specifically designed for VVUQ, there is only a single execution of the experiment for each of the six validation points, which hampers quantification of the user error. To tackle this issue we add a 10% error range to the QoIs measured in the experiment to account for the user error. As there is no other information on the shape of that user error, no probability distribution can be assumed. The numerical error in the simulation tool is estimated by the code and calculation verification (see S3 Data), and the input error is quantified during the UQ. All the parameters shared by the experimental and numerical models are listed and classified in Table 4.
Table 4

Table of model inputs and their uncertainty characterisation.

Variable | Symbol | Classification | Characterisation | Value range
Density | ρ | deterministic | exact value | 1100 [kg/m^3]
Viscosity | μ | deterministic | exact value | 0.00372 [Pa·s]
Heart rate | HR | aleatory | uniform | [40, 120] [bpm]
Atrial pressure | P_LA | aleatory | uniform | [0.5, 1.5]×10^3 [Pa]
Ejection fraction | EF | aleatory | uniform | [0.1, 0.35] [−]
Aortic serial resistance | R_sAo | aleatory | uniform | [5, 20]×10^6 [Pa·s/m^3]
Aortic parallel resistance | R_pAo | aleatory | uniform | [50, 200]×10^6 [Pa·s/m^3]
Aortic parallel capacitance | C_pAo | aleatory | uniform | [5.0, 20.0]×10^-7 [m^3/Pa]
Constant VAD coefficient | A_VAD | aleatory | uniform | [0.025, 1]×10^1 [m^3/s]
Linear VAD coefficient | B_VAD | aleatory | uniform | [0.5, 5]×10^-4 [Pa·s/m^3]
Each one of the model variables can be characterised as one of the following three: (a) deterministic, when its value is known; (b) aleatory, when the variable is uncertain due to inherent variation and can be characterised with a cumulative distribution function (CDF); (c) epistemic, when the variable is affected by reducible uncertainty due to lack of knowledge and can be represented with a bounded interval. The inputs of the model should be characterised by one of these three categories and treated accordingly. The uncertain variables are sampled using LHS in the uniform ranges identified in Table 4.

While the SA is executed for all the variables in Table 4, the UQ may be executed on a reduced set of inputs: if the SA concludes that some variables have little to no impact on the QoIs, these variables can be safely omitted in the UQ to reduce the computational cost. These variables are identified with the Pearson's ρ and Sobol index analyses. The selected impactful uncertain variables are forward-propagated through the computational model down to the output to obtain the QoI distributions. Once the output distributions are obtained for the experiments and for the simulations, the differences are quantified using a validation metric. To evaluate these differences, we use the Minkowski L1 norm (MN) validation metric proposed in [9] in two different ways. In the first approach (called MN), a uniform distribution is assumed for the experimental data, and the MN is integrated between that artificially built experimental empirical cumulative distribution function (ECDF) and the simulation ECDF. In the second, so-called p-box approach [52], no distribution is assumed for the experimental data; the MN is therefore calculated between the simulation ECDF and the maximum (MN+) and minimum (MN−) limits of the experimental data.
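Both variants of the metric reduce to areas between cumulative distribution functions. The following is a minimal numpy sketch, assuming the experimental information is given as an interval [exp_lo, exp_hi]; function names are ours, not from [9]:

```python
import numpy as np

def ecdf_on_grid(samples, q):
    """Evaluate the empirical CDF of `samples` on the grid `q`."""
    s = np.sort(samples)
    return np.searchsorted(s, q, side="right") / s.size

def minkowski_l1(sim, exp_lo, exp_hi, n_grid=4000):
    """MN metric: area between the simulation ECDF and a uniform CDF
    constructed on the experimental range [exp_lo, exp_hi]."""
    q = np.linspace(min(np.min(sim), exp_lo), max(np.max(sim), exp_hi), n_grid)
    d = np.abs(ecdf_on_grid(sim, q) -
               np.clip((q - exp_lo) / (exp_hi - exp_lo), 0.0, 1.0))
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(q)))  # trapezoid rule

def minkowski_pbox(sim, exp_lo, exp_hi, n_grid=4000):
    """p-box variant: MN- and MN+ against step CDFs placed at the
    experimental minimum and maximum (no experimental distribution assumed)."""
    q = np.linspace(min(np.min(sim), exp_lo), max(np.max(sim), exp_hi), n_grid)
    F_sim = ecdf_on_grid(sim, q)
    dq = np.diff(q)
    d_minus = np.abs(F_sim - (q >= exp_lo).astype(float))
    d_plus = np.abs(F_sim - (q >= exp_hi).astype(float))
    mn_minus = float(np.sum(0.5 * (d_minus[1:] + d_minus[:-1]) * dq))
    mn_plus = float(np.sum(0.5 * (d_plus[1:] + d_plus[:-1]) * dq))
    return mn_minus, mn_plus
```

As a sanity check, simulation samples spread uniformly over the experimental interval give an MN near zero, while the p-box distances to the interval endpoints approach half the interval width.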

Results

Results are split into three parts. Section Verification briefly relates the verification results. Section Sensitivity analysis shows the results for the SA of the numerical model, which rank the model input variables according to their impact on the outputs. As mentioned in the section Design and goals of the VVUQ plan, LHS is used during the SA to sample the input values domain. Results are analysed through scatter plots, Pearson's ρ correlation and total Sobol indices. Section Validation with uncertainty quantification shows a UQ analysis for six validation points reproduced in the SDSU-CS. The six validation points include two different conditions, so-called 22[%]@68.42[bpm] and 17[%]@61.18[bpm], with three pump speeds (0k, 8k and 11k[rpm]) each. As the benchtop experiment uses a continuous flow LVAD, we assume there is no beat-to-beat variation. Even if there may be a change in the internal LV flow structures due to its chaotic nature, this does not affect the mass flow through the boundaries or the LV volume. The model is intended to reproduce inbound and outbound flows in the LV of the CS; therefore, the final goal is to correctly reproduce the flow meter signals of the experiment. As the statistical tools for the UQ analysis require scalars, the QoIs chosen to characterise the flows are the maximum and average aortic and LVAD flows. An analysis and comparison of each set of simulation results at every spatial point of the volumetric domain is virtually impossible even for a small number of cases, let alone the more than 1000 simulation executions in this work. Therefore, even if sample qualitative results are shown for each validation point, the maximum and average flows through the boundaries are calculated and used to compute statistical trends.

Verification

This section will briefly describe the results related to the verification credibility factors. Further description of the numerical code verification and numerical calculation verification can be found in the S3 Data.

Numerical code verification

The simulation software used in this manuscript is developed with a continuous integration and continuous deployment (CI/CD) strategy based on git, combining feature-driven development and feature branches with issue tracking. The SQA pipelines ensure continuous integration, running a series of software checks, builds, and regression tests whenever the developers modify the source code. The tests cover a combination of 27 architectures, compilers, and optimisation options, running more than 200 regression tests for a total of more than 4000 different executions. This is complemented with compile-time unit testing and a bi-weekly benchmark suite that measures performance evolution. Numerical code verification is executed as per Section 2 of [12] for a 2D Poiseuille and a 3D Womersley flow problem in a cylindrical tube. These problems have non-trivial analytical solutions that are used as the true value. For both cases the discretisation error is monitored as the grid is systematically refined. If the refinement ratio between two consecutive meshes i and j is defined as r_{i,j} = r_i/r_j then, for the cases in this manuscript, r_{1,2} = r_{2,3} = r = 2.0, a figure considerably larger than 1.3, the minimum recommended value [12]. For the 2D Poiseuille flow, the largest velocity magnitude root mean square error (RMSE) is 0.03% for the finest mesh and the observed order of convergence is p_obs = 1.907, compatible with the theoretical order of convergence of 2 of the 2nd-order backward differentiation formula (BDF) time scheme used. For the 3D Womersley flow, the largest RMSE is 0.05% for the finest mesh and the observed order of convergence is p_obs = 1.82, close to the theoretical order of the same 2nd-order BDF time scheme. Both the 2D Poiseuille flow and the 3D Womersley flow are thoroughly described in the S3 Data.

With the results of the code verification process we demonstrate [53] that: (a) the equations are solved correctly, with at most 0.5% error with respect to the analytical solution for Reynolds numbers representative of the LVAD problem; (b) the observed order of accuracy is similar to the theoretical order; and (c) the equation coding, transformations, and solution procedures are correct.
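The observed orders of convergence quoted above follow the standard grid-refinement formula p_obs = ln(e_coarse/e_fine)/ln(r); a minimal sketch:

```python
import math

def observed_order(err_coarse, err_fine, r=2.0):
    """Observed order of convergence p_obs from discretisation errors on two
    consecutive grids whose refinement ratio is r (r = 2 in this study)."""
    return math.log(err_coarse / err_fine) / math.log(r)

# A second-order scheme should quarter the error when the spacing is halved:
p = observed_order(4.0e-3, 1.0e-3)   # -> 2.0
```

Comparing p_obs against the theoretical order of the time scheme, as done for the 2D and 3D cases here, is what supports the claim that the discretisation error behaves as expected.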

Numerical calculation verification

The original model mesh described in section Description of the numerical model was refined twice using the technique described in [54]. The meshes were used for two different configurations of the problem. First, we used a set of simplified boundary conditions: geometry deformation, valve modelling, and the pump boundary condition were deactivated, as the behaviour of these features depends on the CFD results. The goal is to evaluate the CFD solver in a complex domain with a non-ideal mesh. As such a geometry does not have an analytical solution, the RMSEs are calculated against the finest mesh computed. The maximum velocity magnitude RMSE is 0.91% and the observed order of convergence is p_obs = 0.971, compatible with the theoretical order of convergence of 1 provided by the first-order trapezoidal time integration. The second calculation verification test is executed with the complete model, including all features. As it is a dynamic problem, time-averaged quantities were calculated. The maximum velocity magnitude RMSE is 2.4% and the time-averaged observed order of convergence is compatible with the theoretical order of convergence of 1 provided by the first-order trapezoidal time integration. Detailed results are shown in the S3 Data.

Discussion of the verification results

While the applicability of validation results is currently a topic of active discussion [55], the applicability of verification results is rarely discussed. The reason is probably the scarce number of verification tests that have a non-trivial analytical solution or a manufactured solution. The tests in this section are executed with Reynolds numbers close to those in the LV, supporting the credibility of the solution procedure for an operating condition similar to the validation operating condition. Simple, stationary physical problems such as the 2D Poiseuille flow yield small errors at a reduced computational cost. When solving more complex transient problems (e.g. the 3D Womersley transient flow), the errors increase. The model solved in this project (section Description of the numerical model) is not only a transient problem solved on a fine mesh, but also one in which: (a) the fluid mesh is deforming, (b) the boundary conditions (such as the valves or the pump boundary condition) vary with the solution of the CFD solver and, (c) the near-ill-conditioned problem of valve closing must be solved. Therefore, considerably larger errors were expected compared to simpler academic problems. Despite that, the numerical error remained under 2.4% for the most complex use case presented.

Sensitivity analysis

Results for the SA

The SA is intended to identify the input variables with the highest impact on the QoIs. The variable ranges used for the SA are shown in Table 4. The density ρ and the dynamic viscosity μ are easily and accurately measured; furthermore, due to the system operating pressures and the fluid bulk properties, these quantities are not expected to change. These two variables are therefore classified as deterministic, with their exact values known. The rest of the input variables span approximately one order of magnitude, so the UQ sweeping ranges fall within the SA ranges. To proceed with the LHS, a uniform distribution is considered, obtaining 500 samples of the input variables. These samples are used to run 500 simulations, obtaining an ensemble of the QoIs. The sampling and results are shown in the scatter plot in Fig 3 together with Pearson's correlation coefficient ρ. From a visual analysis of the scatter plot it can be seen that the data is nonlinear, heteroskedastically distributed, and contains multivariate outliers, failing 3 of the 7 assumptions required for Pearson's analysis. To overcome this issue, a global SA is done by calculating total Sobol indices (as indicated in section Sensitivity analysis), which quantify the importance of each input while accounting for complex factors such as nonlinearities, input interactions, and sample dispersion. The total Sobol index of each input with respect to each QoI is shown as a tornado plot in Fig 3: the larger the index, the more important that input is for the QoI. The total cost of the 500 simulations is about 50 [core-years] on the Marenostrum IV supercomputer.
Fig 3

Scatter plots and total Sobol indices tornado plots for the 8 input variables and the 6 QoIs.

The scatter plot also shows the Pearson’s linear correlation number ρ in the top left corner. Units are intentionally avoided in the Y-axis of the total Sobol indices tornado plot to ease its legibility.


Discussion of the SA results

The scatter plots and the Pearson's ρ analysis shown in Fig 3 provide a simple tool to identify the most important variables in the LV-LVAD system. While these tools are useful for a first-order understanding of the system's behaviour, they fall apart for non-linear effects and complex interactions. Total Sobol indices provide a more insightful tool that accounts for the effect of each input variable, and their interactions, on each QoI. From the total Sobol indices analysis we can see that the most highly ranked inputs are the EF, the HR, A_VAD and B_VAD. Inputs such as the Windkessel parameters R_sAo, R_pAo, C_pAo, and the left atrial pressure P_LA have total Sobol indices smaller than 0.25 for at least one QoI, so they are qualified as not relevant for the UQ. The reason for this is addressed in the discussion at the end of this section. The wide ranges chosen for the global SA provide a trustworthy set of Sobol indices that are applicable to the smaller ranges used during the UQ analysis. While SA is a common tool in other fields of cardiac modelling, such as electrophysiology and solid mechanics [56-58], there is no published work with a local or a global SA for 3D CFD of the LV-LVAD system. Despite this, the trends in Fig 3 agree with experimental data and clinical observations. A higher pump speed, translated as a larger A_VAD coefficient, has a positive correlation with the LVAD flow and a negative correlation with the aortic flow [59]; the reason is that the suction produced by the pump reduces the aortic valve opening [8, 60]. Also, the HR and the EF have a direct positive correlation with the LVAD and aortic flows [61]. The unexpectedly [61, 62] small influence of the mean atrial pressure (P_LA) and the arterial impedance (characterised via R_sAo, R_pAo, and C_pAo) can be explained by the lack of the Frank-Starling mechanism [63] in the silicone ventricle of the experiment, and therefore also in its computational analogue.

This may raise a concern about the model applicability if it were used for clinical guidance. But, as stated in the CoU of this manuscript: "(…) the model is intended to provide a computational replica of a benchtop experiment (…)". While quantification of the so-called applicability error would be critical to correctly estimate the total model error, it is still under investigation [64].

Validation with uncertainty quantification

The global SA in section Sensitivity analysis identified the four most relevant variables, namely the EF, the HR, A_VAD, and B_VAD. These are the simulation input variables studied during the UQ analysis, which consists of six validation experiments varying the four chosen inputs. For each validation point, a qualitative set of images is shown that allows visualising the CFD behaviour of the problem. The quantitative results are analysed through scatter plots and ECDFs. To evaluate the differences between the experimental and simulation distributions, we use the MN validation metric described in section Uncertainty quantification. In every figure, experimental results are represented in orange and simulation results in blue.

Validation points and ranges

For the UQ analysis, the SDSU-CS is configured at two beating conditions. The condition 22[%]@68.42[bpm] has EF = 22[%] and HR = 68.42[bpm]; the condition 17[%]@61.18[bpm] has EF = 17[%] and HR = 61.18[bpm]. Three pump configurations are used in each case: 0k[rpm] (pump off with clamped LVAD outflow conduit, recall section Description of the benchtop model), 8k[rpm], and 11k[rpm]. This makes a total of six validation points, as described in Table 5. These six validation points are chosen to vary the inputs that provide information to answer the question of interest: EF, HR, and pump speed (via the coefficients a_VAD and b_VAD). As there is no information on the precision of the prescribed EF and HR, a 10% error is assumed for these two inputs, producing the ranges in the second and third columns of Table 5. Similarly, and as explained in section Design and goals of the VVUQ plan, the experimental data accounts for the instrument error; but, as we have only a single experiment per validation point, an error range of 10% is included in the QoIs to account for the measurement uncertainty. To calculate one of the validation metrics (MN), we assume a uniform distribution of the measured QoIs over that assumed range. In contrast, the multiple simulations executed let us calculate the ECDF used for the metrics. The method to calculate the variable ranges for the pump model inputs is explained in the S4 Data. The coefficient ranges and the uncertainty characterisation are shown in Table 6.
Table 5

The six validation points used for the UQ analysis.

Condition | EF value range | HR value range | Pump speed
22[%]@68.42[bpm] | [19.8, 24.2][%] | [65.55, 72.45][bpm] | 0k, 8k, 11k[rpm]
17[%]@61.18[bpm] | [15.3, 18.7][%] | [53, 63][bpm] | 0k, 8k, 11k[rpm]
Table 6

Range of H-Q curve coefficients used for the UQ analysis.

The range is obtained as 2ϵ + ϵ where ϵ is 10% of the measured value.

Pump speed | Coefficient | Uncertainty classification | Characterisation | Value range
0k | a_VAD | deterministic | exact value | 0.0 [Pa]
0k | b_VAD | deterministic | exact value | 0.0 [Pa·s/m^3]
0k | c_VAD | deterministic | exact value | 0.0 [Pa·s^2/m^6]
8k | a_VAD | aleatory | uniform | [10.91, 12.66]×10^3 [Pa]
8k | b_VAD | aleatory | uniform | −[7.90, 7.53]×10^7 [Pa·s/m^3]
8k | c_VAD | deterministic | exact value | 0.0 [Pa·s^2/m^6]
11k | a_VAD | aleatory | uniform | [19.64, 23.77]×10^4 [Ba]
11k | b_VAD | aleatory | uniform | −[9.80, 8.24]×10^7 [Pa·s/m^3]
11k | c_VAD | deterministic | exact value | 0.0 [Pa·s^2/m^6]

These simulation variables are sampled using LHS, obtaining 50 samples per validation experiment and making a total of 300 numerical simulations that required about 30 [core-years] on the Marenostrum IV supercomputer.
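For illustration, the pump boundary condition can be evaluated as a polynomial H-Q curve. The quadratic form dP = a + b·q + c·q² is inferred from the coefficient units in Table 6, and the mid-range coefficient values below are illustrative only (the UQ samples a_VAD and b_VAD uniformly in their ranges):

```python
def lvad_pressure_head(q, a, b, c=0.0):
    """Pump head [Pa] from an H-Q performance curve dP = a + b*q + c*q**2;
    the form is inferred from the coefficient units in Table 6."""
    return a + b * q + c * q * q

# Mid-range coefficient values for the 8k[rpm] point of Table 6 (illustrative)
a_8k = 11.785e3     # [Pa], midpoint of [10.91, 12.66]x10^3
b_8k = -7.715e7     # [Pa*s/m^3], midpoint of -[7.90, 7.53]x10^7
q = 5.0e-5          # LVAD flow [m^3/s], roughly 3 [L/min]
dp = lvad_pressure_head(q, a_8k, b_8k)   # positive head of a few kPa
```

With c_VAD fixed at zero, as in all rows of Table 6, the curve reduces to a linear pressure-flow relation: the head decreases as the flow through the pump increases.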

Condition 22[%]@68.42[bpm]

Fig 4 shows qualitative surface results for the condition 22[%]@68.42[bpm] and the pump speeds 0k, 8k, and 11k[rpm]. Afterwards, Figs 5 to 7 provide quantitative results for the referenced condition and pump speeds. The flow plots in Figs 5a, 6a and 7a show the aortic root and LVAD flows for the experiment (orange) and the simulation (blue). As the UQ analysis also accounts for the HR, the time axis is normalised. The scatter plots in Figs 5c, 6c and 7c show the inputs on the x-axis and the outputs on the y-axis for the experiment (orange) and the simulation (blue). The orange range on the y-axis represents the assumed 10% measurement error, and the kernel density estimation (KDE) of the simulation results is represented as blue shades surrounding the simulation measures. Figs 5d, 6d and 7d show the simulation ECDF in blue and the experimental limits as two vertical orange ranges, together with the artificial uniform distribution used to calculate the MN. Finally, the bench and numerical experiment data limits and the multiple MN values are shown in Figs 5b, 6b and 7b.
Fig 4

Qualitative surface results for the condition 22[%]@68.42[bpm].

Pump speed is 0k[rpm], 8k[rpm], and 11k[rpm] in the first, second, and third rows respectively. The columns indicate different time frames in the simulation: t = 0.42[s] shows the plateau previous to systole, t = 0.58[s] the atrial kick, t = 0.77[s] systole, and t = 1.03[s] diastole. Videos of the simulations can be found in the S1 Video.

Fig 5

Summary for the condition 22[%]@68.42[bpm] and 0k[rpm].

(a): aortic valve and LVAD flows. (b): validation metrics. (c): scatter plot showing the experimental and simulation data. (d): ECDF for the simulation, experimental data limits and the constructed uniform distribution.

Fig 7

Summary for the condition 22[%]@68.42[bpm] and 11k[rpm].

(a): aortic valve and LVAD flows. (b): validation metrics. (c): scatter plot showing the simulation and experimental data. (d): ECDF for the simulation, experimental data limits and the constructed uniform distribution.

Fig 6

Summary for the condition 22[%]@68.42[bpm] and 8k[rpm].

(a): aortic valve and LVAD flows. (b): validation metrics. (c): scatter plot showing the simulation and experimental data. (d): ECDF for the simulation, experimental data limits and the constructed uniform distribution.


Condition 17[%]@61.18[bpm]

Results are presented similarly to section Condition 22[%]@68.42[bpm]. Fig 8 shows a set of time frames for the three pump speeds during the condition 17[%]@61.18[bpm]. Afterwards, Figs 9 to 11 show the experimental and simulation results. Again, experimental results are shown in orange and simulation results in blue. The experiment and simulation ranges and the validation metrics are shown in Figs 9b, 10b and 11b.
Fig 8

Qualitative surface results for the condition 17[%]@61.18[bpm].

Pump speed is 0k[rpm], 8k[rpm], and 11k[rpm] in the first, second, and third rows respectively. The columns indicate different time frames in the simulation: t = 0.42[s] shows the plateau previous to systole, t = 0.64[s] the atrial kick, t = 0.85[s] systole, and t = 1.16[s] the diastolic filling. Videos of the simulations can be found in the S1 Video.

Fig 9

Summary for the condition 17[%]@61.18[bpm] and 0k[rpm].

(a): aortic valve and LVAD flows. (b): validation metrics. (c): scatter plot showing the simulation and experimental data. (d): ECDF for the simulation, experimental data limits and the constructed uniform distributions.

Fig 11

Summary for the condition 17[%]@61.18[bpm] and 11k[rpm].

(a): aortic valve and LVAD flows. (b): validation metrics. (c): scatter plot showing the simulation and experimental data. (d): ECDF for the simulation, experimental data limits and the constructed uniform distributions.

Fig 10

Summary for the condition 17[%]@61.18[bpm] and 8k[rpm].

(a): aortic valve and LVAD flows. (b): validation metrics. (c): scatter plot showing the simulation and experimental data. (d): ECDF for the simulation, experimental data limits and the constructed uniform distribution.


Discussion of the UQ results

As with the SA, UQ analyses have recently been performed for electrophysiology and electromechanical models of the heart [56, 58, 65], but no equivalent studies have been conducted for ventricular CFD. The first noticeable feature in the scatter plots is the lack of dispersion along the x-axes for the experimental results. The x-axes represent the prescribed values of the inputs in the experiment or simulation. As there is a single execution of the experiment per validation point, there is no information on the input variable distribution, which translates into zero dispersion along the x-axis in the experimental scatter plots. On the contrary, the multiple beats contained in each validation point and the measurement error of the flow meter are seen as y-axis dispersion for the experiment. Even with a highly reproducible and well-tested experimental setup such as the SDSU-CS, we were not able to obtain the experimental probability distributions required for the highest rankings in the ASME V&V40 standard. To obtain these distributions, the exact same experiment would have to be repeated multiple times, a time-consuming and expensive process. While working with such distributions as input data is unarguably optimal, we face the most common case, where only a single experimental data point is available [9]. Another noteworthy detail is that the MN is an absolute metric, so its interpretation depends on the QoI's range and mean value at the specific condition. As an example, the 0k[rpm] cases for both conditions may seem the trivial solution for the LVAD flow QoIs but, even though they have the smallest validation metrics in this manuscript, there is no overlap for these QoIs in the scatter plots. Results show the smallest validation metrics (i.e. better agreement) for the mid-point working conditions, with larger differences for the extreme cases.
The large uncertainty ranges in the pump H-Q curve (see S4 Data and Table 6) for the 8k[rpm] and, most noticeably, the 11k[rpm] speed produce a considerable dispersion in the simulation results. This dispersion is most visible in the 11k[rpm] cases (Figs 7 and 11) as a large uncertainty range in the flow curves (Figs 7a and 11a), but it also shows in the empirical cumulative distribution functions (ECDFs) (Figs 7d and 11d) and the scatter plots (Figs 7c and 11c) as a poor overlap of the blue-shaded simulation KDE and the orange range for the experiments. In particular, Fig 11c shows a clustering of the simulation points. As part of the application study in this manuscript, we show in Fig 12 the average aortic valve flow for each execution. The data has a threshold at 5×10−6[m3/s] (0.3[L/min]), the point at which the aortic valve starts to fully open, allowing a consistent flow through it. This is because the a and b uncertainty ranges are so wide that they even allow aortic valve opening in some scenarios, something also observed in [59]. The largest validation metric (Figs 7b and 11b) with the pump operating at 11k[rpm] occurs for both conditions, 17[%]@61.18[bpm] and 22[%]@68.42[bpm].
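The comparison described above can be sketched in code. The following is a minimal illustration (not the authors' implementation; the function name `area_validation_metric` is ours) of an area-type metric between a simulation ECDF and the CDF of a uniform distribution constructed from the experimental data limits, which keeps the metric absolute, i.e. in the units of the QoI:

```python
import numpy as np

def area_validation_metric(sim_samples, exp_min, exp_max):
    """Area between the simulation ECDF and the CDF of a uniform
    distribution built from the experimental data limits.
    The result is absolute, i.e. in the same units as the QoI."""
    q = np.sort(np.asarray(sim_samples, dtype=float))
    # Evaluate both CDFs on a common grid covering both supports.
    grid = np.union1d(q, np.linspace(exp_min, exp_max, 201))
    f_sim = np.searchsorted(q, grid, side="right") / q.size   # ECDF
    f_exp = np.clip((grid - exp_min) / (exp_max - exp_min), 0.0, 1.0)
    gap = np.abs(f_sim - f_exp)
    # Trapezoidal integration of the gap between the two CDFs.
    return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(grid)))
```

A QoI whose simulated distribution matches the experimental range closely yields a small area, while a clustered or shifted simulation distribution (as in the 11k[rpm] cases) yields a larger one.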
Fig 12

Aortic valve flow and total flow as a function of the HR and EF.

Data are shown for both 8k[rpm] and 11k[rpm]. The grey hatched region represents the minimum limit for both QoIs.

The 0k[rpm] results in Figs 5 to 9 also show a mismatch between the numerical and the bench data. As the pump H-Q curves are forced to zero in the simulation, there is also zero uncertainty in the associated inputs (a and b). This produces LVAD flows that are unequivocally zero and a Dirac delta probability distribution function for these QoIs in the simulation results. On the contrary, the experiment still shows a small y-axis scattering in the LVAD QoIs (Figs 5c and 9c), produced by flow disturbances around the flow meter and the sensor's offset error (see the flow-meter characteristics in section Description of the benchtop model). The 0k[rpm] cases also highlight a modelling error in the simulation results, seen most clearly in the flow curves in Figs 5a and 9a: the aortic flow wave produced by the pressure curve in the model is too triangular and short-lived, and the backflow during valve closing is too large. These differences may have been introduced by the simplified aortic valve model. Despite the poor overlap of the distributions seen in the ECDF plots in Figs 5d and 9d and the scatter plots in Figs 5c and 9c, the validation metrics in Figs 5b and 9b for the LVAD flow QoIs are almost negligible, as the values enforced in the simulation are close to the experimental ones. The 8k[rpm] operating condition provides the smallest overall metrics (i.e. the best agreement) between the experiment and the simulation. For the condition 22[%]@68.42[bpm] this can be seen as a good overlap between the experimental ranges and the simulation distributions. Similarly, the simulation ECDF curves (Fig 6d) overlap the experimental and numerical ranges for every QoI. The condition 17[%]@61.18[bpm] shows an overlap for most variables in the ECDF curves (Fig 10d), except for those associated with the aortic flow. Despite that lack of overlap, the validation metrics in Fig 10b are also small.
The lack of publications with UQ analyses for ventricular CFD makes comparing these results a difficult task. The results suggest a mismatch between the experimental and simulation results for the 0k[rpm] (LVAD off) and the 11k[rpm] cases. These validation points highlight potential issues that are worth exploring.

Discussion on the V&V40 credibility factors: Achieved score

This section summarises the achieved scores for the credibility factors described in the ASME V&V40 [11] and summarised in Table 3, comparing the pre-selected goal with the final achieved score. For the reader's reference, the translation from scores to required activities can be found in the S1 Data, and the rationale behind the goals chosen for this work in the S2 Data. In the rest of this section we summarise the rationale behind the score obtained for each credibility factor.

Section 1: Verification credibility factors

For calculation and code verification, the ASME standards define a set of rankings ranging from no verification up to an extensive set of tests. The code used as the simulation engine follows rigorous SQA enforced by the code life-cycle tool. Also, the fact that the simulation code is partially open source and part of the Partnership for Advanced Computing in Europe (PRACE) Unified European Applications Benchmark Suite (UEABS) ensures code transparency and constant scrutiny [54, 66, 67]. In addition, two numerical code verification tests and three numerical calculation verification tests (S3 Data) were executed to ensure code correctness and a bounded numerical error for this problem. Comparing the executed tasks with the rankings in the S1 Data led us to rank the SQA, NCV, discretisation error, and numerical solver error with the maximum score, surpassing the original desired goal. Due to the lack of external manpower, the inputs of the solver were only checked by internal review. This led us to achieve a C out of a maximum D in the user error credibility factor. As the number of input variables is rather small, the original goal was B out of D, and therefore the original goal is achieved.

Section 2: Validation credibility factors

While the background model is based on the well-known Navier-Stokes equations, there are multiple associated sub-models, such as the pump H-Q performance curve model, the lumped valve model and the aortic impedance Windkessel model. Some of the assumptions for the simplification of the UQ were tested a priori during the SA; therefore the model form correctness scores a B out of C, the desired goal. On the model input credibility factor, thanks to the thorough SA and UQ executed, we achieved the desired goal of B out of C. The analysis was executed using retrospective experimental data, which was not gathered for VVUQ use. A single silicone ventricle was used as the test sample, achieving an A out of C for the quantities of test samples credibility factor and an A out of D for the range of test sample characteristics credibility factor. Despite this, the geometry used was characterised in all its features with a computer drawing tool, obtaining a score of C out of C. The CoU focuses on reproducing the bench experiment and assessing mass flow through the boundaries, not on analysing the internal LV fluid dynamics; it therefore does not require evaluating the QoIs for multiple geometries, and a single idealised geometry was considered for the credibility evidence. With this, all the goals for the test sample credibility factors were achieved or surpassed. Further rationale can be found in the detailed table in the S2 Data. On the test conditions credibility factor, multiple test conditions were tested (achieving a B out of B), representing the expected extreme conditions range (achieving a C out of D) and measuring all the key test conditions (achieving a B out of C). However, the uncertainty of the test conditions was not characterised, achieving an A out of C. Most of the goals for the comparator credibility factor were therefore achieved or surpassed, except for the characterisation of the test condition uncertainties.
Finally, the assessment credibility factor evaluates the accuracy of the simulation output. As the simulation is designed to reproduce the experimental physics and resulting QoIs, the types and ranges of all inputs were identical, achieving a C out of C for the equivalence of inputs credibility factor. Also, multiple outputs were rigorously compared via two approaches to validation metrics, achieving a C out of C for the equivalence, rigour and quantity of output variables credibility factors, surpassing the predefined goals in all of them. As the level of agreement was satisfactory for some key comparisons, we achieved the desired goal of B out of C for the agreement of output comparison credibility factor.

Section 3: Applicability credibility factors

This item assesses how relevant the validation results are to the CoU. As the CoU proposes the tool for analysing LVAD and aortic valve flow to assess total cardiac output and aortic valve opening, we assign the maximum rank of C out of C for the relevance of the QoIs to the question of interest. As the number of validation points could be increased to target a larger operation envelope, the validation activities only encompassed some of the validation points relevant to the CoU, achieving a C out of D. In both cases, the goals were surpassed.

Overall credibility assessment

We demonstrate the model to be sufficiently close to the validation points for the simple CoU proposed. While some of the credibility factors did not obtain the maximum achievable score, they did reach or surpass the desired goal for the medium-risk application (refer to Table 3 for a summary). Riskier applications, where the numerical model drives safety-related conclusions or where the final decision relies more heavily on modelling, would require achieving those maximum scores. Further improvements to the model to obtain these maximum scores are discussed in the conclusion.

Application to ramp study

While the CoU constrains the usage of the numerical model to the preclinical development of LVADs, we show a use case outside that CoU where the model presented in this manuscript can be useful. The applicability of the credibility evidence to the proposed use case is discussed at the end of this section. As explained earlier, a ramp study [7] is routinely performed after LVAD implantation to select the pump speed for the patient. The selection is based on LV flow and geometry measured with echocardiography while the LVAD speed is increased over a wide range. The optimal speed is chosen to ensure end-organ perfusion. The desired cardiac output depends on the patient's BSA, at about 3.7×10−5[m/s] (2.2[L/min/m2]) per unit of BSA; that is, for an average BSA of 1.9[m2], the desired total flow is 7×10−5[m3/s] (4.2[L/min]). Bowing of the intraventricular septum towards either the RV (at low LVAD speeds) or the LV (at high speeds) is avoided, particularly if the LVAD inflow cannula is oriented towards the AoV. On the contrary, AoV opening increases at lower LVAD speeds. It is desirable to achieve opening of the AoV (measured via a 5×10−6[m3/s] (0.3[L/min]) threshold), at least intermittently, to avoid aortic regurgitation. The balance among these considerations is determined by the clinicians present, and the speed is selected for long-term LVAD support. Interestingly, most LVAD patients experience changes in cardiac geometry and function during LVAD support. For example, an increased heart rate or blood pressure can result in shorter systolic durations and alter AoV opening. A reduction in LV volume due to reverse remodelling and improvements in ejection fraction may shift the intra-ventricular septum position or enable greater AoV opening. With this, the final pump speed will depend on the patient's hemodynamic condition, quantified here via the EF and HR. In all of these examples, having a more adaptive system begins with a validated tool such as the model proposed herein.
The model results extend the range of the experimental studies by evaluating the QoIs over a range of heart rates and ejection fractions, specifically the AoV flow and the total aortic flow. Fig 12 shows that the model predicts a small but notable decrease in LVAD flow with increasing HR and EF, accompanied by an increase in flow through the AoV. Fig 12 also shows that the lower pump speed limit is bounded by the total flow requirement and the upper pump speed limit by the AoV opening requirement. At 8k[rpm] the pump is unable to meet the 4.2[L/min] requirement for the range of HR and EF analysed. Conversely, the 11k[rpm] case is unable to meet the AoV opening requirement for situations with HR ≲ 65[bpm] or EF ≲ 18[%]. This agrees with the findings in [7], which show that the AoV closes at 9,124 ± 1,222[rpm], with an optimal LVAD speed of 8,850 ± 470[rpm].
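The speed-window logic of the ramp study can be sketched as follows. This is a hedged illustration of the two bounds described above, not a clinical algorithm: the per-speed flow predictions in `demo` are invented placeholders, and the function name `admissible_speeds` is ours.

```python
# Sketch of the ramp-study speed-window logic, with hypothetical numbers.
CI_TARGET = 2.2   # cardiac index target [L/min/m^2]
AOV_MIN = 0.3     # minimum AoV flow for intermittent opening [L/min]

def admissible_speeds(predictions, bsa):
    """predictions: {speed_rpm: (q_aov, q_total)} in [L/min].
    Returns the speeds meeting both the end-organ perfusion
    requirement (lower bound) and AoV opening (upper bound)."""
    q_needed = CI_TARGET * bsa
    return [rpm for rpm, (q_aov, q_total) in sorted(predictions.items())
            if q_aov >= AOV_MIN and q_total >= q_needed]

# Hypothetical model predictions per pump speed: (AoV flow, total flow).
demo = {8000: (0.9, 3.9), 9000: (0.5, 4.3),
        10000: (0.35, 4.8), 11000: (0.1, 5.2)}
print(admissible_speeds(demo, bsa=1.9))  # BSA 1.9 m^2 -> needs 4.18 L/min
```

With these placeholder numbers the window is [9000, 10000] rpm: 8k[rpm] fails the perfusion target and 11k[rpm] fails the AoV-opening threshold, mirroring the pattern reported in Fig 12.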

While it is exciting to show a clinical application case such as the one described here, the CoU presented in this manuscript specifies that the model may only be used during preclinical design of the device. With the current CoU, the credibility evidence gathered is not complete enough for a clinical application. The simulation tool and the bench experiment contain a set of simplifications that should be studied in a future CoU addressing clinical practice. While the experimental and numerical models use an idealised and smooth LV, patients' LV geometries come in a vast variety of volumes, shapes and levels of trabeculation. Human LVs are also mechanically connected to the RV and the surrounding tissues, which affect the LV behaviour. In this work a homogeneous LV contraction is assumed, which may not be valid for advanced HF patients with large non-contractile regions. Future iterations of this work addressing clinical CoUs should evaluate these variables in the bench-top experiment and numerical model before the listed simplifications can be safely kept in the proposed tool. Ideally, in future iterations of LVAD therapy, the device will be able to remotely monitor the HR and update the speed setting to adjust to the patient's condition. With improvements in remote monitoring and device adjustment, the presented model could serve as a baseline tool for a smart interface between the patient's LVAD and their clinical team, and form the foundation for automated speed control.

Conclusion

This manuscript provides a detailed design and execution of a VVUQ plan for a clinically relevant numerical model. Starting from a major concern in LVAD treatment, we define the V&V40 terminology and goals following the approved standard, and proceed to design a VVUQ plan that we then execute and analyse with statistical tools. While [68] reviews the use of computer models for critical health applications under the ASME V&V40 standard [11], this work presents for the first time a complete execution of a VVUQ plan following the cited guideline. Even if simpler LVAD numerical models have been published in the past, this is the first to include a deformable ventricle, a pressure-driven valve model and a dynamic LVAD boundary condition. The numerical model was created to faithfully reproduce the SDSU-CS. To do so, it required deforming the mesh with the same pattern as the experiment, a 0D model of the systemic arteries, a novel approach for the valves driven by the transvalvular pressure gradient, and a novel approach to represent the LVAD through an H-Q performance curve as a boundary condition. Moreover, the model has been subjected to the V&V40 pipeline, allowing the uncertainties in the simulation to be bounded. The main facilitator for this has been the usage of a bench experiment as a source of comparators. When comparing animal experiments with benchtop experiments, the former have a larger inter-subject variability, lower reproducibility and poorer access to the QoIs, while the latter provide a more reproducible and accessible set of comparators. To ensure the correctness of the solution procedure, the numerical model was subjected to two code verification tests that bounded the numerical error for an operating condition close to the validation points. The two calculation verification tests executed provided a measurement of the uncertainty produced by the spatial discretisation.
The local SA provided a graphical understanding of the model's behaviour, while the global SA based on total Sobol indices highlighted the most impactful input variables. This variable reduction brings a corresponding reduction in the computational cost of the UQ analysis. The six validation points swept through three pump speeds, two EFs and two HRs. The final use of the model is the application to the ramp study [7]. The model predicted an operational pump speed range that allows obtaining aortic valve opening and the desired total aortic flow. Moreover, the pump speed ranges predicted by the model agree with the ranges found in [7]. From an applicability perspective, the impact of modelling and simulation is only achieved when credibility goals and resources align. While the execution of a thorough VVUQ plan gives the final results high credibility, it is a time-consuming and computationally expensive process. Achieving the highest scores in Table 3 requires not only rigorous testing of the simulation code but also a large number of experiment executions. Given the ASME V&V40 risk-based credibility assessment, the burden associated with the highest scores is only required for high-risk applications. Unfortunately, these applications are where the simulations will prove the most beneficial. Considering the level of effort required to surpass the credibility threshold defined by the ASME V&V40 standard, it is arguable that the minimum credibility threshold for computational models has been set considerably higher than the credibility requirements for benchtop experiments. However, a detailed discussion of the costs and relative reliability of different models (clinical, animal, bench, simulation) given current best practice for each is out of the scope of this manuscript.
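The total Sobol indices mentioned above can be illustrated with a minimal Monte Carlo sketch. This uses the Jansen estimator with Saltelli-style sampling on a toy additive model; it is an assumption of one standard way to compute such indices, not the paper's implementation, which was based on polynomial chaos (the function name `total_sobol` and the toy model are ours).

```python
import numpy as np

def total_sobol(model, n_inputs, n=4096, rng=None):
    """Total Sobol indices via the Jansen estimator with
    Saltelli-style sampling on the unit hypercube."""
    rng = np.random.default_rng(rng)
    A = rng.random((n, n_inputs))   # first base sample matrix
    B = rng.random((n, n_inputs))   # second base sample matrix
    yA = model(A)
    var = np.var(yA)
    st = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # resample only input i
        # Jansen estimator: ST_i = E[(yA - yABi)^2] / (2 Var[y])
        st[i] = np.mean((yA - model(ABi)) ** 2) / (2.0 * var)
    return st

# Toy model y = x0 + 0.1*x1: x0 should dominate the total indices.
st = total_sobol(lambda x: x[:, 0] + 0.1 * x[:, 1], 2, n=20000, rng=0)
```

For this toy model the analytical total index of the first input is 1/1.01 ≈ 0.99, so the dominant input is identified immediately; in the paper the same ranking step is what allowed reducing the number of inputs carried into the UQ.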
Historically, the regulatory entities have accepted bench experiments, animal experiments, and human trials as sources of evidence for pre-market approval (PMA) applications [69]. The advent of numerical models in biomedical device design raises new questions, especially if their use is intended to evaluate the device's safety and effectiveness during the regulatory evaluation. Even if widely accepted, bench experiments provide insight into the device performance for generally simplified geometries and under strictly controlled conditions. Despite this, they allow evaluating the device in an environment similar to the usage conditions. Similarly, animal experimentation has been a cornerstone of the regulatory system for decades. Despite this, the translation of animal experiments to humans has been widely criticised [70-73] for three main reasons: (1) the effects of the laboratory environment and other variables on study outcomes, (2) disparities between animal models of disease and human diseases, and (3) species differences in physiology and genetics [70]. While human clinical trials provide the only true reflection of the device behaviour in the intended usage, they are constrained by tightly regulated ethical standards and small samples. As the research questions become more sophisticated, it is becoming increasingly difficult to find trustworthy answers within the limitations of the available sources of evidence [74]. Using numerical modelling as a source of evidence opens a new door for the regulatory process, with the promise of resolving the multiple flaws [70-74] present in the classical approach. With such pledges, it is expected that numerical models will be subject to tight credibility scrutiny. But answering how much we can trust simulations, and to what extent they will replace bench and animal experiments or even human trials, is something only history will tell.
Being the first iteration of the VVUQ plan, the results highlighted a number of items to improve in the future. On the bench experiment side, including multiple executions for each validation point would improve the validation metrics in the UQ, as the input variables could be characterised with a probability distribution instead of the forced 10% experimental uncertainty used in the current manuscript. Given the hypotheses in this manuscript, we did not test the effect of the LV bag size and shape, although doing so would increase the application range of the model. Retrieving multiple measurements of the H-Q curve of the LVAD for each operating condition would also have a direct positive impact on the final validation metrics, as we can conclude from the results in this work: the large uncertainty ranges in the H-Q curves in this manuscript produce wide distributions in the QoIs. Using a lumped H-Q curve representation of the LVAD behaviour may also have an impact on the results; while a quadratic approximation of the pressure-flow relation in a pump is common engineering practice, it is still a simplification that may break down in complex use conditions. On the simulation side, the pressure-driven porous layer certainly affects the LV vortical structures, as the valve geometry shapes the LV flow patterns [23, 31, 32]. Adding valve geometries will improve the intra-LV flow patterns in future applications where vortex quantification becomes critical. Also, including the deterministic variables in the SA would increase the model credibility, necessary for a higher score in the model form credibility factor. These improvements would not only make a more accurate model, but also increase the scores of the credibility factors, allowing the model to target riskier applications.
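The quadratic H-Q approximation discussed above can be sketched as a simple least-squares fit. The measurement values below are synthetic, invented only for illustration (the real coefficient ranges are given in S4 Data and Table 6); repeating such measurements is exactly what would narrow the a and b uncertainty ranges.

```python
import numpy as np

# Synthetic bench data for a falling head-flow (H-Q) characteristic:
# dP = a*Q^2 + b*Q + c, head [mmHg] decreasing with flow [L/min].
rng = np.random.default_rng(1)
q = np.linspace(0.0, 6.0, 25)                 # flow [L/min]
dp_true = -0.8 * q**2 - 1.5 * q + 90.0        # assumed "true" curve
dp_meas = dp_true + rng.normal(0.0, 1.0, q.size)  # measurement noise

# Least-squares quadratic fit; polyfit returns [a, b, c].
a, b, c = np.polyfit(q, dp_meas, deg=2)
residual_sd = np.std(dp_meas - np.polyval([a, b, c], q))
```

Repeating the fit over many measurement sets would yield empirical distributions for a, b and c, which could replace the wide uniform uncertainty ranges used in this first iteration.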

The video shows an example simulation for each validation point.

The top row shows the 17[%]@61.18[bpm] condition and the bottom row the 22[%]@68.42[bpm] condition. The different columns are the results for 0k, 8k, and 11k[rpm]. The surfaces are coloured by velocity magnitude and the arrows show the flow direction. (MP4)

Ranking for risk-informed credibility assessment.

This section provides the gradation for each credibility factor and the actions required for each item in the ASME V&V40 standard. (PDF)

Rationale behind the achieved scores for each credibility factor.

This section provides the rationale behind every achieved score for each credibility factor. The score is taken from S1 Data. (PDF)

Verification evidence.

Results of the verification tests. (PDF)

Calculation of the pump input variable ranges.

Experimental data and coefficient fitting for the pump parameters. (PDF) Click here for additional data file. 12 Jan 2022 Dear Mr. Santiago, Thank you very much for submitting your manuscript "Design and execution of a Verification, Validation, and Uncertainty Quantification plan for a numerical model of left ventricular flow after LVAD implantation" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by two independent reviewers. In light of the reviews (below this email), we would like to invite the resubmission of a significantly-revised version that takes into account the reviewers' comments. We cannot make any decision about publication until we have seen the revised manuscript and your response to the reviewers' comments. Your revised manuscript is also likely to be sent to reviewers for further evaluation. When you are ready to resubmit, please upload the following: [1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. [2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file). Important additional instructions are given below your reviewer comments. Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. 
Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts. Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments. Sincerely, Andrew D. McCulloch, Ph.D. Associate Editor PLOS Computational Biology Daniel Beard Deputy Editor PLOS Computational Biology *********************** Reviewer's Responses to Questions Comments to the Authors: Please note here if the review is uploaded as an attachment. Reviewer #1: Uploaded as attachment Reviewer #2: The authors presented a very through approach for the verification, validation and uncertainty quantification of a model of the LV-LVAD system. It is a first step towards the possible use of such simulations for in silico clinical trials. However, even tough the study is overall complete and well designed, I would like to highlight a few points that would require further clarifications. General comments Could the authors please use units that are more commonly used in clinical practice (cardiac output in L/min and pressure in mmHg)? Methods: 2.1. Could the authors please comment on the adequacy between the HM2 inflow cannula and the tygon tubing they used in the experiments? 2.1. The authors have retrieved the H-Q curves for each pump speed. Was it done for a “steady” state? Could the authors provide more information about the protocol they implemented to get these curves? They are the standard for LVADs but a few studies have also observed that the LVAD response is more complex and unsteady, which might explain some discrepancies between the model and the experiments. 2.2.1. Could the authors give more details about the solid mesh they used for the one-way FSI? (shell elements or solid elements?) What mechanical properties have they used for the solid model? 
(only the young modulus is given in the description of the experiments). Have the authors compared the overall mechanics of the solid domain between their model and the experiments? 2.2.3. Could the authors explain their choice to keep the mitral valve inlet at a constant pressure? To my understanding, this is not the case for the experiments. Overall, a sketch of the FSI model with the different BC would really help us understand the simulation. 2.2.4. The authors used a porous layer to model the valve, which is numerically more efficient. Have they assessed the effect of this model assumption on their results? In the experiments, the valve motion may impact the intraventricular flow patterns, especially the opening of the aortic valve. 2.2.5. The authors have approximated the pressure-flow relationship with a quadratic equation. I assumed the coefficients AVAD, BVAD and CVAD used in equation 3 are the fitting parameters and they depend on the LVAD speed. The authors provide the values in Table 5, but it is a bit late in the manuscript. Could they please add this table to the methods section? Is this model commonly used for CFD models of LVAD? 2.2.5. Also, have the authors evaluated the cycle do cycle dependence of their model? 2.4.2. The sensitivity analysis is an important part of this study and would benefit for further details. Could the authors comment on the relevance of the Pearson’s coefficient for their approach? Their problem is quite complex and a linear assumption, without any interactions between the variables, might be a strong hypothesis. As they state, the second approach seems more appropriate. Also, could the authors provide the equations for the Sobol indices they compute? as they present them later in the results sections. Could they also comment on their choice of a 5th order polynomial chaos? Finally, they stated that both analyses relied on 500 samples. How were these samples chosen (i.e. which experimental plan did the authors used? 
Latin hypercube sampling?)? 2.4.3. The authors added a 10% error range in the Qols measured in the experiment. Have the authors measured the repeatability and reproducibility of the experiments? If not, was this 10% error arbitrary? Results: 3. The part of the sampling of the input values using latin hyper cube sampling belong to the methods section. (it can be repeated in the results section but needs to appear in the methods). 3.1.1. Could the authors clarify their choice for the value ranges used for the SA? Was it based on previous studies or their experiments? 3.1.1. The second approach for the SA considered nonlinear effects and interactions between the parameters. However, the authors only present the coefficient of the linear effects. If the other Sobol coefficients are not significant or very low, the authors could state it clearly in the results section. 3.1.2. The authors have assumed that the H-Q curves were quadratic, defined with 3 parameters, but then they only use 2 parameters in the uncertainty characterization? Was the third parameter irrelevant or very small compared to the other ones? Figure 4. The authors previously stated that the 4 Qols were the max and average aortic and LVAD flows. What is QRAT that the authors present in this figure? 3.2 The authors have performed the UQ analysis for 6 experimental points. They describe them later, but could they please add a short sentence to justify this number in this section? Figure 6. Could the authors add units to all the plots? Also, the Qao plot is cut. Could the authors explain why they observe some flow experimentally (despite being minimal) through the LVAD, even is the RPM is 0? 3.2.4. The authors obtained the best agreement for the 8k RPM condition, especially for the flowrate through the LVAD. However, the flowrate in the aorta is more difficult to predict, due to the complex opening and closing of the aortic valve. 
Could the authors comment on the capacity of their valve model to predict such phenomena correctly? Could it partly explain the differences they observed (especially in the 0k RPM condition)?

Conclusion:

5. The authors state that the bench experiments provide a highly reproducible set of comparators. Have they evaluated the reproducibility of the experiments? I assume they mean this in comparison with in vivo measurements.

Minor comments:

2.1. Please replace "during systole the ventricle contracts" with "it is contracted or compressed". This sentence may lead people to think that there is some type of active material able to contract, when the piston pump is responsible for the contraction.

2.4.3. Please add a space between "metric." and "To evaluate". Please remove the space before the comma in the same sentence.

Table 3. Ejection fraction is either 0.35 or 35%.

Figure 5. The legend of the colour bar is a little difficult to read.

3.3. Validation credibility factors. Please add "on" after "model is based".

Figure 6. The difference in scale between the QVAD graphs is large and tends to be misleading, as it suggests large differences where they are not physiologically important. The authors could change the scale of the y axis on these plots.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g.
participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No: See comments to the authors
Reviewer #2: Yes

**********

PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Ahmet Erdemir
Reviewer #2: No

Figure Files: While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements: Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility: To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols.
Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Submitted filename: Review_PCOMPBIOL-D-21-02033.pdf

25 Feb 2022

Submitted filename: PLOS_answer_to_reviewer2.pdf

29 Mar 2022

Dear Mr. Santiago,

Thank you very much for submitting your manuscript "Design and execution of a Verification, Validation, and Uncertainty Quantification plan for a numerical model of left ventricular flow after LVAD implantation" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your revised manuscript was reviewed by members of the editorial board and by the original two independent reviewers. The reviewers noted some minor but important issues. Based on the reviews, we are likely to accept this manuscript for publication, provided that you revise the manuscript according to the review recommendations.

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note, while forming your response, that if your article is accepted you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.
Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Andrew D. McCulloch, Ph.D.
Associate Editor
PLOS Computational Biology

Daniel Beard
Deputy Editor
PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately: [LINK]

Reviewer's Responses to Questions

Comments to the Authors: Please note here if the review is uploaded as an attachment.

Reviewer #1: The review is uploaded as an attachment.
Reviewer #2: Please see the attached document

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

Reviewer #1: No: Experimental data, in particular the time history of signals used as simulation inputs and as output comparators, were not provided.
Reviewer #2: Yes

**********

Do you want your identity to be public for this peer review?

Reviewer #1: Yes: Ahmet Erdemir
Reviewer #2: No

References: Review your reference list to ensure that it is complete and correct.
If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Submitted filename: Review_PCOMPBIOL-D-21-02033_R1.pdf

Submitted filename: Review_PLOS.docx

8 Apr 2022

Submitted filename: Answers_to_reviewers_it2.pdf

26 Apr 2022

Dear Mr. Santiago,

We are pleased to inform you that your manuscript 'Design and execution of a Verification, Validation, and Uncertainty Quantification plan for a numerical model of left ventricular flow after LVAD implantation' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted, you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript. Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication.
We ask that you notify us now if you or your institution is planning to press release the article. All press must be coordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.

Best regards,

Andrew D. McCulloch, Ph.D.
Associate Editor
PLOS Computational Biology

Daniel Beard
Deputy Editor
PLOS Computational Biology

***********************************************************

3 Jun 2022

PCOMPBIOL-D-21-02033R2
Design and execution of a Verification, Validation, and Uncertainty Quantification plan for a numerical model of left ventricular flow after LVAD implantation

Dear Dr Vazquez,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department, and you will be notified of the publication date in due course.

The corresponding author will soon receive a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Agnes Pap
PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom
ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol