
Designing and validating a Markov model for hospital-based addiction consult service impact on 12-month drug and non-drug related mortality.

Caroline A King1, Honora Englander2, P Todd Korthuis2, Joshua A Barocas3, K John McConnell4, Cynthia D Morris5, Ryan Cook2.   

Abstract

INTRODUCTION: Addiction consult services (ACS) engage hospitalized patients with opioid use disorder (OUD) in care and help meet their goals for substance use treatment. Little is known about how ACS affect mortality for patients with OUD. The objective of this study was to design and validate a model that estimates the impact of ACS care on 12-month mortality among hospitalized patients with OUD.
METHODS: We developed a Markov model of referral to an ACS, post-discharge engagement in SUD care, and 12-month drug-related and non-drug related mortality among hospitalized patients with OUD. We populated our model using Oregon Medicaid data and validated it using international modeling standards.
RESULTS: There were 6,654 patients with OUD hospitalized from April 2015 through December 2017. There were 114 (1.7%) drug-related deaths and 408 (6.1%) non-drug related deaths at 12 months. Bayesian logistic regression models estimated four percent (4%, 95% CI = 2%, 6%) of patients were referred to an ACS. Of those, 47% (95% CI = 37%, 57%) engaged in post-discharge OUD care, versus 20% not referred to an ACS (95% CI = 16%, 24%). The risk of drug-related death at 12 months among patients in post-discharge OUD care was 3% (95% CI = 0%, 7%) versus 6% not in care (95% CI = 2%, 10%). The risk of non-drug related death was 7% (95% CI = 1%, 13%) among patients in post-discharge OUD treatment, versus 9% not in care (95% CI = 5%, 13%). We validated our model by evaluating its predictive, external, internal, face and cross validity.
DISCUSSION: Our novel Markov model reflects trajectories of care and survival for patients hospitalized with OUD. This model can be used to evaluate the impact of other clinical and policy changes to improve patient survival.

Year:  2021        PMID: 34506517      PMCID: PMC8432751          DOI: 10.1371/journal.pone.0256793

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Drug overdose is the leading cause of unintentional injury death in the United States [1]. Among people with opioid use disorder (OUD), an estimated 20% eventually die of drug overdose [2], but cardiovascular diseases, cancer, and infectious diseases also contribute to mortality. Patients with OUD who are hospitalized for OUD-related and other diagnoses are often medically complex and face life-threatening illnesses. These patients experience higher mortality rates than hospitalized patients with similar conditions [2].

Hospitalization is a vulnerable time for patients with OUD. People with OUD may leave the hospital before completing recommended medical therapy if withdrawal symptoms are untreated [3]. People who withdraw from opioids have lower drug tolerance and an increased risk of drug overdose after discharge in the absence of treatment for OUD [4-6]. Medications for opioid use disorder (MOUD), including methadone, buprenorphine, and naltrexone, can reduce the risk of death from opioid overdose in patients with OUD [7]. These medications work as opioid receptor full agonists (methadone), partial agonists (buprenorphine), or antagonists (naltrexone) [8]. Despite the success of MOUD in reducing opioid overdose deaths, most hospitalized patients with OUD are not started on MOUD [9, 10], though, when offered, nearly three-quarters of patients with OUD choose to start MOUD [11]. Interventions to improve initiation of MOUD among hospitalized patients are urgently needed [12].

Addiction consult services (ACS) are an emerging intervention to engage hospitalized patients in care and meet patient-driven goals for substance use treatment [13]. Typically, they include care from an interprofessional team that may include medical providers, social workers, nurses, and alcohol and drug counselors [14]. Some intentionally include people with lived experience in recovery [15-17].
ACSs typically address the needs of people who use any substance (for example, stimulants, alcohol, and opioids). Care includes comprehensive assessments, withdrawal management, medication treatment, psychosocial and harm reduction interventions, and efforts to support patient engagement and linkage to care across settings. ACSs commonly also provide staff education and patient advocacy [14, 18, 19]. Evaluations of ACSs demonstrate improved engagement in post-hospitalization treatment and decreased substance use [12, 13]. However, assessing the effect of ACS using gold-standard study designs is challenging because of the costs and logistical challenges associated with multi-site, cluster-randomized trials. Additionally, it can be statistically difficult to assess distal, rare outcomes like drug-related mortality in the context of a hospital-based intervention. We consequently do not know how ACSs affect post-discharge drug-related or non-drug related mortality for patients with OUD.

Simulation modeling allows researchers to rapidly test different care delivery scenarios and capture robust estimates of study outcomes, which can support healthcare system decision-making and answer salient clinical questions in the midst of the opioid overdose epidemic. Modeling inpatient care scenarios can guide healthcare systems in addressing a rapidly evolving epidemic more quickly and adaptively than randomized trials. Simulation modeling has previously been used to estimate prevented overdose deaths from the expansion of naloxone distribution [20-22], the progression of opioid addiction [23], and the implementation of safe-injection sites [24]. The objective of this study was to design and validate a Markov model that estimates the impact of ACS care on 12-month mortality among hospitalized patients with OUD.

Methods

Setting and study design

Oregon Health & Science University in Portland, Oregon is home to an inpatient ACS, the Improving Addiction Care Team (IMPACT). IMPACT is a hospital-based service that utilizes an interdisciplinary team of physicians, advanced practice providers, social workers, and peers with lived experience in recovery to support non-treatment seeking adults with substance use disorder. Patients are eligible to be referred if they have known or suspected substance use disorder (SUD), other than tobacco use disorder alone. IMPACT conducts substance use assessments, initiates medication-based treatment (including buprenorphine, methadone and extended release naltrexone for OUD) and behavioral treatment where appropriate, and connects patients to post-discharge SUD treatment. IMPACT utilizes a harm reduction approach and integrates principles of trauma-informed care. Previous research describes IMPACT’s design and evaluation [11, 12, 16–18, 25, 26]. Notably, IMPACT is the only comprehensive ACS in Oregon, though a few hospitals offer MOUD initiation during hospitalization. We developed and validated a Markov model to estimate the impact of ACS care on 12-month mortality among hospitalized patients with OUD (Fig 1). We organize our methods in the order of completion: first, we decided on model structure; next we used available data to populate the model; and finally, we validated the model. As such, we describe: 1) model structure, 2) model data, and 3) model validation.
Fig 1

Markov model of hospital-based addiction care in Oregon, 2015–2018.

The Oregon Health & Science University’s Institutional Review Board approved this study and waived the requirement for informed consent (#00010846).

1) Model structure

Our model reflects key components of care as patients move through hospitalization, discharge, and post-hospital time periods. The model has the following components: ACS consult, post-discharge OUD treatment engagement, and 12-month post-discharge drug related death, non-drug related death, and survival.
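The model components above can be illustrated as a simple cohort simulation. The transition probabilities below are the point estimates reported later in this paper; the state names and the assumption of independent, sequential transitions are illustrative simplifications, not the fitted Bayesian model.

```python
import random

random.seed(0)

# Point estimates from this paper's Results; the sequential,
# independent-transition structure here is an illustrative simplification.
P_ACS_REFERRAL = 0.04                       # referred to an ACS
P_ENGAGE = {True: 0.47, False: 0.20}        # engagement, keyed by referral
P_DRUG_DEATH = {True: 0.03, False: 0.06}    # 12-month risk, keyed by engagement
P_NONDRUG_DEATH = {True: 0.07, False: 0.09}

def simulate_patient():
    """Walk one hospitalized patient with OUD through the model states."""
    referred = random.random() < P_ACS_REFERRAL
    engaged = random.random() < P_ENGAGE[referred]
    r = random.random()
    if r < P_DRUG_DEATH[engaged]:
        return "drug-related death"
    if r < P_DRUG_DEATH[engaged] + P_NONDRUG_DEATH[engaged]:
        return "non-drug related death"
    return "alive at 12 months"

cohort = [simulate_patient() for _ in range(100_000)]
for state in ("drug-related death", "non-drug related death", "alive at 12 months"):
    print(state, round(cohort.count(state) / len(cohort), 3))
```

With a large simulated cohort, the end-state proportions converge toward the marginal probabilities implied by chaining the transitions.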

ACS referral

Once patients are hospitalized, they can be referred to ACS care. ACSs exist across a growing number of North American hospitals.

Post-discharge OUD treatment engagement

We used a modified Healthcare Effectiveness Data and Information Set (HEDIS) measure to define post-discharge OUD treatment engagement. The original measure requires that patients initiate treatment and have two or more additional alcohol or drug services or medication for OUD within 34 days of initiation [27]. Recent research has shown that evidence-based MOUD has superior outcomes in preventing mortality and decreasing opioid use [7]. For this reason, we defined post-discharge OUD treatment engagement as: 1) at least two filled prescriptions for buprenorphine, extended-release naltrexone, or methadone from an Opioid Treatment Program in the 30 days following hospital discharge, or 2) a prescription for extended-release naltrexone or buprenorphine that covered 28 of the 30 days post-hospital discharge [28].
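A minimal sketch of this two-criterion engagement check, assuming a hypothetical claims record format of (drug name, fill date, days supply); the actual analysis runs against Medicaid claims tables, not this simplified structure.

```python
from datetime import date, timedelta

# Drug names and the (drug_name, fill_date, days_supply) record format are
# hypothetical; the real analysis uses Medicaid claims tables.
MOUD = {"buprenorphine", "methadone", "xr-naltrexone"}

def engaged_post_discharge(fills, discharge):
    """Modified HEDIS engagement:
    1) >= 2 MOUD fills within 30 days of discharge, or
    2) a buprenorphine or XR-naltrexone fill covering >= 28 of the
       30 days post-discharge."""
    window_end = discharge + timedelta(days=30)
    in_window = [f for f in fills
                 if f[0] in MOUD and discharge <= f[1] <= window_end]
    if len(in_window) >= 2:
        return True
    for drug, fill_date, days_supply in in_window:
        if drug in {"buprenorphine", "xr-naltrexone"}:
            covered_end = min(fill_date + timedelta(days=days_supply), window_end)
            if (covered_end - max(fill_date, discharge)).days >= 28:
                return True
    return False

d = date(2017, 6, 1)
print(engaged_post_discharge([("buprenorphine", d + timedelta(days=1), 30)], d))  # True
print(engaged_post_discharge([("methadone", d + timedelta(days=2), 1)], d))       # False
```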

12-month mortality

At twelve months, deaths are classified as drug related versus non-drug related (circulatory, neoplasm, infectious, digestive (including alcohol-related liver disease), external (including suicide and unintentional injury), respiratory, endocrine, and other causes) using the ICD-10 mortality codes described by Hser et al. [2].
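The classification step might look like the following sketch. The prefix list here is a hypothetical subset chosen for illustration; the full code lists come from Hser et al. [2].

```python
# Hypothetical subset of drug-related ICD-10 prefixes for illustration;
# the actual code lists are those described by Hser et al. [2].
DRUG_RELATED_PREFIXES = ("X40", "X41", "X42", "X43", "X44", "F11", "T40")

def classify_death(icd10_code):
    """Map an ICD-10 cause-of-death code to drug-related vs non-drug related."""
    code = icd10_code.upper().replace(".", "")
    if code.startswith(DRUG_RELATED_PREFIXES):
        return "drug-related"
    return "non-drug related"

print(classify_death("X42"))    # accidental narcotic poisoning -> drug-related
print(classify_death("C34.9"))  # lung neoplasm -> non-drug related
```

Deaths not matched by any listed code would then go to the manual review and reclassification step described in the transition data section.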

2) Model data

Our Markov model could be used in any setting with patients hospitalized with OUD where data exist for recalibration. We populated our model with data from Oregon Medicaid claims and expert opinion, described below, to reflect care from an addiction consult service in Portland, Oregon, and its impact on post-discharge drug and non-drug related mortality. We had multiple goals in using data to populate our model. We needed a dataset of patients in which some patients were referred to an ACS and some were not. Then, we needed to be able to match ACS patients to controls as one way to account for some confounding. We needed the dataset to follow both patients referred to an ACS, and those not referred, through 12 months after hospital discharge. Finally, we needed the dataset to have additional covariates to control for additional confounding, which we planned to do via logistic regression models at each transition point. Below, we describe the merging of OHSU’s ACS dataset with Oregon Medicaid data and Vital Statistics data to produce the dataset used to populate the model. Because we wanted to incorporate national estimates, we used Bayesian logistic regression to integrate expert opinion into our estimates. We describe each of these steps below.

Participants

To generate probability of ACS referral and post-discharge treatment engagement, we used Oregon Medicaid claims data to identify patients with OUD hospitalized at least once from April 2015 through August 2018, including IMPACT patients. Because OHSU IMPACT was the only ACS in Oregon during the study window, we used the IMPACT registry, which tracks all referrals, to identify patients with Oregon Medicaid who were referred to an ACS. To generate probability of 12-month mortality, we utilized mortality data from Oregon Vital Statistics through December 31, 2018; thus, we included only patients admitted through January 1, 2018 to allow 12 months of follow-up time. Patients were eligible for inclusion if they were over 18 years old and had an ICD-9 (304.*) or ICD-10 (F11*) diagnosis of OUD during a hospital admission.

Cohorts for transition points

We defined three cohorts for our analyses utilizing Oregon Medicaid data. First, we included all patients who met eligibility criteria in the analysis for our first transition, referral to an ACS. Then, we used a matched cohort of three controls per IMPACT patient for our post-discharge OUD care engagement and mortality analyses. We matched without replacement on hospital admission quarter and admission number, including one admission per person.
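The 3:1 matching without replacement could be sketched as follows; the record fields and selection logic are illustrative assumptions, not the study's actual matching code.

```python
import random

def match_controls(cases, controls, ratio=3, seed=1):
    """Match each ACS case to `ratio` controls without replacement on
    hospital admission quarter and admission number. Record fields are
    hypothetical; this is a sketch, not the study code."""
    rng = random.Random(seed)
    pool = {}
    for c in controls:
        pool.setdefault((c["quarter"], c["admission_number"]), []).append(c)
    matched = {}
    for case in cases:
        key = (case["quarter"], case["admission_number"])
        candidates = pool.get(key, [])
        rng.shuffle(candidates)
        matched[case["id"]] = [c["id"] for c in candidates[:ratio]]
        pool[key] = candidates[ratio:]   # without replacement: drop used controls
    return matched

cases = [{"id": "A", "quarter": "2016Q2", "admission_number": 1}]
controls = [{"id": i, "quarter": "2016Q2", "admission_number": 1} for i in range(5)]
print(match_controls(cases, controls))
```

Matching without replacement means each control can serve only one case, which is why the pool is depleted as cases are processed.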

Transition data

For ACS referral, we identified all hospitalized patients with OUD in Oregon during the study period, and then identified the subset who were referred to the ACS. For post-discharge OUD treatment engagement, we used Oregon Medicaid claims data to identify whether patients met the modified HEDIS engagement measure in the 30 days following hospital discharge. For 12-month mortality, we used Oregon Vital Statistics data to identify deaths in our cohort during the study period through December 31, 2018. For mortality models, the cohort was limited to participants seen before January 1, 2018 to allow 12 months of follow-up time for all participants. We classified deaths as drug related versus non-drug related as indicated above. We manually reviewed deaths that were not captured by these codes and reclassified them into drug-related versus non-drug related categories.

Transition probabilities

We used a Bayesian approach to obtain transition probabilities for our Markov model from Oregon data. In short, we integrated national expert information with estimates from Oregon data for each of our three transition steps: ACS referral, post-discharge MOUD, and 12-month mortality. We also adjusted for confounding at each transition point. Bayesian logistic regression allowed us to accomplish this goal. We ran logistic regression models for each transition point, using the transition as the outcome (e.g., 12-month post-discharge mortality) and the prior step as the primary covariate of interest (e.g., 30-day post-discharge MOUD), adjusting for all other covariates in the model. We extracted a marginal probability from each logistic regression model; this served as our Bayesian likelihood. We used information from experts in addiction as our prior. The Bayesian approach integrates the prior and likelihood to estimate a posterior probability, which we used as the model's transition probability.
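The prior/likelihood/posterior logic can be illustrated with a simplified conjugate beta-binomial stand-in. The actual analysis used Bayesian logistic regression with covariates; the prior parameters below are invented for illustration only.

```python
# Simplified beta-binomial stand-in for the paper's Bayesian update: an
# expert-elicited Beta prior on a transition probability is combined with
# observed counts. (The actual analysis used Bayesian logistic regression
# with covariates; the prior parameters below are invented.)
def posterior_beta(prior_a, prior_b, successes, failures):
    # Conjugate update: Beta(a, b) prior + binomial data -> Beta posterior
    return prior_a + successes, prior_b + failures

def beta_mean(a, b):
    return a / (a + b)

a0, b0 = 2, 18                                     # weak expert prior, mean 0.10
a1, b1 = posterior_beta(a0, b0, 265, 8450 - 265)   # 265 of 8,450 referred to ACS
print(round(beta_mean(a0, b0), 3))  # prior mean: 0.1
print(round(beta_mean(a1, b1), 3))  # posterior mean, pulled strongly toward the data
```

With a weak prior and a large cohort, the posterior is dominated by the observed referral rate, which mirrors the paper's finding that small prior weights fit best for most transitions.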

Bayesian priors via expert elicitation

Because of the novelty of ACS, few published papers existed from which we could have derived prior estimates of transition probabilities for Bayesian analysis. Thus, we used expert elicitation to capture prior information for our models. The Bayesian process helped account for some of the uncertainty that comes from incorporating expert opinion. We identified important covariates at each transition point, including age (in years), gender (female/male), race (White/not White/unknown), ethnicity (Hispanic/Not Hispanic), concurrent alcohol use disorder (yes/no), concurrent stimulant use disorder (yes/no), hospital length of stay (in days), rural residence (yes/no), filled at least one prescription for medication for OUD in the month before hospital admission (yes/no), previously admitted to the hospital (yes/no), and Chronic Illness and Disability Payment System (CDPS) score (continuous). The engagement model also included referral to an ACS (yes/no). The mortality models included engagement in care after discharge (yes/no) and filled a naloxone prescription in the 30 days after hospital discharge (yes/no). We used a clinical-vignette design to ask providers about the relevance of covariates on patient outcomes. To do this, participants provided a probability estimate for different events: referral to an ACS, post-discharge engagement, and mortality. For example, a vignette could read: “The patient is a young White man with OUD and AUD. He was in the hospital for several days. He was on medication for OUD at admission. He had never previously been admitted to the hospital. He has many comorbidities. He is not from a rural area. What is the probability he engaged in post-discharge treatment for OUD within 30 days of discharge?” Experts evaluated 16 (referral to ACS), 17 (engagement) and 18 (mortality) vignettes selected from an optimal experimental design generated for each model [29].
From the optimal design, we chose a subset of the vignettes that were substantially different from one another for ease of interpretability and to maximize the information gathered about each covariate. As part of our IRB-approved research, study authors (HE, PTK) generated lists of experts in addiction consult services and hospital-based addiction treatment in general in the United States. Each participant took only one survey. We aimed to recruit at least five participants for each survey, with a goal of at least three responses per survey. For the referral to ACS survey, we also asked participants to refer hospitalists at their institutions to complete the survey, as hospitalists are frequently the providers who refer patients to ACS. Ultimately, six participants took the ACS survey (6 of 11, 54.5%), four took the engagement survey (4 of 5, 80%), and three took the mortality survey (3 of 8, 37.5%). After surveying expert participants, we calculated the mean and identified the minimum and maximum ratings. We then numerically fit beta distributions to those quantities using differing “confidence levels” [30]. Then, we updated our priors with the information from data about our cohort described above. We estimated marginal probabilities over observed cases using fitted Bayesian logistic regression models at each transition point [31].
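Fitting a beta distribution to an elicited mean and range might be sketched as follows. The moment-matching shortcut here, which treats the elicited range as an approximate central interval of plus or minus z standard deviations, is an assumed simplification; the paper fit distributions numerically at several confidence levels.

```python
def fit_beta(mean, low, high, z=1.645):
    """Fit Beta(a, b) to an elicited mean and (low, high) range, treating
    the range as an approximate +/- z-standard-deviation central interval.
    This moment-matching shortcut is an assumed simplification."""
    var = ((high - low) / (2 * z)) ** 2
    # Method of moments: mean = a/(a+b), var = mean*(1-mean)/(a+b+1)
    k = mean * (1 - mean) / var - 1   # k = a + b (must be positive)
    return mean * k, (1 - mean) * k

# Engagement-if-referred estimate for illustration: mean 47%, range 37% to 57%
a, b = fit_beta(mean=0.47, low=0.37, high=0.57)
print(round(a, 1), round(b, 1))
print(round(a / (a + b), 2))  # recovers the elicited mean: 0.47
```

Wider elicited ranges produce smaller a and b (a more diffuse prior), so an expert panel with more disagreement contributes less weight relative to the observed data.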

Bayesian logistic regression models

We used the transformed prior information from expert surveys and Oregon Medicaid cohort data to fit Bayesian logistic regression models at each transition point. Models were fit using Markov Chain Monte Carlo methods [32]. We sampled each parameter 10,000 times with 2,000 burn-in iterations. We used multiple metrics to assess model convergence. First, we used Gelman and Rubin’s potential scale reduction factor; all values in all models equaled 1.0, and values close to 1.0 are suggestive of convergence. Effective sample sizes all approximated the number of posterior draws requested. All model trace plots appeared to have a caterpillar-like distribution, and there were no divergent transitions. Autocorrelation plots for all parameters suggested low autocorrelation. We used the shinystan package to evaluate Bayesian model fit [33]. We tested different prior information strengths: first, using a cohort sample size method, where the prior information is weighted as a percentage of the study sample size (0.1%, 1%, 5% and 10%); second, using a confidence interval method, where we fit beta distributions to the range of survey responses, and then used the maximum and minimum values as borders for 80%, 85%, 90%, and 95% confidence intervals. We picked the best-fit model using Pareto smoothed importance-sampling leave-one-out cross-validation with the loo package in R, where lower expected log predictive density values indicate a better model fit [34]. We also prioritized models where Pareto k diagnostic values had at least good reliability for all estimates. We used mcmcObsProb in the BayesPostEst package [35] to estimate marginal transition probabilities over observed cases with the fitted Bayesian logistic regression models. We created prior-posterior plots using ggplot2 [36].
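Gelman and Rubin's potential scale reduction factor, used above to assess convergence, can be computed as in this sketch; synthetic, well-mixed chains stand in for real posterior draws.

```python
import random
from statistics import mean, variance

def gelman_rubin(chains):
    """Potential scale reduction factor for equal-length chains of a
    scalar parameter (basic, non-split version)."""
    n = len(chains[0])
    chain_means = [mean(c) for c in chains]
    B = n * variance(chain_means)            # between-chain variance
    W = mean(variance(c) for c in chains)    # within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return (var_hat / W) ** 0.5

rng = random.Random(42)
# Four synthetic, well-mixed chains drawn from the same distribution:
chains = [[rng.gauss(0, 1) for _ in range(2000)] for _ in range(4)]
print(round(gelman_rubin(chains), 3))  # close to 1.0, suggesting convergence
```

Poorly mixed chains sampling different regions inflate the between-chain variance B, pushing the statistic well above 1.0.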

3) Model validation

We validated our model using the frameworks suggested by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making’s Good Research Practices Model Validation guidelines (ISPOR-SMDM) [37]. We explored five components of validity: face validity, internal validity, cross validity, predictive validity, and external validity. As suggested, we provide a non-technical description of our model in S3 File.

Results

There were 8,450 patients admitted at least once with OUD in Oregon from April 2015 through August 2018. A subset of 6,654 patients was admitted by January 1, 2018, allowing 12 months of mortality follow-up. Among this subset, at twelve months, 114 (1.7%) participants died from drug-related causes and 408 (6.1%) died from non-drug related causes. Participant demographics of observed data are included in Table 1.
Table 1

Participant demographics.

Characteristic | All patients (n = 8,450) | Seen by ACS (n = 265) | Not seen by ACS (n = 8,185) | p-value
Age, years | 44.5 (15.4) | 39.5 (0.77) | 44.6 (0.17) | <0.001
Gender, male | 3,632 (43.0%) | 159 (60.0%) | 3,473 (42.4%) | <0.001
Race, White | 5,919 (70.1%) | 169 (63.8%) | 5,750 (70.3%) | 0.034
Race, not White | 543 (6.4%) | 16 (6.0%) | 527 (6.4%) |
Race, unknown | 1,988 (23.5%) | 80 (30.2%) | 1,908 (23.3%) |
Ethnicity, Hispanic | 299 (3.5%) | 10 (3.8%) | 289 (3.5%) | 0.002
Alcohol use disorder | 306 (3.6%) | 14 (5.3%) | 322 (3.9%) | 0.269
Stimulant use disorder | 689 (8.2%) | 41 (15.5%) | 642 (7.8%) | <0.001
Length of stay, days | 6.6 (11.2) | 14.9 (0.97) | 6.4 (0.12) | <0.001
Rural residence | 2,234 (26.4%) | 32 (12.1%) | 2,202 (26.9%) | <0.001
Medication for OUD at hospital admission | 1,508 (17.8%) | 48 (18.1%) | 1,460 (17.8%) | 0.908
Previously admitted to hospital | 1,891 (22.4%) | 116 (43.8%) | 1,775 (21.7%) | <0.001
CDPS score | 2.5 (1.6) | 3.11 (0.11) | 2.48 (0.02) | <0.001
Transition probabilities derived from Bayesian logistic regression models are depicted in Fig 2. In our study, 4% (95% CI = 2%, 6%) of patients admitted at least once for OUD were referred to an ACS in Oregon. Of those, 47% (95% CI = 37%, 57%) engaged in post-discharge OUD care. Of the 96% not seen by an ACS, 20% (95% CI = 16%, 24%) engaged in post-discharge OUD care. The risk of drug-related death at 12 months among patients who engaged in post-discharge OUD care was 3% (95% CI = 0%, 7%) versus 6% (95% CI = 2%, 10%) in patients who did not engage in care. The risk of non-drug related death was 7% (95% CI = 1%, 13%) among patients who engaged in OUD treatment, versus 9% (95% CI = 5%, 13%) for those who did not. For referral to ACS care, the best-fit Bayesian logistic regression model used an 80% confidence interval; for all other models, a sample size of 0.1% fit best (S1 File). All estimates had acceptable Pareto k-diagnostic values. We report posterior intervals for each covariate from Bayesian logistic regression models in S2 File.
Fig 2

Markov model with estimated transition probabilities for hospital-based addiction care in Oregon, 2015–2018.

Model validation

Face validity

To assess face validity, one researcher (CK) designed the model and received feedback from experts in addiction medicine outside of the study team about the model’s face validity. Experts agreed that the model reflected the path of care for patients admitted to hospitals in Oregon with OUD (structure). Further, the use of Oregon Medicaid data, versus data from the literature, was considered by outside experts to be a strength in deriving evidence for the model. ACSs and their impact on care for patients with OUD are of immense interest to healthcare systems and policymakers, and experts also agreed that the question was timely and important (problem formulation). Finally, after data analysis, the model results were presented to researchers who agreed that estimates from the model matched their expectations (results).

Internal validity

We conducted additional checks and analyses to ensure the internal validity of our Bayesian approach (also referred to as technical validity [38]). First, a recent paper used a similar approach and data structure to evaluate the impact of prenatal maternal factors on nonadherence to infant HIV medication in South Africa. After building our Bayesian model, we used the deidentified data from the South Africa analysis to attempt to replicate the published results. Our model exactly replicated the results of the South African analysis. Second, we conducted classical logistic regression models for each transition point in addition to the Bayesian models. We placed a noninformative 1/3, 1/3 prior (Kerman’s prior) on all covariates, which should roughly approximate the classical logistic regression results. Our results with non-informative priors were sufficiently similar to the classical logistic regression results. Finally, we conducted code “walk-throughs” as suggested, where the analyst (CK) walked through code with an expert in these methods (RC). In addition to the above steps, because we used Bayesian analyses for our transition probabilities, we needed to ensure that the confidence intervals around our final engagement and mortality estimates actually encompassed the observed numbers of people who engaged in care and who died from drug-related and non-drug related causes. We simulated estimates, generating “Low” and “High” modeled estimates based on “best” and “worst” cases of model dynamics (e.g., the lower confidence bound for ACS referral and the lower confidence bound for post-discharge OUD treatment engagement combined with the upper confidence bound for drug-related mortality generates the “High” death estimate). Of the 6,654 patients with 12 months of follow-up time, the model estimates that 1,330.8 patients engage in care (Low, High = (1,064.6, 1,597.0)). We observed 1,318 patients who engaged in care in the cohort.
Additionally, the model estimated 357.2 drug-related deaths (Low, High = (98.5, 632.6)); there were 114 observed drug-related deaths in the dataset. Similarly, the model predicted 570.8 non-drug related deaths (Low, High = (263.6, 865.0)); there were 408 observed non-drug related deaths in the dataset. Mortality analyses rarely account for all sources of follow-up, which may mean that reported mortality estimates in the literature are lower than in reality. Thus, it was not surprising that modeled transition probabilities from Bayesian logistic regression for 12-month mortality were higher than raw observed proportions.
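Chaining confidence bounds through the model can be sketched as follows. This simple combination of bounds will not reproduce the paper's exact Low/High figures, which come from the full model dynamics, but it illustrates how a Low/High band around the engaged count is formed and why the observed count falls inside it.

```python
N = 6654  # patients with 12 months of follow-up time

# (point, lower bound, upper bound) transition probabilities from the paper
ACS   = (0.04, 0.02, 0.06)   # probability of ACS referral
ENG_Y = (0.47, 0.37, 0.57)   # engagement if referred
ENG_N = (0.20, 0.16, 0.24)   # engagement if not referred

def engaged_count(i):
    """Chain one bound (0 = point, 1 = lower, 2 = upper) through the model."""
    return N * (ACS[i] * ENG_Y[i] + (1 - ACS[i]) * ENG_N[i])

point, low, high = engaged_count(0), engaged_count(1), engaged_count(2)
print(round(point, 1), round(low, 1), round(high, 1))
# The 1,318 patients observed to engage fall inside the (low, high) band.
```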

Cross-validation

Researchers at a separate academic medical center have developed, validated, and calibrated the Reducing Infections Related to Drug Use Cost-Effectiveness (REDUCE) model, a Monte Carlo microsimulation model [39]. This model can answer questions similar to those we pose here, using estimates derived from published data and from expert sources. In contrast to our model, which uses a cohort defined by opioid use disorder, the REDUCE model simulates data for people who inject drugs. Because estimates for the REDUCE model are derived from a variety of sources in different parts of the country, we expected outcomes from the REDUCE model to differ from ours; we felt these differences were important to understand. To support cross-validation of our model, the research team that developed the REDUCE model generated 4,153 simulated patients admitted to the hospital for the first time. Of those, 36 died while in the hospital (0.9%). Of the 4,117 still alive at hospital discharge, 96 (2.3%) died within 12 months of hospital discharge (95% CI = 1.9%, 2.8%). This is lower than the estimated 928 (13.9%) deaths from our Markov model (Low, High = (5.4%, 22.5%)). There are several important differences between the REDUCE model and ours. First, as previously mentioned, the REDUCE model simulates data for patients who inject drugs, while ours models patients who have OUD more generally. There are important demographic differences between these two groups, including that our model also includes patients with a primary diagnosis of cancer. Next, the percentage of people seen by an ACS in the REDUCE model was higher than in our model: 25% of patients in REDUCE were seen by an ACS versus 4% in ours. The REDUCE model uses data from Boston, where a higher proportion of patients are seen by an ACS. This makes it challenging to interpret REDUCE estimates in the context of Oregon specifically.
Additionally, patients had a higher post-discharge treatment engagement rate in the REDUCE model. In REDUCE, approximately 25.2% of patients receive medication for OUD for at least one week in the month following discharge, versus our model, where 20% of patients not seen by an ACS receive MOUD after discharge. Finally, data from the first simulated admission was used to estimate 12-month mortality from REDUCE; because we matched our cohort controls on the number of previous admissions among patients seen by an ACS, it is possible that our patients were older and sicker than patients who had never previously been admitted to the hospital. While the base model structures are similar, our model is populated with data that provides a focused understanding of addiction consult services in Oregon. Populating our model with different data, including Boston estimates, could provide tailored explorations of ACS in different settings.

External validity

To examine external validity, we used large, high-quality, recent studies of representative populations in independent cohorts of participants to separately validate post-discharge OUD treatment engagement and 12-month drug-related and non-drug related mortality. We simulated a cohort whose size was determined by the outside study and assessed whether our simulated confidence interval (cohort simulation/matrix multiplication method [38]) differed from the observed values or confidence intervals of the published estimates (Table 2). Where there was disagreement, we describe potential causes.
Table 2

Table of results for external validation of Markov model.

Naeger et al. [40]
- Justification for selection: testing in a national dataset (independent data source)
- Part of model evaluated: post-discharge OUD treatment engagement
- Source data: 36,719 patients with an inpatient admission for opioid abuse, dependence, or overdose, 2010 to 2014
- Differences: data from the period just prior to the Oregon Medicaid cohort, so engagement may have been lower; included any prescription for post-discharge MOUD
- Result: cohort simulation predicted 7,343.8 people (Low, High = (5,875, 8,812)) would engage versus 6,132 observed; the modeled range contains the observed point estimate

LaRochelle et al., 2018 [41]
- Justification for selection: testing in a large cohort study (independent data source)
- Part of model evaluated: 12-month drug and non-drug related mortality
- Source data: 17,568 Massachusetts adults without cancer, 2012 to 2014
- Differences: dataset mortality may be lower because patients with cancer were excluded; post-discharge OUD treatment engagement was counted over the full 12 months post-discharge, which may further decrease drug-related deaths
- Result: cohort simulation showed 8.6 non-drug related deaths per 100 person-years (Low, High = (1.5, 13.0)) and 5.4 opioid-related deaths per 100 person-years (Low, High = (4.0, 9.5)); observed all-cause mortality was 4.7 deaths per 100 person-years (95% CI = 4.4, 5.0) and observed opioid-related mortality was 2.1 deaths per 100 person-years (95% CI = 1.9, 2.4); opioid-related deaths may be higher in our model because of a more liberal definition of opioid-related death

Ashman et al. (CDC) [42]
- Justification for selection: testing in a large cohort study (independent data source)
- Part of model evaluated: 12-month all-cause mortality
- Source data: 24,340 patients with an opioid hospitalization across 94 National Hospital Care Survey hospitals; analysis included patients with cancer
- Result: cohort simulation showed 3,394 all-cause deaths (Low, High = (1,324, 5,478)) versus 1,879 (2,295 × 0.819) observed; the modeled range contains the observed point estimate
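The cohort simulation (matrix multiplication) method used for external validation can be sketched as follows. The probabilities are this paper's point estimates, and the result will not match the published validation figures, which used the full fitted model and study-specific inputs.

```python
# Push a cohort vector through a per-stage transition matrix to get expected
# state counts. Probabilities are this paper's point estimates; the result
# will not match the published validation figures, which used the full
# fitted model and study-specific inputs.
def matmul_vec(v, M):
    """Multiply a row vector v by matrix M (lists of lists)."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

n = 36719                       # e.g., the Naeger et al. [40] cohort size
start = [n * 0.04, n * 0.96]    # [referred to ACS, not referred]

# rows: referred / not referred; columns: engaged / not engaged
engage = [[0.47, 0.53],
          [0.20, 0.80]]
engaged, not_engaged = matmul_vec(start, engage)
print(round(engaged, 1), round(not_engaged, 1))
```

Further stages (e.g., mortality conditional on engagement) would be additional matrix multiplications applied to the resulting state vector.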

Predictive validity

All relevant data were included in building the Markov model described in this paper. We have planned analyses to evaluate our model’s predictions against Medicaid claims data for the same cohort of patients seen through December 2020, once those data are released.

Discussion

We built and validated a Markov model that reflects trajectories of care and survival at twelve months for patients hospitalized with OUD in Oregon. We used a Bayesian framework to integrate clinical expertise with data from Oregon Medicaid claims to estimate transition probabilities in our model. After development, we validated our model using ISPOR-SMDM standards, evaluating face validity, internal validity, cross validity, predictive validity and external validity. Compared to the REDUCE model, another model that assesses ACS care delivery, ours provides more context-relevant estimates of post-discharge OUD treatment engagement and 12-month drug and non-drug related mortality in Oregon. Our overall mortality estimate is higher than the REDUCE model’s, which may reflect the severity of illness of people who are older, sicker, have more previous inpatient hospitalizations, and have limited linkage to post-discharge OUD care in Oregon. This is important because one potential use of our populated model is to predict the impact of expanding inpatient ACS care in Oregon; a model populated with Oregon data better reflects the local care setting at baseline and may provide more accurate results following intervention. Additionally, populating our model with different data in different ACS contexts may similarly provide tailored results. This study had several limitations. First, because we sought to build a model that reflected addiction care in Oregon, the model may not be generalizable to other settings. Still, the Oregon experience may help inform modeling in other states with limited ACS uptake, and we used Bayesian estimates from national experts to inform transition probabilities. Second, Medicaid claims data are often inaccurate in classifying patient race and ethnicity; our study estimates may not correctly capture the experience of people of color in Oregon.
Third, we originally planned to use 30-day mortality as an outcome for this study, but we were unable to do so because of limited drug-related mortality in the 30-day post-discharge period; we used 12-month mortality data instead. Fourth, deriving Bayesian priors from expert elicitation may be less than ideal; clinician estimates may be inaccurate. However, in the absence of published priors for our transition probabilities, expert elicitation was an appropriate first step to help answer these research questions. Finally, Medicaid claims data do not separately itemize costs for inpatient delivery of medications for OUD, so it was not possible to tell whether patients received medications for OUD in the hospital outside of an ACS.

This model can be used to evaluate changing scenarios of care wherever healthcare providers, healthcare systems, or policymakers are considering implementing or changing ACS coverage in their systems. Specifically, this model has been used to evaluate the impact of expanding ACS in Oregon on post-discharge treatment engagement, and to estimate the impact of increasing fentanyl in Oregon’s drug supply on post-discharge overdose deaths. Results from these analyses have been submitted to help inform Oregon Medical Association and Oregon Hospital Association decision-making about ACS expansion in the state. The strength of the model comes from the estimates used to populate it, and with recalibration, the model can be adapted to different settings of ACS care delivery. For example, a different hospital with an ACS could estimate transition probabilities by using this model and its own local data. Similarly, hospitals considering ACS expansion could use our Bayesian estimate of ACS effectiveness (derived with a prior from sites across the United States) and combine it with local data for other estimates. This could provide hospitals with an estimate of what might be feasible with ACS implementation.
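One way a hospital might combine this study's Bayesian estimate with its own data is a conjugate beta-binomial update, a deliberate simplification of the paper's Bayesian logistic regression. The sketch below takes the reported post-ACS engagement estimate (47%, 95% CI 37%–57%) as a prior and updates it with hypothetical local counts; the beta approximation of the interval and the local counts are illustrative assumptions, not study data.

```python
import math

def beta_from_estimate(mean, ci_low, ci_high):
    """Approximate a published estimate as a Beta prior by matching the
    point estimate and treating the 95% CI half-width as 1.96 SDs."""
    sd = (ci_high - ci_low) / (2 * 1.96)
    nu = mean * (1 - mean) / sd ** 2 - 1   # effective prior sample size
    return mean * nu, (1 - mean) * nu      # (alpha, beta)

# Prior: this study's engagement-after-ACS-referral estimate,
# 47% (95% CI 37%-57%)
a0, b0 = beta_from_estimate(0.47, 0.37, 0.57)

# Hypothetical local data: 31 of 80 ACS-referred patients engaged
# in post-discharge OUD care (illustrative counts)
engaged, n = 31, 80
a1, b1 = a0 + engaged, b0 + (n - engaged)  # conjugate Beta-Binomial update

post_mean = a1 / (a1 + b1)
post_sd = math.sqrt(a1 * b1 / ((a1 + b1) ** 2 * (a1 + b1 + 1)))
print(f"locally recalibrated engagement: {post_mean:.3f} (sd {post_sd:.3f})")
```

The posterior mean lands between the national prior and the local observed rate, and its spread narrows as local counts accumulate, which is the behavior a hospital recalibrating the model would want.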
In this paper, we describe data that reflect ACS care in Oregon. Using these data, we can model changing scenarios of care in Oregon, from increasing ACS care delivery to implementing drug policy changes, potentially including reducing barriers to naloxone access, implementing safe consumption sites or safe supply interventions, and others. Future research should use this model to evaluate changes in care delivery in Oregon to understand how these changes may impact survival among patients with OUD.
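As an illustration of such scenario analyses, the point estimates reported in this study (4% ACS referral; 47% vs. 20% post-discharge care engagement; 3% vs. 6% drug-related and 7% vs. 9% non-drug-related 12-month mortality) can be propagated through the model's transitions for a cohort. The expanded referral rate below is an illustrative assumption, and this sketch omits the covariate adjustment and uncertainty propagation of the full model, so its death counts should not be read as calibrated predictions.

```python
def expected_deaths(n, p_referral,
                    p_engage_acs=0.47, p_engage_no_acs=0.20,
                    p_drug_death_care=0.03, p_drug_death_no_care=0.06,
                    p_nondrug_death_care=0.07, p_nondrug_death_no_care=0.09):
    """Propagate a cohort of n hospitalized patients with OUD through the
    model's transitions: ACS referral -> post-discharge OUD care ->
    12-month drug- and non-drug-related mortality."""
    referred = n * p_referral
    not_referred = n - referred
    in_care = referred * p_engage_acs + not_referred * p_engage_no_acs
    no_care = n - in_care
    drug = in_care * p_drug_death_care + no_care * p_drug_death_no_care
    nondrug = in_care * p_nondrug_death_care + no_care * p_nondrug_death_no_care
    return drug, nondrug

# Baseline referral (4%) versus a hypothetical expansion to 50%
base = expected_deaths(6654, 0.04)
expanded = expected_deaths(6654, 0.50)
print(f"drug-related deaths averted:     {base[0] - expanded[0]:.0f}")
print(f"non-drug-related deaths averted: {base[1] - expanded[1]:.0f}")
```

Swapping in different transition probabilities (e.g., a fentanyl-era overdose risk, or another state's referral rate) reuses the same machinery, which is the recalibration use case described above.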

Conclusion

Hospitalization is a critical time for patients with OUD, and addiction consult services can help support patients during hospitalization and connect them to post-discharge care. Markov modeling can help researchers, clinical teams, and policymakers understand how changes in care systems might impact patient outcomes. Additionally, our model allows healthcare systems and policymakers to evaluate the impact of ACS on mortality. In this work, we built and validated a Markov model that reflects the trajectories of care and survival for patients hospitalized with OUD in Oregon. Future research should use this work to evaluate state-wide clinical and policy changes that may impact patient survival.

Supporting information

Model fit statistics. (DOCX)

Estimates from classical and Bayesian logistic regression models, and prior-posterior plots. (DOCX)

Non-technical model description. (DOCX)

Decision letter

13 Apr 2021, PONE-D-20-37837: Designing and validating a Markov model for hospital-based addiction consult service impact on 12-month drug and non-drug related mortality. PLOS ONE.

Dear Dr. King,

Thank you for submitting your manuscript to PLOS ONE, and for your continued patience as we completed the first round of the review process. After careful consideration, we feel that your manuscript has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Your manuscript has been assessed by two external experts, who have requested clarification on a number of points regarding the study's methodology and specific contribution to the literature. Please submit your revised manuscript by May 27, 2021.

Kind regards,
Dr Joseph Donlan, Senior Editor, PLOS ONE

Journal requirements: 1. Ensure the manuscript meets PLOS ONE's style requirements, including file naming. 2. Provide additional detail regarding participant consent in the ethics statement, including whether consent was informed and of what type (written or verbal); for a retrospective study of medical records, discuss whether all data were fully anonymized before access and/or whether the IRB waived the requirement for informed consent. 3. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly; either explain such restrictions in detail, with contact information for the body to which data requests may be sent, or deposit the minimal anonymized data set in a public repository. 4. Regarding the Competing Interests statement ("Dr. Korthuis serves as principal investigator for NIH-funded studies that accept donated study medication from Alkermes (extended-release naltrexone) and Indivior (buprenorphine)."), please confirm that this does not alter adherence to PLOS ONE policies on sharing data and materials. 5. Please refer to Figures 3 and 4 in the text so that production can link the reader to each figure.

Reviewers' responses to questions

1. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Partly. Reviewer #2: Partly.
2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: I Don't Know. Reviewer #2: I Don't Know.
3. Have the authors made all data underlying the findings in their manuscript fully available? Reviewer #1: No. Reviewer #2: Yes.
4. Is the manuscript presented in an intelligible fashion and written in standard English? Reviewer #1: Yes. Reviewer #2: Yes.

5. Review comments to the author
Reviewer #1: The current study aimed to develop a Markov model to estimate the impact of engaging opioid use disorder (OUD) patients in hospital-based addiction consult services (ACS) on drug and non-drug related mortality. The topic is indeed very important given the state of the opioid crisis, and the purpose of modeling the effect of such services can be significant in helping other health systems model the impact of creating similar addiction services. My concern is that, as an epidemiologist working in the content area but without much experience with such Markov models, I found the explanation of the process and outcomes to be quite confusing. Admittedly, some of my limitations in understanding may result from my own lack of expertise in these statistical models. However, given PLOS ONE has a wide array of readers from different research areas, it is likely others will also require further explanation of this process, and I provide some comments below with specific questions/suggestions about the methods and elsewhere.

Intro: The 20% of people who die from a drug overdose is an estimate from one study/region; authors should include the words "an estimated 20%" to clarify this is not necessarily a comprehensive/representative rate. The second sentence ends quite abruptly with "also contribute" (contribute to risk of death among this population?). Given some readers of PLOS ONE may not have a background in addiction treatment, the intro should at least give 1-2 more sentences about what it means to start people on MOUD in the hospital and transition them to ongoing care. The fourth paragraph should refer to "simulation modeling" rather than modeling so the reader knows what type of modeling is being referred to.

Methods: The entire order of the methods section is very confusing. It is not clear what piece of the process came first, what estimates are derived from the Oregon data itself vs. the simulated model, or even what data were used for deriving these estimates and the model parameters. I recommend the section begin by describing all the datasets and variables that were used in this analysis and then moving chronologically through the process of generating the model and the estimates. For example, it's unclear if the authors had data on ACS participation or if this was simulated. It is unclear how these data were linked (if at all) - the hospital ACS information, the Medicaid data, and the mortality data. The authors call this an "intention to treat" approach, which is a term used for randomized studies. Since this is not a randomized study, nor a quasi-experimental study, it seems nonsensical to use this word, as the process of whether someone is referred to an ACS does not seem to be random at all. Related to the prior comment, it is not clear how the models are taking into account the different characteristics of people who do and do not get referred to an ACS - for example, a person referred to an ACS can be at a much more advanced stage of OUD and therefore have higher mortality than someone who was never referred - how is that taken into account? Later the authors mention covariates, but there is no explanation of how they get incorporated into the Markov model (again, this may be my limitation in understanding, but an explanation of how/if this comes into play would be helpful). I am wary of this process by which probability parameters are being estimated from expert opinion - clinicians are generally not good at guessing probabilities from their own experiences, so it's unclear how this is a meaningful source rather than estimates that have been derived in the literature, for example. If this is indeed a valid method that has been used in the past to estimate probabilities of this nature, authors should at least explain this process and cite reference to this method being a valid way to generate these simulations. There needs to be greater explanation of how the expert opinion probabilities were integrated with their actual estimates derived from Oregon data to inform the models. Again, further explanation of Markov models may generally help readers from different disciplines better understand the methods for this paper and also the added value of using these models here.

Results: In the first paragraph, who are the 6,654 patients? Those seen by ACS?

Discussion: It would be helpful to include greater discussion around how such modeling adds value - for example, how hospital systems may use such models to make calculations on the value of incorporating such programs, or how hospitals with ACS could use these modeling techniques to better understand their own population and how they can improve their protocols/systems.

Reviewer #2: PDF file attached (review_OUD.pdf). The comments are in the PDF file. The statistical analysis looks complete from the perspective of the data gathered, but it is not clear how this can be used by others in the field, especially by decision-makers in public policy.

6. Do you want your identity to be public for this peer review? Reviewer #1: No. Reviewer #2: No.

Author response (27 May 2021)

Reviewer #1's overall comments, reproduced above, noted that the topic is important but that the explanation of the modeling process and outcomes was confusing for readers without expertise in Markov models. Thank you for reviewing our manuscript.
We have thoroughly revised the manuscript with regards to your points below, and believe this has improved readability, particularly for audiences from different research areas. Intro 1. The 20% of people who die from a drug overdose is an estimate from one study/region - authors should include the word “an estimated 20%” to clarify this is not necessarily a comprehensive/representative rate. We have made this change (Introduction, paragraph 1). 2. The second sentence ends quite abruptly with “also contribute.” (contribute to risk of death among this population?) We have updated to include the following: “Among people with opioid use disorder (OUD), an estimated 20% eventually die of drug overdose (2), but cardiovascular diseases, cancer, and infectious diseases also contribute to mortality rates.” (Introduction, paragraph 1). 3. Given some readers of Plos One may not have a background in addiction treatment, the intro should at least give 1-2 more sentences about what it means to start people on MOUD in the hospital and transition them to ongoing care. We have updated to include the following: ”Hospitalization is a vulnerable time for patients with OUD. People with OUD may leave the hospital before completing recommended medical therapy if withdrawal symptoms are untreated (3). People who withdraw from opioids have lower drug tolerance and increased risk of drug overdose after discharge in the absence of treatment for OUD (4-6). Medications for opioid use disorder (MOUD), including methadone, buprenorphine and naltrexone, can reduce the risk of death from opioid overdose in patients with OUD (7). These medications work as opioid receptor full agonists (methadone), partial agonists (buprenorphine), or antagonists (naltrexone) (8). Despite the success of MOUD to reduce opioid overdose deaths, most hospitalized patients with OUD are not started on MOUD (9, 10), though, when offered, nearly three-quarters of patients with OUD choose to start MOUD (11). 
Interventions to improve initiation of MOUD among hospitalized patients are urgently needed (12).” (Introduction, paragraph 2). 4. The fourth paragraph should refer to “simulation modeling” rather than modeling so the reader knows what type of modeling is being referred to We have made this change (Introduction, paragraph 4). Methods 5. The entire order of the methods section is very confusing. It is not clear what piece of the process came first, what estimates are derived from the Oregon data itself vs. the simulated model or even what data were used for deriving these estimates and the model parameters. I recommend the section begin by describing all the datasets and variables that were used in this analysis and then moving chronologically through the process of generating the model and the estimates. For example, it’s unclear if the authors had data on ACS participation or if this was simulated. It is unclear how these data were linked (if at all) - the hospital ACS information, the Medicaid data, and the mortality data. Related to the prior comment, it is not clear how the models are taking into account the different characteristics of people who do and do not get referred to an ACS - for example, a person referred to an ACS can be at a much more advanced stage of OUD and therefore have higher mortality than someone who was never referred - how is that taken into account? Later the authors mention covariates but there is no explanation of how they get incorporated into the Markov model (again, this may be my limitation in understanding, but an explanation of how/if this comes into play would be helpful) Thank you for this very helpful comment. We have reorganized the methods to make our order of steps clear. For example, we now state: “We developed and validated a Markov model to estimate the impact of ACS care on 12-month mortality among hospitalized patients with OUD (Fig 1). 
We organize our methods in the order of completion: first, we decided on model structure, next we used available data to populate the model, and last, we validated the model. As such, we describe: 1) model structure, 2) model data, and 3) model validation.” (Methods Paragraph 2). We then use numbering to delineate each step, and provide additional introduction sections where we thought there was confusion. For example, at the beginning of the model data step, we wrote the following, which also highlights the integration of observed ACS data, Oregon Medicaid data, and Vital Statistics data: “We had multiple goals in using data to populate our model. We needed a dataset of patients, where some patients were referred to ACS and some were not. Then, we needed to be able to match ACS patients to controls as one way to account for some confounding. We needed the dataset to follow both patients referred to ACS, and those not, through 12 months after hospital discharge. Finally, we needed the dataset to have additional covariates to control for additional confounding, which we planned to do via logistic regression models at each transition point. Below, we describe the integration of OHSU’s ACS dataset with Oregon Medicaid data and Vital Statistics data, to achieve our goal for a dataset for model population. Because we wanted to incorporate national estimates into our dataset, we used Bayesian logistic regression to integrate expert opinion into our estimates. We describe each of these steps below.” (Methods Paragraph 8) We have also clarified where data come from simulated versus observed estimates, for example: “Among this subset, at twelve months, 114 (1.7%) participants died from drug-related causes and 408 (6.1%) died from non-drug related causes.
Participant demographics of observed data are included in Table 1.” (Results Paragraph 1) We included the following sentence to highlight that ACS referral data were observed: “Because OHSU IMPACT was the only ACS in Oregon during the study window, we used the IMPACT registry, which tracks all referrals, to identify patients with Oregon Medicaid who were referred to ACS.” (Methods Paragraph 10) Confounding was accounted for by extracting estimates from logistic regression models that accounted for confounders. So, for example, we used logistic regression to evaluate the impact of post-discharge MOUD on 12-month mortality, accounting for confounders. From the adjusted model, we extracted the marginal probability of post-discharge MOUD; this is our base transition probability. The Bayesian model uses the national estimate from experts as our prior, and integrates the marginal probability from the logistic regression model, to give us an updated posterior probability that incorporates both estimates. We have updated the manuscript to better describe this for a lay audience: “We used a Bayesian approach to obtain transition probabilities for our Markov model using Oregon data, and describe this below. In short, we wanted to integrate expert information nationally with estimates from Oregon data for each of our three transition steps: ACS referral, post-discharge MOUD, and 12-month mortality. We also needed to adjust for confounding at each transition point. Bayesian logistic regression allowed us to accomplish this goal. We were able to run logistic regression models for each transition point, using the transition as the outcome (i.e. an outcome of 12-month post-discharge mortality) and the prior step as the primary covariate of interest (i.e. 30-day post-discharge MOUD), adjusting for all other covariates in the model. We extract a marginal probability from this logistic regression model; this is our Bayesian likelihood.
We used information from experts in addiction (described below) as our prior. The Bayesian approach allows the integration of the prior and likelihood to estimate a posterior probability, which we use as our transition probability.” (Methods, Paragraph 12) Length of time with OUD was not available as a confounder, and is not possible to accurately calculate from Oregon Medicaid data. 6. The authors call this an “intention to treat” approach - which is a term used for randomized studies. Since this is not a randomized study, nor a quasi-experimental study, it seems nonsensical to use this word as the process of whether someone is referred to an ACS does not seem to be random at all. Thank you - we agree. We have updated to read as follows: “For this model, all patients referred to ACS were included regardless of level of care engagement or specific services received.” (Methods, Paragraph 2). 7. I am wary of this process by which probability parameters are being estimated from expert opinion - clinicians are generally not good at guessing probabilities from their own experiences so it’s unclear how this is a meaningful source rather than estimates that have been derived in the literature for example? If this is indeed a valid method that has been used in the past to estimate probabilities of this nature, authors should at least explain this process and cite reference to this method being a valid way to generate these simulations. Thank you for this comment. ACS are a novel intervention, and prior to this manuscript, there were no published estimates of the parameters we sought to evaluate for ACS involvement. Thus, we chose to use expert elicitation to derive study estimates. Models often rely on expert opinion when there is a lack of available data to inform model parameters. We agree that expert opinion may be less than optimal; however, the Bayesian process we used helps account for some of the uncertainty that comes from incorporating expert opinion.
We have updated the methods and limitations to describe this (Methods, Paragraph 11; Discussion, Paragraph 3). “Because of the novelty of ACS, few published papers existed from which we could have derived prior estimates of transition probabilities for Bayesian analysis. Thus, we used expert elicitation to capture prior information for our models. The Bayesian process helped account for some of the uncertainty.” (Methods, Paragraph 11) “Fourth, deriving Bayesian priors from expert elicitation may be less than ideal; clinician estimates may be inaccurate. However, in the absence of published priors available for our transition probabilities, expert elicitation was an appropriate first step to help answer these research questions.” (Discussion, Paragraph 3) 8. There needs to be greater explanation of how the expert opinion probabilities were integrated with their actual estimates derived from Oregon data to inform the models? Again, further explanation of Markov models may generally help readers from different disciplines better understand the methods for this paper and also the added value of using these models here. Thank you. We have updated to include the following: “We used a Bayesian approach to obtain transition probabilities for our Markov model using Oregon data, and describe this below. In short, we wanted to integrate expert information nationally with estimates from Oregon data for each of our three transition steps: ACS referral, post-discharge MOUD, and 12-month mortality. We also needed to adjust for confounding at each transition point. Bayesian logistic regression allowed us to accomplish this goal. We were able to run logistic regression models for each transition point, using the transition as the outcome (i.e. an outcome of 12-month post-discharge mortality) and the prior step as the primary covariate of interest (i.e. 30-day post-discharge MOUD), adjusting for all other covariates in the model.
We extract a marginal probability from this logistic regression model - this is our Bayesian likelihood. We used information from experts in addiction (described below) as our prior. The Bayesian approach allows the integration of the prior and likelihood to estimate a posterior probability, which we use as the transition probability in our model.” (Methods, Paragraph 12)

Results

9. In the first paragraph, who are the 6,654 patients? Those seen by ACS?

Thank you - this sentence was confusing. We have rewritten as follows: “There were 8,450 patients admitted at least once with OUD in Oregon from April 2015 through August 2018. A subset of 6,654 patients were seen by January 1st, 2018. Among this subset, at twelve months, 114 (1.7%) participants died from drug-related causes and 408 (6.1%) died from non-drug related causes.” (Results, Paragraph 1)

Discussion

10. It would be helpful to include greater discussion around how such modeling adds value - for example, how hospital systems may use such models to make calculations on the value of incorporating such programs, or how hospitals with ACS could use these modeling techniques to better understand their own population and how they can improve their protocols/systems.

We have updated the final paragraph of the Discussion to read as follows: “This model can be used to evaluate changing scenarios of care in spaces where healthcare providers, healthcare systems, or policymakers are considering implementing or changing ACS coverage in their applicable system. Specifically, this model has been used to evaluate the impact of expanding ACS in Oregon on post-discharge treatment engagement, and to estimate the impact of increasing fentanyl contamination in Oregon’s drug supply on post-discharge overdose deaths. Results from these analyses have been submitted to help inform Oregon Medical Association and Oregon Hospital Association decision-making about ACS expansion in the state.
The strength of the model comes from the estimates used to populate it, and with recalibration, the model can be adapted to different settings of ACS care delivery. For example, a different hospital with an ACS could estimate transition probabilities by using this model and their own local data. Similarly, hospitals considering ACS expansion could use our Bayesian estimate of ACS effectiveness (derived with a prior from sites across the United States) and combine this with local data for other estimates. This could provide hospitals with an estimate of what might be feasible with ACS implementation. In this paper, we describe data that reflect ACS care in Oregon. Using these data, we can model changing scenarios of care in Oregon, from increasing ACS care delivery to implementing drug-policy related changes, potentially including reducing barriers to naloxone access, implementing safe consumption sites or safe supply interventions, and others. Future research should use this model to evaluate changes in care delivery in Oregon to understand how these changes may impact survival among patients with OUD.” (Discussion, Paragraph 4)

11. Reviewer #2: The statistical analysis looks complete from the perspective of the data gathered, but it is not clear how this can be used by others in the field, especially by decision-makers in public policy.

Thank you. We have updated the Discussion; please see comment 10 above from Reviewer #1. Additionally, we have included Appendix 3, which is a lay summary for healthcare policymakers who may wish to use the model to estimate policy changes.

12. The paper presents a Markov-chain based model to analyze data from patients suffering from OUD. A Bayesian regression-based model is used to estimate the transition probabilities of different stages. The data set is quite large, and the work is described well.
While I do believe this work will benefit decision makers, the authors definitely need to do some additional analysis of what the impact is of the consultation service. What is the difference in mortality when they take the consultation service and/or do not use the post-discharge treatment? Without this analysis, it is unclear what gap in the literature is filled and what the motivation is for this research: it just provides some data, but it is not clear how it may be used by decision-makers in public policy. It is possible to use steady-state probabilities to determine the long-run survival and mortality rates (see e.g., Gosavi et al., 2020). Further, even some basic analysis is possible from the probabilities in Fig 2: what is the probability that someone who took advantage of the consulting service survived, versus that of someone who did not?

Thank you. We agree. We had two goals for this work: to derive a simulation model using Bayesian techniques that could estimate post-discharge mortality, and to derive estimates of mortality under different conditions to answer pertinent health policy questions. This manuscript addresses the first question. To answer the second, we conducted several other analyses, which we have described in publications that have been submitted elsewhere. These manuscripts estimate changes in treatment engagement and mortality outcomes in the context of ACS engagement. The first explores the impact of ACS expansion through Medicaid Coordinated Care Organizations in Oregon on post-discharge treatment engagement. The second evaluates the impact of increasing fentanyl contamination in Oregon on post-discharge mortality, and elucidates the role ACS expansion could play in decreasing mortality. We originally considered combining all estimates into one paper.
However, the work in this manuscript is itself novel in constructing the model, and, for maximum impact, we wanted to write the additional manuscripts with only as much technical information as was necessary to interpret model results. To achieve this balance, we decided to submit this manuscript as a model development and validation paper, with subsequent manuscripts geared at using the model to answer pertinent policy questions. In case of interest, abstracts from these two additional manuscripts are as follows:

Expanding inpatient Addiction Consult Services through Accountable Care Organizations for Medicaid enrollees: A modeling study

Introduction: Addiction Consult Services (ACS) care for patients with opioid use disorder (OUD) in the hospital. Medicaid Accountable Care Organizations (ACOs) could enhance access to ACS. This study extends data from Oregon's only ACS to Oregon's 15 regional Medicaid Coordinated Care Organizations (CCOs) to illustrate the potential value of enhanced in- and out-patient care for hospitalized patients with OUD. The study objectives were to estimate the effects of 1) expanding ACS care through CCOs in Oregon, and 2) increasing community treatment access within CCOs, on post-discharge OUD treatment engagement.

Methods: We used a validated Markov model, populated with Oregon Medicaid data from April 2015 to December 2017, to estimate study objectives.

Results: Oregon Medicaid patients hospitalized with OUD with care billed to a CCO (n=5,878) included 1,298 (22.1%) patients engaged in post-discharge OUD treatment. Simulation of referral to an ACS increased post-discharge OUD treatment engagement to 47.0% (95% CI 45.7%, 48.3%), or 2,684 patients (95% CI 2,610, 2,758). Ten of fifteen (66.7%) CCOs had fewer than 20% of patients engage in post-discharge OUD care.
Without ACS, increasing outpatient treatment such that 20% of patients engage increased the proportion of patients engaging in post-discharge OUD care from 12.9% (296 patients) at baseline to 20% (95% CI 18.1%, 21.4%), or 453 patients (95% CI 416, 491).

Discussion: ACOs can improve care and coordination for patients hospitalized with OUD. Implementing ACS in ACO networks can potentially improve post-discharge OUD treatment engagement, but community treatment systems must be prepared to accept more patients as inpatient addiction care improves.

Hospitalizations, drug supply contamination, and 12-month post-discharge drug-related mortality among patients with opioid use disorder: modeling the role of Addiction Consult Services

13. Typo: In Discussion: “Second, claims data is often inaccurate in classifying patient race and ethnicity” → Do you mean “…claims that data…”? There is a grammar error here. Please fix. Also, please note that data are, and not is.

Thank you. We were referring to Medicaid claims data and have clarified. We have also changed from data “is” to data “are” throughout. (Discussion, Paragraph 3)

14. Gosavi, A., Murray, S.L., and Karagiannis, N. (2020). A Markov Chain Approach for Forecasting Progression of Opioid Addiction. Proceedings of the Industrial and Systems Engineering Annual Conference, Virtual, L. Cromarty, R. Shirwaiker, P. Wang, eds.

Thank you for this helpful citation - we have added this where appropriate (for example, Introduction, Paragraph 4).

Submitted filename: Response to Reviewers PLoS One.docx

17 Aug 2021

Designing and validating a Markov model for hospital-based addiction consult service impact on 12-month drug and non-drug related mortality

PONE-D-20-37837R1

Dear Dr. King,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.
An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible - no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Sungwoo Lim, DrPH
Academic Editor
PLOS ONE

Additional Editor Comments (optional): Please make sure to include a summary about validation findings in the abstract as well as the discussion section.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3.
Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have addressed my concerns and the manuscript is significantly improved. My only suggestion is that the authors refer to their other policy-oriented papers that came out of this work in the manuscript so readers can know where to find them.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?).
If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

31 Aug 2021

PONE-D-20-37837R1

Designing and validating a Markov model for hospital-based addiction consult service impact on 12-month drug and non-drug related mortality

Dear Dr. King:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Sungwoo Lim
Academic Editor
PLOS ONE
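The prior-plus-likelihood updating described in the authors' responses to comments 7 and 8 can be sketched in code. The sketch below is a deliberately simplified conjugate Beta-binomial stand-in for the authors' actual approach (Bayesian logistic regression with covariate adjustment and marginal probabilities), and every number in it is hypothetical, chosen only to show how an expert-elicited prior and observed data combine into a posterior transition probability.

```python
def beta_posterior(prior_mean, prior_n, successes, trials):
    """Update a Beta prior with binomial data.

    The prior is parameterized by its mean and an "effective sample size"
    (how many observations the expert opinion is worth). Returns the
    posterior (alpha, beta) parameters and the posterior mean, which would
    serve as a transition probability in a Markov model.
    """
    alpha0 = prior_mean * prior_n          # prior "successes"
    beta0 = (1.0 - prior_mean) * prior_n   # prior "failures"
    alpha = alpha0 + successes             # conjugate update: add observed counts
    beta = beta0 + (trials - successes)
    return alpha, beta, alpha / (alpha + beta)

# Hypothetical example: experts believe ~40% of ACS-referred patients engage
# in post-discharge care (opinion weighted as 50 patients' worth of data);
# hypothetical local claims data show 130 of 270 referred patients engaged.
a, b, post_mean = beta_posterior(prior_mean=0.40, prior_n=50, successes=130, trials=270)
```

Because the prior carries only 50 patients' worth of weight against 270 observed patients, the posterior mean sits much closer to the data (130/270 ≈ 0.48) than to the expert prior (0.40), which is how the Bayesian process tempers the uncertainty of expert elicitation that the authors describe.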

Related publications from this work:

1. King CA, Cook R, Wheelock H, Korthuis PT, Leahy JM, Goff A, Morris CD, Englander H. Simulating the impact of Addiction Consult Services in the context of drug supply contamination, hospitalizations, and drug-related mortality. Int J Drug Policy. 2021.

2. King C, Cook R, Korthuis PT, Morris CD, Englander H. Causes of Death in the 12 Months After Hospital Discharge Among Patients With Opioid Use Disorder. J Addict Med. 2021.

3. King CA, Cook R, Korthuis PT, McCarty D, Morris CD, Englander H. Expanding Inpatient Addiction Consult Services Through Accountable Care Organizations for Medicaid Enrollees: A Modeling Study. J Addict Med. 2022.
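Reviewer #2 asked for a basic survival comparison from the transition probabilities. Using only the point estimates reported in the paper's abstract (47% vs. 20% post-discharge engagement with vs. without ACS referral; 3% vs. 6% drug-related and 7% vs. 9% non-drug-related 12-month mortality by engagement status), a back-of-envelope version of that comparison can be sketched as below. It ignores the credible intervals, covariate adjustment, and competing-risk handling of the actual model, so it is an illustration rather than the authors' analysis.

```python
# 12-month mortality point estimates from the abstract, by post-discharge
# OUD care engagement status.
P_DRUG_DEATH = {"engaged": 0.03, "not_engaged": 0.06}
P_NONDRUG_DEATH = {"engaged": 0.07, "not_engaged": 0.09}

def survival_12mo(p_engage):
    """P(alive at 12 months) as a mixture over engagement status."""
    s_engaged = 1.0 - P_DRUG_DEATH["engaged"] - P_NONDRUG_DEATH["engaged"]
    s_not = 1.0 - P_DRUG_DEATH["not_engaged"] - P_NONDRUG_DEATH["not_engaged"]
    return p_engage * s_engaged + (1.0 - p_engage) * s_not

# Engagement was 47% among patients referred to an ACS vs. 20% among those
# not referred.
surv_acs = survival_12mo(0.47)
surv_no_acs = survival_12mo(0.20)
```

With these point estimates, expected 12-month survival is roughly 87.4% with ACS referral versus 86.0% without, a difference of about 1.4 percentage points driven entirely by the higher probability of engaging in post-discharge care.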
