Literature DB >> 36240201

A measurement invariance analysis of selected Opioid Overdose Knowledge Scale (OOKS) items among bystanders and first responders.

James A Swartz1, Qiao Lin2, Yerim Kim1.   

Abstract

The Opioid Overdose Knowledge Scale (OOKS) is widely used as an adjunct to opioid education and naloxone distribution (OEND) for assessing pre- and post-training knowledge. However, the extent to which the OOKS performs comparably for bystander and first responder groups has not been well determined. We used exploratory structural equation modeling (ESEM) to assess the measurement invariance of an OOKS item subset when used as an OEND training pre-test. We used secondary analysis of pre-test data collected from 446 first responders and 1,349 bystanders (N = 1,795) attending OEND trainings conducted by two county public health departments. Twenty-four items were selected by practitioner/trainer consensus from the original 45-item OOKS instrument with an additional 2 removed owing to low response variation. We used exploratory factor analysis (EFA) followed by ESEM to identify a factor structure, which we assessed for configural, metric, and scalar measurement invariance by participant group using the 22 dichotomous items (correct/incorrect) as factor indicators. EFA identified a 3-factor model consisting of items assessing: basic overdose risk information, signs of an overdose, and rescue procedures/advanced overdose risk information. Model fit by ESEM estimation versus confirmatory factor analysis showed the ESEM model afforded a better fit. Measurement invariance analyses indicated the 3-factor model fit the data across all levels of invariance per standard fit statistic metrics. The reduced set of 22 OOKS items appears to offer comparable measurement of pre-training knowledge on opioid overdose risks, signs of an overdose, and rescue procedures for both bystanders and first responders.


Year:  2022        PMID: 36240201      PMCID: PMC9565426          DOI: 10.1371/journal.pone.0271418

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Background

Although overshadowed over the past two years by the rapid, global emergence of COVID-19 as a major public health threat, the decades-long opioid epidemic continues unabated, worsening in 2020. For the 12-month period ending in May 2020, 81,243 opioid overdose-related fatalities (OORF) were reported to the CDC, representing the largest 12-month increase in fatalities since 2015 [1]. At least part of the recent trend reversal in OORF is directly related to COVID-19 [2]. The pandemic increased stress, isolation, and loneliness, as well as housing instability, and made accessing drug treatment more difficult [3]. Given the enduring nature of the opioid epidemic, however, and despite the lessening of the COVID-19 pandemic over the past year in the US, opioid misuse and OORF remain substantial public health concerns [4]. Prominent among CDC recommendations for reducing OORF is to “expand the provision and use of naloxone and overdose prevention education” with an emphasis on raising awareness about the “critical need for bystanders to have naloxone on hand and use it during an overdose” [1]. Expanding community naloxone access has been one of three US Department of Health and Human Services priority areas for addressing the opioid crisis [5]. Providing take-home naloxone (THN), a competitive opioid antagonist that can rapidly reverse the life-threatening effects of an opioid-related overdose, to non-medically trained “bystanders” (a heterogeneous group that can include opioid users, their social networks and family members, and social service staff) has become an important component of expanding naloxone access [6-8]. THN has become more widespread over the past 5 years, abetted by the passage of “Good Samaritan” laws; increased availability through pharmacies, hospitals, and emergency departments; Medicaid expansion under the Affordable Care Act; and the availability of a nasal spray, obviating the need for administration by injection [9-11].
Despite increased availability, there continues to be a need for further expansion of community-based overdose education and naloxone distribution (OEND) for bystanders as well as for non-medical first responders such as the police. Though studies of whether THN is a moral hazard that increases risky opioid use (i.e., risk compensation) have yielded mixed results, the preponderance of the evidence suggests that, on balance, THN reduces OORF [5,12,13]. Moreover, while it is possible that the effectiveness of providing THN is enhanced when combined with education and training on recognizing the signs of an overdose and how to correctly administer naloxone, the additional benefit of such education and training has not been well established [5,13-18]. As the continuation of THN programs that incorporate education and training is likely, and because there is no single standardized set of instructional materials or manualized training protocol, it is important to have validated instruments for assessing whether any specific training effectively increases basic knowledge of recognizing the signs of an opioid overdose and of how to correctly administer naloxone for the intended audience, whether bystanders or first responders. Valid information on whether education and training effectively increase knowledge and skills is a necessary prerequisite to determining whether OEND contributes to reductions in OORF beyond naloxone distribution alone, for whom it does so, and which training content and materials are most effective. The Opioid Overdose Knowledge Scale (OOKS) was the first validated instrument of which we are aware to be used in conjunction with OEND [19], although other instruments have become available more recently [20]. Consisting of 45 items in its original version, the OOKS continues to be frequently cited in studies that evaluate THN/OEND trainings [21-23].
The OOKS has been translated into several languages for use in European countries, and a short, 12-item form (i.e., the Brief Opioid Overdose Knowledge Scale [BOOK]) has been developed, as well as a version adapted to assess knowledge specific to prescription opioids [24-26]. Despite its popularity, there remain questions about whether the OOKS is optimally constructed for use with bystanders as well as first responders. In the original study, the OOKS was administered both to bystanders (i.e., small samples of friends and family members of heroin users; N = 42) and to healthcare professionals (N = 56). The BOOK was developed based on larger samples of illicit opioid users and patients prescribed an opioid for chronic pain treatment [N = 848; Dunn, Barrett [24]]. In both studies, most participants would be considered bystanders, with the status of the trained healthcare professionals unclear. Police, who are very often the first responders when overdoses occur, were not involved in the development of the OOKS or the BOOK. Considering the continuing need to assess the effectiveness of different OEND training strategies for groups of participants with varying knowledge and professional training, this study sought to evaluate the performance of a selected subset of OOKS items when used as a training pre-test for first responders as well as bystanders. To the best of our knowledge, there has not been a measurement invariance analysis of the OOKS to assess whether it measures basic pre-training knowledge equivalently for these two groups, for whom naloxone education and training are commonly conducted. To address this gap, we conducted a multigroup analysis of OOKS item equivalence by assessing levels of measurement invariance (i.e., configural, metric, and scalar) for first responders and bystanders.
If the OOKS demonstrates measurement invariance, then one version would be suitable for use with both groups, whereas lack of measurement invariance would suggest that either different versions of the OOKS containing different item subsets or different scoring thresholds for bystanders and first responders would be more appropriate. We also wanted to re-examine the OOKS item factor structure using a relatively new analytic tool, exploratory structural equation modeling (ESEM), which combines elements of exploratory factor analysis and confirmatory factor analysis [27,28]. We describe the potential advantages of ESEM in the analysis section below.

Methods

This study was a secondary analysis of de-identified pre-test data collected as part of an evaluation of a project to reduce OORF through increasing THN availability and conducting OEND trainings in Illinois communities with high OORF rates. The University of Illinois Chicago IRB determined the study did not constitute human subjects research and granted an exemption. This determination was made on the basis that there was no interaction between the investigators and study participants and that the analyses would be restricted to fully de-identified data. Given the study’s exempt status, the requirement for obtaining written or verbal consent was also waived by the IRB.

Study setting and sample

The Substance Abuse and Mental Health Services Administration (SAMHSA) provided funding for a five-year project (2016–2021) to reduce OORF among users of any type of opioid, including street drugs such as heroin or fentanyl. Group trainings on naloxone administration and recognizing an overdose were scheduled as needed at each of the six participating sites based on outreach efforts in participating communities. As part of the project evaluation, two sites agreed to administer pre- and post-test quizzes to assess knowledge gain and training effectiveness. A 24-item version of the OOKS was used and required about 10 minutes to administer. One site trained both first responders (N = 498) and bystanders (N = 506) whereas the second site exclusively trained bystanders (N = 1,137). We eliminated data from participants with more than 3 missing responses or with a response pattern indicating rote responding, such as selecting the same response option across all items (N = 346, 16.2%). This yielded a final analytic sample of 1,795 participants composed of 446 (24.8%) first responders and 1,349 (75.2%) bystanders. First responders were predominantly police whereas the composition of the bystander groups varied considerably. Aggregate data provided by the site that conducted bystander trainings indicate the following types of bystanders participated: active drug users/syringe exchange program clients; family members and friends of drug users; staff working in substance use treatment programs; staff in human services agencies such as shelters and transitional living programs; and staff working in public institutions such as schools, restaurants, and libraries. In total, the two participating sites conducted 109 trainings between March 10, 2018 and January 20, 2020.
The mean number of participants per training was 36.1 (sd = 25.3), with the size of bystander trainings larger and more variable (mean = 38.7, sd = 27.1) compared with first responder trainings (mean = 24.9, sd = 9.8; t(df = 2,029) = 10.0, p < .001). We did not collect individual demographic information to preserve training participant anonymity. However, aggregated data indicate the following demographic composition of the study sample: most participants were male (80.1%); white (82.1%); and between the ages of 25 to 44 (57.2%) with smaller proportions reporting they were 18 to 24 years old (19.4%) or 45 to 64 years old (10.6%). About 8.3% were African American/Black with 12.7% indicating Latino/Latinx ethnicity.

Statistical methods

Variables

Because training content and delivery varied by trainer, setting, and participant composition, we restricted our analyses to the pre-tests to eliminate variance owing to training-related factors that could affect post-test responses. At project outset, a panel composed of site directors and training practitioners reviewed the original OOKS for use in the evaluation of trainings. Because of the limited time for trainings, 50 to 60 minutes at most, the review panel shortened the 45-item OOKS by removing 21 items they believed were redundant or less informative to the project training goals. After preliminary analyses of the collected training data, we removed two additional items ("call an ambulance" and "stay with the person until an ambulance arrives" when managing an overdose) that were answered correctly by 98% or more of participants in each group and consequently lacked variability, causing convergence problems during the measurement invariance analyses. The resulting 22 items for the modified version of the OOKS are shown in Table 1 with the original OOKS section and item number provided parenthetically.
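The item-removal step described above can be sketched as a simple screen on response variability. This is an illustrative sketch only; the item names and mini-dataset below are hypothetical, not the study's actual data.

```python
# Sketch: drop near-constant items before invariance modeling, mirroring the
# removal of the two items answered correctly by >= 98% of participants.
# Item names and responses below are illustrative, not the study's dataset.

def drop_low_variability_items(scores, threshold=0.98):
    """scores maps item name -> list of 0/1 correct indicators.
    Returns the item names whose proportion correct is below `threshold`."""
    kept = []
    for item, responses in scores.items():
        p_correct = sum(responses) / len(responses)
        if p_correct < threshold:
            kept.append(item)
    return kept

# Hypothetical mini-dataset: "C1" mimics an item nearly everyone answers correctly.
scores = {
    "C1_call_ambulance": [1] * 99 + [0],       # 99% correct -> dropped
    "B1_bloodshot_eyes": [1] * 32 + [0] * 68,  # 32% correct -> kept
}
kept_items = drop_low_variability_items(scores)
```

In practice the same screen could be run per group, since an item that is near-constant in either group can cause the convergence problems noted above.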
Table 1

Percentage correct responses to selected OOKS items by OEND training group.

| Selected OOKS Pre-test Items (values are % correct) | First Responders, Site A (N = 446) | Bystanders, Site A (N = 412) | Bystanders, Site B (N = 937) | Total (N = 1,795) | Sig (FR vs BYS) | Sig (BYS A vs BYS B) |
|---|---|---|---|---|---|---|
| Which of the following factors increase the risk of a heroin (opioid) overdose? | | | | | | |
| Switching from smoking to injecting heroin (A2) | 82.7 | 77.5 | 77.9 | 79.0 | NS | NS |
| Using heroin with other substances, such as alcohol or sleeping pills (A3) | 96.6 | 95.6 | 97.6 | 96.9 | NS | NS |
| Increase in heroin purity (A4) | 95.1 | 94.4 | 95.1 | 94.9 | NS | NS |
| Using heroin again soon after release from prison (A8) | 85.0 | 77.0 | 75.0 | 78.0 | *** | NS |
| Using heroin again after a detoxification treatment (A9) | 90.4 | 85.0 | 84.0 | 85.8 | ** | NS |
| Which of the following are indicators of an opioid overdose? | | | | | | |
| Having blood-shot eyes (B1) | 39.9 | 27.4 | 30.4 | 32.1 | *** | NS |
| Slow or shallow breathing (B2) | 95.1 | 88.1 | 84.2 | 87.8 | *** | NS |
| Lips, hands or feet turning blue (B3) | 89.0 | 78.9 | 79.8 | 81.9 | *** | NS |
| Loss of consciousness (B4) | 97.3 | 94.0 | 91.1 | 93.3 | *** | NS |
| Deep snoring (B7) | 66.8 | 55.2 | 49.6 | 55.2 | *** | NS |
| Very small pupils (B8) | 82.3 | 73.6 | 69.9 | 73.8 | *** | NS |
| Agitated behaviour (B9) | 47.3 | 38.7 | 33.2 | 38.0 | *** | NS |
| Rapid heartbeat (B10) | 46.0 | 33.4 | 29.6 | 34.5 | *** | NS |
| Which of the following should be done when managing a heroin (opioid) overdose? | | | | | | |
| Call an ambulance (C1)† | 99.8 | 98.1 | 97.1 | 97.9 | ** | NS |
| Stay with the person until an ambulance arrives (C2)† | 99.3 | 97.3 | 98.8 | 98.6 | NS | NS |
| Give stimulants (e.g. cocaine or black coffee) (C5) | 56.1 | 50.4 | 52.6 | 53.0 | NS | NS |
| Place the person in the recovery position (on their side with mouth clear) (C6) | 92.8 | 80.2 | 79.6 | 83.0 | *** | NS |
| Put the person in bed to sleep it off (C11) | 69.5 | 54.2 | 55.6 | 58.7 | *** | NS |
| What is naloxone used for? | | | | | | |
| To reverse the effects of any overdose (D4) | 63.9 | 46.5 | 38.2 | 46.5 | *** | ** |
| How can naloxone be administered? | | | | | | |
| Into mouth or swallowed orally (E4) | 79.4 | 63.0 | 45.8 | 58.1 | *** | *** |
| How long do the effects of naloxone last? | | | | | | |
| 2 to 6 hours (H3) | 27.3 | 15.7 | 21.1 | 21.4 | *** | NS |
| Please indicate which of the following statements are correct | | | | | | |
| If the first dose of naloxone has no effect a second dose can be given (I1) | 72.2 | 52.3 | 49.8 | 56.0 | *** | NS |
| Someone can overdose again even after having received naloxone (I3) | 79.6 | 72.6 | 70.2 | 73.1 | *** | NS |
| Naloxone can provoke withdrawal symptoms (I6) | 50.2 | 43.8 | 40.6 | 43.7 | ** | NS |

Note. All figures shown are percentages of correct responses for each item by participant group and site. The labels in parentheses by each item reference the original OOKS instrument item section and number. All significance tests are based on Pearson chi-square tests with 2 degrees of freedom. Only results significant at p < .01 are reported. The first set of significance tests compares the combined set of bystanders with first responders. The second set compares the percent of correct responses for bystanders at site A with bystanders at site B.

†These two items, "Call an ambulance" and "Stay with the person until an ambulance arrives", were removed from the final version given the very high correct response rate across all participants and the resulting convergence issues caused when these items were included in the modeling steps.

** = p < .01

*** = p < .001, NS = Non-significant.

The pre-test was administered as a self-report paper questionnaire. For each item, participants were asked to indicate true, false, or unsure/unknown. The self-reported data were then entered into REDCap [29], downloaded, and scored as correct or incorrect according to the answer key provided with the original OOKS [19]. All unsure/don’t know responses were coded as incorrect, effectively converting the items from trichotomous to dichotomous.
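The scoring rule just described (unsure/don't know coded as incorrect, converting trichotomous responses to dichotomous indicators) can be sketched as follows. The answer-key values shown are hypothetical placeholders, not the official OOKS key.

```python
# Sketch of the scoring step: trichotomous responses (true / false / unsure)
# are scored against an answer key, with "unsure" always coded incorrect,
# yielding the 0/1 factor indicators used in the invariance models.
# The key entries below are hypothetical, not the published OOKS key.

def score_item(response, correct_answer):
    """Return 1 if the response matches the key; 'unsure' is always 0."""
    if response == "unsure":
        return 0
    return 1 if response == correct_answer else 0

answer_key = {"A2": "true", "C5": "false"}   # hypothetical key entries
responses = {"A2": "true", "C5": "unsure"}   # one participant's answers

scored = {item: score_item(responses[item], answer_key[item])
          for item in answer_key}
```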

Measurement invariance analysis

We used Stata version 17.0 for data screening and generating bivariate statistics [30]. We first ran descriptive analyses to assess percentage correct by item as well as the number of total correct responses, disaggregated into three groups: bystanders trained at site A; first responders trained at site A; and bystanders trained at site B. For each item, a bivariate logistic regression was used to compare the odds of a correct response by group. We also calculated a mean score for each group based on the total correct and compared these using a one-way ANOVA. To assess the 22 OOKS items for measurement invariance, we first ran an exploratory factor analysis (EFA) to determine the number of factors for the pre-test instrument. Although a factor structure has been identified for the original 45-item OOKS [19], we wanted to re-assess this given we were using an item subset. We examined the factor structure in several ways. Using principal factor analysis with geomin rotation and robust weighted least squares in Mplus version 8.7 [31], we generated models with 2 to 6 factors and compared the CFI, TLI, RMSEA, and SRMR fit statistics for each model. We also conducted parallel analysis in the R software program version 4.1.3 [32] using the fa.parallel function in the psych package for R [33], assessing polychoric correlations given the factor indicators were binary (correct/incorrect). Based on analyses of the relative accuracy of different metrics for determining the number of factors in an EFA under varying circumstances (e.g., sample size, underlying number of factors, factor correlations and loadings), we gave greater weight to the RMSEA statistic [34] in selecting a final model. Our goal was to find the model with the minimum number of factors that still provided adequate fit to the data. We then used exploratory structural equation modeling (ESEM) to test levels of measurement invariance for the selected factor model.
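The parallel-analysis step can be illustrated with a minimal Python sketch. This is not the psych::fa.parallel procedure used in the study: it compares eigenvalues of Pearson (rather than polychoric) correlation matrices against random data of the same shape, which conveys the logic of Horn's method on continuous toy data.

```python
# Minimal sketch of Horn's parallel analysis: retain factors whose observed
# eigenvalues exceed the mean eigenvalues of random data of the same shape.
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues, sorted descending.
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Average eigenvalues across simulated uncorrelated datasets.
    rand_eigs = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.standard_normal((n, p))
        rand_eigs += np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    rand_eigs /= n_sims
    return int(np.sum(obs_eigs > rand_eigs))

# Toy example: 3 latent factors, each driving a block of 4 indicators.
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 3))
loadings = np.kron(np.eye(3), np.ones((1, 4)))  # 3 factors x 12 items
data = factors @ loadings + 0.5 * rng.standard_normal((500, 12))
n_factors = parallel_analysis(data)
```

For binary indicators like the OOKS items, polychoric correlations (as the authors used) are the appropriate input; the Pearson version here is only a simplified demonstration of the retention rule.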
ESEM combines EFA and CFA/SEM and is less restrictive than CFA because factor cross-loadings are not constrained to zero. This potentially permits estimation of better-fitting and more realistic models whereby items can be associated with multiple factors [27,35]. We directly assessed this by estimating two models based on CFA (one that did and one that did not allow selected residual covariances) along with two corresponding models based on ESEM. We used conventional fit statistics and thresholds to evaluate model fit [36]: root mean square error of approximation (RMSEA) ≤ 0.05; comparative fit (CFI) and Tucker–Lewis (TLI) indices ≥ 0.90; and standardized root mean square residual (SRMR) ≤ 0.08. We assessed the OOKS pre-test for measurement invariance for first responders and bystanders across three increasingly restrictive measurement levels using the best fitting model derived from the CFA-ESEM comparative analyses: configural, metric (or “weak”) invariance, and scalar [37]. Briefly, configural invariance is indicated when factors are composed of the same items for each group, but the item factor loadings and intercepts/thresholds are allowed to vary across groups; with metric invariance, an equality constraint is applied to the factor loadings of each item to hold them equal across groups; and finally, with scalar invariance, an additional equality constraint is applied to the item intercepts/thresholds. For each successive model, we again used a conventional set of fit statistics and thresholds to evaluate model fit.
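The conventional cutoffs (RMSEA ≤ .05, CFI and TLI ≥ .90, SRMR ≤ .08) amount to a simple joint check, sketched below. The two sets of fit values plugged in are taken from the paper's Table 3 (the ESEM model with correlated residuals and the CFA model with uncorrelated residuals).

```python
# Sketch of the conventional fit-cutoff check: a model is judged adequate
# only when all four indices meet their thresholds simultaneously.

def adequate_fit(rmsea, cfi, tli, srmr):
    """True when RMSEA <= .05, CFI >= .90, TLI >= .90, and SRMR <= .08."""
    return rmsea <= 0.05 and cfi >= 0.90 and tli >= 0.90 and srmr <= 0.08

# Values from Table 3: best ESEM model vs. CFA with uncorrelated residuals.
esem_ok = adequate_fit(rmsea=0.032, cfi=0.971, tli=0.959, srmr=0.049)
cfa_uncorr_ok = adequate_fit(rmsea=0.061, cfi=0.861, tli=0.845, srmr=0.087)
```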

Results

Descriptive statistics

Table 1 shows the percentage of correct responses by item as well as by participant group. In general, the percentages of correct answers were similar between the two bystander groups, with only two of the more difficult items (whether naloxone can be used to reverse an overdose for any drug and how long the effects of naloxone last) showing significantly different proportions of correct responses. There were many more significant differences between the combined bystander groups and the first responders, with first responders having a higher proportion of correct responses for 18 of the 22 OOKS items. Across all participants, the three items answered correctly least often were the length of time naloxone lasts (21.4% correct) and whether having blood-shot eyes (32.1% correct) or displaying agitated behavior (38.0% correct) indicates an opioid overdose. On average, first responders answered 16 of the 22 items correctly (mean = 16.1, SD = 3.5) whereas bystanders at site A averaged under 14 correct responses (mean = 13.9, SD = 3.9) as did bystanders at site B (mean = 13.5, SD = 4.2). These overall differences were statistically significant (F(2,1792) = 63.82, p < .001) with post-hoc analyses indicating that first responders scored significantly higher than either bystander group but that the small difference between the bystander groups was non-significant.
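The two bivariate comparisons used above (a Pearson chi-square test of correct/incorrect counts across the three groups, which has 2 degrees of freedom, and a one-way ANOVA on total scores) can be sketched with scipy. The chi-square counts are reconstructed approximately from the Table 1 percentages for item B1 (blood-shot eyes); the per-participant total scores in the ANOVA are purely hypothetical.

```python
# Sketch of the bivariate group comparisons: Pearson chi-square on one item
# (3 groups x correct/incorrect -> 2 df) and a one-way ANOVA on total scores.
from scipy.stats import chi2_contingency, f_oneway

# Rows: first responders, site A bystanders, site B bystanders.
# Columns: correct, incorrect. Counts reconstructed approximately from the
# Table 1 percentages for item B1 (39.9%, 27.4%, 30.4% correct).
b1_counts = [
    [178, 268],
    [113, 299],
    [285, 652],
]
chi2, p, dof, _ = chi2_contingency(b1_counts)

# Hypothetical per-participant total scores for the ANOVA step.
fr_totals = [17, 16, 15, 18, 16]
bys_a_totals = [14, 13, 15, 12, 14]
bys_b_totals = [13, 14, 12, 13, 14]
f_stat, p_anova = f_oneway(fr_totals, bys_a_totals, bys_b_totals)
```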

Exploratory factor analyses

Results of the exploratory analyses for the 2- to 6-factor models ruled out only the 2-factor model as not providing adequate fit (RMSEA = .053 [95% CI = .050 - .056]; CFI = .905; TLI = .883; SRMR = .077). Models with 3 to 6 factors fit the data well per these same statistics, with each showing improvement in model fit over the preceding model as determined by chi-square tests comparing each model with the model having k-1 factors. These comparisons supported the 6-factor model as providing the best fit, as did inspection of the scree plot obtained following parallel analysis. Despite these results, and for several reasons enumerated below, we selected between the 3-factor (RMSEA = .044 [95% CI = .041 - .048]; CFI = .941; TLI = .919; SRMR = .067) and 4-factor models (RMSEA = .044 [95% CI = .034 - .041]; CFI = .962; TLI = .941; SRMR = .057), ultimately selecting the 3-factor model for further analysis. As noted by Finch (2020) in his simulation study, when fit statistics are inconsistent with the underlying factor structure of the simulated models, the statistics tend to favor over-factored (i.e., too many factors) results. Parsimony was also an important consideration: all else being equal, simpler models are better given relatively similar fit statistics. One of the factors in the 4-factor model had only two items with loadings greater than .50, and the 5- and 6-factor models had a similar factor structure whereby only a few items had loadings greater than .50. Current recommendations are that a factor is identified when there are 3 or more items with sufficient loadings [38]. Finally, in subsequent preliminary analyses, the 4-factor model failed to converge at the metric invariance step, suggesting it could be overly complex.
Given that the fit statistics indicated both the 3- and 4-factor models fit the data well, that the item-to-factor structure was more robust in the 3-factor model (which had more than 2 items with loadings > .50 on every factor), and that the pattern of factor loadings made sense substantively, we selected the 3-factor model for measurement invariance testing.

ESEM model

The final ESEM factor structure for the 3-factor model identified in the EFA step is shown in Table 2, which displays factor loadings and significance levels for the selected OOKS items. Based on the significant factor loadings, factor 1 (overdose risks) was determined by correct identification of opioid overdose risk items drawn from section A of the OOKS. Items corresponding to recognition of overdose signs, drawn from section B of the OOKS, were most strongly associated with factor 2 (overdose signs). Factor 3 (rescue/advanced knowledge) was composed of knowledge related to rescuing a person from an opioid overdose as well as some items corresponding to more advanced knowledge (as reflected by more difficult items on the pre-test), such as whether blood-shot eyes or agitated behavior indicate an opioid overdose or how long the effects of naloxone last. Items loading on factor 3 were drawn from multiple sections (B–I) of the original OOKS.
Table 2

ESEM three-factor model structure for selected OOKS pre-test items.

| Selected OOKS Pre-test Items | Factor 1: Overdose Risks | Factor 2: Overdose Signs | Factor 3: Rescue/Advanced Knowledge |
|---|---|---|---|
| Which of the following factors increase the risk of a heroin (opioid) overdose? | | | |
| Switching from smoking to injecting heroin (A2) | 0.609 *** | 0.193 ** | -0.023 |
| Using heroin with other substances, such as alcohol or sleeping pills (A3) | 0.555 *** | 0.191 | -0.007 |
| Increase in heroin purity (A4) | 0.624 *** | 0.118 | 0.099 |
| Using heroin again soon after release from prison (A8) | 0.490 *** | 0.264 *** | 0.074 |
| Using heroin again after a detoxification treatment (A9) | 0.516 *** | 0.146 * | 0.172 *** |
| Which of the following are indicators of an opioid overdose? | | | |
| Having blood-shot eyes (B1) | -0.398 *** | 0.016 | 0.673 *** |
| Slow or shallow breathing (B2) | -0.040 | 0.815 *** | 0.031 |
| Lips, hands or feet turning blue (B3) | 0.017 | 0.767 *** | -0.015 |
| Loss of consciousness (B4) | 0.235 *** | 0.544 *** | 0.034 |
| Deep snoring (B7) | 0.208 *** | 0.510 *** | 0.121 ** |
| Very small pupils (B8) | 0.143 ** | 0.423 *** | -0.036 |
| Agitated behaviour (B9) | -0.288 *** | 0.061 | 0.603 *** |
| Rapid heartbeat (B10) | -0.198 *** | 0.166 ** | 0.593 *** |
| Which of the following should be done when managing a heroin (opioid) overdose? | | | |
| Give stimulants (e.g. cocaine or black coffee) (C5) | -0.211 *** | -0.016 | 0.471 *** |
| Place the person in the recovery position (on their side with mouth clear) (C6) | 0.286 *** | 0.210 ** | 0.129 * |
| Put the person in bed to sleep it off (C11) | -0.213 *** | 0.143 * | 0.391 *** |
| What is naloxone used for? | | | |
| To reverse the effects of any overdose (D4) | 0.053 | 0.030 | 0.654 *** |
| How can naloxone be administered? | | | |
| Into mouth or swallowed orally (E4) | 0.038 | 0.138 * | 0.607 *** |
| How long do the effects of naloxone last? | | | |
| 2 to 6 hours (H3) | 0.095 | -0.104 | 0.691 *** |
| Please indicate which of the following statements are correct | | | |
| If the first dose of naloxone has no effect a second dose can be given (I1) | 0.159 *** | 0.062 | 0.626 *** |
| Someone can overdose again even after having received naloxone (I3) | 0.413 *** | -0.158 ** | 0.526 *** |
| Naloxone can provoke withdrawal symptoms (I6) | 0.320 *** | -0.167 ** | 0.588 *** |

Note. All figures shown are standardized factor loadings for each item based on exploratory structural equation modeling with target rotation for a 3-factor model. Model parameters were estimated using the weighted least squares with means and variance adjusted estimator. The labels in parentheses by each item reference the original OOKS instrument section and item number. Shaded items indicate the highest factor loading for that item.

* = p < .05

** = p < .01

*** = p < .001.


CFA and ESEM model estimation comparison

To assess whether allowing estimation of factor cross-loadings in ESEM provided better model fit than constraining the cross-loadings to zero as in CFA, we estimated both CFA and ESEM models with and without allowing 3 selected correlated residual terms that modification indices suggested would improve model fit: OOKS items B10 (rapid heartbeat) and B9 (agitated behavior) as overdose indicators; items C11 (put person in bed to “sleep it off”) and C5 (place person in recovery position); and items I6 (provoke withdrawal symptoms) and I3 (someone can overdose again after receiving naloxone). The results of these analyses are shown in Table 3 and provide support for ESEM estimation as resulting in a better fitting model (RMSEA = .032 [95% CI = .028 - .035]; CFI = .971; TLI = .959; SRMR = .049) compared with the best fitting CFA model (RMSEA = .043 [95% CI = .040 - .046]; CFI = .934; TLI = .924; SRMR = .072).
Table 3

Exploratory models, parameter constraints, and fit statistics for increasingly restrictive measurement invariance models.

| Model | Factor Loadings | Item Thresholds | Residual Covariances | Chi-square (df) | TLI | CFI | RMSEA (95% CI) | SRMR |
|---|---|---|---|---|---|---|---|---|
| Exploratory models (no grouping) | | | | | | | | |
| CFA (uncorrelated residuals) | Constrained | NA | None | 1598.967 (206) | 0.845 | 0.861 | .061 (.059 - .064) | 0.087 |
| CFA (correlated residuals) | Constrained | NA | Included | 822.203 (203) | 0.924 | 0.934 | .043 (.040 - .046) | 0.072 |
| ESEM (uncorrelated residuals) | Targeted | NA | None | 762.781 (168) | 0.919 | 0.941 | .044 (.041 - .048) | 0.062 |
| ESEM (correlated residuals) | Targeted | NA | Included | 459.627 (165) | 0.959 | 0.971 | .032 (.028 - .035) | 0.049 |
| Multi-group measurement invariance models | | | | | | | | |
| ESEM configural invariance (correlated residuals) | Free | Free | Included | 602.752 (330) | 0.952 | 0.966 | .030 (.026 - .034) | 0.061 |
| ESEM metric invariance (correlated residuals) | Fixed | Free | Included | 593.864 (387) | 0.969 | 0.974 | .024 (.020 - .028) | 0.072 |
| ESEM scalar invariance (correlated residuals) | Fixed | Fixed | Included | 654.755 (406) | 0.950 | 0.969 | .026 (.022 - .030) | 0.073 |

Note. All measurement invariance modeling was conducted using exploratory structural equation modeling (ESEM) with targeted rotation and weighted least squares mean and variance adjusted robust estimation using Mplus (Muthén & Muthén, 2021). 'Free' means the model parameter was allowed to vary across groups, whereas 'fixed' means the parameter was constrained to be equal across groups. Models where residual covariances were included allowed estimation of the residual covariances between three sets of OOKS items as identified in the text. Otherwise, residual covariances were fixed to zero. In the CFA models, cross-factor loadings were constrained to be zero whereas in the ESEM exploratory and measurement invariance models, cross-factor loadings were estimated to be as close to zero as possible.

CFI = Comparative Fit Index; TLI = Tucker-Lewis Index; RMSEA = Root Mean Square Error of Approximation; and SRMR = Standardized Root Mean Square Residual.

Our final analyses tested whether the factor structure for the best fitting ESEM model (shown in Table 2) fit the data equally well for bystanders and first responders or whether a different factor structure would better fit the data for each group. The results of these measurement invariance analyses are also shown in Table 3. We began by estimating the configural model, which has the same overall factor structure but allows the item factor loadings and thresholds to vary across groups. This model fit the data reasonably well (RMSEA = .041 [95% CI = .038 - .045]; CFI = .935; TLI = .911; SRMR = .071). The metric model showed improved fit (RMSEA = .024 [95% CI = .020 - .028]; CFI = .974; TLI = .969; SRMR = .072) relative to the configural invariance model. The scalar invariance model fit statistics showed only slightly poorer but still very good model fit compared with the metric invariance model.
Moreover, per the thresholds recommended by Finch for comparing nested models (change in CFI ≥ .01, change in SRMR > .03, change in RMSEA ≥ .015), where differences exceeding a threshold indicate meaningfully poorer fit, the differences between the scalar and metric invariance models were well below the recommended cutoffs. We therefore concluded that the OOKS items met the criteria for establishing scalar invariance across first responders and bystanders.
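The change-in-fit decision rule described above can be made concrete with a short sketch. This is illustrative Python, not the authors' Mplus code: it applies the Finch-attributed thresholds to the fit statistics reported in the text for the configural and metric models.

```python
# Minimal sketch (assumption: illustrative only, not the study's Mplus syntax)
# of the change-in-fit criteria for nested invariance models:
# a drop in CFI >= .01, a rise in SRMR > .03, or a rise in RMSEA >= .015
# in the more constrained model indicates meaningfully poorer fit.

def invariance_holds(less_constrained, more_constrained,
                     d_cfi=.01, d_srmr=.03, d_rmsea=.015):
    """True if the more constrained model does not fit meaningfully worse."""
    cfi_drop = less_constrained["CFI"] - more_constrained["CFI"]
    srmr_rise = more_constrained["SRMR"] - less_constrained["SRMR"]
    rmsea_rise = more_constrained["RMSEA"] - less_constrained["RMSEA"]
    return cfi_drop < d_cfi and srmr_rise <= d_srmr and rmsea_rise < d_rmsea

# Fit statistics reported in the text.
configural = {"CFI": .935, "RMSEA": .041, "SRMR": .071}
metric     = {"CFI": .974, "RMSEA": .024, "SRMR": .072}

print(invariance_holds(configural, metric))  # metric fit is not poorer
```

The same comparison applied to the metric versus scalar models underlies the scalar-invariance conclusion drawn in the text.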

Discussion

We believe the findings from this study have implications for assessing participants attending OEND trainings, as well as for anticipating areas of training that might require more emphasis, especially among bystander trainees. From a measurement standpoint, we found that a 3-factor model adequately represented the variances and covariances among the selected OOKS items. Although our goal was not to develop a shortened version of the OOKS given the existence of the 12-item BOOK, the study that developed the BOOK also found a corresponding, though not entirely overlapping, 3-factor structure: opioid knowledge, opioid overdose knowledge, and opioid overdose response knowledge [24]. The original OOKS was organized into four sections based on substantive considerations that were not derived through statistical analyses [19]: (1) knowledge of risks for an overdose; (2) signs of an overdose; (3) actions to take in response to an overdose; and (4) the correct use of naloxone. Our 3-factor structure more closely approximates the BOOK factor structure, in which, essentially, actions to take in response to an opioid-related overdose and the correct use of naloxone form a single factor rather than two separate factors. The similarity of the factor-analytic results across our study and the BOOK development study, along with the general consistency of both studies with the substantively driven design of the original OOKS, suggests the 3-factor structure we identified is robust and captures well the knowledge areas that should be covered during OEND trainings.

Our analyses then focused on determining whether the selected set of OOKS items met criteria for measurement invariance among first responders and bystanders. The results supported use of this subset of OOKS items for assessing pre-training knowledge among both groups of participants.
The results also supported using the items for comparative purposes, given that the 3-factor model evidenced scalar invariance. This indicates that comparisons of mean scores between these two groups of OEND participants are valid. Further work is needed to establish whether these findings extend to the BOOK short form and the full OOKS parent instrument, but our findings suggest the original OOKS and/or an item subset such as the one used in this study have broad applicability for OEND training assessment.

From a practical standpoint, the results indicated that, prior to training, first responders are likely to have greater knowledge than bystanders of the risks for an opioid-related overdose, the signs of an overdose, and what to do to reverse an overdose. That this finding held across different groups of bystanders trained in two different counties, with considerable heterogeneity within each bystander group, gives us some, though not complete, confidence in its generalizability. Whereas trainings for both groups should address all knowledge relevant to recognizing and reducing opioid-related overdoses and fatalities, trainings for first responders might focus more on details such as exactly how long naloxone can be expected to last and which symptoms do not specifically indicate an opioid-related overdose (e.g., bloodshot eyes, agitated behavior, and a rapid heartbeat); these were the issues generating the highest proportion of incorrect responses among first responders. Conversely, bystanders tended to begin trainings lacking knowledge in these same areas but also lacked more general information, such as whether naloxone can be used to reverse overdoses involving any drug. If possible, longer OEND trainings might be needed to fully address the greater background knowledge deficit among bystanders relative to first responders.

Limitations

Our decision to seek a model with the smallest number of factors to represent the OOKS data means our model might be "under-factored" and that the data might be better represented by a model with more factors. However, when we assessed models with more factors, they did not provide substantially better fit, they had estimation problems during the measurement invariance testing, and the additional factors each had only a few items with loadings greater than .50. In short, models with more factors improved neither fit nor substantive interpretation. In addition, although the collaborating sites carefully deliberated over which items to remove to avoid redundancy, it is possible that better-performing items were removed, or that the original OOKS item set would have demonstrated better or worse measurement invariance. For this reason, we recommend replicating this study with the full OOKS item set.

Generalizability of the study findings remains a potential issue. Although we had a large and diverse sample, participants were drawn from two counties in a single midwestern state in the United States and might not be representative of OEND training participants in other parts of the country, in other countries, or with other backgrounds. Racial and ethnic minorities were also under-represented in our samples. Moreover, the bystander group was heterogeneous, comprising people who use drugs, family members, professionals working in treatment settings, and so on. Because the data were collected to fully preserve participant anonymity, we could not subdivide this sample to compare these subpopulations on OEND knowledge and knowledge gaps. Consequently, we were unable to determine whether gaps in knowledge among bystanders were differentially attributable to people who use drugs, their family members, or professionals working in treatment settings.
Future research should examine whether this shortened version of the OOKS works equally well for these different subgroups of bystanders. Last, we do not know how applicable the findings are to post-training measurement. An important goal of administering pre- and post-tests at trainings is to assess training effectiveness through knowledge gain. It would therefore be an important next step to assess measurement invariance across measurement occasions rather than across types of participants; this would help determine whether comparing mean pre- and post-training factor scores is statistically valid [39]. Finally, given the high number of items with significant cross-loadings in the final ESEM model, a bi-factor model, with a single factor representing general knowledge across items and the 3-factor representation of specific knowledge identified in this study, is worth further exploration [40].
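The factor-retention heuristic invoked in the under-factoring discussion above, that a factor counts as identified only when three or more items load on it at .50 or above, can be sketched briefly. The loading matrix here is hypothetical, chosen only to illustrate the rule; it is not the study's reported solution.

```python
# Illustrative sketch (assumption: hypothetical loadings, not the study's
# results) of the retention rule cited in the text: keep only factors on
# which at least three items have loadings of .50 or greater.

def well_identified_factors(loadings, min_items=3, cutoff=.50):
    """loadings: list of per-item rows, one loading per factor.
    Returns indices of factors with >= min_items strong loadings."""
    n_factors = len(loadings[0])
    counts = [sum(abs(row[f]) >= cutoff for row in loadings)
              for f in range(n_factors)]
    return [f for f, c in enumerate(counts) if c >= min_items]

# Hypothetical 4-factor solution: the third and fourth factors have fewer
# than three strong items, so they would not count as identified.
L = [
    [.62, .10, .05, .08],
    [.55, .12, .02, .01],
    [.71, .04, .09, .11],
    [.08, .58, .10, .03],
    [.02, .66, .07, .05],
    [.11, .53, .01, .02],
    [.04, .09, .61, .07],
    [.03, .02, .02, .57],
    [.01, .05, .03, .64],
]
print(well_identified_factors(L))  # → [0, 1]
```

Under this rule, a 4-factor solution like the hypothetical one above would be rejected in favor of a more parsimonious model, mirroring the reasoning that led the authors to retain the 3-factor model.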

Conclusion

This study supports the use of a subset of OOKS items and, by inference, the full OOKS or the shorter BOOK, as adjuncts for assessing pre-training knowledge among broadly constituted groups of OEND participants. We found that the OOKS items demonstrated scalar measurement invariance, supporting scale-score comparisons of opioid overdose-related knowledge between bystanders and first responders as valid. Although not a focus of the study, we also found that bystanders tend to have larger knowledge gaps in key areas related to recognizing and reversing an opioid overdose, particularly with respect to details on naloxone use and duration of effect. These areas could be given greater emphasis in future trainings to address these knowledge gaps.
PONE-D-22-18050
A measurement invariance analysis of selected opioid overdose knowledge scale (OOKS) items among bystanders and first responders
PLOS ONE Dear Dr. Swartz, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Oct 29 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Michelle Melgarejo da Rosa Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. In your revised cover letter, please address the following prompts: a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. We will update your Data Availability statement on your behalf to reflect the information you provide. 3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. Additional Editor Comments: General points must be addressed: 1) In table 1, the visualization of the original data must be available to see all the people's choices. The original questionnaire must be included. 
2)The statistics analysis methods must be more detailed, as much as, the comparison criteria among groups. 3) The innovative aspect of the article is not clear. Also, it is not clear the hypothesis of the authors. 4) On page 18 the authors state: "Despite these results, and for several reasons.... What are the several reasons mentioned? 5) There is a lack of literature discussion. The authors do not compare original data with general knowledge. The factors included and the importance of each must be contextualized. 6) The results are not overall representative to make a general statement of it 7) The bystander group of analysis should be divided into drug users, family members, and professionals. The way the authors represent does not point to the right source of misinformation. It is not clear where/who the questionnaire requires modification. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: I Don't Know Reviewer #2: I Don't Know ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). 
The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: No ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Page 18, first sentence "From a practical standpoint, the results indicated that first responders are likely to have greater knowledge of the risks for an opioid-related overdose, signs of an overdose, and what to do to reverse an overdose prior to training compared with first responders" I do apologize but it is unclear for me, first responders compared with first responders??? Reviewer #2: General points must be addressed: 1) In table 1, the visualization of the original data must be available to see all the people's choices. The original questionnaire must be included. 2)The statistics analysis methods must be more detailed, as much as, the comparison criteria among groups. 3) The innovative aspect of the article is not clear. Also, it is not clear the hypothesis of the authors. 
4) On page 18 the authors state: "Despite these results, and for several reasons.... What are the several reasons mentioned? 5) There is a lack of literature discussion. The authors do not compare original data with general knowledge. The factors included and the importance of each must be contextualized. 6) The results are not overall representative to make a general statement of it. 7) The bystander group of analysis should be divided into drug users, family members, and professionals. The way the authors represent does not point to the right source of misinformation. It is not clear where/who the questionnaire requires modification. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. 
Please note that Supporting Information files do not need this step. 26 Sep 2022 I have provided all of my responses in the cover letter. But to be certain, here are those responses again: Dr. Michelle Melgarejo da Rosa Academic Editor PLOS ONE Re: Manuscript revision and resubmission Dear Dr. Melgarejo da Rosa: Thank you for your consideration of and feedback on the submitted manuscript titled: A Measurement invariance analysis of selected Opioid Overdose Knowledge Scale (OOKS) items among bystanders and first responders. On behalf of my co-authors and myself, we wish to express our appreciation for your and the reviewers’ suggestions for strengthening the manuscript. In this rebuttal letter, we provide detail on the changes made or, in a very few instances, an explanation for why we chose not to make a change based on the critique provided in your notification email. For reference, we have included the provided critiques and then, below each, indicate our response. We have also reviewed the journal’s formatting requirements as provided in the response letter and have tried to ensure that we follow them correctly. Last, in reviewing the manuscript, we caught a few typos and have corrected these as well as updated the background section (in one sentence) to indicate that opioid-related fatalities continue to be an issue despite the waning COVID epidemic in the US. General Comments/Issues: Comment: If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). 
Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent… If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files Response: I checked with the state funding organization who checked with their legal department. They provided me with approval to upload the de-identified data set. There are no ethical or legal constraints with sharing the data and so I will upload the data set (in Excel format) as a supporting information file. Comment: Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. Response: This study received a determination of exemption from human subjects research since the investigators had no interaction with participants for the purpose of collecting data and only secondary analysis of fully de-identified data collected by program staff was done. We already indicated in the full name of the IRB in the methods section of the original manuscript. We have added that written and verbal consent were waived given the exempt status of the study. Reviewer #1: Comment: Page 18, first sentence "From a practical standpoint, the results indicated that first responders are likely to have greater knowledge of the risks for an opioid-related overdose, signs of an overdose, and what to do to reverse an overdose prior to training compared with first responders" I do apologize but it is unclear for me, first responders compared with first responders??? Response: Thank you for catching this typo. 
We have amended the sentence to be: “From a practical standpoint, the results indicated that first responders are likely to have greater knowledge of the risks for an opioid-related overdose, signs of an overdose, and what to do to reverse an overdose prior to training compared with bystanders.” Reviewer #2: Comment: 1) In table 1, the visualization of the original data must be available to see all the people's choices. The original questionnaire must be included. Response: On page 8 in the original manuscript, we explain that there were 24 items administered with 2 items removed because nearly 100% of respondents answered them correctly (causing convergence problems since there is almost no variation in the responses). Table 1 showed the 22 items that remained and which were subject to the invariance analyses. We have put back in the 2 items removed for the analyses to reflect the original questionnaire construction. We have identified the removed items and added a sentence to the table note to clarify they were not included in the invariance analysis. But they are now shown in the Table 1 as requested. Comment: 2) The statistics analysis methods must be more detailed, as much as, the comparison criteria among groups. Response: Given the level of detail of the ESEM and CFA methods used, we believe this comment pertains to the very brief section that provides sample descriptive statistics in the Study setting and sample section. We agree it would be best if we had more and more detailed sample statistics so that we could, for example, compare the two groups on demographics and other background characteristics. However, as we indicate in the manuscript, the fully anonymous nature of the data collected at the study sites means the data provided did not include demographic or any other information at the individual level. We only know if a training group consisted of first responders or bystanders. 
The study sites did provide aggregate data on their trainings separately from the OOKS pre-test data. We provided all the aggregate information from the sites, but because of this aggregation, we could not conduct more detailed descriptive statistical comparisons. We also could not subdivide and compare the demographics by bystander subgroup as this information was also not available – a concern we address further below. Comment: 3) The innovative aspect of the article is not clear. Also, it is not clear the hypothesis of the authors. Response: We believe the manuscript adds to the literature in two important ways but perhaps we were not as clear about these as we could have been. We have added the following text (on page 5 of our revised manuscript and at the end of the introduction section): To the best of our knowledge, there has not been a measurement invariance analysis of the OOKS to test whether it measures background knowledge for these two groups for whom naloxone education and training are commonly conducted. To address this gap, we conducted a multigroup analysis of OOKS item equivalence by assessing levels of measurement invariance (i.e., configural, metric, and scalar) for first responders and bystanders. We also believe that the application of ESEM, a relatively new and not widely used technique for analyzing the factor structure of measure items is a further innovative contribution. Immediately after the added text, the original manuscript continues: We also wanted to re-examine the OOKS item factor structure using a relatively new analytic tool, exploratory structural equation modeling (ESEM), which combines elements of exploratory factor analysis and confirmatory factor analysis. We describe the potential advantages of ESEM in the analyses section below. 
We did not have a specific hypothesis but instead were guided by a research question: Does the OOKS perform equally well assessing knowledge of signs of an overdose and naloxone administration by first responders and bystanders? To clarify this point, we added this text immediately following the sentences supporting the study’s innovativeness: If the OOKS demonstrates measurement invariance then one version would be suitable for use with both groups whereas lack of measurement invariance suggests either different versions of the OOKS containing different item subsets or different scoring thresholds for bystanders and first responders would be more appropriate. We hope these additions provides adequate support for the study’s innovativeness and unique contribution made as well as the intent of the study to assess the OOKS measurement invariance and the implications of having or not being invariant. Comment: 4) On page 18 the authors state: "Despite these results, and for several reasons.... What are the several reasons mentioned? Response: The several reasons are enumerated after that statement. Below is the paragraph in which this statement occurs, provided for context. We added the clause, as enumerated below, to clarify that the reasons for selecting the 3-factor model follow. We also added a reference to support the assertion that a factor was identified only when 3 or more items has loadings greater than .50 to further support the model selected. …Despite these results, and for several reasons enumerated below, we selected between the 3- (RMSEA=.044 [95% CI=.041 - .048]; CFI=.941; TLI=.919; SRMR=.067) and 4-factor models (RMSEA=.044 [95% CI=.034 - .041]; CFI=.962; TLI=.941; SRMR=.057), ultimately selecting the 3-factor model for further analysis. 
As noted by Finch (2020) in his simulation study, when the fit statistics are inconsistent with the underlying factor structure of the simulated models, the statistics tend to favor over-factored (i.e., too many factors) results. Parsimony was also an important consideration. All else being equal, simpler models are better given relatively similar fit statistics. One of the factors in the 4-factor model had only two items with loadings greater than .50, with the 5 and 6-factor models having a similar factor structure whereby only several items had loadings greater than .50. Current recommendations are that a factor is identified when there are 3 or more items with sufficient loadings.(38) Finally, in subsequent preliminary analyses, the 4-factor model failed to converge at the metric invariance step suggesting it could be overly complex. Given that both the 3- and 4-factor model fit statistics indicated both fit the data well and the item to factor structure appeared to be more robust in the 3-factor model, which had more than 2 items with loadings > .50 on every factor, and the pattern of factor loadings for the items made sense substantively, we selected the 3-factor model for measurement invariance testing. Comment: 5) There is a lack of literature discussion. The authors do not compare original data with general knowledge. The factors included and the importance of each must be contextualized. Response: Thank you for pointing out this oversight. We did not discuss the factor structure we obtained and how it did or did not comport with the original instrument’s structure as intended or of the structure found in another study to develop a much shorter version of the OOKS that is called the BOOK. 
We have added this additional language in the discussion to better put our factor structure findings into context: …The original OOKS was organized into four sections based on substantive considerations that were not derived through statistical analyses.(18) The four sections include: (1) questions on knowledge of risks for an overdose; (2) signs of an overdose; (3) actions to take in response to an overdose; and (4) the correct use of naloxone. Our 3-factor structure more closely approximates the BOOK factor structure whereby, essentially, actions to take in response to an opioid-related overdose and the correct use of naloxone form a single rather than two separate factors. We believe the similarity of the factor analytic results across our study and the BOOK development study as well as the general consistency of both factor-analytic studies with the substantively driven design of the original OOKS suggests the 3-factor structure we identified is robust and well captures the knowledge areas that should be covered during OEND trainings. Comment: 6) The results are not overall representative to make a general statement of it. Response: We believe we were careful to point out that our sample was not generalizable if that is what is meant by overall representative. For example, on page 19 we state the following: Generalizability of study findings remains a potential issue. Although we had a large and diverse sample, participants were drawn from two counties in a single midwestern state in the United States and might not be representative of OEND training participants in other parts of the country or in other countries or with other backgrounds. We do stand by the finding of measurement invariance for bystanders and first responders meaning the selected items worked equally well as a test of knowledge for both groups. 
And we tried not to make claims that this finding would hold universally for different kinds of OEND training participants despite having a large and (moderately) diverse sample collected from two sites and across many training sessions. We believe we do present the findings as being qualified with respect to generalizability and state we do not have complete confidence in the generalizability of the findings: From a practical standpoint, the results indicated that first responders are likely to have greater knowledge of the risks for an opioid-related overdose, signs of an overdose, and what to do to reverse an overdose prior to training compared with bystanders. That this finding held across different groups of bystanders trained in two different counties, with considerable heterogeneity within each bystander group, gives us some though not complete confidence in the generalizability of the findings. If there is a specific statement where we have made overly broad claims for the findings, however, we would be happy to review and amend as needed. Comment: The bystander group of analysis should be divided into drug users, family members, and professionals. The way the authors represent does not point to the right source of misinformation. It is not clear where/who the questionnaire requires modification. Response: We are not certain as to where we indicate the questionnaire requires modification. Our analyses indicated that the form has full measurement invariance for bystanders (as a group) and first responders. This means as a pre-test for our sample, the selected OOKS items identified gaps in knowledge as well as strengths in knowledge about equally well for both groups and that mean comparisons of knowledge gain post-training should be valid for either group. We did identify that, collectively, bystanders were less knowledgeable than first responders and indicated the specific items where knowledge was most lacking. 
For example, on page 20 of the original manuscript, we state the following in the conclusions: Although not a focus of the study, we also found that bystanders tend to have larger knowledge gaps in key areas related to recognizing and reversing an opioid overdose, particularly with respect to details on naloxone use and duration of effect. These areas could be more emphasized in future trainings to address these knowledge gaps.

We understand the concern regarding there being no further analysis of subgroups to identify the source of misinformation, if what is meant by misinformation is the lower background knowledge of the bystanders. We do indicate in the limitations that we were unable to disentangle the results for these bystander subsamples, but have added these statements in the limitations section (page 19 of the track-changed manuscript) to provide what we hope is further clarity as to why further analysis was not possible and the implications of that: Given the data were collected to fully preserve participant anonymity, we could not subdivide this sample to compare these different subpopulations for OEND knowledge and knowledge gaps. Consequently, we were unable to determine if gaps in knowledge among bystanders were differentially attributable to drug users, family members (of drug users), or professionals working in treatment settings. Future research should examine whether this shortened version of the OOKS works equally well for these potentially different subgroups of bystanders.

Thank you again for providing the thoughtful reviews and for the opportunity to revise and resubmit. We think the manuscript has been strengthened because of the revisions made in response to the reviews. We hope these revisions adequately address all the reviewers' concerns and/or that our responses explain why some revisions are not feasible, but we are prepared to make further changes if requested. We look forward to hearing a decision on the revised manuscript.

Sincerely,
James A. Swartz, Ph.D.
Professor
University of Illinois Chicago
Jane Addams College of Social Work

Submitted filename: Response to Reviewers.pdf

27 Sep 2022
A measurement invariance analysis of selected opioid overdose knowledge scale (OOKS) items among bystanders and first responders
PONE-D-22-18050R1

Dear Dr. Swartz,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Michelle Melgarejo da Rosa
Academic Editor
PLOS ONE

5 Oct 2022
PONE-D-22-18050R1
A measurement invariance analysis of selected Opioid Overdose Knowledge Scale (OOKS) items among bystanders and first responders

Dear Dr. Swartz:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Michelle Melgarejo da Rosa
Academic Editor
PLOS ONE
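As an aside on the group comparison discussed in the correspondence above, the authors' item-level finding (bystanders answering fewer items correctly than first responders) amounts to comparing per-item proportions correct across the two groups. The sketch below illustrates that tabulation with simulated dichotomous (correct/incorrect) responses; the item names, group probabilities, and gaps are invented for illustration and are not study data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical items and correct-response probabilities (invented, not from the study);
# first responders are given higher probabilities than bystanders.
items = ["overdose_risk", "overdose_signs", "naloxone_duration"]
p_first_responder = np.array([0.90, 0.85, 0.60])
p_bystander = np.array([0.75, 0.70, 0.35])

# Simulate dichotomous pre-test responses: 1 = correct, 0 = incorrect.
# Sample sizes mirror the study (446 first responders, 1,349 bystanders).
first_responders = rng.random((446, 3)) < p_first_responder
bystanders = rng.random((1349, 3)) < p_bystander

# Per-item proportion correct by group, and the knowledge gap per item.
prop_fr = first_responders.mean(axis=0)
prop_by = bystanders.mean(axis=0)
for item, fr, by in zip(items, prop_fr, prop_by):
    print(f"{item}: first responders {fr:.2f}, bystanders {by:.2f}, gap {fr - by:+.2f}")
```

Formal measurement invariance testing (configural, metric, scalar) as reported in the paper requires a structural equation modeling package; this sketch only shows the descriptive item-level comparison that motivates flagging specific knowledge gaps.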
References (29 in total)

1.  Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support.

Authors:  Paul A Harris; Robert Taylor; Robert Thielke; Jonathon Payne; Nathaniel Gonzalez; Jose G Conde
Journal:  J Biomed Inform       Date:  2008-09-30       Impact factor: 6.317

Review 2.  The role of take-home naloxone in the epidemic of opioid overdose involving illicitly manufactured fentanyl and its analogs.

Authors:  Hong K Kim; Nicholas J Connors; Maryann E Mazer-Amirshahi
Journal:  Expert Opin Drug Saf       Date:  2019-05-16       Impact factor: 4.250

3.  Measurement Invariance Conventions and Reporting: The State of the Art and Future Directions for Psychological Research.

Authors:  Diane L Putnick; Marc H Bornstein
Journal:  Dev Rev       Date:  2016-06-29

Review 4.  A systematic review of community opioid overdose prevention and naloxone distribution programs.

Authors:  Angela K Clark; Christine M Wilder; Erin L Winstanley
Journal:  J Addict Med       Date:  2014 May-Jun       Impact factor: 3.702

5.  Naloxone laws facilitate the establishment of overdose education and naloxone distribution programs in the United States.

Authors:  Barrot H Lambdin; Corey S Davis; Eliza Wheeler; Stephen Tueller; Alex H Kral
Journal:  Drug Alcohol Depend       Date:  2018-05-15       Impact factor: 4.492

6.  Implementation of an Opioid Overdose and Naloxone Distribution Training in a Pharmacist Laboratory Course.

Authors:  Min Kwon; Ashley E Moody; Jonathan Thigpen; Andrea Gauld
Journal:  Am J Pharm Educ       Date:  2020-02       Impact factor: 2.047

7.  A pilot study to compare virtual reality to hybrid simulation for opioid-related overdose and naloxone training.

Authors:  Nicholas A Giordano; Clare E Whitney; Sydney A Axson; Kyle Cassidy; Elvis Rosado; Ann Marie Hoyt-Brennan
Journal:  Nurse Educ Today       Date:  2020-02-10       Impact factor: 3.442

Review 8.  Opioid overdose prevention and naloxone rescue kits: what we know and what we don't know.

Authors:  Todd Kerensky; Alexander Y Walley
Journal:  Addict Sci Clin Pract       Date:  2017-01-07

9.  Addressing co-occurring public health emergencies: The importance of naloxone distribution in the era of COVID-19.

Authors:  Alexandra B Collins; Colleen Daley Ndoye; Diego Arene-Morley; Brandon D L Marshall
Journal:  Int J Drug Policy       Date:  2020-07-21

Review 10.  Are take-home naloxone programmes effective? Systematic review utilizing application of the Bradford Hill criteria.

Authors:  Rebecca McDonald; John Strang
Journal:  Addiction       Date:  2016-03-30       Impact factor: 6.526

