| Literature DB >> 35357652 |
David A Cook1,2, John M Wilkinson3, Jonathan Foo4.
Abstract
INTRODUCTION: We sought to evaluate the reporting and methodological quality of cost evaluations of physician continuing professional development (CPD).
Keywords: Cost effectiveness; Costs and cost analysis; Education, continuing; Education, medical
Year: 2022 PMID: 35357652 PMCID: PMC9240125 DOI: 10.1007/s40037-022-00705-z
Source DB: PubMed Journal: Perspect Med Educ ISSN: 2212-2761
Methodological quality appraised using the Medical Education Research Study Quality Instrument (MERSQI): operational considerations for cost evaluations and prevalence
| Domain: Item | Operational adjustments | Level | Prevalence |
|---|---|---|---|
| Study design | Added option for economic modeling studies (score 1.5) | 1‑group post-only (1) | 6 (10%) |
| | | 1‑group pre-post, or modeling (1.5) | 20 (32%) |
| | | 2‑group non-randomized (2) | 16 (26%) |
| | | 2‑group randomized (3) | 20 (32%) |
| Sampling: No. of institutions studied | No change | 1 (0.5) | 54 (87%) |
| | | 2 (1) | 1 (2%) |
| | | >2 (1.5) | 7 (11%) |
| Sampling: Response rate | For cost data: data derived from large record sets unlikely to reflect bias (e.g., institutional electronic health record or regional claims database) count as high (score 1.5) | <50% or not specified (0.5) | 24 (39%) |
| | | 50–74% (1) | 7 (11%) |
| | | ≥75% or large record set (1.5) | 31 (50%) |
| Type of data (data source) | For cost data: details of resource quantitation (both data source and quantity [number of units, not just total cost]) count as high (score 3); cost alone counts as low (score 1) | Self-reported data, or cost without resource quantitation (1) | 8 (13%) |
| | | Objective measurement, or cost with data source and quantity (3) | 54 (87%) |
| Validation of evaluation instrument: Content | For cost data: | Reported (1) | 8 (13%) |
| Validation of evaluation instrument: Internal structure | For cost data: | Reported (1) | 9 (15%) |
| Validation of evaluation instrument: Relations with other variables | For cost data: | Reported (1) | 1 (2%) |
| Data analysis: Appropriateness | For cost data: the following count as “appropriate” (score 1): cost-effectiveness ratio, net benefit, or other similar analysis of cost data | Inappropriate for study design (0) | 37 (60%) |
| | | Appropriate (1) | 25 (40%) |
| Data analysis: Complexity | For cost data: the following count as “beyond descriptive” (score 2): cost-effectiveness ratio, net benefit, visual display of cost-effectiveness | Descriptive analysis only (1) | 37 (60%) |
| | | Beyond descriptive analysis (2) | 25 (40%) |
| Outcomes | For cost outcomes: as per Foo, we distinguished education costs in a “test setting” from those in a “real setting,” namely: | Knowledge, skills, or education costs in a “test” or hypothetical training setting, or estimated from literature (1.5) | 1 (2%) |
| | | Behaviors in practice or education costs in a “real” training setting (2) | 25 (40%) |
| | | Patient effects, including health care costs (3) | 36 (58%) |
For each item in a given study, the design feature (study design, outcome, evaluation instrument, etc.) that supported the highest level of coding was selected. For example, for a study reporting both cost and effectiveness (non-cost) outcomes, the outcome corresponding to the highest-scoring level was selected for coding; as a result, in some cases the design features of the cost evaluation (i.e., the features coded in this review) are lower than those reported in this table.
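The two analyses the table counts as “appropriate” for cost data (the cost-effectiveness ratio and net benefit) follow standard health-economics definitions. A minimal illustrative sketch, using made-up course costs and effect counts that do not come from the reviewed studies:

```python
# Illustrative (hypothetical) implementations of the two cost analyses named
# in the table: the incremental cost-effectiveness ratio (ICER) and net
# monetary benefit. All numbers below are invented for the example.

def icer(cost_new: float, cost_old: float,
         effect_new: float, effect_old: float) -> float:
    """Extra cost per extra unit of effect for the new program vs. the old."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ValueError("Effects are equal; the ICER is undefined")
    return delta_cost / delta_effect


def net_monetary_benefit(cost: float, effect: float, wtp: float) -> float:
    """Net benefit at a willingness-to-pay threshold `wtp` per unit of effect."""
    return wtp * effect - cost


# Hypothetical example: a new CPD course costs $1500 and 80 learners reach
# competence; the old course costs $1000 with 60 competent learners.
print(icer(1500, 1000, 80, 60))                # 25.0 dollars per extra competent learner
print(net_monetary_benefit(1500, 80, wtp=30))  # 900
```

The threshold `wtp` (how much a decision-maker will pay per additional competent learner) is an assumption of the decision context, not a property of the data; a positive net benefit at a given threshold means the program is worth its cost at that threshold.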
Fig. 1 Reporting quality as per CHEERS guideline criteria. N = 62 except as indicated. Numbers in [brackets] indicate item number in the CHEERS checklist [10]. Details on abstract reporting are provided in Fig. 2. Operational considerations used in coding are provided in Tab. S1 in the ESM
Fig. 2 Reporting quality of the abstract. N = 56 studies with an abstract, except as indicated