Robert Fahed, Tim E Darsaut, Behzad Farzin, Miguel Chagnon, Jean Raymond.
Abstract
BACKGROUND: Clinical uncertainty and equipoise are vague notions that play important roles in contemporary problems of medical care and research, including the design and conduct of pragmatic trials. Our goal was to show how the reliability study methods normally used to assess diagnostic tests can be applied to particular management decisions to measure the degree of uncertainty and equipoise regarding the use of rival management options.
Keywords: Agreement; Clinical decision-making; Equipoise; Kappa; Methodology; Randomized trials; Reliability; Uncertainty
Year: 2020 PMID: 32842953 PMCID: PMC7448326 DOI: 10.1186/s12874-020-01095-8
Source DB: PubMed Journal: BMC Med Res Methodol ISSN: 1471-2288 Impact factor: 4.615
Fig. 1 Reliability studies. Reliability studies of diagnostic tests assess the agreement among X clinicians on the diagnosis Y for each of the Z patients included in the study. The reliability studies of treatment decisions we propose use a similar methodology to study agreement on management options. After asking X clinicians to choose one of the Y management options proposed for each of the Z patients, we can measure the agreement/uncertainty.
Fig. 2 The portfolio. Example from the electronic portfolio used for the thrombectomy agreement study. Each page displayed a clinical vignette with basic clinical information (age, gender, NIHSS score, etc.) and a few selected brain imaging slices. For each patient, raters were asked whether they would perform mechanical thrombectomy (yes/no). Other questions were also asked for further analyses of other parameters (agreement on intravenous thrombolysis, etc.)
Fig. 3 Thrombectomy decisions. Legend: Panel a shows the proportion (%) of decisions to perform thrombectomy for all raters and within each specialty. Black dots represent the individual results of each of the 86 clinicians. The bar graphs show similar proportions of decisions between neurologists and interventional neuroradiologists (INRs), but they hide individual discrepancies among physicians, shown here by black dots, revealing a wide range of decisions. Panel b shows, for each patient, the proportion (%) of thrombectomy decisions. This panel better illustrates the spectrum of results across patients: some cases had almost unanimous decisions for thrombectomy (complete/almost complete blue bar at the top) or against it (complete/almost complete red bar at the bottom), but a substantial proportion of cases (in the middle) reveal wide disagreements. Neither of these panels can give an overall idea of the degree of agreement in the study. Panel c shows the levels of agreement (kappa values) in a bar graph. It shows that thrombectomy decisions lack reliability (i.e., the kappa value is below 0.6) for all raters and also within each subspecialty (vascular neurologists and interventional neuroradiologists)
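The agreement measurement described above (X clinicians each choosing among Y options for Z patients) is typically summarized with a multi-rater kappa. A minimal sketch of Fleiss' kappa follows; this is an illustration of the general technique with invented toy data, not the authors' actual analysis code or rating data.

```python
# Fleiss' kappa for agreement among multiple raters -- a minimal sketch.
# Input: one row per patient, one column per management option,
# cell value = number of raters who chose that option.

def fleiss_kappa(counts):
    """counts[i][j] = number of raters picking option j for patient i."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])           # assumes same rater count per patient
    total = n_subjects * n_raters
    n_categories = len(counts[0])

    # Marginal proportion of each option across all decisions
    p_j = [sum(row[j] for row in counts) / total for j in range(n_categories)]

    # Per-patient agreement: fraction of concordant rater pairs
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]

    P_bar = sum(P_i) / n_subjects       # observed agreement
    P_e = sum(p * p for p in p_j)       # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 5 patients, 4 raters, yes/no thrombectomy decision
decisions = [[4, 0], [3, 1], [2, 2], [0, 4], [1, 3]]
print(round(fleiss_kappa(decisions), 3))  # 0.333
```

With unanimous rows only (e.g. `[4, 0]` and `[0, 4]`), kappa approaches 1; rows split 2/2 pull it toward chance-level agreement.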
Reporting patient management agreement studies (inspired by GRRAS)
| 1. Identify in the title and/or abstract the clinical dilemma for which uncertainty and intra- and inter-physician agreement were investigated. | |
| 2. Name and describe the subject of interest explicitly: what disease(s), what available management options, and what clinical dilemmas are being considered. | |
| 3. Specify the patients who are confronted with uncertainty. | |
| 4. Specify the clinicians involved in making clinical decisions or recommendations. | |
| 5. Describe what is already known about reliability/agreement and provide a rationale for the study. | |
| 6. Explain how the number of patients and clinicians was chosen. | |
| 7. Describe how patients and clinicians were selected. | |
| 8. Describe the experimental setting (e.g. time interval between sessions, availability of clinical information, blinding…). | |
| 9. State whether judgments were made independently. | |
| 10. Describe the statistical analyses. | |
| 11. State the actual number of raters and subjects that were included, and the number of replicated judgments that were collected. | |
| 12. Describe the characteristics of clinicians (training, experience) and patients (any clinical data judged relevant to the study question). | |
| 13. Report estimates of reliability and agreement, including measures of statistical uncertainty. | |
| 14. Discuss the practical relevance of results. | |
| 15. Provide detailed results if possible (e.g. online). | |
Index of uncertainty and potential for trial recruitment
*According to Landis and Koch [35].
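The Landis and Koch scale referenced above is the conventional mapping from kappa values to qualitative agreement levels. A small helper illustrating that mapping follows; the function name and threshold style are this sketch's own choices, but the cut-points are the standard ones from Landis and Koch (1977).

```python
# Landis & Koch (1977) interpretation scale for kappa values.
# Standard cut-points: <0 poor, 0-0.20 slight, 0.21-0.40 fair,
# 0.41-0.60 moderate, 0.61-0.80 substantial, 0.81-1.00 almost perfect.

def landis_koch(kappa):
    """Return the qualitative agreement level for a kappa value."""
    if kappa < 0.00:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

print(landis_koch(0.33))  # fair
```

A kappa below 0.6, as reported for the thrombectomy decisions in Fig. 3, therefore falls at best in the "moderate" band on this scale.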