| Literature DB >> 30049693 |
Liv Marit Valen Schougaard1, Annette de Thurah2,3, David Høyrup Christiansen4, Per Sidenius5, Niels Henrik Hjollund1,6.
Abstract
OBJECTIVES: Patient-reported outcome (PRO) measures have been used in epilepsy outpatient clinics in Denmark since 2011. Clinicians use the patients' self-reported PRO data, processed by a PRO algorithm, as a decision aid to assess whether a patient needs contact with the outpatient clinic. Validity and reliability are fundamental to any PRO measurement used at the individual level in clinical practice. The aim of this study was to evaluate the test–retest reliability of the PRO algorithm used in epilepsy outpatient clinics and to analyse whether the method of administration (web or paper) influenced the result.
Keywords: clinical decision support; epidemiology; epilepsy; patient-reported outcomes; test–retest reliability
Year: 2018 PMID: 30049693 PMCID: PMC6067405 DOI: 10.1136/bmjopen-2017-021337
Source DB: PubMed Journal: BMJ Open ISSN: 2044-6055 Impact factor: 2.692
Patient characteristics measured in test 1 in responders and non-responders in test 2 among outpatients with epilepsy, n=1640. Values are n (%) unless otherwise stated.
| Characteristic | Responders (n=554) | Non-responders (n=1086) |
| Gender, men | 286 (52) | 511 (47) |
| Age, year, median (IQR) | 57.3 (42.7 to 67.7) | 49.7 (33.8 to 64.8) |
| Department | ||
| Aarhus | 409 (74) | 831 (77) |
| Holstebro | 115 (21) | 174 (16) |
| Viborg | 30 (5) | 81 (7) |
| Patient-reported outcome algorithm in test 1 | ||
| Green | 116 (21) | 200 (18) |
| Yellow | 349 (63) | 670 (62) |
| Red | 89 (16) | 216 (20) |
| WHO-5 Well-Being Index, median (IQR) | 76 (60 to 84) | 72 (56 to 80) |
| General health | ||
| Excellent/very good | 258 (47) | 448 (41) |
| Good | 209 (38) | 427 (39) |
| Fair/poor | 87 (16) | 206 (19) |
| Missing item categories | 5 (1) | |
Figure 1. Flow chart of eligible participants' response method in test 1, randomisation of response method in test 2, non-responders in test 2 and participants included in the analysis. Among paper responders, the randomisation allocation was 1:1 from August to November 2016, and 0.25:0.75 in favour of the web method from the end of November 2016 to April 2017. PRO, patient-reported outcome.
Agreement between the automated PRO algorithm from test 1 to test 2, n=554
| PRO algorithm test 1 | PRO algorithm test 2 | | | |
| | Green (%) | Yellow (%) | Red (%) | Total (%) |
| Green | 104 (19) | 42 (8) | 1 (0.1) | 147 (27) |
| Yellow | 34 (6) | 328 (59) | 18 (3) | 380 (69) |
| Red | 0 (0) | 5 (1) | 22 (4) | 27 (5) |
| Total | 138 (25) | 375 (68) | 41 (7) | 554 (100) |
Green, no need for contact with the outpatient clinic.
Yellow, may need contact with the clinic (a clinician must assess the PRO response).
Red, needs contact with the clinic.
PRO, patient-reported outcome.
Test–retest reliability and agreement between the PRO algorithm from test 1 to test 2 in the study population and in different methods of administration
| PRO algorithm | n | Perfect agreement % (95% CI) | Disagreement, improved status % (95% CI) | Disagreement, worsened status % (95% CI) | Kappa* (95% CI) |
| Pooled | 554 | 82 (78 to 85) | 7 (5 to 9) | 11 (9 to 14) | 0.67 (0.60 to 0.74) |
| Web–web | 166 | 87 (80 to 92) | 5 (2 to 9) | 8 (5 to 14) | 0.78 (0.67 to 0.86) |
| Paper–paper | 112 | 82 (74 to 89) | 8 (4 to 15) | 10 (5 to 17) | 0.69 (0.57 to 0.81) |
| Mixed† | 276 | 79 (74 to 84) | 8 (5 to 12) | 13 (9 to 18) | 0.59 (0.48 to 0.69) |
*Weighted kappa with squared (quadratic) weights.
†Web–paper and paper–web.
PRO, patient-reported outcome.
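The pooled reliability figures can be reproduced directly from the 3×3 agreement table (test 1 rows × test 2 columns, Green/Yellow/Red, n=554). The sketch below is an illustrative recomputation, not the authors' analysis code: perfect agreement is the diagonal share, "improved" counts responses that moved toward Green in test 2 (below the diagonal), "worsened" counts moves away from Green (above the diagonal), and kappa is weighted with squared (quadratic) disagreement weights, matching the table footnote.

```python
# Counts copied from the published agreement table (test 1 x test 2).
obs = [
    [104, 42, 1],    # test 1: Green
    [34, 328, 18],   # test 1: Yellow
    [0, 5, 22],      # test 1: Red
]

def weighted_kappa(table):
    """Weighted kappa with quadratic weights w_ij = (i - j)^2 / (k - 1)^2."""
    k = len(table)
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    w = [[(i - j) ** 2 / (k - 1) ** 2 for j in range(k)] for i in range(k)]
    obs_dis = sum(w[i][j] * table[i][j] for i in range(k) for j in range(k))
    exp_dis = sum(w[i][j] * row_tot[i] * col_tot[j] / n
                  for i in range(k) for j in range(k))
    return 1 - obs_dis / exp_dis

n = sum(sum(row) for row in obs)
perfect = sum(obs[i][i] for i in range(3)) / n                        # diagonal
improved = sum(obs[i][j] for i in range(3) for j in range(3) if j < i) / n
worsened = sum(obs[i][j] for i in range(3) for j in range(3) if j > i) / n

print(f"perfect agreement: {perfect:.0%}")              # 82%
print(f"improved status:   {improved:.0%}")             # 7%
print(f"worsened status:   {worsened:.0%}")             # 11%
print(f"weighted kappa:    {weighted_kappa(obs):.2f}")  # 0.67
```

These values match the pooled row of the reliability table (82% perfect agreement, 7% improved, 11% worsened, kappa 0.67).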
Figure 2. Test–retest reliability from test 1 to test 2 of the pooled PRO algorithm (n=554), web–web (n=166), paper–paper (n=112) and the mixed group (web–paper or paper–web, n=276). PRO, patient-reported outcome.