Mohamed Khalifa, Farah Magrabi, Blanca Gallego Luxan.
Abstract
BACKGROUND: When selecting predictive tools for implementation in clinical practice or for recommendation in clinical guidelines, clinicians and health care professionals face an overwhelming number of tools, many of which have never been implemented or evaluated for comparative effectiveness. To overcome this challenge, the authors developed and validated an evidence-based framework for the grading and assessment of predictive tools (the GRASP framework), based on a critical appraisal of the published evidence on such tools.
Keywords: clinical decision rules; clinical prediction rule; evaluation study; evidence-based medicine
Year: 2020 PMID: 32673228 PMCID: PMC7381257 DOI: 10.2196/15770
Source DB: PubMed Journal: J Med Internet Res ISSN: 1438-8871 Impact factor: 5.428
Figure 1. The grading and assessment of predictive tools (GRASP) framework concept.
Figure 2. Survey workflow and randomization of the 4 scenarios.
Figure 3. Consolidated Standards of Reporting Trials (CONSORT) 2010 flow diagram of the experiment. GRASP: grading and assessment of predictive tools.
Proposed hypotheses and related outcome measures.
| Proposed hypotheses | Related outcome measures |
| Using GRASPa will make predictive tools’ selection decisions more accurate, that is, selecting the best predictive tools | Accuracy of tools’ selection decisions |
| Using GRASP will make decisions more objective, informed, and evidence-based, that is, decisions are based on the information provided by the framework | Making decisions objectively based on the information and evidence provided in the experiment |
| Using GRASP will make decisions less subjective, that is, less based on guessing, prior knowledge, or experience | Making decisions based on guessing; making decisions based on prior knowledge or experience |
| Using GRASP will make decisions more efficient, that is, decisions are made in less time | The time needed for tools’ selection decision making |
| Using GRASP will make participants face less decisional conflict, that is, be more confident and satisfied with decisions | Levels of participants’ confidence in their decisions and levels of participants’ satisfaction with their decisions |
aGRASP: grading and assessment of predictive tools.
The Checklist for Reporting Results of Internet E-Surveys (CHERRIES).
| Item category and checklist item | Explanation |
| Describe survey design | A randomized controlled trial testing the impact of using the GRASP framework on clinicians' and health care professionals' decisions in selecting predictive tools for CDSa, using a convenience sample of invited participants |
| IRBb approval | The experiment was approved by the Human Research Ethics Committee, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, Australia |
| Informed consent | Informed consent was obtained at the beginning of the survey, before participants could proceed; the consent statement covered the expected survey duration, the types of data collected and their storage, the investigators, and the purpose of the study |
| Data protection | Collected personal information was protected through the Macquarie University account on the Qualtrics survey platform |
| Development and testing | The first author (MK) developed the survey and pilot tested the questions and the survey's usability before deploying it to participants |
| Open survey versus closed survey | This was a closed survey; only invited participants had access to complete it |
| Contact mode | Initial contact was made via email with all invited participants; only those who agreed to participate completed the web-based survey |
| Advertising the survey | The survey was not advertised; only invited participants were informed of the study and completed the survey |
| Web or email | The survey was developed on the Qualtrics survey platform, and a link to the web-based survey was sent to invited participants via email. Responses were collected automatically through Qualtrics and then retrieved by the investigators for analysis |
| Context | Only invited participants were informed of the study, via email |
| Mandatory/voluntary | Participation was voluntary for invited participants |
| Incentives | The only incentive was that participants could request to be acknowledged in the published study; participants were also informed of the survey results after the analysis was complete |
| Time/date | Data were collected over 6 weeks, from March 11 to April 21, 2019 |
| Randomization of items or questionnaires | To prevent order bias, items were randomized |
| Adaptive questioning | Four scenarios were used and randomized, but they were not conditionally displayed |
| Number of items | 5 to 8 items per page |
| Number of screens (pages) | The questionnaire was distributed over 5 pages |
| Completeness check | Completeness checks ran after the questionnaire was submitted, and mandatory items were highlighted; items provided a nonresponse option ("not applicable" or "don't know") |
| Review step | Respondents were able to review and change their answers before submitting them |
| Unique site visitor | IPc addresses were used to check for unique survey visitors |
| View rate (ratio of unique survey visitors/unique site visitors) | Only invited participants had access to the survey; survey visitors included those who completed the survey and those who started it but did not complete it or gave incomplete answers |
| Participation rate (ratio of unique visitors who agreed to participate/unique first survey page visitors) | The participation rate was 90% (218 of the 242 invited participants who visited the first page agreed to participate) |
| Completion rate (ratio of users who finished the survey/users who agreed to participate) | The completion rate was 91% (198 of the 218 participants who agreed to participate completed the survey) |
| Cookies used | Cookies were not used to assign unique user identifiers; instead, users' computer IP addresses were used to identify unique users |
| IP address check | The IP addresses of participants' computers were used to identify potential duplicate entries from the same user; only 2 duplicate entries were found, and these were eliminated before analysis |
| Log file analysis | The demographic information provided by all participants was also checked to confirm that the 2 identified duplicates were the only incidents |
| Registration | User IP addresses and demographic data were used to eliminate duplicate entries before analysis; the most recent entries were retained |
| Handling of incomplete questionnaires | Only completed surveys were used in the analysis |
| Questionnaires submitted with an atypical timestamp | Task completion time was captured, but no specific timeframe was enforced; statistical outliers were excluded from the analysis because the survey allowed users to re-enter later (eg, the next day), as discussed in the paper |
| Statistical correction | No statistical correction was required |
aCDS: clinical decision support.
bIRB: institutional review board.
cIP: internet protocol.
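The view, participation, and completion rates in the checklist are simple ratios; a minimal sketch (the helper name is illustrative, not from the paper) reproduces the reported figures:

```python
def rate_percent(numerator: int, denominator: int) -> int:
    """Return a ratio as a whole-number percentage (illustrative helper)."""
    return round(numerator / denominator * 100)

# Participation rate: 218 of 242 invited participants who viewed page 1 agreed.
participation = rate_percent(218, 242)  # 90

# Completion rate: 198 of 218 participants who agreed finished the survey.
completion = rate_percent(198, 218)     # 91
```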
The impact of using grading and assessment of predictive tools on participants’ decisions (n=194).
| Criteria | No GRASPa | GRASP | Change (%) | P value |
| Score (0 to 100%) | 53.7 | 88.1 | 64 | <.001 |
| Guessing (1 to 5) | 2.48 | 1.98 | −20 | <.001 |
| Subjective (1 to 5) | 3.55 | 3.27 | −8 | .003 |
| Objective (1 to 5) | 3.11 | 4.10 | 32 | <.001 |
| Confidence (1 to 5) | 3.55 | 3.96 | 11 | <.001 |
| Satisfaction (1 to 5) | 3.54 | 3.99 | 13 | <.001 |
| Time in min (90th percentile) | 12.4 | 6.4 | −48 | .38 |
aGRASP: grading and assessment of predictive tools.
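The Change (%) column is the relative difference between the GRASP and no-GRASP means; a small sketch (the function name is ours, not the authors') reproduces, for example, the 64% gain in accuracy score:

```python
def percent_change(before: float, after: float) -> int:
    """Relative change from before to after, rounded to a whole percent."""
    return round((after - before) / before * 100)

# Accuracy score rose from 53.7 to 88.1, a 64% improvement.
score_change = percent_change(53.7, 88.1)  # 64

# 90th-percentile decision time fell from 12.4 to 6.4 minutes, ie, -48%.
time_change = percent_change(12.4, 6.4)    # -48
```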
Estimates of paired differences and effect sizes.
| Measure | Mean (SD) | SE | 99.3% CIa | t (df) | P value | Effect sizeb (value) | Effect sizeb (actual size) |
| Score | 0.340 (0.555) | 0.040 | 0.231 to 0.449 | 8.53 (193) | <.001 | 0.274 | Large |
| Guessing | −0.519 (1.303) | 0.095 | −0.777 to −0.260 | −5.47 (188) | <.001 | 0.134 | Moderate |
| Subjective | −0.319 (1.464) | 0.107 | −0.613 to −0.028 | −2.99 (187) | .003 | 0.044 | Small |
| Objective | 1.005 (1.496) | 0.109 | 0.709 to 1.302 | 9.24 (189) | <.001 | 0.307 | Large |
| Confidence | 0.392 (1.261) | 0.092 | 0.141 to 0.642 | 4.27 (188) | <.001 | 0.086 | Moderate |
| Satisfaction | 0.439 (1.235) | 0.090 | 0.194 to 0.684 | 4.89 (188) | <.001 | 0.110 | Moderate |
| Durationc | −447 (7152) | 514 | −1847 to 952 | −0.87 (193) | .39 | N/Ad | N/A |
aBonferroni correction conducted.
bEffect size calculated using the eta-square statistic (0.01=small effect, 0.06=moderate effect, and 0.14=large effect [58]).
cTask completion duration is reported in seconds.
dN/A: not applicable.
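The footnotes can be made concrete: with 7 outcome measures, a Bonferroni-corrected confidence level is 1 − .05/7 ≈ 99.3%, and eta-squared for a paired t test can be computed as t²/(t² + df). A hedged sketch (our own helpers, not the authors' code):

```python
def bonferroni_ci_level(alpha: float = 0.05, n_tests: int = 7) -> float:
    """Confidence level after Bonferroni correction, as a percentage."""
    return (1 - alpha / n_tests) * 100

def eta_squared(t: float, df: int) -> float:
    """Eta-squared effect size for a paired t test: t^2 / (t^2 + df)."""
    return t ** 2 / (t ** 2 + df)

# 7 simultaneous tests at alpha = .05 give the 99.3% CIs shown in the table.
level = round(bonferroni_ci_level(), 1)        # 99.3

# Score row: t = 8.53, df = 193 gives eta^2 of about 0.274 (large, >= 0.14).
score_eta2 = round(eta_squared(8.53, 193), 3)  # 0.274
```

Note that eta-squared values recomputed from the rounded t statistics printed in the table will differ slightly from the reported values for some rows.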
Comparing the impact of grading and assessment of predictive tools on participant groups.
| Health care professional group | Score (0 to 100%) | Guessing (1 to 5) | Subjective (1 to 5) | Objective (1 to 5) | Confidence (1 to 5) | Satisfaction (1 to 5) | Time in min (90th percentile) |

| No GRASPa | 61.4 | 2.4 | 3.7 | 3.0 | 3.6 | 3.6 | 10.9 |
| GRASP | 89.0 | 2.0 | 3.5 | 4.0 | 4.0 | 4.0 | 6.1 |
| Change (%) | 45 | −18 | −5 | 33 | 10 | 12 | −44 |
| P value | <.001 | <.001 | .08 | <.001 | <.001 | <.001 | .62 |

| No GRASP | 37 | 2.7 | 3.3 | 3.5 | 3.5 | 3.5 | 15.3 |
| GRASP | 85 | 2.0 | 2.8 | 4.4 | 3.8 | 3.9 | 6.6 |
| Change (%) | 127 | −25 | −16 | 28 | 10 | 14 | −57 |
| P value | <.001 | <.001 | .007 | <.001 | .047 | .008 | .26 |

| No GRASP | 73 | 2.4 | 4.1 | 2.8 | 3.8 | 3.8 | 11.0 |
| GRASP | 93 | 1.9 | 3.7 | 3.8 | 4.1 | 4.1 | 6.5 |
| Change (%) | 29 | −19 | −10 | 36 | 6 | 7 | −41 |
| P value | <.001 | <.001 | .009 | <.001 | .07 | .04 | .51 |

| No GRASP | 36 | 2.6 | 3.0 | 3.4 | 3.3 | 3.2 | 15.0 |
| GRASP | 83 | 2.0 | 2.9 | 4.4 | 3.8 | 3.8 | 6.5 |
| Change (%) | 129 | −21 | −6 | 28 | 15 | 19 | −57 |
| P value | <.001 | <.001 | .096 | <.001 | .001 | <.001 | .11 |

| No GRASP | 67.0 | 2.3 | 4.1 | 2.8 | 3.8 | 3.8 | 8.1 |
| GRASP | 89.6 | 1.8 | 3.7 | 3.8 | 4.1 | 4.1 | 5.3 |
| Change (%) | 34 | −22 | −10 | 39 | 8 | 8 | −34 |
| P value | <.001 | <.001 | .007 | <.001 | .016 | .013 | .51 |

| No GRASP | 36 | 2.7 | 2.8 | 3.6 | 3.3 | 3.2 | 18.2 |
| GRASP | 85 | 2.2 | 2.7 | 4.5 | 3.7 | 3.8 | 7.9 |
| Change (%) | 134 | −18 | −5 | 23 | 14 | 19 | −57 |
| P value | <.001 | .002 | .16 | <.001 | .003 | <.001 | .24 |

| No GRASP | 54.2 | 2.3 | 3.5 | 3.1 | 3.7 | 3.6 | 13.5 |
| GRASP | 82.2 | 2.0 | 3.3 | 4.1 | 3.9 | 4.0 | 7.4 |
| Change (%) | 52 | −14 | −7 | 33 | 8 | 10 | −45 |
| P value | <.001 | .005 | .08 | <.001 | .009 | .002 | .41 |

| No GRASP | 55 | 2.9 | 3.5 | 3.3 | 3.3 | 3.4 | 12.2 |
| GRASP | 97 | 2.0 | 3.1 | 4.3 | 3.9 | 4.0 | 5.3 |
| Change (%) | 78 | −30 | −12 | 29 | 17 | 18 | −56 |
| P value | <.001 | <.001 | .004 | <.001 | .004 | .001 | .54 |

| No GRASP | 59 | 2.6 | 3.6 | 3.1 | 3.5 | 3.5 | 9.1 |
| GRASP | 87 | 2.0 | 3.3 | 4.1 | 4.0 | 4.0 | 6.0 |
| Change (%) | 48 | −25 | −7 | 34 | 13 | 14 | −34 |
| P value | <.001 | <.001 | .06 | <.001 | .001 | .001 | .45 |

| No GRASP | 47 | 2.3 | 3.5 | 3.2 | 3.6 | 3.6 | 15.9 |
| GRASP | 88 | 2.0 | 3.2 | 4.1 | 3.9 | 4.0 | 7.7 |
| Change (%) | 89 | −13 | −10 | 28 | 7 | 10 | −52 |
| P value | <.001 | .03 | .009 | <.001 | .08 | .004 | .19 |

| No GRASP | 59 | 2.6 | 3.6 | 3.0 | 3.5 | 3.4 | 8.1 |
| GRASP | 87 | 2.0 | 3.3 | 4.0 | 3.9 | 4.0 | 6.5 |
| Change (%) | 48 | −24 | −7 | 36 | 12 | 16 | −20 |
| P value | <.001 | <.001 | .09 | <.001 | .009 | .001 | .46 |

| No GRASP | 49 | 2.4 | 3.5 | 3.3 | 3.6 | 3.6 | 15.0 |
| GRASP | 88 | 2.0 | 3.2 | 4.2 | 3.9 | 4.0 | 6.8 |
| Change (%) | 80 | −16 | −10 | 28 | 9 | 9 | −54 |
| P value | <.001 | .004 | .006 | <.001 | .004 | .004 | .11 |
aGRASP: grading and assessment of predictive tools.