Inga Hege, Andrzej A Kononowicz, Martin Adler.
Abstract
BACKGROUND: Clinical reasoning is a fundamental process medical students have to learn during and after medical school. Virtual patients (VP) are a technology-enhanced learning method to teach clinical reasoning. However, VP systems do not exploit their full potential concerning the clinical reasoning process; for example, most systems focus on the outcome and less on the process of clinical reasoning.
Keywords: clinical decision-making; computer-assisted instruction; educational technology; learning
Year: 2017 PMID: 29097355 PMCID: PMC5691243 DOI: 10.2196/mededu.8100
Source DB: PubMed Journal: JMIR Med Educ ISSN: 2369-3762
Overview of categories and subthemes, which have been translated into software requirements and how they have been implemented in the clinical reasoning tool.
| Category | Subtheme | Requirements |
| Psychological theories | Patient illness script | The concept of developing an illness script is implemented as a concept map (directed weighted graph), with findings, differential diagnoses, tests, and therapy options as nodes. Relations can be visualized as connections between the nodes, which can be weighted (eg, "slightly related," "highly related"). |
| | Dual processing | Learners can submit a final diagnosis at any time in the virtual patient (VP) scenario to encourage pattern recognition approaches. |
| Patient-centeredness | Cognitive errors | The final diagnosis/-es of the learner are compared with the expert's diagnoses. In case of a mismatch, the tool analyzes potential sources of errors or biases. |
| Teaching/assessment | Methods | Concept mapping, as a suitable method of teaching and assessing clinical reasoning, is the basis of the tool. |
| | Scoring | The nodes of the concept map are based on the Medical Subject Headings (MeSH) thesaurus; therefore, they can be scored by comparing them with expert nodes, including synonyms and more/less specific entries. |
| Learner-centeredness | Learning analytics | After each VP session, learners can access a dashboard with their clustered scores, the development of their performance over time/VPs, and a comparison with their peers. |
| | Feedback | Both process- and outcome-oriented feedback are provided by the tool and can be accessed by the learner at any time. |
| Context | Cognitive load | In the development process, we conducted usability tests to assess the general usability of the tool and specifically to uncover potential improvements in terms of extraneous cognitive load. |
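The table describes the illness script as a directed weighted graph with typed nodes and labeled connections, scored against an expert map including synonyms. A minimal Python sketch of such a structure (all class, label, and variable names here are hypothetical illustrations, not the tool's actual code):

```python
from dataclasses import dataclass, field

# Node types named in the requirements table: findings, differential
# diagnoses, tests, and therapy options.
NODE_TYPES = {"finding", "diagnosis", "test", "therapy"}

# Connection weights correspond to qualitative labels such as
# "slightly related" or "highly related".
WEIGHTS = ("speaks_against", "slightly_related", "highly_related")

@dataclass
class ConceptMap:
    """A learner's illness script as a directed weighted graph."""
    nodes: dict = field(default_factory=dict)   # node name -> node type
    edges: dict = field(default_factory=dict)   # (src, dst) -> weight label

    def add_node(self, name, node_type):
        assert node_type in NODE_TYPES, node_type
        self.nodes[name] = node_type

    def connect(self, src, dst, weight="slightly_related"):
        assert src in self.nodes and dst in self.nodes
        assert weight in WEIGHTS, weight
        self.edges[(src, dst)] = weight

def score_nodes(learner_map, expert_nodes, synonyms=None):
    """Count learner nodes that match an expert node or one of its
    synonyms (in the real tool, synonyms and more/less specific
    entries come from the MeSH thesaurus)."""
    synonyms = synonyms or {}
    return sum(
        1 for name in learner_map.nodes
        if any(name == e or name in synonyms.get(e, ()) for e in expert_nodes)
    )

# Example: a two-node map for a chest pain scenario.
cmap = ConceptMap()
cmap.add_node("chest pain", "finding")
cmap.add_node("myocardial infarction", "diagnosis")
cmap.connect("chest pain", "myocardial infarction", "highly_related")
```

A dictionary of edges keyed by `(source, target)` pairs keeps the graph directed, which matters because "speaks against" relations are asymmetric.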
Figure 1. Wireframe model of the clinical reasoning tool (right side) integrated into a virtual patient system (left side).
Figure 2. Screenshot of an exemplary VP and a learner's map embedded in the VP system CASUS. The switches on top allow showing/hiding all connections and the expert's map; a help page and a short introductory video are available. Diagnoses can be marked as final or working diagnoses and as must-not-miss diagnoses (exclamation mark).
Overview of errors that can be detected by the tool in case the learner has submitted a final diagnosis that differs from the expert's.
| Type of error | Detection | Data required |
| Premature closure | Submission of a final diagnosis at an early stage, after which the expert has added finding(s) or tests that are connected to the final diagnosis | Findings and tests of the learner and the expert (including stage); connections to the expert's final diagnosis; submission stage |
| Availability bias | Learner has worked on or accessed a virtual patient with a related final diagnosis (one Medical Subject Headings hierarchy level up/down) within the last 5 days | Previously created concept maps (date of last access and final diagnoses) |
| Confirmation bias | Learner has not added disconfirming finding(s) or "speaks against" connections between disconfirming findings and the final diagnosis | Findings of the learner and the expert; connections between findings and differential diagnoses |
| Representativeness | Learner has connected nonprototypical findings as "speak against" findings to the correct final diagnosis | Findings of the learner and the expert; nonprototypical findings (additional information in the expert map) |
| Base rate neglect | A rare final diagnosis has been submitted instead of the more prevalent correct final diagnosis | Differential diagnoses of the learner and the expert; prevalence of diagnoses (additional information in the expert map) |
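Two of the detection rules above are concrete enough to sketch directly. The following Python fragment illustrates how availability bias (a related diagnosis worked on within the last 5 days) and base rate neglect (a rare diagnosis preferred over a more prevalent one) could be checked; the function names and the `related` helper are hypothetical, and the real tool would resolve relatedness through the MeSH hierarchy:

```python
from datetime import date, timedelta

def availability_bias(learner_dx, previous_maps, related, today):
    """Flag availability bias: the learner accessed a VP with a related
    final diagnosis (one MeSH hierarchy level up/down) in the last 5 days.

    previous_maps: list of (last_access_date, final_diagnosis) tuples.
    related: predicate telling whether two diagnoses are one MeSH
    hierarchy level apart (hypothetical helper)."""
    window = timedelta(days=5)
    return any(
        today - accessed <= window and related(learner_dx, dx)
        for accessed, dx in previous_maps
    )

def base_rate_neglect(learner_dx, expert_dx, prevalence):
    """Flag base rate neglect: a rarer diagnosis was submitted instead of
    the more prevalent correct one (prevalence stored in the expert map)."""
    return learner_dx != expert_dx and prevalence[learner_dx] < prevalence[expert_dx]

# Example scenario with made-up diagnoses and prevalence values.
maps = [(date(2017, 5, 1), "pericarditis")]
related = lambda a, b: {a, b} == {"viral pericarditis", "pericarditis"}
recent = availability_bias("viral pericarditis", maps, related, date(2017, 5, 4))
```

Note that both checks only fire after a learner/expert mismatch; per the flowchart below, matching diagnoses skip error analysis entirely.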
Figure 3. Flowchart of the process of submitting a final diagnosis by a learner.
Description of clusters on which the learning analytics dashboard is based.
| Concepts in the model by Charlin et al | Cluster |
| Representation of the problem and determination of objectives of encounter | Scores for adding problems/findings |
| Investigations | Scores for adding tests |
| Therapeutic interventions | Scores for adding therapeutic options |
| Categorization for the purpose of action | Scores for generating differential diagnoses and scores for the final diagnosis |
| Final representation of the problem and semantic transformation | Scores for the summary statement |
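The cluster table maps each scored learner action onto one dashboard cluster. A small Python sketch of such an aggregation (the mapping keys and function name are hypothetical, chosen to mirror the table):

```python
# Hypothetical mapping from a scored action type to its dashboard
# cluster, following the Charlin et al concepts in the table above.
CLUSTERS = {
    "finding": "problems/findings",
    "test": "tests",
    "therapy": "therapeutic options",
    "differential": "differential diagnoses",
    "final_diagnosis": "final diagnosis",
    "summary": "summary statement",
}

def cluster_scores(scored_actions):
    """Aggregate per-action scores into dashboard clusters.

    scored_actions: iterable of (action_type, score) pairs."""
    totals = {}
    for action, score in scored_actions:
        cluster = CLUSTERS[action]
        totals[cluster] = totals.get(cluster, 0) + score
    return totals
```

For example, `cluster_scores([("finding", 1), ("finding", 2), ("test", 1)])` groups the two finding scores into one cluster total.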
Results of the usability questionnaire (n=10), rated on a 6-point Likert scale (0=totally disagree, 5=totally agree).
| Question | Mean response (minimum; maximum) |
| 1. I think that I would like to use the clinical reasoning tool frequently. | 3 (0; 5) |
| 2. I found the clinical reasoning tool unnecessarily complex. | 3.2 (1; 5) |
| 3. I found the various functions in the clinical reasoning tool were well integrated. | 3.4 (2; 5) |
| 4. The clinical reasoning tool helps structuring my thoughts. | 2.8 (1; 5) |
| 5. What was good? What should be improved? | 3 free text responses |
Total number and average number of nodes added per virtual patient (VP) by the users. The number of nodes added by the expert for each VP is shown in parentheses.
| Category | Total VP 1 | Average VP 1 user (expert) | Total VP 2 | Average VP 2 user (expert) | Total VP 3 | Average VP 3 user (expert) |
| Created maps | 62 | | 24 | | 31 | |
| Final diagnosis submitted | 38 (61%) | | 7 (29%) | | 20 (65%) | |
| Findings/problems | 159 | 2.6 (8) | 66 | 2.8 (7) | 59 | 1.9 (8) |
| Differential diagnoses | 163 | 2.6 (8) | 94 | 3.9 (8) | 67 | 2.2 (5) |
| Tests | 67 | 1.1 (5) | 50 | 2.1 (8) | 41 | 1.3 (8) |
| Therapies | 9 | 0.1 (1) | 4 | 0.2 (1) | 8 | 0.3 (4) |
| Connections | 21 | 0.3 (5) | 14 | 0.6 (8) | 1 | 0 (5) |
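The per-user averages in the table are the totals divided by the number of created maps for that VP, rounded to one decimal. A one-line Python check (function name is illustrative) reproduces a few cells:

```python
def average_per_map(total_nodes, maps_created, ndigits=1):
    """Average number of nodes added per created map, rounded as in the table."""
    return round(total_nodes / maps_created, ndigits)

# Reproducing cells from the table above (VP 1 had 62 created maps):
assert average_per_map(159, 62) == 2.6   # findings/problems, VP 1
assert average_per_map(163, 62) == 2.6   # differential diagnoses, VP 1
assert average_per_map(67, 62) == 1.1    # tests, VP 1
```

The same calculation holds for VP 2 (24 maps) and VP 3 (31 maps), e.g. 94/24 ≈ 3.9 differential diagnoses per user.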