Brent Thoma, Venkat Bandi, Robert Carey, Debajyoti Mondal, Rob Woods, Lynsey Martin, Teresa Chan.
Abstract
BACKGROUND: Competency-based programs are being adopted in medical education around the world. Competence Committees must visualize learner assessment data effectively to support their decision-making. Dashboards play an integral role in decision support systems in other fields. Design-based research allows the simultaneous development and study of educational environments.
Year: 2020 PMID: 32215140 PMCID: PMC7082472 DOI: 10.36834/cmej.68903
Source DB: PubMed Journal: Can Med Educ J ISSN: 1923-1202
Thematic analysis of Competence Committee needs, associated dashboard elements, and representative quotes.
| CC Member Needs | Dashboard Element | Quotes |
|---|---|---|
| | | FG1: A big part is specific EPA data. So, from each of the numerical EPAs. And then the narrative comments that go along with those. |
| | | I2: So usually would start just by looking at kinda total number of EPAs observed. So the EPAs per week total and then the expired ones that they have. And then I would just break those down based on the numbers for the last EPA period just to have an overall idea of how the resident has done. |
| | | FG2: Just kinda overall within residents within specific stages would kinda compare total number of EPAs with different rotations done just to see kinda what the trends were for residents in different years. |
| | | FG1: I think currently where it’s most helpful is seeing how many EPAs residents are getting on specific rotations. Because if you have one resident that rotates through general surgery and they’re getting 15 EPAs and then another one that comes through and they’re only getting three to five, then that kinda helps kind of assess them from that perspective as well. |
| | | I3: Sometimes the attending just doesn’t fill it out. Like they can’t get any more. But to get an idea of how many are expiring in general and in particular for that resident |
| | | I2: The trend is the most important thing. So it’s looking at the overall number and then what they’ve done ‘cause you can clearly see the trend if they’re down in the 2s and 3s versus if they’re up in the 4s and 5s. |
| | | I1: Or did they present to you sort of a representative sampling of procedures that they would be expected to do in the ED? |
| | | I1: So it’s handy actually to have the narrative feedback where you can just sort of look with one click or one mouse over to see all of the things that have been said in that area. So that’s a big timesaver. |
| | | I4: I just use (narrative assessments) to get an overall picture of how the resident’s doing. If they all sort of paint the same picture then it’s great and you get a better feel of where they’re at than just the feedback data on the EPA. They tend to be a little bit more in-depth. So it just gives you a better overall picture and a better – just gives you a better feel of where they’re at. |
| | | FG1: I suspect because so much of this information is hard to collate together, we probably haven’t even dreamed up what would even be the best. Because once we actually have some sort of usable interface to look at data, we can look at more of it and expect more of it. Whereas right now I think we’re just wrapping our heads around collating bits and pieces from so many sources […] But if it was all on one interface dashboard, we would look at it and go, “Awesome, this would be a great place to now add this bit and this bit and this bit.” |
| | | I4: [Resident self-assessments] are very useful, mostly ‘cause it kinda summarizes a lot of the data that you get from the EPAs. So it gives you a little bit more of a background of what they were on... then I get a bit of a better idea of where they think they’re going in the next few months. And it really helps me with their goals especially. So their goals for their past rotation, their goals for their next rotation. And that way you can kinda follow-up with their EPAs and correlate their EPAs with their goals that they identified and make sure that they’re actually getting to where they wanna be. |
| | | I1: So I try to look through the report or the minutes from the previous competency committee to just refresh my memory on what we were saying our priorities for the resident were. |
| | | FG2: So we do now twice annual written exams and once annual mock oral exams. And so it would be nice to see like a running tally of their exam scores across the years to see where they’re trending and where they rank. |
| | | CC2: What are they missing? Scholarly activities? Required activities for the year that have been ticked off - how many do they have checked off? Should be a constant reminder for them. A 'tick sheet' of their activities. |
| | | FG2: I would look at EPA numbers just overall to get a sense of how many they’re doing. Then I would focus in on – I’d have a quick scan of what rotations they’d done recently to get a sense of whether or not that was a reasonable number of EPAs or not. Then I would move down into the specific stage of training that they were in and I would look at the EPAs they’d done in terms of scores as well as narrative comments. And I would filter it for the last three months to make sure I’m looking at the most recent data. […] And then I would take their narrative comments from previous – like their previous summary of how they were doing and what they wanted to work on to make sure that those things had been incorporated into this quarter of their work. |
| | | FG1: It’s, yeah, from our perspective I think it’s more [the dashboard’s] organization and having things like readily available as opposed to the [previous] system right now where it’s click click download click click. So it’s more the things together that can be easily accessed. |
| | | I1: maybe just like a one-page job aid to how to get the most out of the dashboard. Just so that if there’s anything like that, people could quickly just scan a one-page summary and say, “Oh! I see it can do that. I didn’t realize,” or something like that. |
| | | FG1: It’d be nice also if you could have a bar of like what rotation – clinical rotation – they were on. So you could be like, “Oh well they didn’t get many this month but they were on plastic surgery, and we know that they’re only gonna get a handful.” But then the next block they were on emerg and then they got only seven which is way below what we’d expect. |
| | | I4: The first thing I do is I try and narrow down the data just from the period that we’re looking at. So I just try and get the date filter right for just the block that we’re looking at. And eliminate all the other pieces of the data to make it a little bit cleaner to look through. |
| | | FG1: The quality of the data we have varies from faculty to faculty. Some are very good about filling out EPAs and getting a sense of what they mean. Other faculty don’t understand as much. There’s also quite a variability in the quality of the narrative comments. Some people are very descriptive and get to the heart of where the residents’ thought processes are. And other faculty write very non-descript vague statements about what was done. |
| | | FG1: It doesn’t matter to me where [the data is] stored as long as it’s secure. In terms of where it’s viewed, as long as we can – all committee members – can access it and look at changes to the screen in real time. |
Outline of the dashboard elements requested during each of the three data collection periods.
| Design and Construction | Evaluation and Reflection 1 | Evaluation and Reflection 2 |
|---|---|---|
| 2.1 Resident Self-Assessment | ||
| 3.1 Efficiency | 3.1 Efficiency | |
Figure 1. Visual representation of the EPA acquisition metrics displayed both since the beginning of the resident’s participation in the competency-based assessment program and for a selected period.
Figure 2. Visual representation of the achievement of a single entrustable professional activity (EPA), incorporating numerical metrics, a graphical representation of entrustment scores over time, and narrative feedback.
Figure 3. Visual representation of the achievement of entrustable professional activity assessments highlighting specific clinical presentations and/or patient demographics.
Figure 4. Visual representation of residents’ acquisition metrics plotting the number of overall entrustable professional activity assessments per week (y-axis) for each resident (x-axis), both since the beginning of the resident’s participation in the competency-based assessment program (green line) and for a selected period (blue line).
Figure 5. Visual representation of the number of entrustable professional activities observed for a single resident on each rotation, with a heat map indicating the proportion of expected assessments (<25% of expected: red; >80% of expected: green).
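The Figure 5 heat map applies a simple threshold rule to the ratio of observed to expected EPA assessments per rotation. A minimal sketch of that colour-coding logic, using only the two thresholds stated in the caption (<25% red, >80% green); the function name, the amber colour for the unspecified intermediate range, and the grey fallback for rotations with no defined expectation are illustrative assumptions, not part of the published dashboard:

```python
def heatmap_colour(observed: int, expected: int) -> str:
    """Map a rotation's observed/expected EPA ratio to a heat-map colour.

    Thresholds follow the Figure 5 caption (<25% of expected: red;
    >80% of expected: green). The intermediate "amber" band and the
    "grey" fallback are assumptions for illustration only.
    """
    if expected <= 0:
        return "grey"  # no expected-assessment count defined (assumption)
    proportion = observed / expected
    if proportion < 0.25:
        return "red"    # well below the expected number of assessments
    if proportion > 0.80:
        return "green"  # at or near the expected number of assessments
    return "amber"      # in between the two published thresholds
```

For example, a resident with 3 of 15 expected EPAs on a rotation (20%) would be flagged red, while 13 of 15 (≈87%) would show green.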
Figure 6. Tabular presentation of non-EPA narrative assessment data for an individual resident.
Figure 7. Visual representation of the status of a resident within their residency program over time, incorporating narrative feedback from the Competence Committee.
Figure 8. Visual representation of the within-cohort percentile rank score of an individual emergency medicine resident on their national written exam from 2016 through 2018.
Figure 9. Visual representation of the oral examination scores of an individual resident in the 2018-19 academic year.
Resident status categories displayed over time (see Figure 7):
| Inactive | Failure to Progress | Not Progressing | Progressing as Expected | Progress is Accelerated |
|---|---|---|---|---|