Yusuf Yilmaz, Robert Carey, Teresa M Chan, Venkat Bandi, Shisong Wang, Robert A Woods, Debajyoti Mondal, Brent Thoma.
Abstract
BACKGROUND: Canadian specialist residency training programs are implementing a form of competency-based medical education (CBME) that requires frequent assessments of entrustable professional activities (EPAs). Faculty struggle to provide helpful feedback and assign appropriate entrustment scores. CBME faculty development initiatives rarely incorporate teaching metrics. Dashboards could be used to visualize faculty assessment data to support faculty development.
Year: 2021 PMID: 34567305 PMCID: PMC8463237 DOI: 10.36834/cmej.72067
Source DB: PubMed Journal: Can Med Educ J ISSN: 1923-1202
Thematic analysis of faculty development needs and the dashboard elements developed to address them
| Theme | Subtheme and Dashboard Element |
|---|---|
| 1. Analyses of assessments | |
| | 1.1 Quantitative acquisition metrics |
| | 1.2 Narrative acquisition metrics |
| 2. Contextualization of analyses | |
| | 2.1 By peer |
| | 2.2 By rotation |
| | 2.3 By time |
| | 2.4 By resident |
| | 2.5 By assessment system |
| 3. Accessible and clear reporting (Supplemental Data, Video 1) | |
| | 3.1 Accessible reporting |
| | 3.2 Clear reporting |
Figure 1. Visual representation of acquisition metrics for all faculty (top row) and the selected faculty member (bottom row).
Figure 2. Visual representation of the table containing narrative comments that can be searched and filtered by date, resident, EPA, and entrustment score.
Figure 3. Bar chart of faculty EPA observations with each faculty member represented as a bar and the selected faculty member highlighted in red.
Figure 4. Bar chart of faculty EPA observations with each faculty member represented as a bar with overlaid time-filtered data.
Figure 5. Spider graph visualizing over- and under-completion of EPAs by the selected faculty member (blue) and the overall program (purple).
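The quantitative acquisition metrics visualized in Figure 1 (average entrustment score, number of EPAs completed and expired, average words per comment) can all be derived from a faculty member's assessment records. The sketch below is illustrative only: the record fields (`score`, `comment`) and the treatment of an expired EPA as a record with no score are assumptions for this example, not the schema used by the study's dashboard.

```python
# Minimal sketch of Figure 1-style acquisition metrics for one faculty member.
# Record shape is a hypothetical assumption: 'score' is an entrustment score
# of 1-5 (None if the EPA expired unfilled) and 'comment' is narrative feedback.
from statistics import mean

def acquisition_metrics(records):
    """Summarize EPA completions, expiries, mean score, and mean comment length."""
    completed = [r for r in records if r["score"] is not None]
    expired = [r for r in records if r["score"] is None]
    return {
        "n_completed": len(completed),
        "n_expired": len(expired),
        "mean_score": round(mean(r["score"] for r in completed), 2) if completed else None,
        "mean_words": round(mean(len(r["comment"].split()) for r in completed), 1) if completed else None,
    }

sample = [
    {"score": 5, "comment": "Excellent airway management and clear handover"},
    {"score": 4, "comment": "Good resuscitation; review fluid choices"},
    {"score": None, "comment": ""},  # expired EPA, never filled out
]
print(acquisition_metrics(sample))
```

Aggregating the same function over every faculty member would yield the program-wide comparison row of Figure 1; the dashboard presumably performs an equivalent aggregation server-side.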
| Theme / Subtheme / Category | Quotation |
|---|---|
| 1. Analyses of assessments | |
| 1.1 Quantitative acquisition metrics | ||||
| 1.1.1 Average EPA score of individual faculty | I think I’m still trying to refine it and the thing that influenced my feedback most was my average entrustment score and realizing how far to the right I was. It actually--since then, I’ve been a little bit more aware of when it is okay, so to speak, to give the not fives and when it is okay for me to dictate which EPAs we’re doing which was maybe a big learning point for the residents. (Faculty member 1) | |||
| 1.1.2 Average EPA score of entire faculty | I mean the average EPA score, that could be really good or really bad. Like that’s gonna have to come I think with context for the program. You’re getting very junior learners at a junior time and all your average EPAs are 5. That doesn’t compute (Faculty development expert 1) | |||
| 1.1.3 Number of EPAs completed | So nice having the total number of EPAs served and then especially if you filter it by day having it for the period because we usually do a check-in every three months with the competence committee to see who’s completing EPAs, who maybe needs some prompting for actually getting their EPAs done or for getting more helpful comments so it’s good to have that, of that filter. (Program leader 2) | |||
| 1.1.4 Number of EPAs expired | We have some faculty that are notorious for letting EPAs go expired. So I think that’s a big thing to focus on for faculty development just to make sure that if they’re continuously having expired EPAs, send them an email and say, “Hey I noticed that you’ve had several EPAs expire for our residents. If you could just remember to complete them on shift with the resident or as close to the shift as possible, that would be extremely helpful because our competency committee depends on this data.” (Program leader 2) | |||
| 1.1.5 Distribution of scores | I think I’m gonna be informed by the things that already exist. So knowing the number of EPA’s I filled out and my range of scores that I provide relative to the rest of the faculty, gives me a sense of where I’m performing. (Program leader 1) | |||
| 1.1.6 Percentage of EPAs filled out vs sent | So, from an individual side, you obviously have your own individual data. So you would have this nicely from a faculty development side, I think this would be useful to highlight the percentage being high, whether it’s a small number or a large number of total EPA’s that have been sent or filled out, then percentage is gonna be useful. Just like the way it is. (Program leader 1) | |||
| 1.1.7 Time to EPA completion | One thing I thought would be useful is maybe to like do a thing where the time or number of days or whatever, that you fill out the EPA from when they submitted it. I think it kinda holds you accountable and is super interesting for yourself to know and lately, I think from that session trying to open it up earlier, trying to get on top of that earlier and talk to the residents during shifts, so that was also very useful, too, a good idea. (Faculty member 2) | |||
| 1.2 Narrative acquisition metrics | I would think so because I mean I can certainly see that what the paper is saying, that you know, number of words would have a correlation with the quality of the feedback, but we know people who can speak well and people who can, you know, need more words to say. But ultimately it has to- for me, ultimately it has to be meaningful to the residents and to the CCC to make their decision, right? So if there was any way to capture residents’ opinion on this or the CCC’s assessment of that feedback would be really helpful. I would think, going forward (Faculty development expert 1) | |||
| 1.2.1 Words per comment | I really feel that anytime you just kind of give one number, like an average thing, that only gives limited picture, right? So if there was any way to provide some measure of dispersion whether it’s standard deviation or range, whatever else you wanted to provide, like this is the average EPA score but this is their whatever it might be. And same thing for the average words. I think that gives a bit more meaningful information. (Faculty development expert 2) | |||
| 1.2.2 Average words per comment | I think the average words per comment are helpful. I’m just seeing that now for the first time. But you can clearly see the difference in comments between people that have a higher word count versus people who have a lower word count. So I think that would be one way to stratify out which faculty we may need to work on. (Program leader 2) | |||
| 1.2.3 Quality of comments | I mean the word counting’s one thing, but I guess I wonder how much--’cause you can--how much is put in there ‘cause you can write a lot of stuff and say very little, right? So, I guess there’s a balance between sorta not saying enough and sort of brevity, I guess, basically, right? So, I wonder if it’s not necessarily the number of words versus how much it’s the quality of actually what’s written, basically, right? I know the thesis or the thought is that if you write more, you’re probably giving better quality EPAs to the residents, but I don’t know that word count is the absolute be all end all, basically. (Faculty member 3) | |||
| 2. Contextualization of analyses | |
| 2.1 By peer | I love the comparisons because I think as physicians we’re quite competitive and sometimes it helps us to know where we are in the pack; to know just how much time and effort we’d have to put into something to pull up our socks. But I would want a way to help them, and I think you did show me this, where they can track themselves over time to see how well they’re improving on specific parameters. (Faculty development expert 3) | |||
| 2.1.1 Overview Graph | Like I feel maybe if it was a graph of all the scores individually that they’ve ever given out and then you could hover over the comment that was associated with each of the scores. (Program leader 2) | |||
| 2.1.1.1 People who have high proportions of expired EPAs | it’d be very easy to identify people who have high proportions of expired EPAs (Program leader 1) | |||
| 2.1.1.2 People who have very high and low deviation in their EPA scores | it’d be very easy to identify … people who have very very high and low deviation in their EPA scores… (Program leader 1) | |||
| 2.1.1.3 People who have very small word counts | it’d be very easy to identify … people who have very very small word counts. (Program leader 1) | |||
| 2.1.1.4 Frequently vs infrequently done EPAs | Honestly, I think it’s interesting ‘cause there’s so many EPAs, but if it was like you’ve never done an EPA on this or--’cause if you keep picking the same ones, that’s also kind of interesting like for all your residents, you’re really good at getting the x, y, z or 90% of residents are missing this and that sort of stays up, but maybe it would have to be location specific, so at the U of S, our residents have the hardest time with these five EPAs. Then, that sort of almost a little just reminder piece and you could scroll over them and maybe read them ‘cause I do find it hard. I know some general description of the ones from that workshop, but are the four point five point whatever, is that in my brain? No. I don’t actually know how to go back and find that. (Faculty member 4) |
| 2.1.1.5 EPA expiry changes over time (e.g., histogram) | So I think that’s a big thing to focus on for faculty development just to make sure that if they’re continuously having expired EPAs, send them an email and say, “Hey I noticed that you’ve had several EPAs expire for our residents. If you could just remember to complete them on shift with the resident or as close to the shift as possible, that would be extremely helpful because our competency committee depends on this data.” So I think that’s the first thing is just having them do the EPA. (Program leader 2) |
| 2.1.1.6 Number of Shifts vs EPA filled out | Cause obviously there’s a few people at the top end who clearly work with residents a lot, and then the rest of them aren’t separated by a ton, so it’d be useful for them to know, well, this person far left might have only had one shift, or maybe they’ve had 50 shifts. And that’s dramatically different than the fact that they’ve only filled out four EPA’s. And so from an individual level, from just an accountability side of things, to be like oh boy, I’m at one EPA per four shifts and we could have like a target on that one that said, you know, “The goal of our residency program is a minimum of one EPA per emerge shift, and you as a faculty member are filling out an average of 0.3 per emerge shift. You’re below the target”. (Program leader 1) | |||
| 2.1.1.7 Individual EPA score distribution | I think that would be good. The only temporality a graph would offer is if we do do some faculty development it’d be people who were like, five, five, five, five, five initially and we’re actually like, no maybe take a minute, think about what comes into the five. Think about what you’re doing with the EPA getting some prompting on kind of proper EPA procedures and then maybe there are threes, fours, they have a better mix afterwards. That might be a little bit more valuable but. (Program leader 2) | |||
| 2.1.1.8 Expired EPA filter | So I have to admit, you know again going back to my specialty, the expired is very helpful. Because residents will often say “oh they just all expired, we didn’t get any” well it’s like, less than 20% is not bad. Because there are gonna be some. So I think this is a very nice visual and the EPA rating (Faculty development expert 2) | |||
| 2.1.1.9 Comparing average EPA score of individual faculty to the overall norm | I like the comparative to see where you are. I think that’s very useful just to see where you sit amongst your peers. Like I said, being an outlier, being in the middle doesn’t necessarily mean anything, but it stimulates thought and reflection as to where you might wanna put some effort into change. And yeah, so you’ve got the number you’ve provided. (Program leader 2) | |||
| 2.1.1.10 Individual faculty to national norm | What we have here is I’m comparing to USask Faculty, but how do I compare it to faculty, nationally, right? And what’s sorta the right mix, so are we an anomaly. Are we doing things right or more than right or less than right, so that would be interesting to me, as well, like not only how do I compare locally, but also like nationally, basically, right? At least taking the cohort of emergency physicians that are filling these out from outside of Saskatchewan or outside or our group. (Faculty member 3) | |||
| 2.2 By rotation | ||||
| 2.2.1 Prioritizing EPAs for faculty | And similarly with faculty we could look at that rotation and say, “Look, this is the rotation we really need to get this one observed. Can your faculty please target it?” (Program leader 1) | |||
| 2.3 By time | ||||
| 2.3.1 Date | I think that is helpful. That gives you a little bit more of the background and the ability to compare a certain time frame. It definitely gives you information on how you’re doing in terms of EPAs filling out and stuff. (Faculty member 4) | |||
| 2.3.2 Pre-evaluation, post-evaluation trend and comparison | And then almost like putting a marker, a date marker on here of like your filter could be since you received your evaluation, right? On this day. Like pre-evaluation, post-evaluation trend and comparison. (Program leader 1) | |||
| 2.3.3 Words filter (e.g. number of words threshold) | I don’t know if there’s a way to filter it. Like overall if you could do a global filter of all faculty and then put it like anybody who has less than 30 words per comment and then have those faculty come up and start there with targeted faculty development I think could be an option. … one way to stratify that would be okay anybody that’s done comments with less than 20 words, maybe I could look at those. They’re probably all gonna be lesser quality comments. Just based on what I’ve seen going through the data. And then I could have that list of those people and then I could target them individually. ‘Cause I think especially with having off-service faculty on the system, like I think there’s gonna be a plan in the future of doing detailed faculty development with our E.R. group and doing this on top of that, targeting off-service people as well I think would be super helpful. And that would be a quick way to filter it. (Program leader 2) | |||
| 2.4 By resident | ||||
| 2.4.1 Resident | Being able to sort by EPA, by resident you worked with, by the date, and by the rating, I think the EPA and the rating are gonna be more useful. ‘Cause you can look and see, okay, when did I give ones and twos? When did I give fours and fives? And how often am I doing that? And then specific if there’s an EPA that seems to be challenging for me, then I can focus in on that and see where my score ranges are. That would be useful. (Program leader 1) | |||
| 2.4.2 Resident Tier | I think it would be maybe even a breakdown of which tier of residents you evaluate, right? So, core versus end of residency versus early. (Faculty member 1) | |||
| 2.5 By assessment system | So, I’m just imagining something that could kind of show you some semblance of this, but also with the focus of which (EPAs) have you done more and less of relative to everyone else… It just gives you somewhere to focus, I think, too, to help them get that specific learning experience or to really seek it out and be on the lookout for it ‘cause there are gonna be rare ones that you could be like, ‘Oh, did we see it this shift? So, let’s make sure we go take it and take the opportunity to fill out an EPA on it.’ (Faculty Member 4) | |||
| 3. Accessible and clear reporting (Supplemental Data, Video 1) | ||||
| 3.1 Accessible Reporting | ||||
| 3.1.1 Exportable Report card | It’d be helpful to have this be downloadable as some sort of a report that could be given to faculty with context or discussed. (Faculty development expert 1) | |||
| 3.1.2 Frequency of report card | I think the ability to login is great, but for me, I am sadly living in a world where I need things pushed into my face, sometimes multiple times, so I think the Rob and Lindsay workshop was super helpful because it was like, “Here’s your report. Let’s talk about it.” So, at least having--I don’t think I need that now because I understand it and plus this session with you, looking at those extra things that have been added, but, now, I would be able to, if I had a report pushed at me, I would be like oh, okay, here’s my annual, bi-annual--maybe bi-annual’s probably fine. I feel like you want it frequently enough to look at it, but then if it’s coming out every month, it becomes something that you delete, you know what I mean? Definitely I think a push method would be better and somewhere between quarterly and annual, but I think monthly would be too much. (Faculty member 4) |
| 3.1.3 Faculty dashboard login | You could also track yourself in the way of, obviously, things you’re doing and how you’re comparing up to the group? ‘Cause that would--it’s your data right? This is probably shared with the Program Directors and probably yourself, basically, right? But, that would--yeah, that might be interesting to look at your--almost like your faculty score or whatever you’re calling this, basically. (Faculty member 3) |
| 3.2 Clear Reporting | ||||
| 3.2.1 Mouse-over explanations | And then, the other is, is it possible, when I mouse over 4.2, it would tell me what 4.2 is? Because when I look at this, I’m gonna--as a faculty, I’m gonna look at this and say, “Oh geez, I never fill out EPA blah, blah, blah,” right? What is that? And then, I could just mouse over the number 3.8 and see, oh okay, that’s--3.8, I forget. It’s like tox or obstetrical or whatever it is, but yeah okay, I can start filling more of that one out or whatever else, as opposed to having to go to a separate list. (Program Leader 1) |
| Legend: EPA: Entrustable Professional Activity, CCC: Competence Committee Chair, CBD: Competence By Design | ||||
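The filter proposed in subtheme 2.3.3 (surface faculty whose comments fall below a word-count threshold as candidates for targeted faculty development) is straightforward to express in code. This is a minimal sketch under stated assumptions: the function name, the dict-of-comment-lists input shape, and the default 30-word threshold (taken from the quotation) are illustrative, not the dashboard's actual implementation.

```python
# Hypothetical sketch of the global word-count filter from subtheme 2.3.3:
# list faculty whose mean words per narrative comment fall below a threshold,
# so program leaders can start targeted faculty development with them.

def flag_low_word_count(comments_by_faculty, threshold=30):
    """Return (sorted) faculty whose mean words per comment is below `threshold`."""
    flagged = []
    for faculty, comments in comments_by_faculty.items():
        if not comments:
            continue  # no comments yet; nothing to judge
        mean_words = sum(len(c.split()) for c in comments) / len(comments)
        if mean_words < threshold:
            flagged.append(faculty)
    return sorted(flagged)

data = {
    "Faculty A": ["Strong shift overall, keep reading around cases you see"],
    "Faculty B": ["Good job", "Fine"],
}
print(flag_low_word_count(data))  # both averages fall below the 30-word default
```

As Faculty member 3 notes in subtheme 1.2.3, word count is a proxy rather than a measure of feedback quality, so a filter like this is best used to prioritize review, not to grade faculty.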