Valy Fontil, Kate Radcliffe, Helena C Lyson, Neda Ratanawongsa, Courtney Lyles, Delphine Tuot, Kaeli Yuen, Urmimala Sarkar.
Abstract
OBJECTIVES: Usable tools to support individual primary care clinicians in their diagnostic processes could help to reduce preventable harm from diagnostic errors. We conducted a formative study with primary care providers to identify key requisites to optimize the acceptability of 1 online collective intelligence platform (Human Diagnosis Project; Human Dx).
Keywords: clinical reasoning; collective intelligence; diagnostic accuracy; diagnostic error; human diagnosis project
Year: 2019 PMID: 31984344 PMCID: PMC6952011 DOI: 10.1093/jamiaopen/ooy058
Source DB: PubMed Journal: JAMIA Open ISSN: 2574-2531
Figure 1. Screenshot of a collective intelligence output from the Human Dx platform for a clinical case. The left column displays the information entered by the participating clinician. The right column displays collective intelligence output with the differential diagnosis, plan, and rationales.
Figure 2. We used semi-structured debrief interviews to conduct acceptability and early usability testing of the collective intelligence online platform. This assessment was completed in two phases of testing using standardized clinical cases (phase 1) and real-world clinical cases from the participant's own practice (phase 2).
Characteristics and practice settings of participating clinicians
| Description | No. of participants |
|---|---|
| Participant training | |
| Internal medicine | 8 |
| Family medicine | 3 |
| Nurse practitioner | 1 |
| Physician assistant | 1 |
| Participant location | |
| San Francisco | 8 |
| Saipan | 2 |
| Georgia | 1 |
| New Mexico | 1 |
| San Diego | 1 |
| Years in practice | |
| Less than 5 years | 6 |
| 5–10 years | 2 |
| 10–20 years | 1 |
| More than 20 years | 2 |
| Clinic setting | |
| Urban health clinic | 8 |
| Rural health clinic | 3 |
| Medium-sized city solo practice | 1 |
| Medium-sized city group practice | 1 |
| Associated with a university | 2 |
| Associated with a hospital | 3 |
| Safety-net clinic | 12 |
| Access to specialty consultation | |
| Electronic consultation system embedded in Electronic Medical Record | 9 |
| Telemedicine for certain subspecialists, refer to the nearby island of Guam for in-person consultation | 2 |
| Telemedicine and in-person specialty access | 1 |
| No access to electronic specialty consultation | 1 |
Definitions for key domains of acceptability for technology-enabled collective intelligence
| Domains of acceptability | Definitions |
|---|---|
| 1. Perceived usefulness | The degree to which participants felt that collective intelligence added value to their work as a provider, or was helpful in diagnostic thinking or decision-making for a particular case |
| 2. Perceived accuracy | The degree to which users found the list of diagnoses and recommendations provided in the collective intelligence output reasonable, accurate, and safe |
| 3. Transparent quality assurance | The degree to which the technology platform provides information on the qualifications and expertise of its collective intelligence contributors and on whether that expertise is relevant to a given case |
| 4. Trust | End-user belief and confidence that the technology platform is legitimate, reliable, and able to consistently provide high-quality, accurate output to help clinical decision-making |
| 5. Ease of use | The facility with which the user can enter cases into the platform and the anticipated efficiency of incorporating the process of case entry within the user’s routine workflow |
Note: We derived these definitions in part from qualitative analysis of the semi-structured interviews with study participants.
Summarized keys to acceptability and related potential pitfalls to avoid based on analysis of interviews
| Keys to acceptability | Potential pitfalls |
|---|---|
| Trust in quality of contributors and accuracy of the collective intelligence output | Avoid unreasonable, inappropriate, or irrelevant recommendations |
| | Avoid contributors that can be perceived as unqualified (overall or for a specific case) |
| Importance of cognitive contribution to provider clinical thinking or decision-making | Avoid output that fails to enhance users’ thinking process or help with next steps in diagnostic decision-making |
| Importance of timeliness of content | Delayed feedback may be difficult to incorporate in usual workflow for diagnostic decisions |
| Education on best use cases | Insufficient guidance or training on appropriate target use-case scenarios can lead to infrequent or inappropriate use of a collective intelligence technology platform |
| Ease of use | Avoid cumbersome and time-intensive user procedures |
Note: This table summarizes key requisites for the development of clinician-facing collective intelligence technology platforms, based on the themes from our qualitative analysis outlined in Table 2.
Primary care providers’ reactions regarding the acceptability and potential usability of technology-enabled collective opinion
| Domains of acceptability | Themes | Quotes |
|---|---|---|
| Perceived usefulness | Providers expect concrete cognitive contribution to clinical reasoning and decision-making as a utility. | “I think that if it can be honed and improved, especially in terms of providing multiple next steps or even providing more clinical decision-making support and more information about why certain providers made that certain decision, I think then I would say it would probably be moderately to very helpful.” |
| | | “Where this tool would be really helpful is if somebody’s able to display how they solved the problem…I would just make it so you can click on the diagnosis and read the opinions of the five people with the evidence.” |
| | | “The way that this helps me is that it makes me look into areas that I previously may not have looked into.” |
| | | “I think that’s why I would consider using this tool just because it can help me broaden my thinking process about things or even trigger new ideas about the case. So that is what I’m looking for.” |
| | Affirmation of users’ current diagnostic thinking is a valuable contribution to boosting confidence in decision-making. | “Just having the differential diagnosis there all within the lines that I was thinking about. It influenced me keeping with the work-up…to order the test that I was thinking about. It definitely influenced it knowing that other people were thinking of my same diagnoses before I get the work-up.” |
| | | “So, getting all of that [output] reassures me that she maybe doesn’t need the $300 000 workup and the anxiety is playing a big role in this. When I started to see the results coming in, I was like, ‘Oh, I wonder,’ because she does have findings but there’s also this element of anxiety. So, seeing that [output] would make me more confident in addressing the anxiety right away before doing the million-dollar workup.” |
| | | “I think in some ways I felt satisfied that it seemed like there were some agreements with what I was thinking.” |
| | Receiving the collective intelligence in a timely manner is key to usefulness. | “More rapid turnaround time would make [the tool] a lot more useful…I mean, before the patient leaves the office.” |
| | | “I think if it was like instantaneous or like in 30 minutes or something then I think it would be much more useful.” |
| | | “Ideally if [the results] were within the same business day or within the same half day would be nice.” |
| | Ideal use-cases and clinical settings may be important to consider. | “I think [using the tool] is going to be [for] the cases where I’m just kind of stumped and I don’t know who to ask, something really complex that doesn’t have a well-defined problem for one specialist.” |
| | | “I think it adds value specifically to cases where you think that you’ve done all the necessary work up and you’re still not entirely sure. Especially in this rural area that I’m working in, where I don’t quite have a lot of consultants to ask or a lot of available colleagues to discuss cases about, that actual reassurance is valuable.” |
| | | “I think without having somebody to bounce ideas off of, it is really nice just to have reassurance that I’m on the right track even for that. It’s useful.” |
| Perceived accuracy | Providing output that is consistently reasonable, accurate, and safe is paramount to establish and maintain trust. | “Regardless of everything, it has to be accurate. If it’s not accurate and it’s not fast, then I’d have resources that already exist that are much better.” |
| | | “I think that if I trusted that the information I’m getting is consistently accurate and seems to- even ring true based on the cases, I think that then I would feel more confident in using it.” |
| | | “I’m beginning to lose confidence…because I don’t really know where you get—[the output] just didn’t make any sense at all.” |
| Transparent quality assurance | Uncertainty about the competence or relevant expertise of contributors on the platform can erode trust. | “I worry about who these people are…are these just volunteers? Are they paid? Who’s sitting there and doing this?” |
| | | “It could be nice to see if somebody is always suggesting the right thing and then the accuracy of that…If you had a way of designating one of those contributors who was getting the correct diagnoses over 80% of the time. I think that would be helpful.” |
| | | “I think it would be a little bit more settling to know that if there’s somebody who is a specialist in a certain area who is eyeballing certain cases. I’m just not very convinced by seeing a dermatologist answer a question about chest pain. That really doesn’t help persuade me.” |
| | | “I like that [the tool] tells you whether [the consultants] are like PCPs or dermatologists or like the surgeon. I think that breakdown is helpful just cause like for me I’m coming from a PCP background so I maybe would, depending on the case, put more weight on who’s suggesting what.” |
| Trust | Consistent accuracy and quality assurance are essential to engender and maintain end-user trust. | “I did wonder…I don’t know who these PCPs are. I don’t know if they’re well-qualified. I don’t know if I trust their opinions. So I do think it’s possible to make the wrong decisions based on this app.” |
| | | “I don’t trust [the tool] because I don’t know who [the contributors] are. I don’t know what their training is, I don’t know what evidence they’re using to make these decisions.” |
| | | “I think that if I trusted that the information I’m getting is consistently accurate and seems to- even ring true based on the cases, I think that then I would feel more confident in using it.” |
| Ease of use | Inputting data into the platform should be straightforward and output should be understandable. | “I don’t know how you input all these data but it is—it looks a little bit maybe tedious. If you’re checking through chest pain, how long etc. and you’ve already typed that into your note, you’re not going to want to type it in somewhere else or enter it somewhere else so I can see that being a barrier.” |
| | | “The tool itself could be a little easier to use and a little differently organized… Ease of input is a huge factor. If it’s going to take me 10, 15 minutes to put the information in, I’ll most likely not use it.” |
Figure 3. Proposed modified Technology Acceptance Model for collective intelligence technology, with trust added as a potential contributing factor to perceived usefulness and a key factor for acceptability.