| Literature DB >> 27638490 |
Gowri Gopalakrishna1, Mariska M G Leeflang1, Clare Davenport2, Andrea Juliana Sanabria3, Pablo Alonso-Coello3, Kirsten McCaffery4, Patrick Bossuyt1, Miranda W Langendam1.
Abstract
OBJECTIVES: Development of medical test guidelines differs from intervention guideline development. These differences can pose unique challenges in building evidence-based recommendations to guide clinical practice. The aim of our study was to better understand these challenges, explore the reasons behind them and identify possible solutions. SETTING AND PARTICIPANTS: In this qualitative study, we conducted in-depth interviews between February 2012 and April 2013 with a convenience sample of 17 European guideline developers experienced in medical test guideline development. OUTCOMES MEASURED: We used framework analysis with deductive and inductive approaches to generate themes from the interviews. We kept interpretation grounded in the data.
Keywords: QUALITATIVE RESEARCH; guideline development; in depth interviews; medical education; medical tests; test accuracy
Year: 2016 PMID: 27638490 PMCID: PMC5030557 DOI: 10.1136/bmjopen-2015-010549
Source DB: PubMed Journal: BMJ Open ISSN: 2044-6055 Impact factor: 2.692
Overview of characteristics of the 17 interviewees
| Category | Details (n) |
|---|---|
| Type of guideline group (n) | Institutional (3) |
| Countries of interviewees (n) | UK (6), Germany (1), Belgium (2), the Netherlands (3), USA (1), Spain (2), Australia (1), Finland (1) |
| Size of guideline development group (range) | 10–20 panel members |
| Areas interviewees have developed medical test guidelines for | Paediatrics, mental health, women's health, point-of-care tests, oncology, acute pain, laboratory medicine, celiac disease, diabetes, tuberculosis |
| Role of interviewee in guideline development group (n) | Methodologist (17) |
| Interviewed face to face or via telephone (n) | Face to face (11) |
Summary of key themes and suggested solutions/future areas for development
| Impacted area | Key message | Representative quote (ID) | Suggested solutions/future areas for development (when applicable) |
|---|---|---|---|
| Guideline development stages | | | |
| Scoping | Scoping for diagnostic guidelines is more extensive and resource intensive than for intervention guidelines. | “It takes a fair chunk of an analyst's time over several months to go through scoping, yes. It's absolutely critical that the problem is well defined…to understand exactly what all the ins and outs of the problem are and it's not something you just throw together. It's one of the things that makes diagnostics different. It requires a vastly, a more complicated problem definition phase that you would typically have for treatment.” (ID 1) | |
| Key question formulation and test-treatment pathway | PICO format for question formulation is not very useful for diagnostics. The panel needs to be educated and trained on how to develop focused questions that include patient outcomes. | “PICO is not very relevant for diagnostic questions…Educating the panel on test's downstream consequences helps define exact questions to be answered.” (ID 5) | A test-treatment pathway can help panelists develop focused questions that are patient outcome-centred, but the awareness of the panel on its importance needs to be raised and it requires resource commitment in terms of time, money and training. |
| Searching and synthesising the evidence | Search filters are not well developed for test accuracy studies; meta-analysis is often complex due to complexity of the methods and the heterogeneous nature of test accuracy studies. | “Yes, we have a lot of them that show very heterogeneous data. That indicates there are a lot of things still to be done and that we should be very cautious of single studies and drawing conclusions.” (ID 10) | We need good search filters for test accuracy studies, training and more explicit guidance on meta-analysis methods. There is a need for better quality primary studies on test accuracy for meaningful data syntheses to occur. |
| Types of outcomes and evidence | Resource is a major consideration as to whether the panel includes outcomes other than test accuracy. This is compounded by the lack of availability of such data. | “Define the budget, get another team to bring the resource … I can't just extract diagnostic test accuracy studies. It's not a complete enough picture.” (ID 11) | Inclusion of qualitative data and/or methods (eg, Delphi method and focus groups) should be explored as alternative ways to include patient outcome-related evidence in a structured way in the guideline process. |
| Making recommendations | Expert opinion is important, but in the absence of good quality evidence it can make the process unstructured, non-transparent and political. Views conflict on the usefulness of modelling to overcome this lack of evidence. | “There's a discussion about the benefits and harms, about resources and about patient values and preferences. Do we know those? No. Again, people give you their opinions about it, but that's the best we can do at this point.” (ID 11) | Delphi processes, focus groups or the use of modelling could make the process more systematic and transparent, but there are contrasting views on this. |
| Awareness, education and training | | | |
| Within the guideline panel | Educating the guideline panel about test accuracy statistics and how test accuracy can impact a range of patient outcomes prior to the start of guideline development is crucial. | “The panel finds it very hard to make a choice as to when high sensitivity is important and when is high specificity important. They do not understand the consequences of high sensitivity and low specificity … hence cannot guide the methodologist either on what are important characteristics for the tests.” (ID 3) | Guideline panels should consider investing, prior to starting guideline development, in training panelists on test accuracy and downstream test consequences. This can take the form of developing a test-treatment pathway, for example. |
| Outside the guideline panel | General medical education of doctors in test accuracy was seen as inadequate. | “Most attention goes to intervention studies in journals and in guidelines normally…in the education of medical professionals there is less focus on diagnostic accuracy. They're not used to it.” (ID 3) | Having a regulatory framework that recognises the importance of tests' downstream consequences can help bring the needed attention to medical test evaluation at several levels that were identified as lacking. |
Figure 1. Challenges guideline developers face are interconnected among the domains of methodological issues, resource limitations and the need for awareness and education.