Abstract
Background: The COVID-19 pandemic has changed the way health information is distributed through online platforms. These platforms have played a significant role in informing patients and the public, reshaping how health knowledge circulates in the virtual world. At the same time, there are growing concerns that much of this information is not credible, harming patient health outcomes, costing human lives, and wasting tremendous resources. With the increasing use of online platforms, patients and the public require new models for learning and sharing medical knowledge, and they need to be empowered with strategies to navigate disinformation online. Methods and Design: To meet the urgent need to combat health misinformation, the research team proposes a structured approach to develop a quality benchmark: an evidence-based tool that identifies and addresses the determinants of online health information reliability. The specific methods to develop the intervention are: (1) systematic reviews: two comprehensive systematic reviews to understand the current state of the quality of online health information and to identify research gaps; (2) content analysis: development of a conceptual framework, based on established and complementary knowledge translation approaches, for analyzing existing quality assessment tools and drafting a unique set of quality domains; (3) focus groups: multiple focus groups with diverse patients/the public and health information providers to test the acceptability and usability of the quality domains; (4) development and evaluation: a unique set of determinants of reliability will be finalized along with a preferred scoring classification. These items will be used to develop and validate a quality benchmark for assessing the quality of online health information. Expected Outcomes: This theory-informed, multi-phase project will generate new knowledge intended to inform the development of a patient-friendly quality benchmark.
This benchmark will inform best practices and policies for disseminating reliable web-based health information, thereby reducing disparities in access to health knowledge and combating misinformation online. In addition, we envision that the final product can serve as a gold standard for developing similar interventions for specific groups of patients or populations.
Keywords: assessment; benchmark; credibility; health information; online; protocol
Year: 2021 PMID: 35005698 PMCID: PMC8732749 DOI: 10.3389/fdgth.2021.801204
Source DB: PubMed Journal: Front Digit Health ISSN: 2673-253X
Figure 1. Methods to develop quality benchmarks for use by patients and the public.
Conceptual framework for content analysis.
| Step | Description |
|---|---|
| Step 1 | Identify a list of domains in each tool |
| Step 2 | Group the domains that overlap across tools into specific categories reflecting core themes. For example, domains with similar definitions or sub-themes (e.g., accurate, reliable) should be categorized under a single theme such as Accuracy. |
| Step 3 | Identify domains that are unique or lack specificity (i.e., do not overlap across tools). |
| Step 4 | Systematically evaluate psychometric properties or validity testing when provided. |
| Step 5 | Explore heterogeneity (inconsistency across tools) in terms of population, domain type, number of items, scoring criteria, and format. |
| Step 6 | Present convergence of themes into a final list of unique and parsimonious domains. |
| Step 7 | Determine the scoring criteria with intuitive and precise interpretation. |
Guideline for the Delphi method.
| Round | Description |
|---|---|
| Round 0 | A minimum of 2 research team members will generate a list of domains and accompanying questions from the selected quality assessment tools for the subsequent steps. |
| Round 1 | A minimum of 2 team members (presumably the lead researcher and a KT expert) will analyze the contents of each tool and compare them across tools. Overlapping domains will be classified as "common" domains; a similar method will be used to identify domains that lack specificity, which will be classified as "unique" domains. Disagreements will be resolved through discussion. |
| Round 2 | A minimum of 3 independent reviewers (presumably the lead researcher, a clinician, and a KT expert) will evaluate the set of "common" and "unique" domains along with their questions. The facilitator will compile their input, summarizing the reviewers' perspectives and recommendations based on their understanding of the stakeholders' health information needs. |
| Round 3 | An online group discussion will be conducted in which the facilitator presents the independent reviews to the panel members. The members will have an opportunity to compare their responses with those of the other independent reviewers. Following an active discussion, the facilitator will draft a list of domains and scoring criteria to present to the experts and to the patients and public. |
| Round 4 | A group of experts comprising clinicians, health information providers, KT experts, and the lead researcher will be presented with the domains and scoring criteria from round 3. They will review the list of domains, along with the questions and scoring criteria, for usability and relevance to the stakeholders. Finally, the experts will be asked to approve or modify the existing list to be used in round 6. This round will be conducted online. |
| Round 5 | Different groups of patients and the public will meet online or in person in focus groups organized by sub-category (e.g., age, race, gender, health condition), where the facilitator will present the domains along with their descriptions, questions, and scoring criteria. The participants will then discuss their understanding of the domains and decide on the final list based on their health information needs and preferences. Finally, the facilitator will produce a report on the participants' responses, including their preferences, for the final round. |
| Round 6 | The research team, led by the principal investigator, will review the responses and recommendations of both the experts and the patients and public. A final list of domains and scoring criteria will be established in this step. A representative group of patients and the public will then validate the final list using the "member checking" technique. |
| Deliverable | A final list of quality domains and scoring criteria for the quality benchmark. |