Ryan Yanqi Tan, Alyssa Elyn Pua, Li Lian Wong, Kevin Yi-Lwern Yap.
Abstract
BACKGROUND: Video-sharing platforms are a common source of health information on topics such as Coronavirus Disease 2019 (COVID-19) vaccines. It is important that they provide good-quality, evidence-based information. However, to date, the quality of information about COVID-19 vaccines on video-sharing platforms has not been established.
Keywords: COVID-19 vaccines; Facebook Watch; Information quality; TikTok; Video-sharing platforms; YouTube
Year: 2021 PMID: 34568867 PMCID: PMC8243644 DOI: 10.1016/j.rcsop.2021.100035
Source DB: PubMed Journal: Explor Res Clin Soc Pharm ISSN: 2667-2766
Summary of quality evaluation tools for assessing online health information and videos.
Columns are grouped as follows: HONcode, JAMA, the DISCERN instrument, LIDA, QUEST and QCSS are evaluation tools for online health information (internet); MICI, CSS, the Usefulness Score and the Customised usefulness score are disease-specific evaluation tools for videos; VIQI and PEMAT A/V are non-disease-specific evaluation tools for videos.
| Criterion | HONcode | JAMA | DISCERN instrument | LIDA | QUEST | QCSS | MICI | CSS | Usefulness Score | Customised usefulness score | VIQI | PEMAT A/V |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Authorship (Provides author name and qualification) | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Complementarity (Supports, not replace the role of a physician) | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Confidentiality (Respects user privacy) | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Attribution of sources (Sources are cited) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Justifiability (Balanced and objective claims) | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Transparency (Provide contact details) | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Financial disclosure (Funding details are provided) | ✓ | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Advertising (Distinguish advertising and editorial content) | ✓ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Currency of content (Dates of the information cited are provided) | ✗ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Purpose of the site (What the site is about and what it is meant to cover) | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Links to other resources (Provide additional sources of information) | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Relevance (Is the information relevant to the user) | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| User's opinion of the overall quality of the publication | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Address areas of uncertainty | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Accessibility of site (Is the information accessible to those who need it) | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Usability (Can users make sense of the site) | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Prevalence of disease | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ |
| Transmission of disease | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ |
| Signs & Symptoms of disease | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ |
| Screening/Testing | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Treatment/Outcome | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ |
| Cause of condition | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ |
| Diagnosis of condition | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ |
| Recovery from condition | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
| Risk Factors | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| Prevention | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ |
| Information accuracy of video | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ |
| Understandability (Degree that users can explain the content of the video) | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ |
| Actionability (Whether users can identify actions to take after watching the video) | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Precision (level of coherence between video title and content) | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✗ |
✓ Criterion is evaluated by the tool, ✗ Criterion is not evaluated by the tool.
HONcode: Health On the Net code; JAMA: Journal of American Medical Association benchmark; LIDA: Minervation Validation Instrument; QUEST: Quality Evaluation Scoring Tool; QCSS: Quality Component Scoring System; MICI: Medical Information Content Index; CSS: COVID-19 Specific Score; VIQI: Video Information and Quality Index; PEMAT A/V: Patient Education Materials Assessment Tool for Audio-Visual Materials
Fig. 1. Search methodology for YouTube, Facebook Watch and TikTok videos.
Definition of the quality parameters used in this study.
| Quality Parameter | Definition | Measurement Component/Score | Adapted from |
|---|---|---|---|
| Understandability | Consumers of diverse backgrounds and varying levels of health literacy can process and explain the key messages of the videos | 0 = Disagree, 1 = Agree | Patient Education Materials Assessment Tool for Audio-visual Materials (PEMAT A/V) |
| Actionability | Consumers of diverse backgrounds and varying levels of health literacy can identify what they can do based on the information presented in the videos | 0 = Disagree, 1 = Agree | Patient Education Materials Assessment Tool for Audio-visual Materials (PEMAT A/V) |
| Accuracy | Information in the videos is “scientifically correct” | 0 = all points inaccurate, 1 = points partially accurate, 2 = all points accurate | Criteria based on FAQs from the World Health Organization (WHO), US Centers for Disease Control and Prevention (CDC), Singapore Ministry of Health (MOH), UK National Health Service (NHS), and the Australian Government Department of Health |
| Comprehensiveness | The extent to which the criteria obtained from the Frequently Asked Questions (FAQs) are described in the videos | 0 = 33% or fewer of points mentioned, 1 = 34–67% of points mentioned, 2 = 68% or more of points mentioned | Criteria based on FAQs from the WHO, CDC, MOH, NHS, and the Australian Government Department of Health |
| Reliability | The extent to which the videos can be trusted as a source of information on COVID-19 vaccines | 2- or 3-point Likert scale, with a higher score indicating better reliability | DISCERN Instrument and Quality Component Scoring System |
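As a rough illustration of how raw domain scores of this kind translate into the percentage scores reported in the tables below, here is a minimal Python sketch. The item counts, maximum scores and the composite formula (total raw score as a percentage of the total maximum) are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of converting raw domain scores to percentage scores and a
# composite score. All numbers, item counts and the composite formula below
# are hypothetical assumptions for illustration, not the authors' actual data
# or method.
from typing import Dict, Tuple

# Hypothetical (raw score, maximum possible score) per quality domain.
raw_scores: Dict[str, Tuple[float, float]] = {
    "accuracy": (14, 16),          # e.g. 8 FAQ-based criteria scored 0-2
    "comprehensiveness": (2, 16),
    "reliability": (15, 40),       # adapted DISCERN/QCSS items
    "understandability": (8, 10),  # PEMAT A/V items scored 0-1
    "actionability": (0, 6),
}

# Each domain expressed as a percentage of its maximum.
percentages = {k: 100 * raw / maximum for k, (raw, maximum) in raw_scores.items()}

# Composite expressed as a percentage of the total maximum across all domains
# (an assumed aggregation, not necessarily the one used in the study).
total_raw = sum(raw for raw, _ in raw_scores.values())
total_max = sum(maximum for _, maximum in raw_scores.values())
composite = 100 * total_raw / total_max

for domain, pct in percentages.items():
    print(f"{domain:>18}: {pct:5.1f}%")
print(f"{'composite':>18}: {composite:5.1f}%")
```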
Video parameters of the videos on YouTube, Facebook Watch and TikTok.
| Platform | Median number of views (IQR) | Median number of likes (IQR) | Median duration (mins) (IQR) | Date of upload range |
|---|---|---|---|---|
| YouTube (n = 35) | 93,137 (7,786–358,379) | 737 (51–5,500) | 4.2 (2.7–6.8) | 15 Mar 2020–10 Feb 2021 |
| Facebook Watch (n = 23) | 428,151 (23,857–1,948,694) | 2,400 (201–103,500) | 1.7 (1.3–2.7) | 03 Dec 2020–28 Feb 2021 |
| TikTok (n = 14) | 96,000 (4,610–682,225) | 1,848 (438–43,113) | 0.9 (0.6–1.0) | 12 Nov 2020–02 Mar 2021 |
| Total (n = 72) | 139,603 (7,786–789,228) | 1,264 (94–11,300) | 2.3 (1.2–5.1) | 15 Mar 2020–02 Mar 2021 |
| p-value | 0.29 | 0.18 | <0.001 | – |
p < 0.05 based on Kruskal-Wallis test.
Median durations between YouTube and Facebook Watch (p < 0.001), YouTube and TikTok (p < 0.001) and Facebook Watch and TikTok (p < 0.001) were statistically significant based on Wilcoxon Rank Sum test with Bonferroni adjustment.
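The comparisons reported above (a Kruskal-Wallis test across the three platforms, followed by pairwise Wilcoxon rank-sum tests with Bonferroni adjustment) can be reproduced on per-video data with standard SciPy routines. The snippet below is a minimal sketch using made-up duration values; the variable names and numbers are illustrative only, not the study's data.

```python
# A minimal sketch of the statistical comparisons described above, assuming
# per-video measurements (e.g. durations in minutes) are available as lists.
# The example values are hypothetical.
from itertools import combinations

from scipy import stats

# Hypothetical per-video durations (minutes) grouped by platform.
durations = {
    "YouTube": [4.2, 2.7, 6.8, 5.1, 3.9],
    "Facebook Watch": [1.7, 1.3, 2.7, 2.0, 1.5],
    "TikTok": [0.9, 0.6, 1.0, 0.8, 0.7],
}

# Kruskal-Wallis test across all three platforms.
h_stat, p_overall = stats.kruskal(*durations.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_overall:.4f}")

# Pairwise Wilcoxon rank-sum tests with Bonferroni adjustment.
pairs = list(combinations(durations, 2))
for a, b in pairs:
    _, p_raw = stats.ranksums(durations[a], durations[b])
    p_adj = min(p_raw * len(pairs), 1.0)  # Bonferroni correction
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```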
Median percentage scores for each quality domain and composite scores among video-sharing platforms, videos with different author qualification grading, and “general” and “specific” videos.
| Group | Median Accuracy Score % (IQR) | Median Comprehensiveness Score % (IQR) | Median Reliability Score % (IQR) | Median Understandability Score % (IQR) | Median Actionability Score % (IQR) | Median Composite Score % (IQR) |
|---|---|---|---|---|---|---|
| YouTube | 100 (87.5–100) | 12.5 (6.3–18.0) | 37.5 (32.5–43.8) | 80.0 (71.1–81.8) | 0 (0–16.7) | 36.8 (30.7–43.0) |
| Facebook Watch | 100 (86.3–100) | 6.3 (2.3–12.5) | 35.0 (35.0–45.0) | 80.0 (70.0–80.5) | 0 (0–41.7) | 32.4 (25.9–37.3) |
| TikTok | 100 (75.6–100) | 6.3 (1.2–7.8) | 35.0 (30.0–35.0) | 96.9 (88.2–100) | 0 (0–0) | 27.5 (24.7–31.5) |
| p-value | 0.76 | 0.004 | 0.078 | <0.001 | 0.43 | 0.001 |
| Level of agreement among reviewers | W = 0.880 | W = 0.976 | W = 0.904 | W = 0.972 | W = 0.978 | W = 0.970 |
| Tier One ( | 100 (94.8–100) | 9.4 (2.0–13.7) | 40.0 (35.0–45.0) | 80.0 (71.6–80.7) | 0 (0–66.7) | 36.0 (31.3–39.5) |
| Tier Two ( | 100 (89.6–100) | 6.3 (3.1–13.3) | 35.0 (30.0–37.5) | 83.3 (62.7–96.9) | 0 (0–16.7) | 35.2 (29.0–42.5) |
| Tier Three ( | 87.5 (77.5–100) | 9.4 (6.3–15.6) | 37.5 (27.5–40.0) | 81.8 (79.5–86.4) | 0 (0–0) | 31.6 (25.7–36.5) |
| p-value | 0.12 | 0.56 | 0.042 | 0.074 | 0.13 | 0.29 |
| General (n = 41) | 100 (85.7–100) | 6.3 (1.6–14.1) | 35.0 (32.5–42.5) | 81.8 (77.3–92.8) | 0 (0–50.0) | 32.1 (25.7–37.8) |
| Specific (n = 31) | 100 (87.5–100) | 9.4 (6.3–14.8) | 37.5 (33.8–45.0) | 77.3 (69.2–82.6) | 0 (0–0) | 35.1 (30.3–41.2) |
| p-value | 0.67 | 0.81 | 0.35 | 0.086 | 0.013 | 0.19 |
p < 0.05 based on Kruskal-Wallis test.
p < 0.05 based on Wilcoxon Rank Sum test.
Comprehensiveness scores of YouTube were significantly higher than Facebook Watch (p = 0.015) and TikTok (p = 0.004) based on Wilcoxon Rank Sum test with Bonferroni adjustment.
Understandability scores of TikTok were significantly higher than YouTube (p = 0.001) and Facebook Watch (p < 0.001) based on Wilcoxon Rank Sum test with Bonferroni adjustment.
Composite score of YouTube was significantly higher than TikTok (p = 0.001) based on Wilcoxon Rank Sum test with Bonferroni adjustment.
Inter-rater reliability based on Kendall's coefficient of concordance (p < 0.05).
No statistically significant differences were found between tiers based on the Wilcoxon Rank Sum test with Bonferroni adjustment.
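Inter-rater agreement in the table above is reported as Kendall's coefficient of concordance (W). Below is a minimal sketch of how W can be computed for a raters-by-videos score matrix; the function and the example ratings are illustrative (no tie correction is applied) and are not the authors' implementation or data.

```python
# A minimal sketch of Kendall's coefficient of concordance (W), the
# inter-rater agreement statistic reported above. Rows of the matrix are
# raters, columns are videos; all values are hypothetical.
import numpy as np
from scipy import stats

def kendalls_w(ratings: np.ndarray) -> float:
    """Kendall's W for a (raters x items) score matrix, without tie correction."""
    m, n = ratings.shape
    # Rank each rater's scores across the items.
    ranks = np.apply_along_axis(stats.rankdata, 1, ratings)
    # Sum of ranks per item and squared deviation from the mean rank sum.
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    # W = 12S / (m^2 (n^3 - n))
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical composite scores from 3 reviewers for 6 videos.
scores = np.array([
    [36.8, 32.4, 27.5, 35.1, 31.6, 40.2],
    [37.0, 31.9, 28.0, 34.8, 32.0, 39.5],
    [36.5, 33.0, 27.0, 35.5, 30.9, 40.0],
])
print(f"Kendall's W = {kendalls_w(scores):.3f}")  # values near 1 indicate strong agreement
```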