Corine S Meppelink1, Hanneke Hendriks2, Damian Trilling2, Julia C M van Weert2, Anqi Shao3, Eline S Smit2. 1. Amsterdam School of Communication Research, University of Amsterdam, Amsterdam, the Netherlands. Electronic address: c.s.meppelink@uva.nl. 2. Amsterdam School of Communication Research, University of Amsterdam, Amsterdam, the Netherlands. 3. Amsterdam School of Communication Research, University of Amsterdam, Amsterdam, the Netherlands; Life Sciences Communication, University of Wisconsin-Madison, United States.
Abstract
OBJECTIVE: To investigate the applicability of supervised machine learning (SML) to classify health-related webpages as 'reliable' or 'unreliable' in an automated way. METHODS: We collected the textual content of 468 different Dutch webpages about early childhood vaccination. Webpages were manually coded as 'reliable' or 'unreliable' based on their alignment with evidence-based vaccination guidelines. Four SML models were trained on part of the data, while the remaining data were used for model testing. RESULTS: All models were successful in the automated identification of unreliable (F1 scores: 0.54-0.86) and reliable information (F1 scores: 0.82-0.91). Typical words for unreliable information were 'dr', 'immune system', and 'vaccine damage', whereas 'measles', 'child', and 'immunization rate' were frequent in reliable information. Our best-performing model was also successful in terms of out-of-sample prediction, tested on a dataset about HPV vaccination. CONCLUSION: Automated classification of online content in terms of reliability, using basic classifiers, performs well and is particularly useful for identifying reliable information. PRACTICE IMPLICATIONS: The classifiers can serve as a starting point for developing more complex classifiers, as well as warning tools that can help people evaluate the content they encounter online.
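The workflow described in the abstract (labeled webpages, a train/test split, and per-class F1 reporting for 'reliable' vs. 'unreliable') can be sketched as a minimal supervised text-classification pipeline. This is an illustrative sketch only: the study's actual features, preprocessing, and four models are not specified here, and the toy corpus, labels, and the TF-IDF + Naive Bayes choice below are assumptions, not the authors' method.

```python
# Minimal sketch of a supervised reliability classifier (illustrative only).
# Assumptions: scikit-learn, TF-IDF features, Multinomial Naive Bayes, and an
# invented toy corpus; none of these are taken from the paper itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Toy stand-in corpus: 1 = reliable, 0 = unreliable (labels are invented,
# wording loosely echoes the typical words reported in the abstract).
texts = [
    "measles outbreaks fall when childhood immunization rates stay high",
    "national guidelines recommend vaccinating children against measles",
    "immunization protects the child and the wider community",
    "vaccine damage overwhelms the immune system says dr online",
    "dr claims the immune system is harmed by vaccine damage",
    "hidden vaccine damage weakens the immune system of every child",
] * 10  # repeated so the split has enough examples per class
labels = [1, 1, 1, 0, 0, 0] * 10

# Hold out part of the data for testing, as in the study's setup.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels
)

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Report F1 per class, mirroring how the abstract reports reliable
# and unreliable information separately.
f1_reliable = f1_score(y_test, pred, pos_label=1)
f1_unreliable = f1_score(y_test, pred, pos_label=0)
print(f"F1 reliable: {f1_reliable:.2f}, F1 unreliable: {f1_unreliable:.2f}")
```

On real webpage text the two classes overlap far more than in this toy corpus, which is why the reported F1 scores vary (0.54-0.86 for unreliable content) rather than being near-perfect.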
Authors: Israel Júnior Borges do Nascimento; Ana Beatriz Pizarro; Jussara M Almeida; Natasha Azzopardi-Muscat; Marcos André Gonçalves; Maria Björklund; David Novillo-Ortiz Journal: Bull World Health Organ Date: 2022-06-30 Impact factor: 13.831