Aaron Chuey, Mika Asaba, Sophie Bridgers, Brandon Carrillo, Griffin Dietz, Teresa Garcia, Julia A Leonard, Shari Liu, Megan Merrick, Samaher Radwan, Jessa Stegall, Natalia Velez, Brandon Woo, Yang Wu, Xi J Zhou, Michael C Frank, Hyowon Gweon.
Abstract
Online data collection methods are expanding the ease and accessibility of developmental research for researchers and participants alike. While the popularity of these methods among developmental scientists has soared during the COVID-19 pandemic, their potential goes beyond providing a means for safe, socially distanced data collection. In particular, advances in video-conferencing software have enabled researchers to engage in face-to-face interactions with participants from nearly any location at any time. Because these methods are relatively new, however, many researchers remain uncertain about the differences among available approaches as well as the validity of online methods more broadly. In this article, we aim to address both issues with a focus on moderated (synchronous) data collected using video-conferencing software (e.g., Zoom). First, we review existing approaches for designing and executing moderated online studies with young children. We also present concrete examples of studies that implemented choice and verbal measures (Studies 1 and 2) and looking time measures (Studies 3 and 4) across both in-person and online moderated data collection methods. Direct comparisons of the two methods within each study, as well as a meta-analysis across all studies, suggest that the results from the two methods are comparable, providing empirical support for the validity of moderated online data collection. Finally, we discuss current limitations of online data collection and possible solutions, as well as its potential to increase the accessibility, diversity, and replicability of developmental science.
Keywords: cognitive development; meta-analysis; moderated data collection; online research; replication
Year: 2021 PMID: 34803813 PMCID: PMC8595939 DOI: 10.3389/fpsyg.2021.734398
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Factors to consider when choosing software for moderated online data collection.
| Accessibility | Software should ideally be easy to obtain and use, especially for participants. Monetary costs and the need for reliable internet access can also limit who is able to participate. |
| Functionality | A software package’s user interface, customizability, and security features determine how studies are conducted and the extent to which researchers can customize participants’ online experience. Importantly, security standards regarding the recording and storage of online sessions vary across institutions and countries; researchers should keep these in mind when assessing the level of security a given software package provides. Additionally, while basic video- and screen-sharing as well as text-chat functionalities are common to most software, the details vary in a number of ways, including how users customize what they can view on screen and how recording is implemented (e.g., local vs. cloud storage). More broadly, intuitive design and real-time flexibility often trade off with precise structure and customization options. Some software (e.g., Adobe Connect) allows experimenters to predetermine the layout of participants’ screens before sessions, whereas other software (e.g., Zoom) automatically generates participants’ layouts and allows participants to modify them in real time (following instructions from experimenters). While the former type is ideal for experiments that require precise control over what participants view on screen, the latter type is more suitable for sessions involving rapid transitions between multiple experiments with different visual layouts. |
| Robustness | Recurring lag, audio or video problems, and even login errors can slow down or derail an online session. Although technical issues can also occur in person, they can be more difficult to resolve in remote interactions, where experimenters have limited means of diagnosing participants’ problems. It is therefore important to test the frequency and duration of technical issues on both experimenters’ and participants’ ends before committing to a particular video-conferencing software. Depending on the software, screen-sharing or streaming large video or audio files can contribute to unwanted lag or delays, and the severity of these issues can vary with the connection speed or devices used by both experimenters and participants. For experiments that rely on precise timing of presented stimuli, researchers might consider presentation methods that do not rely on screen-sharing (e.g., hosting video stimuli on servers or platforms that participants can access directly, such as online video-hosting or slide-presentation services). If recurrent participant-end issues compromise the fidelity of a study, researchers can also set explicit criteria for participation (e.g., participants must use a laptop or may not use a cellular internet connection). |
FIGURE 1. An example screenshot of a moderated session using Zoom. By positioning the experimenter’s video relative to stimuli presented via screen-sharing, the experimenter can use gaze and pointing to elicit joint attention or refer to specific stimuli, similar to how she would use gaze or pointing during in-person interactions. In this example, the participant (or the parent) was instructed to position the experimenter’s video at the bottom of the screen; the experimenter can then “look” at one or more objects on screen and ask the participant to report what she is looking at. Interactions like these can be used as a warmup task to create a “shared reality” between the experimenter and the participant and to facilitate engagement and attention.
FIGURE 2. One option for eliciting choice in Study 2. Children could be asked to choose which agent is better at math: Hannah (in orange) or Zoe (in purple).
FIGURE 3. Screenshots of video stimuli implemented in Study 3 (preferential looking; see Woo and Spelke, 2020). (A) Participants were first familiarized to the bear’s preferred toy. (B) The contents of the boxes were then switched, either in the rabbits’ presence or in their absence. (C) One rabbit opened the box to which the desired toy had been moved, while the other opened the box where the toy had originally been. (D) At test, infants were shown the two rabbits. In person, infants were asked to choose the rabbit they liked and their reaching was recorded; online, infants were presented with a video of the two rabbits and their preferential looking was measured.
FIGURE 4. Summary of results across Studies 1–4, comparing in-person and online data collection methods. Error bars indicate standard error.
FIGURE 5. Forest plot showing the standardized effect of interest across Studies 1–4, comparing in-person and online data collection methods. Points are sized according to sample size, and error bars indicate effect size variance. Green triangles show random-effects multilevel meta-regression estimates of the effect size for each study.
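For readers unfamiliar with how the pooled estimates in a forest plot are derived, the sketch below illustrates a basic random-effects pooling of study-level effect sizes using the DerSimonian-Laird estimator. This is a simplified stand-in, not the random-effects multilevel meta-regression reported in the article, and the effect sizes and variances in the example are hypothetical rather than the study's data.

```python
# Minimal sketch of a random-effects meta-analytic estimate
# (DerSimonian-Laird). Illustrative only; the article's analysis is a
# multilevel meta-regression, and the numbers below are hypothetical.
import numpy as np

def random_effects_pool(effects, variances):
    """Pool study-level effect sizes with a DerSimonian-Laird tau^2."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
    q = np.sum(w * (y - y_fe) ** 2)           # Cochran's Q heterogeneity statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)        # between-study variance
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    est = np.sum(w_re * y) / np.sum(w_re)     # pooled random-effects estimate
    se = np.sqrt(1.0 / np.sum(w_re))          # standard error of pooled estimate
    return est, se, tau2

# Hypothetical per-study standardized effects and variances
effects = [0.45, 0.30, 0.55, 0.40]
variances = [0.02, 0.03, 0.04, 0.025]
est, se, tau2 = random_effects_pool(effects, variances)
print(f"pooled effect = {est:.2f}, "
      f"95% CI = [{est - 1.96 * se:.2f}, {est + 1.96 * se:.2f}], "
      f"tau^2 = {tau2:.3f}")
```

In a forest plot such as Figure 5, each point would correspond to one study's effect size with its variance, and the pooled estimate (with its confidence interval) would summarize the evidence across studies.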