
Online Developmental Science to Foster Innovation, Access, and Impact.

Mark Sheskin, Kimberly Scott, Candice M Mills, Elika Bergelson, Elizabeth Bonawitz, Elizabeth S Spelke, Li Fei-Fei, Frank C Keil, Hyowon Gweon, Joshua B Tenenbaum, Julian Jara-Ettinger, Karen E Adolph, Marjorie Rhodes, Michael C Frank, Samuel A Mehr, Laura Schulz.

Abstract

We propose that developmental cognitive science should invest in an online CRADLE, a Collaboration for Reproducible and Distributed Large-Scale Experiments that crowdsources data from families participating on the internet. Here, we discuss how the field can work together to further expand and unify current prototypes for the benefit of researchers, science, and society.
Copyright © 2020 The Authors. Published by Elsevier Ltd. All rights reserved.

Keywords:  best practices; cognitive development; collaboration; internet; research methods; social development

Year:  2020        PMID: 32624386      PMCID: PMC7331515          DOI: 10.1016/j.tics.2020.06.004

Source DB:  PubMed          Journal:  Trends Cogn Sci        ISSN: 1364-6613            Impact factor:   20.229


The Need for Large-Scale, Shared Infrastructure for Developmental Science

The COVID-19 pandemic has disrupted society, including how we use the internet. Technology companies are recognizing advantages of permanent ‘work from home’ policies, the healthcare industry is re-evaluating its core infrastructure to support telemedicine, and schools are pushing the current limits of remote learning. Many of these developments are not new ideas, but rather overdue accelerations and extensions of existing trends. Paralleling previous moves to online research in other areas of cognitive science (e.g., adult surveys on Amazon Mechanical Turk [1]), researchers are using the internet in multiple ways to support developmental science. Here, we argue that a unified platform for online developmental science would benefit both researchers and society.

Our call for large-scale coordinated action mirrors advances in other sciences. Consider the Hubble Space Telescope: although many meaningful discoveries can be and were made with individually owned telescopes, Hubble hosts large-scale collaborative research that would otherwise be impossible. Similarly, ImageNet [2], which crowdsourced a massive amount of image-labeling data, transformed machine object recognition.

For developmental science, we propose a unified, discipline-wide, online Collaboration for Reproducible and Distributed Large-Scale Experiments (CRADLE). This discipline-wide infrastructure would support ‘big science’ research that would otherwise be impossible, and would also be a transformative platform for many ‘small science’ studies to produce better and more reproducible data. An online CRADLE is advantageous even for studies that cannot be conducted online, such as those that rely on specialized physical spaces (e.g., visual cliff studies with infants), special recording equipment [e.g., electroencephalogram (EEG)], or interactions with physical objects (e.g., physical puzzles), in that it frees up resources for those in-person studies. Thus, it can increase the quality of all developmental science research.

Current Efforts Towards Large-Scale Infrastructure in Developmental Science

The internet is already used in several ways to support developmental science. The Child Language Data Exchange System (CHILDES [3]) allows the reuse of high-value data sets by thousands of researchers across many countries. The Databrary Project (Databrary.org [4]) likewise makes video data globally accessible. Laboratories use the internet to closely coordinate their in-person data collection, leading to distributed, large-scale data collection (e.g., the ManyBabies Consortium [5]). Recently, several groups established platforms to collect data with children online, including unmoderated studies that are available at any time (Discoveries Online [6]; Lookit [7]; themusiclab.org [8]) and moderated studies, which are scheduled video sessions with a researcher (TheChildLab.com [9]). Potential advantages of unmoderated studies include rapid collection of large samples and high consistency of study delivery; moderated studies enable live interaction with researchers to maximize child engagement and science outreach, and allow the study of social and communicative phenomena that require live interaction.

As multiple projects develop in parallel, researchers have begun to discuss the advantages of larger collaborations. A first step in this direction is a joint website (ChildrenHelpingScience.com) that allows families to browse studies from many researchers across the world and find information about how to participate in a chosen study (which then takes place outside of the joint website). We appreciate the value of these pre-existing projects (indeed, many of those described earlier are our own). However, our suggestion for an online CRADLE goes beyond the sum of these projects: we envision a shared infrastructure for recruitment, experiment implementation, data collection, and data sharing, in which families find and participate in studies across developmental science, and researchers coordinate as members of a ‘superlab’, in some ways similar to the Psychological Science Accelerator [10]. For example, a family might participate in a 30-min video chat session with a single researcher trained to collect data for studies from many researchers at many universities. That same family might later participate in unmoderated studies from researchers at other universities, and also upload parent-collected videos of their child interacting with nonscreen objects. In this integrated infrastructure, further research questions might later be addressed by analyzing shared data from the same families across multiple studies.

Five Advantages of Online CRADLE Infrastructure

First, a CRADLE can massively increase sample size and diversity. Rather than each research team having access only to families who live nearby, the potential pool expands worldwide (for more about sampling, including participation by families who do not have internet at home, see Box 1). Researchers with access to larger sample sizes can collect data with multiple versions of stimuli to ensure generalizability, investigate subtle interaction effects, and adequately power studies even when they have narrow inclusion criteria (children from specific language or cultural backgrounds, children with a rare diagnosis, children who are home-schooled, etc.). Large data sets will be crucial to support neighboring disciplines that build on insights from developmental science, including early childhood education, the study of developmental disorders and interventions, and computational models of typical cognitive development, including artificial intelligence (AI) research, which is increasingly interested in reverse-engineering the ways in which infants and young children learn and think to build more human-like machine intelligence that can live better in our human world.

Box 1
Moving studies online can greatly increase diversity. However, moving studies online does not guarantee increased diversity without intentional action to achieve it [14]. For example, online researchers who recruit from their own social networks are likely to have less diverse samples than researchers who run studies at public schools in a demographically diverse city. In practice, however, many of us who have implemented studies online (using platforms mentioned in the main text) found an increase in diversity: under-representation of participants without home internet access trades off against other gains, such as not requiring travel to another location. Furthermore, first-time participants may feel more comfortable when they are at home on their own computers or cell phones. A CRADLE could also expand to populations who currently do not have access to the internet, by installing computers and internet connections at schools in developing countries and supporting local staff to coordinate participation. Unmoderated studies could then be developed by any researcher (assisted by a translator), and moderated studies could be run remotely by any researcher with the appropriate language skills. This could be a particularly good fit for some types of research, such as educational field studies [15]. Sample diversity can also increase science outreach: participating in developmental studies is an accessible way for families to interact with scientists and to learn about scientific research. Thus, online research has the potential to improve outreach for all of science.

Second, decoupling family and researcher location equalizes opportunities for researchers in less densely populated locations. It also improves the quality of data collection: whereas studies from a single laboratory are often on similar topics (and, thus, may interfere with each other), a family participating via a CRADLE can take part in a diverse set of studies on different topics. The opportunity to participate in a range of interesting studies can increase the likelihood that families will participate in many study sessions and endorse participation to other families.

Third, in a CRADLE, multiple research teams can align on shared design principles and participant experiences. Working in a unified infrastructure, research teams can review each other’s methods before the start of data collection, and all information about previous studies (including video files) can be accessible to researchers with approval to see it. When useful, researchers can adopt shared design principles and standard methods (e.g., for how to implement a moderated study with forced binary choices and the researcher blind to the correct answer, or for how to have parents supply prompts for an unmoderated looking-while-listening study with infants). In addition to making study development easier, standard methods decrease cognitive load for repeat participants (who are already familiar with the instructions and what is expected of them), leading to a better experience for the family and cleaner data for the study. However, developing and maintaining a library of standard methods would require funding and expert development (for more on the challenges of creating shared infrastructure, see Box 2).

Box 2
The underlying technology for a CRADLE (e.g., availability of high-speed internet and webcams) has been available for many years, but its development is hindered by current funding and incentive structures in the field, which prioritize specific research projects over infrastructure development and maintenance. First, an online CRADLE needs intuitive and attractive interfaces for families that encourage participation in diverse studies, tools for creating and delivering both moderated and unmoderated study sessions (including a library of standard options for study design, as described in the main text), and secure (Institutional Review Board-approved) systems for sharing data. Existing solutions from other contexts (e.g., website hosting that can handle spikes in traffic; guidelines for authorship on large projects) require time and funding to be adapted to this new context. Developing and maintaining such infrastructure requires significant financial investment and sustained effort beyond the capacity of a single research team and typical grants. Second, the long-term success of an online CRADLE depends on incentive structures and career paths for the people who drive the development of such infrastructure, and on systems that incentivize research teams to contribute to its improvement. We hope that this will provide momentum for field-wide changes in some of our values.

Fourth, studies in a CRADLE can be administered either by a computer or by a researcher other than the one who designed the study. Separating study design from study administration can increase reproducibility: when a study is implemented in a CRADLE, the first iteration is already one in which the entire protocol was communicated to, and implemented by, others (or a computer), so a replication only requires collecting more data. Building on previous studies is facilitated for the same reasons: the exact protocol is available, and the new work takes place within the same infrastructure. To be clear, researchers could still pilot test and modify studies before the ‘full launch’ of data collection in a shared infrastructure; the CRADLE infrastructure could include staff to ensure quality control over the distributed running of moderated studies; and researchers would remain free to use some aspects of the shared infrastructure without needing to use all of them (to avoid ‘big science crowding out small science’).

Fifth, a CRADLE facilitates longitudinal data collection, because families may find it easier to participate online than to travel to a laboratory, and because they can continue to participate even if they move to another location. With easier access to longitudinal data, researchers can move beyond conclusions at the group level (e.g., on average, children of a particular age do X but not Y) to better characterize the time course of developmental change (e.g., how suddenly do children learn the meaning of the word ‘three’?) and increase power to study intraindividual variation (e.g., receptiveness to learning just before versus just after a nap). Large-scale, longitudinal research would also enable researchers to learn more about the efficacy of interventions, especially those that can be delivered affordably, at scale, outside of a laboratory setting.
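The shared ‘library of standard options for study design’ described above could take the form of a small protocol schema that any research team reuses. The sketch below is purely illustrative: all names and fields (`StudyProtocol`, `trial_template`, `researcher_blind`, etc.) are our assumptions, not the API of any existing platform such as Lookit or TheChildLab.com.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a shared study-protocol record for a CRADLE-style
# platform. All names and fields are illustrative assumptions, not the API
# of any existing system.

@dataclass
class StudyProtocol:
    study_id: str
    mode: str                       # "moderated" (live video) or "unmoderated"
    age_range_months: tuple         # inclusion window, e.g. (36, 60)
    trial_template: str             # reused standard method, e.g. "forced_binary_choice"
    researcher_blind: bool = True   # moderator blind to the correct answer
    inclusion_criteria: list = field(default_factory=list)

    def eligible(self, age_months: int) -> bool:
        """Check a child's age against the study's inclusion window."""
        lo, hi = self.age_range_months
        return lo <= age_months <= hi

# A moderated forced-choice study built from a standard trial template:
study = StudyProtocol(
    study_id="demo-001",
    mode="moderated",
    age_range_months=(36, 60),
    trial_template="forced_binary_choice",
    inclusion_criteria=["monolingual English"],
)
print(study.eligible(48))  # a 48-month-old falls inside the 36-60-month window
```

Encoding protocols as shared, machine-readable records like this is one way the exact design could travel between teams, supporting the reproducibility and distributed-administration advantages discussed above.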

Concluding Remarks

Now is a critical juncture for developmental science. Creating discipline-wide infrastructure to facilitate reproducible, distributed, large-scale experiments will allow us to better address issues such as sample size and diversity [11,12] and reproducibility [13], and will also provide a wealth of new knowledge about early learning and development, touching fields ranging from education and healthcare to AI and machine learning. However, a CRADLE will not materialize on its own. Without intentional action, the most likely outcome is a proliferation of individual laboratories creating online research platforms in an uncoordinated and ‘local’ way, without harnessing the advantages of a shared CRADLE. It is also possible that an outside for-profit organization will create something ‘good enough’ and thereby sap motivation to build something better (e.g., arguably the role of Amazon’s Mechanical Turk in online research with adults). By contrast, we think that the best outcome is one in which researchers organize to overcome the challenges, enabling a world where we understand the factors that affect human development and, thus, that affect us all. We look forward to working with our colleagues to improve developmental science for everyone involved, including the families who are at the core of every study.
References (9 in total)

1.  Cognitive science in the field: A preschool intervention durably enhances intuitive but not formal mathematics.

Authors:  Moira R Dillon; Harini Kannan; Joshua T Dean; Elizabeth S Spelke; Esther Duflo
Journal:  Science       Date:  2017-07-07       Impact factor: 47.728

2.  An Evaluation of Amazon's Mechanical Turk, Its Rapid Rise, and Its Effective Use.

Authors:  Michael D Buhrmester; Sanaz Talaifar; Samuel D Gosling
Journal:  Perspect Psychol Sci       Date:  2018-03

3.  Sampling in Developmental Science: Situations, Shortcomings, Solutions, and Standards.

Authors:  Marc H Bornstein; Justin Jager; Diane L Putnick
Journal:  Dev Rev       Date:  2013-12

4.  Universality and diversity in human song.

Authors:  Samuel A Mehr; Manvir Singh; Dean Knox; Daniel M Ketter; Daniel Pickens-Jones; S Atwood; Christopher Lucas; Nori Jacoby; Alena A Egner; Erin J Hopkins; Rhea M Howard; Joshua K Hartshorne; Mariela V Jennings; Jan Simson; Constance M Bainbridge; Steven Pinker; Timothy J O'Donnell; Max M Krasnow; Luke Glowacki
Journal:  Science       Date:  2019-11-22       Impact factor: 47.728

5.  The Psychological Science Accelerator: Advancing Psychology through a Distributed Collaborative Network.

Authors:  Hannah Moshontz; Lorne Campbell; Charles R Ebersole; Hans IJzerman; Heather L Urry; Patrick S Forscher; Jon E Grahe; Randy J McCarthy; Erica D Musser; Jan Antfolk; Christopher M Castille; Thomas Rhys Evans; Susann Fiedler; Jessica Kay Flake; Diego A Forero; Steve M J Janssen; Justin Robert Keene; John Protzko; Balazs Aczel; Sara Álvarez Solas; Daniel Ansari; Dana Awlia; Ernest Baskin; Carlota Batres; Martha Lucia Borras-Guevara; Cameron Brick; Priyanka Chandel; Armand Chatard; William J Chopik; David Clarance; Nicholas A Coles; Katherine S Corker; Barnaby James Wyld Dixson; Vilius Dranseika; Yarrow Dunham; Nicholas W Fox; Gwendolyn Gardiner; S Mason Garrison; Tripat Gill; Amanda C Hahn; Bastian Jaeger; Pavol Kačmár; Gwenaël Kaminski; Philipp Kanske; Zoltan Kekecs; Melissa Kline; Monica A Koehn; Pratibha Kujur; Carmel A Levitan; Jeremy K Miller; Ceylan Okan; Jerome Olsen; Oscar Oviedo-Trespalacios; Asil Ali Özdoğru; Babita Pande; Arti Parganiha; Noorshama Parveen; Gerit Pfuhl; Sraddha Pradhan; Ivan Ropovik; Nicholas O Rule; Blair Saunders; Vidar Schei; Kathleen Schmidt; Margaret Messiah Singh; Miroslav Sirota; Crystal N Steltenpohl; Stefan Stieger; Daniel Storage; Gavin Brent Sullivan; Anna Szabelska; Christian K Tamnes; Miguel A Vadillo; Jaroslava V Valentova; Wolf Vanpaemel; Marco A C Varella; Evie Vergauwe; Mark Verschoor; Michelangelo Vianello; Martin Voracek; Glenn P Williams; John Paul Wilson; Janis H Zickfeld; Jack D Arnal; Burak Aydin; Sau-Chin Chen; Lisa M DeBruine; Ana Maria Fernandez; Kai T Horstmann; Peder M Isager; Benedict Jones; Aycan Kapucu; Hause Lin; Michael C Mensink; Gorka Navarrete; Miguel A Silan; Christopher R Chartier
Journal:  Adv Methods Pract Psychol Sci       Date:  2018-10-01

6.  The persistent sampling bias in developmental psychology: A call to action.

Authors:  Mark Nielsen; Daniel Haun; Joscha Kärtner; Cristine H Legare
Journal:  J Exp Child Psychol       Date:  2017-05-30

7.  A Collaborative Approach to Infant Research: Promoting Reproducibility, Best Practices, and Theory-Building.

Authors:  Michael C Frank; Elika Bergelson; Christina Bergmann; Alejandrina Cristia; Caroline Floccia; Judit Gervain; J Kiley Hamlin; Erin E Hannon; Melissa Kline; Claartje Levelt; Casey Lew-Williams; Thierry Nazzi; Robin Panneton; Hugh Rabagliati; Melanie Soderstrom; Jessica Sullivan; Sandra Waxman; Daniel Yurovsky
Journal:  Infancy       Date:  2017-03-09

8.  No Participant Left Behind: Conducting Science During COVID-19.

Authors:  Stella F Lourenco; Arber Tasimi
Journal:  Trends Cogn Sci       Date:  2020-05-11       Impact factor: 20.229

9.  Estimating the reproducibility of psychological science.

Authors:  Open Science Collaboration
Journal:  Science       Date:  2015-08-28       Impact factor: 47.728

