
Can we measure individual differences in cognitive measures reliably via smartphones? A comparison of the flanker effect across device types and samples.

Thomas Pronk; Rebecca J. Hirst; Reinout W. Wiers; Jaap M. J. Murre

Abstract

Research deployed via the internet and administered via smartphones could have access to more diverse samples than lab-based research. Diverse samples could have relatively high variation in their traits and so yield relatively reliable measurements of individual differences in these traits. Several cognitive tasks that originated from the experimental research tradition have been reported to yield relatively low reliabilities (Hedge et al., 2018) in samples with restricted variance (students). This issue could potentially be addressed by smartphone-mediated administration in diverse samples. We formulate several criteria to determine whether a cognitive task is suitable for individual differences research on commodity smartphones: no requirement for very brief or precisely timed stimuli, relative response times (RTs), a maximum of two response options, and a small number of graphical stimuli. The flanker task meets these criteria. We compared the reliability of individual differences in the flanker effect across samples and devices in a preregistered study. We found no evidence that a more diverse sample yields higher reliabilities. We also found no evidence that commodity smartphones yield lower reliabilities than commodity laptops. Hence, diverse samples might not improve reliability above student samples, but smartphones may well measure individual differences with cognitive tasks reliably. In exploratory analyses, we examined different reliability coefficients, split-half reliabilities, and the development of reliability estimates as a function of task length.
© 2022. The Author(s).
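As context for the split-half reliabilities the abstract mentions, the computation can be sketched as follows: the flanker effect is the mean RT difference between incongruent and congruent trials, and a permutation-based split-half estimate correlates the effect across random halves of each participant's trials, corrected with the Spearman-Brown formula. This is a generic illustration, not the paper's actual analysis code; all function names are hypothetical.

```python
import numpy as np

def splithalf_reliability(congruent, incongruent, n_splits=1000, seed=0):
    """Permutation-based split-half reliability of the flanker effect.

    congruent, incongruent: per-participant arrays of trial RTs.
    Returns the mean Spearman-Brown-corrected split-half correlation
    over n_splits random splits. (Illustrative sketch, not from the paper.)
    """
    rng = np.random.default_rng(seed)
    corrs = []
    for _ in range(n_splits):
        half_a, half_b = [], []
        for con, inc in zip(congruent, incongruent):
            # Randomly split each participant's trials into two halves
            con_idx = rng.permutation(len(con))
            inc_idx = rng.permutation(len(inc))
            ca, cb = con[con_idx[::2]], con[con_idx[1::2]]
            ia, ib = inc[inc_idx[::2]], inc[inc_idx[1::2]]
            # Flanker effect per half: mean incongruent RT - mean congruent RT
            half_a.append(np.mean(ia) - np.mean(ca))
            half_b.append(np.mean(ib) - np.mean(cb))
        r = np.corrcoef(half_a, half_b)[0, 1]
        corrs.append(2 * r / (1 + r))  # Spearman-Brown correction
    return float(np.mean(corrs))
```

With many trials per participant and substantial between-participant variance in the true effect, this estimate approaches 1; with restricted between-participant variance (as in homogeneous student samples), it drops, which is the reliability concern the abstract addresses.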

Keywords:  Experimental effects; Flanker effect; Individual differences; Internet; Reliability; Smartphones; Web applications

Year:  2022        PMID: 35710865     DOI: 10.3758/s13428-022-01885-6

Source DB:  PubMed          Journal:  Behav Res Methods        ISSN: 1554-351X


References (31 in total; first 9 shown)

1.  Psychophysics in a Web browser? Comparing response times collected with JavaScript and Psychophysics Toolbox in a visual search task.

Authors:  Joshua R de Leeuw; Benjamin A Motz
Journal:  Behav Res Methods       Date:  2016-03

2.  Best practices: Two Web-browser-based methods for stimulus presentation in behavioral experiments with high-resolution timing requirements.

Authors:  Pablo Garaizar; Ulf-Dietrich Reips
Journal:  Behav Res Methods       Date:  2019-06

3.  Testing the efficiency and independence of attentional networks.

Authors:  Jin Fan; Bruce D McCandliss; Tobias Sommer; Amir Raz; Michael I Posner
Journal:  J Cogn Neurosci       Date:  2002-04-01       Impact factor: 3.225

4.  [Review] Human research and data collection via the internet.

Authors:  Michael H Birnbaum
Journal:  Annu Rev Psychol       Date:  2004       Impact factor: 24.137

5.  Smart phone, smart science: how the use of smartphones can revolutionize research in cognitive science.

Authors:  Stephane Dufau; Jon Andoni Duñabeitia; Carmen Moret-Tatay; Aileen McGonigal; David Peeters; F-Xavier Alario; David A Balota; Marc Brysbaert; Manuel Carreiras; Ludovic Ferrand; Maria Ktori; Manuel Perea; Kathy Rastle; Olivier Sasburg; Melvin J Yap; Johannes C Ziegler; Jonathan Grainger
Journal:  PLoS One       Date:  2011-09-28       Impact factor: 3.240

6.  QRTEngine: An easy solution for running online reaction time experiments using Qualtrics.

Authors:  Jonathan S Barnhoorn; Erwin Haasnoot; Bruno R Bocanegra; Henk van Steenbergen
Journal:  Behav Res Methods       Date:  2015-12

7.  Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research.

Authors:  Matthew J C Crump; John V McDonnell; Todd M Gureckis
Journal:  PLoS One       Date:  2013-03-13       Impact factor: 3.240

8.  Crowdsourced Measurement of Reaction Times to Audiovisual Stimuli With Various Degrees of Asynchrony.

Authors:  Pavlo Bazilinskyy; Joost de Winter
Journal:  Hum Factors       Date:  2018-07-23       Impact factor: 2.888

9.  Realistic precision and accuracy of online experiment platforms, web browsers, and devices.

Authors:  Alexander Anwyl-Irvine; Edwin S Dalmaijer; Nick Hodges; Jo K Evershed
Journal:  Behav Res Methods       Date:  2020-11-02
Cited by (1 in total)

1.  jsQuestPlus: A JavaScript implementation of the QUEST+ method for estimating psychometric function parameters in online experiments.

Authors:  Daiichiro Kuroki; Thomas Pronk
Journal:  Behav Res Methods       Date:  2022-09-07
