Literature DB >> 34938826

Does a 7-day restriction on the use of social media improve cognitive functioning and emotional well-being? Results from a randomized controlled trial.

Marloes M C van Wezel1, Elger L Abrahamse1,2, Mariek M P Vanden Abeele1,3.   

Abstract

INTRODUCTION: Screen time apps that allow smartphone users to manage their screen time are assumed to combat negative effects of smartphone use. This study explores whether a social media restriction, implemented via screen time apps, has a positive effect on emotional well-being and sustained attention performance.
METHODS: A randomized controlled trial (N = 76) was performed, exploring whether a week-long 50% reduction in time spent on mobile Facebook, Instagram, Snapchat and YouTube is beneficial to attentional performance and well-being as compared to a 10% reduction.
RESULTS: Unexpectedly, several participants in the control group proactively reduced their screen time well beyond the intended 10%, dismantling our intended screen time manipulation. Hence, we analyzed both the effect of the original manipulation (i.e., treatment-as-intended) and the effect of participants' relative reduction in screen time irrespective of their condition (i.e., treatment-as-is). Neither analysis revealed an effect on the outcome measures. We also found no support for a moderating role of self-control, impulsivity or Fear of Missing Out. Interestingly, across all participants, behavioral performance on sustained attention tasks remained stable over time, while perceived attentional performance improved. Participants also self-reported a decrease in negative emotions, but no increase in positive emotions.
CONCLUSION: We discuss the implications of our findings in light of recent debates about the impact of screen time and formulate suggestions for future research based on important limitations of the current study, revolving among others around appropriate control groups as well as the combined use of both subjective and objective (i.e., behavioral) measures.
© 2021 The Authors.

Keywords:  Cognitive performance; Emotional well-being; Screen time; Screen time intervention; Self-report bias; Sustained attention

Year:  2021        PMID: 34938826      PMCID: PMC8664777          DOI: 10.1016/j.abrep.2021.100365

Source DB:  PubMed          Journal:  Addict Behav Rep        ISSN: 2352-8532


Introduction

Smartphones are a valuable addition to modern life: They provide unlimited access to information and facilitate social interactions at all places, at all times (Vanden Abeele, 2020, Vanden Abeele et al., 2013). Despite these valuable assets, smartphone use is under scrutiny, as research suggests that (over-)use is associated with, among other outcomes, reduced emotional well-being (e.g., Twenge and Campbell, 2018, Twenge and Campbell, 2019, Twenge et al., 2018) and reduced capacity for sustained attention (e.g., Ophir et al., 2009, Ralph et al., 2014). Screen time apps promise to help individuals counter these negative effects by helping them to better manage their ‘screen time’. Screen time apps such as MyTime (Hiniker et al., 2016), but also the Screen Time and Digital Well-being features embedded in iOS and Android, temporarily limit the use of the phone, often by placing time restrictions on a selection of apps. They log usage and enable users to set timers that limit their usage (Rooksby, Asadzadeh, Rost, Morrison, & Chalmers, 2016). They operate under the idea that self-imposed limits may – at least temporarily – reverse some of the adverse consequences of smartphone use. Research shows that screen time apps are typically perceived as effective: On average, users report that screen time app use reduces their use of ‘time-wasting apps’ by 21% (Hiniker, Hong, Kohno, & Kientz, 2016). Moreover, a handful of recent studies suggest that intervention-induced change in smartphone and social media use leads to self-perceived improvements in well-being (e.g., Brailovskaia et al., 2020, Tromholt, 2016, Stieger and Lewetz, 2018). However, experimental studies of this type remain scarce, even though they are critical for establishing causality. Moreover, the majority have zoomed in only on emotional well-being, while few have explored impacts on specific cognitive functions. 
This is unfortunate, given that mobile social media use has been related to reduced attentional and memory performance (e.g., Uncapher et al., 2017, Rosen et al., 2013, Judd, 2014). Intervention-induced changes in mobile social media use can thus be predicted to lead to improved attentional performance, but such causality has not yet been rigorously tested. The central goal of the current study is to explore in a randomized controlled trial whether restricting social media use through a screen time app affects sustained attention (RQ1), and whether personality traits such as self-control, impulsivity and Fear of Missing Out (FoMO) moderate this effect (RQ2). Additionally, we replicate prior research by also exploring the impact on emotional well-being (RQ3), as prior research shows mixed findings (e.g., Brailovskaia et al., 2020, Tromholt, 2016, Stieger and Lewetz, 2018 versus Hall et al., 2021, Przybylski et al., 2021). To address these questions, participants in an experimental group underwent a 7-day 50% reduction in their use of four popular social media apps: Facebook, Instagram, Snapchat and YouTube (de Best, 2019). This was compared to a control group undergoing a 7-day screen time reduction of only 10%. A 10% reduction (rather than no reduction at all) was chosen for the control group in an attempt to avoid Hawthorne-like effects (cf. McCambridge et al., 2014, Taylor, 2004), making sure that the control group also underwent an active demand. In addition, given that prior research links active use of social media to positive rather than negative well-being outcomes (e.g., Escobar-Viera et al., 2018) and posits that total abstinence may force individuals to ‘throw out the good with the bad’ (Vanden Abeele, 2020), the experimental group was exposed to a 50% reduction rather than a 100% reduction. 
As such, participants could potentially still benefit from active social media use, which is generally positively linked to well-being, while reducing their passive social media use, which is negatively linked to well-being (e.g., Escobar-Viera et al., 2018). Anticipating the results: these manipulation choices did not have the expected effect, a point we return to in the Discussion section.

Social media use and attention

Sustained attention refers to our ability to remain focused on a specific task over longer periods of time, without getting distracted (Esterman & Rothlein, 2019). Studies suggest that smartphone use impedes people’s attentional performance (e.g., Kushlev, Proulx, & Dunn, 2016; Rosen et al., 2013, Uncapher et al., 2017, Ward et al., 2017, Wei et al., 2012). This has been attributed to the fact that smartphones – and the social media platforms they give access to – promote multi-tasking. Such smartphone multi-tasking may have both immediate and enduring effects on sustained attention. Various experiments, cross-sectional surveys and observational studies have found immediate distraction effects of smartphone use taking place during the execution of tasks that require sustained attention (e.g., Rosen et al., 2013, Kushlev et al., 2016, Ward et al., 2017, Wei et al., 2012). Social media in particular are a culprit: They form a source of external distraction via their push notifications (see Kushlev et al., 2016) and give way to internal distractions (e.g., smartphone-related thoughts, see Ward et al., 2017), occupying attention needed for tasks ahead. Studies point at activity in the brain’s reward center in response to social media use (Montag et al., 2017, Wilmer and Chein, 2016, Meshi et al., 2013), suggesting that social media induce fragmented usage patterns via processes of positive intermittent reinforcement (Oulasvirta et al., 2012, Van Deursen et al., 2015), which scatter attentional focus (Monsell, 2003). It should be noted, however, that while several studies have exposed negative associations between social media use and sustained attention, the large majority of these findings were correlational in nature and focused on concurrent task performance (i.e., the impact of social media use while performing an attention task) rather than the impact of general social media use on the longer-term capacity for sustained attention. 
Only a few studies suggest that general social media use may also have such enduring consequences for the capacity for sustained attention (Fitz et al., 2019, Madore et al., 2020). Fitz et al. (2019) found that reduced exposure to distracting smartphone notifications increases self-reported attention and productivity. Furthermore, Madore et al. (2020) found that heavy media multi-tasking relates to increased experiences of attentional lapses in general. Given these findings, one might expect that restricting mobile social media use benefits one’s capacity for sustained attention. While research on both the immediate and enduring impact of social media use on attention is still growing, we are currently witnessing the emergence of new initiatives that aim to improve people’s capacity for attention by restricting their digital media access and use. For instance, screen time apps such as ForestTM, Google Digital Wellbeing or Apple Screen Time enable individuals to restrict their access to and use of the smartphone or social media, among other goals to regain or keep focus. To date, however, evidence concerning the effectiveness of these apps, which artificially reduce one’s screen time, remains limited. Indeed, the theoretical case for a causal relationship between reduced social media use and increased capacity for sustained attention is promising, but intervention studies are needed to establish that causality. In such studies, social media screen time is reduced for an extended period, and enduring impacts on sustained attention are examined by comparing sustained attention measures before versus after the intervention. As such, a first aim of this study is to explore whether social media use negatively impacts one’s capacity for sustained attention, using objective, behavioral measures of attention and social media use instead of self-report measures. 
To that end, we perform a randomized controlled trial in which we examine the effect of social media screen time restrictions (10% versus 50% restrictions for the control and experimental groups, respectively; see Method section for elaboration) on both behavioral measures of attention and self-report measures for assessing sustained attention performance. We expect that: H1a. A 7-day social media screen time reduction of 50% leads to a greater improvement in objective sustained attentional performance than a 7-day social media screen time reduction of 10%. H1b. A 7-day social media screen time reduction of 50% leads to a greater improvement in self-reported sustained attentional performance than a 7-day social media screen time reduction of 10%.

Social media screen time and emotional well-being

A secondary aim of this study is to explore if a mobile social media restriction also benefits emotional well-being. The literature on the association between social media use and emotional well-being points towards both negative (Twenge and Campbell, 2018, Twenge and Campbell, 2019, Twenge et al., 2018) and positive effects (Przybylski & Weinstein, 2017). For instance, Twenge and Campbell (2018) found that social media screen time negatively predicts well-being outcomes such as depressive symptoms and anxiety in girls. Orben and Przybylski, 2019a, Orben and Przybylski, 2019b, however, warn of false positives due to flexible analysis of very large data sets, and other studies show that moderate digital technology use is not per se harmful for adolescents, but may even assist them in modern society (Przybylski & Weinstein, 2017). Given the above, it might not be surprising that evidence on the effect of screen time restrictions on emotional well-being is also mixed. A number of studies found that a longer period of reduced social media use leads to increased emotional well-being, life satisfaction and/or the experience of positive emotions (e.g., Brailovskaia et al., 2020, Graham et al., 2020, Stieger and Lewetz, 2018, Tromholt, 2016), as well as to decreased feelings of loneliness and depression (Hunt, Marx, Lipson, & Young, 2018). Similar results have been found in social media abstinence studies (Brown and Kuss, 2020, Turel et al., 2018). Interestingly, Fioravanti, Prostamo, and Casale (2020) found these effects only for women, and not for men. Other studies found no or even reversed effects (e.g., Hall et al., 2021, Przybylski et al., 2021). 
These mixed findings are likely in part due to differences in the interventions used (e.g., whether the focus is on general smartphone use or social media only; the length of the intervention), the mechanisms examined (e.g., complete versus partial abstinence) and the outcome measures that are focused on (e.g., generalized well-being versus anxiety). Further research is needed to help clarify when, why, and for which outcomes these types of interventions may be beneficial. Several scholars have suggested that complete abstinence forces people to sacrifice the ‘good’ of social media together with the ‘bad’ (cf. Vanden Abeele, 2020). Namely, participants’ passive use of social media is typically associated with negative outcomes (though recent work points at influences of person-specific characteristics on this relationship; Valkenburg, Beyens, Pouwels, van Driel & Keijsers, 2021), while active use of social media (i.e., actively posting, commenting and interacting) is typically associated with positive outcomes (e.g., Escobar-Viera et al., 2018, Hanley et al., 2019). For this reason, a partial reduction may be more beneficial than complete abstinence, since a partial reduction may successfully reduce participants’ passive use while not removing opportunities for active use of social media. Consequently, we expect that: H2. A 7-day social media screen time reduction of 50% leads to a greater improvement in emotional well-being than a 7-day social media screen time reduction of 10%.

Inter-individual variability in the effectiveness of screen time interventions

It is likely that there is inter-individual variance in the effectiveness of a social media screen time restriction. A factor that may moderate the effect on participants’ capacity for sustained attention is self-control. Self-control refers to the conscious exertion of control over responses (Baumeister, Vohs, & Tice, 2007). A construct closely, yet negatively, associated with self-control is impulsivity, or the tendency to prefer short-term stimulus-driven actions (Nigg, 2017). Self-control enables a person to align their behavior with personal standards and long-term goals (Baumeister et al., 2007). Individuals high in self-control, or low in impulsivity, may be better at avoiding smartphone-induced interruptions when focusing their attention on a task. Several studies support this assumption. Wei et al. (2012), for example, found that students who were better at self-regulating were less likely to text during class, which was in turn positively associated with their sustained attention performance. Similarly, individuals high in self-control show less habitual smartphone use and experience less difficulty enacting self-control strategies over their smartphone behavior (Brevers & Turel, 2019). Self-control and impulsivity might thus moderate the effectiveness of a social media screen time intervention on participants’ capacity for sustained attention. Because individuals low in self-control and/or high in impulsivity experience more problems keeping their smartphone use under control (e.g., Brevers and Turel, 2019, Wei et al., 2012), these individuals may benefit more from an intervention: H3a. Self-control moderates the effect of the intervention on sustained attention: The expected improvement is larger for those low in self-control compared to those high in self-control. H3b. Impulsivity moderates the effect of the intervention on sustained attention: The expected improvement is larger for those high in impulsivity compared to those low in impulsivity. 
Fear of Missing Out (FoMO) may moderate the effect of a social media screen time restriction on emotional well-being. Individuals with higher FoMO have a stronger desire to stay connected and updated all the time (Przybylski, Murayama, DeHaan, & Gladwell, 2013). Social media offer a very attractive way to fulfil this desire (Przybylski et al., 2013). A social media screen time restriction may negatively impact the emotional well-being of individuals high in FoMO because it reduces their ability to stay up to date on what others are doing (cf. Blackwell et al., 2017, Franchina et al., 2018). Hence, we expect that: H4. The Fear of Missing Out (FoMO) moderates the effect of the intervention on emotional well-being: The expected improvement is larger for those low in FoMO compared to those high in FoMO.

Method

Participants

The study took place over the course of 3 weeks in February 2020. In total, 102 student participants enrolled for the study, of whom 101 had access to a screen time feature on their Android phone or iPhone and could thus participate in the study. Another five participants were excluded at the end of the baseline measurement because their screen time feature was not activated, and they could therefore not ‘donate’ their smartphone use data. Four participants were additionally excluded because of missing data, for example due to them using the mousepad instead of the mouse to respond to the Metronome Response Task (MRT), resulting in a non-response. Another sixteen participants were excluded because they used fewer than two of the four social media apps of interest (one of these was already excluded because the screen time feature was inactive). Excluding the latter group of participants did not affect the main findings reported below – those directly related to our hypotheses stated above1. Finally, one participant was excluded because his data donation was incorrectly reported in the data set, leading to an extreme outlier in relative reduced screen time. The final sample used for the data analysis thus involved 76 participants (27 men, 49 women; M age = 20.95, SD = 3.38). Fig. 1 presents a CONSORT flow chart of the phases of the randomized controlled trial. Forty participants were in the experimental group and 36 were in the control group. Forty-seven individuals used an Android phone, 29 an iPhone. Six individuals already used an app timer on their phone. The demographics for each condition are specified in Table 1.
Fig. 1

Sample flow chart.

Table 1

Demographics specified per condition.

                     Experimental (N = 40)   Control (N = 36)
Age                  21.42 (SD = 3.75)       20.47 (SD = 2.92)
Women                26                      23
Men                  14                      13
Android              26                      21
iPhone               14                      15
Dutch                31                      26
Non-Dutch            9                       10
Use timers already   2                       4

Materials and measures

Logged smartphone use

Because self-reports of smartphone use are notoriously inaccurate (Sewall et al., 2020, Shaw et al., 2020, Kaye et al., 2020, Whitlock and Masur, 2019), we assessed screen time by asking participants to donate their behavioral smartphone use data as collected via the Screen Time feature for iPhone and the Digital Well-being feature for Android. Twelve students used an alternative third-party screen time app. Four of these students could not set a time restriction; these were assigned to the control group2. We registered average daily smartphone screen time in the form of time spent (minutes), number of pickups and number of received notifications. We collected these measures for the mobile applications of WhatsApp, Instagram, Facebook, Snapchat, YouTube, and for the participants’ total screen activity3.

Behavioral measures of sustained attention

We administered two behavioral measures of sustained attention: the Sustained Attention to Response Task (SART) and the Metronome Response Task (MRT). The SART procedure from Robertson, Manly, Andrade, Baddeley, and Yiend (1997) was adopted and slightly adjusted. Participants were exposed to 450 randomized images of digits in different sizes and fonts (Hilbert, Nakagawa, Schuett, & Zihl, 2014) (1–9; 50 times per digit; visible for 250 ms; see Fig. 2), each followed by a mask (900 ms), and were asked to press the spacebar for every digit except a 3. Participants completed a practice round to become familiar with the task and were told to focus on both accuracy and speed. No feedback was given during the trials. The total task took around 7 min and provided a measure of both response time (pre-test: M = 321.08 ms, SD = 68.62; post-test: M = 317.77 ms, SD = 76.36) and response accuracy (pre-test: M = 0.93, SD = 0.04; post-test: M = 0.92, SD = 0.05).
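The SART scoring described above (mean response time plus accuracy over go and no-go trials) can be sketched as follows; the trial representation and function name are hypothetical illustrations, not the authors' implementation:

```python
# Hypothetical scoring sketch for the SART (not the authors' code):
# go trials are all digits except 3 (press), no-go trials are 3s (withhold).
def score_sart(trials):
    """trials: list of dicts with keys 'digit', 'responded' (bool), 'rt_ms'."""
    go = [t for t in trials if t["digit"] != 3]
    nogo = [t for t in trials if t["digit"] == 3]
    # Correct = a press on a go trial, or a withheld press on a no-go trial.
    correct = sum(t["responded"] for t in go) + sum(not t["responded"] for t in nogo)
    accuracy = correct / len(trials)
    # Mean response time over go trials that received a response.
    rts = [t["rt_ms"] for t in go if t["responded"]]
    mean_rt = sum(rts) / len(rts)
    return mean_rt, accuracy
```

Lower mean response times then indicate faster performance and higher accuracy scores more accurate performance, as stated in the text.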
Fig. 2

Timeline SART showing 3 trials with digits (250 ms) and a mask (900 ms).

The MRT was administered using the procedure of Seli, Cheyne, and Smilek (2013). Participants were instructed to click a computer mouse in synchrony with an auditory beep tone. One trial took 1,300 ms and started with silence (650 ms), then the beep (75 ms), followed by silence (575 ms). The total task included 450 trials (about 10 min). We calculated participants’ response variability (Seli et al., 2013) by first calculating the rhythmic response times (RRTs) – the time between the tone onset and the clicking response – for each trial. Then, we calculated the variance of the absolute RRT values over a five-trial moving window to prevent outliers from influencing the overall outcome. As this variance had a skewed distribution, we used a natural logarithmic transformation to obtain a normal distribution (pre-test: M = 6.03, SD = 0.62; post-test: M = 6.01, SD = 0.63). For the SART, lower response times indicate faster performance and higher accuracy scores indicate more accurate performance. For the MRT, lower scores indicate greater synchrony with the beeps.
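As an illustration, the MRT variability score described above might be computed as in the sketch below. Averaging the windowed variances before the log transform is our assumption; the text does not specify the aggregation step, so treat this as one plausible reading of the Seli et al. (2013) procedure:

```python
import math

# Sketch of the MRT variability score as described in the text:
# variance of absolute RRTs over a five-trial moving window,
# followed by a natural-log transform of the mean windowed variance
# (the aggregation by mean is an assumption, not stated in the text).
def mrt_variability(rrts_ms):
    abs_rrts = [abs(r) for r in rrts_ms]
    window_vars = []
    for i in range(len(abs_rrts) - 4):
        window = abs_rrts[i:i + 5]               # five-trial moving window
        mean = sum(window) / 5
        window_vars.append(sum((x - mean) ** 2 for x in window) / 5)
    return math.log(sum(window_vars) / len(window_vars))
```

Lower output values indicate greater synchrony with the beeps, matching the interpretation given in the text.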

Self-report measures of sustained attention

We administered two self-report scales to measure participants’ perceptions of attention. First, we assessed the perceived frequency of experienced attentional lapses using the 12-item MAAS-LO scale by Carriere, Cheyne, and Smilek (2008) (pre-test: α = 0.83, post-test: α = 0.89; 1 = Never to 7 = Very often). An example item is “During the past week… I could be experiencing some emotion and not be conscious of it until sometime later”. Second, we measured the frequency of experienced cognitive errors due to attentional lapses with the 12-item ARCES scale of Carriere et al. (2008) (pre-test: α = 0.89, post-test: α = 0.91; 1 = Never to 7 = Very often). An example item is “During the past week… I have gone to the fridge to get one thing (e.g., milk) and taken something else (e.g., juice)”. Higher scores on these measures indicate experiencing more attentional lapses (MAAS-LO) and more cognitive errors due to attentional lapses (ARCES).

Emotional well-being

To measure emotional well-being, a shortened version of the 20-item PANAS scale by Watson, Clark, and Tellegen (1988) was used with five positive affect items (pre-test: α = 0.73, post-test: α = 0.79; 1 = Never to 7 = Very often) and five negative affect items (pre-test: α = 0.48, post-test: α = 0.56; 1 = Never to 7 = Very often). The positive emotions used were interested, excited, proud, alert and active. The negative emotions used were distressed, upset, guilty, scared and irritable. A higher score indicated more frequent experiences of positive or negative emotions respectively.

Fear of Missing Out

We assessed FoMO using the 10-item scale of Przybylski et al. (2013) (pre-test: α = 0.78, post-test: α = 0.83; 1 = Strongly disagree to 7 = Strongly agree). An example item is “I fear others have more rewarding experiences than me”. A higher score indicates higher FoMO. For the moderator analysis, this variable was recoded into a dichotomous variable with the mean score of the pre-test as cut-off score (M = 3.92, SD = 0.88). Individuals scoring higher than this cut-off score were categorized as “High FoMO” (55.3% of the sample), those lower than the cut-off score were categorized as “Low FoMO” (44.7%).
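This mean-split dichotomization, also applied to the impulsivity and self-control moderators below, can be sketched as follows; the helper name is hypothetical:

```python
# Hypothetical sketch of the mean split used for the moderators:
# scores strictly above the sample mean are labeled "High", all others "Low".
def mean_split(scores):
    cutoff = sum(scores) / len(scores)  # mean score as cut-off
    return ["High" if s > cutoff else "Low" for s in scores]
```

Note that scores falling exactly on the mean land in the "Low" group in this sketch; the text does not specify how such ties were handled.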

Impulsivity

To measure impulsivity, we used the revised 30-item Barratt Impulsivity Scale version 11 (BIS-11) by Patton, Stanford, and Barratt (1995) (α = 0.83). An example item is “I often have extraneous thoughts when thinking” (1 = Strongly disagree to 7 = Strongly agree). A higher score indicated more impulsivity. To be included in the moderator analysis, this variable was recoded into a dichotomous variable with the mean score as cut-off score (M = 3.58, SD = 0.59). Individuals scoring higher than this cut-off score were categorized as “High impulsivity” (52.6% of the sample), those lower than the cut-off score were categorized as “Low impulsivity” (47.4%).

Self-control

Self-control was measured using the 9-item self-regulation scale by Van Deursen and colleagues (2015) (α = 0.80). An example item is “I can concentrate on one activity for a long time, if necessary” (1 = Strongly disagree to 7 = Strongly agree). A higher score indicates greater self-control. To be included in the moderator analysis, this variable was recoded into a dichotomous variable with the mean score as cut-off score (M = 4.09, SD = 0.85). Individuals scoring higher than this cut-off score were categorized as “High self-control” (51.3% of the sample), those lower than the cut-off score were categorized as “Low self-control” (48.7%). We collected three additional measures that concern participants’ perceptions of their own smartphone use for exploratory analyses:

Habitual smartphone use

We assessed self-reported habitual smartphone use with an adapted version of Verplanken and Orbell’s (2003) 10-item Self-Report Habit Index (SRHI) (pre-test: α = 0.86, post-test: α = 0.87). An example item is “My smartphone is something I use frequently” (1 = Strongly disagree to 7 = Strongly agree). A higher score indicates more habitual smartphone use.

Problematic smartphone use

To assess problematic smartphone use, we administered the 27-item Mobile Phone Problem Use Scale (MPPUS) by Bianchi and Phillips (2005) (pre-test: α = 0.87, post-test: α = 0.90). An example item is “I can never spend enough time on my mobile phone” (1 = Strongly disagree to 7 = Strongly agree). A higher score indicates more problematic smartphone use.

Self-reported smartphone use

A self-reported measure of smartphone use was included for exploratory purposes. Three open-ended questions asked for daily screen time in minutes, daily pickups and daily received notifications. These estimates were collected for smartphone use in general, and for overall social media use.

Self-perceived response to the intervention

Finally, we asked questions to gauge participants’ perceptions of the intervention. In the post-test measure, we asked participants (1) if they had used the restricted social media apps more frequently on alternative devices (1 = Completely disagree, 7 = Completely agree), (2) whether they had deactivated and then reinstalled the timers (Yes/No/Don’t want to say), and (3) how much they had struggled to respect the timers per app (1 = Completely disagree to 7 = Completely agree and 8 = Not applicable).

Procedure

Participants came to the lab, received a unique participant number and provided their informed consent to participate in a study that they were told “examines the effect of a screen time intervention on cognitive and emotional outcomes”. A maximum of 10 participants took part during each timeslot (M = 6.4 participants per session). We randomized participants by assigning even participant numbers to the experimental group and uneven numbers to the control group. Participants were seated at tables equipped with a laptop, headphones and a mouse. Tables were positioned so that there were minimal distractions from the environment or other participants. Depending on the number of enrolled participants in a session, the experimenter was accompanied by one or two student assistants. The study was approved by the university’s Ethical Review Board (#2019.139). Participants first performed one of the two sustained attention tasks (the order was counterbalanced across participants). Next, they completed self-report measures for sustained attention, personality, and mood. Third, they performed the remaining sustained attention task. Fourth, participants donated their smartphone use data from the 7 days preceding the baseline measurement by letting the researchers copy the information from the Apple Screen Time or Android Digital Well-being feature, respectively, into an Excel sheet. Finally, the experimental manipulation was implemented: Timers were installed to limit the participant’s use of Facebook, Instagram, Snapchat and YouTube for the upcoming week. The relative screen time reduction for these apps was 50% in the experimental group and 10% in the control group. We opted for a 10% reduction (as opposed to, for example, no reduction at all) in the control group to prevent Hawthorne(-like) effects (McCambridge et al., 2014, Taylor, 2004). Precisely 7 days later, participants were re-invited to the lab. 
They completed the same measures in the same location and order, using the same procedure. However, impulsivity and self-control were not re-measured, as these were considered stable over time. Finally, we asked participants how they experienced the intervention week.

Data analysis

All data analyses were performed using SPSS. We first conducted t-tests with the manipulation as the independent variable and the relative reduction based on the donated screen time measures (total and app-specific) as dependent variables, to examine if our manipulation had succeeded. A relative reduction score was computed with the formula (1 − post/pre) × 100%. Positive relative reductions indicate reduced screen time at the post-test, while negative relative reductions indicate increased screen time at the post-test as compared to the pre-test. Bootstrapping with N = 1,000 resamples was applied to obtain more reliable estimates. As we will elaborate in the Results and Discussion sections, these analyses indicated that the manipulation had failed. The large standard deviations in the relative reduction of social media screen time (see Table 2 in the Results section) show that there was substantial variability between participants, independent of the condition they were in. This implies that we can meaningfully assess the effect of the relative reduction in social media screen time, independent of the condition participants were in, using the intention-to-treat method (McCoy, 2017). This method is common in randomized controlled trials and offers a way to further explore both successful and failed manipulations. Below we report both the results from the treatment-as-intended (i.e., in function of the manipulation) and from the treatment-as-is.
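The relative reduction score and the bootstrap can be sketched as follows. The percentile-CI implementation is an assumption for illustration, since the exact SPSS bootstrap settings are not reported:

```python
import random

# Relative reduction as defined in the text: (1 - post/pre) * 100%.
# Positive values = reduced screen time at post-test, negative = increased.
def relative_reduction(pre, post):
    return (1 - post / pre) * 100

# Hedged sketch of a percentile bootstrap over N = 1,000 resamples for the
# mean relative reduction; SPSS's exact resampling scheme may differ.
def bootstrap_ci(values, n_boot=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(values, k=len(values))) / len(values)
        for _ in range(n_boot)
    )
    return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2)) - 1]
```

For example, a drop from 100 to 50 daily minutes yields a relative reduction of 50%, while an increase from 100 to 110 minutes yields −10%.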
Table 2

Independent samples t-test of differences in relative reduction between conditions.

App / measure              Condition      Mean relative reduction (%)   SD      95% CI (bootstrap N = 1,000)   df      t       p
Facebook                   Control        49.74                         26.19   40.62, 59.41                   64      1.49    0.141
                           Experimental   58.32                         20.53   51.21, 64.74
Instagram                  Control        35.01                         27.83   25.67, 44.09                   73      3.40    0.001*
                           Experimental   53.72                         19.39   47.37, 59.66
Snapchat                   Control        26.82                         48.73   6.86, 40.68                    63      1.00    0.322
                           Experimental   38.56                         46.07   21.26, 52.85
YouTube                    Control        53.76                         43.64   36.32, 68.34                   57      1.18    0.245
                           Experimental   66.29                         38.26   51.46, 78.25
Social media screen time   Control        38.01                         25.61   29.14, 46.19                   57.69   3.83    <0.001**
                           Experimental   56.59                         16.07   51.69, 61.55
Total daily screen time    Control        14.59                         30.02   3.95, 23.39                    74      −0.28   0.779
                           Experimental   12.55                         32.78   2.46, 22.78

Note. Social media screen time is a combined measure of screen time of Facebook, Instagram, Snapchat and YouTube. Total screen time is the total daily amount of time spent on the smartphones, including all app activities.

*p < .01, **p < .001

To analyze the effect of the treatment-as-intended, we performed a series of repeated measures AN(C)OVAs with the condition and the dichotomized moderators as the fixed factors, and the outcome measures as the dependent variables4. To analyze the effect of the treatment-as-is, we performed multiple regression analyses, with the relative reduction in social media screen time and the interaction term between this reduction and the moderators as predictor variables, and the outcome measures as dependent variables5.

Results

Effect of manipulation

The t-tests with the manipulation as the independent variable and the relative reductions of screen time as dependent variables showed that the manipulation had failed. The difference in screen time reduction between the conditions only reached significance for Instagram and overall social media screen time, and was generally not in line with the reductions that were aimed for (control −10%, experimental −50%; see Table 2). The manipulation failed mostly because participants in the control group reduced their social media app use on average by 38%, which was much more than the intended 10% (see Table 2). Moreover, an examination of the overall screen time revealed that there were no differences between the two conditions in terms of their overall average reduction in screen time, t(74) = −0.28, p = .779 (see Table 2)6.

Descriptives

Before testing our hypotheses, we provide some descriptive information. On average, participants used their phone for 274.79 min per day (SD = 110.44) in the week prior to the experiment, which is broadly in line with previous research (e.g., Andrews et al., 2015, Deng et al., 2019, Ellis et al., 2019). We registered the use of five social media apps. At the time of the pre-test, Instagram (M = 49.04, SD = 29.62) and WhatsApp (M = 50.30, SD = 40.94) were the most used (both n = 75), followed by Facebook, YouTube and Snapchat (see Table 3).
Table 3

Smartphone measures pre-test vs. post-test, results from paired samples t-tests.

| Measure | Pre-test M (SD) | Post-test M (SD) | t | p | Cohen's d |
|---|---|---|---|---|---|
| Overall screen time | 274.79 (110.44) | 226.99 (100.96) | 5.60 | <0.001*** | 0.452 |
| Facebook screen time | 25.64 (22.27) | 12.15 (12.42) | 8.60 | <0.001*** | 0.748 |
| WhatsApp screen time | 50.30 (40.94) | 48.51 (41.28) | 0.68 | 0.498 | 0.036 |
| Instagram screen time | 49.04 (29.62) | 27.16 (22.25) | 10.22 | <0.001*** | 0.835 |
| Snapchat screen time | 21.31 (23.98) | 12.74 (13.84) | 4.61 | <0.001*** | 0.438 |
| YouTube screen time | 34.29 (37.73) | 17.41 (27.39) | 5.36 | <0.001*** | 0.512 |
| Overall pickups | 128.47 (56.93) | 131.96 (56.20) | −0.86 | 0.391 | 0.062 |
| Facebook pickups | 11.19 (15.22) | 6.95 (9.40) | 3.91 | <0.001*** | 0.384 |
| WhatsApp pickups | 45.20 (49.77) | 45.34 (46.20) | −0.05 | 0.957 | 0.003 |
| Instagram pickups | 19.96 (17.23) | 12.63 (12.24) | 4.74 | <0.001*** | 0.490 |
| Snapchat pickups | 22.37 (37.15) | 14.18 (13.80) | 1.88 | 0.065* | 0.292 |
| YouTube pickups | 3.76 (4.79) | 2.17 (3.40) | 3.06 | 0.004** | 0.383 |
| Overall notifications | 376.18 (469.91) | 331.01 (257.28) | 0.95 | 0.346 | 0.119 |
| Facebook notifications | 3.38 (4.23) | 3.12 (3.06) | 0.44 | 0.663 | 0.068 |
| WhatsApp notifications | 303.72 (472.00) | 217.94 (182.50) | 1.53 | 0.131 | 0.240 |
| Instagram notifications | 16.92 (20.85) | 8.76 (8.34) | 3.31 | 0.002** | 0.514 |
| Snapchat notifications | 102.84 (181.68) | 58.48 (64.98) | 2.08 | 0.042** | 0.325 |
| YouTube notifications | 2.95 (4.87) | 2.14 (2.86) | 1.04 | 0.306 | 0.203 |
| Subjective screen time | 267.92 (430.39) | 169.65 (106.47) | 1.99 | 0.051* | 0.313 |
| Subjective screen time social media | 185.49 (309.23) | 102.32 (93.31) | 2.33 | 0.023** | 0.367 |
| Problematic smartphone use | 3.54 (0.69) | 3.36 (0.78) | 2.58 | 0.012** | 0.244 |
| Habitual smartphone use | 5.37 (0.83) | 5.03 (0.87) | 3.86 | <0.001*** | 0.400 |

Note. Screen time expressed in minutes. * p < .10, **p < .05, ***p < .001

Although participants were randomly assigned to their conditions based upon the order of entry into the experiment room (even vs. odd numbers), an independent samples t-test revealed that the control group and the experimental group were not fully equivalent in terms of their baseline smartphone behavior before the experiment started. Appendices A and B display the descriptive information split per condition7, and Appendix C provides the results of independent samples t-tests comparing the smartphone use of the experimental group with that of the control group at the time of the pre-test. These tests revealed that in the week prior to the baseline measurement participants in the control group accessed WhatsApp less frequently than participants in the experimental group did (control: M(SD) = 33.27(25.45), experimental: M(SD) = 56.32(61.79), t(47.97) = 2.04, p = .046). Additionally, in the week prior to the baseline measurement participants in the control group accessed Instagram more frequently (control: M(SD) = 25.15(19.58), experimental: M(SD) = 16.10(13.58), t(54.38) = -2.12, p = .033) and for a longer duration of time (control: M(SD) = 58.83(32.39), experimental: M(SD) = 40.01(23.83), t(74) = -3.03, p = .003) than participants in the experimental group. These findings suggest that Instagram might have been of greater importance to participants in the control group, which may explain why – for this platform – participants in the control group did not reduce their use as much as they did for the other platforms, leading to a significant difference in the relative reduction in Instagram use between the control group and the experimental group. Given this non-equivalence between groups, the findings of our study need to be interpreted with caution.
Appendix D shows how many participants reached and/or violated the enforced restriction8. Appendix E shows that participants in the experimental group reached significantly more timers than participants in the control group, though the groups did not differ significantly in how often they violated them. Neither the number of times participants reached the enforced restrictive timers nor the number of times they violated them differed by gender or operating system. Before testing our hypotheses, we explored overall differences between the baseline and the post-measurement. These show a significant overall decrease in the time spent on the examined social media apps (except for WhatsApp), in their use frequency, and – for Instagram and Snapchat – in the number of notifications received (see Table 3). However, while overall screen time decreased significantly, the overall number of pickups and notifications did not differ between the pre- and post-measurement (see Table 3). Participants in both conditions reported a significant decrease in their habitual phone use (experimental: Mdif = 0.29, t(39) = 2.04, p = .049, control: Mdif = 0.37, t(35) = 4.32, p < .001). Self-reported problematic smartphone use decreased in the experimental group only (pre: M = 3.65, SD = 0.67; post: M = 3.41, SD = 0.79; t(39) = 2.14, p = .038). While self-perceived screen time was on average higher at the pre- than at the post-measurement in both the experimental (pre: M = 305.90 min, SD = 580.02; post: M = 155.64 min, SD = 86.91; t(38) = 1.69, p = .099) and the control group (pre: M = 223.03 min, SD = 87.52; post: M = 186.21 min, SD = 125.14; t(32) = 1.64, p = .110), these differences in self-reported use did not reach significance. Regardless of condition, though, self-reported social media screen time was significantly lower at the post- than at the pre-measurement (pre: M = 185.49 min, SD = 309.23; post: M = 102.32, SD = 83.81, t(71) = 2.33, p = .023).
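The pre- vs. post-test comparisons in Table 3 are paired samples t-tests with Cohen's d for paired data. A minimal sketch of that computation, using simulated minutes rather than the study's raw data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative pre/post overall screen time in minutes (not the study's data)
pre  = rng.normal(275, 110, 76)
post = rng.normal(227, 101, 76)

# Paired samples t-test comparing the same participants at both time points
t, p = stats.ttest_rel(pre, post)

# Cohen's d for paired data: mean of the differences / SD of the differences
diff = pre - post
d = diff.mean() / diff.std(ddof=1)
print(round(float(t), 2), round(float(p), 3), round(float(d), 3))
```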
Appendix F shows the correlations between overall screen time, social media screen time and the outcome variables, as assessed during the baseline measurement. Overall, it shows that the total time participants spent on their smartphone was unrelated to any of the outcome measures of the study. Participants' social media screen time correlated with only one of the outcome measures, and only weakly and in the opposite direction of what one would expect based on the extant literature: Participants who spent more time on social media reported experiencing fewer attentional lapses (r = -0.27, p = .087).

Treatment-as-intended

Sustained attention

To examine whether the intervention improved participants' attentional performance (H1a, H1b), we performed repeated-measures ANOVAs with the attention measures as dependent variables, Condition (2; experimental versus control) as a between-subjects variable, and Time (2; pre-test versus post-test) as a within-subjects variable. A Bonferroni correction was applied to correct for multiple tests (critical α = 0.05/5 = 0.01). The results showed that the intervention had no main effect on the behavioral measures of sustained attention (see Table 4). With respect to the self-report attention measures, we found a main effect of time: Compared to the week prior to the baseline measure, participants experienced significantly fewer attentional lapses (Mdif = -0.20, F(1,74) = 8.71, p = .004, partial η² = 0.105) and fewer cognitive errors due to these lapses (Mdif = -0.85, F(1,74) = 79.84, p < .001, partial η² = 0.519) during the intervention week. There was also a main effect of condition for the latter measure: The experimental group experienced significantly more cognitive errors overall than the control group (Mdif = 0.57, F(1,74) = 9.03, p = .004, partial η² = 0.109). There was no interaction effect, however. In other words, the treatment-as-intended analyses indicate that the findings do not support H1a and H1b (see Table 4).
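In a 2 (Condition) × 2 (Time) mixed design like this one, the Condition × Time interaction is statistically equivalent to an independent samples t-test on the within-subject change scores (F = t²). A minimal sketch of that equivalence with the Bonferroni-corrected alpha, using simulated scores rather than the study's data (group sizes and means are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative pre/post scores per group (e.g., SART accuracy)
pre_ctrl  = rng.normal(0.93, 0.05, 36)
post_ctrl = rng.normal(0.93, 0.05, 36)
pre_exp   = rng.normal(0.92, 0.05, 40)
post_exp  = rng.normal(0.92, 0.05, 40)

# The interaction F in a 2x2 mixed ANOVA equals t**2 from an
# independent t-test on the change scores (post - pre)
change_ctrl = post_ctrl - pre_ctrl
change_exp  = post_exp - pre_exp
t, p = stats.ttest_ind(change_ctrl, change_exp)
F_interaction = t**2

# Bonferroni correction across the five attention measures
alpha_corrected = 0.05 / 5
print(float(F_interaction), float(p), p < alpha_corrected)
```

The full ANOVA additionally yields the two main effects; the sketch isolates the interaction term that tests whether the conditions changed differently over time.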
Table 4

Results from ANOVA with attention measures as dependent variables (treatment-as-intended).

| Measure | Effect | Control / Pre-test M (SD) | Experimental / Post-test M (SD) | F | p | partial η² |
|---|---|---|---|---|---|---|
| SART accuracy | Condition | 0.93 (0.05) | 0.92 (0.05) | 0.11 | 0.743 | 0.001 |
| | Time | 0.93 (0.04) | 0.92 (0.05) | 0.84 | 0.363 | 0.011 |
| | Condition × Time | | | 0.00 | 0.998 | 0.000 |
| SART response time | Condition | 314.92 (82.34) | 323.48 (62.63) | 0.34 | 0.561 | 0.005 |
| | Time | 321.08 (68.62) | 317.77 (76.36) | 0.15 | 0.701 | 0.002 |
| | Condition × Time | | | 0.20 | 0.654 | 0.003 |
| MRT variability | Condition | 6.08 (0.67) | 5.98 (0.60) | 0.45 | 0.502 | 0.006 |
| | Time | 6.03 (0.62) | 6.01 (0.63) | 0.11 | 0.739 | 0.002 |
| | Condition × Time | | | 3.31 | 0.073 | 0.043 |
| Attentional lapses (MAAS-LO) | Condition | 3.32 (0.87) | 3.71 (0.86) | 4.45 | 0.038 | 0.057 |
| | Time* | 3.63 (0.81) | 3.43 (0.95) | 8.71 | 0.004* | 0.105 |
| | Condition × Time | | | 2.33 | 0.131 | 0.030 |
| Cognitive errors (ARCES) | Condition* | 3.05 (0.87) | 3.62 (0.98) | 9.03 | 0.004* | 0.109 |
| | Time* | 3.78 (0.94) | 2.93 (0.99) | 79.84 | <0.001* | 0.519 |
| | Condition × Time | | | 0.59 | 0.444 | 0.008 |

*Indicates that the p-value falls below the Bonferroni-corrected alpha (0.05/5 = 0.01)

We explored whether there was a potential moderating effect of FoMO, self-control and impulsivity on the former relationships (H3). To that end, we performed repeated-measures ANCOVAs with the attention measures as dependent variables, Group (2; intervention versus control) as a between-subjects variable, Time (2; pre-test versus post-test) as a within-subjects variable, and the dichotomized moderators as covariates. None of the included moderators showed significant moderating effects (all p's > 0.085). We expected the intervention to improve participants' emotional well-being (H2). The intervention had no main effect on participants' experienced positive emotions (all p's > 0.221; see Table 5). There was a main effect of time: At the post-test, participants reported having experienced significantly fewer negative emotions during the prior week than at the pre-test (Mdif = -0.20, F(1,74) = 5.49, p = .022, partial η² = 0.069). Nevertheless, this effect was not contingent upon the manipulation (F(1,74) = 1.79, p = .185, partial η² = 0.024) – which makes sense, as the manipulation was hardly effective. Again, none of the included moderators (FoMO, impulsivity and self-control) showed significant moderating effects (all p's > 0.203).
Table 5

Results from ANOVA with emotional well-being measures as dependent variables (treatment-as-intended).

| Measure | Effect | Control / Pre-test M (SD) | Experimental / Post-test M (SD) | F | p | partial η² |
|---|---|---|---|---|---|---|
| Positive emotions | Condition | 4.45 (0.94) | 4.67 (0.73) | 1.54 | 0.219 | 0.020 |
| | Time | 4.51 (0.79) | 4.62 (0.88) | 1.52 | 0.221 | 0.020 |
| | Condition × Time | | | 0.01 | 0.932 | 0.000 |
| Negative emotions | Condition | 3.20 (0.66) | 3.21 (0.75) | 0.00 | 0.998 | 0.000 |
| | Time* | 3.30 (0.71) | 3.10 (0.70) | 5.49 | 0.022* | 0.069 |
| | Condition × Time | | | 1.79 | 0.185 | 0.024 |

*Indicates that the p-value falls below the Bonferroni-corrected alpha (0.05/2 = 0.025)


Summary

Overall, the results did not support the hypotheses that a 50% social media screen time restriction leads to better attentional performance (H1) and greater emotional well-being (H2) than a 10% restriction, nor the hypotheses that self-control, impulsivity and FoMO would consistently moderate these effects (H3, H4). This may in large part have to do with the manipulation not leading to the expected behavior in participants, especially for the control group (see above).

Treatment-as-is

Because of the failed manipulation, we followed up with a second set of analyses in which we ignored the condition participants were in and instead used participants' actual social media screen time reduction during the intervention week as a predictor of differences in attentional performance and emotional well-being. For these analyses, we used a total 'social media screen time' measure at both the pre- and the post-test that summed the Facebook, Instagram, Snapchat and YouTube screen time measures. Then, we computed a measure of each participant's relative reduction in social media screen time. Next, we performed a set of multiple regression analyses with the relative reduction in social media screen time as the independent variable, and the difference scores (i.e., the pre-test score minus the post-test score) of attention and well-being as the dependent variables. Because these exploratory analyses are repeated across five different attention outcome measures, we applied a Bonferroni correction, yielding a critical alpha of 0.01 (0.05/5) for the attention hypotheses. The analyses on emotional well-being are repeated across two different outcome measures, so there a Bonferroni-corrected critical alpha of 0.025 (0.05/2) was used. First, we explored whether the relative reduction in social media screen time could predict any of the outcome measures. The results revealed that none of the difference scores of the attention and well-being outcomes could be predicted by a change in social media screen time (all p's > 0.083). Next, we examined whether adding the moderators FoMO, self-control and impulsivity to the model would affect the results. We included these moderators both as direct predictors and in interaction terms with the relative social media screen time reduction.
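The treatment-as-is model – a difference score regressed on the relative reduction, a moderator, and their interaction – can be sketched as an ordinary least squares fit. This is a minimal illustration with simulated variables, not the study's data; the variable names and distributions are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 76

# Illustrative variables (not the study's raw data)
reduction = rng.normal(45, 25, n)    # relative social media screen-time reduction (%)
fomo      = rng.normal(3.0, 0.8, n)  # moderator score, e.g., FoMO
diff_attn = rng.normal(0.0, 1.0, n)  # difference score: pre-test minus post-test

# Design matrix: intercept, reduction, moderator, reduction x moderator
X = np.column_stack([np.ones(n), reduction, fomo, reduction * fomo])
beta, *_ = np.linalg.lstsq(X, diff_attn, rcond=None)

# OLS standard errors and two-sided p-values for each coefficient
resid = diff_attn - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
p_vals = 2 * stats.t.sf(np.abs(beta / se), dof)
print(beta.round(3), p_vals.round(3))
```

A significant interaction coefficient would indicate that the effect of the screen-time reduction on the outcome depends on the moderator; in the study, no such interactions reached significance.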
The analyses revealed that the extent to which participants reduced their social media screen time did not predict any change in attentional performance and emotional well-being nor was any moderator significantly influential (all p’s > 0.025)9.

Discussion

In the past decade, we have witnessed an increase in studies focusing on the complex associations between the use of the smartphone and its (mobile) social media apps on the one hand, and attentional functioning (Fitz et al., 2019, Judd, 2014, Marty-Dugas et al., 2018, Rosen et al., 2013, Ward et al., 2017, Wei et al., 2012) as well as emotional well-being (Aalbers et al., 2019, Brailovskaia et al., 2020, Escobar-Viera et al., 2018, Frison and Eggermont, 2017, Stieger and Lewetz, 2018, Tromholt, 2016, Twenge and Campbell, 2018, Twenge and Campbell, 2019, Twenge et al., 2018) on the other hand. While research in this field is not without criticism, among others for its over-reliance on self-report data and cross-sectional survey methodologies, the concerns over the potential harm of mobile social media use have nonetheless given impetus to the development of screen time apps that can help people to protect themselves from harm by restricting their social media use. The current study explored the effects of such a social media screen time restriction on sustained attention and emotional well-being. The findings show, first of all, that the intervention did not have the intended effect. Specifically, we implemented a 50% restriction in social media screen time for an experimental group, and compared this to a control group with a 10% restriction. Yet, this screen time manipulation failed, mostly because participants in the control group reduced their social media app use on average by 38%, which was much more than the intended 10%. We deliberately opted not to include a 0% reduction control group in our design, in order to avoid Hawthorne(-like) effects (cf. McCambridge et al., 2014, Taylor, 2004) – that is, to give the control group participants, too, a full sense of being involved in an experiment.
The observation that a non-zero reduction for a control group may trigger additional – and more problematic – side effects than the Hawthorne(-like) effects we aimed to prevent is an interesting finding in itself. It provides clear suggestions for the optimal implementation of control groups in intervention studies of the current type, and deserves to be followed up as a target of investigation in its own right. Indeed, some participants indicated that they felt uncomfortable when encountering a time limit. It is imaginable that participants reduced their screen time more than they needed to in order to avoid that situation. Alternatively, the failed manipulation may be due to a placebo effect (cf. Stewart-Williams & Podd, 2004). In this case, the mere expectation of receiving a social media reduction may have sufficed to promote behavior change in the form of reduced social media use. Similar placebo effects have been found in marketing research (Irmak, Block, & Fitzsimons, 2005). To deal with the failed screen time manipulation, we provided analyses both for treatment-as-intended and treatment-as-is, with the latter set of analyses disregarding the intervention conditions and instead exploring linear associations with the degree of relative screen time reduction based on the data we obtained. Interestingly, neither analysis revealed a noticeable effect on the outcome measures. This finding suggests an alternative explanation for the lack of findings, namely that there may not be any negative association between social media screen time and the outcome measures to begin with. Indeed, the pre-test data – which are unaffected by the failed screen time manipulation – did not show any of the hypothesized correlations between social media screen time, emotional well-being and attentional performance.
On the contrary, the only relationships found between social media screen time and the outcome measures ran counter to what one might expect: Heavier social media users reported experiencing fewer attentional lapses and negative emotions. The lack of any negative association between social media screen time and the outcome measures may explain why reducing this screen time has no causal impact: If social media screen time does not affect these outcomes much, altering it is unlikely to cause much change in them. This finding is interesting in light of recent debates in the field over the validity of screen time studies. A recurring concern voiced in these debates is that self-report measures of screen time are flawed to such an extent that their use can lead to biased interpretations (Kaye et al., 2020, Sewall et al., 2020). A key strength of the current study is that we used a behavioral measure of screen time. The fact that this measure shows no relationship to cognitive performance nor emotional well-being calls into question the 'moral panic' over social media screen time (Orben, 2020). An alternative explanation that should be mentioned here is that, despite the randomization of participants, the control and experimental group were not fully equivalent in terms of their smartphone behavior in the week prior to the experiment. The control group appeared to consist of heavier Instagram users, whereas the experimental group consisted of heavier WhatsApp users. It is conceivable that this non-equivalence had some influence on our findings. After all, for the light Instagram users in the experimental group, a 50% reduction in Instagram use may not have been very impactful, whereas for the heavy Instagram users in the control group, the actually enforced relative reduction of 35% may have had a more profound impact, thus leveling out any difference between the two groups.
Future researchers thus need to carefully consider their experimental procedures to maximize the chances of equivalence between conditions. While we believe that a strength of our current study is the use of actual smartphone data and performance-based measures of attention, the paucity of such measures in previous work prevented us from conducting an appropriate a priori power analysis, resulting in a sample size that may have been too small – as indeed indicated by, for example, the accidental but significant differences between conditions in terms of their baseline app use (see above). We hope that our study can serve that purpose in the future. While the manipulation did not produce an effect, the findings of our study did show that – regardless of the condition they were in – people reported experiencing fewer cognitive errors and attentional lapses at the post-test. This is interesting, given that their actual attentional performance did not improve. Again, these findings are interesting in light of the recent debates over the use of self-report measures in research on the associations between screen time and psychological functioning. Recent studies show that the use of self-report measures leads to an artificial inflation of the effect sizes of these associations (Sewall et al., 2020, Shaw et al., 2020), that self-reports of especially smartphone use are inaccurate (Boase and Ling, 2013, Ellis et al., 2019, Vanden Abeele et al., 2013), and that the discrepancies between self-reported and behavioral measures of smartphone use are themselves correlated with psychosocial functioning (Sewall et al., 2020). The mixed findings in research on the effects of screen time have led to a call for greater conceptual and methodological thoroughness (e.g., Sewall et al., 2020, Shaw et al., 2020, Kaye et al., 2020, Whitlock and Masur, 2019), with a specific call to prioritize behavioral measures over self-report measures.
The discrepancy between the behavioral and self-report attention measures may be an artifact of this shortcoming of self-report methodology. The null results for FoMO, self-control and impulsivity as influential moderators should be elaborated on here. It was expected that a screen time intervention would negatively impact the emotional well-being of individuals, especially those high on FoMO, since reduced social media screen time also reduces the possibility to stay up-to-date. However, our results could not corroborate this notion. Several authors have suggested that rather than being a predictor of social media use, FoMO may be a consequence of such online behavior (e.g., Alutaybi, Al-thani, McAlaney & Ali, 2020; Buglass, Binder, Betts, & Underwood, 2017; Hunt et al., 2018). In the three-week intervention study of Hunt et al. (2018), for example, reduced social media use actually reduced feelings of FoMO. With our data, we could test this possibility. Hence, we executed a repeated measures ANOVA with FoMO as the within-subjects factor and condition as the between-subjects factor. This analysis revealed that the intervention had no significant effect on experienced FoMO (i.e., the experimental group did not experience larger changes in FoMO than the control group: F(1,74) = 0.09, p = .762). However, there was an effect of time on FoMO: at the post-test, FoMO was significantly lower than at the pre-test (Mdif = 0.18, F(1,74) = 6.65, p = .012). Perhaps this is indicative of an “intervention effect”, since our manipulation had failed and all participants significantly reduced their social media use during the intervention week. Also, an overall finding of this study, which aligns with what prior research has found, was that participants were not able to estimate their screen time accurately: While participants' actual screen time decreased during the intervention week, their self-reported screen time did not differ over time.
Interestingly, participants did report a decrease in habitual use and problematic use. This may suggest that people have a vague sense of their behavior (“I reduced my smartphone use”), but are unable to convert this adequately into numbers such as screen time in minutes. Alternatively, participants may have provided a socially desirable answer. In either case, our findings align with both recent and older studies showing that subjective screen time measures deviate from objective measures (e.g., Andrews et al., 2015, Boase and Ling, 2013, Vanden Abeele et al., 2013, Verbeij et al., 2021).

Limitations and future directions

This study is among the first to examine the effectiveness of a social media screen time reduction on sustained attention and emotional well-being. One of its strengths is the inclusion of behavioral measures, both for screen time and for sustained attention. The study is not without limitations, however. A number of methodological choices were made that significantly limit comparability with other findings in the field. The lack of a true control group (in which no intervention was implemented) and the limited sample size are major limitations of the current study. Future research should include more participants and should consider the use of a true control group, in which no intervention is implemented. Moreover, future research might look at different degrees of screen time reductions, ranging from no reduction to complete abstinence, to better address to what extent the magnitude of the restriction matters. In addition, future work ought to consider how to account for individuals' unique smartphone app repertoires. For instance, some individuals in our study were super users of mobile games rather than of social media. While this may lower generalizability, researchers might account for unique app repertoires by setting time restrictions on an individual's top 5 apps, or on total screen time. Also, a one-week intervention is short. It is likely that a longer intervention is needed to produce an effect on the outcomes examined. Overall, a general observation that we make is that future research on screen time interventions needs to carefully question and compare (1) which types of interventions affect (2) which outcomes, (3) for whom and (4) under which conditions, and (5) because of which theoretical mechanisms.
An additional limitation is that, although they were kept blind about which condition they were in, participants were informed about what the experiment was about, because willingness to set a restriction on one's screen time was an important eligibility criterion; installing such a timer without the participants' informed consent was deemed unethical. Given that the timers were installed on participants' personal phones, it was easy for participants to look up what restriction was enforced on them. Future research might explore whether participants can be kept blind. Perhaps this can be attained via the development of a screen time app tailored to this purpose. Notably, even though we found no increase in the use of social media on alternative devices, it should be acknowledged that social media can be accessed from other devices than smartphones alone, something that could be accounted for in future work. In this context, it is relevant to mention Meier and Reinecke's (2020) taxonomy of computer-mediated communication. Meier and Reinecke advise researchers to carefully consider which level of analysis they are focusing on, most notably that of the device (i.e., a 'channel-based' approach) versus that of the functionality or interaction one has through the device. Decisions regarding the level of analysis are typically grounded in theoretical assumptions about the mechanisms explaining effects. We consider this observation relevant to researchers studying 'digital detoxes' or screen time interventions, as they similarly have to consider what it is exactly that they want participants to 'detox' from: the device, a particular app or functionality, or a type of interaction. Careful consideration of this issue is important, as it may be key to understanding why the extant research shows mixed evidence.
In the current study we attempted to address the type of interaction people have with social media, targeting especially 'passive social media use' by enforcing only a partial restriction, but we only focused on mobile social media. Future researchers may wish to consider more explicitly their level of analysis and how to operationalize it in an intervention. Finally, as other research also shows (e.g., Ohme, Araujo, de Vreese, & Piotrowski, 2020), research designs that include behavioral measures of smartphone use are both ethically and methodologically challenging. In the current study, we only invited participants to the lab with smartphones running on recent versions of iOS or Android. However, some participants showed up unaware of the operating system of their phone. Others used older versions, on which the screen time monitoring features did not function, or had forgotten to activate the screen time monitoring feature prior to the baseline measurement (which we had also specified as an eligibility criterion). This led to the exclusion of several participants. Additionally, in a pilot study of the experiment, we noticed that different phone brands and types use different interfaces to display screen time information. This led to confusion, for instance, over whether the displayed numbers were weekly or daily totals. Hence, to avoid errors, we chose not to let participants record their own screen time but rather explicitly asked participants to hand over their phone to a trained researcher who copied the information into a spreadsheet and installed the timers. Participants who felt uncomfortable with this procedure were invited to closely monitor the researcher, or – if desired – to navigate the interface themselves. Although only a handful of students chose this option, this shows that there are ethical implications to using data donation procedures that researchers have to consider.
To circumvent these issues in future studies, participants could be instructed to install the same app. However, this would increase the demands placed on participants. Participation in studies of this nature is already highly demanding and intensive, since participants have to undergo a multi-day intervention targeting behavior that is intrinsic to their daily lives, and share personal information. Additionally, asking participants to install a specific app that potentially remotely monitors their phone use can raise ethical concerns, especially when using a commercial app that profits from monitoring (and selling) user data. Overall, it became clear that it is difficult to achieve the required sample size to investigate complex designs of this nature. Nonetheless, the contrasting findings in the extant research call for more research on the causal relations between social media use on the one hand, and emotional well-being and cognitive functioning on the other hand. This can only be achieved through slow science and large resources.

Conclusion

This study explored whether a one-week 50% reduction in the time people spend on Facebook, Instagram, Snapchat and YouTube leads to greater improvements in sustained attention and emotional well-being than a 10% reduction. The intervention did not have an effect. Because the manipulation failed, we explored whether participants’ social media screen time reduction predicted changes in attentional performance and emotional well-being, independent of the condition participants were in. This was not the case. Overall, the study calls for further research on the effect of screen time restrictions on attention and emotional well-being. While we encourage scholars to embed behavioral measures into their research design, future studies should be attentive to the methodological implications of using such measures.

CRediT authorship contribution statement

Marloes M.C. van Wezel: Conceptualization, Methodology, Formal analysis, Writing - original draft, Writing - review & editing. Elger L. Abrahamse: Methodology, Formal analysis, Writing - review & editing. Mariek M.P. Vanden Abeele: Conceptualization, Methodology, Formal analysis, Writing - review & editing, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Table A1

Summary of smartphone usage at time of pre-test and post-test for the control group (N = 36).

Variable | Pre-test M (95% CI) | % reduction installed | SD | N | Post-test M (95% CI) | % reduction observed | SD | N
Overall screen time | 266.88 (234.81, 300.45) | – | 103.24 | 36 | 221.79 (191.74, 251.47) | – | 95.53 | 36
Overall pickups | 130.42 (113.48, 149.55) | – | 54.62 | 36 | 126.95 (110.76, 145.55) | – | 50.45 | 36
Overall notifications | 325.57 (229.97, 438.20) | – | 294.50 | 32 | 344.16 (250.17, 464.57) | – | 312.00 | 33
WhatsApp
  Screen time | 42.41 (33.85, 52.00) | – | 27.31 | 35 | 40.84 (32.77, 49.35) | – | 25.40 | 35
  Pickups | 33.27 (24.78, 43.04) | – | 25.45 | 31 | 34.90 (27.33, 43.61) | – | 24.24 | 32
  Notifications | 267.17 (157.71, 406.77) | – | 363.18 | 31 | 219.30 (147.87, 302.69) | – | 220.30 | 31
Facebook
  Screen time | 24.26 (18.05, 30.73) | – | 18.73 | 30 | 13.06 (8.53, 18.09) | – | 13.17 | 30
  Installed timer | 22.09 (15.74, 28.98) | 8.9% | 17.23 | 28 | – | 48.01% | – | –
  Pickups | 12.95 (6.82, 20.86) | – | 18.04 | 26 | 8.18 (4.43, 12.88) | – | 11.58 | 27
  Notifications | 3.16 (1.94, 4.55) | – | 3.19 | 21 | 2.58 (1.37, 4.09) | – | 3.58 | 26
Instagram
  Screen time | 58.83 (49.16, 69.81) | – | 32.39 | 36 | 37.23 (29.69, 45.72) | – | 25.40 | 36
  Installed timer | 52.15 (42.04, 63.20) | 11.4% | 29.83 | 32 | – | 35.0% | – | –
  Pickups | 25.15 (18.95, 32.00) | – | 19.58 | 32 | 17.61 (12.88, 22.51) | – | 14.67 | 32
  Notifications | 15.11 (10.01, 21.71) | – | 16.17 | 29 | 13.20 (6.62, 22.66) | – | 24.88 | 31
Snapchat
  Screen time | 20.15 (12.73, 29.36) | – | 22.77 | 30 | 14.52 (9.19, 20.74) | – | 16.51 | 30
  Installed timer | 18.35 (12.20, 27.16) | 8.9% | 21.17 | 28 | – | 26.8% | – | –
  Pickups | 18.72 (13.87, 24.62) | – | 15.30 | 27 | 16.59 (11.08, 23.58) | – | 16.99 | 27
  Notifications | 84.96 (44.12, 136.53) | – | 110.80 | 24 | 62.16 (35.98, 96.23) | – | 82.35 | 26
YouTube
  Screen time | 40.10 (26.54, 56.52) | – | 40.36 | 29 | 21.03 (10.81, 33.80) | – | 33.03 | 33
  Installed timer | 33.51 (20.17, 48.97) | 16.4% | 36.89 | 25 | – | 53.8% | – | –
  Pickups | 4.71 (2.73, 6.94) | – | 5.70 | 28 | 2.71 (1.32, 4.31) | – | 4.05 | 28
  Notifications | 2.31 (0.89, 3.88) | – | 3.31 | 17 | 1.35 (0.38, 2.49) | – | 2.93 | 26
Table B1

Summary of smartphone usage at time of pre-test and post-test for the experimental group (N = 40).

Variable | Pre-test M (95% CI) | % reduction installed | SD | N | Post-test M (95% CI) | % reduction observed | SD | N
Overall screen time | 281.92 (245.14, 318.33) | – | 117.39 | 40 | 231.66 (199.43, 265.70) | – | 106.60 | 40
Overall pickups | 126.63 (108.32, 145.39) | – | 59.75 | 36 | 135.07 (116.34, 155.80) | – | 60.37 | 38
Overall notifications | 422.45 (273.75, 627.77) | – | 587.32 | 35 | 315.37 (255.35, 373.14) | – | 188.94 | 36
WhatsApp
  Screen time | 57.21 (44.89, 75.40) | – | 49.25 | 40 | 55.22 (41.68, 71.69) | – | 50.72 | 40
  Pickups | 56.32 (38.53, 77.04) | – | 61.79 | 36 | 53.30 (37.51, 74.08) | – | 56.63 | 37
  Notifications | 338.60 (189.07, 545.43) | – | 548.80 | 35 | 212.95 (170.85, 259.30) | – | 138.54 | 35
Facebook
  Screen time | 26.53 (19.36, 34.80) | – | 24.83 | 36 | 10.98 (7.91, 15.26) | – | 11.69 | 37
  Installed timer | 13.38 (9.62, 17.57) | 49.9% | 12.52 | 35 | – | 58.3% | – | –
  Pickups | 9.96 (5.97, 14.33) | – | 12.50 | 32 | 5.07 (3.19, 7.37) | – | 6.50 | 34
  Notifications | 3.30 (1.81, 5.45) | – | 4.79 | 29 | 2.68 (1.89, 3.57) | – | 2.50 | 30
Instagram
  Screen time | 40.01 (32.58, 48.06) | – | 23.83 | 39 | 17.87 (13.58, 21.88) | – | 13.62 | 39
  Installed timer | 20.13 (16.38, 23.98) | 49.7% | 11.68 | 39 | – | 53.7% | – | –
  Pickups | 16.10 (11.96, 20.62) | – | 13.58 | 36 | 7.75 (5.67, 9.99) | – | 6.64 | 37
  Notifications | 17.63 (9.80, 28.01) | – | 24.37 | 31 | 7.35 (4.84, 10.12) | – | 7.81 | 34
Snapchat
  Screen time | 22.30 (14.87, 31.83) | – | 25.25 | 35 | 11.21 (7.74, 14.92) | – | 11.08 | 35
  Installed timer | 11.41 (7.57, 16.14) | 48.8% | 12.48 | 35 | – | 38.6% | – | –
  Pickups | 25.57 (13.40, 44.71) | – | 48.00 | 31 | 11.25 (8.21, 14.55) | – | 9.62 | 32
  Notifications | 111.61 (52.27, 198.17) | – | 218.91 | 31 | 51.85 (37.50, 66.60) | – | 42.88 | 31
YouTube
  Screen time | 25.08 (15.05, 36.35) | – | 32.59 | 32 | 8.87 (5.12, 13.45) | – | 12.89 | 33
  Installed timer | 12.86 (7.88, 18.97) | 48.7% | 16.10 | 32 | – | 66.3% | – | –
  Pickups | 2.41 (1.46, 3.44) | – | 2.61 | 25 | 0.84 (0.38, 1.44) | – | 1.38 | 29
  Notifications | 2.79 (0.75, 5.67) | – | 5.52 | 18 | 1.14 (0.51, 1.83) | – | 1.71 | 26
Table C1

Independent samples t-test of smartphone use to compare the control group (N = 36) with the experimental group (N = 40) at the pre-test.

Variable | Mdif | SDdif | 95% CI | t | df | p
Overall screen time | 15.04 | 25.48 | (−35.74, 65.82) | 0.59 | 74 | 0.557
Overall pickups | −3.79 | 13.71 | (−31.15, 23.58) | −0.28 | 68 | 0.783
Overall notifications | 96.88 | 115.19 | (−133.17, 326.93) | 0.84 | 65 | 0.403
WhatsApp
  Screen time | 14.80 | 9.38 | (−3.90, 33.50) | 1.58 | 73 | 0.119
  Pickups | 23.04 | 11.27 | (0.39, 45.70) | 2.04 | 47.97 | 0.046*
  Notifications | 71.43 | 116.16 | (−160.63, 303.43) | 0.62 | 64 | 0.541
Facebook
  Screen time | 3.66 | 5.15 | (−6.61, 13.93) | 0.71 | 74 | 0.480
  Pickups | −2.99 | 4.02 | (−11.04, 5.06) | −0.74 | 56 | 0.460
  Notifications | 0.14 | 1.20 | (−2.27, 2.56) | 0.12 | 48 | 0.905
Instagram
  Screen time | −19.82 | 6.53 | (−32.84, −6.80) | −3.03 | 74 | 0.003*
  Pickups | −9.05 | 4.13 | (−17.33, −0.76) | −2.12 | 54.38 | 0.033*
  Notifications | 2.51 | 5.38 | (−8.25, 13.27) | 0.47 | 58 | 0.642
Snapchat
  Screen time | 2.72 | 5.40 | (−8.04, 13.49) | 0.51 | 74 | 0.615
  Pickups | 6.85 | 9.65 | (−12.48, 26.18) | 0.71 | 56 | 0.481
  Notifications | 26.65 | 48.98 | (−71.59, 124.89) | 0.54 | 53 | 0.589
YouTube
  Screen time | −12.24 | 8.08 | (−28.35, 3.87) | −1.51 | 74 | 0.134
  Pickups | −2.30 | 1.20 | (−4.73, 0.12) | −1.93 | 38.80 | 0.062
  Notifications | 0.48 | 1.55 | (−2.67, 3.64) | 0.31 | 33 | 0.758

Note: * p < .05
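The fractional degrees of freedom in Table C1 (e.g., df = 47.97 for WhatsApp pickups) indicate that Welch's correction for unequal variances was applied to those comparisons, while integer df correspond to the standard pooled-variance test. A minimal sketch of the Welch statistic; the function name `welch_t` and the computation shown are an illustration of the general method, not code from the study:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's independent-samples t statistic with Welch-Satterthwaite df."""
    va, vb = variance(a), variance(b)   # sample variances (n - 1 denominator)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation: yields fractional df when variances differ
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

With equal group sizes and equal sample variances the approximation reduces to df = n1 + n2 − 2, the value reported for the pooled-variance rows.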

Table D1

Descriptive statistics of the timer analysis.

Did participants see and respect a timer?

 | Never (N = 5) | Seen and respected (N = 19) | Seen and violated at least once (N = 38) | Always violated (N = 4)
Gender (Male / Female) | 1 / 4 | 5 / 14 | 15 / 23 | 2 / 2
Phone type (Android / iPhone) | 5 / 0 | 12 / 7 | 28 / 10 | 2 / 2
Condition (Experimental / Control) | 1 / 4 | 10 / 9 | 23 / 15 | 2 / 2
Table E1

Independent samples t-test results of differences in ignoring and reaching timers between conditions and operating system.

Variable | M | SD | Bootstrap 95% CI (N = 1,000) | df | t | p
Violated
  Condition (Control / Experimental) | 2.33 / 4.06 | 4.12 / 5.47 | (1.00, 4.03) / (2.42, 6.03) | 64 | 1.42 | 0.160
  Operating system (iPhone / Android) | 1.84 / 3.85 | 3.48 / 5.35 | (0.63, 3.61) / (2.47, 5.52) | 50.61 | 1.51 | 0.136
  Gender (Male / Female) | 4.39 / 2.67 | 6.51 / 3.82 | (2.09, 7.34) / (1.61, 3.75) | 30.30 | 1.16 | 0.254
Reached
  Condition* (Control / Experimental) | 2.67 / 5.61 | 2.34 / 4.27 | (1.87, 3.55) / (4.22, 7.00) | 55.97 | 3.55 | 0.001*
  Operating system (iPhone / Android) | 3.68 / 4.51 | 2.77 / 4.15 | (2.47, 4.90) / (3.35, 5.76) | 64 | 0.80 | 0.428
  Gender (Male / Female) | 3.96 / 4.44 | 3.05 / 4.17 | (2.79, 5.21) / (3.33, 5.74) | 64 | −0.49 | 0.625

Note. * p < .05
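The confidence intervals in Table E1 are bootstrap intervals based on N = 1,000 resamples. A percentile bootstrap for a group mean can be sketched as follows; the function name `bootstrap_ci` and the fixed seed are illustrative assumptions, not details from the study:

```python
import random
from statistics import mean

def bootstrap_ci(data, n_boot=1000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    # Resample with replacement n_boot times, recording each resample's mean
    boots = sorted(mean(rng.choices(data, k=len(data))) for _ in range(n_boot))
    lo = boots[int(n_boot * (alpha / 2))]          # 2.5th percentile
    hi = boots[int(n_boot * (1 - alpha / 2)) - 1]  # 97.5th percentile
    return lo, hi
```

Applied to raw per-participant counts of timer violations, such a procedure yields intervals of the kind reported per group in the table.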

Table F1

Correlations between (social media) screen time and outcome variables at baseline measurement.

Variable | 1. | 2. | 3. | 4. | 5. | 6. | 7. | 8. | 9. | 10. | 11. | 12.
1. Overall Screen Time | – | 0.612*** | -0.010 | 0.052 | 0.163 | -0.112 | -0.002 | 0.148 | -0.094 | -0.024 | 0.119 | 0.014
2. Social Media Screen Time | <0.001*** | – | 0.013 | 0.093 | 0.106 | -0.291** | -0.036 | 0.158 | -0.096 | -0.110 | 0.160 | -0.135
3. SART Response Time | 0.930 | 0.912 | – | 0.613*** | 0.047 | -0.056 | -0.028 | -0.034 | -0.091 | -0.076 | 0.050 | 0.108
4. SART Accuracy | 0.654 | 0.426 | <0.001*** | – | -0.185 | -0.237** | -0.287** | -0.009 | 0.068 | -0.012 | 0.170 | -0.169
5. MRT Variability | 0.159 | 0.364 | 0.689 | 0.110 | – | 0.271** | 0.137 | -0.073 | 0.064 | 0.126 | -0.019 | 0.340**
6. Attentional Lapses | 0.334 | 0.011** | 0.629 | 0.040** | 0.018** | – | 0.638*** | -0.262** | 0.114 | 0.358** | -0.539*** | 0.441***
7. Cognitive Errors Due to Attentional Lapses | 0.986 | 0.760 | 0.811 | 0.012** | 0.237 | <0.001*** | – | -0.244** | -0.042 | 0.170 | -0.628*** | 0.484***
8. Positive Emotions | 0.201 | 0.174 | 0.768 | 0.941 | 0.532 | 0.022** | 0.033** | – | 0.093 | -0.158 | 0.169 | -0.207
9. Negative Emotions | 0.421 | 0.407 | 0.434 | 0.561 | 0.584 | 0.325 | 0.721 | 0.423 | – | 0.261** | -0.072 | -0.143
10. FoMO (pre-test) | 0.837 | 0.343 | 0.512 | 0.918 | 0.278 | 0.001*** | 0.142 | 0.173 | 0.023** | – | -0.268** | 0.202
11. Self-control | 0.304 | 0.167 | 0.669 | 0.143 | 0.873 | <0.001*** | <0.001*** | 0.144 | 0.536 | 0.019 | – | -0.414
12. Impulsivity | 0.903 | 0.245 | 0.355 | 0.145 | 0.003** | <0.001*** | <0.001*** | 0.073 | 0.217 | 0.080 | <0.001*** | –

Note. Above diagonal = Pearson’s r, below diagonal = p-value.

* p < .10, ** p < .05, *** p < .001
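The values above the diagonal in Table F1 are Pearson product-moment correlations. As a reminder of how the statistic is computed, a minimal sketch; the function name `pearson_r` and the computation are a generic illustration, not code from the study:

```python
import math
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences of observations."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # unscaled covariance
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))         # unscaled SD of x
    sy = math.sqrt(sum((b - my) ** 2 for b in y))         # unscaled SD of y
    return cov / (sx * sy)
```

Perfectly linear data yield r = 1.0 (or −1.0 for a decreasing relation), while the modest magnitudes in Table F1 reflect weak-to-moderate associations.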
