
Online screening and feedback to increase help-seeking for mental health problems: population-based randomised controlled trial.

Philip J Batterham, Alison L Calear, Matthew Sunderland, Natacha Carragher, Jacqueline L Brewer.

Abstract

BACKGROUND: Community-based screening for mental health problems may increase service use through feedback to individuals about their severity of symptoms and provision of contacts for appropriate services. AIMS: The effect of symptom feedback on service use was assessed. Secondary outcomes included symptom change and study attrition.
METHOD: Using online recruitment, 2773 participants completed a comprehensive survey including screening for depression (n=1366) or social anxiety (n=1407). Across these two versions, approximately half (n=1342) of the participants were then randomly allocated to receive tailored feedback. Participants were reassessed after 3 months (Australian New Zealand Clinical Trials Registry ANZCTR12614000324617).
RESULTS: A negative effect of providing social anxiety feedback to individuals was observed, with significant reductions in professional service use. Greater attrition and lower intentions to seek help were also observed after feedback.
CONCLUSIONS: Online mental health screening with feedback is not effective for promoting professional service use. Alternative models of online screening require further investigation. DECLARATION OF INTEREST: None. COPYRIGHT AND USAGE: © The Royal College of Psychiatrists 2016. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) licence.


Year:  2016        PMID: 27703756      PMCID: PMC4995576          DOI: 10.1192/bjpo.bp.115.001552

Source DB:  PubMed          Journal:  BJPsych Open        ISSN: 2056-4724


Screening for mental health problems in clinical settings has been purported to increase recognition and lead to better treatment outcomes.[1] However, evidence from systematic reviews and meta-analyses suggests that screening alone has little impact on the detection and management of depression by clinicians.[2-5] The US Preventive Services Task Force now recommends routine screening for depression only if there are systems in place to deliver adequate treatment and follow-up.[6] Furthermore, the UK National Institute for Health and Care Excellence (NICE) guidelines no longer recommend screening in primary care.[7] Nevertheless, research has tended to focus on the use of screening tools in clinical settings including primary care, rather than in population settings.

Screening in the population may empower individuals to seek appropriate care by providing them with tailored feedback about their symptoms and recommendations for appropriate services.[8,9] The rise of internet technology has enabled population screening with feedback to be rapidly disseminated.[10,11] Uncontrolled studies have suggested that providing feedback from community-based screening may be effective for encouraging service use[12] and retention in research studies.[11] A quasi-experimental study has suggested that depression screening may reduce suicide in Japanese older adults.[13]

However, there have been no randomised controlled trials (RCTs) of online screening to evaluate how the use of online screening and feedback platforms might affect outcomes for individuals at risk of mental health problems. In addition, there have been very few studies of community-based screening programmes for anxiety disorders.[10,14] The current study describes the outcomes of an RCT that aimed to evaluate whether screening with tailored feedback – including listings of, and linkage to, appropriate clinical resources – would increase help seeking from professional sources.
Use of professional services, rather than informal sources of help, was chosen as the primary outcome because health professionals are more likely to provide evidence-based treatments and more accurate assessment and information than other sources.[15] A number of secondary outcomes were also investigated, specifically whether screening would increase intentions to seek help, decrease symptom levels for the target disorder, increase quality of life, decrease disability or decrease attrition from the study. Screening for two of the most common mental health problems, depression and social anxiety, was examined in two independent samples recruited simultaneously. Participants in each sample were screened online and randomly allocated to receive: (a) tailored feedback on their symptoms with appropriate resources, or (b) no feedback on their symptoms. On the basis of previous uncontrolled screening trials,[9,12] it was hypothesised that participants in the feedback conditions for each of the disorders would have significantly increased rates of service use after 3 months, compared with those in the control (no feedback) conditions.

Method

Participants and procedure

Participants were recruited from the online social media website Facebook. The target population of Facebook users aged ≥18 years was 8.8 million, representing approximately 45% of the total Australian population aged ≥18. From August to December 2014, a series of advertisements were placed on Facebook targeting Australian adults with the wording: ‘Assessing Mental Health Survey: Participate in a study examining your mental health by completing a 40 minute survey now’. These advertisements linked individuals to one of two versions of the survey. The surveys were administered online using LimeSurvey, with data stored on a secure server at the Australian National University (ANU), Canberra. The study had ethics approval from the ANU Human Research Ethics Committee (protocol #2013/509). The trial was registered with the Australian New Zealand Clinical Trials Registry (ANZCTR12614000324617). Two versions of the survey were administered, with each version providing feedback on symptoms of a different mental disorder: depression or social anxiety. During the recruitment period, 27 158 people clicked the advertisement and 12 240 ‘liked’ the study's page. A total of 6292 people consented to participate in the survey, with 3323 (52.8% of consenters) completing the survey. Of these, 2773 (83.4% of completers) consented to participate in the follow-up assessment by providing an email address at the end of the survey, with 966 (34.8% response rate) commencing the follow-up and 895 (32.3%) completing the follow-up assessment. This sample size provided 95% power to detect a 20% increase in service use (from 58.0% to 69.6%) at follow-up. A CONSORT diagram of participant flow through the study is presented in Fig. 1.
Fig. 1

CONSORT diagram showing flow of participants in the trial
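The stated power figure can be reproduced approximately with a standard two-proportion calculation. The sketch below is illustrative only, not the authors' actual computation; it uses a normal approximation and assumes roughly 448 participants per arm (about half of the 895 follow-up completers).

```python
from math import sqrt
from statistics import NormalDist

def two_prop_power(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation; illustrative, not the trial's calculation)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)            # SE under H0
    se_alt = sqrt(p1 * (1 - p1) / n_per_group
                  + p2 * (1 - p2) / n_per_group)                     # SE under H1
    z = (abs(p2 - p1) - z_crit * se_null) / se_alt
    return NormalDist().cdf(z)

# Detecting a rise in service use from 58.0% to 69.6%,
# with an assumed ~448 completers per arm:
power = two_prop_power(0.580, 0.696, 448)  # ≈ 0.95, consistent with the text
```

Under these assumptions the approximation lands close to the 95% power quoted, which suggests the calculation was anchored to the expected follow-up sample rather than the full baseline sample of 2773.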

Participants were provided with comprehensive survey information before giving consent to participate. The information sheet outlined what was involved in the survey, including the potential risks of participation, and provided contact information for psychological and crisis services across Australia. The survey took approximately 40–60 min to complete. Towards the end of the survey, participants were asked whether they would be willing to complete a brief survey after 3 months, by providing their email address. Those who provided an email address were randomly allocated (simple randomisation by concealed computer assignment in 1:1 ratio) to receive tailored feedback about their mental health (intervention group for depression or social anxiety) or receive no feedback (control group). Participants completed a brief survey (approximately 15 min) 3 months after the initial survey, with two email reminders given 1 week apart when the follow-up survey was due.
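Simple 1:1 randomisation of the kind described can be sketched as follows. This is a hypothetical illustration (the trial's actual assignment code is not published); the function name and seeding are ours.

```python
import random

def allocate(participant_ids, seed=None):
    """Simple (unrestricted) 1:1 randomisation: each consenting participant
    is independently assigned to feedback or control with probability 0.5.
    Illustrative sketch only; seeded here just to make the example reproducible."""
    rng = random.Random(seed)
    return {pid: rng.choice(["feedback", "control"]) for pid in participant_ids}

groups = allocate(range(10), seed=42)
```

Note that simple randomisation guarantees allocation concealment at assignment time but not exact balance; with samples of this size the arms nevertheless end up close to 1:1.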

Intervention conditions

Participants in the feedback intervention condition for depression or social anxiety were informed that their symptoms indicated ‘low risk’, ‘at risk’ or ‘high risk’. Category membership was determined based on scores on either the Patient Health Questionnaire (PHQ-9)[16] or the Social Phobia Screener (SOPHS)[17,18] for depression and social anxiety respectively. Low-risk participants were classified as those scoring <10 on the PHQ-9 or <6 on the SOPHS. At-risk participants scored 10–19 on the PHQ-9 or 6–11 on the SOPHS, whereas high-risk participants scored >19 on the PHQ-9 or >11 on the SOPHS. These cut points were determined based on previous validation studies of the screening instruments.[18,19] Feedback was provided using a traffic-light image as illustrated in Fig. DS1, towards the end of the survey approximately 10–20 min after completion of the screening measure. The text of the low-risk feedback was presented below the image in the following format: ‘What does it mean? Your [depression/social anxiety] score was in the low-risk category. This suggests that you are unlikely to be experiencing [depression/social anxiety]. If you would like more information on [depression/social anxiety], a number of websites provide information about the treatment and management of [depression/social anxiety]’. Links to a number of websites providing evidence-based information were then provided. In the at-risk condition, a similar format was used but with additional information: ‘Your [depression/social anxiety] score was in the at risk category. This suggests that you may be at risk of experiencing [depression/social anxiety]. You may benefit from seeking help from one of the resources listed below’. Brief psychoeducation regarding specific evidence-based treatment options and treatment sources for the disorder was then provided, and a list of evidence-based effective online therapy programmes was provided. 
The at-risk and high-risk groups also received the same informational resources as the low-risk groups. The high-risk groups for depression and social anxiety received similar feedback to the at-risk group, with wording slightly altered to indicate increased risk: ‘Your [depression/social anxiety] score was in the high risk category. This suggests that you are likely to be experiencing problems with [depression/social anxiety]’. In addition, professional help seeking from a general practitioner (GP) or mental health professional was encouraged: ‘Many people find that seeking help from a GP or mental health professional is helpful for reducing the symptoms of depression. Take a look at the resources below to find an appropriate service for you’.
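The cut points above translate directly into a three-band classifier. A minimal sketch (the function and label names are ours; the thresholds are those stated in the text):

```python
def risk_band(score, instrument):
    """Classify a screening score into the trial's feedback bands.
    Cut points from the text: PHQ-9 <10 low risk, 10-19 at risk, >19 high risk;
    SOPHS <6 low risk, 6-11 at risk, >11 high risk."""
    low_below, at_risk_max = {"PHQ-9": (10, 19), "SOPHS": (6, 11)}[instrument]
    if score < low_below:
        return "low risk"
    if score <= at_risk_max:
        return "at risk"
    return "high risk"
```

For example, `risk_band(10, "PHQ-9")` falls in the at-risk band, while `risk_band(12, "SOPHS")` falls in the high-risk band.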

Control conditions

Participants in the control conditions for depression and social anxiety did not receive any feedback about their symptom levels. However, to meet ethical and duty of care requirements, all participants received generic, untailored advice at the conclusion of the survey that they should contact a GP, crisis telephone line, online support/information service or crisis service if they were concerned about their mental health.

Measures

The primary outcome was self-reported professional service use at the 3-month follow-up assessment. The Actual Help Seeking Questionnaire (AHSQ)[20] was administered to all participants, enquiring: ‘Have you sought help for a mental health problem from any of the following sources in the past 3 months?’, followed by 10 response choices (mental health professional, doctor/GP, intimate partner, friend, parent, other relative, telephone helpline, minister/religious leader, other, nobody). Participants who checked ‘Mental health professional (e.g. psychologist, social worker, counsellor)’ or ‘Doctor/GP’ were classified as using professional services. A number of secondary outcomes were also investigated. Symptom severity for the disorder of focus (depression or social anxiety) was assessed using the PHQ-9 (9 items) or SOPHS (5 items) respectively. Symptom scores on these scales can range from 0 to 27 and from 0 to 20 respectively, with higher scores indicating greater symptom severity. These scales have previously been shown to be accurate in screening for risk of disorder[18,19] and had high internal consistency in the current sample (Cronbach's α=0.93 and 0.96 for the PHQ-9 and SOPHS respectively). Intentions to seek help from a medical professional (mental health professional or doctor/GP) for a mental health problem were assessed using two items from the General Help Seeking Questionnaire (GHSQ),[21] with total scale scores ranging from 1 to 14 and higher scores indicating greater intentions to seek help. Health-related quality of life was assessed using the 12-item Assessment of Quality of Life (AQoL-4D) instrument.[22] The scale covers four dimensions (independent living, relationships, senses and mental health) and had fair internal consistency in the current sample (Cronbach's α=0.78). Utility scores were calculated as prescribed by the scale authors, ranging from 0 to 1, with higher utility scores indicating greater quality of life.
Mental health-related disability was assessed based on self-reported days out of role (i.e. number of days for which the individual was completely unable to work or carry out normal activities) in the past month due to mental health problems. All outcomes were assessed at the baseline and follow-up assessments. Independent predictors were assessed based on self-report questions at baseline, including age, gender, education, employment, area of residence and language spoken at home.
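The internal-consistency figures quoted above are Cronbach's α, computed from item responses as k/(k−1) × (1 − Σ item variances / variance of totals). A stdlib-only sketch of that formula (illustrative; the authors presumably used a statistics package):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of items.
    `items` is a list of columns: each inner list holds one item's
    scores across all respondents (equal lengths assumed)."""
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))
```

Items that rise and fall together push α towards 1, whereas unrelated (or reversed) items pull it down, which is why the PHQ-9 and SOPHS values of 0.93 and 0.96 indicate high internal consistency.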

Analysis

Sample characteristics were compared across conditions (feedback v. no feedback) and versions of the intervention (depression v. social anxiety), based on χ²-tests for categorical variables and F-tests from one-way ANOVA for continuous variables. Binary logistic regression analysis was used to compare participants who completed the follow-up assessment with those who did not, to identify correlates of attrition. All analyses were undertaken on an intent-to-treat (ITT) basis. Mixed model repeated measures (MMRM) analyses[23] were used to include all available data from participants who consented to follow-up (n=2773). This approach yields unbiased estimates of intervention effects under the assumption that data were missing at random. An unstructured covariance matrix was assumed and degrees of freedom were estimated using Satterthwaite's correction. Analyses were conducted using the combined sample from both versions of the intervention (depression and social anxiety), repeated separately within the two versions to test for disorder-specific effects, and repeated separately for the three levels of feedback (low risk, at risk, high risk). The analysis of the primary outcome, professional service use, was based on a binary outcome, necessitating the use of a mixed effects logit analysis that accounted for initial service use and incorporated all available data. This analysis was conducted in Stata/IC v10 (StataCorp, College Station, Texas, USA) using the xtlogit command. All remaining analyses were conducted using SPSS v20 (IBM Corp, Chicago, Illinois, USA).

Results

Sample characteristics at baseline by version (depression and social anxiety) and intervention condition (feedback v. no feedback) are shown in Table DS1. There were no significant differences between the intervention and control conditions on any demographic or clinical indicators, with the exception of gender and screening status. Males were more highly represented in the depression version (χ²=12.55, P=0.006), and participants in the depression version were more likely to screen as at risk or high risk than those in the social anxiety version (χ²=14.68, P=0.023). However, within the two versions there were no significant differences between the intervention and control groups. The majority of participants were middle-aged, female and well educated. The sample tended to have elevated depression and social anxiety symptoms, with mean scores close to clinical cut points. Quality-of-life scores were lower than population norms,[24] and participants had high rates of service use and a mean of 3.4 days out of role in the past month due to mental health problems.

Attrition effects

Attrition was examined using binary logistic regression, testing whether receiving feedback or other participant characteristics were associated with completion of the follow-up assessment (Table 1). The depression version had higher completion rates than the social anxiety version. Receiving feedback was associated with significantly lower completion of follow-up, with approximately 31% higher odds of completion among those who did not receive feedback overall (27% for the depression version and 36% for the social anxiety version). However, the level of feedback (reflecting symptom severity) was not significantly associated with attrition. There were also significantly higher levels of completion among participants who were older, were employed, had higher quality of life or had greater intentions to seek help for a mental health problem.
Table 1

Binary logistic regression model examining factors associated with the completion of the follow-up assessment at 3 months

                                      Estimate   s.e.   Odds ratio   P
Survey version                                                       <0.001
 Social anxiety (reference)           0.00              1.00
 Depression                           1.18       0.09   3.24         <0.001
Intervention condition                                               0.003
 No feedback control (reference)      0.00              1.00
 Feedback intervention                −0.26      0.09   0.77         0.003
Feedback level                                                       0.576
 Low risk (reference)                 0.00              1.00
 At risk                              0.01       0.11   1.01         0.953
 High risk                            0.16       0.17   1.18         0.332
Age group                                                            0.023
 18–25 (reference)                    0.00              1.00
 26–35                                0.40       0.21   1.49         0.054
 36–45                                0.53       0.19   1.70         0.004
 46–55                                0.43       0.18   1.54         0.015
 56–65                                0.62       0.18   1.85         <0.001
 >65                                  0.44       0.20   1.56         0.029
Gender                                                               0.668
 Female (reference)                   0.00              1.00
 Male                                 −0.05      0.11   0.96         0.668
Employment                                                           0.014
 Employed (reference)                 0.00              1.00
 Not in employment                    −0.24      0.10   0.79         0.014
Language spoken                                                      0.210
 English only (reference)             0.00              1.00
 Another language                     −0.23      0.19   0.79         0.210
Years of education                    0.03       0.02   1.03         0.072
Area of residence                                                    0.217
 Metropolitan (reference)             0.00              1.00
 Regional                             −0.13      0.10   0.88         0.183
 Rural                                −0.20      0.13   0.82         0.127
AQoL utility score                    0.48       0.22   1.62         0.026
Days out of role                      0.00       0.01   1.00         0.549
Professional help-seeking intentions  0.06       0.03   1.06         0.026
Constant                              −2.43      0.35   0.09         <0.001

AQoL, Assessment of Quality of Life.

Bold values indicate P<0.05.

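The odds ratios in Table 1 are exponentiated logit coefficients; for example, the feedback-intervention estimate of −0.26 converts as below. The Wald 95% confidence interval shown is our illustration (the table reports only estimates, standard errors, odds ratios and P values).

```python
from math import exp

def logit_to_or(estimate, se, z=1.96):
    """Convert a logistic-regression coefficient to an odds ratio
    with an illustrative Wald 95% confidence interval."""
    return (exp(estimate),
            exp(estimate - z * se),
            exp(estimate + z * se))

# Feedback v. no-feedback row of Table 1: estimate −0.26, s.e. 0.09
or_fb, lo, hi = logit_to_or(-0.26, 0.09)
# or_fb ≈ 0.77; inverting, 1/0.77 ≈ 1.30, i.e. the ~31% higher odds of
# completion among those who did not receive feedback quoted in the text.
```

The same conversion recovers the other odds ratios in the table (e.g. exp(1.18) ≈ 3.25 for the depression-version row, matching the reported 3.24 to rounding).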

Intervention effects

To examine whether feedback modified outcomes, the interaction between time and condition from linear and binary mixed effects models was tested. These are presented in Table 2 for the total sample, for each version of the intervention, and based on symptom feedback level. There were no overall effects of feedback on help-seeking behaviour or any secondary outcomes. However, in the social anxiety version only, there were small significant effects of symptom feedback on professional service use (between group effect size: Cohen's d=−0.17)[25] and help-seeking intentions (Cohen's d=−0.19), both favouring the control condition (no feedback). The significant interaction effect for the social anxiety intervention on use of professional health services is illustrated in Fig. 2, with service use increasing more for participants in the control group, most prominently among high-risk participants (although no significant subgroup effect was found for high-risk participants). In a sensitivity analysis, we included variables associated with attrition in the mixed models, and found no new significant intervention effects for any outcome, whereas the effects of social anxiety feedback on help-seeking behaviours (Z=−2.03, P=0.042) and help-seeking intentions (F=6.1, P=0.014) remained significant.
Table 2

Interaction effects between time (pre/post) and intervention condition (feedback/control) from linear and binary mixed effects models for the total sample and subgroups[a]

Sample                           F/Z[b]   d.f.        P
Total sample
 Professional service use        0.6                  0.547
 AQoL utility score              3.0      1, 959.8    0.082
 Days out of role                0.3      1, 1080.7   0.571
 Help-seeking intentions         1.5      1, 1107.2   0.227
 Depression score (PHQ-9)        0.1      1, 915.8    0.724
 Social anxiety score (SOPHS)    0.2      1, 922.7    0.684
Version 1 (depression)
 Professional service use        1.1                  0.287
 AQoL utility score              1.5      1, 657.7    0.224
 Days out of role                0.1      1, 726.0    0.823
 Help-seeking intentions         0.3      1, 770.9    0.591
 Depression score (PHQ-9)        0.6      1, 649.8    0.431
 Among low risk: PHQ-9 score     0.1      1, 378.8    0.713
 Among at risk: PHQ-9 score      1.8      1, 169.1    0.187
 Among high risk: PHQ-9 score    0.1      1, 69.4     0.810
Version 2 (social anxiety)
 Professional service use        2.1                  0.038
 AQoL utility score              1.6      1, 300.1    0.205
 Days out of role                0.1      1, 342.8    0.720
 Help-seeking intentions         6.0      1, 326.7    0.015
 Social anxiety score (SOPHS)    0.7      1, 297.9    0.389
 Among low risk: SOPHS score     0.5      1, 185.3    0.491
 Among at risk: SOPHS score      0.4      1, 61.5     0.544
 Among high risk: SOPHS score    4.0      1, 38.0     0.051
Low risk participants*
 Professional service use        1.5                  0.134
 AQoL utility score              0.1      1, 588.9    0.727
 Days out of role                0.5      1, 672.9    0.460
 Help-seeking intentions         4.4      1, 681.6    0.036
At risk participants*
 Professional service use        1.1                  0.276
 AQoL utility score              2.9      1, 253.7    0.092
 Days out of role                0.5      1, 261.8    0.462
 Help-seeking intentions         0.7      1, 279.0    0.399
High risk participants*
 Professional service use        1.5                  0.123
 AQoL utility score              2.7      1, 113.7    0.102
 Days out of role                0.5      1, 130.1    0.498
 Help-seeking intentions         0.8      1, 127.3    0.380

AQoL, Assessment of Quality of Life; PHQ, Patient Health Questionnaire; SOPHS, Social Phobia Screener.

a. Models are adjusted for the main effects of time and condition, except for models marked *, which are adjusted for version, time and condition and all two- and three-way interactions between these variables.

b. F-tests are based on time × condition interaction terms from linear mixed models; Z-tests are based on time × condition interaction terms from binary mixed models. Italic values indicate Z-tests; bold values indicate P<0.05.

Fig. 2

Effect of feedback on service use in the social anxiety intervention

To further explore the significant findings, service use at baseline and follow-up across versions was tabulated among completers only, as shown in Table DS2, with participants classified as ongoing service users (using professional services at both time points), service use exiters (using services only at baseline), new service users (using services only at follow-up) or non-service users (at neither time point). There were no differences overall in terms of participants entering or exiting treatment within versions, suggesting the effects were general rather than specific to participants already in treatment. Nevertheless, this three-way breakdown of completing participants (condition × disorder focus × risk status) had limited power to find effects.
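The four-way classification used in Table DS2 is a simple function of the two binary service-use observations; a sketch (the function name is ours, the labels follow the text):

```python
def service_use_category(used_at_baseline, used_at_followup):
    """Classify a follow-up completer by professional service use
    at baseline and at the 3-month follow-up."""
    if used_at_baseline and used_at_followup:
        return "ongoing service user"
    if used_at_baseline:
        return "service use exiter"
    if used_at_followup:
        return "new service user"
    return "non-service user"
```

Counting completers in each category per condition and version reproduces the style of breakdown the text describes.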

Discussion

The current trial assessed whether online screening with feedback increased professional service use in a large community-based sample. Results indicated very little benefit of providing tailored feedback based on online screening to promote formal help seeking. If anything, there appeared to be a small negative effect of providing feedback to individuals, with reductions in professional service use among those given feedback about symptoms of social anxiety. Greater attrition and lower intentions to seek help were also observed in this group, suggesting that feedback for social anxiety may actually be detrimental to both help-seeking outcomes and research engagement.

There are a number of potential explanations for these findings, which were contrary to our hypotheses. Participants given feedback that they were at risk of social anxiety were provided with links to online evidence-based programmes for reducing social anxiety symptoms. It is possible that these participants used these programmes and found them beneficial, resulting in less need for traditional face-to-face services. However, there was no significant change in symptoms reported, suggesting that this explanation may not fully account for the study's observations. Unfortunately, use of internet programmes for the treatment of social anxiety symptoms was not assessed in the trial, because of the difficulty of distinguishing, on the basis of self-report, online content that is evidence-based from content that is not. Specifically, the public may have difficulty distinguishing different forms of online support and their quality,[26] and difficulty in self-reporting which programmes they used, further complicated by a need to account for levels of engagement.

Another explanation may be that social anxiety by its nature may involve avoidance of face-to-face services,[27] with treatment delay common among individuals with social anxiety symptoms.[28] Feedback regarding symptom levels may have inadvertently exacerbated avoidance behaviours. Alternatively, the control group may have been prompted to seek help based on their impressions of their responses to the mental health scales, together with the provision of contact information for mental health services at the end of the survey. Providing such information without directive feedback may be less confronting, particularly for individuals with social anxiety.

The finding that participants who received feedback had higher attrition was also unexpected. The mechanism underlying this finding is unclear. It may be that those who did not receive feedback anticipated that continued participation might yield additional insight into their mental health, whereas those who received feedback were already satisfied with their participation by the end of the baseline survey. Further examination of this outcome is warranted. Other factors significantly associated with attrition were younger age, poorer quality of life, unemployment, low help-seeking intentions and receiving the social anxiety survey version. Greater adherence among older and employed participants has been observed previously.[11,29] The effect of quality of life suggests that poor health may be a barrier to research participation, whereas the effect of help-seeking intentions may reflect a propensity towards agreeableness being associated with both help-seeking intentions and survey completion. The version effect (depression v. social anxiety) may be an artefact of the recruitment method; although recruitment for both versions of the survey occurred simultaneously, there were times when recruitment for one version may have been dominant (due to Facebook algorithms and fluctuations in public interest). This may have led to differences in the samples recruited for the two versions of the survey.

This study was the first RCT to test online mental health screening and feedback, and the first to trial the effects of social anxiety feedback. The study benefitted from recruitment of a large community-based sample. However, limitations of the findings should be noted. First, possible factors (e.g. online service use, survey satisfaction) that may affect help-seeking behaviours and engagement with the study were not collected. Additionally, the measures used were based on self-report, which may be prone to response biases and may not adequately capture professional service use. Nevertheless, the use of an online survey with limited identifiable information reduced the risk of participants giving socially desirable responses. Second, the service use measure broadly queried use of services for mental health problems, rather than for the specific disorder of interest. Therefore, changes in service use may have reflected mental health problems unrelated to the focus of the intervention. Third, the sample may have been prone to self-selection biases, with underrepresentation of males and overrepresentation of individuals with mental health problems. Therefore, the findings may not generalise to other forms of online screening, although it might be anticipated that people experiencing mental health symptoms are more likely to self-screen.[11] Fourth, the follow-up period of 3 months was chosen to strike a balance between attrition (longer study periods may result in greater drop-out)[30] and sufficient time to observe changes in service use. Nevertheless, the period may not have been sufficient for many of the participants to demonstrate a change in help-seeking behaviours, given the evidence of long-term treatment delay, particularly in social anxiety.[28] In addition, despite the use of robust statistical methods that account for differential attrition, completion rates for the follow-up assessment were suboptimal, although similar to other fully online studies.[31] The possibility remains that attrition was positively associated with help-seeking behaviour; future investigation into reasons for attrition may be warranted. Finally, despite the large sample, effect sizes were small, suggesting that a number of other factors important to service use outcomes were not included in the analyses.

In conclusion, the findings suggest that providing tailored feedback based on online screening may be ineffective for promoting professional service use or improving mental health outcomes. These findings echo cautions that there is little evidence to support screening in primary care settings.[3] Effective screening may require embedding screening tools within a mental health service, rather than simply using feedback to encourage service use. However, given the present findings that feedback may be detrimental to service use outcomes and research engagement, clinicians and researchers should be cautious about using screening feedback to support engagement from patients or participants. Further investigation is warranted into other uses of screening, with or without feedback, such as using screening to tailor services and identify specific targets for intervention.
  26 in total

Review 1.  Mental health literacy. Public knowledge and beliefs about mental disorders.

Authors:  A F Jorm
Journal:  Br J Psychiatry       Date:  2000-11       Impact factor: 9.319

2.  Community-based prevention for suicide in elderly by depression screening and follow-up.

Authors:  Hirofumi Oyama; Junichi Koida; Tomoe Sakashita; Keiko Kudo
Journal:  Community Ment Health J       Date:  2004-06

3.  Hierarchical screening for multiple mental disorders.

Authors:  Philip J Batterham; Alison L Calear; Matthew Sunderland; Natacha Carragher; Helen Christensen; Andrew J Mackinnon
Journal:  J Affect Disord       Date:  2013-06-24       Impact factor: 4.839

4.  Population norms for the AQoL derived from the 2007 Australian National Survey of Mental Health and Wellbeing.

Authors:  Graeme Hawthorne; Sam Korn; Jeff Richardson
Journal:  Aust N Z J Public Health       Date:  2013-02       Impact factor: 2.939

5.  The PHQ-9: validity of a brief depression severity measure.

Authors:  K Kroenke; R L Spitzer; J B Williams
Journal:  J Gen Intern Med       Date:  2001-09       Impact factor: 5.128

6.  Participant retention in an automated online monthly depression rescreening program: patterns and predictors.

Authors:  Supria Gill; Omar Contreras; Ricardo F Muñoz; Yan Leykin
Journal:  Internet Interv       Date:  2014-03

7.  Validation and utility of a self-report version of PRIME-MD: the PHQ primary care study. Primary Care Evaluation of Mental Disorders. Patient Health Questionnaire.

Authors:  R L Spitzer; K Kroenke; J B Williams
Journal:  JAMA       Date:  1999-11-10       Impact factor: 56.272

Review 8.  Screening and case finding instruments for depression.

Authors:  S Gilbody; A O House; T A Sheldon
Journal:  Cochrane Database Syst Rev       Date:  2005-10-19

9.  Service use by at-risk youths after school-based suicide screening.

Authors:  Madelyn S Gould; Frank A Marrocco; Kimberly Hoagwood; Marjorie Kleinman; Lia Amakawa; Elizabeth Altschuler
Journal:  J Am Acad Child Adolesc Psychiatry       Date:  2009-12       Impact factor: 8.829

10.  Website quality indicators for consumers.

Authors:  Kathleen M Griffiths; Helen Christensen
Journal:  J Med Internet Res       Date:  2005-11-15       Impact factor: 5.428

Cited by (15 in total)

1.  "I Wanted to See How Bad it Was": Online Self-screening as a Critical Transition Point Among Young Adults with Common Mental Health Conditions.

Authors:  Kaylee Payne Kruzan; Jonah Meyerhoff; Theresa Nguyen; David C Mohr; Madhu Reddy; Rachel Kornfield
Journal:  Proc SIGCHI Conf Hum Factor Comput Syst       Date:  2022-04-29

2.  Increasing intentions to use mental health services among university students. Results of a pilot randomized controlled trial within the World Health Organization's World Mental Health International College Student Initiative.

Authors:  David Daniel Ebert; Marvin Franke; Fanny Kählke; Ann-Marie Küchler; Ronny Bruffaerts; Philippe Mortier; Eirini Karyotaki; Jordi Alonso; Pim Cuijpers; Matthias Berking; Randy P Auerbach; Ronald C Kessler; Harald Baumeister
Journal:  Int J Methods Psychiatr Res       Date:  2018-11-20       Impact factor: 4.035

3.  Using New and Emerging Technologies to Identify and Respond to Suicidality Among Help-Seeking Young People: A Cross-Sectional Study.

Authors:  Frank Iorfino; Tracey A Davenport; Laura Ospina-Pinillos; Daniel F Hermens; Shane Cross; Jane Burns; Ian B Hickie
Journal:  J Med Internet Res       Date:  2017-07-12       Impact factor: 5.428

4.  Online versus paper-based screening for depression and anxiety in adults with cystic fibrosis in Ireland: a cross-sectional exploratory study.

Authors:  Jennifer Cronly; Alistair J Duff; Kristin A Riekert; Ivan J Perry; Anthony P Fitzgerald; Aine Horgan; Elaine Lehane; Barbara Howe; Muireann Ni Chroinin; Eileen Savage
Journal:  BMJ Open       Date:  2018-01-21       Impact factor: 2.692

5.  Adolescents' Perspectives on a Mobile App for Relationships: Cross-Sectional Survey.

Authors:  Bridianne O'Dea; Melinda Rose Achilles; Aliza Werner-Seidler; Philip J Batterham; Alison L Calear; Yael Perry; Fiona Shand; Helen Christensen
Journal:  JMIR Mhealth Uhealth       Date:  2018-03-08       Impact factor: 4.773

6.  Impact of Mental Health Screening on Promoting Immediate Online Help-Seeking: Randomized Trial Comparing Normative Versus Humor-Driven Feedback.

Authors:  Isabella Choi; David N Milne; Mark Deady; Rafael A Calvo; Samuel B Harvey; Nick Glozier
Journal:  JMIR Ment Health       Date:  2018-04-05

7.  Using different Facebook advertisements to recruit men for an online mental health study: Engagement and selection bias.

Authors:  Isabella Choi; David N Milne; Nicholas Glozier; Dorian Peters; Samuel B Harvey; Rafael A Calvo
Journal:  Internet Interv       Date:  2017-03-02

8.  Impact of online mental health screening tools on help-seeking, care receipt, and suicidal ideation and suicidal intent: Evidence from internet search behavior in a large U.S. cohort.

Authors:  Nicholas C Jacobson; Elad Yom-Tov; Damien Lekkas; Michael Heinz; Lili Liu; Paul J Barr
Journal:  J Psychiatr Res       Date:  2020-11-09       Impact factor: 4.791

9.  Social media recruitment for mental health research: A systematic review.

Authors:  Catherine Sanchez; Adrienne Grzenda; Andrea Varias; Alik S Widge; Linda L Carpenter; William M McDonald; Charles B Nemeroff; Ned H Kalin; Glenn Martin; Mauricio Tohen; Maria Filippou-Frye; Drew Ramsey; Eleni Linos; Christina Mangurian; Carolyn I Rodriguez
Journal:  Compr Psychiatry       Date:  2020-08-12       Impact factor: 3.735

10.  Preliminary Analysis of the Factor Structure, Reliability and Validity of an Obsessive-Compulsive Disorder Screening Tool for Use with Adults in Malaysia.

Authors:  Normah Che Din; Liana Mohd Nawi; Shazli Ezzat Ghazali; Mahadir Ahmad; Norhayati Ibrahim; Zaini Said; Noh Amit; Ponnusamy Subramaniam
Journal:  Int J Environ Res Public Health       Date:  2019-11-28       Impact factor: 3.390

