
Considerations for evaluating digital mental health tools remotely: reflections after a randomized trial of Thought Spot.

Brian Lo1, Jenny Shi2, Howard W Wong3, Alexxa Abi-Jaoudé4, Andrew Johnson4, Elisa Hollenberg4, Gloria Chaim5, Kristin Cleverley6, Joanna Henderson5, Andrea Levinson7, Janine Robb8, Marcos Sanches9, Aristotle Voineskos10, David Wiljer11.   

Abstract


Year:  2021        PMID: 33725530      PMCID: PMC8807138          DOI: 10.1016/j.genhosppsych.2021.02.010

Source DB:  PubMed          Journal:  Gen Hosp Psychiatry        ISSN: 0163-8343            Impact factor:   3.238


Physical distancing restrictions amidst the COVID-19 pandemic have significantly hampered the ability to conduct in-person research [1]. The pandemic has also created challenges in mental health service delivery, exacerbated existing mental health concerns, and accelerated the need for digital mental health tools [2]. Such tools, including mental health apps and virtual care platforms, can potentially offer improved access to services and provide mental health support from afar [2]. However, while digital mental health tools are promising, it remains imperative to evaluate their efficacy, safety, and usability before widespread implementation. Through an iterative process, our team developed and evaluated Thought Spot [3], a mobile health app designed to support transition-aged youth (17 to 29 years old) in seeking mental health and wellness resources. The platform features an interactive map where users can conduct filtered searches for resources relevant to their mental health needs. Users can also participate in crowdsourcing by adding new mental health resources or reviewing those they have used. Thought Spot was the result of a co-design project in which transition-aged youth played an active role in the design, development, and evaluation of the app. Currently, there are few guidelines for conducting remote evaluations of digital mental health tools [[4], [5], [6]]. This letter shares five considerations from a recently completed randomized controlled trial (RCT) of Thought Spot [3]. Although the evaluation was completed before the pandemic, the considerations described in this letter could apply to other remote evaluations.

Consider implementing e-consent procedures to reduce administrative burden and improve accessibility for certain participants

During the RCT, many administrative procedures and study activities were conducted remotely with participants. For example, eligible participants could choose between a conventional in-person consent process or a remote "e-consent" process, which was delivered through an electronic data capture tool, REDCap. Implementing e-consent saves time and can accommodate participants who are unable to sign the consent form in person. However, using e-consent involves additional confidentiality, privacy, and data security considerations in order to comply with policies for human subject research [7]. Currently, there are few published guidelines on how to best implement e-consent processes, warranting more research dedicated to understanding the risks and benefits of this approach.

Consider integrating a suicide and risk management protocol to support participants virtually if adverse events occur

For the RCT, a detailed suicide and risk management protocol (SRMP) [8] that outlined step-by-step instructions on handling mental health emergencies was implemented. It included detailed workflows involving research team members and mental health personnel at each study site. Additionally, moderation plans may be necessary to help mitigate risks such as inappropriate content or triggering language in mental health apps that allow participants to communicate and interact with each other. For example, during the RCT, a research team member served as a moderator and reviewed all new user contributions (e.g., comments and reviews) for appropriateness. Overall, an SRMP can be integral for capturing, preventing, and responding to potential adverse events during an evaluation.
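The moderation workflow described above relies on human review of every contribution. As a hedged illustration only (the flag patterns and function names below are hypothetical, not part of the Thought Spot platform), such a queue could be pre-sorted so that potentially concerning content surfaces to the moderator first:

```python
import re

# Placeholder patterns; a real protocol would define these with clinical input.
FLAG_PATTERNS = [r"\btrigger\b", r"\burgent\b"]

def triage(contributions):
    """Split new user contributions into those needing priority human review
    and the rest. Every item still goes to a moderator; flagged items are
    simply surfaced first in the review queue."""
    urgent, routine = [], []
    for text in contributions:
        if any(re.search(p, text, re.IGNORECASE) for p in FLAG_PATTERNS):
            urgent.append(text)
        else:
            routine.append(text)
    return urgent, routine
```

This kind of pre-sorting does not replace the moderator; it only shortens the time before a potentially risky post reaches human eyes.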

Consider collecting, measuring, and analyzing usage data to reveal additional insight into participant behaviour

Analyzing usage data can be useful for understanding how participants engaged with the app without requiring in-person observation. Usage data can also be used to compare the intervention's impact on users who used the app versus those who did not [3,9]. In-depth analysis of usage data can be further augmented with qualitative research methods to help contextualize findings and improve our understanding of user behaviour.
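As a minimal sketch of the kind of analysis described above (the event format, field names, and session threshold are illustrative assumptions, not the trial's actual pipeline), per-user engagement can be summarized by grouping timestamped events into sessions:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event log as an app backend might export it:
# (user_id, ISO timestamp, action). Values below are invented examples.
events = [
    ("u1", "2020-01-05T10:00:00", "search"),
    ("u1", "2020-01-05T10:02:00", "view_resource"),
    ("u1", "2020-02-01T09:00:00", "search"),
    ("u2", "2020-01-10T14:30:00", "add_review"),
]

def summarize_usage(events, session_gap=timedelta(minutes=30)):
    """Count events and sessions per user. A new session starts whenever the
    gap since the user's previous event exceeds `session_gap`."""
    by_user = defaultdict(list)
    for user, ts, _action in events:
        by_user[user].append(datetime.fromisoformat(ts))
    summary = {}
    for user, times in by_user.items():
        times.sort()
        sessions = 1
        for prev, cur in zip(times, times[1:]):
            if cur - prev > session_gap:
                sessions += 1
        summary[user] = {"events": len(times), "sessions": sessions}
    return summary
```

Summaries like this can then be compared between trial arms, or paired with qualitative interviews to explain why engagement patterns differ.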

Consider employing screen-sharing and video-conferencing tools to evaluate user experience and usability

While in-person usability testing remains the gold standard [10], over-the-phone interviews are still capable of generating valuable insights about users' expectations and challenges. When engaging with participants during the development of Thought Spot, over half of these individuals preferred to participate in usability and user experience interviews by phone. However, during these phone interviews many participants wanted to elaborate on their experiences with the app in hand. Thus, it can be beneficial to adopt video-conferencing tools that allow app users to easily share their screen with researchers.

Consider developing a convenient and simple approach for participants to report technical issues

Responding to unexpected technical issues in a timely manner was a key challenge during the RCT. Since technologies evolve and receive constant updates, technical issues are inevitable and must be addressed. As technical issues can affect users' experience and the outcomes of remote evaluations, it is crucial to establish a process that encourages efficient identification, reporting, and resolution of issues. It is also valuable to outline and report the types of updates or fixes permitted during a trial. Tracking all software changes is important because such changes can affect data analysis and interpretation.

Overall, as research practices evolve due to the pandemic, remote evaluations of digital mental health tools will become a necessity. While the considerations in this article are not exhaustive, they can inform the development of a safer and more effective remote evaluation plan.
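The change-tracking practice described above can be as simple as a structured log. The sketch below is a hypothetical illustration (the record fields, categories, and example entries are assumptions, not the trial's actual tooling) of how deployed fixes might be recorded so analysts can later check whether an update coincided with a shift in outcome data:

```python
from dataclasses import dataclass

@dataclass
class TrialChange:
    """One software change deployed while the trial was running."""
    date: str               # ISO date the change was released
    category: str           # e.g. "bug_fix", "os_compat", "content_update"
    description: str
    affects_outcomes: bool  # flag changes that could touch measured behaviour

changelog = []

def log_change(date, category, description, affects_outcomes=False):
    changelog.append(TrialChange(date, category, description, affects_outcomes))

# Invented example entries:
log_change("2019-06-12", "bug_fix", "Crash on resource search with empty query")
log_change("2019-07-03", "content_update", "Added new crisis resources", True)

# Outcome-relevant changes can be cross-referenced against usage data.
flagged = [c.date for c in changelog if c.affects_outcomes]
```

Keeping even this minimal record makes it possible to report, per trial protocol, which fixes were applied and when.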

Funding statement

The Thought Spot project was funded by the Ontario Ministry of Training, Colleges and Universities through the Mental Health Innovation Fund and the eHealth Innovation Partnership Program (eHIPP) grant from the Canadian Institutes of Health Research (EH1-143558).

Competing Interests Statement

There are no conflicts of interest to disclose.
References (9 in total)

1.  Intention-to-treat and per-protocol analysis.

Authors:  Pankaj B Shah
Journal:  CMAJ       Date:  2011-04-05       Impact factor: 8.262

2.  Cognitive and usability engineering methods for the evaluation of clinical information systems.

Authors:  Andre W Kushniruk; Vimla L Patel
Journal:  J Biomed Inform       Date:  2004-02       Impact factor: 6.317

3.  "Smart" RCTs: Development of a Smartphone App for Fully Automated Nutrition-Labeling Intervention Trials.

Authors:  Ekaterina Volkova; Nicole Li; Elizabeth Dunford; Helen Eyles; Michelle Crino; Jo Michie; Cliona Ni Mhurchu
Journal:  JMIR Mhealth Uhealth       Date:  2016-03-17       Impact factor: 4.773

4.  Core components and strategies for suicide and risk management protocols in mental health research: a scoping review.

Authors:  Katye Stevens; Vivetha Thambinathan; Elisa Hollenberg; Fiona Inglis; Andrew Johnson; Andrea Levinson; Soha Salman; Leah Cardinale; Brian Lo; Jenny Shi; David Wiljer; Daphne J Korczak; Kristin Cleverley
Journal:  BMC Psychiatry       Date:  2021-01-07       Impact factor: 3.630

5.  Participant Recruitment and Retention in Remote eHealth Intervention Trials: Methods and Lessons Learned From a Large Randomized Controlled Trial of Two Web-Based Smoking Interventions.

Authors:  Noreen L Watson; Kristin E Mull; Jaimee L Heffner; Jennifer B McClure; Jonathan B Bricker
Journal:  J Med Internet Res       Date:  2018-08-24       Impact factor: 5.428

6.  Effects of a Mobile and Web App (Thought Spot) on Mental Health Help-Seeking Among College and University Students: Randomized Controlled Trial.

Authors:  David Wiljer; Jenny Shi; Brian Lo; Marcos Sanches; Elisa Hollenberg; Andrew Johnson; Alexxa Abi-Jaoudé; Gloria Chaim; Kristin Cleverley; Joanna Henderson; Wanrudee Isaranuwatchai; Andrea Levinson; Janine Robb; Howard W Wong; Aristotle Voineskos
Journal:  J Med Internet Res       Date:  2020-10-30       Impact factor: 5.428

7.  Impact of the COVID-19 pandemic on clinical research.

Authors:  Katherine R Tuttle
Journal:  Nat Rev Nephrol       Date:  2020-10       Impact factor: 28.314

8.  Remote Methods for Conducting Tobacco-Focused Clinical Trials.

Authors:  Jennifer Dahne; Rachel L Tomko; Erin A McClure; Jihad S Obeid; Matthew J Carpenter
Journal:  Nicotine Tob Res       Date:  2020-12-12       Impact factor: 4.244

9.  Digital Mental Health and COVID-19: Using Technology Today to Accelerate the Curve on Access and Quality Tomorrow.

Authors:  John Torous; Keris Jän Myrick; Natali Rauseo-Ricupero; Joseph Firth
Journal:  JMIR Ment Health       Date:  2020-03-26
