
Automated conversational agents for post-intervention follow-up: a systematic review.

L Geoghegan1, A Scarborough2, J C R Wormald3, C J Harrison3, D Collins4, M Gardiner5, J Bruce6, J N Rodrigues6,7.   

Abstract

BACKGROUND: Advances in natural language processing and other machine learning techniques have led to the development of automated agents (chatbots) that mimic human conversation. These systems have mainly been used in commercial settings, and within medicine, for symptom checking and psychotherapy. The aim of this systematic review was to determine the acceptability and implementation success of chatbots in the follow-up of patients who have undergone a physical healthcare intervention.
METHODS: A systematic review of MEDLINE, MEDLINE In-Process, EMBASE, PsycINFO, CINAHL, CENTRAL and the grey literature using a PRISMA-compliant methodology up to September 2020 was conducted. Abstract screening and data extraction were performed in duplicate. Risk of bias and quality assessments were performed for each study.
RESULTS: The search identified 904 studies, of which 10 met full inclusion criteria: three randomised controlled trials, one non-randomised clinical trial and six cohort studies. Chatbots were used for monitoring after the management of cancer, hypertension and asthma, orthopaedic intervention, ureteroscopy and intervention for varicose veins. All chatbots were deployed on mobile devices. Reported metrics ranged from a 31 per cent chatbot engagement rate to a 97 per cent response rate for system-generated questions. No study examined patient safety.
CONCLUSION: A range of chatbot builds and uses was identified. Further investigation of acceptability, efficacy and mechanistic evaluation in outpatient care pathways may lend support to implementation in routine clinical care.
© The Author(s) 2021. Published by Oxford University Press on behalf of BJS Society Ltd.


Year:  2021        PMID: 34323916      PMCID: PMC8320342          DOI: 10.1093/bjsopen/zrab070

Source DB:  PubMed          Journal:  BJS Open        ISSN: 2474-9842


Introduction

The first known agent capable of conversation between human and machine was developed in 1966. ELIZA used early natural language processing to return open-ended questions to users, simulating person-centred psychotherapy. Developments in speech recognition, natural language processing, natural language understanding and artificial intelligence have since led to the design of systems capable of mimicking human interaction with unconstrained natural language input. A chatbot is defined as ‘a computer program designed to simulate conversation with human users, particularly over the internet’.

A recent systematic review involving 17 studies and 1573 participants found that chatbots in healthcare were predominantly used in mental health conditions to educate patients and collect data from health-related questionnaires. Financial pressures and clinical demand have driven interest in virtual clinics for monitoring and surveillance following healthcare interventions, particularly during the COVID-19 pandemic, when virtual services were adopted rapidly to moderate infection risk by reducing direct clinician–patient contact. A recent randomised trial involving 209 general surgical patients demonstrated better attendance (92 versus 81 per cent) and higher patient satisfaction (95 per cent of participants happy or very happy versus 56 per cent) with virtual postoperative clinics compared with traditional outpatient follow-up.

Chatbots hold promise in increasing the efficiency of outpatient care pathways and meeting the need for patient surveillance and education between face-to-face clinic appointments. Accuracy of information and patient safety, however, are important considerations. The aim of this systematic review was to determine the uptake, acceptability and utility of chatbots in the follow-up of patients who have received physical healthcare interventions.

Methods

The systematic review was designed and reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The protocol was prospectively registered in the PROSPERO database (registration number: CRD42020199919).

Search strategy

Search strategies included free text and index terms related to the following core concepts: ‘chatbot’, ‘intervention’ and ‘follow-up’ (see supplementary material). The following databases were searched from inception until 18 September 2020: MEDLINE, MEDLINE In-Process, EMBASE, Cochrane CENTRAL, CINAHL and PsycINFO. The CENTRAL database was searched for registered clinical trials up until 9 November 2020. The search was not restricted by language or date of publication. A further search of the surgical grey literature was conducted by examining the proceedings of the 2020 Association of Surgeons in Training International Surgical Conference.

Eligibility criteria

All studies reporting original data were eligible for inclusion, including randomised trials, quasi-experimental designs, cohort studies, case-control studies and case series. Case reports, reviews, meta-analyses and articles related to the technical development of systems without accompanying clinical data were excluded. Systematic reviews were screened for potentially eligible publications. The titles and abstracts of identified articles were independently screened by two authors.

Participants

Adult and paediatric patients who had undergone any healthcare intervention targeting physical rather than mental health, and who were subsequently followed up using an automated conversational agent (a chatbot) at any point after the intervention, were eligible for inclusion. Physical interventions were defined as procedures in which purposeful access to the body was gained via an incision, percutaneous puncture or instrumentation via a natural orifice, or as the provision of medications to treat underlying disease. Examples of physical interventions included total hip replacement for osteoarthritis, steroid injection for carpal tunnel syndrome, transurethral resection of the prostate for benign prostatic hyperplasia and the prescription of antihypertensive medication.

Interventions and comparators

A chatbot was defined as a computer software application that permits two-way conversation (via text, speech or a combination of both) between a human user and a computer program. Comparators included other automated or non-automated follow-up systems, including, for example, routine care delivered via face-to-face outpatient clinics and follow-up telephone calls.

Outcomes

The primary outcome assessed was the acceptability of chatbots as a method of follow-up indicated by implementation success. Measures of acceptability included user engagement (defined as the proportion of patients who activated and interacted with the chatbot), patient adherence to the chatbot, response rate (defined as the proportion of patients responding to system queries), duration of adherence and interactions with the chatbot over time. Patient safety and accuracy statistics were assessed where reported. Additional outcomes assessed included patient cohort demographics, design features such as task orientation, dialogue management, input and output formats, platforms used, health questionnaires used and measures of patient satisfaction.

Study selection

Potentially eligible studies were compiled, and duplicate citations removed. Two authors independently screened titles and abstracts of retrieved studies using prespecified stepwise inclusion/exclusion criteria. Disagreements between reviewers were resolved through consultation with a third reviewer. Reference lists of included studies and published narrative/systematic reviews were examined for further potentially eligible studies.

Data extraction and analysis

Data were extracted using a predefined electronic data-collection form. Extracted data were collated, cross-checked by other authors and compared. Study setting, population demographics, healthcare interventions, cohort-specific factors, software design features, measures of adherence, patient experience and clinical outcomes were extracted. Formal meta-analysis was not performed due to heterogeneous outcome reporting and differences in study designs. A narrative synthesis and descriptive analysis were used.

Risk of bias analysis

Methodological quality of each included study was assessed. For randomised trials, this involved the revised Cochrane Risk of Bias tool (RoB 2), and for non-randomised comparative studies the Cochrane Risk of Bias in Non-randomised Studies of Interventions (ROBINS-I) tool. The National Institutes of Health (NIH) quality assessment tool for cohort studies was employed to assess the quality of cohort studies.

Results

From a total of 908 potential studies, 709 remained for screening after removal of duplicates; 11 full-text articles were assessed, of which 10 met full inclusion criteria (see PRISMA flow diagram).

Study characteristics

Three randomised controlled trials (RCTs) were identified. One, involving 76 participants, compared an automated text-based chatbot with standard postoperative care following upper or lower extremity fracture. The second, involving 142 participants, compared an automated chatbot with physician-generated advice for women who had undergone breast cancer treatment, and the third, with 45 participants, compared immediate with delayed access to a chatbot in young patients affected by various cancers. The non-randomised comparative clinical study included 270 participants and compared an automated speech-based chatbot with manual telephone follow-up for patients who had undergone orthopaedic surgery. The remaining six studies were cohort studies based on an established definition. Collectively, eight of 10 studies were published between 2019 and 2020.

Demographics

Of the 10 included studies, nine recruited adults and one recruited adolescents with a mean age of 15 years (Table 1), giving a total of 5492 patients. Chatbots were used to follow up patients after elective orthopaedic surgery, orthopaedic trauma surgery, surgical intervention for varicose veins, treatment for breast cancer and ureteroscopy, as well as the medical management of hypertension, asthma and various cancers.

Quality of included studies

One RCT was deemed to have a high risk of bias due to ascertainment bias and risk of detection bias, given the effect of unblinding on the outcome of interest. The remaining two RCTs were deemed at moderate risk of conduct bias. The cohort studies were rated as fair or poor quality (Table 1). The quality of outcome measurement and assessment was deemed poor across all cohort studies.

Interventions

All studies deployed chatbots on mobile devices: two were also accessible via web-based applications and one via Facebook Messenger. In terms of chatbot construct, seven used a frame-based knowledge-representation system, one used a rule-based knowledge-representation system and two studies did not report the type of system used. Of the 10 studies, three used a system-focused dialogue initiative, two a user-focused dialogue and the other five a mixed dialogue initiative. Task orientation was reported in two studies: one chatbot was able to book follow-up appointments and one was able to input patient data into electronic medical records. Measures of implementation success were reported in seven of 10 studies. Adherence ranged from a 31 per cent participant engagement rate to a 97 per cent participant response rate for select system-generated questions. One study demonstrated a decline in engagement from 100 to 31 per cent after 8 months of chatbot use. A comparative study demonstrated a 92 per cent follow-up rate for patients contacted via an autonomous postoperative chatbot versus a 93 per cent follow-up rate for patients contacted directly by telephone. Other outcome measures reported included patient-reported outcome measures (PROMs), patient feedback, patient experience and technical details related to chatbot performance (Table 2). One RCT demonstrated that a chatbot with twice-daily text-based output for 2 weeks was associated with reduced opiate consumption following orthopaedic trauma surgery compared with a control cohort that received no messages (26 versus 41 opiate tablets). Another RCT found no difference in perceived quality of responses between the chatbot and real-time physician-written responses to user queries from women treated for breast cancer (average QLQ-INFO25 score 2.89 versus 2.82, respectively).
The third RCT reported no significant difference in symptoms of anxiety and depression, quantified using the PROMIS Emotional Distress-Anxiety Short Form, between patients using a chatbot (cohort 1) and a control cohort without chatbot access (cohort 2) over a 4-week study period. Upon completion of the first study period, the control cohort (cohort 2) was granted access to the chatbot, and symptoms of anxiety and depression were quantified after a second 4-week study period. After the second study period, patients in cohort 2 demonstrated a reduction in reported symptoms of anxiety compared with baseline measurements and with anxiety scores after the first study period, although this reduction was not statistically significant. A non-randomised comparative study demonstrated comparable follow-up consultation rates after orthopaedic surgery using a telephone-based conversational agent compared with calls made by individuals, saving an estimated 9.3 hours per 100 participants.

Registered trials

The authors’ search found two additional registered protocols for ongoing clinical trials. These protocols outline the intended use of chatbots to facilitate questionnaire completion at 6 and 8 months after bariatric surgery, and for daily consultation with patients treated for Parkinson’s disease (Fig. S3, supplementary material).

Discussion

The use of chatbots following a physical healthcare intervention is a new and evolving field, with eight of 10 studies published during or after 2019. This use seems likely to continue to increase, with a move towards efficiency in healthcare systems and away from face-to-face follow-up arising from the COVID-19 pandemic. A review investigating the broader use of conversational agents in healthcare has been published, whereas the present review focused on the role of the technology after interventions.

This systematic review identified 10 studies of different designs, mostly of moderate to poor quality. Outcome measures were inconsistently defined and outcome assessors were not blinded, predisposing to detection bias and the Hawthorne effect. One study attempted to reduce this by blinding participants to whether responses came from the chatbot or from physicians, although, by the nature of the intervention, a Hawthorne effect cannot be excluded.

Acceptability and patient experience using automated conversational agents were largely positive. There was no clinically important difference in patient satisfaction with chatbot responses compared with real-time physician-generated responses to user queries, measured using the QLQ-INFO25. Previous work has demonstrated that the QLQ-INFO25 is acceptable, with good internal consistency and test–retest reliability. The reduction in opiate prescribing, time and cost saving reported in one small study provides useful evidence supporting investment in automated follow-up systems.

Although the metrics used were heterogeneous, data on implementation success suggest considerable variation. Some learning points were simple and applicable. One study described a 35 per cent interaction rate with its chatbot, the primary reason for poor interaction being ‘misplacing instructions for chatbot use’, while another demonstrated an initial engagement rate of 100 per cent that gradually fell to 31 per cent over 8 months. This decline is likely to represent reduced enthusiasm for engagement, although it might reflect patient adaptation to their current health state; some support for the latter is that most (88 per cent) participants reported that the chatbot provided them with support and helped them follow their treatment plan. A structured sequence to implementation may increase success, and frameworks developed for the deployment of PROMs might be applicable to automated follow-up systems.

No study identified in this systematic review examined patient safety. If autonomous agents are to be used in clinical practice to monitor patient status actively after intervention, rigorous safety testing using simulated patients is warranted before clinical adoption. Following implementation, prospective registries of technological adverse events should be kept. Here, technological adverse events refer to patient harm directly caused by the technology; this harm may be direct (inappropriate clinical advice) or indirect (failure to identify clinical signs of deterioration).

All studies identified in this systematic review deployed agents on mobile devices. In the UK, 70 per cent of adults own a smartphone and over half regularly use applications. Disparities in socioeconomic status and technological literacy may limit access to healthcare. Future epidemiological studies should seek to ascertain whether clinical implementation of such technologies negatively impacts the health of certain cohorts within the population.

The present study has a number of limitations. A small number of heterogeneous studies was identified, reporting a variety of different adherence and clinical-outcome measures. The majority were small, non-comparative feasibility studies. The comparative studies were at risk of selection and detection bias owing to the nature of the interventions and the relative infancy of the field. Varying technical descriptions of agents were provided, and heterogeneity in outcome reporting precluded meaningful meta-analysis, limiting the strength of conclusions that can safely be drawn.

There is, nevertheless, early evidence of uptake of automated conversational agents in the outpatient management of patients following physical healthcare interventions. Despite a range of chatbot builds and clinical uses, they seem to be generally acceptable, although effectiveness remains to be proven. Attention to practical details around deployment may improve implementation success of future systems.

Acknowledgements

L.G. was involved in idea inception, search strategy design, data extraction, analysis and writing. A.S. and J.C.R.W. were involved in abstract screening and manuscript review. C.J.H., D.C. and M.G. critically reviewed the manuscript. J.B. and J.N.R. were involved in idea inception, search strategy design and manuscript review.

Funding

No specific funding was received for the conduct of this review. J.B. is supported by National Institute for Health Research Capability Funding via University Hospitals Coventry and Warwickshire. C.J.H. is funded by a National Institute for Health Research (NIHR) Doctoral Research Fellowship (NIHR300684). J.N.R. is funded by an NIHR Postdoctoral Fellowship (PDF-2017-10-075). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.

Disclosure

The authors declare no conflicts of interest.

Supplementary material

Supplementary material is available at BJS Open online.
Table 1

Study demographics, quality and risk of bias

Reference | Study type | n | Speciality | Cohort | Intervention (+ control) | Study quality | Risk of bias
Anthony et al., 2020 [15] | RCT | 76 | Orthopaedics | Adult patients who had undergone operative fixation of upper or lower extremity fracture | Automated chatbot which delivered text messages to reduce opioid use versus standard postoperative care | – | Some concerns*
Bian et al., 2020 [18] | Comparative clinical study | 270 | Orthopaedics | Adult patients who had undergone orthopaedic intervention | Automated chatbot versus telephone follow-up for postoperative care | – | Moderate†
Bibault et al., 2019 [16] | RCT | 142 | Oncology | Adult patients in remission or undergoing active treatment for breast cancer | Automated chatbot which provides information to patients about breast cancer, epidemiology, treatment options, side effects and quality-of-life improvement strategies versus information provided by treating physicians in real time via text message | – | Some concerns*
Black et al., 2020 [21] | Prospective cohort | 158 | Vascular surgery | Adult patients undergoing intervention for lower extremity superficial venous reflux with endovascular ablation, sclerotherapy and phlebectomy | Automated postoperative chatbot offered to patients to educate patients, provide postoperative instructions, facilitate follow-up appointment booking and contact with the clinic | ‡ | ‡
Chaix et al., 2019 [24] | Prospective cohort | 4737 | Oncology | Adult patients in remission or undergoing active treatment for breast cancer | Automated chatbot which provides information to patients about breast cancer, epidemiology, treatment options, side effects and quality-of-life improvement strategies | Fair | –
Giorgino et al., 2005 [20] | Prospective cohort | 15 | Internal medicine | Adult patients with diagnosed hypertension treated with oral medications | Automated chatbot used to monitor hypertensive patients in the community; collects health-related data such as heart rate and blood pressure | Poor | –
Goldenthal et al., 2019 [23] | Prospective cohort | 20 | Urology | Adult patients who had undergone ureteroscopy for nephrolithiasis within the previous month | Automated chatbot used to educate and reassure patients regarding commonly experienced symptoms or post-procedural complications | Fair | –
Greer et al., 2019 [17] | RCT | 45 | Oncology | Young adult patients (aged 18–25 years) who had completed active treatment for cancer within the past 5 years | Automated chatbot used to provide a cognitive and behavioural intervention that develops eight positive psychological skills; patients were given conversational teaching sessions and practice lessons; control participants were asked to provide daily emotion ratings and had no access to the chatbot until after 4 weeks | – | High*
Piau et al., 2019 [22] | Prospective cohort | 9 | Internal medicine | Adult patients aged >65 years with a diagnosis of cancer undergoing active treatment with chemotherapy | Automated chatbot used to identify the development of symptoms or treatment side effects | Fair | –
Rhee et al., 2014 [19] | Prospective cohort | 15 | Internal medicine | Adolescent patients (and patient dyads) diagnosed with asthma receiving active treatment | Automated chatbot used to monitor patient symptoms, activity levels and medication use | Fair | –

* As per RoB 2 tool.

† As per ROBINS-I tool.

‡ Quality appraisal and risk of bias assessment not performed as full manuscript not published (data extracted from conference proceedings).

Table 2

Technical details, acceptability criteria and outcomes assessed

Reference | Chatbot features | Device | Adherence | Other outcomes measured
Anthony et al., 2020 [15]

System-focused dialogue initiative

Text output

Smartphone (text)

Not reported

Postoperative pain

36.5% reduction in number of opiate tablets used in intervention group (P = 0.004)

35% decrease in morphine milliequivalents consumed versus control (P  = 0.006)

Patient-reported outcomes

Lower mean postoperative PROMIS Pain Intensity 3a score in intervention arm (45.9 ± 7.2 versus 49.7 ± 8.8, P = 0.04)

Lower mean postoperative PROMIS Pain Interference 8a score in intervention arm (60.6 ± 8.2 versus 56.6 ± 9.4, P = 0.04)

Bian et al., 2020 [18]

Frame based

Mixed dialogue initiative

Spoken input/output

Smartphone (call)

92.2% follow-up rate (versus 93.3% in control)

Patient feedback

10.3% of patients contacted via chatbot provided feedback (versus 2.5% control)

Time per 100 patients

0 versus 9.3 hours for chatbot and control respectively

Bibault et al., 2019 [16]

Frame based

User-focused dialogue

Text input and output

Web based or smartphone application

Not reported

Quality of response

Perceived quality of the answers provided to user queries was assessed using the QLQ-INFO25 (a patient-satisfaction score). Patients assessing chatbot responses gave a higher average rating than for responses given by physicians in real time. Success was defined as a score greater than or equal to 3 on a satisfaction scale of 1–4. Overall, non-inferiority was demonstrated in perceived quality of responses; however, when the individual items of the QLQ-INFO25 were assessed, non-inferiority of response satisfaction could not be demonstrated for 9 of 25 items

Patient satisfaction

59% of patients wanted more information (versus 65% control)

85% of patients found information useful (versus 83.1% control)

85% of patients satisfied with amount of information received (versus 77% control)

Black et al., 2020 [21]

Mixed dialogue initiative

Orientated to book follow-up appointments

Smartphone (application)

83.3% of participants engaged with the chatbot

Patient experience

60% highly satisfied (rated chatbot useful or very useful)

Chaix et al., 2019 [24]

Frame based

User-focused dialogue

Text input and output

Web based or smartphone application

31% retention rate after 8 months (N.B. only calculated for 956 patients)

User response characteristics

Average response length 21.5 words

Patient experience

93.95% overall patient satisfaction

88% stated that chatbot provided them with support and helped them follow their treatment effectively

Giorgino et al., 2005 [20]

Frame based

Mixed dialogue initiative

Spoken input/output

Orientated to integrate with medical records

Smartphone (call)

Not reported

Technical details

80% consultation conclusion rate reached by system

18 questions per interaction

Average consultation call time 3.3 minutes

Goldenthal et al., 2019 [23]

Frame based

System-focused dialogue initiative

Text input/output

Smartphone (application)

35% of participants engaged with chatbot

Reasons for not activating chatbot

Misplacing instructions for chatbot use (n = 6), relying on follow-up with clinic or discharge materials (n = 4), inability to activate chatbot (n = 2) and inability to text (n = 1)

Greer et al., 2019 [17]

Frame based

Mixed dialogue initiative

Text input and output

Smartphone (Facebook Messenger)

Mean of 12.1 sessions (73.8 minutes total engagement time) across 4 weeks versus 18.1 sessions (27.1 minutes total engagement time)

Patient satisfaction

Patients rated chatbot as useful (average score 2/3)

Patients likely to recommend chatbot to friend (average rating 6.9/10)

Anxiety and depression symptoms

Participants in intervention arm reported greater reduction in anxiety versus the control arm as per the PROMIS Emotional Distress-Anxiety Short Form (2.58 t-score units versus 0.7, P = 0.09)

Both intervention and control arms reported a reduction in depressive symptoms as per the PROMIS Emotional Distress-Depression Short Form (1.83 versus 1.38, P = 0.77)

Piau et al., 2019 [22]

Rule based

System-focused dialogue initiative

Voice and text input/output

Smartphone (application)

86% compliance over study period

Patient experience

3 participants provided feedback (33%)

All three were satisfied or very satisfied

Technical details

3.5-minute average time to complete consultation

All patients had a smartphone prior to recruitment

Rhee et al., 2014 [19]

Frame based

Mixed dialogue initiative

Text input/output

Smartphone (application)

81–97% response rate for system-initiated questions

Patient experience

High overall satisfaction reported

Technical details

Average number of user-initiated questions: 19

PROMIS, Patient Reported Outcomes Measurement Information System.

References (21 in total)

1.  RoB 2: a revised tool for assessing risk of bias in randomised trials.

Authors:  Jonathan A C Sterne; Jelena Savović; Matthew J Page; Roy G Elbers; Natalie S Blencowe; Isabelle Boutron; Christopher J Cates; Hung-Yuan Cheng; Mark S Corbett; Sandra M Eldridge; Jonathan R Emberson; Miguel A Hernán; Sally Hopewell; Asbjørn Hróbjartsson; Daniela R Junqueira; Peter Jüni; Jamie J Kirkham; Toby Lasserson; Tianjing Li; Alexandra McAleenan; Barnaby C Reeves; Sasha Shepperd; Ian Shrier; Lesley A Stewart; Kate Tilling; Ian R White; Penny F Whiting; Julian P T Higgins
Journal:  BMJ       Date:  2019-08-28

2.  Implementation of virtual consultation for hand surgeons and therapists: an international survey and future implications.

Authors:  Alexander Scarborough; Luke Geoghegan; Maxim D Horwitz; Zaf Naqui
Journal:  J Hand Surg Eur Vol       Date:  2020-06-28

3.  An international validation study of the EORTC QLQ-INFO25 questionnaire: an instrument to assess the information given to cancer patients.

Authors:  Juan Ignacio Arraras; Eva Greimel; Orhan Sezer; Wei-Chu Chie; Mia Bergenmar; Anna Costantini; Teresa Young; Karin Kuljanic Vlasic; Galina Velikova
Journal:  Eur J Cancer       Date:  2010-07-30

4.  Assessing the feasibility of a chatbot after ureteroscopy.

Authors:  Steven B Goldenthal; David Portney; Emma Steppe; Khurshid Ghani; Chad Ellimoottil
Journal:  Mhealth       Date:  2019-03-15

5.  ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions.

Authors:  Jonathan A C Sterne; Miguel A Hernán; Barnaby C Reeves; Jelena Savović; Nancy D Berkman; Meera Viswanathan; David Henry; Douglas G Altman; Mohammed T Ansari; Isabelle Boutron; James R Carpenter; An-Wen Chan; Rachel Churchill; Jonathan J Deeks; Asbjørn Hróbjartsson; Jamie Kirkham; Peter Jüni; Yoon K Loke; Theresa D Pigott; Craig R Ramsay; Deborah Regidor; Hannah R Rothstein; Lakhbir Sandhu; Pasqualina L Santaguida; Holger J Schünemann; Beverly Shea; Ian Shrier; Peter Tugwell; Lucy Turner; Jeffrey C Valentine; Hugh Waddington; Elizabeth Waters; George A Wells; Penny F Whiting; Julian P T Higgins
Journal:  BMJ       Date:  2016-10-12

6.  Acceptance and Commitment Therapy Delivered via a Mobile Phone Messaging Robot to Decrease Postoperative Opioid Use in Patients With Orthopedic Trauma: Randomized Controlled Trial.

Authors:  Chris A Anthony; Edward Octavio Rojas; Valerie Keffala; Natalie Ann Glass; Apurva S Shah; Benjamin J Miller; Matthew Hogue; Michael C Willey; Matthew Karam; John Lawrence Marsh
Journal:  J Med Internet Res       Date:  2020-07-29

7.  When Chatbots Meet Patients: One-Year Prospective Study of Conversations Between Patients With Breast Cancer and a Chatbot.

Authors:  Benjamin Chaix; Jean-Emmanuel Bibault; Arthur Pienkowski; Guillaume Delamon; Arthur Guillemassé; Pierre Nectoux; Benoît Brouard
Journal:  JMIR Cancer       Date:  2019-05-02

8.  A Chatbot Versus Physicians to Provide Information for Patients With Breast Cancer: Blind, Randomized Controlled Noninferiority Trial.

Authors:  Jean-Emmanuel Bibault; Benjamin Chaix; Arthur Guillemassé; Sophie Cousin; Alexandre Escande; Morgane Perrin; Arthur Pienkowski; Guillaume Delamon; Pierre Nectoux; Benoît Brouard
Journal:  J Med Internet Res       Date:  2019-11-27

9.  Conversational agents in healthcare: a systematic review.

Authors:  Liliana Laranjo; Adam G Dunn; Huong Ly Tong; Ahmet Baki Kocaballi; Jessica Chen; Rabia Bashir; Didi Surian; Blanca Gallego; Farah Magrabi; Annie Y S Lau; Enrico Coiera
Journal:  J Am Med Inform Assoc       Date:  2018-09-01

10.  Use of the Chatbot "Vivibot" to Deliver Positive Psychology Skills and Promote Well-Being Among Young People After Cancer Treatment: Randomized Controlled Feasibility Trial.

Authors:  Stephanie Greer; Danielle Ramo; Yin-Juei Chang; Michael Fu; Judith Moskowitz; Jana Haritatos
Journal:  JMIR Mhealth Uhealth       Date:  2019-10-31
