Yi Shan, Meng Ji, Wenxiu Xie, Xiaobo Qian, Rongying Li, Xiaomin Zhang, Tianyong Hao.
Abstract
BACKGROUND: Given the growing significance of conversational agents (CAs), researchers have conducted a plethora of relevant studies on various technology- and usability-oriented issues. However, few investigations have focused on language use in CA-based health communication or examined its influence on users' perception of CAs and their role in delivering health care services.
Keywords: conversational agent; health communication; language use; systematic review
Year: 2022 PMID: 35802407 PMCID: PMC9308072 DOI: 10.2196/37403
Source DB: PubMed Journal: J Med Internet Res ISSN: 1438-8871 Impact factor: 7.076
Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart of the selection of eligible studies.
Information extracted from the 11 selected studies.
| Reference, first author, and year of publication | Health care application | Target population | Study design | Major findings | Limitations |
| [ | A public health CAa prototype | People in French and German linguacultures | An internet-based experiment | The CA’s choice of formal and informal forms of the second-person pronoun “You”— | Given that the study involved a complicated 4-way interaction between T/V distinction, language and culture, age, and gender, the sample size is not sufficiently large to ensure more generalizable findings. Therefore, the implications for CA designers are affected. |
| [ | Commonly available, general-purpose CAs on smartphones and smart speakers | Unspecified | Following a piloted script to present health- and lifestyle-related prompts to 8 CAs | The ratio of the CAs’ appropriate responses decreased when safety-critical prompts were rephrased or when the agent used a voice-only interface. | Some response structures were derived from the patterns observed in the responses to a reasonably limited set of studied prompts, possibly not capturing additional or different structural elements of the CAs’ responses based on a larger set of prompts. |
| [ | An expressive, speech-enabled digital health agent to deliver an internet-based brief behavioral health intervention for alcohol use | 51 alcohol users in the United States | Description of the CA design, acceptability, feasibility, and utility | The CA used a model of empathetic verbal and nonverbal behaviors to engage users, who had overwhelmingly positive experiences with the digital health agent, including engagement with the technology, acceptance, perceived utility, and intent to use the technology. | It is unclear whether the model of empathetic verbal and nonverbal behaviors that the CA used to successfully engage young and middle-aged adults can equally engage older adults or children in the United States, or users in other countries, especially considering cultural factors that may influence the perception of language. |
| [ | 68 phones from 7 producers | Investigators | A pilot study followed by a cross-sectional study | Some CAs replied to users’ concerns with respectful language and referred them to helplines, emergency services, and nearby medical facilities, but some failed to do so. | Investigators used standardized phrases for health and interpersonal violence concerns, but people asking for help on their personal smartphones may use different phrases, which may influence the CAs’ responses. |
| [ | A mental health and well-being chatbot named Ash | Young people aged 15-17 years living in Australia | Interviews and a survey | The chatbot failed to identify and understand critical words and to generate responses appropriate to them. | The imbalance between the numbers of male and female participants who interacted with the chatbot may have influenced its responses. |
| [ | Chatbots for people with Parkinson disease | People with Parkinson disease | A description of chatbots for people with Parkinson disease | The chatbots can engage with patients in random, human-like conversations. | The study failed to cite chatbot-patient conversations to illustrate the randomness and human-likeness of the conversations. |
| [ | CAs | German participants | An internet-based questionnaire and a comparative study | Only an exemplary anamnesis with a CA shows the CA’s polite, respectful, and encouraging language style. | The study failed to discuss the role of the CA’s language style in soliciting disclosure of medical information from patients. |
| [ | A chatbot named Alex | Children on the autism spectrum | A description of a chatbot | The chatbot is able to engage with the user on a variety of topics using symbols and images. | The study aimed to describe a new chatbot and did not provide real-time exemplary conversations between the chatbot and real patients, making it difficult for us to understand the role of its language style in engaging patients. |
| [ | A motivational interviewing–based chatbot | Adult cigarette smokers | A single-arm prospective iterative design study | Owing to the running head start technique that the chatbot used when initiating conversations, 34.7% (42/121) of participants enjoyed interacting with the chatbot. | The running head start technique might not be appropriate or helpful for those who were already exhibiting behavior change. |
| [ | An artificial CA named Harlie | People with neurological conditions such as Parkinson disease and dementia | A description of a chatbot | The chatbot is able to converse with the user on a variety of topics. | The study focused on the chatbot’s role in performing different tasks without attaching importance to the function of its language style in engaging the patients. |
| [ | A trainee chatbot named Edna | 5 genetic counselors and adults who had whole exome sequencing conducted for diagnosis of a genetic condition, either for themselves or their child | A description of a chatbot | The chatbot can engage users with a polite, respectful, and encouraging language style. | The chatbot cannot engage in conversations related to the impact of specific genetic conditions, emotive personal circumstances, or expert medical advice, which possibly influences its language style. |
aCA: conversational agent.
Examples of conversational agents’ strategic choice of words and utterances.
| Categories | Examples |
| Respectful | “I will not pressure you in any way.” |
| Helpful | “Shall I call them for you?” / “Need help?” / “Maybe it would help to talk to someone about it.” |
| Supportive | “I’ll always be right here for you.” / “There must be something I can do to make you feel better.” |
| Comforting | “Don’t worry. Things will turn around for you soon.” / “Keep your chin up, good things will come your way.” |
| Empathetic | “I’m sorry to hear that.” / “It breaks my heart to see you like that.” |