
Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study.

Tom Nadarzynski1, Oliver Miles2, Aimee Cowie3, Damien Ridge1.   

Abstract

BACKGROUND: Artificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots is required to predict uptake; however, few studies to date have explored their acceptability. This research aimed to explore participants' willingness to engage with AI-led health chatbots.
METHODS: The study incorporated semi-structured interviews (N = 29), which informed the development of an online survey (N = 216) advertised via social media. Interviews were audio-recorded, transcribed verbatim and analysed thematically. A 24-item survey explored demographic and attitudinal variables, including acceptability and perceived utility. The quantitative data were analysed using binary logistic regressions with a single categorical predictor.
RESULTS: Three broad themes were identified: 'Understanding of chatbots', 'AI hesitancy' and 'Motivations for health chatbots', outlining concerns about accuracy, cyber-security, and the inability of AI-led services to empathise. The survey showed moderate acceptability (67%), which was negatively associated with poorer perceived IT skills, OR = 0.32 [95% CI: 0.13–0.78], and a dislike of talking to computers, OR = 0.77 [95% CI: 0.60–0.99], and positively associated with perceived utility, OR = 5.10 [95% CI: 3.08–8.43], positive attitude, OR = 2.71 [95% CI: 1.77–4.16], and perceived trustworthiness, OR = 1.92 [95% CI: 1.13–3.25].
CONCLUSION: Most internet users would be receptive to using health chatbots, although hesitancy regarding this technology is likely to compromise engagement. Intervention designers focusing on AI-led health chatbots need to employ user-centred and theory-based approaches addressing patients' concerns and optimising user experience in order to achieve the best uptake and utilisation. Patients' perspectives, motivation and capabilities need to be taken into account when developing and assessing the effectiveness of health chatbots.


Keywords:  AI; Acceptability; Artificial Intelligence; bot; chatbot

Year:  2019        PMID: 31467682      PMCID: PMC6704417          DOI: 10.1177/2055207619871808

Source DB:  PubMed          Journal:  Digit Health        ISSN: 2055-2076


Introduction

Artificial intelligence (AI) is an umbrella term for computer software built around complex mathematical algorithms that process input information to produce specific pre-defined outputs, which lead to relevant outcomes.[1] AI systems, which utilise large datasets, can be designed to enhance decision-making and analytical processes while imitating human cognitive functions. AI has been applied in medicine and various healthcare services such as diagnostic imaging, genetic diagnosis, clinical laboratory work, screening and health communications.[2,3] These systems aid physicians by providing pertinent medical information to reduce diagnostic or therapeutic errors and by alerting them to high-risk health outcomes. The recent digitalisation of healthcare services in the UK offers access to large pools of clinical data such as medical notes, electronic records, physical and laboratory examinations, and patient demographic and behavioural characteristics.[4] It is anticipated that by 2024 every patient in England will have digital access to primary care consultations, with a reduced need for face-to-face outpatient visits. In addition, there is an ongoing transformation to provide fully digitalised acute, community and mental health services across all locations. AI systems can utilise such clinical data to enhance diagnostic accuracy and enable clinicians to offer patient-centred medical care, while eliminating variations across the country and helping patients to manage their own conditions.
Chatbots, a form of AI application, are natural language processing systems that act as virtual conversational agents mimicking human interaction.[5] While this technology is still in its developmental phase, health chatbots could potentially increase access to healthcare, improve doctor–patient and clinic–patient communication, or help to manage the increasing demand for health services, for example via remote testing, medication adherence monitoring or teleconsultations.[6-8] Chatbot technology allows for activities such as administering health surveys, setting up personal health-related reminders, communicating with clinical teams, booking appointments, retrieving and analysing health data, or translating diagnostic patterns while taking into account behavioural indicators such as physical activity, sleep or nutrition.[9] Such technology could potentially alter the delivery of healthcare, increasing the uptake, equity and cost-effectiveness of health services while narrowing the health and well-being gap,[10] but these assumptions require further research. So far, chatbots have been applied in health education, diagnostics and mental health. A survey of conversational agents across 40 articles outlines a chatbot taxonomy, specifies the main challenges and defines the types and contexts related to chatbots in health.[11] For example, chatbots can provide instant responses to health-related enquiries from patients while looking for specific patterns of symptoms to predict disease, as demonstrated by the internet-based Doc-Bot delivered via mobile phone or a Messenger-based chatbot for outpatient and translational medicine.[12] They can be tailored to specific populations, health conditions or behaviours. Crutzen et al. demonstrated high engagement with a chatbot providing adolescent students with education on sex, drugs and alcohol.[13] Users had positive views of the chatbot, with emphasis on its anonymity as well as the quality and speed of the information received in comparison to popular search engines. The chatbot was seen as a reliable source of information; however, its ease of use was rated as low, indicating challenges in implementing the technology on a larger scale. Other systems have been proposed to act as symptom checkers,[14] online triage services[15] or health promotion assistants,[16] providing live feedback in an interactive way. In addition, a number of studies have shown the usability of chatbot systems in mental health, in particular as a novel way of developing therapeutic and preventative interventions.[17-19] For example, Ly et al. demonstrated the effectiveness of a chatbot based on cognitive behavioural therapy and positive psychology techniques in a non-clinical population.[20] There was a significant impact on well-being and perceived stress, with some participants reporting a specific ‘digital relationship’ with the chatbot. Nevertheless, chatbot systems are typically designed for specific functions, mainly to provide information. One of the main criticisms of chatbots is that they are not capable of empathy, notably of recognising users’ emotional states and tailoring responses to reflect these emotions. This lack of empathy may compromise engagement with health chatbots.[21] There is little research on health chatbot acceptability and the motivations for their use.
The acceptability of a healthcare intervention is a multi-faceted construct based on a range of dimensions including burden, values, effectiveness, cognition and emotional responses.[22] A study of 100 physicians in the US concluded that although the majority believed chatbots could assist with scheduling doctors’ appointments, locating health clinics and providing information about medication, over 70% also thought that chatbots could not care for all patients’ needs or display emotion, and could pose a risk to patients through incorrect self-diagnosis.[23] As these digital systems are capable of enhancing patient experiences of healthcare, and potentially influencing health behaviours, a theory-driven and person-centred approach is needed to inform their development and implementation. This study aimed to explore the acceptability of AI-led health chatbots in order to identify possible barriers and facilitators that could influence the delivery of these novel services. The findings are likely to inform the development of health chatbots using person-based approaches.

Method

Design

We used a mixed-methods approach[24] to assist in creative knowledge generation of a multi-layered issue.[25] Specifically, we incorporated face-to-face semi-structured interviews and an online survey format to explore the motivations for the use of chatbots in healthcare. Our interviews were guided by a topic schedule and informed the development of the exploratory survey distributed on social media. The study was approved by the University of Southampton Ethics Committee (ref: 30986/31719).

Recruitment and data collection

Between November 2017 and January 2018, paper and digital adverts were distributed around the University of Southampton campus inviting students to take part in individual interviews to assess attitudes towards new technologies in healthcare. Potential participants were asked to email the researchers to arrange a suitable time and place for the interview. There were no specific exclusion criteria, although participants needed to be aged 18 years or over and capable of consenting to the study. There was no focus on any particular population in relation to healthcare utilisation, and this qualitative component of the study aimed to explore general views on health chatbots. As the advertisement strategy concentrated on university settings, it was assumed that most participants were familiar with conventional digital technologies. The qualitative arm of the study was guided by a topic schedule based on the theoretical framework of healthcare intervention acceptability,[22] adapted for health chatbots. The schedule consisted of five sets of open-ended questions exploring the understanding of chatbots, attitudes, usability and general concerns. The semi-structured interviews took place in a room at the University of Southampton. The interviews were conducted by two trained researchers, who were also involved in transcription, analysis and data validation. All participants were asked to sign a consent form and were reminded of their right to confidentiality and that they could withdraw from the study at any time without penalty. It was assumed that many participants had not used a chatbot before; thus a chatbot demonstration was performed during the interview. The participants were asked to conduct a live conversation with a popular chatbot[26] in order to gather more credible views on chatbot acceptability.
The interaction allowed participants to put any question to the chatbot and receive an immediate answer. The interviews lasted 20–30 minutes, were audio-recorded and transcribed verbatim. No incentive was offered to participants. Between February and June 2018, an advert for the online survey was distributed on social media pages (i.e. university accounts on Facebook, Twitter and eFolio, such as the student union) inviting users to complete a short questionnaire about health chatbots. No particular health-specific pages were targeted for the advertisement; however, this quantitative arm of the study used a digital snowball sampling method encouraging users to share the study advert on their social media profiles. This method is likely to represent the views of internet users more familiar with social media, although no specific populations were targeted. Participation was voluntary, and respondents were offered a chance to enter a £50 prize draw. Internet users were directed to the survey after clicking on a pre-designed online advert. They were shown information about the study and asked to provide online consent by ticking a box. The survey took about 10 minutes to complete. The online survey consisted of 24 items, both demographic and attitudinal. It was developed based on the theoretical framework of acceptability[22] and the findings from the qualitative interviews. The participants were asked general questions about their awareness and experience of chatbots, and more specifically about health chatbots. They were then presented with two sets of questions examining the perceived usefulness of health chatbots and their general attitudes.
The perceived usefulness questions, assessed using a 5-point Likert scale (from ‘extremely unlikely’ to ‘extremely likely’), asked participants to rate their willingness to use chatbots for seeking general health information, information about medication, various diseases, potential symptoms, seeking results of medical tests, booking a medical appointment and looking for specialist medical services. The attitudinal questions, assessed using a 5-point Likert scale (from ‘strongly disagree’ to ‘strongly agree’), asked participants to indicate their agreement with 16 statements about their healthcare such as the worry about digital privacy, the accuracy of health information online, the preference for face-to-face interaction and trust in advice from a health chatbot. The main outcome measure – health chatbot acceptability – was assessed using one question: ‘How likely would you be to use a health chatbot in the next 12 months if it was available to you today?’ with five options (from ‘extremely unlikely’ to ‘extremely likely’).
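The coding strategy used for these Likert items in the quantitative analysis (collapsing responses to a binary outcome and excluding neutral values, as described under Data analysis) can be sketched in Python. This is an illustrative snippet, not the authors' code, and the intermediate response labels (`somewhat likely`, `somewhat unlikely`) are assumptions rather than wording taken from the published instrument:

```python
def dichotomise(response):
    """Collapse a 5-point Likert response into a binary outcome.

    'likely' responses -> 1, 'unlikely' responses -> 0, and the neutral
    midpoint -> None (excluded from the regression), mirroring the
    dichotomisation strategy described in the Method section.
    """
    positive = {"somewhat likely", "extremely likely"}        # assumed labels
    negative = {"somewhat unlikely", "extremely unlikely"}    # assumed labels
    if response in positive:
        return 1
    if response in negative:
        return 0
    return None  # 'neither likely nor unlikely' is dropped

# Example: three hypothetical survey responses
responses = ["extremely likely", "neither likely nor unlikely", "somewhat unlikely"]
coded = [dichotomise(r) for r in responses]
print(coded)  # [1, None, 0]
```

Respondents coded `None` would simply be filtered out before fitting each single-predictor model.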

Data analysis

Thematic analysis[27] was conducted on the qualitative data to identify common patterns and trends. Two researchers familiarised themselves with the data by repeatedly reading the transcripts to enhance understanding. The analyses were conducted independently using NVIVO software, where data were coded and then categorised into meaningful themes and subthemes. The results of the analysis were discussed between the researchers to reach agreement on the final set of findings. The themes were then validated by two researchers comparing quotes against the identified themes. Descriptive and inferential statistics were conducted on the quantitative data. All variables were dichotomised, and neutral values excluded, in order to perform binary logistic regressions with a single categorical predictor to determine the correlates of health chatbot acceptability. An adjusted model was not fitted as it did not meet the statistical assumptions, due to multicollinearity and the non-binomial distribution of responses. However, the regressions allowed us to assess the correlates of chatbot acceptability and their directions. Odds ratios and 95% confidence intervals are presented as the magnitude of association with the outcome variable in an explorative manner.
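As an illustration of this analytic approach (a sketch, not the study's actual analysis code): a binary logistic regression with a single dichotomous predictor yields the same unadjusted odds ratio as the familiar 2x2 cross-tabulation, so the estimate and its Wald 95% confidence interval can be computed directly from cell counts. The counts below are hypothetical:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table.

    a = predictor present, outcome present;  b = predictor present, outcome absent
    c = predictor absent,  outcome present;  d = predictor absent,  outcome absent
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts for illustration only (not the study data):
# 20 of 50 'agree' respondents accepted, vs 60 of 80 'disagree' respondents.
or_, lo, hi = odds_ratio_ci(20, 30, 60, 20)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # OR = 0.22, 95% CI [0.10, 0.47]
```

The same numbers would be recovered by exponentiating the slope coefficient (and its confidence bounds) from a logistic regression of the binary outcome on the binary predictor.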

Results

Sample characteristics

In our qualitative sub-study, 29 participants (all university students; 24 self-identified as White and 15 as women) aged 18–22 years were interviewed. In our quantitative sub-study, 215 users completed the survey. The mean age was 30 years (SD = 12, range: 18–62) and the majority were women (61%), of White ethnicity (64%) and educated below university-degree level (54%). Most (76%) rated their IT skills as ‘good’ or ‘very good’, and many reported looking for medical information online a few times per year (41%) or every month (33%).

Qualitative analysis

Qualitative data were organised into three themes: ‘Understanding of chatbots’, ‘AI hesitancy’ and ‘Motivations for health chatbots’. Table 1 presents all themes and subthemes with corresponding quotes.
Table 1.

Quotes from thematic analysis on the motivations for AI-led chatbots in healthcare.

Theme (sub-theme) | Illustrative quotes
Understanding of chatbots
 (Awareness) “I think that it’s online and you ask it questions and it can reply to you with information. It is not a real person. It is like stored information.” “There are those diagnosis bots and you can use those to get an idea of what is wrong with you or what your next step could be, if it can just tell you a rough severity of what you may have. Then I suppose it is useful in those cases but, in terms of just chatting to something and telling it your problems or whatever and trying to get a diagnosis, if you are using it in that kind of way then I think that is a very limited thing to try and do.” “I’ve used one for banking before, and you just type in your query. And I think I’ve used some which you type to and it picks out keywords, and one which you can select the most appropriate response.”
 (Experience)“Where it talks back to you, it can be more specific, compared to Google where you have to search and look through.”
AI hesitancy
 (Perceived accuracy) “It’s not that I think they [chatbots] would intentionally give me false information. I just don’t know how accurate they are. I don’t know whether they can be as accurate as doctors can be.” “I would find it hard to trust a health chatbot because it is just looking online at things. You would want a professional opinion.”
 (Premature technology) “I think it has a lot of potential in the future, but now certain places are releasing it before they have perfected their own system, then it could put people off. Because you can end up chatting for like half an hour and go back to being at the same question you were at in the first place. With that people get angry.” “I don’t know whether the technology is as adequate as a doctor.”
 (Non-human interaction) “I think a lot of people would be put off just being with a chatbot. It is like a segregation thing, I don’t think it will replace human interaction.” “If you are looking at a chatbot thinking that it is a replacement for a person then you are looking at it in the wrong way. If you were looking for a deep and meaningful conversation you are not going to find one.”
 (Cyber-security) “Some people might find issues with confidentiality because if you were with the doctor it is just you and them, but with chatbots, you don’t know who is behind it all.” “Some things are confidential and you wouldn’t just type it on the internet. You would want the confidentiality of the GP practice.”
Motivations for health chatbots
 (Anonymity) “I think for mental health it would be pretty useful because I think that it’s a lot harder to talk to a real person about that. Maybe sexual health too. I’m pretty open generally about both of those things but I can see where they might be seen as a better alternative due to privacy and not having to face a person and describe all of these problems.” “I think mental health would be a good thing to use a chatbot in, because some people with mental health issues they do not want to open up to an actual person, so it would be easier doing it over the internet in the comfort of their own home.”
 (Convenience)“You can use a chatbot instead of googling it and reading advice on the NHS page. If that information is integrated into the chatbot then yeah it would certainly save time.” “It would be good for healthcare because obviously, it is so hard to get a doctor’s appointment, so for people who have general queries like it would be good for them to quickly get advice on whether or not it is an urgent issue.”
 (Sign-posting) “Chatbot can tell you if you need further advice or you will worry. And it will reassure you so you don’t go to the doctors.” “People who have no awareness of or are perhaps too worried, it could be a good way to get in touch. Just like if you use 111, first of all, you go to a chatbot and they would figure out what is wrong and if there was a severity to it they would ring 999 and get an ambulance, otherwise, they can direct you to a GP and a clinic.”

Understanding of chatbots

Most participants reported having heard about bots, notably in the context of social media or customer service, but were unsure how they functioned technologically. Owing to limited experience with chatbots, the majority were unable to recall whether they had ever used one for their healthcare. After the chatbot demonstration, the participants appreciated the mainstream conversational systems available, such as Alexa or Google Home, particularly in relation to information searches. However, they agreed that this technology was still emergent, and not part of mainstream culture, despite extensive media coverage of AI. There was a general lack of familiarity with, and understanding of, health chatbots amongst participants.

AI hesitancy

Many participants were hesitant about whether they would incorporate chatbots into their healthcare. They were uncertain about the quality, trustworthiness and accuracy of the health information provided by chatbots, as the sources underpinning such services were not transparent. The majority of participants reported not being able to understand the technological complexity of chatbots, in particular how they are able to respond correctly to a health enquiry. There was doubt about whether a chatbot could correctly identify symptoms of less common health conditions or diseases. A number of participants emphasised the potential for miscommunication between a chatbot and its users, who might not be able to accurately describe their health issues or name symptoms. There was a perception of a risk of harm if the information provided by a chatbot was inaccurate or inadequate. In general, there was a view that this technology was premature in terms of providing a diagnosis, as it was seen as ‘unqualified’. However, most participants found receiving general health advice acceptable. While a few participants thought that well-designed chatbots could be more accurate and logical than doctors, the lack of human presence was seen as the main limitation. In particular, participants worried about the lack of empathy and the inability of chatbots to understand more emotional issues, notably in mental health. The responses given by chatbots were seen as depersonalised, cold and inhuman. They were perceived as inferior to a doctor’s consultation, although several participants admitted that this technology offered a level of anonymity which could facilitate the disclosure of more intimate or uncomfortable aspects of health. Other participants were concerned about cyber-security and the ability (or not) of chatbots to maintain confidentiality, so that their sensitive health-related information was protected from potential hacking or data leakage.
There was also a concern that health chatbots could reduce the overall quality of healthcare if they were to replace experienced trained professionals.

Motivations for health chatbots

The majority of participants were willing to use chatbots for minor health concerns that would not require a physical examination. They were perceived as a convenient tool that could facilitate the seeking of health information online. Several participants compared chatbots to medical phone helplines, such as NHS Direct, that provide rapid guidance and health advice on minor health issues. They perceived chatbots to be particularly useful when they might struggle to comprehend the advice given via telephone, seeing written information as easier to understand. Some expressed preferences for a web-chat format of conversation. Thus, if free at the point of access, chatbots were seen as time-saving and useful platforms for triaging users to appropriate healthcare services.

Quantitative analysis

Table 2 presents sample characteristics and correlates of health chatbot acceptability amongst the 215 participants. The analysis showed that while only 6% had heard of a health chatbot and 3% had experience of using one, 67% perceived themselves as likely to use one within 12 months. None of the demographic variables was associated with acceptability, although those who perceived themselves to have poor or moderate IT skills showed lower acceptability. The majority of participants would use a health chatbot for seeking general health information (78%), booking a medical appointment (78%) and looking for local health services (80%). However, a health chatbot was perceived as less suitable for seeking the results of medical tests or specialist advice such as sexual health. All the items measuring perceived utility were associated with chatbot acceptability, with the highest levels reported for seeking general health information as well as information about symptoms and medication. The analysis of attitudinal variables showed that most participants reported a preference for discussing their health with doctors (73%) and valued having access to reliable and accurate health information (93%). While 80% were curious about new technologies that could improve their health, 66% reported only seeking a doctor when experiencing a health problem and 65% thought that a chatbot was a good idea. Interestingly, 30% reported disliking talking to computers, 41% felt it would be strange to discuss health matters with a chatbot and about half were unsure whether they could trust the advice given by a chatbot. Nine attitudinal items were associated with acceptability, with perceived trust and the belief that a chatbot was a good idea being the strongest predictors.
Table 2.

Sample characteristics and predictors of health chatbot acceptability.

Variable | Total sample, n (%) | (%) ‘likely’ to use health chatbot within 12 months / Odds ratio [95% CI]

Age [mean, SD]: 30, 12
 Below 25 years | 113 (53) | (89) / 2.07 [0.87–4.93]
 25 years and above | 102 (47) | (80) / Ref
Gender
 Male | 84 (39) | (87) / 1.36 [0.55–3.38]
 Female | 131 (61) | (84) / Ref
Ethnicity
 White | 138 (64) | (83) / 0.49 [0.17–1.39]
 Non-white | 77 (36) | (90) / Ref
Education
 Below university degree | 116 (54) | (84) / 0.79 [0.33–1.87]
 University degree | 99 (46) | (86) / Ref
Perceived IT skills
 Poor or moderate | 51 (24) | (72) / 0.32 [0.13–0.78]*
 Good or very good | 164 (76) | (89) / Ref
Health information seeking
 Several times per year | 103 (48) | (83) / 0.72 [0.31–1.70]
 Every month or more often | 112 (52) | (87) / Ref
Health chatbot awareness
 Yes | 12 (6) | Assumption not met (a)
 No | 203 (94) |
Past health chatbot use
 Yes | 7 (3) | Assumption not met (a)
 No | 202 (97) |
Likelihood of using health chatbot within 12 months if available (acceptability)
 Likely | 143 (67) |
 Neutral | 45 (21) |
 Unlikely | 25 (12) |
Perceived utility of health chatbots (#)
To seek general health information
 Likely | 168 (78) | (98) / 5.10 [3.08–8.43]*
 Unlikely | 24 (11) | (8) / Ref
To seek information about medication
 Likely | 128 (60) | (99) / 3.21 [1.92–5.37]*
 Unlikely | 52 (24) | (49) / Ref
To seek information about diseases
 Likely | 148 (69) | (97) / 2.97 [2.10–4.10]*
 Unlikely | 38 (18) | (33) / Ref
To seek information about symptoms
 Likely | 144 (67) | (98) / 3.44 [2.25–5.24]*
 Unlikely | 28 (13) | (28) / Ref
To seek results of medical tests
 Likely | 83 (39) | (92) / 1.42 [1.08–1.85]*
 Unlikely | 81 (38) | (75) / Ref
To book a medical appointment
 Likely | 167 (78) | (92) / 1.88 [1.46–2.42]*
 Unlikely | 34 (16) | (50) / Ref
To look for local medical services (e.g. pharmacy)
 Likely | 172 (80) | (92) / 2.25 [1.64–3.08]*
 Unlikely | 17 (8) | (33) / Ref
To seek specialist advice (e.g. sexual health)
 Likely | 104 (48) | (98) / 2.69 [1.61–4.49]*
 Unlikely | 65 (30) | (61) / Ref
Beliefs associated with chatbot acceptability (#)
Worried about health
 Agree | 107 (50) | (81) / 0.94 [0.74–1.19]
 Disagree | 70 (33) | (85) / Ref
Worried about privacy using a health chatbot
 Agree | 100 (47) | (93) / 1.42 [1.08–1.86]*
 Disagree | 71 (33) | (77) / Ref
Worried about the security of information
 Agree | 100 (47) | (93) / 1.36 [1.02–1.81]*
 Disagree | 71 (33) | (80) / Ref
Confident in finding accurate health information online
 Agree | 104 (48) | (87) / 1.31 [1.03–1.67]*
 Disagree | 50 (23) | (70) / Ref
Confident in identifying own health symptoms
 Agree | 146 (68) | (88) / 1.20 [0.90–1.60]
 Disagree | 32 (15) | (78) / Ref
Comfortable with outlining symptoms to a chatbot
 Agree | 131 (61) | (91) / 1.46 [1.07–1.99]*
 Disagree | 26 (12) | (68) / Ref
Prefer to talk face to face with a doctor about health
 Agree | 158 (73) | (83) / Assumption not met (a)
 Disagree | 19 (9) | (100)
I don’t like talking to computers
 Agree | 64 (30) | (77) / 0.77 [0.60–0.99]*
 Disagree | 103 (48) | (90) / Ref
It would be strange to talk to a chatbot about health
 Agree | 88 (41) | (76) / 0.72 [0.54–0.97]*
 Disagree | 68 (32) | (93) / Ref
Health chatbot could help to make better decisions
 Agree | 65 (30) | (100) / Assumption not met (a)
 Disagree | 42 (19) | (62)
Would trust advice from a health chatbot
 Agree | 59 (27) | (98) / 1.92 [1.13–3.25]*
 Disagree | 54 (25) | (78) / Ref
A health chatbot is a good idea
 Agree | 139 (65) | (93) / 2.71 [1.77–4.16]*
 Disagree | 13 (6) | (20) / Ref
Willing to enter symptoms on an online form
 Agree | 136 (63) | (91) / 1.34 [0.95–1.86]
 Disagree | 24 (11) | (76) / Ref
Curious how new technologies could improve health
 Agree | 172 (80) | (88) / 1.60 [1.15–2.21]*
 Disagree | 15 (7) | (54) / Ref
Reliable and accurate information is important
 Agree | 199 (93) | (85) / Assumption not met (a)
 Disagree | 4 (2) | (0)
Only seek a doctor if I have a health problem
 Agree | 141 (66) | (84) / 0.81 [0.55–1.19]
 Disagree | 33 (15) | (92) / Ref

*Significant at p < 0.05; SD: standard deviation; CI: confidence interval; (#) Neutral values removed for the binary regression analysis; Ref: reference category for binary regression; (a) Statistical assumptions required to perform a binary logistic regression with a single categorical predictor were not met.
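As a rough consistency check on Table 2, each unadjusted odds ratio should approximately equal the ratio of the group odds implied by the reported percentages (approximate only, because the published figures are rounded). Taking the perceived IT skills row as an example, a short Python sketch (not part of the original analysis) recovers the reported estimate:

```python
def odds(p):
    """Odds corresponding to a proportion p (0 < p < 1)."""
    return p / (1.0 - p)

# Reported proportions 'likely' to use a health chatbot (Table 2):
p_poor_it = 0.72   # poor or moderate perceived IT skills
p_good_it = 0.89   # good or very good (reference group)

or_it = odds(p_poor_it) / odds(p_good_it)
print(round(or_it, 2))  # 0.32, matching the reported OR of 0.32 [0.13-0.78]
```

The same check can be applied to any row with a `Ref` category; rows flagged "Assumption not met" contain cells too sparse (or proportions of 0% or 100%) for the odds to be defined.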


Discussion

To our knowledge, this is the first study exploring the acceptability of AI-led chatbot systems for healthcare from the perspective of members of the general public with no pre-existing medical conditions. Awareness and experience of health chatbots were low amongst our participants, and most had mixed attitudes towards these novel technologies. The qualitative analysis showed that a substantial proportion were hesitant about AI and health chatbots, mainly because of concerns about the accuracy and security of these services. There was also a view that chatbots could enable some users to discuss intimate and perhaps embarrassing health issues, promoting access to professional health services. Although they were seen as a convenient and anonymous tool for minor health issues that may carry a level of stigma, the lack of empathy and of a professional human approach made chatbots less acceptable to some users. The survey demonstrated that participants were more willing to use these systems to find general health information than to obtain the results of medical tests or specialist advice. Amongst the strongest predictors of acceptability were positive attitudes towards health chatbots and curiosity about new technologies that could improve health. Conversely, those who disliked talking to chatbots and preferred to discuss their health face-to-face with a clinician were less likely to accept chatbots. Although these innovative services were acceptable to the majority of participants, we propose that ‘AI hesitancy’ would have a negative influence on the engagement with, and effectiveness of, these technologies. Therefore, the patient perspective needs to be taken into consideration when developing AI-enabled health services. These findings are consistent with previous research and theoretical frameworks on the acceptability of novel health interventions.
The acceptability rate in the present study is comparable to the 49% acceptability level within a Dutch cohort offered the use of a chatbot for smoking cessation.[16] Nevertheless, Laufer has argued that the social acceptability of AI systems is compromised by the ambiguous status of ‘artificial’, which has negative and ‘inferior-to-natural’ connotations.[28] According to Diffusion of Innovation theories, the implementation of new technologies is a process in which adoption depends on widespread awareness, understanding and utilisation.[29] Adopters are generally divided into innovators, early adopters, majority users and laggards. While the passage of time is necessary for any innovation to be adopted, certain characteristics of social systems, such as governmental endorsement, mass media campaigns or the personal views of social role models, are likely to influence potential adopters. The theoretical framework of acceptability[22] outlines that the burden of engaging with an intervention, ethical consequences and negative user experiences are likely to increase hesitancy or even lead to the failure of the intervention. Hence, concerns about accuracy, trustworthiness and privacy, as well as the perceived lack of empathy, are likely to compromise the adoption of AI systems in healthcare. Therefore, user-centred approaches, for example incorporating qualitative methodologies or ‘A/B testing’ techniques, are necessary to overcome potential barriers to engagement.[30] These approaches require a thorough investigation into the awareness, comprehension and motivation for the use of novel health interventions. Personal agency, intervention content and quality, as well as the ‘user experience’, notably the interaction and perceived support, need to be studied and optimised for best uptake.
It is important to acknowledge that users perceived several benefits of health chatbots, notably in relation to anonymity, convenience and faster access to relevant information. This is in line with previous research showing that users may be as likely to disclose emotional and factual information to a chatbot as to a human partner.[31] Perceived understanding, disclosure intimacy and cognitive reappraisal were similar in conversations conducted with chatbots and with humans, indicating that people psychologically engage with chatbots much as they do with people. Perceived anonymity was noted by a few participants in sexual health and mental health settings, although preferences for particular chatbot uses in healthcare settings need to be explored further. Our analysis also supports the findings of a qualitative study exploring user expectations of chatbots in terms of their understanding and preferences.[32] Users are generally unclear about what chatbots can do, although they foresee this technology improving their experience by providing immediate access to relevant and valuable information. That study also showed that users saw the lack of judgement as a unique aspect of this technology, although it was noted that building rapport with a chatbot would require trust and meaningful interactions. These motivations for the use of chatbots need to be explored in more detail in order to understand how this technology could be safely incorporated into healthcare. The present study had a number of methodological strengths and limitations. The use of mixed methods allowed new concepts to be tested in the online survey, as the qualitative analysis of views on AI-led chatbots fed into the quantitative arm of the study. In addition, the demonstration of a popular chatbot and the opportunity for participants to interact with it directly likely strengthened the validity of the findings, as the views explored were not purely hypothetical but grounded in experience during the study.
The qualitative sub-study also informed the development of the exploratory survey, increasing its reliability, although further work on the development of the measurement tool is needed. Nevertheless, the survey responses were drawn mainly from students and internet users, in particular a young and educated cohort that may not be representative of the population that might be asked to use health chatbots. It is likely that these answers are more typical of people who are relatively experienced with digital technologies. Perceived IT skill was a correlate of chatbot acceptability; thus, future studies need to assess the willingness to use these technologies in clinical and community-based populations. A perspective on the acceptability of chatbots among patients who experience acute and chronic conditions would enhance understanding of the feasibility of this intervention within healthcare systems. In addition, the digital snowball sampling method through social media, as used in the present study, is likely to compromise the generalisability of the findings if participants were recruited on the basis of comparable characteristics. Subsequent assessments of chatbot acceptability should therefore employ a robust methodological design capable of capturing diverse views representative of potential healthcare users. It would also be useful to examine the acceptability of specialist chatbots that serve a particular population or address a specific condition, as well as general chatbots used as a triage tool. Different chatbot designs, notably whether health information is stored and retrieved or whether chatbots are fully conversational, could also affect acceptability and engagement. There are several implications of this study. AI intervention designers need to include the opinions of users and health professionals to maximise engagement and retention.
No AI-led health chatbot should be implemented without rigorous piloting that can address patients’ concerns and remove potential barriers. As a large number of participants reported a preference for face-to-face interaction, health chatbots should be a supplementary service rather than a replacement for the professional health workforce. While some users might perceive chatbots as a reduction in the quality of care, others might see them as an improvement, notably in overcoming ‘shameful’ issues. Thus, their mechanism of action and clinical effectiveness as an intervention should be clearly and transparently communicated to all users. Intervention designers should reassure users of the human dimensions of AI systems developed to improve health and well-being, in order to increase the acceptability of these services. This study has identified the concept of ‘AI hesitancy’. As outlined, concerns about accuracy, cyber-security, lack of empathy and technological maturity were reported as potential factors associated with delayed acceptance or outright refusal. The construct of ‘hesitancy’ has been applied in various acceptability studies, notably in vaccination research, mainly referring to levels of confidence, complacency and convenience.[33] Although constructs from the vaccine hesitancy model could potentially overlap with AI hesitancy, future research is needed to further define and operationalise this concept in order to develop a precise understanding of motivations for patient engagement with AI systems. As there is substantial investment in the development of AI in healthcare, largely driven by the need for cost-effectiveness, it is essential to produce theory that can contribute to its design. In conclusion, as the application of AI chatbot services in healthcare becomes more apparent, service users’ motivation, uptake and engagement need to be evaluated to maximise the benefits of these technologies.
At present, we identified that many people are receptive to health chatbots, although a substantial number may feel hesitant to use AI-led services. Intervention designers need to apply user-centred and theory-based approaches in order to address user concerns and develop effective and ethical services capable of reducing gaps in health and well-being. Future studies are required to explore how health chatbots could be used in preventative medicine and healthcare utilisation, notably by allowing patients to engage with their own health.