Tom Nadarzynski, Oliver Miles, Aimee Cowie, Damien Ridge.
Abstract
BACKGROUND: Artificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots is required to predict uptake; however, few studies to date have explored their acceptability. This research aimed to explore participants' willingness to engage with AI-led health chatbots.
Keywords: AI; Acceptability; Artificial Intelligence; bot; chatbot
Year: 2019 PMID: 31467682 PMCID: PMC6704417 DOI: 10.1177/2055207619871808
Source DB: PubMed Journal: Digit Health ISSN: 2055-2076
Quotes from thematic analysis on the motivations for AI-led chatbots in healthcare.
| Theme (sub-theme) | Illustrative quotes |
|---|---|
| Understanding of chatbots | |
| (Awareness) | “I think that it’s online and you ask it questions and it can reply to you with information. It is not a real person. It is like stored information.” “There are those diagnosis bots and you can use those to get an idea of what is wrong with you or what your next step could be if it can just tell you a rough severity of what you may have. Then I suppose it is useful in those cases but, in terms of just chatting to something and telling it your problems or whatever and trying to get a diagnosis. If you are using it in that kind of way then I think that is a very limited thing to try and do.” “I’ve used one for banking before, and you just type in your query. And I think I’ve used some which you type to and it picks out keywords, and one which you can select the most appropriate response.” |
| (Experience) | “Where it talks back to you, it can be more specific, compared to Google where you have to search and look through.” |
| AI hesitancy | |
| (Perceived accuracy) | “It’s not that I think they [chatbots] would intentionally give me false information. I just don’t know how accurate they are. I don’t know whether they can be as accurate as doctors can be.” “I would find it hard to trust a health chatbot because it is just looking online at things. You would want a professional opinion.” |
| (Premature technology) | “I think it has a lot of potential in the future, but now certain places are releasing it before they have perfected their own system, then it could put people off. Because you can end up chatting for like half an hour and go back to being at the same question you were at in the first place. With that people get angry.” “I don’t know whether the technology is as adequate as a doctor.” |
| (Non-human interaction) | “I think a lot of people would be put off just being with a chatbot. It is like a segregation thing, I don’t think it will replace human interaction.” “If you are looking at a chatbot thinking that it is a replacement for a person then you are looking at it in the wrong way. If you were looking for a deep and meaningful conversation you are not going to find one.” |
| (Cyber-security) | “Some people might find issues with confidentiality because if you were with the doctor it is just you and them, but with chatbots, you don’t know who is behind it all.” “Some things are confidential and you wouldn’t just type it on the internet. You would want the confidentiality of the GP practice.” |
| Motivations for health chatbots | |
| (Anonymity) | “I think for mental health it would be pretty useful because I think that it’s a lot harder to talk to a real person about that. Maybe sexual health too. I’m pretty open generally about both of those things but I can see where they might be seen as a better alternative due to privacy and not having to face a person and describe all of these problems.” “I think mental health would be a good thing to use a chatbot in, because some people with mental health issues they do not want to open up to an actual person, so it would be easier doing it over the internet in the comfort of their own home.” |
| (Convenience) | “You can use a chatbot instead of googling it and reading advice on the NHS page. If that information is integrated into the chatbot then yeah it would certainly save time.” “It would be good for healthcare because obviously, it is so hard to get a doctor’s appointment, so for people who have general queries like it would be good for them to quickly get advice on whether or not it is an urgent issue.” |
| (Sign-posting) | “Chatbot can tell you if you need further advice or you will worry. And it will reassure you so you don’t go to the doctors.” “People who have no awareness of or are perhaps too worried, it could be a good way to get in touch. Just like if you use 111, first of all, you go to a chatbot and they would figure out what is wrong and if there was a severity to it they would ring 999 and get an ambulance, otherwise, they can direct you to a GP and a clinic.” |
Sample characteristics and predictors of health chatbot acceptability.
| Variable | Total of the sample (%) | (%) of those ‘likely’ to use health chatbot within 12 months/Odds ratio [95% CI] |
|---|---|---|
| Age, mean (SD) | 30 (12) | |
| Below 25 years | 113 (53) | (89)/2.07 [0.87–4.93] |
| 25 years and above | 102 (47) | (80)/Ref |
| Gender | ||
| Male | 84 (39) | (87)/1.36 [0.55–3.38] |
| Female | 131 (61) | (84)/Ref |
| Ethnicity | ||
| White | 138 (64) | (83)/0.49 [0.17–1.39] |
| Non-white | 77 (36) | (90)/Ref |
| Education | ||
| Below university degree | 116 (54) | (84)/0.79 [0.33–1.87] |
| University degree | 99 (46) | (86)/Ref |
| Perceived IT skills | ||
| Poor or moderate | 51 (24) | (72)/0.32 [0.13–0.78]* |
| Good or very good | 164 (76) | (89)/Ref |
| Health information seeking | ||
| Several times per year | 103 (48) | (83)/0.72 [0.31–1.70] |
| Every month or more often | 112 (52) | (87)/Ref |
| Health chatbot awareness | ||
| Yes | 12 (6) | Assumptions not met(a) |
| No | 203 (94) | |
| Past health chatbot use | ||
| Yes | 7 (3) | Assumptions not met(a) |
| No | 202 (97) | |
| Likelihood of using a health chatbot within 12 months(#) | | |
| Likely | 143 (67) | |
| Neutral | 45 (21) | |
| Unlikely | 25 (12) | |
| Intended purposes of health chatbot use | | |
| To seek general health information | ||
| Likely | 168 (78) | (98)/5.10 [3.08–8.43]* |
| Unlikely | 24 (11) | (8)/Ref |
| To seek information about medication | ||
| Likely | 128 (60) | (99)/3.21 [1.92–5.37]* |
| Unlikely | 52 (24) | (49)/Ref |
| To seek information about diseases | ||
| Likely | 148 (69) | (97)/2.97 [2.10-4.10]* |
| Unlikely | 38 (18) | (33)/Ref |
| To seek information about symptoms | ||
| Likely | 144 (67) | (98)/3.44 [2.25-5.24]* |
| Unlikely | 28 (13) | (28)/Ref |
| To seek results of medical tests | ||
| Likely | 83 (39) | (92)/1.42 [1.08-1.85]* |
| Unlikely | 81 (38) | (75)/Ref |
| To book a medical appointment | ||
| Likely | 167 (78) | (92)/1.88 [1.46-2.42]* |
| Unlikely | 34 (16) | (50)/Ref |
| To look for local medical services (e.g. pharmacy) | ||
| Likely | 172 (80) | (92)/2.25 [1.64-3.08]* |
| Unlikely | 17 (8) | (33)/Ref |
| To seek specialist advice (e.g. sexual health) | ||
| Likely | 104 (48) | (98)/2.69 [1.61-4.49]* |
| Unlikely | 65 (30) | (61)/Ref |
| Attitudes towards health chatbots | | |
| Worried about health | ||
| Agree | 107 (50) | (81)/0.94 [0.74-1.19] |
| Disagree | 70 (33) | (85)/Ref |
| Worried about privacy using a health chatbot | ||
| Agree | 100 (47) | (93)/1.42 [1.08-1.86]* |
| Disagree | 71 (33) | (77)/Ref |
| Worried about the security of information | ||
| Agree | 100 (47) | (93)/1.36 [1.02-1.81]* |
| Disagree | 71 (33) | (80)/Ref |
| Confident in finding accurate health information online | ||
| Agree | 104 (48) | (87)/1.31 [1.03-1.67]* |
| Disagree | 50 (23) | (70)/Ref |
| Confident in identifying own health symptoms | ||
| Agree | 146 (68) | (88)/1.20 [0.90-1.60] |
| Disagree | 32 (15) | (78)/Ref |
| Comfortable with outlining symptoms to a chatbot | ||
| Agree | 131 (61) | (91)/1.46 [1.07-1.99]* |
| Disagree | 26 (12) | (68)/Ref |
| Prefer to talk face to face with a doctor about health | ||
| Agree | 158 (73) | (83)/Assumptions not met(a) |
| Disagree | 19 (9) | (100) |
| I don’t like talking to computers | ||
| Agree | 64 (30) | (77)/0.77 [0.60-0.99]* |
| Disagree | 103 (48) | (90)/Ref |
| It would be strange to talk to a chatbot about health | ||
| Agree | 88 (41) | (76)/0.72 [0.54-0.97]* |
| Disagree | 68 (32) | (93)/Ref |
| Health chatbot could help to make better decisions | ||
| Agree | 65 (30) | (100)/Assumptions not met(a) |
| Disagree | 42 (19) | (62) |
| Would trust advice from a health chatbot | ||
| Agree | 59 (27) | (98)/1.92 [1.13-3.25]* |
| Disagree | 54 (25) | (78)/Ref |
| A health chatbot is a good idea | ||
| Agree | 139 (65) | (93)/2.71 [1.77-4.16]* |
| Disagree | 13 (6) | (20)/Ref |
| Willing to enter symptoms on an online form | ||
| Agree | 136 (63) | (91)/1.34 [0.95-1.86] |
| Disagree | 24 (11) | (76)/Ref |
| Curious how new technologies could improve health | ||
| Agree | 172 (80) | (88)/1.60 [1.15-2.21]* |
| Disagree | 15 (7) | (54)/Ref |
| Reliable and accurate information is important | ||
| Agree | 199 (93) | (85)/Assumptions not met(a) |
| Disagree | 4 (2) | (0) |
| Only seek a doctor if I have a health problem | ||
| Agree | 141 (66) | (84)/0.81 [0.55-1.19] |
| Disagree | 33 (15) | (92)/Ref |
*Significant at p<0.05; SD: standard deviation; CI: confidence interval; #Neutral values removed for the binary regression analysis; Ref: reference category for binary regression; (a)Statistical assumptions required to perform binary logistic regression with a single categorical predictor were not met.
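For a single binary predictor, an unadjusted odds ratio and its Wald 95% confidence interval of the kind reported in the table above can be computed directly from the 2x2 cell counts. A minimal sketch follows; the function name and the example counts are illustrative assumptions, not the study's raw data:

```python
import math


def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table.

    a: predictor present, outcome present   b: predictor present, outcome absent
    c: predictor absent, outcome present    d: predictor absent, outcome absent
    """
    or_ = (a * d) / (b * c)
    # Standard error of ln(OR) for the Wald interval
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi


# Hypothetical counts: 45/15 'likely'/'unlikely' in one group vs 30/30 in
# the reference group gives OR = (45*30)/(15*30) = 3.0
print(odds_ratio_ci(45, 15, 30, 30))
```

An odds ratio whose confidence interval excludes 1 corresponds to the rows starred as significant at p<0.05; note the paper's own analysis is a binary logistic regression, which reduces to this calculation only in the single-predictor case.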