Adam Palanica, Anirudh Thommandram, Andrew Lee, Michael Li, Yan Fossat.
Abstract
This study investigated the speech recognition abilities of popular voice assistants when verbally asked about commonly dispensed medications by a variety of participants. Voice recordings of 46 participants (12 of whom had a foreign accent in English) were played back to Amazon's Alexa, Google Assistant, and Apple's Siri for the brand and generic names of the top 50 most dispensed medications in the United States. A repeated measures ANOVA indicated that Google Assistant achieved the highest comprehension accuracy for both brand medication names (M = 91.8%, SD = 4.2) and generic medication names (M = 84.3%, SD = 11.2), followed by Siri (brand names M = 58.5%, SD = 11.2; generic names M = 51.2%, SD = 16.0), with the lowest accuracy from Alexa (brand names M = 54.6%, SD = 10.8; generic names M = 45.5%, SD = 15.4). An interaction between voice assistant and participant accent was also found, demonstrating lower overall comprehension for participants with a foreign accent using Siri (M = 48.8%, SD = 11.8) and Alexa (M = 41.7%, SD = 12.7), compared to participants without a foreign accent (Siri M = 57.0%, SD = 11.7; Alexa M = 53.0%, SD = 10.9). No significant difference between participant accents was found for Google Assistant. These findings show a substantial performance lead for Google Assistant over its voice assistant competitors in comprehending medication names, but there is still room for improvement.
Keywords: Health care; Signs and symptoms
Year: 2019 PMID: 31304401 PMCID: PMC6586879 DOI: 10.1038/s41746-019-0133-x
Source DB: PubMed Journal: NPJ Digit Med ISSN: 2398-6352
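The group summaries reported in the tables below (mean and sample SD of each participant's accuracy across the 50 medication names) can be sketched as follows. The scores here are hypothetical illustrations, not the study's raw data:

```python
# Sketch of how per-assistant accuracy summaries of the kind reported in the
# tables could be computed. The scores below are hypothetical, NOT the study's data.
from statistics import mean, stdev

# Hypothetical accuracy (% of 50 brand names comprehended) per participant,
# keyed by voice assistant.
scores = {
    "Google Assistant": [92.0, 90.0, 94.0, 88.0],
    "Siri":             [60.0, 54.0, 62.0, 58.0],
    "Alexa":            [56.0, 50.0, 58.0, 54.0],
}

def summarize(vals):
    """Return (mean, sample SD), rounded to one decimal as in the tables."""
    return round(mean(vals), 1), round(stdev(vals), 1)

summary = {name: summarize(vals) for name, vals in scores.items()}
```

The same summarization would be repeated within each accent subgroup to produce the Canadian-accent and foreign-accent rows.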
Participant pronunciation for brand and generic medication names
| Pronunciation | Fully correct mean % (SD) | Partially correct mean % (SD) | Incorrect mean % (SD) |
|---|---|---|---|
| Brand medication names | | | |
| Total participants (n = 46) | 89.9 (8.3) | 8.4 (7.0) | 1.7 (2.5) |
| Canadian accent participants (n = 34) | 90.6 (8.0) | 7.9 (6.6) | 1.4 (2.4) |
| Foreign accent participants (n = 12) | 87.8 (9.2) | 9.7 (8.1) | 2.5 (2.7) |
| Generic medication names | | | |
| Total participants (n = 46) | 55.6 (18.3) | 35.0 (14.7) | 9.4 (13.8) |
| Canadian accent participants (n = 34) | 56.2 (19.0) | 33.8 (14.7) | 10.1 (15.2) |
| Foreign accent participants (n = 12) | 53.8 (17.1) | 38.7 (15.0) | 7.5 (8.6) |
Note. Percentages represent the average accuracy rates for participants across all 50 [brand name or generic name] medications
Voice assistant comprehension accuracy for brand medication names using all participant pronunciations
| Voice assistant comprehension | Accurate mean % (SD) | Misinterpreted mean % (SD) | No response mean % (SD) |
|---|---|---|---|
| Alexa | |||
| Total participants ( | 54.6 (10.8) | 38.2 (9.4) | 7.2 (5.0) |
| Canadian accent participants ( | 57.9 (8.5) | 35.6 (7.5) | 6.4 (4.3) |
| Foreign accent participants ( | 45.2 (11.4) | 45.3 (10.9) | 9.5 (6.3) |
| Google Assistant | |||
| Total participants ( | 91.8 (4.2) | 8.2 (4.2) | 0.0 (0.0) |
| Canadian accent participants ( | 92.4 (4.0) | 7.6 (4.0) | 0.0 (0.0) |
| Foreign accent participants ( | 90.0 (4.5) | 10.0 (4.5) | 0.0 (0.0) |
| Siri | |||
| Total participants ( | 58.5 (11.2) | 41.5 (11.2) | 0.0 (0.0) |
| Canadian accent participants ( | 59.9 (11.1) | 40.1 (11.1) | 0.0 (0.0) |
| Foreign accent participants ( | 54.7 (10.8) | 45.3 (10.8) | 0.0 (0.0) |
Note. Percentages represent the average accuracy rates for participants across all 50 medication names. Only the “Accurate” means were used for statistical analyses
Voice assistant comprehension accuracy for generic medication names using all participant pronunciations
| Voice assistant comprehension | Accurate mean % (SD) | Misinterpreted mean % (SD) | No response mean % (SD) |
|---|---|---|---|
| Alexa | |||
| Total participants ( | 45.5 (15.4) | 35.4 (9.3) | 19.1 (10.1) |
| Canadian accent participants ( | 48.1 (14.7) | 34.9 (8.6) | 16.9 (9.7) |
| Foreign accent participants ( | 38.2 (15.4) | 36.7 (11.4) | 25.2 (8.8) |
| Google assistant | |||
| Total participants ( | 84.3 (11.2) | 15.7 (11.2) | 0.0 (0.0) |
| Canadian accent participants ( | 85.5 (10.7) | 14.5 (10.7) | 0.0 (0.0) |
| Foreign accent participants ( | 80.7 (12.3) | 19.3 (12.3) | 0.0 (0.0) |
| Siri | |||
| Total participants ( | 51.2 (16.0) | 48.8 (16.0) | 0.0 (0.0) |
| Canadian accent participants ( | 54.1 (15.5) | 45.9 (15.5) | 0.0 (0.0) |
| Foreign accent participants ( | 43.0 (15.3) | 57.0 (15.3) | 0.0 (0.0) |
Note. Percentages represent the average accuracy rates for participants across all 50 medication names. Only the “Accurate” means were used for statistical analyses
Voice assistant comprehension accuracy for brand medication names using only correct participant pronunciations
| Voice assistant comprehension | Accuracy using fully correct and partially correct pronunciations mean % (SD) | Accuracy using only fully correct pronunciations mean % (SD) |
|---|---|---|
| Alexa | ||
| Total participants ( | 55.4 (10.4) | 59.4 (9.0) |
| Canadian accent participants ( | 58.6 (8.0) | 62.5 (6.0) |
| Foreign accent participants ( | 46.3 (11.3) | 50.6 (10.6) |
| Google Assistant | ||
| Total participants ( | 92.6 (3.3) | 94.2 (2.3) |
| Canadian accent participants ( | 93.2 (3.0) | 94.6 (2.1) |
| Foreign accent participants ( | 91.1 (3.8) | 93.0 (2.8) |
| Siri | ||
| Total participants ( | 59.1 (11.0) | 62.4 (10.7) |
| Canadian accent participants ( | 60.4 (11.0) | 63.4 (10.7) |
| Foreign accent participants ( | 55.4 (10.6) | 59.4 (10.7) |
Voice assistant comprehension accuracy for generic medication names using only correct participant pronunciations
| Voice assistant comprehension | Accuracy using fully correct and partially correct pronunciations mean % (SD) | Accuracy using only fully correct pronunciations mean % (SD) |
|---|---|---|
| Alexa | ||
| Total participants ( | 48.8 (14.0) | 66.7 (12.7) |
| Canadian accent participants ( | 52.0 (12.3) | 70.6 (9.7) |
| Foreign accent participants ( | 39.9 (15.0) | 55.8 (14.2) |
| Google Assistant | ||
| Total participants ( | 88.4 (8.3) | 95.5 (4.9) |
| Canadian accent participants ( | 90.2 (6.9) | 96.3 (3.9) |
| Foreign accent participants ( | 83.2 (10.1) | 93.2 (6.7) |
| Siri | ||
| Total participants ( | 55.4 (14.2) | 73.7 (11.6) |
| Canadian accent participants ( | 58.8 (12.1) | 77.5 (8.7) |
| Foreign accent participants ( | 45.9 (15.9) | 62.8 (12.3) |
Demographic characteristics of participants (N = 46)
| Characteristics | Participants, n (%) |
|---|---|
| Age (years), mean (SD) | 34.3 (8.0) |
| Gender | |
| Male | 16 (34.8%) |
| Female | 30 (65.2%) |
| Race | |
| Caucasian | 32 (69.6%) |
| Asian | 10 (21.7%) |
| African American | 2 (4.3%) |
| Hispanic | 2 (4.3%) |
| Usage Frequency of Voice Assistants | |
| Every day or nearly every day | 14 (30.4%) |
| Once or twice a week | 5 (10.9%) |
| Once or twice a month | 6 (13.0%) |
| A few times a year | 8 (17.4%) |
| Never | 13 (28.3%) |
| Health literacy (REALM) | |
| ≤Grade 3 | 0 (0%) |
| Grade 4–6 | 0 (0%) |
| Grade 7–8 | 0 (0%) |
| ≥Grade 9 (“adequate”) | 46 (100%) |
Note. REALM = Rapid Estimate of Adult Literacy in Medicine