| Literature DB >> 31342909 |
Yizhao Ni, Monica Bermudez, Stephanie Kennebeck, Stacey Liddy-Hicks, Judith Dexheimer.
Abstract
BACKGROUND: One critical hurdle for clinical trial recruitment is the lack of an efficient method for identifying subjects who meet the eligibility criteria. Given the large volume of data documented in electronic health records (EHRs), it is labor-intensive for the staff to screen relevant information, particularly within the time frame needed. To facilitate subject identification, we developed a natural language processing (NLP) and machine learning-based system, Automated Clinical Trial Eligibility Screener (ACTES), which analyzes structured data and unstructured narratives automatically to determine patients' suitability for clinical trial enrollment. In this study, we integrated the ACTES into clinical practice to support real-time patient screening.
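The abstract gives no implementation details, but the combination it describes (structured-data checks plus NLP over free-text EHR narratives feeding an eligibility decision) is a common pattern. The following is a minimal illustrative sketch, not the ACTES implementation; the toy notes, the `screen` helper, and the age criterion are all hypothetical.

```python
# Hypothetical sketch of structured-rule + NLP eligibility screening.
# This is NOT the ACTES system; it only illustrates the general pattern.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: note snippets labeled 1 (eligible) or 0 (ineligible).
notes = [
    "chief complaint: closed head injury after fall, GCS 15",
    "presents with asthma exacerbation, on albuterol",
    "mild traumatic brain injury, no loss of consciousness",
    "abdominal pain, suspected appendicitis",
]
labels = [1, 0, 1, 0]

# Unstructured component: TF-IDF features over the narrative + a classifier.
text_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
text_model.fit(notes, labels)

def screen(patient):
    """Flag a patient as likely eligible from structured criteria plus an NLP score."""
    # Structured component: a hard inclusion criterion from coded EHR fields
    # (the age range here is an invented example, not a real trial criterion).
    if not (5 <= patient["age_years"] <= 18):
        return False
    # NLP component: model probability that the note matches the trial profile.
    p_eligible = text_model.predict_proba([patient["note"]])[0, 1]
    return p_eligible >= 0.5

print(screen({"age_years": 12, "note": "head injury after fall from bike, GCS 15"}))
```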
Keywords: automated patient screening; natural language processing; system integration; system usability evaluation; time and motion studies
Year: 2019 PMID: 31342909 PMCID: PMC6685132 DOI: 10.2196/14185
Source DB: PubMed Journal: JMIR Med Inform
Figure 1. Overview of the patient recruitment workflow with automated patient screening. API: Application Programming Interface; ACTES: Automated Clinical Trial Eligibility Screener; CRC: Clinical Research Coordinator; EHR: Electronic Health Record.
Table 1. Percentage of time spent on clinical research coordinator activities with and without automated patient screening.
| Category and clinical research coordinator activities | With ACTES^a, % | Without ACTES, % |
| --- | --- | --- |
| Electronic screening (browsing electronic health record or ACTES) | 25.6^b | 38.5 |
| In-person screening (with physician, nurse, and patient) | 1.5 | 2.1 |
| Logging patient eligibility in study databases | 5.2 | 6.6 |
| Nonelectronic screening (reviewing log sheet) | 0.2^b | 0.4 |
| Introducing study | 0.5 | 0.4 |
| Consent procedures | 0.9 | 0.4 |
| Unclassified patient contact | 0.0 | 0.3 |
| Clinical research coordinator performing study procedures and collecting data (eg, interviews, sample collection) | 5.9 | 5.3 |
| Waiting for clinical procedures to be completed | 0.6 | 0.5 |
| Waiting for sample collection to be completed | 1.5^b | 0.5 |
| Other unspecified waiting | 1.2 | 0.8 |
| Study-related administrative tasks (eg, reviewing study packet, preparing supplies) | 15.8^b | 10.9 |
| Work-related conversations | 10.5^b | 6.6 |
| Miscellaneous work-related administrative tasks | 4.7 | 4.6 |
| Emails/Web browsing | 11.1 | 8.8 |
| Walking | 7.1 | 6.3 |
| Personal time (nonwork-related activities) | 7.6 | 6.9 |
^a ACTES: Automated Clinical Trial Eligibility Screener.
^b The difference between the with-ACTES and without-ACTES percentages for this activity is statistically significant at the .05 level.
Figure 2. Percentage of time spent on electronic screening over the study days. ACTES: Automated Clinical Trial Eligibility Screener.
Table 2. Average number of subjects screened, approached, and enrolled per week with and without automated patient screening.
| Trial abbreviation | Screened (with) | Approached (with) | Enrolled (with) | Screened (without) | Approached (without) | Enrolled (without) |
| --- | --- | --- | --- | --- | --- | --- |
| Biosignature | 29.4^a | 2.0 | 1.2 | 25.3 | 2.0 | 1.4 |
| CARPE-DIEM | 62.6^a | 6.9 | 4.2 | 54.5 | 8.2 | 5.2 |
| ED-STARS | 17.5 | 8.8 | 6.7 | 17.2 | 7.8 | 5.8 |
| HealthyFamily | 52.4^a | 39.0^a | 4.3 | 44.1 | 33.8 | 4.1 |
| M-TBI | 10.1 | 0.9 | 0.8 | 12.3^b | 1.3 | 0.5 |
| Torsion | 4.0^a | 1.1 | 2.4^a | 2.2 | 0.9 | 1.5 |
| Average | 29.6 | 10.1 | 3.0 | 25.8 | 9.1 | 2.7 |
^a The enrollment statistic with automated screening is significantly higher than that without automation (P<.05).
^b The enrollment statistic with automated screening is significantly lower than that without automation (P<.05).
Table 3. Average System Usability Scale scores given by the clinical research coordinator participants.
| Statement (five-point scale, 1-5)^a | Fall^b, mean (SD) | Winter^c, mean (SD) | Spring^d, mean (SD) | Summer^e, mean (SD) |
| --- | --- | --- | --- | --- |
| 1. I would like to use this system frequently. | 2.4 (1.1) | 3.2 (1.1) | 3.7 (0.9) | 3.2 (0.6) |
| 2. I found the system unnecessarily complex. | 2.1 (1.0) | 1.8 (1.4) | 1.5 (0.5) | 1.4 (0.7) |
| 3. I thought the system was easy to use. | 4.6 (0.5) | 4.5 (0.5) | 4.7 (0.5) | 4.7 (0.5) |
| 4. I would need the support of a technician to use this system. | 1.7 (1.1) | 1.1 (0.4) | 1.2 (0.4) | 1.1 (0.4) |
| 5. The various functions in the system were well integrated. | 3.3 (0.6) | 3.3 (1.1) | 3.8 (0.4) | 3.7 (0.7) |
| 6. I thought there was too much inconsistency in this system. | 3.1 (1.1) | 3.6 (1.0) | 3.3 (0.7) | 2.1 (1.0) |
| 7. Most people would learn to use this system very quickly. | 4.5 (0.8) | 4.5 (0.5) | 4.5 (0.5) | 4.4 (0.8) |
| 8. I found the system very cumbersome to use. | 3.3 (1.3) | 2.3 (0.9) | 2.0 (1.0) | 1.9 (0.9) |
| 9. I felt very confident using the system. | 4.0 (1.3) | 4.2 (0.5) | 4.7 (0.5) | 4.0 (1.2) |
| 10. I needed to learn a lot of things before I could use this system. | 1.3 (0.4) | 1.6 (0.5) | 2.2 (1.3) | 1.4 (0.4) |
^a 1 indicates strongly disagree and 5 strongly agree.
^b Overall System Usability Scale (SUS) score: 67.9.
^c Overall SUS score: 72.5.
^d Overall SUS score: 78.0.
^e Overall SUS score: 80.0.
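For readers unfamiliar with how the overall SUS values in footnotes b-e relate to the ten item scores above, the standard SUS scoring rule is: odd-numbered (positively worded) items contribute (score − 1), even-numbered (negatively worded) items contribute (5 − score), and the sum of the ten contributions is multiplied by 2.5 to give a 0-100 score. Applying that rule to the published item means approximately reproduces the reported overall scores; small discrepancies arise because the item means are rounded. A quick check:

```python
# Standard SUS scoring applied to the item means from Table 3.
ITEM_MEANS = {
    "Fall":   [2.4, 2.1, 4.6, 1.7, 3.3, 3.1, 4.5, 3.3, 4.0, 1.3],
    "Winter": [3.2, 1.8, 4.5, 1.1, 3.3, 3.6, 4.5, 2.3, 4.2, 1.6],
    "Spring": [3.7, 1.5, 4.7, 1.2, 3.8, 3.3, 4.5, 2.0, 4.7, 2.2],
    "Summer": [3.2, 1.4, 4.7, 1.1, 3.7, 2.1, 4.4, 1.9, 4.0, 1.4],
}

def sus(scores):
    """Overall SUS score: odd items count (s - 1), even items count (5 - s), sum x 2.5."""
    total = 0.0
    for i, s in enumerate(scores, start=1):
        total += (s - 1) if i % 2 == 1 else (5 - s)
    return total * 2.5

for season, means in ITEM_MEANS.items():
    print(f"{season}: {sus(means):.1f}")
# Prints approximately 68.2, 73.2, 78.0, 80.2 -- close to the reported
# 67.9, 72.5, 78.0, 80.0; the gaps come from rounding in the item means.
```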