Kevin De Angeli1,2, Shang Gao3, Mohammed Alawad1, Hong-Jun Yoon1, Noah Schaefferkoetter1, Xiao-Cheng Wu4, Eric B Durbin5, Jennifer Doherty6, Antoinette Stroup7, Linda Coyle8, Lynne Penberthy9, Georgia Tourassi1. 1. Oak Ridge National Lab, Oak Ridge, TN, USA. 2. The Bredesen Center, The University of Tennessee, Knoxville, TN, USA. 3. Oak Ridge National Lab, Oak Ridge, TN, USA. gaos@ornl.gov. 4. Louisiana Tumor Registry, Louisiana State University Health Sciences Center, School of Public Health, New Orleans, LA, USA. 5. College of Medicine, University of Kentucky, Lexington, KY, USA. 6. Utah Cancer Registry, University of Utah School of Medicine, Salt Lake City, UT, USA. 7. New Jersey State Cancer Registry, New Jersey Department of Health, Trenton, NJ, USA. 8. Information Management Services Inc., Calverton, MD, USA. 9. Surveillance Research Program, Division of Cancer Control and Population Sciences, National Cancer Institute, Bethesda, MD, USA.
Abstract
BACKGROUND: Automated text classification has many important applications in the clinical setting; however, obtaining labelled data for training machine learning and deep learning models is often difficult and expensive. Active learning techniques may mitigate this challenge by reducing the amount of labelled data required to effectively train a model. In this study, we analyze the effectiveness of 11 active learning algorithms on classifying subsite and histology from cancer pathology reports using a Convolutional Neural Network as the text classification model.

RESULTS: We compare the performance of each active learning strategy using two differently sized datasets and two different classification tasks. Our results show that on all tasks and dataset sizes, all active learning strategies except diversity-sampling strategies outperformed random sampling, i.e., no active learning. On our large dataset (15K initial labelled samples, adding 15K additional labelled samples each iteration of active learning), there was no clear winner between the different active learning strategies. On our small dataset (1K initial labelled samples, adding 1K additional labelled samples each iteration of active learning), marginal and ratio uncertainty sampling performed better than all other active learning techniques. We found that compared to random sampling, active learning strongly helps performance on rare classes by focusing on underrepresented classes.

CONCLUSIONS: Active learning can save annotation cost by helping human annotators efficiently and intelligently select which samples to label. Our results show that a dataset constructed using effective active learning techniques requires less than half the amount of labelled data to achieve the same performance as a dataset constructed using random sampling.
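The abstract's best-performing strategies on the small dataset, marginal and ratio uncertainty sampling, score each unlabelled sample by how close the model's top two class probabilities are, then send the most ambiguous samples to human annotators. The snippet below is not the authors' implementation; it is a minimal numpy sketch of both scoring rules, with hypothetical function names and a toy probability matrix standing in for CNN softmax outputs.

```python
import numpy as np

def margin_uncertainty(probs):
    """Margin sampling score: (top prob) - (second prob).
    A smaller margin means the model is less sure between its top two classes."""
    s = np.sort(probs, axis=1)
    return s[:, -1] - s[:, -2]

def ratio_uncertainty(probs):
    """Ratio sampling score: (second prob) / (top prob).
    A ratio closer to 1 means the top two classes are nearly tied."""
    s = np.sort(probs, axis=1)
    return s[:, -2] / s[:, -1]

def select_batch(probs, k, strategy="margin"):
    """Pick the k most uncertain samples under the given strategy."""
    if strategy == "margin":
        return np.argsort(margin_uncertainty(probs))[:k]        # smallest margins
    if strategy == "ratio":
        return np.argsort(ratio_uncertainty(probs))[::-1][:k]   # largest ratios
    raise ValueError(f"unknown strategy: {strategy}")

# Toy example: softmax outputs for 4 unlabelled reports over 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],   # very confident
    [0.40, 0.35, 0.25],   # uncertain
    [0.70, 0.20, 0.10],
    [0.50, 0.46, 0.04],   # nearly tied top two
])
picked = select_batch(probs, k=2, strategy="margin")  # indices of the 2 most ambiguous
```

In an active learning loop, the selected indices would be labelled by annotators, added to the training set, and the model retrained before the next selection round.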
Keywords:
Active learning; Cancer pathology reports; Convolutional neural networks; Deep learning; Text classification