Elizabeth E Umberfield1,2, Yun Jiang3, Susan H Fenton4, Cooper Stansbury5,6, Kathleen Ford3, Kaycee Crist7, Sharon L R Kardia8, Andrea K Thomer9, Marcelline R Harris3. 1. Health Policy & Management, Indiana University Richard M Fairbanks School of Public Health, Indianapolis, Indiana, United States. 2. Center for Biomedical Informatics, Regenstrief Institute, Inc., Indianapolis, Indiana, United States. 3. Department of Systems, Populations and Leadership, University of Michigan School of Nursing, Ann Arbor, Michigan, United States. 4. School of Biomedical Informatics, University of Texas Health Science Center, Houston, Texas, United States. 5. Department of Computational Medicine and Bioinformatics, University of Michigan Medical School, Ann Arbor, Michigan, United States. 6. The Michigan Institute for Computational Discovery and Engineering, University of Michigan, Ann Arbor, Michigan, United States. 7. Rory Meyers School of Nursing, New York University, New York, New York, United States. 8. Department of Epidemiology, University of Michigan School of Public Health, Ann Arbor, Michigan, United States. 9. University of Michigan School of Information, Ann Arbor, Michigan, United States.
Abstract
BACKGROUND: The lack of machine-interpretable representations of consent permissions precludes development of tools that act upon permissions across information ecosystems, at scale. OBJECTIVES: To report the process, results, and lessons learned while annotating permissions in clinical consent forms. METHODS: We conducted a retrospective analysis of clinical consent forms. We developed an annotation scheme following the MAMA (Model-Annotate-Model-Annotate) cycle and evaluated interannotator agreement (IAA) using observed agreement (Ao), weighted kappa (κw), and Krippendorff's α. RESULTS: The final dataset included 6,399 sentences from 134 clinical consent forms. Complete agreement was achieved for 5,871 sentences, including 211 positively identified and 5,660 negatively identified as permission-sentences across all three annotators (Ao = 0.944, Krippendorff's α = 0.599). These values reflect moderate to substantial IAA. Although permission-sentences share a common vocabulary and structure, disagreements between annotators are largely explained by lexical variability and ambiguity in sentence meaning. CONCLUSION: Our findings point to the complexity of identifying permission-sentences within clinical consent forms. We present our results in light of lessons learned, which may serve as a launching point for developing tools for automated permission extraction. Thieme. All rights reserved.
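The abstract reports IAA as observed agreement (Ao) and Krippendorff's α over binary permission-sentence labels from three annotators. As an illustration only (the paper does not publish its computation code), a minimal sketch of these two metrics for nominal data, using the standard pairwise-disagreement formulation of Krippendorff's α:

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal labels.
    units: list of lists; each inner list holds the labels all
    annotators assigned to one unit (e.g., one sentence)."""
    units = [u for u in units if len(u) >= 2]  # alpha ignores units with < 2 labels
    n = sum(len(u) for u in units)             # total number of labels
    totals = Counter(label for u in units for label in u)
    # Observed disagreement: mismatched label pairs within each unit
    Do = 0.0
    for u in units:
        m = len(u)
        cnt = Counter(u)
        disagreeing_pairs = m * (m - 1) - sum(c * (c - 1) for c in cnt.values())
        Do += disagreeing_pairs / (m - 1)
    Do /= n
    # Expected disagreement: mismatched pairs under the pooled label distribution
    De = (n * (n - 1) - sum(c * (c - 1) for c in totals.values())) / (n * (n - 1))
    return 1.0 - Do / De

def observed_agreement(units):
    """Fraction of units on which all annotators assigned the same label."""
    return sum(len(set(u)) == 1 for u in units) / len(units)

# Toy example: 1 = permission-sentence, 0 = not; three annotators per sentence
labels = [[1, 1, 0], [0, 0, 0], [1, 1, 1]]
print(observed_agreement(labels))         # → 0.666...
print(krippendorff_alpha_nominal(labels)) # → 0.6
```

The gap between a high Ao and a more modest α in the abstract (0.944 vs. 0.599) is expected with heavily imbalanced labels: α discounts the agreement that skewed class frequencies produce by chance.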