| Literature DB >> 36204544 |
Pouria Rouzrokh, Bardia Khosravi, Shahriar Faghani, Mana Moassefi, Diana V Vera Garcia, Yashbir Singh, Kuan Zhang, Gian Marco Conte, Bradley J Erickson.
Abstract
Minimizing bias is critical to the adoption and implementation of machine learning (ML) in clinical practice. Systematic mathematical biases produce consistent and reproducible differences between the observed and expected performance of ML systems, resulting in suboptimal performance. Such biases can be traced back to various phases of ML development: data handling, model development, and performance evaluation. This report presents 12 suboptimal practices during the data handling of an ML study, explains how those practices can lead to biases, and describes what may be done to mitigate them. The authors employ an arbitrary and simplified framework that splits ML data handling into four steps: data collection, data investigation, data splitting, and feature engineering. Examples from the available research literature are provided. A Google Colaboratory Jupyter notebook includes code examples to demonstrate the suboptimal practices and steps to prevent them. © RSNA, 2022.
Keywords: Bias; Computer-aided Diagnosis (CAD); Convolutional Neural Network (CNN); Data Handling; Deep Learning; Machine Learning
Year: 2022 PMID: 36204544 PMCID: PMC9533091 DOI: 10.1148/ryai.210290
Source DB: PubMed Journal: Radiol Artif Intell ISSN: 2638-6100
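Of the four data-handling steps named in the abstract, data splitting is a common source of the systematic biases the article describes: computing preprocessing statistics on the full dataset before splitting leaks test-set information into training. The sketch below is a minimal hypothetical NumPy illustration of that pitfall, not code from the article's Colab notebook; the array shapes and the 80/20 split are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))  # hypothetical feature matrix

# Suboptimal practice: standardize using mean/std of ALL samples, then split.
# Statistics from the future test samples leak into the training features.
X_leaky = (X - X.mean(axis=0)) / X.std(axis=0)
train_leaky, test_leaky = X_leaky[:80], X_leaky[80:]

# Mitigation: split first, fit the statistics on the training portion only,
# then apply those same training-derived statistics to the test portion.
train, test = X[:80], X[80:]
mu, sigma = train.mean(axis=0), train.std(axis=0)
train_scaled = (train - mu) / sigma
test_scaled = (test - mu) / sigma
```

With the mitigated version, only the training split has exactly zero mean and unit variance after scaling; the test split is transformed with parameters it never influenced, mirroring how the model will see truly unseen data at deployment.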