| Literature DB >> 33960708 |
Isaac S Chua1,2,3, Michal Gaziel-Yablowitz1,3, Zfania T Korach1,3, Kenneth L Kehl3,4,5, Nathan A Levitan6, Yull E Arriaga6, Gretchen P Jackson6,7, David W Bates1,3, Michael Hassett3,4,5.
Abstract
In recent years, the field of artificial intelligence (AI) in oncology has grown exponentially. AI solutions have been developed to tackle a variety of cancer-related challenges. Medical institutions, hospital systems, and technology companies are developing AI tools aimed at supporting clinical decision making, increasing access to cancer care, and improving clinical efficiency while delivering safe, high-value oncology care. AI in oncology has demonstrated accurate technical performance in image analysis, predictive analytics, and precision oncology delivery. Yet, adoption of AI tools is not widespread, and the impact of AI on patient outcomes remains uncertain. Major barriers for AI implementation in oncology include biased and heterogeneous data, data management and collection burdens, a lack of standardized research reporting, insufficient clinical validation, workflow and user-design challenges, outdated regulatory and legal frameworks, and dynamic knowledge and data. Concrete actions that major stakeholders can take to overcome barriers to AI implementation in oncology include training and educating the oncology workforce in AI; standardizing data, model validation methods, and legal and safety regulations; funding and conducting future research; and developing, studying, and deploying AI tools through multidisciplinary collaboration.
Keywords: artificial intelligence; deep learning; machine learning; oncology
Year: 2021 PMID: 33960708 PMCID: PMC8209596 DOI: 10.1002/cam4.3935
Source DB: PubMed Journal: Cancer Med ISSN: 2045-7634 Impact factor: 4.452
Artificial intelligence and data science terminologies
| Terms | Definition |
|---|---|
| Machine learning | Algorithms and models that allow machines to learn from data without explicit instructions. |
| Supervised learning | Machine learning that learns a mapping from labeled input–output pairs. |
| Unsupervised learning | Machine learning that discovers patterns or structure in unlabeled data, without predefined output labels. |
| Deep learning | A subset of machine learning that generally uses neural networks. |
| Natural language processing | Machine learning specifically to understand, interpret, or manipulate human language. |
| Computer vision | Machine learning that trains computers to interpret and understand the visual world. |
| Knowledge representation | A surrogate that enables an entity to determine consequences by thinking rather than acting; it comprises a set of ontological commitments, a fragmentary theory of intelligent reasoning, and a medium for pragmatically efficient computation and human expression. |
| Ontology | Controlled terminology invoking formal semantic relationships between and among concepts, manifested as a type of description logic, which is a subset of first‐order predicate logic, chosen to accommodate computational tractability. |
| Fast Healthcare Interoperability Resources (FHIR) | Standard for exchanging healthcare information electronically created by Health Level Seven International (HL7), a not‐for‐profit, American National Standards Institute‐accredited standards developing organization. |
| Minimal Common Oncology Data Elements (mCODE) | A collaboration between the American Society of Clinical Oncology, Inc., CancerLinQ LLC, and MITRE to identify minimal cancer data elements that are essential for analyzing treatments across patients via their electronic health records. |
| Informatics | The science of how to use data, information, and knowledge to improve human health and the delivery of healthcare services. |
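To make the distinction between the first few terms above concrete, here is a minimal toy sketch (all data hypothetical, not from the article): supervised learning fits a mapping from labeled input–output pairs, while unsupervised learning groups the same inputs by structure alone, with no labels.

```python
# Toy feature: a tumor-marker level; label: 1 = malignant, 0 = benign.
labeled = [(0.2, 0), (0.3, 0), (0.4, 0), (1.1, 1), (1.3, 1), (1.4, 1)]

def knn_predict(x, pairs):
    """Supervised: 1-nearest-neighbour prediction from input-output pairs."""
    nearest = min(pairs, key=lambda p: abs(p[0] - x))
    return nearest[1]

def two_means(xs, iters=10):
    """Unsupervised: 2-means clustering finds two groups without any labels."""
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        groups = [[], []]
        for x in xs:
            # Assign each point to the nearer of the two current centres.
            groups[abs(x - lo) > abs(x - hi)].append(x)
        lo = sum(groups[0]) / len(groups[0])
        hi = sum(groups[1]) / len(groups[1])
    return lo, hi

print(knn_predict(1.2, labeled))           # predicts label 1
print(two_means([x for x, _ in labeled]))  # two cluster centres, no labels used
```

The supervised function needs the labels; the clustering function never sees them, which is exactly the difference the table describes.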
FIGURE 1. Data types and sources processed by artificial intelligence. The right column exemplifies commonly used data types that can be processed by artificial intelligence; the left column categorizes the data types into three main areas: patient, medical, and contextual.
FIGURE 2. Stage of development and deployment among applications of artificial intelligence in oncology. The location of each topic represents the farthest point that topic has reached in its development, not necessarily the point that all solutions in that topic area have reached. Each topic's shape represents its application within the levels of cancer prevention (circle = primary, triangle = secondary [or tertiary], diamond = tertiary).
FIGURE 3. Statistical biases associated with artificial intelligence (AI) algorithm predictions. AI‐based tools look for patterns of association in the data made available to them; they do not establish causation. The sample of data used to develop an AI algorithm may not represent data from other patients treated in other health systems over time. For example, if most of the data used to develop an AI algorithm came from patients <65 years old treated before 2018, then the algorithm may not provide reliable estimates for patients >65 years old treated after 2020.
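The dataset-shift problem in Figure 3 can be sketched with synthetic numbers (a toy illustration; the cohorts, risk model, and coefficients are hypothetical, not from the article): a model fit to younger, pre-2018 patients underestimates risk once both the age mix and the period effect change.

```python
import random

random.seed(0)

def simulate(age_mean, period_effect, n=1000):
    """Hypothetical cohort: outcome risk depends on age plus a period effect."""
    cohort = []
    for _ in range(n):
        age = random.gauss(age_mean, 5)
        risk = 0.01 * age + period_effect  # true underlying risk
        cohort.append((age, risk))
    return cohort

# Model "trained" on patients <65 treated before 2018: here, just their mean risk.
train = simulate(age_mean=55, period_effect=0.0)
model_estimate = sum(r for _, r in train) / len(train)

# Deployed on patients >65 treated after 2020: older, plus a new period effect.
deploy = simulate(age_mean=75, period_effect=0.1)
true_mean = sum(r for _, r in deploy) / len(deploy)

# The fixed estimate carried over from the development sample is now badly low.
print(f"model estimate: {model_estimate:.2f}, true risk in new cohort: {true_mean:.2f}")
```

The gap between the two printed numbers is exactly the unreliability the figure warns about: the association learned in one sample need not transfer to a population that differs in age and era.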
FIGURE 4. Social biases associated with artificial intelligence algorithm predictions. This figure depicts the gap between what we need to show the model (i.e., both factual and counterfactual scenarios) and what happens when machine learning (ML) is trained on existing data. In this example, an ML model is used to identify oncology patients who require opioids for pain management. When existing data are used (i.e., secondary use of data collected as part of routine work), the data reflect not the association between the patient's condition and opioid prescribing alone, but that association conditioned on the staff's determination of whether the patient's complaint of pain is legitimate. If the staff's decisions are not uniform (e.g., biased by demographics), then some of the patients who were not prescribed opioids will carry the wrong label (“opioids not needed for pain control”) when they should have had the label (“opioids needed for pain control”). The model will therefore be shown the wrong labels and will learn an erroneous pattern.
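The label-bias mechanism in Figure 4 can also be sketched with toy data (the groups, severity scale, and mislabelled cases are hypothetical, not from the article): two groups have identical true opioid need, but biased recording for one group puts wrong labels in the training data, and a model fit to those labels reproduces the bias.

```python
# (group, severity) -> true label: opioids needed when severity >= 5.
patients = [(g, s) for g in ("A", "B") for s in range(10)]
true_label = {(g, s): int(s >= 5) for g, s in patients}

# Biased recording: for group B, some true "needed" cases are mislabelled as
# "not needed" because the complaint of pain was not believed.
recorded = dict(true_label)
for s in (5, 6):                      # a subset of group B's positive cases
    recorded[("B", s)] = 0

# A model that learns the recorded labels reproduces the bias faithfully:
def predict(group, severity):
    return recorded[(group, severity)]

needed_A = sum(predict("A", s) for s in range(10))
needed_B = sum(predict("B", s) for s in range(10))
print(needed_A, needed_B)             # group B appears to "need" fewer opioids
```

Both groups truly have five patients needing opioids, but the model trained on the biased labels flags fewer in group B; no amount of additional data of the same kind fixes this, because the error is in the labels, not the sample size.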
Next steps toward artificial intelligence (AI) implementation in oncology
**Training and educating the oncology workforce**
- Develop educational modules for practicing oncologists that address the interpretation and application of AI‐based tools
- Incorporate formal training on the basics of medical informatics and implementation science into fellowship curricula
- Expand and stimulate career tracks focused on informatics applied to oncology
- Train health system administrators and information system leaders regarding the demands and impacts of AI‐based solutions

**Standardizing data, research and validation methods, and regulatory standards**
- Develop and expand the use of standard oncology terminologies and ontologies
- Develop standards that foster systematic evaluation of the performance of AI‐based tools
- Establish consensus regarding regulatory and legal frameworks for AI‐based tools

**Funding and conducting future research**
- Conduct research that fosters the development of optimal methods for balancing competing aspects of fairness
- Conduct randomized controlled trials that test the impact of AI‐based tools on patient survival, quality of life, and cost‐effectiveness
- Conduct implementation science research that studies optimal methods for deploying AI‐based tools in routine care settings
- Conduct behavioral research on how data visualizations of AI‐based recommendations affect clinical decision‐making in oncology

**Developing, studying, and deploying AI tools through multidisciplinary collaboration**
- Increase engagement with clinical information system vendors and electronic health record (EHR) companies
- Support partnerships between informatics companies, academia, professional societies, health systems, and community‐based practices to enable widespread deployment