Jeffrey Clement, Angela Q Maldonado.
Abstract
Advances in systems immunology, such as new biomarkers, offer the potential for highly personalized immunosuppression regimens that could improve patient outcomes. In the future, integrating all of this information with other patient history data will likely have to rely on artificial intelligence (AI). AI agents can augment transplant decision making by integrating vast amounts of data (e.g., trends across numerous biomarkers) to discover patterns and make patient-specific predictions that are not covered in the literature or that humans could not anticipate. Similar to other clinical decision support systems, AI may help overcome human biases or judgment errors. However, AI is not widely utilized in transplant to date. In this rapid review, we survey the methods employed in recent research on transplant-related AI applications and identify concerns related to implementing these tools. We identify three key challenges holding back AI in transplant: bias/accuracy, clinical decision process/AI explainability, and AI acceptability criteria. We also identify steps that can be taken in the near term to advance meaningful use of AI in transplant: forming a Transplant AI Team at each center, establishing clinical and ethical acceptability criteria, and incorporating AI into the Shared Decision Making Model.
Keywords: artificial intelligence; decision making; ethics; immunosuppression; machine learning; natural language processing; shared decision model; transplant
Year: 2021 PMID: 34177958 PMCID: PMC8226178 DOI: 10.3389/fimmu.2021.694222
Source DB: PubMed Journal: Front Immunol ISSN: 1664-3224 Impact factor: 7.561
Figure 1. PRISMA diagram detailing the selection and screening of records included in the review.
AI Methods, Effectiveness and Accuracy Criteria, and Challenges identified by studies in review.
| Category | Method/Criteria (n = Records Reporting) |
|---|---|
| AI Methods Used (Studies Only) | Random Forest (n = 24) |
| | Neural Networks (n = 18) |
| | Gradient Boosting (n = 11) |
| | Logistic Regression (n = 9) |
| | Decision Trees (n = 7) |
| | Support Vector Machine (n = 7) |
| | kNN (n = 3) |
| | LASSO or Ridge Regression (n = 3) |
| | Natural Language Processing (n = 3) |
| | Adaptive Boosting (AdaBoost) (n = 2) |
| | Naïve Bayes (n = 2) |
| | Other or Unspecified Method (n = 8) |
| AI Effectiveness and Accuracy Criteria Reported (Studies Only) | Area Under ROC Curve (AUC) (n = 21) |
| | Sensitivity (n = 17) |
| | Specificity (n = 12) |
| | Accuracy (n = 13) |
| | Precision (n = 4) |
| | Recall (n = 2) |
| | C-Index (n = 12) |
| | F1 (n = 3) |
| | Brier Score (n = 3) |
| | Positive Predictive Value (n = 6) |
| | Negative Predictive Value (n = 5) |
| | Cost/Benefit Metric (n = 2) |
| | RMSE (n = 2) |
| | Custom Metric (n = 1) |
| | None Reported (n = 9) |
| | Other (n = 11) |
| Challenges and Limitations Highlighted (Studies and Reviews) | *Data* |
| | Generalizable/Representative Data (n = 48) |
| | Collection/Measurement (n = 7) |
| | Missing Data (n = 6) |
| | *Clinical Decision Process* |
| | Interpretability/Explanation (n = 13) |
| | Clinician Training on Use of AI (n = 4) |
| | *Acceptability* |
| | Validation/Approval (n = 16) |
| | Ethical Guidelines (n = 8) |
| | Accuracy/Acceptance Criteria (n = 4) |
| | Commercial Vested Interest (n = 2) |
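Several of the effectiveness criteria tallied above (sensitivity, specificity, accuracy, precision, F1) are simple functions of a binary confusion matrix. The sketch below, with invented counts not drawn from any reviewed study, shows how they relate:

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute common classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # recall / true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)              # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "accuracy": accuracy,
        "precision": precision,
        "f1": f1,
    }

# Hypothetical counts: 80 true positives, 10 false positives,
# 90 true negatives, 20 false negatives.
metrics = binary_metrics(tp=80, fp=10, tn=90, fn=20)
print(metrics)
```

Threshold-free criteria such as AUC, C-index, and Brier score require the model's predicted probabilities rather than a single confusion matrix, which is one reason studies report them separately.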