
Artificial Learning and Machine Learning Decision Guidance Applications in Total Hip and Knee Arthroplasty: A Systematic Review.

Cesar D Lopez, Anastasia Gazgalis, Venkat Boddapati, Roshan P Shah, H John Cooper, Jeffrey A Geller.

Abstract

BACKGROUND: Artificial intelligence (AI) and machine learning (ML) modeling in hip and knee arthroplasty (total joint arthroplasty [TJA]) is becoming more commonplace. This systematic review aims to quantify the accuracy of current AI- and ML-based applications for cognitive support and decision-making in TJA.
METHODS: A comprehensive search of publications was conducted through the EMBASE, Medline, and PubMed databases using relevant keywords to maximize the sensitivity of the search. No limits were placed on level of evidence or timing of the study. Findings were reported according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Analysis of variance testing with post-hoc Tukey test was applied to compare the area under the curve (AUC) of the models.
RESULTS: After application of inclusion and exclusion criteria, 49 studies were included in this review. The applications of AI/ML-based models and their average AUCs were as follows: cost prediction, 0.77; length of stay (LOS) and discharges, 0.78; readmissions and reoperations, 0.66; preoperative patient selection/planning, 0.79; adverse events and other postoperative complications, 0.84; postoperative pain, 0.83; postoperative patient-reported outcome measures and functional outcomes, 0.81. Significant variability in model AUC across the different decision support applications was found (P < .001), with the AUC for readmission and reoperation models being significantly lower than that of the other decision support categories.
CONCLUSIONS: AI/ML-based applications in TJA continue to expand and have the potential to optimize patient selection and accurately predict postoperative outcomes, complications, and associated costs. On average, the AI/ML models performed best in predicting postoperative complications, pain, and patient-reported outcomes and were less accurate in predicting hospital readmissions and reoperations.
© 2021 The Authors.


Keywords:  Artificial intelligence; Artificial neural networks; Deep learning; Hip and knee arthroplasty; Machine learning; Orthopedic surgery

Year:  2021        PMID: 34522738      PMCID: PMC8426157          DOI: 10.1016/j.artd.2021.07.012

Source DB:  PubMed          Journal:  Arthroplast Today        ISSN: 2352-3441


Introduction and background

Health care has been able to harness artificial intelligence (AI) to create predictive tools that support clinicians in more complicated decision-making processes. Specifically, a subset of AI known as machine learning (ML) has been used in multiple medical specialties including oncology, neurology, neurosurgery, cardiology, and orthopedic surgery [[1], [2], [3], [4], [5]]. In lower extremity surgery specifically, ML has been applied in risk assessment and diagnosis, cost analysis, and reimbursement tools [6]. In its simplest form, ML produces useful models from algorithmic analysis of acquired data. One particular strength of ML is that some models are trained on large amounts of information, or “big data”. This optimizes the use of these models in decision-making: the more data the underlying model is trained on, the more accurate its predictions become [7]. Several ML model types are used, including decision trees, support vector machines, regression analysis, and Bayesian networks [4,5]. In addition, a subset of ML called “deep learning” has been developed, which includes artificial neural network (ANN) models. The advantage of these models is that they do not require the preprocessing of data by humans; rather, they can analyze raw inputs and identify which features are most important for the analysis [8]. Models can be created in several ways; however, supervised learning is the most common [8,9]. In supervised learning, data sets are labeled so that the model can be built on a “training set” of inputs (variables of interest) with defined outputs (outcomes of interest). Complex patterns and relationships are identified between these inputs and outputs, and the model can then use these associations to predict outcomes of interest from novel inputs [10,11]. Once a model is created, it can be tested on a novel data set or a validation data set.
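As a concrete illustration of the supervised-learning workflow just described, the following minimal sketch trains a deliberately simple nearest-centroid classifier on a labeled training set and then applies it to novel validation inputs. The data (age and BMI predicting a binary outcome) and the model choice are hypothetical and are not drawn from any reviewed study.

```python
# Minimal supervised-learning sketch (hypothetical data): learn from a
# labeled training set, then predict outcomes for unseen validation inputs.
def centroid(rows):
    """Mean of each input variable across the given rows."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fit(X, y):
    """'Training': learn one centroid per outcome class from labeled data."""
    classes = sorted(set(y))
    return {c: centroid([x for x, yi in zip(X, y) if yi == c]) for c in classes}

def predict(model, x):
    """Assign the class whose centroid is nearest (squared Euclidean distance)."""
    return min(model, key=lambda c: sum((a - b) ** 2 for a, b in zip(model[c], x)))

# Training set: (age, BMI) inputs with a defined binary outcome of interest.
X_train = [[55, 24], [60, 26], [58, 25], [75, 33], [80, 35], [78, 34]]
y_train = [0, 0, 0, 1, 1, 1]
model = fit(X_train, y_train)

# Validation set: novel inputs the model has never seen.
X_val = [[57, 25], [79, 34]]
preds = [predict(model, x) for x in X_val]   # [0, 1]
```

Real TJA models replace the nearest-centroid rule with ANNs, decision trees, or regression, but the train-then-validate structure is the same.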
Recently, there has been a substantial increase in the literature describing the use of these models, including the field of hip and knee arthroplasty. As such, it is critical to build a strong understanding of the accuracy and application of these current models to guide their applicability toward further development in the future. The purpose of this review is to assess the accuracy of current applications of AI/ML in hip and knee arthroplasty, namely in (1) administrative/clinical decision support applications (cost, discharge/length of stay (LOS), patient selection and planning, readmission and reoperation risk) and (2) postoperative prediction/management applications (adverse event/ other postoperative complication, cardiovascular complication, postoperative pain, postoperative mortality, patient-reported outcomes [PRO], and sustained opioid use).

Material and methods

Search strategy

A comprehensive search of publications, through May 2020, was conducted using the EMBASE, Medline, and PubMed databases in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The search strategy included the following keywords or MeSH-terms: “machine learning”, “artificial intelligence”, “deep learning”, “neural network”, “artificial neural networks”, “support vector machine”, “Bayesian”, “boosting”, “ensemble learning”, “prediction model”, “decision tree”, “random forest”, “total hip arthroplasty”, “total knee arthroplasty”, “total joint arthroplasty”, “THA”, “TKA”, “TJA”, “hip replacement”, “knee replacement”, and “joint replacement”. Boolean operators (OR, AND) were used to maximize the sensitivity of the search. Screening of reference lists of retrieved articles also yielded additional studies.
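The Boolean structure of such a search can be sketched as follows; the grouping of terms into an ML-method block and a procedure block is an assumption for illustration, not the authors' exact query syntax, and the term lists are abbreviated.

```python
# Hypothetical sketch of assembling a Boolean search string: method terms
# are OR'd together, procedure terms are OR'd together, and the two blocks
# are AND'ed so results must contain at least one term from each block.
method_terms = ['"machine learning"', '"artificial intelligence"', '"neural network"']
procedure_terms = ['"total hip arthroplasty"', '"total knee arthroplasty"', '"joint replacement"']

def boolean_query(*blocks):
    """AND together blocks, each block being an OR of its terms."""
    return " AND ".join("(" + " OR ".join(terms) + ")" for terms in blocks)

query = boolean_query(method_terms, procedure_terms)
# ("machine learning" OR ...) AND ("total hip arthroplasty" OR ...)
```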

Eligibility criteria

Inclusion criteria comprised original clinical studies that evaluated AI/ML-based applications in clinical decision-making in hip and knee arthroplasty. Exclusion criteria comprised studies that did not evaluate AI/ML applications in hip and knee arthroplasty, medical imaging analysis studies without explicit reference or application to hip and knee arthroplasty, studies with nonhuman subjects, non-English-language studies, inaccessible articles, conference abstracts, reviews, and editorials. No limits were placed on level of evidence or timing of the study because the majority of the reviewed studies were published within the last 10 years.

Study selection

Article titles and abstracts were screened initially by 2 reviewers, and full-text articles were subsequently screened based on the selection criteria. The studies were rated by their level of evidence, based on the Oxford Center for Evidence-based Medicine Levels of Evidence [12]. Two authors reviewed each individual article that was included. Discrepancies in inclusion studies were discussed and resolved by consensus.

Data extraction and categorization

A database was generated from all included studies which consisted of the journal of publication, publication year, country of origin, study design, level of evidence, study duration, blinding of the study, number of involved institutions, AI/ML methods and clinical applications, surgical domain, data sources, input variables and output variables, sample size, average patient age, percent female patients, and any additional pertinent findings from the study. The reviewed articles were sorted into different, nonmutually exclusive categories based on the AI/ML clinical application. AI/ML clinical applications were divided into 2 major groups: (1) administrative and clinical decision support and (2) postoperative prediction and management of complications and outcomes. The former group contained the following prediction and optimization subcategories: preoperative planning and cost prediction, hospital discharge and LOS, readmissions, and reoperations. The latter group included postoperative cardiovascular complications, other complications, mortality, and functional and clinical outcomes.

Data analysis

Descriptive statistics were used to summarize important findings and results from the selected articles and to describe trends in AI/ML techniques, clinical applications, and relevant findings associated with their use. Summary data were presented using simple averages, frequencies, and proportions. This study did not evaluate R2 values. AI/ML model performance within the reviewed studies was summarized using various metrics, including the area under the curve (AUC) of receiver operating characteristic curves, accuracy (%), sensitivity (%), and specificity (%). AUC values range from 0.50 to 1 and measure a prediction model's discriminative ability, with a higher AUC signifying better predictive ability and a greater likelihood of the model correctly placing a patient into an outcome category. A model with an AUC of 1.0 is a perfect discriminator; 0.90 to 0.99 is considered excellent, 0.80 to 0.89 good, 0.70 to 0.79 fair, and 0.51 to 0.69 poor [13]. Reported model performance metrics for each AI/ML algorithm type and for each clinical application category were aggregated across the reviewed studies. One-way analysis of variance (ANOVA) with post hoc Tukey tests was performed, with statistical significance set at P < .05. All statistical analyses were performed using Stata (version 16.1; StataCorp, College Station, Texas).
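To make the AUC metric concrete, the following sketch (with illustrative scores, not data from the review) computes AUC as the probability that a randomly chosen positive case is scored above a randomly chosen negative case, and grades the result on the scale cited above [13].

```python
# Illustrative sketch: AUC via the Mann-Whitney rank interpretation,
# counting ties as 1/2, plus the discrimination bands used in the review.
def auc(scores, labels):
    """P(score of a random positive case > score of a random negative case)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def grade(a):
    """Discrimination bands cited in the review [13]."""
    if a >= 1.0:
        return "perfect"
    if a >= 0.90:
        return "excellent"
    if a >= 0.80:
        return "good"
    if a >= 0.70:
        return "fair"
    return "poor"

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]   # model-predicted risks (hypothetical)
labels = [1,   1,   0,   1,   0,   0]     # observed outcomes (hypothetical)
a = auc(scores, labels)                    # 8/9, i.e., "good" discrimination
```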

Results

Search results and study selection

Predefined search terms resulted in 307 articles, of which 48 duplicate articles were removed. The remaining 259 articles were screened by title and abstract against the inclusion and exclusion criteria. Ultimately, 58 articles were included for full review, of which 49 met full inclusion and exclusion criteria (Fig. 1). Level of evidence of the reviewed studies ranged from II to IV: 61% of studies had level of evidence III, 29% had level of evidence II, and 10% had level of evidence IV. The average number of patients included in model testing was 30,624 (standard deviation [SD] 69,069). Although there were no limitations on publication dates in the selection process, the vast majority of studies (42 studies, or 87.5%) were published during the last 3 years (2018-2020) (Fig. 2). There was variability in the metrics used by authors to report or evaluate AI/ML model performance. AUC was the most frequently reported performance metric, appearing in 39 of the 49 reviewed studies (79.6%). In comparison, accuracy was reported less frequently (10 studies, 20.4%), as were sensitivity and specificity (9 studies, or 18.4%).
Figure 1

PRISMA diagram showing systematic review search strategy.

Figure 2

Trends in the annual number of AI/ML publications in hip and knee surgery (2013-2020∗). ∗Through May 2020.


Administrative and clinical decision support applications

A total of 31 reviewed studies (63.3%) evaluated the use of AI/ML applications in optimizing preoperative patient selection or projecting surgical costs, through prediction of hospital LOS, discharges, readmissions, and other cost-contributing factors (Table 1, Table 2). Sixteen studies (32.7%) evaluated AI/ML applications to accurately predict patient reoperations, operating time, hospital LOS, discharges, readmissions, or surgical and inpatient costs [[14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29]]. In addition, 16 studies (32.7%) used patients’ preoperative risk factors and other patient-specific variables to optimize the patient selection and surgical planning process through the use of AI/ML-based predictions of surgical outcomes and postoperative complications [[30], [31], [32], [33], [34], [35], [36], [37], [38], [39], [40], [41], [42], [43], [44]]. The majority of the decision support studies evaluated AI/ML model performance using receiver operating characteristic/AUC, accuracy, sensitivity, and specificity. Two studies did not test model performance, but instead used cluster analysis to classify patients based on preoperative risk factors and other variables to predict their association with inpatient costs and functional outcomes [20,45].
Table 1

Reviewed studies of preoperative patient selection and planning in hip and knee arthroplasty.

Author, year | Pathology/Surgery | ML algorithms | Prediction outputs | Patients in testing set (n) | Avg. age | % Female | Data source
Alam et al., 2019 [14] | THA | ANN, regression | Costs | 10,000 | — | — | Multicenter
Aram et al., 2018 [15] | TKA | ANN, decision tree | Readmissions/reoperation | 6137 | 70.2 | 57.1 | National Joint Registry (UK)
Bonakdari et al., 2020 [16] | TKA, THA | ANN | Readmissions/reoperation | 5251 | 60.3 | 66.3 | UK National Institute for Health and Care Excellence (NICE)
Borjali et al., 2020 [30] | THA | ANN | Preop patient selection/planning | 25 | 61.3 | 47 | Single institution
Cafri et al., 2019 [31] | TKA | Decision tree, regression | Preop patient selection/planning | 74,520 | 65.5 | 57.4 | Kaiser Permanente Total Joint Replacement Registry (KPTJRR)
Fontana et al., 2019 [32] | TJA | Regression, SVM, decision tree | Preop patient selection/planning | 3430 | — | 63 | Patient database
Gabriel et al., 2019 [17] | THA | Regression, decision tree | Discharge/LOS | 240 | — | 50.5 | Single institution
Hirvasniemi et al., 2019 [33] | THA | Regression | Preop patient selection/planning | 197 | 55.7 | 83.8 | CHECK cohort
Hyer et al., 2019 [18] | TJA | Regression, decision tree | Preop patient selection/planning | 262,290 | 73 | 55.8 | Medicare
Hyer et al., 2020 [19] | All | Decision tree | Costs, readmissions/reoperation | 262,290 | 73 | 55.8 | Medicare
Hyer et al., 2020 [20] | THA, TKA | Cluster analysis | Costs | 19,522 | — | — | Medicare
Jafarzadeh et al., 2020 [44] | TKA | ANN, regression | Preop patient selection/planning | 2357 | 61.6 | 62 | Multicenter Osteoarthritis (MOST) Study
Jodeiri et al., 2020 [34] | THA | ANN | Preop patient selection/planning | 95 | — | — | Single institution
Jones et al., 2019 [21] | THA, TKA | Regression, boosting | Readmissions/reoperation | — | — | — | Medicare
Kang et al., 2020 [35] | THA | ANN | Preop patient selection/planning | 1202 | — | — | Multicenter
Karnuta et al., 2019 [22] | Hip fracture | Bayesian | Costs, discharge/LOS | 98,562 | 73.5 | — | New York Statewide Planning and Research Cooperative System database
Karnuta et al., 2019 [46] | TJA | ANN | Costs | 73,901 | — | — | New York State inpatient administrative database
Lee et al., 2017 [26] | TJA | Regression | Readmissions/reoperation | 26 | — | — | Analysis of patient records to provide risk prediction for readmissions
Lee et al., 2019 [27] | TJA | Boosting, regression | Costs | 131 | — | — | Single institution
Navarro et al., 2018 [28] | TKA | Bayesian | Costs, discharge/LOS | 35,362 | — | — | Administrative database
Pareek et al., 2019 [37] | Knee fracture | ANN, decision tree, regression, boosting, Bayesian | Preop patient selection/planning | 62 | 64.6 | 68 | Single institution
Ramkumar et al., 2019 [23] | TKA | ANN | Costs, discharge/LOS | 175,042 | 73.5 | 64 | National inpatient sample
Ramkumar et al., 2019 [24] | THA | Bayesian | Costs, discharge/LOS | 30,584 | — | — | Patient database
Ramkumar et al., 2019 [25] | THA | ANN | Costs, discharge/LOS | 78,335 | 75.3 | 63.6 | National inpatient sample
Sherafati et al., 2020 [38] | THA | ANN | Preop patient selection/planning | 78 | 63.1 | 47 | Single institution
Tiulpin et al., 2019 [39] | TKA | ANN, regression, boosting | Preop patient selection/planning | 3918 | 61.16/62.50 | 57.2/61.2 | Osteoarthritis Initiative (OAI) and MOST data sets
Tolpadi et al., 2020 [40] | TKA, THA | ANN | Preop patient selection/planning | 719 | 61 | 58 | OAI database
Twiggs et al., 2019 [41] | TKA | Bayesian | Preop patient selection/planning | 150 | 65.7 | 53 | Single institution
Van et al., 2019 [29] | THA | ANN | Costs, preop patient selection/planning | 100 | — | — | Single institution
Yi et al., 2019 [42] | TKA | ANN | Preop patient selection/planning | 154 | — | — | Single institution
Yoo et al., 2013 [43] | TKA | SVM | Preop patient selection/planning | — | 0 | 0 | Single institution

ANN, artificial neural network; LOS, length of stay; ML, machine learning; SVM, support vector machine; THA, total hip arthroplasty; TJA, total joint arthroplasty; TKA, total knee arthroplasty.

Table 2

Characteristics of AI/ML applications, including applied ML algorithms and prediction outputs.

Administrative/clinical decision support applications | Applied ML algorithms | Prediction outputs
Costs | ANN, Bayesian, boosting, decision tree, regression, cluster analysis | Hospital charges, procedural costs, cost-effective interventions, payment, postoperative resource utilization
Discharge/LOS | ANN, Bayesian, decision tree, regression | Discharge disposition, LOS
Preop patient selection/planning | ANN, Bayesian, boosting, decision tree, regression, SVM | Preop OA progression/prognosis, preop THA/TKA indication, patient surgical complexity score, patient selection, identification of implant, preop HOOS JR, preoperative SF-36 MCS, preoperative SF-36 PCS
Readmissions/reoperation | ANN, boosting, decision tree, regression | 30-d readmission, 90-d readmission, unplanned readmission, revision

Postoperative prediction/management applications | Applied ML algorithms | Prediction outputs
Adverse event/other complication | ANN, boosting, decision tree, regression, SVM | 90-d postoperative complications, any complication, periprosthetic joint infection, postoperative complications, postoperative vomiting, pulmonary complication, renal complication, surgical site infection
Cardiovascular complication | Decision tree, regression | Cardiac complication, risk of allogenic blood transfusion (ALBT) in primary lower limb, VTE
Postoperative pain | ANN, boosting, decision tree, regression, SVM | Improvement in SF-36 pain score, VAS score, severe pain
Postoperative mortality | Decision tree, regression | 30-d mortality, 90-d mortality, death
PROMs/Outcomes | ANN, boosting, decision tree, regression, SVM, cluster analysis | Hip OA at 8 y postoperatively, HOOS JR, hip OA at 10 y postoperatively, KOOS JR, patient satisfaction, postoperative Q-score, postoperative functional outcomes, clinically meaningful improvement for the patient-reported health state, postoperative walking limitation, SF-36 MCS, SF-36 PCS, unfavorable outcomes
Sustained opioid use | ANN, boosting, decision tree, regression, SVM | 90-d postoperative outcome-opioid use, postoperative sustained opioid use

AI/ML, artificial intelligence/machine learning; ANN, artificial neural network; HOOS, Hip disability and Osteoarthritis Outcome Score; JR, joint replacement; KOOS, Knee disability and Osteoarthritis Outcome Score; LOS, length of stay; OA, osteoarthritis; PROMs, patient-reported outcome measures; SF-36 MCS, Short Form 36 mental component summary; SF-36 PCS, Short Form 36 physical component summary; SVM, support vector machine; VAS, visual analog scale; VTE, venous thromboembolism.

AI/ML applications of cost prediction were used in 23 models across 11 studies, which reported an average AUC of 0.77 (SD 0.08) (Table 3). Predictive models of LOS and discharges were used in 6 studies, with an average AUC of 0.78 (SD 0.05) across 11 models. Six studies evaluated different AI/ML-based predictive models of readmissions and reoperations, with an average AUC of 0.66 (SD 0.04) across 15 models. Applications of preoperative patient selection/planning were used in 62 models across 16 studies, reporting an average AUC of 0.79 (SD 0.11). ANOVA testing found statistically significant variability in model AUC across the different decision support applications (P < .001), and Tukey post-hoc testing confirmed that AI/ML predictive models of readmissions and reoperations reported significantly lower AUC than each of the other administrative and clinical decision support categories (Table 3).
Table 3

Statistical comparisons of reported model performance metrics, by administrative/clinical decision support application.

Performance metrics: mean (SD, n).

Administrative/clinical decision support applications | AUC | Accuracy | Sensitivity | Specificity
1. Costs | 0.77 (0.08, 23) | 86.5 (4.7, 4) | — | —
2. Discharge/LOS | 0.78 (0.05, 11) | 85.2 (3.2, 2) | 64.5 (—, 1) | 72.1 (—, 1)
3. Preoperative patient selection/planning | 0.79 (0.11, 62) | 95.4 (5.4, 10) | 70.1 (32.7, 9) | 94.6 (7.1, 9)
4. Readmissions/reoperation | 0.66 (0.04, 15) | 80.1 (3.1, 3) | 81.8 (2.4, 2) | 98.3 (0.2, 2)
ANOVA | P < .001 | P < .001 | P = .866 | P = .026
Tukey post hoc tests (statistically significant results) | 4 vs 1 (P = .003); 4 vs 2 (P = .005); 4 vs 3 (P < .001) | 3 vs 1 (P = .006); 3 vs 2 (P < .001); 3 vs 4 (P < .001) | — | 2 vs 3 (P < .001); 2 vs 4 (P < .001)

ANOVA, analysis of variance; AUC, area under curve; LOS, length of stay; SD, standard deviation.

ANOVA testing found statistically significant variability in model accuracy and specificity (P < .001 and P = .026, respectively), and Tukey post hoc testing confirmed statistically significant intergroup differences in those metrics, shown in Table 3. Preoperative planning and patient selection models reported significantly higher average accuracy (95.4%) than each of the other decision support categories, including cost prediction (86.5%; P = .006), discharge/LOS (85.2%; P < .001), and readmissions/reoperations (80.1%; P < .001). Discharge/LOS prediction models had the lowest specificity (72.1%), which was significantly lower than the specificity reported for preoperative planning and patient selection models (94.6%; P < .001) and readmissions/reoperations models (98.3%; P < .001). Conversely, there were no significant differences in model sensitivity between the applications (P = .866).
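The one-way ANOVA F statistic underlying these P values can be sketched as follows, using toy AUC values rather than the review's data; in practice the P value is then obtained from the F distribution (e.g., via scipy.stats.f_oneway).

```python
# Hedged sketch (toy numbers, not the review's data): the one-way ANOVA
# F statistic is between-group mean square over within-group mean square.
# A large F means the category means differ more than chance would suggest.
def f_statistic(groups):
    k = len(groups)                      # number of categories
    n = sum(len(g) for g in groups)      # total number of models
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

readmission_aucs = [0.62, 0.66, 0.68, 0.66]   # hypothetical per-model AUCs
preop_plan_aucs  = [0.76, 0.80, 0.82, 0.78]   # hypothetical per-model AUCs
F = f_statistic([readmission_aucs, preop_plan_aucs])   # about 56: groups differ
```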

Prediction and management of postoperative outcomes and complications

A total of 25 reviewed studies (51.0%) (Table 4) used various AI/ML models to predict outcomes, complications, and adverse events, including postoperative risk of cardiac complications, pulmonary complications, renal complications, venous thromboembolism, blood transfusion, periprosthetic and surgical site infections, vomiting, sustained opioid use, and mortality (ranging from 30 to 90 days) (Table 2). Postoperative prediction categories were sorted into 6 groups based on application: adverse events and other complications, cardiovascular complications, postoperative pain, postoperative mortality, PROs and other functional outcomes, and sustained opioid use (Table 2).
Table 4

Reviewed studies of postoperative outcome prediction in hip and knee arthroplasty.

Author, year | Pathology/Surgery | ML algorithms | Prediction outputs | Patients in testing set (n) | Avg. age | % Female | Data source
Alam et al., 2019 [14] | THA | ANN, regression | PROs/outcomes | 10,000 | — | — | Multicenter
Bini et al., 2019 [45] | TJA | Cluster analysis | PROs/outcomes | 6368 | — | — | Single institution
Fontana et al., 2019 [32] | TJA | Regression, SVM, decision tree | PROs/outcomes | 2744 | — | 63 | Patient database
Galivanche et al., 2019 [47] | THA | Boosting | Adverse event/other complication | 34,982 | — | — | ACS-NSQIP database
Gielis et al., 2020 [48] | THA | Regression | PROs/outcomes | 1044 | 55.9 | 87.3 | CHECK cohort
Gong et al., 2014 [49] | TJA | ANN, regression | Adverse event/other complication | — | 69.6 | 53.3 | Single institution
Harris et al., 2019 [50] | TJA | Regression | Adverse event/other complication, cardiovascular complication, postoperative mortality | — | 65.7 | 59.4 | ACS-NSQIP database
Hirvasniemi et al., 2019 [33] | THA | Regression | PROs/outcomes | 197 | 55.7 | 83.8 | CHECK cohort
Huang et al., 2018 [51] | THA, TKA | Decision tree, regression | Cardiovascular complication | 3797 | 62 | 66 | Multicenter
Huang et al., 2018 [52] | TKA | Decision tree | Postoperative pain | — | — | — | Administrative database
Huber et al., 2019 [53] | THA | Boosting, ANN, regression | Postoperative pain, PROs/outcomes | 31,905 | — | 59.7 | NHS PRO data
Hyer et al., 2019 [18] | THA, TKA | Regression | Adverse event/other complication | 1,049,160 | — | — | Medicare
Hyer et al., 2020 [19] | All | Decision tree | Adverse event/other complication, postoperative mortality | 524,580 | 73 | 55.8 | Medicare
Jacobs et al., 2016 [54] | TKA | Decision tree | PROs/outcomes | 325 | — | — | Single institution
Karhade et al., 2019 [55] | THA | Boosting, decision tree, SVM, ANN, regression | Sustained opioid use | 263 | 59 | 38.7 | Multicenter
Katakam et al., 2020 [56] | TKA | ANN, decision tree, SVM, regression, boosting | Sustained opioid use | 2508 | 67 | 60.3 | Single institution
Kluge et al., 2018 [57] | TKA | Decision tree, ANN, boosting, regression, SVM | PROs/outcomes | 64 | 66.7 | — | Single institution
Kunze et al., 2020 [58] | THA | ANN, decision tree, SVM, regression, boosting | PROs/outcomes | 183 | 62 | 57.3 | Single institution
Onsem et al., 2016 [59] | TKA | Regression | PROs/outcomes | 113 | 65.2 | 56 | Single institution
Parvizi et al., 2018 [60] | THA, TKA | Decision tree | Adverse event/other complication | 422 | 65.4 | 52.3 | Multicenter
Pua et al., 2019 [61] | TKA | Decision tree, regression, boosting | PROs/outcomes | 1208 | 67.8 | 75 | Single institution
Schwartz et al., 1997 [62] | THA | ANN, regression | Postoperative pain | 221 | 63 | 57 | THR outcomes database at Center for Clinical Effectiveness of the Henry Ford Health System
Van et al., 2019 [29] | THA | ANN | Adverse event/other complication | 100 | — | — | Single institution
Wu et al., 2016 [63] | TJA | Regression, SVM | Adverse event/other complication | — | 69.6 | 53.3 | Single institution
Yoo et al., 2013 [43] | TKA | SVM | Postoperative pain, PROs/outcomes | — | 0 | 0 | Single institution

ACS-NSQIP, American College of Surgeons National Surgical Quality Improvement Program; ANN, artificial neural network; CHECK, Cohort Hip and Cohort Knee; ML, machine learning; NHS, National Health Service; PRO, patient-reported outcome; SVM, support vector machine; THA, total hip arthroplasty; THR, total hip replacement; TJA, total joint arthroplasty; TKA, total knee arthroplasty.

Model performance significantly varied across postoperative management applications, which was confirmed by ANOVA testing for average AUC and sensitivity values (P = .002 and P = .042, respectively) (Table 5). Models predicting adverse events and other postoperative complications averaged an AUC of 0.84 (SD 0.10, 14 models), models predicting postoperative cardiovascular complications averaged an AUC of 0.77 (SD 0.08, 8 models), postoperative pain models averaged an AUC of 0.83 (SD 0.05, 10 models), postoperative mortality models averaged an AUC of 0.81 (SD 0.07, 3 models), and postoperative PRO and functional outcome models averaged an AUC of 0.81 (SD 0.08, 56 models). Tukey post hoc testing found statistically significant differences between postoperative sustained opioid use models (average AUC of 0.71) and models predicting adverse events/other complications (AUC 0.84; P = .002), postoperative pain (AUC 0.83; P = .003), and PROs/functional outcomes (AUC 0.81; P = .011) (Table 5). Average sensitivity was also found to be significantly different between adverse event/other complication models (97.7%) and postoperative pain (78.8%; P < .001) and PROs/functional outcome models (76.9%; P < .001) (Table 5). There was no significant variation in reported accuracy or specificity values (P = .279 and P = .167, respectively) (Table 5).
Table 5

Statistical comparison of reported model performance metrics, by postoperative predictions/management applications.

Performance metrics: mean (SD, n).

Postoperative prediction/management applications | AUC | Accuracy | Sensitivity | Specificity
1. Adverse event/other complication | 0.84 (0.1, 14) | — | 97.7 (—, 1) | 99.5 (—, 1)
2. Cardiovascular complication | 0.77 (0.08, 8) | — | — | —
3. Postoperative pain | 0.83 (0.05, 10) | 78.8 (2.2, 7) | 78.7 (7.5, 7) | 78.8 (4.9, 7)
4. Postoperative mortality | 0.81 (0.07, 3) | — | — | —
5. PROs/outcomes | 0.81 (0.08, 56) | 75.1 (8.4, 12) | 76.9 (7.1, 13) | 64.9 (24, 13)
6. Sustained opioid use | 0.71 (0.09, 10) | — | — | —
ANOVA | P = .002 | P = .279 | P = .042 | P = .167
Tukey post hoc tests (statistically significant results) | 6 vs 1 (P = .002); 6 vs 3 (P = .003); 6 vs 5 (P = .011) | — | 1 vs 3 (P < .001); 1 vs 5 (P < .001) | —

ANOVA, analysis of variance; AUC, area under curve; PRO, patient-reported outcome; SD, standard deviation.


Comparison of AI/ML algorithms

Various AI/ML algorithms were used in the reviewed studies, including ANNs, decision trees (including random forest), logistic regressions, gradient boosting/ensemble learning, and Bayesian networks (Table 2, Table 6). The most commonly applied AI/ML algorithms in the reviewed studies were logistic regression (24 studies, 49.0%) and neural networks (23 studies, 46.9%). Decision trees were used in 16 studies (32.7%), boosting/ensemble learning models were used in 11 studies (22.4%), support vector machines were used in 7 studies (14.3%), Bayesian networks were used in 5 studies (10.2%), and cluster analysis was included in only 2 studies (4.1%).
Table 6

Statistical comparisons of reported model performance metrics, by AI/ML algorithm.

Performance metrics: mean (SD, n).

AI/ML algorithm | AUC | Accuracy | Sensitivity | Specificity
ANN | 0.81 (0.11, 56) | 87.6 (11.7, 14) | 70.69 (24.18, 15) | 88.4 (12.9, 15)
Bayesian | 0.81 (0.07, 8) | 84.1 (2.6, 4) | — | —
Boosting | 0.79 (0.07, 19) | 77.3 (7.1, 7) | 77.8 (5.36, 5) | 72.8 (11.7, 5)
Decision tree | 0.78 (0.1, 41) | 89 (—, 1) | 86.35 (16.05, 2) | 99.8 (0.4, 2)
Regression | 0.77 (0.07, 62) | 79 (8.7, 7) | 75.75 (11.28, 6) | 70.4 (14.6, 6)
SVM | 0.77 (0.11, 26) | 83.2 (10, 5) | 86.1 (7.34, 5) | 80.5 (16.1, 5)
ANOVA | P = .252 | P = .228 | P = .497 | P = .019
Tukey post hoc tests (statistically significant results) | — | — | — | —

AI/ML, artificial intelligence/machine learning; ANN, artificial neural network; ANOVA, analysis of variance; AUC, area under curve; SD, standard deviation; SVM, support vector machine.

Across all ML types, ANNs and Bayesian networks each had the highest average AUC (0.81; SD 0.11 across 56 models and SD 0.07 across 8 models, respectively). Boosting/ensemble learning models had an average AUC of 0.79 (SD 0.07, 19 models), followed by decision tree models (AUC 0.78, SD 0.10, 41 models), regression models (AUC 0.77, SD 0.07, 62 models), and support vector machines (AUC 0.77, SD 0.11, 26 models) (Table 6). When comparing AI/ML model performance across algorithm types, one-way ANOVA testing did not find statistically significant variation, except for specificity (P = .019). No significant intergroup differences were found for any performance metric on Tukey post-hoc testing (Table 6).

Training data sets

For several studies, data sets used for training were extracted from large national and multicenter databases (Table 1, Table 4). The most commonly used were the Medicare databases (5 studies). Other administrative and private insurance databases were also used, including the Kaiser Permanente Total Joint Replacement Registry, the American College of Surgeons National Surgical Quality Improvement Program, National Inpatient Sample, and New York Statewide Planning and Research Cooperative System databases. Finally, some studies used training data sets comprising patients from multicenter or single-center cohorts.

Discussion

To our knowledge, this systematic review is the first of its kind, evaluating the accuracy and reliability of AI/ML applications in hip and knee arthroplasties across 49 studies. The included studies investigated the role of AI/ML in clinical decision-making and surgical planning by optimizing patient selection and predicting costs and complication risks. AI/ML models performed best (average AUC > 0.8) when predicting postoperative adverse events and mortality, as well as postoperative pain and PROs. Deep learning/ANN models achieved the highest average AUC and accuracy of all model types and were presented in 47% of the studies.

There are multiple benefits of AI/ML-based predictive modeling in hip and knee arthroplasties. First, an AI/ML-based model capable of predicting the need for surgery is an important tool for surgeons, given increasing cost-consciousness in health-care expenditure. Hip and knee arthroplasty typically involve an older and highly comorbid patient population, and these tools can be especially helpful in identifying patient-specific needs and risks within this population. Examples of how these models can enable providers to create and optimize personalized treatment plans include accurate identification of an implant from a previous surgery for revision procedures and classification of total knee arthroplasty (TKA) surgical candidates based on patient-specific risk factors [29-31,33-35,37-42,44,62]. Hyer et al. demonstrated an AI/ML model that classified TKA and total hip arthroplasty patients based on surgical complexity scores [19]. AI/ML models may aid in predicting postoperative complications and in creating personalized postoperative management protocols to avoid or manage those complications and maximize outcomes. Several of the reviewed studies used AI/ML models to accurately predict the risk of a range of postoperative complications and adverse events [19,29,47,50,51,60].
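The postoperative risk models discussed above are, at their core, classifiers evaluated by the AUC metric reported throughout this review. A minimal sketch of that workflow, using entirely synthetic patient data and an ordinary logistic regression (the feature names and coefficients are invented for illustration, not drawn from any reviewed study):

```python
# Illustrative sketch only: fit a regression-based complication-risk model
# on synthetic patient features and report its AUC, the metric used
# throughout this review. All data and coefficients are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Synthetic predictors: age, BMI, comorbidity count
X = np.column_stack([
    rng.normal(68, 10, n),   # age (years)
    rng.normal(30, 5, n),    # BMI
    rng.poisson(2, n),       # comorbidity count
])
# Synthetic outcome: complication probability rises with each predictor
logit = -12 + 0.08 * X[:, 0] + 0.12 * X[:, 1] + 0.4 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")
```

AUC is the probability that the model ranks a randomly chosen complication case above a randomly chosen non-case, so 0.5 is chance and 1.0 is perfect discrimination; this is why the review treats AUC > 0.8 as strong performance.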
TKA and total hip arthroplasty revisions and reoperations have also been modeled with AI/ML algorithms in some studies [15,16,21,64], as have hospital readmissions [20,21,26,27]. In the postoperative period, AI/ML tools offer surgeons the ability to predict patients' outcomes after surgery, including functional outcomes and PRO scores [14,32,33,43,45,48,53,54,57-59,61]. Postoperative pain has also been predicted with AI/ML [43,53,55,56], including identification of patients at high risk for prolonged postoperative opioid prescriptions. These tools may better inform analgesic and pain management protocols, especially for opioid prescriptions. They may also enable surgeons to better tailor treatment for their patients and perhaps offer nonoperative management, especially when there is a high predicted risk of revision surgery or potentially serious postoperative complications.

Although ML is a powerful predictive tool that aids clinical decision-making by integrating large amounts of information and identifying complex patterns, it remains vulnerable to the biases faced in other forms of clinical research [65-68]. Among these are biases related to nonrandom missing data, limited sample size and underestimation to avoid overfitting by the models, and misclassification of disease or discrepancies in measurement between providers [65,67]. Specific biases also arise because the inputs may be limited by researchers' beliefs about which variables are important or by the ability to collect all possible variables, limiting the accuracy and generalizability of the model. Models created on single-institution data sets may not be generalizable because of variation in measurement or reporting of the variables or outputs [65]. Some national data sets that are often used to create these models do not provide granular data, which can lead to errors.
Other databases can be subject to several biases, including selection bias and misclassification of diagnoses, as some of these data sets were created for purposes other than research, such as billing [67,68]. Sample size and the population from which the training set is sourced are also important for generalizability; these models may be better at making predictions for individuals with high access to care because they are built from those individuals' data [67]. In both smaller single-center and large multicenter databases, there is often a lack of information on social determinants of health, which may contribute to the disparities seen within our current system [65,68]. Controversy surrounding the use of gender and race information in AI/ML models raises ethical concerns about the potential introduction of bias into prediction models designed to optimize outcomes based on historically inequitable health-care data [66,69,70]. These challenges must be taken into account when implementing AI/ML in clinical settings, especially given the well-studied systemic racial and socioeconomic disparities in US health care [70].

The application of AI/ML to clinical decision-making in hip and knee arthroplasties may optimize outcomes by aiding accurate patient selection and surgical planning during the perioperative period. Several studies in our review demonstrated the use of AI/ML models to predict hospital LOS, readmissions, and associated inpatient costs after total joint arthroplasty [14,22-25,27-29,46]. Other studies demonstrated the potential of AI/ML to reduce unnecessary expenditures and create risk-adjusted reimbursement models [24,27,28,46]. AI/ML may even enable insurers to more accurately account for individual patient risk and case complexity, especially in bundled payment models.
However, the shift away from fee-for-service models and toward models that reward cost-efficiency and incentivize treating a low-acuity patient population may indirectly exclude certain patients from accessing care, especially on the basis of ethnic or socioeconomic background and associated comorbidities and risk factors [71,72]. AI/ML is a powerful tool that can be broadly adopted in health care and, more specifically, within the field of hip and knee arthroplasty to optimize patient outcomes, but the data sets on which these models are trained must be carefully constructed, not only to be of sufficient size but also to adequately represent the complexities of our patient population. This study has several potential limitations: we did not use criteria to evaluate the quality of the various data sets used in the reviewed studies, and additional studies using more standardized data sets would be required before conclusions about clinical efficacy can be drawn.

Conclusions

The body of literature on AI/ML-based applications in hip and knee arthroplasties is growing rapidly. Currently, these models perform well in predicting some postoperative complications but remain limited in predicting postoperative opioid use and the need for readmission or reoperation. The accuracy of these predictive tools has the potential to increase with technological advancements and larger data sets, but the models also require external validation. Future work on AI/ML-based applications should aim to create accurate, commercially ready tools that can be integrated into existing systems and fulfill their role as an aid to physicians and patients in clinical decision-making.

Conflicts of interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: R.P.S. is a paid consultant for KCI USA, Inc and LinkBio Corporation. H.J.C. is a paid consultant for KCI USA, Inc and Zimmer Biomet. J.A.G. is a paid consultant for Smith & Nephew Inc.
References

1.  Predicting Inpatient Payments Prior to Lower Extremity Arthroplasty Using Deep Learning: Which Model Architecture Is Best?

Authors:  Jaret M Karnuta; Sergio M Navarro; Heather S Haeberle; J Matthew Helm; Atul F Kamath; Jonathan L Schaffer; Viktor E Krebs; Prem N Ramkumar
Journal:  J Arthroplasty       Date:  2019-06-03       Impact factor: 4.757

2.  Can Machine Learning Methods Produce Accurate and Easy-to-use Prediction Models of 30-day Complications and Mortality After Knee or Hip Arthroplasty?

Authors:  Alex H S Harris; Alfred C Kuo; Yingjie Weng; Amber W Trickey; Thomas Bowe; Nicholas J Giori
Journal:  Clin Orthop Relat Res       Date:  2019-02       Impact factor: 4.176

3.  Predictive analytics and machine learning in stroke and neurovascular medicine.

Authors:  Hamidreza Saber; Melek Somai; Gary B Rajah; Fabien Scalzo; David S Liebeskind
Journal:  Neurol Res       Date:  2019-04-30       Impact factor: 2.448

4.  Machine learning methods are comparable to logistic regression techniques in predicting severe walking limitation following total knee arthroplasty.

Authors:  Yong-Hao Pua; Hakmook Kang; Julian Thumboo; Ross Allan Clark; Eleanor Shu-Xian Chew; Cheryl Lian-Li Poon; Hwei-Chi Chong; Seng-Jin Yeo
Journal:  Knee Surg Sports Traumatol Arthrosc       Date:  2019-12-12       Impact factor: 4.342

5.  Machine Learning and Prediction in Medicine - Beyond the Peak of Inflated Expectations.

Authors:  Jonathan H Chen; Steven M Asch
Journal:  N Engl J Med       Date:  2017-06-29       Impact factor: 91.245

6.  Using neural networks to identify patients unlikely to achieve a reduction in bodily pain after total hip replacement surgery.

Authors:  M H Schwartz; R E Ward; C Macwilliam; J J Verner
Journal:  Med Care       Date:  1997-10       Impact factor: 2.983

7.  Analysis of a large data set to identify predictors of blood transfusion in primary total hip and knee arthroplasty.

Authors:  ZeYu Huang; Cheng Huang; JinWei Xie; Jun Ma; GuoRui Cao; Qiang Huang; Bin Shen; Virginia Byers Kraus; FuXing Pei
Journal:  Transfusion       Date:  2018-08-25       Impact factor: 3.157

8.  Predicting the Future - Big Data, Machine Learning, and Clinical Medicine.

Authors:  Ziad Obermeyer; Ezekiel J Emanuel
Journal:  N Engl J Med       Date:  2016-09-29       Impact factor: 91.245

9.  Can We Improve Prediction of Adverse Surgical Outcomes? Development of a Surgical Complexity Score Using a Novel Machine Learning Technique.

Authors:  J Madison Hyer; Susan White; Jordan Cloyd; Mary Dillhoff; Allan Tsung; Timothy M Pawlik; Aslam Ejaz
Journal:  J Am Coll Surg       Date:  2019-10-28       Impact factor: 6.113

10.  Predicting postoperative vomiting among orthopedic patients receiving patient-controlled epidural analgesia using SVM and LR.

Authors:  Hsin-Yun Wu; Cihun-Siyong Alex Gong; Shih-Pin Lin; Kuang-Yi Chang; Mei-Yung Tsou; Chien-Kun Ting
Journal:  Sci Rep       Date:  2016-06-01       Impact factor: 4.379
