
Machine Learning for Predicting Lower Extremity Muscle Strain in National Basketball Association Athletes.

Yining Lu1, Ayoosh Pareek1, Ophelie Z Lavoie-Gagne2, Enrico M Forlenza3, Bhavik H Patel4, Anna K Reinholz1, Brian Forsythe3, Christopher L Camp1.   

Abstract

Background: In professional sports, injuries resulting in loss of playing time have serious implications for both the athlete and the organization. Efforts to quantify injury probability utilizing machine learning have been met with renewed interest, and the development of effective models has the potential to supplement the decision-making process of team physicians.
Purpose/Hypothesis: The purpose of this study was to (1) characterize the epidemiology of time-loss lower extremity muscle strains (LEMSs) in the National Basketball Association (NBA) from 1999 to 2019 and (2) determine the validity of a machine-learning model in predicting injury risk. It was hypothesized that time-loss LEMSs would be infrequent in this cohort and that a machine-learning model would outperform conventional methods in the prediction of injury risk.
Study Design: Case-control study; Level of evidence, 3.
Methods: Performance data and rates of the 4 major muscle strain injury types (hamstring, quadriceps, calf, and groin) were compiled from the 1999 to 2019 NBA seasons. Injuries included all publicly reported injuries that resulted in lost playing time. Models to predict the occurrence of a LEMS were generated using random forest, extreme gradient boosting (XGBoost), neural network, support vector machines, elastic net penalized logistic regression, and generalized logistic regression. Performance was compared utilizing discrimination, calibration, decision curve analysis, and the Brier score.
Results: A total of 736 LEMSs resulting in lost playing time occurred among 2103 athletes. Important variables for predicting LEMS included previous number of lower extremity injuries; age; recent history of injuries to the ankle, hamstring, or groin; and recent history of concussion as well as 3-point attempt rate and free throw attempt rate. The XGBoost machine achieved the best performance based on discrimination assessed via internal validation (area under the receiver operating characteristic curve, 0.840), calibration, and decision curve analysis.
Conclusion: Machine learning algorithms such as XGBoost outperformed logistic regression in the prediction of a LEMS that will result in lost time. Several variables increased the risk of LEMS, including a history of various lower extremity injuries, recent concussion, and total number of previous injuries.
© The Author(s) 2022.


Keywords:  loss of playing time; lower extremity; machine learning; muscle strain; professional athletes

Year:  2022        PMID: 35923866      PMCID: PMC9340342          DOI: 10.1177/23259671221111742

Source DB:  PubMed          Journal:  Orthop J Sports Med        ISSN: 2325-9671


Injuries in professional athletes are detrimental to both the team and the sport overall. Time missed from sport can be detrimental from not only a competitive perspective but also a financial one. Lower extremity muscle strains (LEMSs) are among the most common injuries in athletes. One study on gastrocnemius-soleus complex injuries in National Football League (NFL) athletes reported at least 2 weeks of missed playing time on average. In a summative report on time out of play for Major and Minor League Baseball players, the authors reported that the most common injuries were related to muscle strains or tears (30%). In the same study, hamstring strains were the most common individual injury, occurring in approximately 7% of the athletes and resulting in a total of more than 46,000 days missed, with a mean of 14.5 days missed per player. In addition, approximately 3.6% of these injuries were season ending and 2.6% recurred at least once more. Two additional types of LEMS were among the 10 most common injuries in Major League Baseball (MLB) players, resulting in additional missed days. While the combined incidence and outcomes of LEMSs have not been well studied in National Basketball Association (NBA) athletes, a 17-year overview of injuries in NBA athletes by Drakos et al identified hamstring and adductor strains as among the 5 most frequently encountered injuries, with quadriceps and hip flexor strains representing significant proportions.

Machine learning has become increasingly recognized as a useful tool in medicine, including orthopaedic surgery. By allowing the creation of predictive models with improved accuracy, machine learning can help guide decision making for not only physicians but also patients. In addition, machine learning has the distinct advantage of performing well when handling complex relationships, allowing accurate prediction from many inputs.
Given the frequency of LEMSs, the playing time lost, and the common recurrence of these injuries, it is important to determine the factors that contribute most to LEMS. In addition, no current models delineate important risk factors for LEMS in professional NBA athletes. Therefore, the purpose of this study was to (1) create accurate machine learning models for the prediction of LEMS in NBA athletes and (2) compare the predictive performance of these models with conventional logistic regression. We hypothesized that time-loss LEMSs would occur infrequently in this elite athlete population and that machine learning would allow the creation of customized risk-predictive tools with higher discrimination than conventional logistic regression.

Methods

Guidelines

This study was conducted in adherence with the Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research as well as the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis guidelines. A detailed modeling workflow is available in the Appendix, and definitions of commonly encountered machine-learning terminology are available in Appendix Table A1. This study was considered exempt from institutional review board approval.
Appendix Table A1

Definition of Machine Learning Concepts and Methods Used

Multiple imputation: A popular method for handling missing data, which are often a source of bias and error in model output. In this approach, a missing value in the data set is replaced with an imputed value based on a statistical estimation; this process is repeated randomly, resulting in multiple "completed" data sets, each consisting of observed and imputed values. These are combined using a simple formula known as the Rubin rule to give final estimates of target variables. [9]
Recursive feature elimination (RFE): A feature selection algorithm that searches for an optimal subset of features by fitting a given machine learning algorithm (random forest and naïve Bayes in our case) to the predicted outcome, ranking the features by importance, and removing the least important features; this is done repeatedly, in a "recursive" manner, until a specified number of features remains or a threshold value of a designated performance metric has been reached. The features can then be entered as inputs into the candidate models for prediction of the desired outcome. [7]
0.632 bootstrapping: The method for training an algorithm based on the input features selected from RFE. Briefly, model evaluation consists of reiterative partitions of the complete data set into train and test sets. For each combination of train and test set, the model is trained on the train set using 10-fold cross-validation repeated 3 times. The performance of this model is then evaluated on the respective test set; no data points from the training set are included in the test set. This sequence of steps is then repeated for 999 more data partitions. [11] The model is thus trained and tested on all data points available, and evaluation metrics are summarized with standard distributions of values. [11] Bootstrapping has been found to optimize both model bias and variance and to improve overall performance compared with internal validation through splitting of the data into training and holdout sets.
Extreme gradient boosting: Algorithm of choice among stochastic gradient boosting machines, a family in which multiple weak classifiers (classifiers that predict marginally better than random) are combined (in a process known as boosting) to produce an ensemble classifier with a superior generalized misclassification error rate. [7]
Random forest: Algorithm of choice among tree-based algorithms; an ensemble of independent trees, each generating predictions for a new sample chosen from the training data, whose predictions are averaged to give the forest's prediction. The ensembling process is distinct in principle from gradient boosting. [7]
Neural network: A nonlinear regression technique based on 1 or more hidden layers consisting of linear combinations of some or all predictor variables, through which the outcome is modeled; these hidden layers are not estimated in a hierarchical fashion. The structure of the network mimics neurons in a brain. [7]
Elastic net penalized logistic regression: A penalized regression based on a function that minimizes the squared errors of the outputs; belongs to the family of penalized linear models that includes ridge regression and the lasso. [7]
Support vector machines: A supervised learning algorithm that performs classification by representing each data point as a point in abstract space and defining a plane, known as a hyperplane, that separates the points into distinct binary classes with maximal margin. Hyperplanes can be linear or nonlinear, as implemented in the presented analysis using a circular kernel. [7]
Area under the receiver operating characteristic curve (AUC): A common metric of model performance, utilizing the receiver operating characteristic curve, which plots calculated sensitivity and specificity given the class probability of an event occurring (instead of using a 50:50 probability). The AUC classically ranges from 0.5 to 1, with 0.5 being a model no better than random and 1 being a model that is completely accurate in assigning class labels. [7]
Calibration: The ability of a model to output probability estimates that reflect the true event rate in repeat sampling from the population. An ideal calibration curve is a straight line with an intercept of 0 and a slope of 1 (ie, perfect concordance of model predictions with observed frequencies in the data). A model can correctly assign a label, as reflected by the AUC, yet output class probabilities of a binary outcome that are dramatically different from the true event rate in the population; such a model is not well calibrated. [7]
Brier score: The mean squared difference between the predicted probabilities of models and the observed outcomes in the testing data. The Brier score generally ranges from 0 for a perfect model to 0.25 for a noninformative model. [7]
Decision curve analysis: A measure of clinical utility whereby a clinical net benefit for 1 or more prediction models or diagnostic tests is calculated in comparison with the default strategies of treating all or no patients. This value is calculated based on a set threshold, defined as the minimum probability of disease at which further intervention would be warranted. The decision curve is constructed by plotting the range of threshold values against the net benefit yielded by the model at each value; as such, a model curve that is farther from the bottom left corner yields more net benefit than one that is closer. [16]
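The 0.632 bootstrap described above can be illustrated with a minimal sketch. This is a toy example (invented data and a trivial threshold "model" standing in for the study's algorithms), shown only to make the defining step concrete: blending the optimistic apparent error with the pessimistic out-of-bag error as 0.368 x apparent + 0.632 x OOB.

```python
import random

random.seed(0)

# Toy data: one feature x and a binary outcome y (hypothetical values).
data = [(x, int(x > 5)) for x in range(10)]

def train(sample):
    # Stand-in "model": classify as positive when x exceeds the training mean.
    return sum(x for x, _ in sample) / len(sample)

def error(threshold, points):
    # Misclassification rate of the threshold rule on the given points.
    return sum(int((x > threshold) != y) for x, y in points) / len(points)

# Apparent error: train and evaluate on the full data set (optimistic).
apparent = error(train(data), data)

# Out-of-bag error: train on bootstrap resamples, test on the left-out points.
oob_errors = []
for _ in range(200):
    boot = [random.choice(data) for _ in data]  # sample with replacement
    held_out = [p for p in data if p not in boot]  # points never drawn
    if held_out:
        oob_errors.append(error(train(boot), held_out))
oob_err = sum(oob_errors) / len(oob_errors)

# The 0.632 estimator blends the optimistic and pessimistic estimates.
err_632 = 0.368 * apparent + 0.632 * oob_err
```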

Data Collection

NBA athlete data were publicly sourced from 3 online platforms: www.prosportstransactions.com, www.basketball-reference.com, and www.sportsforecaster.com. Injuries included all publicly reported injuries resulting in loss of playing time. Data were compiled for all players from the 1999 through 2019 NBA seasons (over a 20-year period). Data collected included demographic characteristics, prior injury documentation, and performance metrics.

Variables and Outcomes

The primary outcome of interest was the risk of sustaining a major muscle strain, defined as any muscle strain that led to loss of playing time based on movement to and from the injury list, as noted by the publicly available compilation of professional basketball transactions. The major muscle strain injury types considered in the model were hamstring, quadriceps, calf, and groin muscle strains. Demographic variables included age, career length, and player position. Clinical variables included recent injury history, defined as one of the following injuries within 8 weeks of the case injury: groin, quadriceps, hamstring, ankle, or back injury or concussion; remote injury history, defined as any history of these injuries before the case injury; and the previous total count of lower extremity injuries. Performance metrics, both basic and advanced, were also included. Notable advanced statistics included the 3-point attempt rate (percentage of a player's field goal attempts that are for 3 points) and the free throw attempt rate (the ratio of a player's free throw attempts to field goal attempts). The full list of variables considered for feature selection is provided in Appendix Table A2. There were no missing data. All variables collected in the final compilation were included in recursive feature elimination (RFE) using a random forest algorithm, a technique demonstrated to effectively isolate features correlated with the desired outcome while eliminating variables with high collinearity within high-dimensional data.
Appendix Table A2

Inputs Considered for Feature Selection

Variables
Recent groin injury; recent ankle injury; recent concussion; recent hamstring injury; recent back injury; age; recent quad injury; previous injury count; position; games played; games started; minutes per game; field goals made per game; field goal attempts per game; field goal percentage; 3-point shots made per game; 3-point shots attempted per game; 3-point percentage; 2-point shots made per game; 2-point shots attempted per game; 2-point percentage; effective field goal percentage; free throws made per game; free throws attempted per game; free throw percentage; offensive rebounds per game; defensive rebounds per game; total rebounds per game; assists per game; steals per game; blocks per game; turnovers per game; personal fouls per game; points per game; player efficiency rating; true shooting percentage; 3-point attempt rate; free throw attempt rate; offensive rebound percentage; defensive rebound percentage; total rebound percentage; assist percentage; steal percentage; block percentage; turnover percentage; usage percentage; offensive win share; defensive win share; win shares; win shares per 48 min; offensive box ±; defensive box ±; box ±; value over replacement player
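The recursive elimination loop applied to these inputs can be sketched in a few lines: drop the least important feature one at a time until a target count remains. The sketch below uses absolute correlation with the outcome as a stand-in importance measure (the study used random forest importance), and the feature names and values are invented for illustration.

```python
def correlation(xs, ys):
    # Pearson correlation, returning 0.0 for a constant feature.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def rfe(features, y, keep):
    """Recursively drop the least 'important' feature until `keep` remain.
    Importance here is |correlation with the outcome|, a toy stand-in for
    the random forest importance used in the study."""
    features = dict(features)
    while len(features) > keep:
        worst = min(features, key=lambda f: abs(correlation(features[f], y)))
        del features[worst]
    return sorted(features)

# Hypothetical athlete-level data (not from the study).
y = [0, 0, 1, 1, 1]
feats = {
    "prev_injury_count": [0, 1, 2, 3, 4],    # strongly related to outcome
    "age":               [22, 24, 27, 29, 31],
    "jersey_number":     [7, 3, 11, 2, 5],   # noise feature
}
print(rfe(feats, y, keep=2))  # the noise feature is eliminated first
```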

Model Training

After selection, modeling was performed using the selected features with each of the following candidate machine learning algorithms: elastic net penalized regression, random forest, extreme gradient boosted (XGBoost) machine, support vector machine, neural network, and logistic regression. Variables significant on logistic regression were entered into a simplified XGBoost model for benchmarking. Models were trained using 10-fold cross-validation repeated 3 times. The performance of each model was then evaluated on the respective test set, with no data points from the training set included in the test set. The model was then internally validated via 0.632 bootstrapping with 1000 resample sets because of this technique's ability to optimize evaluation of both model bias and variance. The model was thus tested on all data points available, and evaluation metrics were summarized with standard distributions of values.
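The repeated cross-validation scheme above can be sketched as an index generator: each of 3 repeats reshuffles the data and yields 10 disjoint train/test partitions, so no test point ever appears in its own training set. This is a minimal stand-in for the resampling machinery presumably used in the authors' R workflow.

```python
import random

def repeated_kfold_indices(n, k=10, repeats=3, seed=0):
    """Yield (train, test) index lists for k-fold CV repeated `repeats`
    times, reshuffling before each repeat so the folds differ."""
    rng = random.Random(seed)
    idx = list(range(n))
    for _ in range(repeats):
        rng.shuffle(idx)
        for fold in range(k):
            test = idx[fold::k]          # every k-th shuffled index
            test_set = set(test)
            train = [i for i in idx if i not in test_set]
            yield train, test

splits = list(repeated_kfold_indices(n=50, k=10, repeats=3))
print(len(splits))  # 30 train/test partitions (10 folds x 3 repeats)
```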

Model Selection

The optimal model was chosen based on area under the receiver operating characteristic curve (AUC). Models were compared by discrimination, calibration, and Brier score values (Figure 1A). An AUC of 0.70 to 0.80 was considered acceptable, and an AUC of 0.80 to 0.90 was considered excellent. The mean square difference between predicted probabilities of models and observed outcomes, known as the Brier score, was calculated for each candidate model. The Brier scores of candidate algorithms were then assessed by comparison with the Brier score of the null model, which is a model that assigns a class probability equal to the sample prevalence of the outcome for every prediction.
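The Brier score and null-model comparison used for model selection are simple to compute directly. The sketch below uses invented outcomes and probabilities; note that a null model that always predicts the sample prevalence p scores p(1 - p), which reaches the 0.25 "noninformative" ceiling described in Appendix Table A1 when p = 0.5.

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probabilities and outcomes."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

# Toy cohort: 1 injury among 4 athlete-seasons (prevalence 0.25).
y = [0, 0, 0, 1]
informative = [0.05, 0.10, 0.20, 0.85]
null_model = [0.25] * 4  # always predicts the sample prevalence

print(brier_score(y, informative))  # 0.01875 (lower = more accurate)
print(brier_score(y, null_model))   # p * (1 - p) = 0.25 * 0.75 = 0.1875
```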
Figure 1.

(A) Discrimination and (B) calibration of the extreme gradient boosted machine. AUC, area under the receiver operating characteristic curve.

The final model was calibrated with the observed frequencies within the test population and summarized in a calibration plot (Figure 1B). Ideally, the model is calibrated to a straight line, with an intercept of 0 and slope of 1 corresponding to perfect concordance of model predictions to observed frequencies in the data.

Model Implementation

The benefit of implementing the predictive algorithm into practice was assessed via decision curve analysis. These curves plot the net benefit against the predicted probabilities of each outcome, providing the cost-benefit ratio for every probability threshold of classifying a prediction as high risk. Additionally, curves demonstrating default strategies of changing management for all or no patients are included for comparative purposes.
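Decision curve analysis reduces to evaluating net benefit, NB = TP/n - (FP/n) x t/(1 - t), across thresholds t. The sketch below, using invented labels and probabilities, also evaluates the two default strategies (treat all, treat none) that anchor the curves.

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit of intervening on patients whose predicted risk
    meets `threshold`: NB = TP/n - (FP/n) * t / (1 - t)."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - fp / n * (threshold / (1 - threshold))

y = [1, 0, 0, 0, 1, 0, 0, 0]  # hypothetical outcomes, prevalence 0.25
model = [0.9, 0.2, 0.1, 0.3, 0.6, 0.05, 0.15, 0.4]

t = 0.5
nb_model = net_benefit(y, model, t)
nb_all = net_benefit(y, [1.0] * len(y), t)  # default: treat everyone
nb_none = 0.0                               # default: treat no one
print(nb_model, nb_all)  # the model beats treat-all at this threshold
```

Sweeping t over a grid and plotting the three curves reproduces the layout of Figure 3, where the "All" line slopes downward as false positives gain weight.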

Model Interpretability

Both global and local model interpretability and explanations were provided. Global interpretability is presented as a plot of the model's input variables normalized against the input with the largest contribution to the model prediction, along with Shapley additive explanations (SHAP), which show how much each predictor contributes, positively or negatively, to the model output. Local explanations are provided using local interpretable model-agnostic explanations (LIME), in which variable contributions to individual model predictions are visually depicted.
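SHAP values are average marginal contributions of a feature across all possible coalitions. For a handful of features this can be computed exactly by brute force, as sketched below on a toy additive risk model with hypothetical coefficients and feature names (not the study's XGBoost model); for an additive model, each feature's Shapley value is simply its coefficient times its deviation from baseline.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, predict, baseline):
    """Exact Shapley values for one prediction: the weighted average marginal
    contribution of each feature over all coalitions of the others.
    Tractable only for a few features."""
    names = list(features)
    n = len(names)

    def value(subset):
        # Features outside `subset` are fixed at their baseline value.
        x = {f: (features[f] if f in subset else baseline[f]) for f in names}
        return predict(x)

    phi = {}
    for f in names:
        others = [g for g in names if g != f]
        total = 0.0
        for r in range(len(others) + 1):
            for s in combinations(others, r):
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += w * (value(set(s) | {f}) - value(set(s)))
        phi[f] = total
    return phi

# Toy additive risk model (hypothetical coefficients, not the study's):
predict = lambda x: 0.02 + 0.05 * x["prev_injuries"] + 0.03 * x["recent_ankle"]
phi = shapley_values({"prev_injuries": 3, "recent_ankle": 1},
                     predict, baseline={"prev_injuries": 0, "recent_ankle": 0})
print(phi)  # contributions sum to prediction minus baseline prediction
```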

Digital Application

The final model was incorporated into a web-based application to illustrate possible future model integration. It should be noted that this digital application remains exclusively for research and educational purposes until rigorous external validation has been conducted. In the digital application, athlete demographic and performance data are entered to generate outcome predictions with accompanying explanations. All data analysis was performed in R Version 4.0.2 using RStudio Version 1.2.5001.

Results

Patient Characteristics

A total of 2103 NBA athletes were included in the study over a 20-year period. The median career length was 6 years (interquartile range, 2-9 years), with an almost even breakdown between designated positions (Table 1). Hamstring (36.4%) and calf (36.1%) injuries were more prevalent compared with quadriceps (11.5%) and groin (15.9%) injuries. The incidence rate of LEMSs per athlete per season was 5.83%.
Table 1

Baseline Characteristics of the Study Population (N = 2103)

Age, y: 26 (23-29)
BMI, kg/m²: 24.3 (20.1-26.5)
Career length, y: 6 (2-9)
Position
 Center: 384 (18.2)
 Power forward: 429 (20.4)
 Point guard: 424 (20.2)
 Small forward: 389 (18.5)
 Shooting guard: 477 (22.7)
Injuries (n = 736)
 Quadriceps: 85 (11.5)
 Hamstring: 268 (36.4)
 Calf: 266 (36.1)
 Groin: 117 (15.9)

Values are presented as n (%) or median (interquartile range). BMI, body mass index.


Multivariate Logistic Regression

After feature selection, multivariate logistic regression was used to generate models from the selected features, with odds ratios (ORs) reported for statistically significant contributors to LEMS. The most important risk factor for LEMS was previous injury count (OR, 21.0; 95% CI, 2.5-72.5) (Table 2). The next 5 most important risk factors for LEMS, in order from most to least contributory, were recent quadriceps injury (OR, 4.31; 95% CI, 1.21-15.4), recent groin injury (OR, 2.9; 95% CI, 2.88-2.91), free throw attempt rate (OR, 2.76; 95% CI, 1.27-6), recent ankle injury (OR, 2.66; 95% CI, 2.65-2.68), and recent hamstring injury (OR, 2.39; 95% CI, 2.38-2.4) (Table 2). While significant, age and games played had a negligible effect on LEMS in the logistic regression (ORs, 1.01-1.03), and 3-point attempt rate was protective against LEMS (OR, 0.46; 95% CI, 0.27-0.79).
Table 2

Significant Contributors to Lower Extremity Muscle Strain From Logistic Regression Model

Variable: OR (95% CI)
Previous injury count: 21.0 (2.5-72.5)
Recent quadriceps injury: 4.31 (1.21-15.4)
Recent groin injury: 2.9 (2.88-2.91)
Free throw rate: 2.76 (1.27-6)
Recent ankle injury: 2.66 (2.65-2.68)
Recent hamstring injury: 2.39 (2.38-2.4)
Recent concussion: 2.34 (2.33-2.35)
Recent back injury: 1.95 (1.94-1.96)
Age: 1.03 (1.01-1.05)
Games played: 1.01 (1.01-1.02)
3-point attempt rate: 0.46 (0.27-0.79)

OR, odds ratio.


Model Creation and Performance

After multivariate logistic regression, machine-learning models were trained using the same variables identified from RFE. After model optimization, candidate model performances on internal validation were compared (Table 3). The random forest and XGBoost models had the higher AUCs on internal validation data, 0.830 (95% CI, 0.829-0.831) and 0.840 (95% CI, 0.831-0.845), respectively, although XGBoost had a nonsignificantly higher Brier score. In this data set, conventional logistic regression had a significantly lower AUC on internal validation (0.818; 95% CI, 0.817-0.819) compared with the aforementioned models; similarly, this was lower than the AUC yielded by the simplified XGBoost model (0.832; 95% CI, 0.818-0.838). The calibration slopes of the models ranged from 0.997 for the neural network to 1.003 for XGBoost, suggesting excellent estimation for all models (Table 3). The Brier scores of the models ranged from 0.029 for random forest to 0.031 for multiple models, indicating excellent accuracy. The XGBoost model had the highest overall AUC, with comparable calibration and Brier scores, and was therefore chosen as the best-performing candidate algorithm (Figure 1).
Table 3

Model Assessment on Internal Validation Using 0.632 Bootstrapping With 1000 Resampled Data Sets (N = 2103)

Model | Apparent AUC (95% CI) | Internal Validation AUC (95% CI) | Calibration Slope | Calibration Intercept | Brier Score
Elastic net | 0.834 (0.791-0.877) | 0.819 (0.818-0.820) | 0.999 (0.998-1) | 0.003 (0.001-0.005) | 0.031 (0.027-0.034)
Random forest | 0.905 (0.896-0.92) | 0.830 (0.829-0.831) | 1.001 (1-1.002) | 0.002 (0.001-0.007) | 0.029 (0.027-0.032)
XGBoost | 0.906 (0.899-0.911) | 0.840 (0.831-0.845) | 1.003 (1.002-1.004) | 0.002 (0.001-0.007) | 0.03 (0.027-0.033)
SVM | 0.881 (0.88-0.882) | 0.787 (0.786-0.788) | 0.999 (0.998-1) | 0.007 (0.004-0.009) | 0.031 (0.028-0.034)
Neural network | 0.84 (0.839-0.841) | 0.813 (0.812-0.814) | 0.997 (0.996-0.998) | 0.003 (0-0.005) | 0.031 (0.028-0.034)
Logistic regression | 0.835 (0.834-0.836) | 0.818 (0.817-0.819) | 0.998 (0.997-0.999) | 0.008 (0.002-0.012) | 0.031 (0.028-0.034)
Simple XGBoost | 0.882 (0.880-0.882) | 0.832 (0.818-0.838) | 0.999 (0.998-1.000) | 0.003 (0.002-0.004) | 0.031 (0.027-0.033)

Null model Brier score = 0.063. AUC, area under the receiver operating characteristic curve; SVM, support vector machine; XGBoost, extreme gradient boosted. Values in parentheses are 95% CIs.


Variable Importance

The global importance of input variables used for XGBoost was assessed, with previous lower extremity injury having a near 100% relative influence, followed by games played, free throw attempt rate and percentage, 3-point attempt rate, and assist percentage (Figure 2A). SHAP values (Figure 2B) are average marginal contributions of the selected features across all possible coalitions and indicated the 3 most influential features to be total rebound percentage, previous lower extremity injury count, and games played. As can be interpreted, age and games played affect the model in a positive direction. While there are a number of outliers in which a high injury count does not contribute positively to an increased probability of LEMS, the overall contribution of previous injury count, as indicated by the mean SHAP value, remains positive.
Figure 2.

(A) Variable importance plot of the extreme gradient boosted (XGBoost) machine model. (B) Summary plot of Shapley (SHAP) values of the XGBoost model. Specifically, the global SHAP values are plotted on the x-axis with variable contributions on the y-axis. Numbers next to each input name indicate the mean global SHAP value, and gradient color indicates feature value. Each point represents a row in the original data set. Three-point attempt rate = percentage of player field goals that are for 3 points; free throw attempt rate = ratio of free throw attempts to field goal attempts. LE, lower extremity.


Decision Curve Analysis

Decision curve analysis was used to compare the net benefit derived from the trained XGBoost model. For comparison purposes, a decision curve was also plotted for a learned multivariate logistic regression model trained using the same parameters and inputs. The XGBoost model trained on the complete feature set demonstrated greater net benefit compared with logistic regression and other alternatives (Figure 3).
Figure 3.

Decision curve analysis comparing the complete extreme gradient boosted (XGBoost) machine algorithm with the complete logistic regression as well as a simplified model utilizing select parameters. The downsloping line marked by “All” plots the net benefit from the default strategy of changing management for all patients, while the horizontal line marked “none” represents the strategy of changing management for none of the patients (net benefit is zero at all thresholds). The “All” line slopes down because at a threshold of zero, false positives are given no weight relative to true positives; as the threshold increases, false positives gain increased weight relative to true positives and the net benefit for the default strategy of changing management for all patients decreases. LR, logistic regression.


Interpretation

An example of a patient-level evaluation and variable importance explanation is provided in Figure 4. This patient was assigned a probability of 0.007 (approximately 0.7%) of sustaining a LEMS. Features that decreased the patient's risk for LEMS included the lack of a recent ankle injury, concussion, or groin, hamstring, or quadriceps injury (among others). Features that increased the patient's risk of injury were a recent back injury and the 3-point shot percentage.
Figure 4.

Example of individual patient-level explanation for the simplified extreme gradient boosted machine algorithm predictions. This athlete had a predicted injury risk of 0.77% at this point during the season. The only feature to support the likelihood of injury was a recent back injury.

For each patient or professional basketball athlete, baseline parameters can be collected or examined during the encounter to generate predictions regarding the risk of LEMS. These predictions can be utilized to inform counseling, modify exercise regimens, or dictate rest periods for athletes at high risk for LEMS. The final model was incorporated into a web-based application that generates predictions for probabilities of LEMS. The application (available at https://sportsmed.shinyapps.io/NBA_LE) is accessible from desktop computers, tablet computers, and smartphones. Default values are provided as placeholders in the interface, and the model requires complete cases to generate predictions and explanations.

Discussion

The principal findings of our study are that (1) the incidence of LEMSs in the NBA over the study period was 5.83% per athlete per season, and a number of significant features in the prediction of these injuries were identified; (2) the XGBoost algorithms outperformed logistic regression with regard to discrimination, calibration, and overall performance; and (3) the clinical model was incorporated into an open-access injury risk calculator for education and demonstration purposes.

Despite a number of studies evaluating injuries in the NBA, there is a paucity of evidence regarding the rate of LEMS. A study by Drakos et al found that hamstring strains (n = 413; 3.3%), defined as any strain that required missed time, physician referral, or emergency care, were among the most frequently encountered orthopaedic injuries in the NBA across a 17-year study period; our study reports a lower incidence (n = 268) over 20 years, likely because we counted only time-loss injuries. This likely reflects a similar overall deflation of injury incidence in our cohort among other body parts. While similar investigations have been carried out in professional soccer, baseball, and American football athletes, few studies have investigated the risk factors for LEMSs or characterized their epidemiology in professional basketball athletes.

The input features that were significant on both multivariate logistic regression and global variable importance assessment included the following: previous injury count; games played; history of recent ankle, groin, hamstring, or back injury; history of a concussion; free throw percentage; and 3-point shot percentage. The investigation by Orchard et al identified a strong relationship between LEMS and a recent history of same-site muscle strain.
We similarly utilized the definition of recent recurrence as within 8 weeks of the index injury as described by Fuller et al, because of the absence of such a consensus definition in the NBA, and corroborated this finding among basketball athletes. This relationship is intuitive, as premature return to play can predispose athletes to reinjury. However, we did not identify a relationship between nonrecent history of any injuries with the risk of an index injury, which suggests that injury risk for LEMSs may be equivalent between controls and injured athletes beyond the 8-week window within this cohort of elite athletes. There has been a longstanding relationship between ankle injuries and lower extremity biomechanics and postural stability. However, whether this relationship is causative or correlative is more ambiguous. While a motion analysis study by Leanderson et al identified a significantly increased range of postural sway among Swedish Division II basketball players with a history of lateral ankle sprains compared with controls, a more recent investigation by Lopezosa-Reca et al found that athletes who had certain foot postures as described by the Foot Posture Index were more likely to experience lower extremity injuries such as lateral ankle sprains and patellar tendinopathy. Our findings suggest that altered foot biomechanics secondary to a recent ankle injury contribute at least partially to the increased risk of lower extremity muscle injuries, as nonrecent ankle injury was not found to be a significant risk factor. The relationship between concussion and lower extremity injury has been extensively studied in the sports medicine literature, with a number of hypotheses for the underlying mechanism, including resultant deficiencies in dynamic balance, neuromuscular control, cognitive accuracy, and gait performance. 
This observation is corroborated by our study, which identified a concussion history as a significant risk factor in both the multivariate logistic regression and the machine learning models. Finally, an interesting protective correlation was identified between 3-point attempt rate and risk of lower extremity injury. One possible explanation is that the 3-point attempt rate serves as a proxy for playing style, as players who take a greater number of 3-point attempts usually play a less physical perimeter game. Conversely, the free throw rate reflects a player's ability to draw personal fouls from opponents and is therefore a measure of physical play and a strong predictor of injury risk.

On evaluation of the predictive models, both the complete and simple XGBoost models outperformed the logistic regression on both discrimination and the Brier score. Investigators have previously developed machine learning injury-prediction models for recreational athletes as well as for professional sports, including the NFL, National Hockey League, and MLB. These models utilized a range of inputs, from performance metrics to video recordings and motion kinematics. The present study evaluated a number of performance metrics as well as clinical injury history, which may present more actionable findings for the team physician. After external validation, prospective deployment of the model can integrate the athlete's injury history within an 8-week window to provide a real-time snapshot of the athlete's risk of experiencing a LEMS with excellent fidelity and reliability.
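To make the model comparison above concrete, the two headline metrics can be written out in a few lines of plain Python: the area under the ROC curve (discrimination) and the Brier score (overall probabilistic accuracy, lower is better). This is an illustrative sketch with hypothetical toy numbers, not the study's data or code.

```python
def brier_score(y_true, y_prob):
    """Mean squared difference between predicted risk and observed outcome (lower is better)."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def auc(y_true, y_prob):
    """Probability that a randomly chosen injured athlete is ranked above an
    uninjured one, counting ties as 1/2 (equivalent to the Mann-Whitney U)."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

# Toy season: 1 = time-loss LEMS occurred, 0 = no injury.
outcomes = [1, 1, 0, 0, 0]
xgb_risk = [0.80, 0.60, 0.30, 0.20, 0.10]  # hypothetical XGBoost outputs
lr_risk = [0.55, 0.40, 0.45, 0.30, 0.25]   # hypothetical logistic regression outputs
```

On this toy data the hypothetical XGBoost outputs rank every injured athlete above every uninjured one (AUC 1.0) and achieve the lower Brier score, mirroring the pattern of results reported in the study.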
Additionally, with contemporary improvements in computational and sensor technology, interest has grown in the potential of global positioning system tracking data for real-time injury forecasting and prevention. Machine learning is uniquely equipped to handle the sheer volume of such data through automated structure and pattern recognition.
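The 8-week recency window described earlier, adapted from Fuller et al, is simple to operationalize. The sketch below is a hypothetical helper (the function and variable names are illustrative, not from the published model) that flags whether a prior injury counts as "recent" relative to an index date.

```python
from datetime import date, timedelta

# "Recent" recurrence window per the Fuller et al style definition used in
# the study: within 8 weeks before the index injury.
RECENT_WINDOW = timedelta(weeks=8)

def is_recent(prior_injury_date: date, index_date: date) -> bool:
    """Return True if the prior injury occurred within 8 weeks before the index date."""
    delta = index_date - prior_injury_date
    return timedelta(0) <= delta <= RECENT_WINDOW

# Example: an ankle sprain 5 weeks before the index LEMS is "recent";
# one 12 weeks before is not.
```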

Strengths and Limitations

The findings of this study must be interpreted in the context of a number of limitations. The first concerns the quality of the data source. While we were able to capture injury history and performance metrics that can serve as proxies for playing style, data extracted from publicly available sources offer no insight into postinjury rehabilitation protocols or long-term management strategies for recurrent injuries. Additionally, detailed clinical data, including physical examination and imaging findings, were unavailable. Even within these limitations, it is notable that the current machine learning algorithm achieved an excellent level of concordance and calibration and that a simplified algorithm performed similarly to the complete logistic regression model; it is expected that prospective incorporation of granular characteristics of the injury and the return-to-play protocol would further improve the performance of the algorithm. Second, the sample is limited to elite athletes, and generalizability to those competing at the recreational or semiprofessional levels remains questionable pending further external validation; accordingly, the digital application is intended for education and demonstration only. In this context, an interesting future extension of this study would be a matched-cohort comparison of LEMS injury risk between professional and amateur athletes. Finally, the black box phenomenon is an inherent flaw of certain machine learning algorithms, wherein transparency into model behavior is insufficient. For example, the complete model utilizes 25 inputs and can become cumbersome for the user, especially for physicians to whom the effects of specific performance variables may not be clear from a clinical perspective.
We have attempted to mitigate this by reducing the dimensions of the training data to produce a simplified model that is both clinically sound and easily deployable without significantly sacrificing performance. In addition, the application features a built-in model-agnostic local explanation algorithm that approximates the model's dependence on each input for a given prediction.
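A minimal version of such a model-agnostic local explanation can be sketched by perturbing one feature at a time around an athlete's input and reporting how much the predicted risk moves. Everything below is a hypothetical stand-in (a toy linear "risk model" rather than the trained XGBoost model, with invented feature names), intended only to illustrate the idea.

```python
def local_sensitivity(predict, x, deltas):
    """Return {feature: change in predicted risk} when each feature is nudged by its delta."""
    base = predict(x)
    effects = {}
    for name, d in deltas.items():
        x_pert = dict(x)       # copy so other features stay fixed
        x_pert[name] += d
        effects[name] = predict(x_pert) - base
    return effects

# Toy linear "risk model" standing in for the trained model.
def toy_predict(x):
    return 0.05 + 0.02 * x["prior_injuries"] + 0.01 * x["age_over_25"]

x0 = {"prior_injuries": 3, "age_over_25": 2}
effects = local_sensitivity(toy_predict, x0,
                            {"prior_injuries": 1, "age_over_25": 1})
```

For this toy model, one additional prior injury moves the predicted risk about twice as much as one additional year of age, which is the kind of per-prediction attribution the application surfaces to the user.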

Conclusion

Machine learning algorithms such as XGBoost outperformed logistic regression in the effective and reliable prediction of a LEMS resulting in lost playing time. Factors that increased the risk of LEMS included a history of back, quadriceps, hamstring, groin, or ankle injury; concussion within the previous 8 weeks; and total count of previous injuries.
References (37 in total)

Review 1.  Consensus statement on injury definitions and data collection procedures in studies of football (soccer) injuries.

Authors:  C W Fuller; J Ekstrand; A Junge; T E Andersen; R Bahr; J Dvorak; M Hägglund; P McCrory; W H Meeuwisse
Journal:  Br J Sports Med       Date:  2006-03       Impact factor: 13.800

2.  Prediction models need appropriate internal, internal-external, and external validation.

Authors:  Ewout W Steyerberg; Frank E Harrell
Journal:  J Clin Epidemiol       Date:  2015-04-18       Impact factor: 6.437

3.  Summative Report on Time Out of Play for Major and Minor League Baseball: An Analysis of 49,955 Injuries From 2011 Through 2016.

Authors:  Christopher L Camp; Joshua S Dines; Jelle P van der List; Stan Conte; Justin Conway; David W Altchek; Struan H Coleman; Andrew D Pearle
Journal:  Am J Sports Med       Date:  2018-04-09       Impact factor: 6.202

4.  Is There Any Association Between Foot Posture and Lower Limb-Related Injuries in Professional Male Basketball Players? A Cross-Sectional Study.

Authors:  Eva Lopezosa-Reca; Gabriel Gijon-Nogueron; Jose Miguel Morales-Asencio; Jose Antonio Cervera-Marin; Alejandro Luque-Suarez
Journal:  Clin J Sport Med       Date:  2020-01       Impact factor: 3.638

5.  Ankle sprain and postural sway in basketball players.

Authors:  J Leanderson; A Wykman; E Eriksson
Journal:  Knee Surg Sports Traumatol Arthrosc       Date:  1993       Impact factor: 4.342

6.  New Machine Learning Approach for Detection of Injury Risk Factors in Young Team Sport Athletes.

Authors:  Susanne Jauhiainen; Jukka-Pekka Kauppi; Mari Leppänen; Kati Pasanen; Jari Parkkari; Tommi Vasankari; Pekka Kannus; Sami Äyrämö
Journal:  Int J Sports Med       Date:  2020-09-13       Impact factor: 3.118

7.  Machine Learning and Primary Total Knee Arthroplasty: Patient Forecasting for a Patient-Specific Payment Model.

Authors:  Sergio M Navarro; Eric Y Wang; Heather S Haeberle; Michael A Mont; Viktor E Krebs; Brendan M Patterson; Prem N Ramkumar
Journal:  J Arthroplasty       Date:  2018-09-05       Impact factor: 4.757

8.  Development and Validation of a Machine Learning Algorithm After Primary Total Hip Arthroplasty: Applications to Length of Stay and Payment Models.

Authors:  Prem N Ramkumar; Sergio M Navarro; Heather S Haeberle; Jaret M Karnuta; Michael A Mont; Joseph P Iannotti; Brendan M Patterson; Viktor E Krebs
Journal:  J Arthroplasty       Date:  2018-12-27       Impact factor: 4.757

Review 9.  Prognosis Research Strategy (PROGRESS) 3: prognostic model research.

Authors:  Ewout W Steyerberg; Karel G M Moons; Danielle A van der Windt; Jill A Hayden; Pablo Perel; Sara Schroter; Richard D Riley; Harry Hemingway; Douglas G Altman
Journal:  PLoS Med       Date:  2013-02-05       Impact factor: 11.069

10.  Epidemiology and Impact on Performance of Lower Extremity Stress Injuries in Professional Basketball Players.

Authors:  Moin Khan; Kim Madden; M Tyrrell Burrus; Joseph P Rogowski; Jeff Stotts; Marisa J Samani; Robby Sikka; Asheesh Bedi
Journal:  Sports Health       Date:  2017-11-06       Impact factor: 3.843

