Miao Zou1, Wu-Gui Jiang1, Qing-Hua Qin2, Yu-Cheng Liu1, Mao-Lin Li3.
Abstract
Determining the quality of Ti-6Al-4V parts fabricated by selective laser melting (SLM) remains a challenge due to the high cost of SLM and the need for expertise in processes and materials. To understand how the relative density of SLMed Ti-6Al-4V parts corresponds to the process parameters, an optimized extreme gradient boosting (XGBoost) decision tree model was developed in the present paper, with its hyperparameters tuned by the GridSearchCV method. In particular, the effect of the size of the dataset used for model training and testing on prediction accuracy was examined. The results show that the prediction accuracy of the proposed model decreases as the dataset shrinks, but the overall accuracy remains within a relatively high range and agrees well with the experimental results. On a small dataset, the prediction accuracy of the optimized XGBoost model was also compared with that of artificial neural network (ANN) and support vector regression (SVR) models; the optimized XGBoost model achieved better evaluation indicators, such as mean absolute error, root mean square error, and the coefficient of determination. In addition, the optimized XGBoost model can be easily extended to predicting the mechanical properties of other metal materials manufactured by SLM processes.
Keywords: Ti-6Al-4V; machine learning; optimized XGBoost method; selective laser melting; small dataset
Year: 2022 PMID: 35955237 PMCID: PMC9369844 DOI: 10.3390/ma15155298
Source DB: PubMed Journal: Materials (Basel) ISSN: 1996-1944 Impact factor: 3.748
Specific composition of Ti-6Al-4V ELI alloy powder.
| Element | Al | V | Fe | C | N | O | H | Ti | Others |
|---|---|---|---|---|---|---|---|---|---|
| wt. % | 5.50–6.50 | 3.50–4.50 | 0.25 | 0.08 | 0.03 | 0.13 | 0.0125 | Balance | 0.50 |
SLM process parameters and their ranges used to generate data.
| Process Parameters | Unit | Value |
|---|---|---|
| Laser scanning speed | mm/s | 800, 900, 1000, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 2100, 2200, 2300, 2400, 2500 |
| Laser power | W | 80, 90, 95, 100, 105, 110, 115, 120, 130, 135, 140, 145, 150, 155, 160, 165, 170, 175, 180 |
| Hatch distance | μm | 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100 |
| Powder layer thickness | μm | 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80 |
Figure 1Schematic diagram of the XGBoost regression tree model.
Figure 2Schematic diagram of ten-fold cross-validation.
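The ten-fold cross-validation scheme of Figure 2 partitions the training data into ten folds, each serving once as the validation set while the other nine are used for fitting. A minimal stdlib sketch of the standard index-splitting logic (not the paper's own code):

```python
def k_fold_indices(n_samples, k=10):
    """Yield (train, val) index lists; each sample appears in exactly one validation fold."""
    indices = list(range(n_samples))
    # Distribute any remainder across the first few folds so sizes differ by at most 1.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val

splits = list(k_fold_indices(100, k=10))
print(len(splits))        # 10 folds
print(len(splits[0][1]))  # 10 validation samples per fold
```

In grid search, each candidate hyperparameter setting is scored by averaging the validation metric over all ten folds before the best setting is refit on the full training set.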
Hyperparameter ranges for model fine-tuning.
| Item | Range of Values | Tolerance |
|---|---|---|
|  | 1–10 | 1 |
|  | 0.01–0.3 | 0.02 |
|  | 100–600 | 50 |
|  | 0–0.05 | 0.01 |
|  | 0–1 | 0.1 |
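A GridSearchCV-style tuning run enumerates every combination of the discretized ranges in the table above. The sketch below builds such a grid with the stdlib only; the hyperparameter names (`max_depth`, `learning_rate`, `n_estimators`, `gamma`, `subsample`) are assumptions chosen to match common XGBoost parameters, since the table's item labels did not survive extraction:

```python
import itertools

def frange(start, stop, step):
    """Float range from start in increments of step, not exceeding stop."""
    n = int((stop - start) / step + 1e-9)
    return [round(start + i * step, 10) for i in range(n + 1)]

# Parameter names are illustrative assumptions; ranges and steps are from the table.
param_grid = {
    "max_depth": list(range(1, 11)),            # 1-10, step 1
    "learning_rate": frange(0.01, 0.3, 0.02),   # 0.01-0.3, step 0.02
    "n_estimators": list(range(100, 601, 50)),  # 100-600, step 50
    "gamma": frange(0.0, 0.05, 0.01),           # 0-0.05, step 0.01
    "subsample": frange(0.0, 1.0, 0.1),         # 0-1, step 0.1
}

# Exhaustive Cartesian product of all candidate settings, as grid search evaluates.
grid = list(itertools.product(*param_grid.values()))
print(len(grid))
```

Each tuple in `grid` would be scored by ten-fold cross-validation, which is why keeping the per-parameter step sizes coarse matters: the grid size is the product of the individual range lengths.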
Figure 3Regression analysis on the training dataset and the unseen test dataset by the trained XGBoost model. (a) The experimental measurement and the numerical prediction of the relative density. The solid line y = x is the identity line for reference. (b) Distribution plot of relative error for the training dataset and the unseen test dataset.
Comparison of evaluation indicators of the proposed model trained on datasets of different sizes, evaluated on the unseen test dataset.
| Training Dataset (Set) | Test Dataset | MAE | RMSE | R² |
|---|---|---|---|---|
| 48,648 | 10,811 | 0.4768 | 0.6245 | 0.9699 |
| 27,027 | 6757 | 0.4815 | 0.6344 | 0.9696 |
| 16,216 | 4055 | 0.5194 | 0.7179 | 0.9643 |
| 8108 | 2028 | 0.6001 | 0.9917 | 0.9513 |
| 4324 | 1082 | 0.6871 | 1.1797 | 0.9428 |
| 2594 | 649 | 0.8011 | 1.7171 | 0.9184 |
| 2162 | 541 | 0.8889 | 2.1495 | 0.8930 |
| 1621 | 406 | 0.9870 | 2.2707 | 0.8840 |
| 486 | 122 | 1.5577 | 5.1405 | 0.7632 |
Figure 4The predicted relative errors from experimental values as a function of dataset size used in the optimized XGBoost model.
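The three evaluation indicators reported above have standard definitions: MAE is the mean absolute deviation, RMSE the root of the mean squared deviation, and R² one minus the ratio of residual to total sum of squares. A stdlib sketch on toy relative-density values (not the paper's data):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical measured vs. predicted relative densities (%), for illustration only.
y_true = [99.1, 98.7, 99.5, 97.9]
y_pred = [99.0, 98.9, 99.3, 98.2]
print(round(mae(y_true, y_pred), 4))  # 0.2
```

Note that RMSE penalizes large individual errors more heavily than MAE, which is why RMSE grows much faster than MAE in the table as the training set shrinks.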
Figure 5Regression analysis on the training dataset and the unseen test dataset by the trained ML models using a small dataset. The experimental measurements and the numerical predictions by (a) DNN, (b) SVR, and (c) the present model. Distribution plot of relative error for the training dataset and the unseen test dataset evaluated in (d) DNN, (e) SVR, and (f) the present optimized XGBoost model.
Comparison of prediction results of SVR, DNN, and optimized XGBoost models on the unseen test set.
| Test | SVR | DNN | Optimized XGBoost |
|---|---|---|---|
| MAE | 1.3344 | 0.8576 | 0.8011 |
| RMSE | 4.8646 | 1.7316 | 1.7171 |
| R² | 0.7687 | 0.7849 | 0.9184 |