Haji Sami Ullah, Rao Arsalan Khushnood, Furqan Farooq, Junaid Ahmad, Nikolai Ivanovich Vatin, Dina Yehia Zakaria Ewais.
Abstract
The entrainment and distribution of air voids in the concrete matrix is a complex process that makes the mechanical properties of lightweight foamed concrete (LFC) highly unpredictable. Studying the complex nature of aerated concrete requires a reliable and robust prediction model employing different machine learning (ML) techniques. This study aims to predict the compressive strength of LFC by using a support vector machine (SVM) as an individual learner along with bagging, boosting, and random forest (RF) as modified ensemble learners. For that purpose, a database of 191 data points was collected from the published literature, where the mix design ingredients, i.e., cement content, sand content, water-to-cement ratio, and foam volume, were chosen to predict the compressive strength of LFC. The 10-fold cross-validation method and different statistical error and regression tools, i.e., mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2), were used to evaluate the performance of the developed ML models. The modified ensemble learner (RF) outperforms all models, yielding a strong correlation of R2 = 0.96 along with the lowest statistical error values of MAE = 1.84 MPa and RMSE = 2.52 MPa. Overall, the results suggest that ensemble learners significantly enhance the performance and robustness of ML models.
Keywords: artificial intelligence; compressive strength; ensemble learners; foamed concrete; machine learning; sustainable concrete
Year: 2022 PMID: 35591498 PMCID: PMC9102231 DOI: 10.3390/ma15093166
Source DB: PubMed Journal: Materials (Basel) ISSN: 1996-1944 Impact factor: 3.748
Statistics of collected data.
| Variable | Unit | Role | Minimum | Maximum | Average | Standard Deviation |
|---|---|---|---|---|---|---|
| Cement content | kg/m³ | Input | 292.2 | 992.8 | 661.578 | 174.62 |
| Sand content | kg/m³ | Input | 0 | 1355 | 699.622 | 233.629 |
| Water/cement ratio | - | Input | 0.3 | 0.84 | 0.42623 | 0.10244 |
| Foam volume | dm³/m³ | Input | 47 | 690 | 245.431 | 121.496 |
| Compressive strength | MPa | Output | 1.09 | 48.88 | 23.9598 | 13.5282 |
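Summary statistics of this kind can be reproduced from a raw data matrix with pandas. The frame below is a synthetic placeholder for the 191 collected mixes (column names and values are invented), so its numbers will not match the table; only the aggregation pattern is illustrated.

```python
import numpy as np
import pandas as pd

# Placeholder frame standing in for the 191-row literature database.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "cement": rng.uniform(292.2, 992.8, 191),
    "sand": rng.uniform(0.0, 1355.0, 191),
    "w/c": rng.uniform(0.30, 0.84, 191),
    "foam": rng.uniform(47.0, 690.0, 191),
    "strength": rng.uniform(1.09, 48.88, 191),
})

# One row per variable: minimum, maximum, average, standard deviation.
stats = df.agg(["min", "max", "mean", "std"]).T
print(stats.round(3))
```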
Figure 1. Distribution histogram of the collected data.
Pearson correlation matrix for mix design parameters.
| | Cement | Sand | w/c | Foam | Compressive Strength |
|---|---|---|---|---|---|
| Cement | 1 | ||||
| Sand | 0.026 | 1 | |||
| w/c | −0.576 | −0.285 | 1 | ||
| Foam | −0.770 | −0.485 | 0.388 | 1 | |
| Compressive Strength | 0.777 | 0.402 | −0.631 | −0.748 | 1 |
Figure 2. Contour maps of input variables against compressive strength: (a) cement content; (b) sand content; (c) w/c ratio; (d) foam volume.
Summary of machine-learning algorithms applied by researchers.
| Sr. No | Machine Learning Method | Abbreviation | Data Set | Prediction Property | Year | Waste Materials | References |
|---|---|---|---|---|---|---|---|
| 1. | Gene expression programming | GEP | 298 | Compressive strength | 2021 | FA | […] |
| 2. | Support vector machine | SVM | 15 | Compressive strength | 2021 | Normal concrete | […] |
| 3. | Individuals with ensemble modeling | ANN, bagging and boosting | 1030 | Compressive strength | 2021 | FA | […] |
| 4. | Data envelopment analysis | DEA | 114 | Compressive strength, slump test | 2021 | FA | […] |
| 5. | Gene expression programming | GEP | 160 | Post-fire behavior | 2020 | GGBFS | […] |
| 6. | Gene expression programming | GEP | 351 | Compressive strength | 2020 | GGBFS | […] |
| 7. | Multivariate | MV | 21 | Compressive strength | 2020 | Crumb rubber with SF | […] |
| 8. | Support vector machine | SVM-ANFIS | 120 | Deflection | 2020 | RC beam | […] |
| 9. | Conventional artificial neural network | C-ANN | 220 | Compressive strength | 2020 | Foamed concrete | […] |
| 10. | Gene expression programming | GEP | 357 | Compressive strength | 2020 | Superplasticizers | […] |
| 11. | Adaptive neuro-fuzzy inference system | ANFIS with ANN | 7 | Compressive strength | 2020 | POFA | […] |
| 12. | Gene expression programming and random forest | GEP and RF | 357 | Compressive strength | 2020 | - | […] |
| 13. | Gene expression programming | GEP | 277 | Axial capacity | 2020 | - | […] |
| 14. | Support vector machine | SVM | - | Compressive strength | 2020 | FA | […] |
| 15. | Support vector machine | SVM | 115 | Slump test | 2020 | FA | […] |
| 16. | Ensemble models | RT, RF, GBRT, ensemble GBRT | 126 | Unconfined compressive strength | 2019 | Cemented paste backfill | […] |
| 17. | Artificial neural network | ANN | 264 | Thermal properties | 2019 | Silica fume | […] |
| 18. | Random forest | RF | 131 | Compressive strength | 2019 | FA | […] |
| 19. | Artificial neural network | ANN | 205 | Compressive strength | 2019 | FA | […] |
| 20. | Intelligent rule-based enhanced multiclass support vector machine and fuzzy rules | IREMSVM-FR with … | 114 | Compressive strength | 2019 | FA | […] |
| 21. | Adaptive neuro-fuzzy inference system | ANFIS | 55 | Compressive strength | 2018 | - | […] |
| 22. | Multivariate adaptive regression spline | M5 | 114 | Compressive strength | 2018 | FA | […] |
| 23. | Random kitchen sink algorithm | RKSA | 40 | V-funnel test | 2018 | FA | […] |
| 24. | Artificial neural network | ANN | 69 | Compressive strength | 2017 | FA | […] |
| 25. | Artificial neural network | ANN | 114 | Compressive strength | 2017 | FA | […] |
| 26. | Support vector machine | SVM | 288 | Compressive strength | 2017 | Blast furnace slag and waste tire rubber powder | […] |
| 27. | Artificial neural network | ANN | 169 | Compressive strength | 2016 | FA | […] |
| 28. | Biogeographical-based programming | BBP | 413 | Elastic modulus | 2016 | SF | […] |
| 29. | Artificial neural network and multiple linear regression | ANN and MLR | 1288 | Compressive strength | 2015 | Clinker mortar | […] |
| 30. | Gene expression programming | GEP | 168 | Tensile strength | 2012 | Normal concrete | […] |
| 31. | Artificial neural network | ANN | 80 | Compressive strength | 2011 | FA | […] |
| 32. | Artificial neural network | ANN | 300 | Compressive strength | 2009 | FA | […] |
Figure 3. Ensemble models with varying numbers of ensemble estimators: (a) SVR-bagging; (b) SVR-boosting; (c) random forest.
Analysis method for optimum sub-models.
| Approach | Ensemble Method | ML Technique | Ensemble Models | Optimum Estimator | R-Value |
|---|---|---|---|---|---|
| Individual | - | Support vector regression | - | - | 0.88 |
| Ensemble learner | Bagging | SVR-Bagging | (1, 2, 3, …, 20) | 9 | 0.98 |
| Ensemble learner | Boosting | SVR-Boosting | (1, 2, 3, …, 20) | 5 | 0.95 |
| Modified ensemble | - | Random Forest | (1, 2, 3, …, 20) | 2 | 0.98 |
Figure 4. (a) SVR relation and (b) error distribution; (c) SVR-Bagging relation and (d) error distribution; (e) SVR-AdaBoost (boosting) relation and (f) error distribution between experimental and predicted values.
Statistical evaluation of different ML modeling approaches.
| ML Technique | Approach | MAE (MPa) | RMSE (MPa) | R2 |
|---|---|---|---|---|
| Support vector regression | Individual | 4.96 | 6.68 | 0.78 |
| SVR-Bagging | Ensemble learner | 2.05 | 2.54 | 0.96 |
| SVR-Boosting | Ensemble learner | 2.72 | 4.12 | 0.91 |
| Random Forest | Modified ensemble learner | 1.84 | 2.52 | 0.96 |
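The three metrics in the table can be computed with scikit-learn's metric helpers. The experimental and predicted strengths below are invented for illustration and do not come from the study.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Illustrative experimental vs. predicted strengths (MPa), not the study's values.
y_true = np.array([10.2, 25.4, 33.1, 7.8, 41.0])
y_pred = np.array([11.0, 24.1, 35.0, 8.5, 39.2])

mae = mean_absolute_error(y_true, y_pred)            # mean absolute error
rmse = np.sqrt(mean_squared_error(y_true, y_pred))   # root mean square error
r2 = r2_score(y_true, y_pred)                        # coefficient of determination
print(f"MAE = {mae:.2f} MPa, RMSE = {rmse:.2f} MPa, R2 = {r2:.2f}")
```

Note that RMSE is never smaller than MAE on the same predictions, which is consistent with every row of the table above.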
Figure 5. Results of the random forest ML approach: (a) regression relation between experimental and predicted values; (b) distribution of prediction errors.
Figure 6. (a) Regression results (R2) of models with 10-fold cross-validation; (b) MAE statistical error results of models with 10-fold cross-validation; (c) RMSE statistical error results of models with 10-fold cross-validation.