Literature DB >> 35458270

Evaluation of the Strength of Slab-Column Connections with FRPs Using Machine Learning Algorithms.

Nermin M. Salem, Ahmed Deifalla.

Abstract

Slab-column connections with FRPs fail suddenly without warning. Machine learning (ML) models can model the behavior with high precision and reliability. Nineteen ML algorithms were examined and compared. The comparisons showed that the ensembled boosted tree model gave the best, most precise prediction, with the highest coefficient of determination (R2) (0.98), the lowest Root Mean Square Error (RMSE) (44.12 kN), and the lowest Mean Absolute Error (MAE) (35.95 kN). In terms of the measured-to-predicted strength ratio, the ensembled boosted model had an average of 0.99, a coefficient of variation of 12%, and a lower 95% limit of 0.97. Thus, it was found to be more accurate and consistent than all implemented machine learning models and the selected traditional models. In addition, the significance of the various parameters with respect to the predicted strength was identified, where the effective depth was the most significant, by a factor of 0.9, and the concrete compressive strength was the least significant, by a factor of 0.3.

Keywords:  FRP; slab-column connection; two-way shear

Year:  2022        PMID: 35458270      PMCID: PMC9032783          DOI: 10.3390/polym14081517

Source DB:  PubMed          Journal:  Polymers (Basel)        ISSN: 2073-4360            Impact factor:   4.967


1. Introduction

Vital infrastructures are at risk of demolition due to a lack of maintenance, severe environmental conditions leading to steel corrosion, and the sudden nature of slab-column connection failures. Thus, fiber-reinforced polymers (FRPs) are replacing steel to avoid steel-corrosion problems, owing to their excellent properties, which include, but are not limited to, being non-corrosive, having a high strength-to-weight ratio, and performing well under fatigue [1,2,3,4,5]. It is worth noting that, back in the 1990s, FRPs were used for strengthening structures, and they continue to be valuable in this field [6,7]. In addition, most of the existing design models for slab-column connections lack a physical sense because they are empirical or semi-empirical [4]. On the other hand, machine learning (ML) models can capture the behavior with a high level of precision and consistency [8,9,10]. Because punching shear failure exhibits sophisticated, complex behavior, innovative data-driven models are essential for improving prediction accuracy [11,12,13,14,15]. Over the last few decades, ML has shown significant improvements in various fields [16,17,18,19], including structural engineering [20,21,22,23]. Some studies have tackled slab-column connections with FRP reinforcements. In [24], Jumaa and Yousif examined three ML prediction models, Nonlinear Regression analysis (NLR), an Artificial Neural Network (ANN), and Gene Expression Programming (GEP), to predict the punching shear failure of FRP-reinforced slabs. The models were trained on a dataset composed of 269 records. The results showed that the ANN model outperformed the other two models in prediction accuracy. In [20], two models were presented, one based on an ANN and one based on an SVM; both were trained using a dataset composed of 82 records.
In [25], Metwally employed a Levenberg–Marquardt Artificial Neural Network (LM-ANN) to predict the punching shear strength of concrete slabs with various types of FRPs. His model was trained on a small dataset of 59 records and showed promising results when compared against the experimental data. All previously examined ML models for the punching shear of FRP-reinforced slabs worked with a relatively small dataset. In our study, a comprehensive investigation was performed for five main families of ML algorithms, and all of our models were trained with a relatively large dataset composed of 189 records. The dataset used for predicting the punching shear of FRP slabs was divided into three subsets: training, validation, and testing. In addition, the effect of all input parameters on the prediction was examined, and all of our models were compared with each other with respect to model efficiency and prediction accuracy. Several slab-column connection design models are shown in Table 1. The models were selected to represent well-known simple design models, including, but not limited to, well-established design codes, guides, and recently developed models, covering both empirical and semi-empirical designs. However, these models were developed using an old experimental database and thus lack the consistency needed for reliable strength prediction. In addition, these models vary in the variables considered and their patterns. Thus, there is a need for a machine learning model capable of accurately predicting the strength and identifying the inter-relationships between variables. The selected models in Table 1 are used for comparison with the proposed model as evidence of its accuracy and consistency.
Table 1

Selected slab-column strength models.

Design Model
JSCE [26]: V = β_d β_ρ β_r f_pcd b_{0.5d} d
β_d = (1000/d)^{1/4} ≤ 1.5, β_ρ = (100ρE/E_s)^{1/3} ≤ 1.5, β_r = 1 + 1/(1 + 0.25 b_{0.5d}/d), f_pcd = 0.2√f'_c ≤ 1.2,
b_{0.5d} = 4(c + d)
CSA [27]: V = b_{0.5d} d · min{ 0.028(1 + 2/β_c)(E ρ f'_c)^{1/3}; 0.147(0.19 + α_s d/b_{0.5d})(E ρ f'_c)^{1/3}; 0.056(E ρ f'_c)^{1/3} }
β_c = 1; α_s = 4, 3, 3 for an inner, edge, and corner connection, respectively; b_{0.5d} = 4(c + d)
ACI [28]: V = 0.8 √f'_c k d b_{0.5d}
k = √(2ρn + (ρn)²) − ρn, n = E/E_c, E_c = 4750√f'_c, b_{0.5d} = 4(c + d)
Hemzah [29]: V = (1/3) √f'_c k (90/f'_c)^{0.3} (35ρ)^{0.39} (E/E_s)^{0.3} b_{0.5d} d
k = 0.77 and 0.55 for circular and rectangular columns, respectively
Ju [30]: V = 2.3 [100ρ (E/E_s) f'_c]^{1/2} (d/b_{0.5d})^{1/2} b_{0.5d} d
b_{0.5d} = 4(c + d)

2. Analysis of the Dataset

An extensive dataset of 189 records representing the test results of slab-column connections with FRP reinforcements tested under punching shear was gathered from 37 research investigations; it will be referred to hereinafter as the dataset, as shown in Figure 1 and Table 2, where CFRP is carbon FRP, GFRP is glass FRP, N is the number of tested specimens, V is the punching shear failure load, E is Young's modulus, d is the effective depth, f'c is the concrete compressive strength, ρ is the flexure reinforcement ratio, b and c are the column dimensions, and A and B are the slab dimensions. In Figure 1, the dataset is close to normally distributed with respect to all variables. In Figure 1 and Table 2, the dataset covers a range with respect to all significant variables, including, but not limited to, the following:
Figure 1

Frequency distribution of the different inputs.

Table 2

Experimental database for FRP-reinforced concrete slab-column connections.

Reference | n | A (mm) | B (mm) | b (mm) | c (mm) | d (mm) | f'c (MPa) | ρ (%) | E (GPa) | V (kN) | Type
Ahmed et al. (1993) [31] 469069075–100756136–450.9511378–99CFRP
Banthania et al. (1995) [32]36006001001005541–530.3110061–72CFRP
Bank and Xi (1995) [33]61800150025025076301.49–2.05143–156179–201CFRP
Louka (1999) [34]123000180057522517543–55139–160500–1183GFRP, and CFRP
Matthys and Tarewe (2000) [35]131000100080–23080–23095–12632–1180.19–1.2237–149142–347CFRP and GFRP
Rahman et al. (2000) [36]520002500250150162420.2885534–698GFRP
Hassan et al. (2000) [37]318003000575225165590.571471000–1328CFRP
Khanna et al. (2000) [38]120004000500250138352.442756GFRP
El–Ghandour et al. (2003) [39]52000200020020014229–470.18–0.4745–110170–317GFRP and CFRP
Ospina et al. (2003) [40]32150215025025012029.5–37.50.73–1.4628–34206–260GFRP
Zaghloul and Razapur (2003) [41]11760176025025075451100234CFRP and GFRP
Hussien et al. (2004) [42]41830183025025010026–401.05–1.6742210–249GFRP
Jacobson et al. (2005) [43]52000–2300200063525017527.60.95–0.9833537–897GFRP
El–Gamal et al. (2005) [44]53000250060025015944–49.60.35–1.9938–122674–799GFRP and CFRP
Zhang et al. (2005) [45]21830183025025010035–711.05–1.1842218–275GFRP
Zhang (2006) [46]71900190025025010025–980.36–0.75120251–446CFRP
Tom (2007) [47]619001900250250110701–1.541282–487GFRP
Zaghloul (2007) [48]717601000250250120250.94–1.4810097–211CFRP
El–Gamal et al. (2007) [49]23000250060025015644.11.244.5707–735GFRP
Ramzy et al. (2007) [50]42000200020020082–11233–400.81–1.5446165–230GFRP
Zaghloul et al. (2008) [51]41760176020020082–11233–400.81–2.1446165–230GFRP
Lee et al. (2009) [52]42300230022522511036.31.17–348.2222–330GFRP
Zhu (2010) [53]71500150015015013522–420.29–0.55100145–275BFRP
Min (2010) [54]730030025254547.8–1790.7876–23039–98GFRP and CFRP
Bouguerra et al. (2011) [55]730002500600250110–15535–650.70–1.2043362–732GFRP
Zhu et al. (2012) [56]51500150015015013022–450.29–0.5545.6167–252GFRP
Nguyen–Minh et al. (2013) [57]32200220020020013048.80.48–0.9248180GFRP
Hassan et al. (2013) [58]1925002500300300131–28432–750.30–1.6148–57329–1248GFRP
El-Gendy et al. (2015) [59]628001500300300160410.85–1.7060.5159–277GFRP
Tharmarajah et al. (2015) [60]4142550050025117–11965–690.654–67.4295–365GFRP
Mostafa et al. (2016) [61]32600145030030016080–850.87–1.7060.5–69.3251–288GFRP
ELGABBAS (2016) [62]63000200060025016042–480.40–1.2069.3436–716BFRP
Gouda and El–Salakawy (2016) [63]42600260030030016038–700.65–1.3065–69363–719GFRP
Oskouei et al. (2017) [64]1800800250250176590.768719GFRP
Hussein and El–Salakawy (2018) [65]32800280030030016080–870.98–1.9365461–604GFRP
Hemzah et al. (2019) [29]86006001001008046–600.3–0.9014457–129CFRP
Huang et al. (2020) [66]11600160020020012524.970.89123262CFRP
Mean | — | 1961 | 1736 | 301 | 212 | 131 | 46 | 0.94 | 80 | 416 | —
Minimum | — | 300 | 300 | 25 | 25 | 45 | 22 | 0.18 | 28 | 39 | —
Maximum | — | 3000 | 4000 | 635 | 300 | 284 | 179 | 3.76 | 230 | 1600 | —
Slab dimensions vary from 300 to 4000 mm. The effective depth varies from 45 to 284 mm, with very few specimens above 200 mm, which is not common for a flat slab; however, this could be because of lab testing facility limitations. Concrete compressive strength varies from 22 MPa (conventional normal concrete) to 179 MPa (ultra-high-performance concrete), with very few specimens above 50 MPa. Thus, there is a need for more testing of high-strength and ultra-high-strength concrete. The flexure reinforcement ratio varies from 0.18% to 3.26%, which is a wide range. Young's modulus varies from 28 to 230 GPa, with the majority of values between 40 and 60 GPa. However, the FRP industry is evolving, with new products offering much higher Young's modulus values; thus, more testing of FRP reinforcements with a Young's modulus up to the maximum values offered by the market is needed. The shear-span-to-depth ratio varies between 1.8 and 11. The loading area dimensions vary from 25 to 635 mm. FRP reinforcements have different diameters and configurations, where the diameter varies from 6 to 24 mm and the configurations are bars or grids. However, the flexure reinforcement ratio and Young's modulus were implemented to account for the influence of FRP material type, diameter, and configuration.

3. Machine Learning Methods

Before the implementation phase of the five ML models, the dataset was divided into a training set containing 80% of the records, with a 15% hold-out validation split, and a testing set containing the remaining 20%, as is recommended in order to determine the best-fit model. The testing dataset was not used in the training phase. The models were trained with five input variables: the column dimension c, the effective depth d, the concrete compressive strength f'c, the flexure reinforcement ratio ρ, and Young's modulus E. The output of our models is the predicted slab-column strength with FRPs, V. The evaluation of the five models was performed on the testing set.
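As a minimal sketch of this splitting scheme (the helper is hypothetical; the 80%/15%/20% fractions follow the text, while the record type and seed are illustrative):

```python
import random

def split_dataset(records, train_frac=0.80, val_frac=0.15, seed=0):
    """Shuffle the records and carve out training, validation, and testing subsets.

    The validation slice is held out from the training portion, mirroring the
    80% train / 15% hold-out validation / 20% test scheme described above.
    """
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    train_full, test = shuffled[:n_train], shuffled[n_train:]
    n_val = int(len(train_full) * val_frac)
    val, train = train_full[:n_val], train_full[n_val:]
    return train, val, test

# 189 records, as in the gathered dataset.
train, val, test = split_dataset(range(189))
```

With 189 records, this yields 129 training, 22 validation, and 38 testing records.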

3.1. Linear Regression Model

This model performs a linear least-squares fit that interprets the relationship between the output and the influencing inputs. The key assumption is that the output is linear in the coefficient parameters. Different types of linear regression were examined, including normal, interaction, robust, and stepwise.
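As an illustration of the underlying least-squares idea, a one-predictor ordinary least-squares fit can be sketched in Python (the paper used MATLAB's regression tools; this standalone helper is ours):

```python
def linear_fit(xs, ys):
    """Ordinary least squares for a single predictor: y ≈ a + b*x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: covariance of (x, y) divided by the variance of x.
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b
```

The interaction, robust, and stepwise variants extend this same fit with cross-terms, outlier-resistant loss functions, and automatic term selection, respectively.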

3.2. Regression Decision Tree

Decision trees subdivide the data into a practical tree representation using simple rules [67]. These rules are set through the decision tree, and the response prediction is performed in an iterative segmentation manner. The tree is composed of a root, branches, and leaves. Training starts from the top-most root of the tree. At every node, a conditional test is performed in order to draw a path along the tree branches. Various judging conditions can be applied to assess the split at each node, such as the Mean Square Error (MSE). The output of the tree is the prediction stored in the leaves, at the end of each path. Various types of regression trees were examined, including fine, medium, and coarse trees. They all follow the same prediction concept; however, they differ in the minimum size allowed for a leaf.
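The split search described above can be sketched as follows for a single feature (an illustrative helper using the sum of squared errors as the judging condition):

```python
def sse(ys):
    """Sum of squared errors of ys around their mean (the split criterion)."""
    if not ys:
        return 0.0
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def best_split(xs, ys):
    """Return the threshold on one feature that minimizes the total SSE
    of the two resulting branches."""
    best_score, best_t = float("inf"), None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = sse(left) + sse(right)
        if score < best_score:
            best_score, best_t = score, t
    return best_t
```

A full tree repeats this search recursively on each branch until the minimum leaf size is reached, which is exactly the knob that distinguishes fine, medium, and coarse trees.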

3.3. Ensemble Trees

The ensemble method was introduced in [68] as a group of separate, weak models that together provide a powerful mathematical prediction. These ensembles can combine similar or dissimilar prediction algorithms. There are two types of ensemble trees: bagged and boosted trees. Bagged trees create many models by fitting a single tree to various bootstrap samples and then merging them into one predictor by averaging. Boosted trees work as a two-step technique: in the first step, a subset of the data is used to obtain a sequence of weak working models; in the second step, the performance is boosted by joining the models with each other using a fixed cost function. The boosted algorithm relies on an iterative approach, which means that the parameters in the next step are updated using the residual computed in the previous step in order to optimize the objective function, defined as

J(θ) = Σ_{i=1}^{n} L(y_i, ŷ_i) + Σ_{k=1}^{m} Ω(f_k),  Ω(f) = γT + (1/2)λ‖w‖²,

where θ denotes the parameters trained from the given data; J(θ) is the objective function; L is the training loss function, computed by comparing the predicted output ŷ_i with the real output y_i to evaluate the prediction accuracy of the model; Ω is the regularization term, added to prevent model over-fitting by controlling the complexity of the algorithm; n and m are the numbers of predictions and trees, respectively; f_k is the individual tree prediction function, which evaluates the output in the functional space F of all regression trees; γ and λ are the regularization parameters, also used to control the complexity of the boosted algorithm; T is the number of tree leaf nodes; and w is the vector of leaf-node weights. The prediction of the boosted algorithm is computed from the prediction of each individual tree, defined as

ŷ_i = Σ_{k=1}^{m} f_k(x_i),  f_k ∈ F,

where x_i is the input variable. The boosted algorithm constructs a tree and splits a leaf node into two sub-trees/branches, left and right.
Afterwards, the gain is computed at each leaf node for the determination of the best node. The optimal branch is selected once the gain after splitting reaches its maximum value:

Gain = (1/2) [ G_L²/(H_L + λ) + G_R²/(H_R + λ) − (G_L + G_R)²/(H_L + H_R + λ) ] − γ,

where the first two terms are the scores of the new left and right branches, respectively, and the third term is the score of the original, unsplit branch (G and H denote the first- and second-order gradient sums of the samples in each branch). The boosted algorithm produced the best-fit model for the prediction of the FRP capacity with respect to the available dataset.
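A minimal sketch of the boosting idea, assuming one-split stumps fitted iteratively to residuals with a shrinkage (learning-rate) factor; the helper names and hyper-parameters are illustrative, not those of the MATLAB ensemble:

```python
def fit_stump(xs, ys):
    """Fit a one-split regression stump; returns (threshold, left_mean, right_mean)."""
    best = None
    for t in sorted(set(xs))[:-1]:  # exclude the max so the right branch is non-empty
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    return best[1:]

def boost(xs, ys, rounds=50, lr=0.1):
    """Iteratively fit stumps to the residuals of the previous step;
    the prediction is the shrunken sum of all stumps."""
    base = sum(ys) / len(ys)
    residuals = [y - base for y in ys]
    stumps = []
    for _ in range(rounds):
        t, lm, rm = fit_stump(xs, residuals)
        stumps.append((t, lm, rm))
        residuals = [r - lr * (lm if x <= t else rm)
                     for x, r in zip(xs, residuals)]
    def predict(x):
        return base + sum(lr * (lm if x <= t else rm) for t, lm, rm in stumps)
    return predict

pred = boost([1, 2, 10, 11], [1.0, 1.2, 5.0, 5.4])
```

Each round shrinks the residuals of the previous round, which is the iterative residual-update idea described above.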

3.4. Support Vector Machine (SVM)

The concept of the SVM was first introduced in [69]. It is a method that uses kernel functions to transform the data into a high-dimensional feature space in which a linear model can capture a sophisticated nonlinear relationship. The main concept underlying the SVM is linear regression in that feature space, where the input data are mapped using a nonlinear function. The training set for the regression process can be defined as {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}, where x_i and y_i are the input vector and the output value, respectively, and n is the sample size. The aim is to find a function f(x) that deviates from the real output by at most ε for all samples in the training set. f(x) is assumed to be a linear regression function, defined as

f(x) = ⟨w, x⟩ + b,

where the norm of the vector w is minimized to make the function as flat as possible. This minimization is achieved by computing min (1/2)‖w‖² subject to |y_i − ⟨w, x_i⟩ − b| ≤ ε. Some points in the training data may not satisfy this constraint condition. In this case, slack variables ξ_i and ξ_i* are introduced so that such samples can be handled; they measure the deviation of training points lying outside the supported ε-tube. Therefore, the SVM is computed by minimizing

(1/2)‖w‖² + C Σ_{i=1}^{n} (ξ_i + ξ_i*),

where C > 0 is the regularization parameter responsible for the trade-off between the flatness of the function and the tolerated error. The final estimation function of the SVM is

f(x) = Σ_{i=1}^{n} (α_i − α_i*) K(x_i, x) + b,

where α_i and α_i* are the Lagrange multipliers, K(·, ·) is the kernel function, and b is the bias term. Many SVM approaches were examined, including linear, quadratic, cubic, fine, medium, and coarse Gaussian SVMs. They all follow the same concept but use different kernel functions.
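The ε-insensitive idea can be sketched with a linear SVR trained by subgradient descent; this is a simplified stand-in for the kernelized solvers above, with illustrative hyper-parameters:

```python
def train_linear_svr(xs, ys, C=10.0, eps=0.1, lr=0.01, epochs=500):
    """Subgradient descent on the primal objective
    0.5*w**2 + C * sum(max(0, |y - (w*x + b)| - eps))."""
    w = b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            error = y - (w * x + b)
            if abs(error) > eps:          # outside the eps-tube: hinge-loss gradient
                sign = 1.0 if error > 0 else -1.0
                w -= lr * (w - C * sign * x)
                b -= lr * (-C * sign)
            else:                         # inside the tube: only weight decay
                w -= lr * w
    return w, b

# Fit the line y = 2x from four noise-free samples.
w, b = train_linear_svr([0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0])
```

Points inside the ε-tube contribute no loss, which is exactly the role of the slack variables in the formulation above.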

3.5. Gaussian Process Regression

Gaussian Process Regression (GPR) was introduced in [64]. It is a sophisticated model capable of solving complex ML problems. The power of such a model lies in its flexible, non-parametric nature. GPR models are able to infer the smoothness and noise parameters from the training data. The models find a stochastic process in which the random variables are assumed to follow a Gaussian distribution. GPR models are non-parametric, kernel-based, probabilistic supervised learning models used for generalizing a complex, nonlinear function mapping hidden in the training datasets. The power of GPR methods is based on the use of kernel functions, which improve efficiency when handling nonlinear data. GPR models provide a reliable response to the input data. These models assume that the output is computed as

y_i = f(x_i) + ε_i,

where ε_i is the noise term of sample i. For a given training set, the main goal is to predict the output value y* of a new input pattern x*. In order to establish this goal, it is essential to construct three covariance matrices: K(X, X) between the training inputs, K(X, x*) between the training inputs and the new input, and k(x*, x*) for the new input itself, where K(·, ·) is the covariance function, which maps the relation between one output and another. Many GPR methods were examined, including the squared exponential GPR, the Matérn 5/2 GPR, the exponential GPR, and the rational quadratic GPR.
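A compact sketch of the GPR posterior mean under a squared exponential kernel (a pure-Python linear solve; the helper names and length-scale are illustrative):

```python
import math

def rbf(u, v, length=1.0):
    """Squared exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-((u - v) ** 2) / (2.0 * length ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gpr_mean(train_x, train_y, test_x, noise=1e-6):
    """Posterior mean k_*^T (K + noise*I)^{-1} y of a zero-mean GP."""
    n = len(train_x)
    K = [[rbf(train_x[i], train_x[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, list(train_y))
    return sum(alpha[i] * rbf(train_x[i], test_x) for i in range(n))
```

With a small noise term, the posterior mean interpolates the training targets, which illustrates why GPR tracked the punching shear data so closely.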

4. Results and Discussion

To develop the ML-based models, a grid search with a 15-fold cross-validation approach was used in the training phase to determine the optimal hyper-parameters. In order to evaluate the effectiveness of our models, the following statistical measures were reported: the coefficient of determination (R2), defined in Equation (14); the Root Mean Square Error (RMSE), measured in kN and defined in Equation (15); the Mean Absolute Error (MAE), measured in kN and defined in Equation (16); and the model's training time, measured in seconds. The models were trained on an Intel Core i5 with 8 GB of RAM using the MATLAB 2021a Machine Learning toolbox. The R2, RMSE, and MAE values calculated for each model are shown in Table 3 and discussed in this section.
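The three reported measures can be written out explicitly in plain Python (stated here in their common textbook form; the paper's Equations (14)–(16) are not reproduced in this record):

```python
import math

def r2(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_actual = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_actual) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

def rmse(actual, predicted):
    """Root Mean Square Error (same units as the response, kN here)."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def mae(actual, predicted):
    """Mean Absolute Error (kN)."""
    n = len(actual)
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / n
```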
Table 3

Statistical measures and training times for all implemented ML models.

Model | R2 (Train) | R2 (Test) | RMSE Train (kN) | RMSE Test (kN) | MAE Train (kN) | MAE Test (kN) | Training Time (s)
Linear
Normal | 0.87 | 0.65 | 107.36 | 264.221 | 86.571 | 145.674 | 1.4376
Interaction | 0.88 | 0.66 | 101.99 | 258.013 | 73.766 | 144.435 | 1.1043
Robust | 0.90 | 0.63 | 95.409 | 240.48 | 76.542 | 116.275 | 0.97814
Stepwise | 0.95 | 0.64 | 69.438 | 266.688 | 55.976 | 153.129 | 3.4838
Tree
Fine | 0.94 | 0.84 | 75.741 | 85.3446 | 52.98 | 60.8346 | 0.76536
Medium | 0.93 | 0.44 | 78.387 | 357.974 | 57.683 | 216.808 | 0.63219
Coarse | 0.82 | 0.63 | 128.12 | 214.858 | 112.3 | 117.108 | 0.50711
Support Vector Machine
Linear | 0.89 | 0.63 | 99.258 | 249.615 | 78.551 | 172.066 | 0.36803
Quadratic | 0.88 | 0.71 | 104.89 | 214.858 | 62.374 | 109.986 | 1.7385
Cubic | 0.77 | 0.49 | 143.89 | 341.117 | 97.009 | 219.475 | 1.6429
Fine Gaussian | 0.79 | 0.59 | 137.11 | 250.681 | 102.54 | 169.551 | 1.5262
Medium Gaussian | 0.96 | 0.69 | 57.815 | 236.587 | 46.092 | 109.372 | 1.4165
Coarse Gaussian | 0.89 | 0.61 | 98.455 | 245.066 | 77.313 | 116.613 | 1.3137
Ensembled Trees
Boosted | 0.98 | 0.97 | 44.12 | 71.963 | 35.95 | 43.452 | 1.1991
Bagged | 0.93 | 0.87 | 76.359 | 113.902 | 59.326 | 63.891 | 2.8842
Gaussian Process Regression
Squared Exponential | 0.95 | 0.93 | 68.981 | 150.097 | 53.354 | 77.068 | 0.95702
Matérn 5/2 | 0.95 | 0.91 | 67.181 | 112.574 | 49.368 | 65.639 | 2.2757
Exponential | 0.96 | 0.93 | 60.245 | 67.839 | 43.267 | 43.738 | 2.0637
Rational Quadratic | 0.95 | 0.91 | 65.886 | 91.372 | 48.302 | 58.331 | 1.7053

4.1. Linear Regression

The stepwise model had the highest R2 and the lowest RMSE and MAE in training, with values of 0.95, 69.438, and 55.976, respectively. However, the same model produced the worst values in testing, reporting the lowest R2 and the highest RMSE and MAE among the linear regression models, with values of 0.64, 266.688, and 153.129, respectively. This could be because the testing set was not used in training and thus provided value ranges for the input patterns that differed from those used in training. This means that this model was not able to produce accurate predictions.

4.2. Tree

Three different approaches were employed for solving the slab-column strength problem. The fine tree method performed best among the other tree methods in training and testing. It reported the highest R2 and the lowest RMSE and MAE, with values of 0.94, 75.741, and 52.98, respectively, in training and 0.84, 85.3446, and 60.8346, respectively, in testing. It also consumed the most training time among the different tree approaches. The fine tree method could be used for prediction; however, it is not the optimal solution for our problem. The worst tree method in training was the coarse tree, while the medium tree reported the worst values in testing. This means that these tree methods may not be reliable enough for solving such a problem.

4.3. Support Vector Machine

Several SVM methods were examined for the prediction of the punching shear. The medium Gaussian SVM performed the best among the SVM methods in training, reporting the highest R2 and the lowest RMSE and MAE, with values of 0.96, 57.815, and 46.092, respectively. It was not able to reproduce these good results in testing, reporting 0.69, 236.587, and 109.372, respectively. This means that this method may not be able to produce accurate predictions for our problem.

4.4. Ensembled Trees

The boosted model reported the most optimal values of R2, RMSE, and MAE in both the training and testing phases. It reported values of 0.98, 44.12, and 35.95, respectively, in training and 0.97, 71.963, and 43.452, respectively, in testing. This means that it is the most powerful method, capable of capturing the changes in the input parameter patterns and providing accurate predictions of the punching shear. The bagged methodology also gave good results in both training and testing, but it was not optimal and consumed more training time.

4.5. Gaussian Process Regression

GPR methods performed well in both the training and testing phases. They provided a solution close to the optimal boosted ensembled tree solution, but with roughly double the training time. The exponential GPR reported the highest R2 and the lowest RMSE and MAE among the GPR methods, with values of 0.96, 60.245, and 43.267, respectively, in training and 0.93, 67.839, and 43.738, respectively, in testing. Figure 2 shows the predicted vs. actual values of the punching shear and illustrates how well each model predicts different response values. A perfect regression model has a predicted response equal to the actual response, with all points lying on the diagonal line; the vertical distance from the line to any point is the prediction error for that point. Based on Figure 2, the optimal solution for our problem was Model 1.14, the boosted ensembled tree model: almost all of its points either lie on the diagonal line itself, i.e., with zero error between the predicted and real output values, or have the smallest distance from it. Figure 2 is generic, but further specifics of model performance are found in Table 3, where the R2, RMSE, and MAE values are reported for all models. We also investigated the effect of each of the five inputs on the most optimal model, the boosted ensembled tree. The analysis showed that the effective depth d had the most important effect on the prediction of the slab-column strength, i.e., the highest R2 and the lowest RMSE and MAE, followed by the column dimension c, while Young's modulus E had the least important effect, i.e., the lowest R2 and the highest RMSE and MAE, as shown in Figure 3. The values are reported with respect to R2, MAE, and RMSE.
Figure 2

Predicted versus actual punching shear values for the implemented models.

Figure 3

Importance of input variables in the boosted model reported in R2, MAE, and RMSE.

For the optimal method, the boosted tree in our case, Figure 4 shows how the error decreases as different combinations of hyper-parameters are evaluated, together with the behavior of the model under the best-optimized hyper-parameters. The model converged by the 30th training iteration and achieved its best performance at the 25th iteration.
Figure 4

Visualization of the most optimal method—the boosted tree.

5. Precision and Reliability of ML and Existing Models

In this section, the reliability and precision of the proposed model are compared to those of the most recent models, including, but not limited to, the Ju model developed in 2021 and the Hemzah model developed in 2019. The precision and reliability of the capacity calculated using a model are examined using the ratio between the experimentally observed capacity and that determined using the model (SF). Applying statistical measures to the SF calculated using the ML models and the existing ones expresses the precision, reliability, and safety of each model. The closer the average SF is to unity, the more precise the model is. The lower the coefficient of variation of the SF, the more reliable and precise the model is. If the lower 95% confidence limit is close to unity and larger than 0.85, the model has acceptable safety. In addition, the SF can be plotted versus all effective parameters so that the variation in model safety can be examined. Moreover, the ideal pattern is plotted using a solid line, and a linear trendline for the SF is plotted using a dotted line. The inclination of the trendline is an indication of variation and scattering, while the sign of that inclination indicates an increase or decrease with the investigated parameter.
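The SF-based measures can be sketched as follows; interpreting "lower 95%" as the lower bound of the 95% confidence interval of the mean is our assumption, since the exact formula is not stated here:

```python
import math

def sf_statistics(measured, predicted):
    """Mean, coefficient of variation, and lower 95% confidence bound of the
    safety factor SF = V_measured / V_predicted."""
    sf = [m / p for m, p in zip(measured, predicted)]
    n = len(sf)
    mean = sum(sf) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in sf) / (n - 1))  # sample std
    cov = std / mean
    # Normal-approximation lower bound of the 95% CI of the mean (assumption).
    lower95 = mean - 1.96 * std / math.sqrt(n)
    return mean, cov, lower95
```

A model is precise when the mean SF is near unity, reliable when the coefficient of variation is small, and acceptably safe when the lower 95% bound stays close to (and above roughly 0.85 of) unity.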

5.1. Overall Safety

Table 4 shows the statistical measures calculated for the existing models as well as the proposed model. It is clear that, overall, the ML model captured the behavior with a significantly lower RMSE, MAE, and coefficient of variation compared to all existing models, with values of 64.23 kN, 37.97 kN, and 12%, respectively. In addition, the proposed ML model is more precise, more reliable, and reasonably safe compared to the existing design codes in terms of the R2, the mean, and the lower 95% limit, which is close to unity, with values of 0.96, 0.99, and 0.97, respectively. Excluding the ML model, the CSA, Hemzah, and Ju models are more reliable and accurate than the ACI and JSCE models. Figure 5 shows the SF calculated using the JSCE, CSA, ACI, Hemzah, Ju, and ML models, with the inclinations of the trend lines indicated in the figure. It is clear that the ML model has less scattering compared to the selected existing models.
Table 4

Statistical measures for the SF calculated using the existing models and the ML model.

Statistical Measure | JSCE | CSA | ACI | Hemzah | Ju | ML
R2 | 0.74 | 0.77 | 0.74 | 0.77 | 0.75 | 0.96
RMSE (kN) | 375.94 | 169.58 | 305.58 | 157.87 | 181.30 | 64.23
MAE (kN) | 274.36 | 112.52 | 222.62 | 100.98 | 121.25 | 37.97
Mean | 2.87 | 1.20 | 2.20 | 1.02 | 1.24 | 0.99
C.O.V. | 36% | 37% | 39% | 43% | 32% | 12%
Lower 95% | 2.72 | 1.14 | 2.08 | 0.96 | 1.18 | 0.97
Minimum | 0.85 | 0.34 | 0.62 | 0.28 | 0.36 | 0.57
Maximum | 8.08 | 3.00 | 5.86 | 3.54 | 2.41 | 1.35
Figure 5

SF calculated using various models versus the record number.

5.2. Safety versus Slab Size

Figure 6 shows the SF calculated using the JSCE, CSA, ACI, Hemzah, Ju, and ML models versus the effective depth, with the inclinations of the trend lines indicated in the figure. For all models, the safety decreases as the depth increases, except in the case of the JSCE and CSA models. The ML model has less scattering compared to the other models. The JSCE and Ju models have a small inclination, which might be due to the size effect factor.
Figure 6

SF calculated using various models versus d.

5.3. Safety versus Concrete Strength

Figure 7 shows the SF calculated using the JSCE, CSA, ACI, Hemzah, Ju, and ML models versus the concrete strength, with the inclinations of the trend lines indicated in the figure. For all models, the safety increases as the concrete compressive strength increases, except in the case of the ML model. The ML model has less scattering compared to the other models.
Figure 7

SF calculated using various models versus f'c.

5.4. Safety versus the FRP Young’s Modulus

Figure 8 shows the SF calculated using the JSCE, CSA, ACI, Hemzah, Ju, and ML models versus the FRP Young's modulus, with the inclinations of the trend lines indicated in the figure. For all models, the safety increases with the increase in Young's modulus, except in the case of the ML model. The ML model has less scattering compared to the other models. The Ju model has a small inclination, which might be due to its use of a square root relation, while the other models use a cubic root relation.
Figure 8

SF calculated using various models versus E.

5.5. Safety versus Column-Dimension-to-Depth Ratio

Figure 9 shows the SF calculated using the JSCE, CSA, ACI, Hemzah, Ju, and ML models versus the column dimensions, with the inclinations of the trend lines indicated in the figure. The ML model has less scattering compared to the other models. The Ju model has a small inclination with respect to the column dimensions, which might be due to the column-dimension factor included in that model.
Figure 9

SF calculated using various models versus the column-dimension-to-depth ratio.

5.6. Safety versus Flexure Reinforcements

Figure 10 shows the SF calculated using the JSCE, CSA, ACI, Hemzah, Ju, and ML models versus the flexure reinforcement ratio, with the inclinations of the trend lines indicated in the figure. For all models, the safety decreases as the flexure reinforcement increases, except in the case of the JSCE and ACI models. The ML model has less scattering compared to the other models.
Figure 10

SF calculated using various models versus ρ.

6. Conclusions

Several machine learning models were developed and evaluated using an extensive experimental database of 189 slab-column connections with FRP reinforcements. Although the concluding remarks are limited to the range of parameter values in the database, a limitation that can be addressed by testing more slabs, the following can be concluded:

A grid search with a 15-fold cross-validation was used to determine the optimal hyper-parameters of the ML-based models during the training process. The comparison with the experimental data showed that the five ML-based models, with the chosen input variables and optimal hyper-parameters, are fully capable of predicting the punching shear strength of FRP-RC slabs.

The ensembled boosted model was found to be the most reliable and accurate among all implemented machine learning models, with the best accuracy: R2 = 0.97, RMSE = 71.963 kN, and MAE = 43.452 kN on the testing dataset. In addition, the boosted model predicted the actual strength more precisely and reliably than the existing design models and minimized the variability of the traditional models with respect to the effective variables.

For the most accurate model, the boosted ensemble, the effect of all input variables on the predicted shear capacity was examined. The variables can be arranged from most to least influential as follows: the effective depth; the column dimensions; the flexure reinforcements; the longitudinal reinforcement modulus of elasticity; the concrete compressive strength.

The proposed model has high accuracy and consistency and thus provides a reliable alternative to the existing strength models, which are inconsistent and have a high coefficient of variation. In addition, the interpretation results of the model reflect the importance and contribution of the parameters that influence the strength. Moreover, these findings confirm findings from concurrent research studies [70,71,72].
  5 in total

1.  Punching Shear Strength of FRP-Reinforced Concrete Slabs without Shear Reinforcements: A Reliability Assessment.

Authors:  Soliman Alkhatib; Ahmed Deifalla
Journal:  Polymers (Basel)       Date:  2022-04-25       Impact factor: 4.967

2.  A State-of-the-Art Review of FRP-Confined Steel-Reinforced Concrete (FCSRC) Structural Members.

Authors:  Yu-Yi Ye; Jun-Jie Zeng; Pei-Lin Li
Journal:  Polymers (Basel)       Date:  2022-02-10       Impact factor: 4.329

3.  Comparative Study of Machine Learning Approaches for Predicting Creep Behavior of Polyurethane Elastomer.

Authors:  Chunhao Yang; Wuning Ma; Jianlin Zhong; Zhendong Zhang
Journal:  Polymers (Basel)       Date:  2021-05-28       Impact factor: 4.329

  5 in total

1.  A Machine Learning Model for Torsion Strength of Externally Bonded FRP-Reinforced Concrete Beams.

Authors:  Ahmed Deifalla; Nermin M Salem
Journal:  Polymers (Basel)       Date:  2022-04-29       Impact factor: 4.329

2.  Compressive Strength of Steel Fiber-Reinforced Concrete Employing Supervised Machine Learning Techniques.

Authors:  Yongjian Li; Qizhi Zhang; Paweł Kamiński; Ahmed Farouk Deifalla; Muhammad Sufian; Artur Dyczko; Nabil Ben Kahla; Miniar Atig
Journal:  Materials (Basel)       Date:  2022-06-14       Impact factor: 3.748

3.  Punching Shear Strength of FRP-Reinforced Concrete Slabs without Shear Reinforcements: A Reliability Assessment.

Authors:  Soliman Alkhatib; Ahmed Deifalla
Journal:  Polymers (Basel)       Date:  2022-04-25       Impact factor: 4.967

4.  Flexural Strength Prediction of Steel Fiber-Reinforced Concrete Using Artificial Intelligence.

Authors:  Dong Zheng; Rongxing Wu; Muhammad Sufian; Nabil Ben Kahla; Miniar Atig; Ahmed Farouk Deifalla; Oussama Accouche; Marc Azab
Journal:  Materials (Basel)       Date:  2022-07-27       Impact factor: 3.748

5.  Comparative Study on the Behavior of Reinforced Concrete Beam Retrofitted with CFRP Strengthening Techniques.

Authors:  Aditya Kumar Tiwary; Sandeep Singh; Raman Kumar; Kamal Sharma; Jasgurpreet Singh Chohan; Shubham Sharma; Jujhar Singh; Jatinder Kumar; Ahmed Farouk Deifalla
Journal:  Polymers (Basel)       Date:  2022-09-26       Impact factor: 4.967

