Costel Anton, Silvia Curteanu, Cătălin Lisa, Florin Leon.
Abstract
Most of the time, industrial brick manufacturing facilities are designed and commissioned for a particular type of mix and a particular type of burning process. Maintaining and improving productivity and product quality is a challenge for process engineers. Our paper aims to use machine learning methods to evaluate the impact of adding new auxiliary materials on the amount of exhaust emissions. Experimental determinations made under similar conditions enabled us to build a database containing information about 121 brick batches. Various models (artificial neural networks and regression algorithms) were designed to predict changes in exhaust emissions when auxiliary materials are introduced into the manufacturing mix. The best models were feed-forward neural networks with two hidden layers (MSE < 0.01 and r2 > 0.82) and, among the regression algorithms, kNN (error < 0.6). An optimization procedure built on the best models was also developed to determine the optimal parameter values that ensure minimum gas emissions. The Pareto front obtained in the multi-objective optimization conducted with the grid search method allows the user to choose the most convenient values for the dry product mass, clay, ash and organic raw materials that minimize the emissions of gases with energy potential.
Keywords: bricks; influence of additives; machine learning; neural networks; random forest
Year: 2021 PMID: 34885386 PMCID: PMC8658433 DOI: 10.3390/ma14237232
Source DB: PubMed Journal: Materials (Basel) ISSN: 1996-1944 Impact factor: 3.623
Figure 1. Neural network structure.
Statistical description of experimental data.

| Variable | N | Missing | Mean | Std. Dev. | Std. Error | 95% C.I. | Range | Max | Min | Median |
|---|---|---|---|---|---|---|---|---|---|---|
| Col 1 | 121 | 0 | 14.885 | 3.219 | 0.293 | 0.579 | 8.49 | 18.7 | 10.21 | 14.11 |
| Col 2 | 121 | 0 | 1538.471 | 303.036 | 27.549 | 54.545 | 2071 | 2205 | 134 | 1344 |
| Col 3 | 121 | 0 | 739.899 | 131.261 | 11.933 | 23.626 | 813.137 | 889.85 | 76.712 | 785.862 |
| Col 4 | 121 | 0 | 606.701 | 107.98 | 9.816 | 19.436 | 667.156 | 729.677 | 62.521 | 644.097 |
| Col 5 | 121 | 0 | 110.985 | 19.689 | 1.79 | 3.544 | 121.971 | 133.477 | 11.507 | 117.879 |
| Col 6 | 121 | 0 | 22.213 | 4.502 | 0.409 | 0.81 | 26.247 | 28.932 | 2.685 | 23.406 |
| Col 7 | 121 | 0 | 914.885 | 123.356 | 11.214 | 22.203 | 576.052 | 1233.708 | 657.656 | 913.87 |
| Col 8 | 121 | 0 | 112.32 | 22.238 | 2.022 | 4.003 | 99.137 | 166.258 | 67.121 | 109.788 |
| Col 9 | 121 | 0 | 395.969 | 205.602 | 18.691 | 37.007 | 962.734 | 979.563 | 16.828 | 384.978 |
Statistical description of experimental data (continued).

| Variable | 25% | 75% | Skewness | Kurtosis | K-S Dist. | K-S Prob. | S-W W | S-W Prob. | Sum | Sum of Squares |
|---|---|---|---|---|---|---|---|---|---|---|
| Col 1 | 11.483 | 17.982 | −0.178 | −1.705 | 0.273 | <0.001 | 0.818 | <0.001 | 1801.1 | 28,053.031 |
| Col 2 | 1344 | 1890 | −0.0657 | 2.74 | 0.318 | <0.001 | 0.745 | <0.001 | 186,155 | 297,413,803 |
| Col 3 | 685.73 | 822.174 | −2.108 | 5.76 | 0.213 | <0.001 | 0.779 | <0.001 | 89,527.831 | 68,309,121 |
| Col 4 | 563.507 | 677.172 | −2.085 | 5.668 | 0.199 | <0.001 | 0.783 | <0.001 | 73,410.829 | 45,937,579 |
| Col 5 | 102.859 | 123.326 | −2.108 | 5.76 | 0.213 | <0.001 | 0.779 | <0.001 | 13,429.175 | 1,536,955.2 |
| Col 6 | 20.334 | 25.153 | −1.179 | 2.404 | 0.152 | <0.001 | 0.916 | <0.001 | 2687.827 | 62,137.641 |
| Col 7 | 823.385 | 992.977 | 0.303 | −0.298 | 0.0514 | 0.564 | 0.987 | 0.281 | 110,701.05 | 103,104,680 |
| Col 8 | 94.526 | 129.049 | 0.389 | −0.59 | 0.0948 | 0.01 | 0.97 | 0.009 | 13,590.734 | 1,585,858.2 |
| Col 9 | 237.99 | 540.703 | 0.362 | −0.396 | 0.0515 | 0.56 | 0.982 | 0.097 | 47,912.303 | 24,044,489 |

Col 1 (Dry product mass, kg); Col 2 (No. pieces/kiln car); Col 3 (Total tons/day); Col 4 (Clay, tons); Col 5 (Ash, tons); Col 6 (Organic raw materials, tons); values of the noxious substances measured in the chimney: Col 7 (CO, mg/m3); Col 8 (NO, mg/m3); Col 9 (CH4, mg/Nm3).
Results of the Kruskal-Wallis test.
| Comparison | Diff. of Ranks | Q | P < 0.05 |
|---|---|---|---|
| Col 2 vs. Col 1 | 952 | 23.544 | Yes |
| Col 2 vs. Col 6 | 851.777 | 21.065 | Yes |
| Col 2 vs. Col 8 | 653.686 | 16.166 | Yes |
| Col 2 vs. Col 5 | 648.62 | 16.041 | Yes |
| Col 2 vs. Col 9 | 463.57 | 11.465 | Yes |
| Col 2 vs. Col 4 | 362.86 | 8.974 | Yes |
| Col 2 vs. Col 3 | 248.603 | 6.148 | Yes |
| Col 2 vs. Col 7 | 138.215 | 3.418 | Yes |
| Col 7 vs. Col 1 | 813.785 | 20.126 | Yes |
| Col 7 vs. Col 6 | 713.562 | 17.647 | Yes |
| Col 7 vs. Col 8 | 515.471 | 12.748 | Yes |
| Col 7 vs. Col 5 | 510.405 | 12.623 | Yes |
| Col 7 vs. Col 9 | 325.355 | 8.046 | Yes |
| Col 7 vs. Col 4 | 224.645 | 5.556 | Yes |
| Col 7 vs. Col 3 | 110.388 | 2.73 | No |
| Col 3 vs. Col 1 | 703.397 | 17.396 | Yes |
| Col 3 vs. Col 6 | 603.174 | 14.917 | Yes |
| Col 3 vs. Col 8 | 405.083 | 10.018 | Yes |
| Col 3 vs. Col 5 | 400.017 | 9.893 | Yes |
| Col 3 vs. Col 9 | 214.967 | 5.316 | Yes |
| Col 3 vs. Col 4 | 114.256 | 2.826 | No |
| Col 4 vs. Col 1 | 589.14 | 14.57 | Yes |
| Col 4 vs. Col 6 | 488.917 | 12.091 | Yes |
| Col 4 vs. Col 8 | 290.826 | 7.192 | Yes |
| Col 4 vs. Col 5 | 285.76 | 7.067 | Yes |
| Col 4 vs. Col 9 | 100.711 | 2.491 | No |
| Col 9 vs. Col 1 | 488.43 | 12.079 | Yes |
| Col 9 vs. Col 6 | 388.207 | 9.601 | Yes |
| Col 9 vs. Col 8 | 190.116 | 4.702 | Yes |
| Col 9 vs. Col 5 | 185.05 | 4.576 | Yes |
| Col 5 vs. Col 1 | 303.38 | 7.503 | Yes |
| Col 5 vs. Col 6 | 203.157 | 5.024 | Yes |
| Col 5 vs. Col 8 | 5.066 | 0.125 | No |
| Col 8 vs. Col 1 | 298.314 | 7.378 | Yes |
| Col 8 vs. Col 6 | 198.091 | 4.899 | Yes |
| Col 6 vs. Col 1 | 100.223 | 2.479 | No |
Col 1 (Dry product mass, kg); Col 2 (No. pieces/kiln car); Col 3 (Total tons/day); Col 4 (Clay, tons); Col 5 (Ash, tons); Col 6 (Organic raw materials, tons); values of the noxious substances measured in the chimney: Col 7 (CO, mg/m3); Col 8 (NO, mg/m3); Col 9 (CH4, mg/Nm3).
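The Kruskal-Wallis comparisons above are a standard non-parametric test of whether groups share a common distribution. A minimal sketch with SciPy, using synthetic samples whose means and spreads merely mimic three of the tabulated variables (the real 121-batch dataset is not reproduced here):

```python
# Kruskal-Wallis H-test on three synthetic samples of 121 values each;
# the scales are illustrative stand-ins for dry product mass, ash and CO.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mass = rng.normal(14.9, 3.2, size=121)   # hypothetical dry product mass, kg
ash = rng.normal(111.0, 19.7, size=121)  # hypothetical ash, tons
co = rng.normal(914.9, 123.4, size=121)  # hypothetical CO, mg/m3

# H is the rank-based test statistic; a small p rejects the hypothesis
# that all samples come from the same distribution.
h, p = stats.kruskal(mass, ash, co)
print(f"H = {h:.1f}, p = {p:.3g}")
```

With samples on such different scales the test rejects decisively, which is the pattern behind most of the "Yes" rows in the table.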
Topologies of various feed-forward ANNs developed to predict CO.
| No. | Topology | MSE | r2 | Ep (%) |
|---|---|---|---|---|
| 1. | ANN (4:4:1) | 0.0199 | 0.696 | 7.18 |
| 2. | ANN (4:8:1) | 0.0181 | 0.728 | 6.58 |
| 3. | ANN (4:12:1) | 0.0177 | 0.735 | 6.56 |
| 4. | ANN (4:16:1) | 0.0175 | 0.739 | 6.50 |
| 5. | ANN (4:20:1) | 0.0177 | 0.735 | 6.50 |
| 6. | ANN (4:40:1) | 0.0165 | 0.757 | 6.11 |
| 7. | ANN (4:40:20:1) | 0.0138 | 0.801 | 4.92 |
| 8. | ANN (4:60:30:1) | 0.0123 | 0.825 | 4.76 |
| 9. | ANN (4:80:40:1) | 0.0126 | 0.821 | 4.77 |
Figure 2. Comparison of experimental and simulated results obtained in the validation stage for CO prediction.
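The tables score each topology by MSE, r2 and a percentage error Ep. A minimal sketch of training and scoring a network with the best-performing 4:40:20:1 shape, using scikit-learn's MLPRegressor as a stand-in (the paper does not specify its framework) and synthetic data in place of the four process inputs:

```python
# Two-hidden-layer feed-forward net with the 4:40:20:1 topology from the
# tables, fitted to synthetic data; inputs loosely mimic scaled dry product
# mass, clay, ash and organic raw material quantities.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(121, 4))               # 4 scaled inputs
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + 0.1 * X[:, 3]

model = MLPRegressor(hidden_layer_sizes=(40, 20),      # 4:40:20:1 topology
                     activation="relu", max_iter=5000, random_state=0)
model.fit(X, y)
pred = model.predict(X)
print(f"MSE = {mean_squared_error(y, pred):.4f}, r2 = {r2_score(y, pred):.3f}")
```

In the paper the scores are computed on a held-out validation set; here training-set scores are shown only to illustrate the metrics.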
Topologies of various feed-forward ANNs developed to predict NO.
| No. | Topology | MSE | r2 | Ep (%) |
|---|---|---|---|---|
| 1. | ANN (4:4:1) | 0.0241 | 0.547 | 14.08 |
| 2. | ANN (4:8:1) | 0.0230 | 0.665 | 11.91 |
| 3. | ANN (4:12:1) | 0.0260 | 0.610 | 13.21 |
| 4. | ANN (4:16:1) | 0.0290 | 0.529 | 14.42 |
| 5. | ANN (4:20:1) | 0.0229 | 0.609 | 11.69 |
| 6. | ANN (4:40:1) | 0.0232 | 0.664 | 12.03 |
| 7. | ANN (4:40:20:1) | 0.0082 | 0.895 | 5.88 |
| 8. | ANN (4:60:30:1) | 0.0138 | 0.816 | 7.84 |
| 9. | ANN (4:80:40:1) | 0.0117 | 0.847 | 6.68 |
Topologies of various feed-forward ANNs developed to predict CH4.
| No. | Topology | MSE | r2 | Ep (%) |
|---|---|---|---|---|
| 1. | ANN (4:4:1) | 0.0114 | 0.835 | 51.07 |
| 2. | ANN (4:8:1) | 0.0093 | 0.867 | 41.25 |
| 3. | ANN (4:12:1) | 0.0090 | 0.872 | 40.67 |
| 4. | ANN (4:16:1) | 0.0092 | 0.869 | 40.91 |
| 5. | ANN (4:20:1) | 0.0090 | 0.872 | 39.43 |
| 6. | ANN (4:40:1) | 0.0094 | 0.865 | 43.36 |
| 7. | ANN (4:40:20:1) | 0.0074 | 0.895 | 34.90 |
| 8. | ANN (4:60:30:1) | 0.0070 | 0.902 | 33.63 |
| 9. | ANN (4:80:40:1) | 0.0073 | 0.897 | 34.47 |
Figure 3. Comparison of experimental and simulated results obtained in the validation stage for NO prediction.
Figure 4. Comparison of experimental and simulated results obtained in the validation stage for CH4 prediction.
Results obtained with several regression algorithms.
| Algorithm | Dataset | CO | NO | CH4 |
|---|---|---|---|---|
| kNN (k = 6, w = 1/d) | Training | 0.9261 | 0.9251 | 0.9525 |
| | Cross-validation | 0.6315 | 0.5184 | 0.7203 |
| kNN (k = 10, w = 1/d) | Training | 0.8127 | 0.7336 | 0.7790 |
| | Cross-validation | 0.6355 | 0.4666 | 0.7081 |
| NN (k = 1) | Training | 0.9978 | 0.9893 | 0.9875 |
| | Cross-validation | 0.4927 | 0.4371 | 0.6473 |
| K* (gb = 10) | Training | 0.9392 | 0.9456 | 0.9641 |
| | Cross-validation | 0.5954 | 0.4529 | 0.6449 |
| K* (gb = 20) | Training | 0.8945 | 0.8927 | 0.9359 |
| | Cross-validation | 0.6295 | 0.4608 | 0.6840 |
| K* (gb = 50) | Training | 0.7924 | 0.7540 | 0.8670 |
| | Cross-validation | 0.6061 | 0.4461 | 0.7197 |
| SVR (C = 10,000, PUK) | Training | 0.7999 | 0.7812 | 0.8841 |
| | Cross-validation | 0.2349 | 0.1995 | 0.1880 |
| SVR (C = 100, poly d = 2) | Training | 0.6443 | 0.3835 | 0.7736 |
| | Cross-validation | 0.5938 | 0.1619 | 0.7061 |
| SVR (C = 100, RBF) | Training | 0.6277 | 0.3061 | 0.6432 |
| | Cross-validation | 0.6037 | 0.2103 | 0.5999 |
| Random Forest (100 trees) | Training | 0.9592 | 0.9418 | 0.9621 |
| | Cross-validation | 0.5994 | 0.4799 | 0.7120 |
| Random Forest (1000 trees) | Training | 0.9613 | 0.9468 | 0.9612 |
| | Cross-validation | 0.6045 | 0.4878 | 0.7384 |
kNN = k-Nearest Neighbors; k = the number of neighbors; w_i = the weight of instance i; d_i = the distance from the query point to instance i; NN = Nearest Neighbor; gb = global blend; SVR = Support Vector Regression; C = the cost parameter; PUK = Pearson Universal Kernel; d = the degree of the polynomial kernel; RBF = Radial Basis Function kernel.
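The best cross-validated regression model in the table is distance-weighted kNN with k = 6 and w = 1/d. A minimal sketch of that configuration with scikit-learn on synthetic data (the published scores come from the authors' 121-batch dataset, which is not reproduced here):

```python
# Distance-weighted kNN regression (k = 6, weights proportional to 1/d),
# evaluated by 10-fold cross-validation on synthetic data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.uniform(0.0, 1.0, size=(121, 4))             # 4 hypothetical inputs
y = X.sum(axis=1) + rng.normal(0.0, 0.05, size=121)  # smooth noisy target

knn = KNeighborsRegressor(n_neighbors=6, weights="distance")
scores = cross_val_score(knn, X, y, cv=10, scoring="r2")
print(f"mean CV r2 = {scores.mean():.3f}")
```

The `weights="distance"` option is scikit-learn's counterpart of the table's w = 1/d: closer neighbors contribute more to the prediction.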
Figure 5. The Pareto front obtained with the kNN models.
Figure 6. The Pareto front obtained with the Random Forest models.
Figure 7. The Pareto front obtained with three separate models.
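The Pareto fronts in these figures come from a grid search over the manufacturing parameters, keeping the settings for which no other setting is at least as good on every emission objective. A minimal sketch of that procedure; the two objective functions below are illustrative linear stand-ins, not the paper's trained models, and the grid bounds only echo the ranges of the clay, ash and organic-material columns:

```python
# Grid search over three process parameters with Pareto filtering of two
# competing emission objectives (both to be minimized).
import itertools
import numpy as np

def objectives(clay, ash, organic):
    # Hypothetical emission surrogates; in the paper these would be the
    # trained kNN / Random Forest / ANN models.
    co = 500 + 0.4 * clay - 1.5 * ash + 2.0 * organic
    ch4 = 900 - 0.3 * clay + 4.0 * ash - 5.0 * organic
    return co, ch4

grid = itertools.product(np.linspace(560, 730, 5),   # clay, tons
                         np.linspace(100, 135, 5),   # ash, tons
                         np.linspace(20, 29, 5))     # organic materials, tons
points = [(objectives(c, a, o), (c, a, o)) for c, a, o in grid]

# A point is Pareto-optimal if no other point is <= on both objectives
# while differing on at least one of them.
pareto = [p for p in points
          if not any(q[0][0] <= p[0][0] and q[0][1] <= p[0][1] and q[0] != p[0]
                     for q in points)]
print(f"{len(pareto)} Pareto-optimal settings out of {len(points)}")
```

Each surviving point is one trade-off the user can pick from, exactly the role the Pareto fronts in Figures 5-7 play.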