| Literature DB >> 34722404 |
Mohammad Nabipour1, Mohammad Reza Deevband2, Amin Asgharzadeh Alvar3, Narges Soleimani4, Sara Sadeghi5.
BACKGROUND: Given the extensive use and preferred diagnostic method in common mammography tests for screening and diagnosis of breast cancer, there is concern about the increased dose absorbed by the patient due to the sensitivity of the breast tissue.Entities:
Keywords: Computer; Mammography; Neural Networks; Radiation Dosimeters
Year: 2021 PMID: 34722404 PMCID: PMC8546156 DOI: 10.31661/jbpe.v0i0.1146
Source DB: PubMed Journal: J Biomed Phys Eng ISSN: 2251-7200
Figure 1: The two-layer neural network with six inputs (tangent sigmoid transfer function) and one output (purelin transfer function).
Specifications of the optimized neural network.
| NN architecture | MLP with 2 layers |
|---|---|
| Inputs: six inputs with tangent sigmoid transfer function | kVp, mAs, target/filter type, total filter thickness, HVL, brand |
| Output: one output with linear transfer function | ESAK |
| train function | Levenberg-Marquardt |
| hidden layer size | 35 |
| divide function | Random |
| train, validation and test ratio | 70%, 15%, 15% |
| performance function | MSE |
NN: neural network, MLP: multilayer perceptron, kVp: kilovoltage peak, mAs: milliampere-seconds, HVL: half-value layer, ESAK: entrance surface air kerma, MSE: mean squared error
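The architecture specified in the table above can be illustrated with a minimal forward-pass sketch in NumPy. This is a hedged sketch: the weights below are random placeholders rather than the trained values from the study, and the helper names simply mirror MATLAB's `tansig`/`purelin` transfer functions named in the table.

```python
import numpy as np

def tansig(x):
    # Hyperbolic tangent sigmoid transfer function (MATLAB's tansig)
    return np.tanh(x)

def purelin(x):
    # Linear transfer function (MATLAB's purelin)
    return x

rng = np.random.default_rng(0)
n_inputs, n_hidden = 6, 35          # six inputs, 35 hidden neurons (per the table)

# Placeholder weights; in the study these would come from
# Levenberg-Marquardt training on the measured exposure data.
W1 = rng.standard_normal((n_hidden, n_inputs))
b1 = rng.standard_normal(n_hidden)
W2 = rng.standard_normal((1, n_hidden))
b2 = rng.standard_normal(1)

def predict_esak(x):
    """Forward pass of the two-layer MLP: tansig hidden layer, purelin output."""
    h = tansig(W1 @ x + b1)
    return purelin(W2 @ h + b2)

# Placeholder feature vector standing in for (kVp, mAs, target/filter type,
# total filter thickness, HVL, brand), suitably encoded and scaled.
x = rng.standard_normal(n_inputs)
y = predict_esak(x)
```

The single linear output corresponds to the estimated ESAK value; categorical inputs such as filter type and brand would need numeric encoding before entering the network.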
Figure 2: The output-target regression in relation to the number of neurons in the hidden layer.
Figure 3: The mean squared error (MSE) in relation to the number of neurons in the hidden layer.
Mean values of the network evaluation indices in relation to the number of neurons in the hidden layer.
| Neuron no. | MSE (train) | MSE (test) | R | Neuron no. | MSE (train) | MSE (test) | R |
|---|---|---|---|---|---|---|---|
| 14 | 0.365 | 1.658 | 0.918 | 38 | 0.390 | 4.305 | 0.890 |
| 15 | 0.202 | 1.082 | 0.947 | 39 | 0.653 | 2.246 | 0.894 |
| 16 | 0.888 | 1.407 | 0.881 | 40 | 0.213 | 5.748 | 0.898 |
| 17 | 0.417 | 2.290 | 0.926 | 41 | 0.481 | 3.829 | 0.902 |
| 18 | 0.975 | 2.354 | 0.875 | 42 | 0.601 | 2.384 | 0.905 |
| 19 | 1.147 | 1.201 | 0.878 | 43 | 0.367 | 6.143 | 0.863 |
| 20 | 0.413 | 1.387 | 0.925 | 44 | 0.487 | 1.868 | 0.914 |
| 21 | 0.339 | 1.822 | 0.933 | 45 | 0.934 | 1.284 | 0.866 |
| 22 | 0.845 | 1.806 | 0.907 | 46 | 1.003 | 3.411 | 0.848 |
| 23 | 0.848 | 1.503 | 0.882 | 47 | 0.155 | 5.482 | 0.879 |
| 24 | 0.722 | 1.009 | 0.920 | 48 | 0.250 | 2.186 | 0.935 |
| 25 | 1.724 | 2.282 | 0.799 | 49 | 2.590 | 4.183 | 0.789 |
| 26 | 0.797 | 1.368 | 0.910 | 50 | 0.565 | 1.469 | 0.905 |
| 27 | 0.652 | 1.413 | 0.916 | 51 | 0.380 | 3.649 | 0.901 |
| 28 | 0.394 | 2.697 | 0.920 | 52 | 0.203 | 1.770 | 0.895 |
| 29 | 0.0820 | 1.583 | 0.824 | 53 | 0.547 | 1.825 | 0.896 |
| 30 | 2.270 | 3.540 | 0.816 | 54 | 0.186 | 4.025 | 0.887 |
| 31 | 0.149 | 2.919 | 0.937 | 55 | 0.212 | 1.229 | 0.946 |
| 32 | 0.363 | 3.181 | 0.908 | 56 | 0.458 | 5.519 | 0.876 |
| 33 | 0.304 | 2.438 | 0.904 | 57 | 0.627 | 2.525 | 0.908 |
| 34 | 0.419 | 1.947 | 0.933 | 58 | 0.242 | 1.479 | 0.920 |
| 35 | 0.201 | 0.912 | 0.949 | 59 | 0.966 | 4.110 | 0.863 |
| 36 | 0.497 | 1.578 | 0.925 | 60 | 0.803 | 0.925 | 0.891 |
| 37 | 0.290 | 5.239 | 0.897 | | | | |
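The hidden-layer sweep behind the table above can be sketched roughly as follows. This is a hedged sketch under stated assumptions: scikit-learn's `MLPRegressor` does not offer Levenberg-Marquardt training, so `lbfgs` stands in; the data are synthetic placeholders rather than the measured exposure data; and only a subset of the 14-60 neuron range is swept to keep the example short.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in data: 6 exposure features -> one target (ESAK-like).
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 6))
y = X @ rng.standard_normal(6) + 0.1 * rng.standard_normal(300)

# 70/15/15 train/validation/test split, as in the specifications table.
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, train_size=0.70, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

results = {}
for n in (15, 35, 55):  # subset of the 14-60 sweep in the table
    net = MLPRegressor(hidden_layer_sizes=(n,), activation="tanh",
                       solver="lbfgs", max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    mse_train = mean_squared_error(y_tr, net.predict(X_tr))
    mse_test = mean_squared_error(y_te, net.predict(X_te))
    r = np.corrcoef(y_te, net.predict(X_te))[0, 1]  # output-target correlation
    results[n] = (mse_train, mse_test, r)
```

Selecting the neuron count with the lowest test MSE and highest R mirrors how the table supports the choice of 35 hidden neurons.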
Figure 4: Top-left: comparison of output and target for all data; top-right: output-target regression for all data; bottom-left: error for all data; bottom-right: error histogram for all data.
Figure 5: Top-left: comparison of output and target for test data; top-right: output-target regression for test data; bottom-left: error for test data; bottom-right: error histogram for test data.
Figure 6: Comparison of the neural network estimate (black), the collected data (blue), and the error value (red).