| Literature DB >> 35806543 |
Peng Shi1,2, Mengmeng Duan1, Lifang Yang1, Wei Feng3, Lianhong Ding4, Liwu Jiang1.
Abstract
Grain size is one of the most important parameters in metallographic microstructure analysis and partly determines material performance. Grain size measurement relies on accurate image segmentation methods, which include traditional image processing methods and emerging machine-learning-based methods. Unfortunately, traditional image processing methods can hardly segment grains correctly from metallographic images with low contrast and blurry boundaries. Moreover, existing machine-learning-based methods require a large dataset to train the model and struggle with the segmentation of complex images with fuzzy boundaries and complex structures. In this paper, an improved U-Net model is proposed to automatically segment complex metallographic images with only a small training set. Experiments on metallographic images show the significant advantage of the method, especially for images with low contrast, fuzzy boundaries, and complex structure. Compared with other deep learning methods, the improved U-Net scored higher on the ACC, MIoU, Precision, and F1 indexes, reaching an ACC of 0.97, MIoU of 0.752, Precision of 0.98, and F1 of 0.96. The grain size calculated from the segmentation according to American Society for Testing and Materials (ASTM) standards produced a satisfactory result.
Keywords: complex image; grain size; image segmentation; improved U-Net; metallographic microstructure analysis
Year: 2022 PMID: 35806543 PMCID: PMC9267311 DOI: 10.3390/ma15134417
Source DB: PubMed Journal: Materials (Basel) ISSN: 1996-1944 Impact factor: 3.748
Figure 1. The structure of the improved U-Net segmentation model.
The meanings of TP, FP, TN, and FN.
| Terminology | Full Name | Meaning |
|---|---|---|
| TP | True positive | The number of positive examples predicted correctly. |
| FP | False positive | The number of negative examples incorrectly predicted as positive. |
| TN | True negative | The number of negative examples predicted correctly. |
| FN | False negative | The number of positive examples incorrectly predicted as negative. |
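From these four counts, every metric reported in the tables below (ACC, Precision, Recall, F1, and IoU) follows directly. A minimal sketch, with illustrative counts rather than values from the paper:

```python
# Pixel-wise segmentation metrics computed from TP/FP/TN/FN counts.
# The example counts below are illustrative, not taken from the paper.

def segmentation_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return the common pixel-wise segmentation metrics."""
    acc = (tp + tn) / (tp + fp + tn + fn)      # fraction of pixels classified correctly
    precision = tp / (tp + fp)                 # of predicted-positive pixels, how many are correct
    recall = tp / (tp + fn)                    # of actual-positive pixels, how many are found
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)                  # intersection over union for the positive class
    return {"ACC": acc, "Precision": precision, "Recall": recall, "F1": f1, "IoU": iou}

m = segmentation_metrics(tp=90, fp=5, tn=100, fn=5)
```

MIoU in the tables is simply this IoU averaged over the classes (here, grain interior and boundary).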
Figure 2. Flow chart of the specific work steps.
Figure 3. Process of image annotation. (a) Original image; (b) annotated image; (c) single-pixel skeleton image; (d) bolded image.
Figure 4. Examples of metallographic images. (a–d) Four metallographic images; (e–h) the corresponding preprocessed images.
Figure 5. Training and validation loss during model training.
Overall performance for different optimizers.
| Optimizer | Learning Rate | ACC | Dice | MIoU | Precision |
|---|---|---|---|---|---|
| Adam | 0.015 | 0.965 | 0.878 | 0.894 | 0.981 |
| SGD | 0.015 | 0.941 | 0.783 | 0.851 | 0.950 |
| Adadelta | 0.015 | 0.912 | 0.790 | 0.795 | 0.891 |
| AdaGrad | 0.015 | 0.934 | 0.817 | 0.610 | 0.887 |
Figure 6. Segmentation results of different methods. (a) Original image; (b) morphological method; (c) Canny algorithm; (d) watershed algorithm; (e) MIPAR; (f) U-Net; (g) U-Net++; (h) improved U-Net.
Metric results of the different segmentation methods.
| Methods | ACC | Dice | MIoU | Precision | Recall | F1 |
|---|---|---|---|---|---|---|
| FCN | 0.654 | 0.678 | 0.341 | 0.85 | 0.876 | 0.86 |
| SegNet | 0.679 | 0.783 | 0.451 | 0.95 | 0.853 | 0.898 |
| Deeplab V3 | 0.91 | 0.790 | 0.595 | 0.734 | 0.983 | 0.840 |
| DenseNet | 0.89 | 0.917 | 0.610 | 0.657 | 0.942 | 0.774 |
| Mask R-CNN | 0.87 | 0.937 | 0.556 | 0.894 | 0.871 | 0.882 |
| U-Net | 0.89 | 0.910 | 0.700 | 0.904 | 0.880 | 0.892 |
| ResU-Net | 0.905 | 0.912 | 0.621 | 0.864 | 0.88 | 0.872 |
| A-DenseU-Net | 0.843 | 0.916 | 0.684 | 0.92 | 0.855 | 0.886 |
| ResU-Net++ | 0.919 | 0.927 | 0.741 | 0.89 | 0.803 | 0.844 |
| U-Net++ | 0.96 | 0.915 | 0.718 | 0.957 | 0.89 | 0.922 |
| Improved U-Net | 0.97 | 0.93 | 0.752 | 0.98 | 0.940 | 0.960 |
Figure 7. Training losses of different segmentation methods.
The performance of different loss functions on our model.
| Loss Function | Loss | Acc | Precision | Recall | MIoU | Dice | F1 |
|---|---|---|---|---|---|---|---|
| Cross entropy | 0.134 | 0.967 | 0.650 | 0.558 | 0.335 | 0.483 | 0.600 |
| Focal loss | 0.631 | 0.973 | 0.859 | 0.122 | 0.013 | 0.403 | 0.210 |
| Dice loss | 0.654 | 0.952 | 0.832 | 0.807 | 0.342 | 0.786 | 0.819 |
| Tversky | 0.112 | 0.967 | 0.783 | 0.783 | 0.632 | 0.431 | 0.783 |
| Focal and Dice | 0.102 | 0.987 | 0.951 | 0.965 | 0.880 | 0.761 | 0.958 |
| Tversky and focal | 0.132 | 0.968 | 0.653 | 0.573 | 0.600 | 0.663 | 0.610 |
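The best-scoring row combines the focal and Dice losses. A minimal NumPy sketch of such a combination, assuming an equal 0.5/0.5 weighting and standard focal defaults (alpha = 0.25, gamma = 2.0); these values are assumptions for illustration, not parameters stated in the paper:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss on probability maps in [0, 1]: 1 - 2|P∩G| / (|P| + |G|)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: the (1 - p_t)^gamma factor down-weights easy pixels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, pred, 1.0 - pred)   # probability assigned to the true class
    at = np.where(target == 1, alpha, 1.0 - alpha)
    return float(np.mean(-at * (1.0 - pt) ** gamma * np.log(pt)))

def focal_dice_loss(pred, target, w_focal=0.5, w_dice=0.5):
    """Weighted sum of the two losses, as in the best row of the table above."""
    return w_focal * focal_loss(pred, target) + w_dice * dice_loss(pred, target)
```

The two terms are complementary: Dice directly optimizes region overlap, while the focal term keeps hard boundary pixels from being swamped by the many easy background pixels.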
Figure 8. Grain size measurement results according to ASTM E112-12. (a) The planimetric method; (b) the intercept method.
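The E112 relations behind the two routes in Figure 8 can be sketched as follows; these are the standard planimetric and intercept formulas, not code from the paper, and the inputs are illustrative:

```python
import math

def g_planimetric(grains_per_mm2: float) -> float:
    """Planimetric route: G = 3.3219 * log10(N_A) - 2.954,
    with N_A the number of grains per mm^2 at 1x magnification."""
    return 3.321928 * math.log10(grains_per_mm2) - 2.954

def g_intercept(mean_intercept_mm: float) -> float:
    """Intercept route: G = -6.6439 * log10(l) - 3.288,
    with l the mean lineal intercept length in mm."""
    return -6.643856 * math.log10(mean_intercept_mm) - 3.288
```

In the planimetric route, N_A is obtained from the segmented image by counting grains fully inside the test area plus half of those cut by its border, then dividing by the area; in the intercept route, l is the test-line length divided by the number of grain-boundary intersections.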
Typical metallographic grain number statistics in SGMD.
| Image ID | Human | Automatic Calculation | Error Number | Error Rate |
|---|---|---|---|---|
| 1 | 173 | 174 | +1 | 0.008 |
| 2 | 90 | 90 | 0 | 0 |
| 3 | 110 | 114 | +4 | 0.036 |
| 4 | 69 | 68 | −1 | 0.014 |
| 5 | 200 | 200 | 0 | 0 |
| 6 | 151 | 152 | +1 | 0.007 |
| 7 | 160 | 159 | −1 | 0.006 |
| 8 | 171 | 170 | 0 | 0 |
| 9 | 300 | 300 | 0 | 0 |
| 10 | 190 | 188 | −2 | 0.1 |
| Average | | | | 0.017 |
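The error columns above appear to be the signed difference between the automatic and human counts, and its magnitude relative to the human count rounded to three decimals. A small sketch of that arithmetic (the helper name is ours, not the paper's):

```python
# Signed count error and relative error rate, as the table above appears to compute them.

def count_error(human: int, automatic: int) -> tuple:
    """Return (signed error, error rate relative to the human count)."""
    error = automatic - human
    rate = round(abs(error) / human, 3)
    return error, rate
```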
Typical grain size level statistics in SGMD.
| Image ID | G (ASTM) | | % |
|---|---|---|---|
| 1 | 6.90 | 0.163 | 5.41 |
| 2 | 5.70 | 0.266 | 8.32 |
| 3 | 7.36 | 0.151 | 5.02 |
| 4 | 7.89 | 0.201 | 6.04 |
| 5 | 7.95 | 0.321 | 10.1 |
| 6 | 7.90 | 0.131 | 4.51 |
| 7 | 6.68 | 0.163 | 5.41 |
| 8 | 6.73 | 0.204 | 6.21 |
| 9 | 6.92 | 0.210 | 6.30 |
| 10 | 7.50 | 0.109 | 3.01 |
| Average | 7.053 | 0.1919 | 6.1 |