Chi Zhang, Jian Cui, Wei Liu.
Abstract
The development of industry is inseparable from the support of steel materials, and modern industry places increasingly high demands on the quality of steel plates. However, steel plate production introduces many types of defects, such as roll marks, scratches, and scars. These defects directly affect the quality and performance of the plate, so they must be detected effectively. Steel plate surface defects vary in type, shape, and size: the same defect can take different morphologies, and different defects can resemble one another. In this paper, industrial steel plate surface defect samples are analyzed, and a sample set is established by screening the collected defect images, which are then annotated and classified. A multilayer feature extraction framework is developed to train a neural network on the defect sample set. To address the low automation, slow detection speed, and low accuracy of traditional defect detection methods, this paper investigates an attention graph convolutional network (AGCN). First, Faster R-CNN is used as the base network model for defect detection, and the visual features are jointly refined by combining an attention mechanism with a graph convolutional network. The graph convolutional network enriches the contextual information in the visual features of the steel plate, while the attention mechanism explores the semantic association between visual features and defect types across the different kinds of defects, achieving intelligent defect detection that meets the practical needs of steel plate production.
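The abstract's core idea, aggregating context across region features with a graph convolution and then reweighting the result with attention, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function name, the symmetric normalization, and the mean-based attention score are assumptions made for the sketch.

```python
import numpy as np

def graph_attention_refine(feats, adj):
    """Refine region features with one graph-convolution step followed by a
    simple attention reweighting. `feats` is an (N, d) matrix of region
    features, `adj` an (N, N) binary adjacency matrix over the regions."""
    # Add self-loops and symmetrically normalise the adjacency.
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_norm = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Graph convolution: each region aggregates context from its neighbours.
    context = a_norm @ feats
    # Attention: score each region, softmax over regions, then reweight.
    scores = context.mean(axis=1)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return context * weights[:, None]
```

In the paper this refinement sits on top of the Faster R-CNN region features; here the learned weight matrices are omitted to keep the sketch self-contained.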
Year: 2022 PMID: 36225540 PMCID: PMC9550437 DOI: 10.1155/2022/2549683
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Process of computer vision diagnosis of surface defects on steel plates.
Figure 2. Feature extraction from low level to high level.
Figure 3. Types of defects in the collected data set. (a) White iron scale. (b) Roll printing. (c) Scratch. (d) Scarring. (e) Embroidery skin.
Figure 4. Model architecture.
Figure 5. Schematic diagram of the multilayer feature extraction process.
Figure 6. Graph generation.
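The graph-generation step of Figure 6 can be sketched as follows: a minimal, hypothetical construction that links region proposals whose feature vectors are similar under cosine similarity. The threshold and the similarity criterion are assumptions for the sketch; the paper's exact construction may differ.

```python
import numpy as np

def build_region_graph(feats, threshold=0.5):
    """Connect two region proposals when their features are similar.
    `feats` is an (N, d) matrix; returns an (N, N) binary adjacency."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T          # pairwise cosine similarity
    adj = (sim > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)       # no self-loops; add them later if needed
    return adj
```

The resulting adjacency is what a graph convolution would consume to propagate context between visually related defect regions.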
Experimental conditions.
| CPU | Intel Xeon W-2135 |
|---|---|
| Operating system | Ubuntu 18.04 |
| RAM | 32 GB |
| GPU | GeForce RTX 2080Ti |
| Video memory | 8 GB |
| Python version | 3.6.9 |
| CUDA | 10.0 |
| cuDNN | 7.4.1 |
Figure 7. Training loss convergence plot.
Performance comparison (mIoU).
| Model | White iron scale | Roll printing | Scratch | Scarring | Embroidery skin | Average |
|---|---|---|---|---|---|---|
| Faster R-CNN | 0.7962 | 0.7204 | 0.8386 | 0.7602 | 0.8982 | 0.8027 |
| SegNet | 0.8250 | 0.6835 | 0.8532 | 0.8622 | 0.8824 | 0.8213 |
| PSPNet | 0.8032 | 0.7282 | 0.8419 | 0.8358 | 0.9047 | 0.8228 |
| YOLOv4 | 0.8068 | 0.7025 | 0.8793 | 0.8524 | 0.8856 | 0.8253 |
| DeepLab+ | 0.8271 | 0.7139 | 0.8748 | 0.8437 | 0.8754 | 0.8270 |
| RefineNet | 0.8265 | | 0.8786 | 0.8613 | 0.8410 | 0.8271 |
| AGCN (ours) | | 0.7188 | | | | 0.8580 |
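The mIoU scores in the table are the per-class intersection over union between predicted and ground-truth segmentation masks, averaged across the defect classes present. A minimal sketch (the function name and integer label-map encoding are assumptions):

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean intersection-over-union over classes present in `pred` or `gt`.
    Both inputs are integer label maps of the same shape."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union:  # skip classes absent from both maps
            ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))
```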
Average detection speed.
| Model | Testing time in seconds | Number of frames per second |
|---|---|---|
| Faster R-CNN | 0.0621 | 16.11 |
| SegNet | 0.0464 | 21.54 |
| PSPNet | 0.0427 | 23.42 |
| YOLOv4 | 0.0340 | 29.41 |
| DeepLab+ | 0.0386 | 25.92 |
| RefineNet | 0.0534 | 18.73 |
| AGCN (ours) | 0.0364 | 27.47 |
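The throughput column follows directly from the average testing time: frames per second is its reciprocal, e.g. 1 / 0.0621 s ≈ 16.11 FPS for Faster R-CNN.

```python
def frames_per_second(avg_time_s: float) -> float:
    """Convert average per-image inference time (seconds) to throughput."""
    return 1.0 / avg_time_s
```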
Figure 8. Partial experimental results: segmentation map.
Ablation experiments.
| Number of convolutional layers | White iron scale (%) | Roll printing (%) | Scratch (%) | Scarring (%) | Embroidery skin (%) |
|---|---|---|---|---|---|
| 1 | 82.72 | 72.11 | 90.87 | 89.08 | 90.89 |
| 2 | 82.75 | 72.24 | 90.86 | 89.32 | 93.82 |
| 3 | | | | | |
| 4 | 83.62 | 72.54 | 90.76 | 89.42 | 95.82 |