Abstract
In this study, we present an application of neural network and image processing techniques for detecting the defects of an internal micro-spray nozzle. The defect regions were segmented by Canny edge detection, a randomized algorithm for detecting circles, and a circle inspection (CI) algorithm. The gray level co-occurrence matrix (GLCM) was then used to evaluate the texture features of the segmented region. These texture features (contrast, entropy, energy), color features (mean and variance of gray level) and geometric features (distance variance, mean diameter and diameter ratio) were used in the classification procedure. A back-propagation neural network classifier was employed to detect the defects of micro-spray nozzles. The methodology presented herein detects micro-spray nozzle defects with an accuracy of 90.71%.
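The color and geometric statistics named above (mean gray level, gray level variance, distance variance) are simple per-pixel aggregates over the segmented region. The paper's implementation is not shown; the following is a minimal pure-Python sketch, where the function name, the binary-mask representation, and the `(cx, cy)` center argument are assumptions for illustration:

```python
import statistics

def color_and_distance_features(image, mask, center):
    """Mean gray level, gray level variance, and distance variance
    over the pixels selected by a binary ROI mask (hypothetical helper)."""
    cx, cy = center
    grays, dists = [], []
    for y, row in enumerate(image):
        for x, g in enumerate(row):
            if mask[y][x]:
                grays.append(g)
                # distance D(x, y) from this pixel to the outlet center
                dists.append(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5)
    return {
        "mean_gray": statistics.fmean(grays),
        "gray_variance": statistics.pvariance(grays),
        "distance_variance": statistics.pvariance(dists),
    }
```

A defect such as a deckle edge spreads ROI pixels unevenly around the center, which is what the distance variance is meant to capture.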
Keywords: image processing; micro-spray nozzle; neural network
Year: 2015 PMID: 26131678 PMCID: PMC4541833 DOI: 10.3390/s150715326
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Micro-spray nozzle.
Figure 2. The profile of the micro-spray nozzle.
Figure 3. Four possible defects. (a) Small outlet; (b) Deckle edge of outlet; (c) Pellet metal fillings; (d) Strip metal fillings.
Figure 4. Illustration of the machine vision system for micro-spray nozzle defect inspection.
Figure 5. Outlet image segmentation. (a) Original image; (b) Outlet image.
Figure 6. ROI diagram.
Figure 7. (a) Original image; (b) after Canny edge detection; (c) randomized algorithm for detecting circles; (d) the image after the hole-filling operation; (e) ROI image with the AND logic operator applied to (a,d).
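A randomized circle-detection step like the one in Figure 7(c) can be sketched as a RANSAC-style loop: repeatedly sample three edge points, fit their circumcircle, and score the candidate by how many edge points lie near it. This is a minimal pure-Python sketch of the general technique, not the paper's code; the function names, trial count, and inlier tolerance are assumptions:

```python
import math
import random

def circle_from_3(p1, p2, p3):
    """Circumcircle of three non-collinear points; None if collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-9:
        return None
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    cx = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    cy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return cx, cy, math.hypot(x1 - cx, y1 - cy)

def randomized_circle_detect(edge_points, trials=200, tol=1.5, seed=0):
    """Sample 3 edge points per trial; keep the circle with most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, 0
    for _ in range(trials):
        circle = circle_from_3(*rng.sample(edge_points, 3))
        if circle is None:
            continue
        cx, cy, r = circle
        inliers = sum(1 for (x, y) in edge_points
                      if abs(math.hypot(x - cx, y - cy) - r) <= tol)
        if inliers > best_inliers:
            best, best_inliers = circle, inliers
    return best
```

Sampling makes the search robust to the stray edge pixels that Canny detection leaves around the outlet.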
Figure 8. The scanning direction of the circle inspection for the ROI.
Figure 9. The distribution of gray levels of the circle inspection.
Figure 10. Parameters of gray level gradient computation.
Figure 11. The structure of the back-propagation neural network (BPNN) classifier.
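The BPNN of Figure 11 is a standard one-hidden-layer feed-forward network trained by back-propagation. The paper's layer sizes and learning rate are not given here, so the following pure-Python sketch uses assumed values; the class and method names are illustrative only:

```python
import math
import random

class BPNN:
    """One-hidden-layer back-propagation network with sigmoid units."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = random.Random(seed)
        # +1 weight per row for the bias input
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
                   for _ in range(n_out)]

    @staticmethod
    def _sig(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(self, x):
        h = [self._sig(sum(w * v for w, v in zip(row, x + [1.0])))
             for row in self.w1]
        o = [self._sig(sum(w * v for w, v in zip(row, h + [1.0])))
             for row in self.w2]
        return h, o

    def train_step(self, x, target, lr=0.5):
        h, o = self.forward(x)
        # output deltas: error times sigmoid derivative
        d_o = [(t - y) * y * (1 - y) for t, y in zip(target, o)]
        # hidden deltas: back-propagate through w2 (before updating it)
        d_h = [hj * (1 - hj) * sum(d * self.w2[k][j] for k, d in enumerate(d_o))
               for j, hj in enumerate(h)]
        for k, d in enumerate(d_o):
            for j, v in enumerate(h + [1.0]):
                self.w2[k][j] += lr * d * v
        for j, d in enumerate(d_h):
            for i, v in enumerate(x + [1.0]):
                self.w1[j][i] += lr * d * v
```

In the paper's setting the input vector would be the eight texture, color, and geometric features, and the outputs would encode the defect classes.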
Mathematical formulations of the features.

| Feature | Formulation |
|---|---|
| Distance variance | $\operatorname{Var}[D(x, y)]$ over the ROI pixels |
| Mean gray level | $\mu = \frac{1}{N} \sum_{(x,y)} g(x, y)$ |
| Gray level variance | $\sigma^2 = \frac{1}{N} \sum_{(x,y)} \big( g(x, y) - \mu \big)^2$ |
| Contrast (orientations 45°, 90°, 135° and 180°) | $\sum_{i} \sum_{j} (i - j)^2 \, p(i, j, d, \theta)$ |
| Entropy (orientations 45°, 90°, 135° and 180°) | $-\sum_{i} \sum_{j} p(i, j, d, \theta) \log p(i, j, d, \theta)$ |
| Energy (orientations 45°, 90°, 135° and 180°) | $\sum_{i} \sum_{j} p(i, j, d, \theta)^2$ |

Note: D(x, y) is the distance between location (x, y) and the outlet center; N is the number of ROI pixels; p(i, j, d, θ) represents the relative frequency in a gray level co-occurrence matrix (GLCM) of an image g(x, y), where i is the gray level at location (x, y) and j is the gray level of a neighboring pixel at a distance d and an orientation θ from location (x, y). GLCMs of distance 1 pixel and orientations 45°, 90°, 135° and 180° are used for the isolated image.
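The GLCM and its contrast, entropy, and energy features can be computed directly from the definitions in the note above. A minimal pure-Python sketch (function names and the `(dx, dy)` offset encoding of distance and orientation are assumptions):

```python
import math

def glcm(image, dx, dy, levels):
    """Normalized gray level co-occurrence matrix for offset (dx, dy),
    i.e. distance and orientation expressed as a pixel displacement."""
    p = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    n = 0
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                p[image[y][x]][image[y2][x2]] += 1
                n += 1
    return [[c / n for c in row] for row in p]

def glcm_features(p):
    """Contrast, entropy, and energy of a normalized GLCM."""
    contrast = sum((i - j) ** 2 * pij
                   for i, row in enumerate(p) for j, pij in enumerate(row))
    entropy = -sum(pij * math.log(pij) for row in p for pij in row if pij > 0)
    energy = sum(pij ** 2 for row in p for pij in row)
    return contrast, entropy, energy
```

High contrast indicates sharp local gray-level transitions (e.g. metal fillings against the outlet), while high energy indicates a uniform texture.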
Figure 12. Segmentation and classification steps.
Figure 13. (a) Original image; (b) segmented binary image after the CI algorithm; (c) segmented image after the CI algorithm; (d) classification result.
Examples of classification failure and explanations.

| Case | Explanation |
|---|---|
| Case 1: Deckle edge | Deckle edges are removed by the ROI segmentation operator. |
| Case 2: Deckle edge | Deckle edges cannot be segmented by the CI algorithm because they are not obvious. |
| Case 3: Pellet fillings | Pellet fillings cannot be segmented by the CI algorithm because they are not obvious. |
| Case 4: Dark image | The image is blurred and dark. |