Jingjing Tang¹, Li Wang¹, Jing Huang², Aiye Shi¹, Lizhong Xu¹.
Abstract
Semantic feature recognition in colour images is required for identifying uneven patterns in object detection and classification. The semantic features are identified by segmenting the colorimetric sensor array features through machine-learning paradigms. Semantic segmentation is a method for identifying distinct elements in an image; it can be regarded as image classification at the pixel level. This article introduces a semantic feature-dependent array segmentation method (SFASM) to improve recognition accuracy in the presence of irregular semantics. The proposed method incorporates a deep convolutional neural network to detect semantic and non-semantic features from sensor array representations. The colour distributions per array are identified for horizontal and vertical semantics analysis. In this analysis, deep learning classifies the uneven patterns based on colour distribution, i.e. the consecutive and scattered colour-distribution pixels in an array are correlated by their similarity. This similarity identification is maximized through max-pooling and recurrent iterations, preventing detection errors. The proposed method classifies the semantic features for further correlation sections, improving accuracy. The method's performance is validated using the metrics precision, analysis time and F1-score.
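The abstract's core idea of correlating consecutive and scattered colour-distribution pixels by similarity, sharpened with max-pooling, can be sketched as follows. This is a minimal NumPy illustration under assumptions of our own (per-cell colour histograms as features, cosine similarity, an 8×8 cell size), not the paper's actual implementation:

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Flatten a patch's per-channel colour histograms into one
    normalized feature vector describing its colour distribution."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(patch.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / (h.sum() + 1e-9)

def cosine_similarity(a, b):
    """Similarity of two colour-distribution vectors in [0, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def max_pooled_similarity(image, ref_patch, cell=8):
    """Slide over the image in cell-sized blocks, score each block's
    colour distribution against a reference patch, and max-pool the
    scores to keep the strongest match (assumed pooling scheme)."""
    h, w = image.shape[:2]
    ref = color_histogram(ref_patch)
    scores = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            block = image[y:y + cell, x:x + cell]
            scores.append(cosine_similarity(color_histogram(block), ref))
    return max(scores)
```

For instance, a uniformly coloured image scored against a reference patch of the same colour yields a max-pooled similarity near 1, while a patch of an entirely different colour scores near 0.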
Year: 2022 PMID: 36210987 PMCID: PMC9546663 DOI: 10.1155/2022/2439371
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Proposed method.
Figure 2. Semantic segmentation process illustration.
Figure 3. Distribution process illustration.
Figure 4. Learning process for recognition and correlation.
Output for distribution.
Output for semantics.
Output for detection.
Figure 5. Analysis of C and S for varying ρ.
Figure 6. Error ratio and ρ(D, S) analysis for varying iterations.
Figure 7. Recognition accuracy analysis.
Figure 8. Precision analysis.
Figure 9. F1-score analysis.
Figure 10. Error analysis.
Figure 11. Analysis time.
Comparative analysis for patterns.
| Metrics | SCN | SOSD-Net | CBF | SFASM |
|---|---|---|---|---|
| Accuracy | 0.7985 | 0.8201 | 0.8744 | 0.9194 |
| Precision | 0.8281 | 0.8614 | 0.9085 | 0.9415 |
| F1-score | 0.682 | 0.853 | 0.867 | 0.9195 |
| Error ratio | 18.17 | 15.63 | 10.15 | 7.129 |
| Analysis time (s) | 4.19 | 3.35 | 2.66 | 1.738 |
The proposed method achieves 8.84% higher accuracy, 7.55% higher precision, an 11.88% higher F1-score, a 7.5% lower error ratio and 8.15% less analysis time.
Comparative analysis for features.
| Metrics | SCN | SOSD-Net | CBF | SFASM |
|---|---|---|---|---|
| Accuracy | 0.7843 | 0.8206 | 0.8642 | 0.9196 |
| Precision | 0.8271 | 0.8598 | 0.9084 | 0.9472 |
| F1-score | 0.691 | 0.755 | 0.842 | 0.9191 |
| Error ratio | 15.66 | 13.78 | 10.51 | 7.198 |
| Analysis time (s) | 4.12 | 3.43 | 2.59 | 1.706 |
The proposed method achieves 9.66% higher accuracy, 8.21% higher precision, a 15.64% higher F1-score, a 6.12% lower error ratio and 8.25% less analysis time.