
Surface Defect Segmentation Algorithm of Steel Plate Based on Geometric Median Filter Pruning.

Zhiqiang Hao1,2,3, Zhigang Wang1,2, Dongxu Bai1,4, Xiliang Tong3,4.   

Abstract

The parameter redundancy of detection models makes them difficult to deploy on factory embedded devices. This paper analyses existing deep learning model compression algorithms and proposes a model pruning algorithm based on geometric median filtering for the structured pruning and compression of defect segmentation detection networks. Experimental comparison and optimization show that the proposed algorithm greatly reduces network parameters and computational effort, achieving effective pruning of the defect detection algorithm for steel plate surfaces.
Copyright © 2022 Hao, Wang, Bai and Tong.


Keywords:  deep learning; defect detection; model compression; semantic segmentation; structured pruning

Year:  2022        PMID: 35845429      PMCID: PMC9283705          DOI: 10.3389/fbioe.2022.945248

Source DB:  PubMed          Journal:  Front Bioeng Biotechnol        ISSN: 2296-4185


1 Introduction

When applying defect detection and segmentation algorithms to real industrial production scenarios, hardware resources are a challenge that must be faced (Tang et al., 2017; Liu H. et al., 2020; Hao et al., 2021). Complex models often mean better detection capability, but their high memory footprint and huge consumption of computational resources make them impractical on resource-limited hardware platforms (Sun H. et al., 2020, Sun et al., 2022c; Tao et al., 2022; Wu et al., 2022; Zhao et al., 2022). Compressing redundant neural network models is therefore essential. Model pruning is a fast and effective way to compress neural networks: unimportant neurons or filters are cut out to obtain a network model with small storage capacity and fast inference. Because the pruned model inherits the weights of the network before pruning, it can reach good optimization results and is well suited to mobile deployment. For real-time applications such as surface EMG signal processing (Li et al., 2019b, Li et al., 2020; Sun et al., 2020a; Qi et al., 2020; Yang et al., 2021), gesture recognition (Duan et al., 2021, Liu X. et al., 2022, Liu et al., 2022a, Liu et al., 2022b, Luo et al., 2020, Jiang et al., 2019a, b, Xu et al., 2022, Sun et al., 2022a) and quality inspection (Chen et al., 2021a, Chen et al., 2021b, Huang L. et al., 2021, Jiang et al., 2021a, Jiang et al., 2021b, Sun et al., 2021b, Chen et al., 2022a, Chen et al., 2022b, Chen et al., 2022c, Huang et al., 2022, Sun et al., 2022b, Yun et al., 2022b, Zhang et al., 2022), model compression effectively reduces the memory and computational power consumed by the original large neural network and improves training and inference speed. Moreover, compressed models are easier to deploy and update on embedded and mobile devices with limited storage space, facilitating the development of smart factories (Li et al., 2019a; Li et al., 2019c; Yun et al., 2022a).

The key contributions of this work are: 1) A model pruning method based on improved geometric median filter pruning is proposed on the basis of structured pruning. 2) The pruning method and process are improved by performing model acceleration and fine-tuning during structured pruning, with an evaluation function determining whether the pruning end condition is satisfied, which improves pruning compression efficiency. 3) Experimental comparison shows that the proposed improved geometric median filter pruning method outperforms other classical pruning methods, with better detection performance and pruning efficiency in steel plate surface defect segmentation.

The rest of this paper is organized as follows: Section 2 discusses related work on model compression in recent years. Section 3 analyses model pruning methods, details the advantages and disadvantages of unstructured and structured pruning, and on that basis proposes a structured pruning method based on geometric median filtering for compressing steel plate surface defect models. After a brief introduction of the open-source steel plate surface defect dataset and the experimental environment, Section 4 presents an experimental comparison of the proposed pruning algorithm with other pruning algorithms to demonstrate its effectiveness. Section 5 concludes the paper with an outlook.

2 Related Work

In recent years, in order to perform more complex information processing tasks, deep-learning-based neural network models have grown ever deeper and more computationally intensive, making it difficult to deploy them on devices with scarce computational resources or strict latency requirements (Liu H. et al., 2022). As a result, compression of neural network models is becoming increasingly important. For applications such as steel plate surface defect detection, where real-time requirements are particularly stringent, reducing the computational cost and storage requirements and speeding up computation matters even more. Currently, five main neural network model compression methods are widely used (Gao et al., 2021): low-rank decomposition, structural design, knowledge distillation, parameter quantization and model pruning; short descriptions are given in Table 1.
TABLE 1

Model compression methods.

Method | Method description | Advantages and disadvantages
Low-rank decomposition | Low-rank decomposition of parameter matrices | Decomposing the parameter matrix is difficult and requires large hardware resources
Structural design | Designing special convolution kernels | Constructs new modules that must be trained from scratch
Knowledge distillation | Trains and optimizes a small network with a large model as a guide | Trained from scratch; model performance is sensitive to the network structure
Parameter quantization | Replaces high-precision weight parameters with low-precision ones | Quantized parameters are often not differentiable, so the actual update may deviate from the original gradient direction
Model pruning | Crops parameters that are unimportant to the final accuracy | The pruned model retains some robustness and can achieve good optimization results
Liu M. et al. (2020) proposed a joint optimization model of low-rank matrix bi-factor decomposition and structured sparse matrix decomposition, and applied it to saliency target detection with low time complexity. Zhang and Chen (2019) modelled the detection of defects on the track surface as a low-rank matrix decomposition problem, calculated the row accumulation of the sparse matrix obtained from the decomposition, and searched for the maximum connected region to determine the defect location, realizing automatic detection and localization of defects. Wang et al. (2018) exploited multiple independent and complementary cues in the multi-view feature space, which outperform any single cue, and proposed decomposing the latent low-dimensional data cluster representations into structured low-rank representations, improving clustering performance by exploring multi-view consensus structures with an efficient alternating minimization strategy. Ouyang (2021) proposed an improved autoencoder architecture based on an extreme learning machine that uses low-rank matrix decomposition to learn optimal low-dimensional features, enhancing the representational and non-linear capabilities of the features. However, due to the large computational cost of matrix decomposition, these approaches inherently take longer training time and require more hardware resources. DenseNet (Huang et al., 2017) is a densely connected neural network, with connections between any two layers, combining the information of all previous layers as the input features of the next layer; introducing a feature channel scaling factor and a resolution scaling factor further reduces the computational effort of the network. Inception (Szegedy et al., 2016), on the other hand, mainly uses 1 × 1 filters instead of 3 × 3 filters, saving parameters in the network.
ShuffleNet (Xin et al., 2021) divides the feature channels into multiple groups for convolution and shuffles the channels to increase the information exchange between different feature groups. MobileNet (Sun et al., 2021a) designs a depthwise separable convolution module and fuses the information of different feature channels by 1 × 1 convolution. In addition, researchers often introduce 1 × 1 filters between 3 × 3 filters to reduce the number of input and output channels of the feature map. Although lightweight networks are effective in reducing the computational complexity of the network, a large amount of redundancy remains and the design requirements are high. Huang J. et al. (2021) replaced traditional static convolution by constructing a dynamic convolution module incorporating an attention mechanism to transfer dynamic feature knowledge from the teacher network to the student network, achieving high-accuracy defect recognition while significantly reducing model inference time. Liu et al. (2021) proposed a neural network compression algorithm based on knowledge distillation and adversarial learning, and allowed the teacher and student networks to learn from each other in the second half of training, enabling the student network to explore its own optimal solution space. Park and Yong (2020) proposed applying channel and spatial correlation loss functions and an adaptive cross-entropy loss function to train a lightweight network, with a heavyweight network performing semantic segmentation; knowledge distillation from the heavy network (teacher) to the light network (student) improves the performance of the student network. Zhang et al. (2021) proposed a novel two-branch network that takes three pairs of original and transformed images as input and incorporates a class activation map to drive the network to mine the most relevant class-specific regions.
This strategy ensures that the network generates differentiated embeddings, and a round of self-knowledge distillation is set up to prevent overfitting and improve performance. However, compared with other compression methods (Sarakon et al., 2021), the whole training process of knowledge distillation takes longer and is only applicable to neural networks with softmax layers. Rao et al. (2019) proposed a deep neural network compression method based on dynamic quantization coding, in which the quantization codebook is updated simultaneously during training so that it minimizes the error caused by quantizing larger weight parameters. Sun H. et al. (2020) proposed a lightweight image compression neural network based on parameter quantization, quantizing the model parameters from 32-bit floating point to 8-bit integer and saving 73% of storage space compared with the original model. Chen et al. (2019) proposed an efficient convolutional-neural-network-based fast decision method for quantization parameter selection in video coding, comparing rate-distortion costs to calculate the optimal quantization parameters and saving encoding time. Because feature extraction is important for steganalysis of content-adaptive JPEG steganography, Xu et al. (2018) proposed a scale covariance matrix feature based on a two-dimensional Gabor filter and used diverse quantization of filter residuals to improve detection performance. Jin et al. (2018) proposed a hybrid pruning method combining weight pruning and convolutional kernel pruning: the convolutional kernels that contribute least to the overall accuracy of the network are pruned first, and the pruned model is then weight-pruned to achieve further compression. Wei et al. (2021) obtained a deep convolutional neural network model with sparse parameters by training with sparse regularization, and combined the sparsity of the convolutional and batch normalization layers to perform structured pruning that removes redundant filters. Ziani et al. (2018) proposed a vertical partition pruning method based on the maximum frequent item set, which effectively prunes the potential search space when searching for optimal solutions. Zhang H. et al. (2020) performed model compression by enforcing channel-level sparsity pruning in a YOLOv3 network, tested the effect of different gradient optimizers on model pruning, and finally used the Adam optimizer to optimize the model. Jia et al. (2021) proposed a novel solution for salient object detection that evaluates the parameters in the training model according to their significant energy levels as a way to distinguish between background and salient objects. The above model compression methods still have problems: some require large hardware resources for acceleration, redundancy remains high, the stability and robustness of the compressed network are difficult to guarantee in complex environments, and the network models lack self-adaptability.

3 Improved Geometric Median Filter Based Pruning Algorithm

3.1 Model Pruning Methods

There are two main types of model pruning methods: unstructured pruning and structured pruning. Unstructured pruning prunes individual neurons or connection weights: some non-zero elements in the network computation are set to zero, or the dense connections of the network are turned into sparse ones, turning the original dense matrix operation into a sparse matrix operation, as shown in Figure 1. In Figure 1, the dashed box marks neurons pruned to zero, and the dashed connections mark dense connections pruned to sparse ones, i.e., weight pruning.
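As a minimal sketch of the unstructured pruning just described (an illustration only; the function name and tensor shapes are our assumptions, not from the paper), magnitude-based weight pruning zeroes the smallest-magnitude elements while keeping the dense tensor shape:

```python
import numpy as np

def unstructured_prune(weight: np.ndarray, sparsity: float) -> np.ndarray:
    """Magnitude-based unstructured pruning: zero the smallest-magnitude
    weights (individual elements, not whole filters), keeping the shape."""
    flat = np.abs(weight).ravel()
    k = int(sparsity * flat.size)          # number of elements to prune
    if k == 0:
        return weight.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest |w|
    mask = np.abs(weight) > threshold              # keep larger weights only
    return weight * mask
```

The result is a same-shape tensor whose dense connections have become sparse; as noted later in this section, such sparsity only yields real speedups on hardware or libraries with sparse-kernel support.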
FIGURE 1

Unstructured pruning diagram.

Structured pruning operates at the filter level, pruning the filters that contribute less in each layer of the network. When a filter's norm value (the filter's importance factor) falls below a set threshold, the redundant filter is pruned, as shown in Figure 2. In the figure, pruning the jth filter of the ith convolutional layer also removes the corresponding input channel of the (i + 1)th convolutional layer. Thus, structured pruning can effectively reduce the network model size without destroying the convolutional structure.
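The norm-based criterion above can be sketched in a few lines (a hypothetical illustration; the names and shapes are ours, not the paper's): filters of one convolutional layer are ranked by L1 norm, and the smallest ones are removed outright, shrinking the output channels instead of creating sparse kernels:

```python
import numpy as np

def smallest_norm_filters(conv_weight: np.ndarray, prune_num: int) -> np.ndarray:
    """Rank the filters of one conv layer (shape [out_ch, in_ch, k, k]) by
    L1 norm; the smallest-norm filters are the structured-pruning candidates."""
    n = conv_weight.shape[0]
    norms = np.abs(conv_weight.reshape(n, -1)).sum(axis=1)
    return np.argsort(norms)[:prune_num]

def prune_filters(conv_weight: np.ndarray, idx: np.ndarray) -> np.ndarray:
    """Physically remove the selected filters, shrinking the output channels;
    unlike unstructured pruning, the result stays a dense tensor."""
    keep = np.setdiff1d(np.arange(conv_weight.shape[0]), idx)
    return conv_weight[keep]
```

Because whole filters disappear, the corresponding input channels of the next layer can be dropped as well, which is what makes structured pruning hardware-friendly.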
FIGURE 2

Structured pruning diagram.

Since the convolution kernels obtained after unstructured pruning are sparse, and most GPUs today do not provide additional acceleration for sparse matrix operations, an unstructured-pruned network gains no speedup over the original network and may even be slower. Structured pruning is therefore now the more general approach and is relatively more efficient than unstructured pruning. Moreover, using a structured-pruned network does not require support from specific hardware platforms or computational libraries, effectively avoiding the drawbacks of unstructured pruning and enabling direct deployment on today's mainstream deep learning frameworks (Liu et al., 2017).

3.2 Geometric Median Filtering Based Detection Model Pruning Algorithm

Model structured pruning requires a criterion for selecting the filters to be pruned, i.e., a measure of each filter's importance. The most common criterion compares a filter's norm value against a threshold: if the norm falls below the threshold, the filter is set to zero, i.e., pruned. He et al. (2019) proposed a new filter pruning method, geometric median filter pruning, which is a type of structured pruning. Unlike previous methods, geometric-median-based pruning compresses the convolutional neural network model by removing redundant filters. It calculates the geometric median of the filters within the same layer; by the properties of the geometric median, filters near it can be represented by the remaining filters, so pruning them does not have a substantial negative impact on model performance.

In d-dimensional space, given any set of n points $a^{(1)}, \dots, a^{(n)}$ with $a^{(i)} \in \mathbb{R}^d$, there exists a point minimizing the sum of Euclidean distances to all the points; this point is called the geometric median (GM) and is calculated as

$$x^{GM} = \mathop{\arg\min}_{x \in \mathbb{R}^d} f(x), \qquad f(x) \stackrel{\text{def}}{=} \sum_{i \in [1,n]} \lVert x - a^{(i)} \rVert_2,$$

where $\arg\min$ denotes the value of the variable at which the objective function attains its minimum, $[1, n] = \{1, \dots, n\}$, and "def" means the function is defined as shown. The geometric median is a classical robust estimator of data centrality in Euclidean space; during pruning it is used to capture the common information of all filters within a single layer i as the geometric median of that layer:

$$F^{GM}_i = \mathop{\arg\min}_{x \in \mathbb{R}^{N_i \times K \times K}} \sum_{j' \in [1, N_{i+1}]} \lVert x - F_{i,j'} \rVert_2,$$

where the sum runs over the Euclidean distances from $x$ to all filters of layer i; $F_{i,j'} \in \mathbb{R}^{N_i \times K \times K}$ denotes the j'th filter of layer i, and $N_i$, $N_{i+1}$ and $K$ denote the numbers of input and output channels and the kernel size of layer i, respectively.

The core idea of geometric median filter pruning is that filters within layer i that are close to the geometric median $F^{GM}_i$ are redundant, and clipping them will not have a large impact on network performance. These redundant filters are

$$F_{i,j^*} = \mathop{\arg\min}_{j' \in [1, N_{i+1}]} \lVert F_{i,j'} - F^{GM}_i \rVert_2,$$

i.e., the filters nearest to the geometric median. Their information can be replaced by the other filters, and after fine-tuning the network can easily recover its original performance, so this pruning has little impact on the final detection result.

The flow of pruning based on geometric median structured pruning is shown in Figure 3. First, a pre-trained detection model to be compressed is input, and the pruning rate and pruning layers are set. The pruning rate can be set between 0 and 1, and the pruning layers can be convolutional layers, fully connected layers, Batchnorm layers, etc.
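The geometric median criterion can be sketched as follows (a sketch under the approximation used in common FPGM-style implementations, where the filter minimising the summed distance to all other filters stands in for the exact geometric median; the function name is ours):

```python
import numpy as np

def gm_redundant_filters(conv_weight: np.ndarray, prune_num: int) -> np.ndarray:
    """Select the filters closest, in total Euclidean distance, to all other
    filters of the layer, i.e., those nearest the layer's geometric median;
    these are treated as redundant and pruned."""
    n = conv_weight.shape[0]
    flat = conv_weight.reshape(n, -1)
    # Pairwise Euclidean distances between flattened filters.
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    # Filters with the smallest summed distance sit nearest the geometric
    # median, so their information is best covered by the remaining filters.
    return np.argsort(dist.sum(axis=1))[:prune_num]
```

Note the contrast with the norm criterion: a filter with a large norm can still be redundant if many similar filters exist in the same layer, which is exactly the case the geometric median catches.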
FIGURE 3

Structured pruning flow chart.

The structured pruning process in this paper includes updating the mask, updating the indirect sparsity and updating the modules. After each pruning step, the model is accelerated and fine-tuned to optimize it. Finally, an evaluation score is calculated to determine whether the end condition is met: if it is, the pruned and compressed model is output; if not, the pruning process continues. The geometric median filtering algorithm can effectively improve the compression rate of neural networks and reduce detection model redundancy, and the pruned detection model can be deployed on portable devices for faster processing (Ran et al., 2022). In this paper, a model pruning algorithm based on geometric median filtering is used to compress the steel plate surface defect detection network and implement a defect segmentation detection algorithm with a reduced number of parameters and lower computational effort.
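The prune → fine-tune → evaluate loop of Figure 3 can be sketched as follows (a schematic only: the callables and the stopping rule, here a minimum evaluation score, are our placeholders rather than the paper's exact evaluation function):

```python
def prune_until_converged(model, prune_step, fine_tune, evaluate,
                          min_score, max_rounds=10):
    """Sketch of the Figure 3 loop: prune one step (update masks / remove
    filters), fine-tune to recover accuracy, then check the evaluation
    score against the end condition."""
    for _ in range(max_rounds):
        model = prune_step(model)   # update mask / accelerate / update modules
        model = fine_tune(model)    # recover accuracy lost to pruning
        if evaluate(model) < min_score:
            break                   # end condition met: stop pruning
    return model
```

Interleaving fine-tuning with each pruning round, rather than pruning everything at once, is what lets the network repeatedly recover before the next cut.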

4 Experimental Results and Analysis

4.1 Open Source Surface Defect Dataset for Steel Plates

The Severstal dataset was released open source on the competition platform Kaggle. It contains 12,568 training images and 1,801 test images; the training set has 5,902 defect-free images and 6,666 defective images, so the numbers of defective and non-defective images are roughly equal, and most images have no defects or contain only one type of defect (Hao et al., 2022). All images have a resolution of 256 × 1,600 pixels. The dataset covers four types of steel surface defects, shown in Figure 4: (A) pit defects, (B) edge crack defects, (C) scratch and scrape defects and (D) rolled-in scale defects; (E) shows a non-defect image.
FIGURE 4

Surface defect data for Severstal plates. (A) Pit defects, (B) Edge crack defects, (C) Scratch and scrape defects, (D) Rolled-in scale defects and (E) Non-defect images.

The Severstal dataset contains large variations in morphology between different surface defects, from large defects such as scratches and scrapes to very small defects such as pits and edge cracks. This extremely large span of defect scales places high demands on the defect detection and segmentation algorithm: it must focus on details to achieve fine segmentation, and it must attend to the global picture with a sufficient receptive field for large-scale defects. These factors make feature extraction and detection segmentation difficult and motivate the pruning and compression of the defect detection model.

4.2 Experimental Environment Configuration

The algorithm research and network training in this paper were conducted on a laboratory server. The specific computer systems used and the configuration of the experimental environment are shown in Table 2.
TABLE 2

Experimental environment configuration.

Project | Configuration
Operating system | Windows 10
CPU | i7-9700k
GPU | RTX 2080 Ti
RAM | DDR5 16 GB × 4
Programming language | Python 3.7
Deep learning framework | PyTorch 1.10
Leveraging the rich ecosystem and scalability of the Python language and the open-source framework PyTorch (Bai et al., 2021), this paper uses relevant open-source libraries and toolkits to implement the overall algorithmic procedure. These open-source tools greatly reduce the development time of the defect detection and segmentation procedure, allowing more time and effort to be devoted to the research, improvement and experimental validation of the structured pruning algorithm.

4.3 Experiments on Surface Defect Segmentation Detection of Steel Plates Based on Structured Pruning

In order to verify the practical effectiveness of the proposed defect segmentation algorithm based on geometric median filter pruning, experiments with different pruning rates were conducted on different models under the same conditions to test the effect of the pruning rate on model accuracy. Since the main layers of the pruned models are convolutional layers, pruning is performed only on the convolutional layers of the detection models; no pruning experiments were run on the fully connected layers, Batchnorm layers, etc. The input size of the model only affects its computational volume, not its number of parameters; for the pruning experiments the input size was set to [3, 64, 64], i.e., a simulated 3-channel input image of 64 × 64 pixels. The ResNet50 model performs well in image recognition and localization tasks (He et al., 2016). The ResNeXt50 model applies grouped convolution to the ResNet50 model, which greatly reduces the number of parameters and is more effective in many visual recognition tasks (Xie et al., 2017). The FPN-ResNeSt50 model is an improved fusion of FPN (feature pyramid networks) and the ResNeSt50 model (Lin et al., 2017; Zhang Y. et al., 2020), with powerful feature extraction and fusion capabilities and good detection capability for steel plate surface defect segmentation. In this paper, ResNet50, ResNeXt50 and FPN-ResNeSt50 are used as the detection models for defects on the surface of steel plates, and pruning experiments and validation are performed on them. The effect of different pruning rates on the ResNet50 model is shown in Table 3; a pruning rate of 0% indicates that no pruning is applied. For example, at a pruning rate of 40%, the computation of the model is 171.84 M, 48.81% lower than that of the original model, and the number of parameters is 13.15 M, 48.43% lower than that of the original model.
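A toy calculation illustrates why the parameter reduction in Table 3 can exceed the pruning rate itself (48.43% at a 40% rate): channel pruning shrinks both the input and the output channels of interior convolutional layers, so their parameter count falls super-linearly. The helper names are ours; a real network, with its fixed-channel stem, shortcuts and final head, lands between the linear and quadratic extremes, as the tables show:

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters of one k x k convolution layer (weights + biases)."""
    return c_in * c_out * k * k + c_out

def pruned_conv_params(c_in: int, c_out: int, k: int, rate: float) -> int:
    """Channel pruning at `rate` shrinks both the input and output channels
    of an interior layer, so parameters fall faster than linearly."""
    keep_in = max(1, round(c_in * (1 - rate)))
    keep_out = max(1, round(c_out * (1 - rate)))
    return conv_params(keep_in, keep_out, k)

base = conv_params(64, 128, 3)
pruned = pruned_conv_params(64, 128, 3, 0.40)
reduction = 1 - pruned / base   # > 0.40 for this interior layer
```

The same reasoning explains why the FLOP reduction in the tables tracks the parameter reduction closely: with a fixed input size, both scale with the product of the surviving input and output channels.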
TABLE 3

Effect of different pruning rates on the ResNet50 model.

Pruning rate/% | Calculated volume/M | Number of parameters/M | Calculated volume decline rate/% | Rate of decline in number of parameters/%
0 | 335.69 | 25.50 | 0 | 0
10 | 291.82 | 22.17 | 13.07 | 13.06
20 | 249.37 | 18.98 | 25.71 | 25.57
30 | 208.96 | 15.98 | 37.75 | 37.33
40 | 171.84 | 13.15 | 48.81 | 48.43
50 | 136.86 | 10.50 | 59.23 | 58.82
60 | 105.67 | 8.08 | 68.52 | 68.31
70 | 74.70 | 5.72 | 77.75 | 77.57
80 | 42.31 | 3.38 | 87.40 | 86.75
90 | 13.79 | 1.29 | 95.89 | 94.94
The effect of different pruning rates on the ResNeXt50 (32 × 4d) model is shown in Table 4. As the pruning rate increases, the computational volume and number of parameters of the network decrease, and their rates of decline increase.
TABLE 4

Effect of different pruning rates on the ResNeXt50 (32 × 4d) model.

Pruning rate/% | Calculated volume/M | Number of parameters/M | Calculated volume decline rate/% | Rate of decline in number of parameters/%
0 | 347.23 | 24.96 | 0 | 0
10 | 342.73 | 24.75 | 1.3 | 0.84
20 | 336.55 | 24.13 | 3.08 | 3.25
30 | 328.34 | 23.21 | 5.44 | 7.01
40 | 310.92 | 21.73 | 10.43 | 12.94
50 | 288.07 | 19.81 | 17.04 | 20.63
60 | 251.99 | 16.96 | 27.43 | 32.05
70 | 183.47 | 12.63 | 47.16 | 49.40
80 | 112.31 | 8.07 | 67.66 | 67.67
90 | 37.06 | 3.04 | 89.33 | 87.82
However, the structured pruning effect was limited at smaller pruning rates because the ResNeXt50 (32 × 4d) model uses 32-group convolution, which lowers the achievable compression in the early stages. The effects of different pruning rates on the FPN-ResNeSt50 model are shown in Table 5.
TABLE 5

Effect of different pruning rates on the FPN-ResNeSt50 model.

Pruning rate/% | Calculated volume/M | Number of parameters/M | Calculated volume decline rate/% | Rate of decline in number of parameters/%
0 | 508.19 | 27.98 | 0 | 0
10 | 464.23 | 25.94 | 8.65 | 7.28
20 | 425.51 | 23.25 | 16.27 | 16.91
30 | 372.55 | 20.01 | 26.69 | 28.49
40 | 325.04 | 17.56 | 36.04 | 37.23
50 | 268.02 | 14.34 | 47.26 | 48.74
60 | 212.93 | 11.31 | 58.10 | 59.59
70 | 152.86 | 8.18 | 69.92 | 70.78
80 | 95.13 | 5.24 | 81.28 | 81.29
90 | 31.46 | 2.14 | 93.81 | 92.36
Comparing Tables 3, 4 and 5, the pruning experiments show that the more grouped convolutions a network model contains, the lower the compression rate achieved by pruning, since grouped convolution already greatly reduces the number of model parameters. The model pruning algorithm based on geometric median filtering was then used to prune and compress the depth-feature-fusion steel plate surface defect segmentation model; experiments with different pruning rates were conducted under the same conditions to test their effect on the accuracy of the FPN-ResNeSt50 model, and the detection results are shown in Figure 5.
FIGURE 5

Plot of test results for different pruning rates.

At a pruning rate of 40%, the defect detection accuracy begins to decline gradually, and pruning rates above 40% remove important model parameters, causing a serious loss of accuracy. In contrast, at pruning rates of 10%–30%, the model maintains accuracy with only a small loss. The test results show that a small pruning rate brings regularization to the network and enhances its generalization performance, whereas a large pruning rate severely damages the network's representational capacity and significantly degrades model performance.

5 Conclusion

In order to solve the problems of the large number of model parameters and the difficulty of applying the model to actual plant equipment, this paper investigates a defect segmentation detection algorithm based on geometric median filter pruning. On the basis of structured pruning, a model pruning algorithm based on geometric median filtering is proposed to prune and compress the defect segmentation detection network, greatly reducing the network parameters and computational effort and improving the generalization ability of the model. Through experimental comparison and optimization, the detection accuracy for steel surface defects is improved while the parameters and computation of the detection model are reduced. The proposed pruning and compression algorithm has good application prospects in the segmentation and detection of steel plate surface defects: a good pruning algorithm can run on a variety of factory embedded or portable mobile devices and meet the demand for real-time detection. Model pruning and compression research nevertheless still has a long way to go.