| Literature DB >> 36201486 |
Lili Fu1, Shijun Li2, Shuolin Kong1, Ruiwen Ni1, Haohong Pang1, Yu Sun1, Tianli Hu1, Ye Mu1, Ying Guo1, He Gong1.
Abstract
Individual cow identification is a prerequisite for intelligent dairy farming management and is important for achieving accurate, information-driven dairy farming. Computer vision-based approaches are widely considered because of their non-contact and practical advantages. In this study, a method combining the Ghost module and an attention mechanism is proposed to improve ResNet50 and achieve non-contact individual recognition of cows. In the model, coarse-grained features of cows are extracted using the large receptive field of dilated convolution, which also reduces the number of model parameters to some extent. ResNet50 consists of two Bottlenecks with different structures, and a plug-and-play Ghost module is inserted between the two Bottlenecks to reduce the number of parameters and the computation of the model using cheap linear operations, without reducing the size of the feature maps. In addition, the convolutional block attention module (CBAM) is introduced after each stage of the model to help the model assign different weights to each part of the input and extract the more critical and important information. In our experiments, side-view images of 13 cows were collected to train the model. The final recognition accuracy of the model was 98.58%, which was 4.8 percentage points better than that of the original ResNet50; the number of model parameters was reduced by a factor of 24.85, and the model size was only 3.61 MB. To verify the validity of the model, it was compared with other networks, and the results show that our model has good robustness. This research overcomes the shortcoming of traditional recognition methods that require manual feature extraction, and provides a theoretical reference for further animal recognition.
Year: 2022 PMID: 36201486 PMCID: PMC9536640 DOI: 10.1371/journal.pone.0275435
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.752
Fig 1. Individual data from 13 cows used in this research.
Fig 2. The model structure built in this research.
Fig 3. Dilated convolution of 7×7 size used in the model.
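A dilated convolution reaches a 7×7 receptive field with far fewer weights than a dense 7×7 kernel. A minimal sketch of the effective-kernel arithmetic (the kernel size and dilation rate below are illustrative assumptions, not values stated in the paper):

```python
def effective_kernel(k: int, d: int) -> int:
    """Effective receptive field of a k x k convolution with dilation rate d."""
    return d * (k - 1) + 1

# A 3x3 kernel with dilation rate 3 covers the same 7x7 window as a
# dense 7x7 convolution, using 9 weights per channel instead of 49.
print(effective_kernel(3, 3))  # -> 7
print(effective_kernel(7, 1))  # -> 7 (a dense 7x7 kernel)
```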
Fig 4. The GhostBottleneck structure used in the model.
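The parameter saving from the Ghost module can be sketched by comparing an ordinary convolution with a Ghost layer that generates part of its output through cheap depthwise linear operations. This rough count assumes a ratio s = 2 and 3×3 cheap operations as in the original GhostNet design (an illustrative assumption, not the paper's exact configuration; bias terms omitted):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in an ordinary k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ghost_params(c_in: int, c_out: int, k: int, s: int = 2, d: int = 3) -> int:
    """Weights in a Ghost layer: a primary conv producing c_out/s intrinsic
    maps, plus d x d depthwise ops generating the remaining ghost maps."""
    intrinsic = c_out // s
    return c_in * intrinsic * k * k + intrinsic * (s - 1) * d * d

dense = conv_params(256, 256, 3)   # 589,824 weights
ghost = ghost_params(256, 256, 3)  # 296,064 weights
print(dense / ghost)               # roughly the ratio s = 2
```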
Fig 5. The CBAM structure used in the model.
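The CBAM weighting can be illustrated with a stripped-down channel-attention gate: each channel is summarized by average- and max-pooling and squashed through a sigmoid to produce a per-channel weight. The shared MLP and the spatial branch of full CBAM are omitted here; this is a simplified sketch, not the paper's implementation:

```python
import math

def channel_gate(feature_maps):
    """Scale each channel by a sigmoid gate built from its average- and
    max-pooled descriptors (simplified CBAM channel attention)."""
    out = []
    for ch in feature_maps:  # ch: flat list of activations in one channel
        avg = sum(ch) / len(ch)
        gate = 1.0 / (1.0 + math.exp(-(avg + max(ch))))  # sigmoid
        out.append([x * gate for x in ch])
    return out

scaled = channel_gate([[1.0, 1.0], [0.0, 0.0]])
# The all-ones channel is scaled by sigmoid(2); the zero channel stays zero.
```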
Hyperparameter settings.
| Hyperparameters | Values |
|---|---|
| Classes | 13 |
| Batch size | 32 |
| Epoch | 50 |
| Optimizer | SGD |
| Learning rate | 0.001 |
| Momentum | 0.9 |
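With the settings in the table, one SGD-with-momentum parameter update can be sketched as follows (the v = momentum·v + grad formulation used by PyTorch-style SGD is assumed; the paper does not state the exact variant):

```python
def sgd_momentum_step(w, v, grad, lr=0.001, momentum=0.9):
    """One SGD update with momentum: accumulate velocity, then step."""
    v = [momentum * vi + gi for vi, gi in zip(v, grad)]
    w = [wi - lr * vi for wi, vi in zip(w, v)]
    return w, v

w, v = sgd_momentum_step([1.0], [0.0], [2.0])
# First step with zero velocity: the weight moves by lr * grad = 0.002.
```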
Comparison results of added modules.
| Model | Dilated Convolution | Ghost | CBAM | Accuracy (%) | Parameters | Model size (MB) |
|---|---|---|---|---|---|---|
| ResNet50 | | | | 94.33 | 23,534,669 | 89.78 |
| A | | | | 94.88 | 1,694,925 | 6.47 |
| B | √ | | | 95.51 | 1,687,437 | 6.44 |
| C | √ | √ | | 96.54 | 923,125 | 3.52 |
| Ours | √ | √ | √ | 98.58 | 947,226 | 3.61 |
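As a sanity check, the parameter counts in the table reproduce the reduction factor claimed in the abstract (a quick arithmetic check, not additional results):

```python
resnet50_params = 23_534_669  # original ResNet50
ours_params = 947_226         # proposed model
print(round(resnet50_params / ours_params, 2))  # -> 24.85, as stated in the abstract
```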
Performance analysis results of different models.
| Architecture | Validation accuracy (%) | FLOPs | Model size (MB) | Time (s) |
|---|---|---|---|---|
| ResNeXt | 95.65 | 4.26G | 87.76 | 0.2703 |
| GoogLeNet | 97.96 | 2G | 21.04 | 0.2088 |
| ShuffleNetV2 | 85.87 | 591.08M | 5.26 | 0.3068 |
| MobileNetV3 | 83.82 | 262.12M | 8.52 | 0.2270 |
| EfficientNet v2 | 87.64 | 2.975G | 22.39 | 0.4006 |
| PVT | 90.57 | 1.86G | 23.98 | 0.2070 |
| Ours | 98.58 | 627.69M | 3.61 | 0.2410 |
Fig 6. Comparison results of different network models: (A) training accuracy; (B) validation accuracy.
Comparison of model performance with results from other researchers.
| Method | Accuracy (%) | Model size (MB) | FLOPs |
|---|---|---|---|
| [ | 99.76 | 9.25 | 581.71M |
| [ | 95.00 | 14.3 | 15.80G |
| [ | 97.95 | 8.58 | 463.28M |
| Ours | 98.58 | 3.61 | 627.69M |