Abozar Nasirahmadi, Barbara Sturm, Sandra Edwards, Knut-Håkan Jeppsson, Anne-Charlotte Olsson, Simone Müller, Oliver Hensel.
Abstract
Posture detection targeted towards providing assessments for the monitoring of health and welfare of pigs has been of great interest to researchers from different disciplines. Existing studies applying machine vision techniques are mostly based on methods using three-dimensional imaging systems, or two-dimensional systems with the limitation of monitoring under controlled conditions. Thus, the main goal of this study was to determine whether a two-dimensional imaging system, along with deep learning approaches, could be utilized to detect the standing and lying (belly and side) postures of pigs under commercial farm conditions. Three deep learning-based detector methods, including faster regions with convolutional neural network features (Faster R-CNN), single shot multibox detector (SSD) and region-based fully convolutional network (R-FCN), combined with Inception V2, Residual Network (ResNet) and Inception ResNet V2 feature extractors of RGB images, were proposed. Data from different commercial farms were used for training and validation of the proposed models. The experimental results demonstrated that the R-FCN ResNet101 method was able to detect lying and standing postures with high average precision (AP) of 0.93, 0.95 and 0.92 for standing, lying on side and lying on belly postures, respectively, and a mean average precision (mAP) of more than 0.93.
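The AP scores reported here are standard object-detection metrics: a predicted bounding box counts as a true positive only when it overlaps a same-class ground-truth box by more than an intersection-over-union (IoU) threshold. As a minimal sketch (the box format and the 0.5 threshold are conventional assumptions, not details taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection is typically matched to ground truth when IoU > 0.5;
# AP then summarizes precision over recall for each posture class.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
```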
Keywords: convolutional neural networks; livestock; lying posture; standing posture
Year: 2019 PMID: 31470571 PMCID: PMC6749226 DOI: 10.3390/s19173738
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Example of images used for development of the detection algorithms in this study.
Figure 2. Examples of the three posture classes used in this study in different farming conditions.
Details of image data sets used for posture detection (number of individual postures).

| Posture Classes | Training | Validation | Total (Training Process) | Test |
|---|---|---|---|---|
| Standing | 11,632 | 4372 | 16,004 | 1839 |
| Lying on side | 11,435 | 4085 | 15,520 | 1489 |
| Lying on belly | 15,781 | 5480 | 21,261 | 1659 |
| Total samples | 38,848 | 13,937 | 52,785 | 4987 |
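Within each posture class, roughly 73–74% of the annotated samples fall in the training set and the rest in validation. A quick check of those proportions from the per-class counts (the dictionary keys are just labels for the counts in the table):

```python
# (training, validation) annotation counts per posture class, from the table
counts = {
    "standing": (11_632, 4_372),
    "lying on side": (11_435, 4_085),
    "lying on belly": (15_781, 5_480),
}

for posture, (train, val) in counts.items():
    share = train / (train + val)
    print(f"{posture}: {share:.1%} of annotations used for training")
```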
Figure 3. Schematic diagram of the faster regions with convolutional neural network (Faster R-CNN) used in this study.
Figure 4. Schematic diagram of the region-based fully convolutional network (R-FCN) used in this study.
Figure 5. Schematic diagram of the single shot multibox detector (SSD) used in this study.
Performance of the detection phase on the test data set at various learning rates (per-class values are average precision, AP).

| Detector | Learning Rate | Feature Extractor | Standing | Lying on Side | Lying on Belly | mAP |
|---|---|---|---|---|---|---|
| Faster R-CNN | 0.03 | Inception V2 | 0.82 | 0.87 | 0.88 | 0.86 |
| Faster R-CNN | 0.03 | ResNet50 | 0.80 | 0.85 | 0.83 | 0.83 |
| Faster R-CNN | 0.03 | ResNet101 | 0.87 | 0.86 | 0.81 | 0.85 |
| Faster R-CNN | 0.03 | Inception-ResNet V2 | 0.79 | 0.83 | 0.77 | 0.80 |
| R-FCN | 0.03 | ResNet101 | 0.88 | 0.88 | 0.87 | 0.88 |
| SSD | 0.03 | Inception V2 | 0.69 | 0.70 | 0.68 | 0.69 |
| Faster R-CNN | 0.003 | Inception V2 | 0.90 | 0.93 | 0.91 | 0.91 |
| Faster R-CNN | 0.003 | ResNet50 | 0.85 | 0.92 | 0.89 | 0.88 |
| Faster R-CNN | 0.003 | ResNet101 | 0.93 | 0.92 | 0.89 | 0.91 |
| Faster R-CNN | 0.003 | Inception-ResNet V2 | 0.86 | 0.89 | 0.84 | 0.86 |
| R-FCN | 0.003 | ResNet101 | 0.93 | 0.95 | 0.92 | 0.93 |
| SSD | 0.003 | Inception V2 | 0.76 | 0.79 | 0.74 | 0.76 |
| Faster R-CNN | 0.0003 | Inception V2 | 0.85 | 0.90 | 0.89 | 0.87 |
| Faster R-CNN | 0.0003 | ResNet50 | 0.85 | 0.86 | 0.87 | 0.86 |
| Faster R-CNN | 0.0003 | ResNet101 | 0.87 | 0.89 | 0.87 | 0.88 |
| Faster R-CNN | 0.0003 | Inception-ResNet V2 | 0.80 | 0.85 | 0.79 | 0.81 |
| R-FCN | 0.0003 | ResNet101 | 0.90 | 0.90 | 0.88 | 0.89 |
| SSD | 0.0003 | Inception V2 | 0.75 | 0.80 | 0.72 | 0.76 |
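The mAP column is simply the unweighted mean of the three per-class AP values. For the best configuration reported in the abstract, R-FCN ResNet101 at a learning rate of 0.003, this reproduces the "more than 0.93" figure:

```python
# Per-class AP of R-FCN ResNet101 at learning rate 0.003, from the table
ap = {"standing": 0.93, "lying on side": 0.95, "lying on belly": 0.92}

map_score = sum(ap.values()) / len(ap)
print(f"mAP = {map_score:.3f}")  # 0.933 — i.e. "more than 0.93" as reported
```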
Figure 6. Training and validation loss during the (a) Faster R-CNN, (b) R-FCN and (c) SSD training processes.
Figure 7. Examples of detected standing (light green rectangle), lying on belly (yellow rectangle) and lying on side (green rectangle) postures for the six different models in various farming conditions.
Figure 8. Sample images of standing postures which resemble the belly-lying posture in top view.
Confusion matrix of the proposed R-FCN ResNet101 on the test data set at a learning rate of 0.003 (rows: actual class; columns: predicted class).

| Actual Class | Predicted Standing | Predicted Lying on Side | Predicted Lying on Belly |
|---|---|---|---|
| Standing | 1672 | 25 | 89 |
| Lying on side | 18 | 1382 | 31 |
| Lying on belly | 71 | 16 | 1523 |
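Per-class precision and recall follow directly from this matrix, assuming the conventional layout with actual classes as rows and predicted classes as columns:

```python
# Rows = actual class, columns = predicted class, in the order:
# standing, lying on side, lying on belly (values from the confusion matrix)
cm = [
    [1672, 25, 89],
    [18, 1382, 31],
    [71, 16, 1523],
]
classes = ["standing", "lying on side", "lying on belly"]

for i, name in enumerate(classes):
    tp = cm[i][i]
    recall = tp / sum(cm[i])                    # row total = actual instances
    precision = tp / sum(row[i] for row in cm)  # column total = predictions
    print(f"{name}: precision {precision:.3f}, recall {recall:.3f}")
```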
Figure 9. (a) Results of scoring (in percentage) of the lying and standing postures across the day. (b) Standing posture (light blue rectangle), lying on belly (blue rectangle) and lying on side (green rectangle).
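A time-of-day posture score like the one in Figure 9a can be produced by tallying per-frame detections into hourly percentages. A minimal sketch with made-up detections (the hours, counts and data layout here are illustrative assumptions, not taken from the paper):

```python
from collections import Counter

# (hour of day, detected posture) pairs — illustrative stand-ins for the
# per-frame detector outputs that would feed such a plot
detections = [
    (8, "standing"), (8, "standing"), (8, "lying on side"),
    (14, "lying on belly"), (14, "lying on side"), (14, "lying on side"),
]

by_hour = {}
for hour, posture in detections:
    by_hour.setdefault(hour, Counter())[posture] += 1

for hour, tally in sorted(by_hour.items()):
    total = sum(tally.values())
    shares = {posture: f"{n / total:.0%}" for posture, n in tally.items()}
    print(f"{hour:02d}:00 -> {shares}")
```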