| Literature DB >> 35890870 |
Nina Volkmann, Claudius Zelenka, Archana Malavalli Devaraju, Johannes Brünger, Jenny Stracke, Birgit Spindler, Nicole Kemper, Reinhard Koch.
Abstract
Injurious pecking against conspecifics is a serious problem in turkey husbandry. Bloody injuries act as a trigger mechanism that induces further pecking, and timely detection and intervention can prevent severe animal welfare impairments and costly losses. The overarching aim is therefore to develop a camera-based system that monitors the flock and detects injuries using neural networks. In a preliminary study, images of turkeys were annotated by labelling potential injuries, and these were used to train a network for injury detection. Here, we applied a keypoint detection model to provide more information on animal position and to indicate injury location. To this end, seven turkey keypoints were defined, and 244 images (showing 7660 birds) were manually annotated. Two state-of-the-art approaches for pose estimation were adapted and their results compared. Subsequently, the better-performing keypoint detection model (HRNet-W48) was combined with the segmentation model for injury detection, so that individual injuries could be classified with labels such as "near tail" or "near head". In summary, keypoint detection showed good results and could clearly differentiate between individual animals, even in crowded situations.
Keywords: animal welfare; crowded dataset; injury location; keypoint detection; pose estimation; turkeys
Year: 2022 PMID: 35890870 PMCID: PMC9319281 DOI: 10.3390/s22145188
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Schematic view of the turkey barn (15.9 × 29.2 m) showing the positions of the three top-view video cameras. Feeding lines are marked with orange squares and drinking lines with blue circles. A separate experimental compartment (5.5 × 6 m) and a second compartment (5.5 × 6 m) for sick animals are shown as differently patterned squares.
Figure 2. (a) Keypoint skeleton showing the beak (B), head (H), neck (N), left wing (L), right wing (R), center of the body (C), and tail (T). (b) Example image showing the keypoints on turkey hens.
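For readers who want to reuse the skeleton, the seven keypoints map naturally onto a COCO-style category definition. The following is only an illustrative sketch based on Figure 2; the keypoint names, their ordering, and the skeleton edges are assumptions, not the authors' annotation schema.

```python
# Hypothetical COCO-style category for the seven turkey keypoints in Figure 2.
# Names, ordering, and skeleton edges are assumptions, not the study's schema.
TURKEY_CATEGORY = {
    "id": 1,
    "name": "turkey",
    "keypoints": [
        "beak",         # B
        "head",         # H
        "neck",         # N
        "left_wing",    # L
        "right_wing",   # R
        "body_center",  # C
        "tail",         # T
    ],
    # Edges between keypoints (1-based indices, as in the COCO format).
    "skeleton": [[1, 2], [2, 3], [3, 4], [3, 5], [3, 6], [6, 7]],
}
```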
Figure 3. Example image showing the visualization of annotated keypoints and bounding boxes using the COCO API [44].
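A visualization like Figure 3 can be reproduced with the COCO API cited in the caption [44]. A minimal sketch, assuming the annotations are stored as a COCO-format JSON file; the file and directory names are placeholders, not the study's actual data paths.

```python
import matplotlib.pyplot as plt
from PIL import Image
from pycocotools.coco import COCO

# Placeholder paths; the study's annotation file and image directory
# are not published in this record.
coco = COCO("turkey_keypoints.json")

img_info = coco.loadImgs(coco.getImgIds())[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_info["id"]))

plt.imshow(Image.open(f"images/{img_info['file_name']}"))
coco.showAnns(anns, draw_bbox=True)  # draws keypoints, skeleton edges, boxes
plt.axis("off")
plt.show()
```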
Figure 4. Overview of the baseline keypoint detection method by Xiao et al. [45].
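The baseline of Xiao et al. [45] ("Simple Baselines") appends a few transposed-convolution layers to a ResNet backbone and predicts one heatmap per keypoint. The PyTorch sketch below illustrates that general idea and is not the authors' training code; the layer sizes (three 4 × 4 stride-2 deconvolutions with 256 channels) follow the published baseline, and the seven output channels match the turkey skeleton.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SimpleBaselineHead(nn.Module):
    """ResNet backbone + deconvolution head, one heatmap per keypoint."""

    def __init__(self, num_keypoints: int = 7):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V1")
        # Keep everything up to (and including) the last residual stage.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])

        layers, in_ch = [], 2048
        for _ in range(3):  # three 4x4 stride-2 deconvolutions, as in [45]
            layers += [
                nn.ConvTranspose2d(in_ch, 256, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(256),
                nn.ReLU(inplace=True),
            ]
            in_ch = 256
        self.deconv = nn.Sequential(*layers)
        self.head = nn.Conv2d(256, num_keypoints, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.deconv(self.backbone(x)))

# A 256x192 input yields 7 heatmaps at 64x48 (1/4 of the input resolution).
heatmaps = SimpleBaselineHead()(torch.randn(1, 3, 256, 192))
```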
Performance of HRNet-W48 under different hyper-parameter settings for model training. Object keypoint similarity (OKS) metrics report the average precision at threshold values of 0.50 (AP50) and 0.75 (AP75) and averaged over thresholds from 0.50 to 0.95 (AP), as well as the average recall at threshold values of 0.50 (AR50) and 0.75 (AR75) and averaged over thresholds from 0.50 to 0.95 (AR). A batch size of 64 was used for all tests. Best-performing values are printed in bold. Model performance was evaluated every 10 epochs to select the best-performing model; the performance at that epoch is listed.
| Hyper-Parameters | AP0.50 | AP0.75 | AP | AR0.50 | AR0.75 | AR |
|---|---|---|---|---|---|---|
| LR ¹ = 1e-4; epochs = 180 | 0.677 | 0.129 | 0.249 | 0.721 | 0.234 | 0.315 |
| LR ¹ = 3e-4; epochs = 150 | **0.714** | **0.137** | **0.273** | **0.755** | **0.243** | **0.334** |

¹ Learning rate.
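The AP/AR values above are obtained by thresholding object keypoint similarity, OKS = Σᵢ exp(−dᵢ² / (2s²kᵢ²)) δ(vᵢ > 0) / Σᵢ δ(vᵢ > 0), where dᵢ is the distance between predicted and ground-truth keypoint i, s is the object scale, and vᵢ the visibility flag. A minimal sketch of this standard computation; the per-keypoint falloff constants kᵢ for the turkey skeleton are not given in this record, so any values passed in would be placeholders.

```python
import numpy as np

def oks(pred, gt, visibility, area, k):
    """Standard COCO object keypoint similarity (OKS).

    pred, gt   : (N, 2) arrays of predicted / ground-truth keypoint coordinates
    visibility : (N,) ground-truth visibility flags (> 0 means labelled)
    area       : object segment area, so area = s^2 is the squared object scale
    k          : (N,) per-keypoint falloff constants (placeholders for turkeys)
    """
    labelled = visibility > 0
    if not labelled.any():
        return 0.0
    d2 = np.sum((pred - gt) ** 2, axis=1)  # squared distances d_i^2
    return float(np.exp(-d2[labelled] / (2 * area * k[labelled] ** 2)).mean())
```

A detection then counts as a true positive for, e.g., AP50 when its OKS with a matched ground-truth bird is at least 0.50.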
Object keypoint similarity (OKS) metrics for the different keypoint detection models, stating the average precision at threshold values of 0.50 (AP50) and 0.75 (AP75) and averaged over thresholds from 0.50 to 0.95 (AP), as well as the average recall at threshold values of 0.50 (AR50) and 0.75 (AR75) and averaged over thresholds from 0.50 to 0.95 (AR). Best-performing values are printed in bold.
| Architecture Type | AP0.50 | AP0.75 | AP | AR0.50 | AR0.75 | AR |
|---|---|---|---|---|---|---|
| Baseline–ResNet50 | 0.648 | 0.107 | 0.213 | 0.691 | 0.198 | 0.292 |
| Baseline–ResNet101 | 0.640 | 0.107 | 0.228 | 0.687 | 0.200 | 0.288 |
| Baseline–ResNet152 | 0.659 | 0.134 | 0.254 | 0.703 | 0.231 | 0.313 |
| HRNet-W32 | 0.692 | **0.158** | 0.267 | 0.726 | 0.241 | 0.323 |
| HRNet-W48 | **0.714** | 0.137 | **0.273** | **0.755** | **0.243** | **0.334** |
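Both tables follow the COCO keypoint evaluation protocol, which pycocotools computes directly. A sketch under the assumption that ground truth and detections are available as COCO-format JSON files; the file names and the seven per-keypoint sigmas below are placeholders, not values from the study.

```python
import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("turkey_keypoints_val.json")         # placeholder file name
coco_dt = coco_gt.loadRes("model_detections.json")  # placeholder file name

evaluator = COCOeval(coco_gt, coco_dt, iouType="keypoints")
# The default sigmas are for the 17 human COCO keypoints; a 7-keypoint
# skeleton needs its own values (these are illustrative placeholders).
evaluator.params.kpt_oks_sigmas = np.full(7, 0.08)
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP, AP50, AP75, AR, AR50, AR75
```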
Figure 5. Comparison of keypoint detection (KPD) using (a) the baseline method with 152 layers and (b) HRNet-W48. Turkeys for which the baseline and HRNet results differ are highlighted with yellow circles in the right image.
Figure 6. Combination of the KPD generated in this study and the injury detection from previous work [16] on the evaluation dataset. Keypoints are shown in lilac, connected by blue lines. Detected potential injuries are highlighted with red boxes, and their classification is indicated with labels such as "near neck" or "near tail".
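The "near neck" and "near tail" labels in Figure 6 suggest that each detected injury is assigned to the closest keypoint of the bird it lies on. This record does not spell out that logic, so the following is only an illustrative sketch of such an assignment; the function and keypoint names are hypothetical.

```python
import numpy as np

# Hypothetical keypoint order, matching the skeleton in Figure 2.
KEYPOINT_NAMES = ["beak", "head", "neck", "left_wing",
                  "right_wing", "body_center", "tail"]

def locate_injury(injury_box, keypoints):
    """Label an injury by the nearest visible keypoint of one bird.

    injury_box : (x1, y1, x2, y2) box from the injury-detection model
    keypoints  : (7, 3) array of (x, y, visibility) for that bird
    """
    cx = (injury_box[0] + injury_box[2]) / 2
    cy = (injury_box[1] + injury_box[3]) / 2
    dists = np.hypot(keypoints[:, 0] - cx, keypoints[:, 1] - cy)
    dists[keypoints[:, 2] <= 0] = np.inf   # ignore unlabelled keypoints
    return f"near {KEYPOINT_NAMES[int(np.argmin(dists))]}"
```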