| Literature DB >> 35165330 |
Hamed Taheri Gorji1, Seyed Mojtaba Shahabi2, Akshay Sharma3, Lucas Q Tande4, Kaylee Husarik1, Jianwei Qin5, Diane E Chan5, Insuck Baek5, Moon S Kim5, Nicholas MacKinnon6, Jeffrey Morrow6, Stanislav Sokolov6, Alireza Akhbardeh6, Fartash Vasefi6, Kouhyar Tavakolian7.
Abstract
Food safety and foodborne diseases are significant global public health concerns. Meat and poultry carcasses can be contaminated by pathogens such as E. coli and Salmonella through contact with animal fecal matter and ingesta during slaughter and processing. Since fecal matter and ingesta can host these pathogens, detection and excision of contaminated regions on meat surfaces is crucial. Fluorescence imaging has proven its potential for detecting fecal residue but requires expertise to interpret; for use by meat cutters without special training, automated detection is needed. This study combined fluorescence imaging with deep learning to automatically detect and segment fecal matter in carcass images: EfficientNet-B0 classified meat surface images as clean or contaminated, and U-Net then precisely segmented the contaminated areas. The EfficientNet-B0 model achieved 97.32% accuracy (precision 97.66%, recall 97.06%, specificity 97.59%, F-score 97.35%) in discriminating clean from contaminated areas on carcasses. U-Net segmented areas with fecal residue with an intersection over union (IoU) of 89.34% (precision 92.95%, recall 95.84%, specificity 99.79%, F-score 94.37%, AUC 99.54%). These results demonstrate that combining deep learning with fluorescence imaging can improve food safety assurance by allowing the industry to use CSI-D fluorescence imaging to train employees in trimming carcasses as part of their Hazard Analysis Critical Control Point zero-tolerance plan.
Year: 2022 PMID: 35165330 PMCID: PMC8844077 DOI: 10.1038/s41598-022-06379-1
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
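The two-stage pipeline described in the abstract (classify each frame as clean or contaminated, then segment only the flagged frames) can be sketched as follows. This is a minimal sketch of the control flow only; the function names, placeholder model callables, and the 0.5 threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def two_stage_inference(frame, classifier, segmenter, threshold=0.5):
    """Run a classify-then-segment pipeline on a single frame.

    classifier(frame) -> probability that the frame shows contamination
                         (stands in for EfficientNet-B0)
    segmenter(frame)  -> per-pixel contamination probability map
                         (stands in for U-Net)
    Returns a boolean contamination mask for flagged frames, or None
    when the frame is judged clean and segmentation is skipped.
    """
    if classifier(frame) < threshold:
        return None  # frame judged clean; no segmentation needed
    return segmenter(frame) >= threshold
```

Gating segmentation on the classifier keeps the more expensive per-pixel model off the (majority) clean frames.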
Figure 1. CSI-D device. (A) Front view. (B) Rear view.
Figure 2. Six CSI-D fluorescence images of clean meat surfaces (A–F).
Figure 3. Six CSI-D fluorescence images of meat surfaces with fecal contamination (A–F).
Figure 4. (A) A concise representation of the EfficientNet-B0 model. (B) The building blocks of MBConv1. (C) The building blocks of MBConv6.
Figure 5. U-Net architecture.
Performance of the EfficientNet-B0 for discrimination between clean and contaminated frames.
| Accuracy (%) | Precision (%) | Recall (%) | Specificity (%) | F-score (%) | AUC (%) |
|---|---|---|---|---|---|
| 97.32 | 97.66 | 97.06 | 97.59 | 97.35 | 99.54 |
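The classification metrics reported above follow the standard confusion-matrix definitions. A minimal sketch (the function and argument names are my own, not from the paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts
    (tp/fp/tn/fn = true/false positives and negatives)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f_score
```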
Figure 6. (A) The model accuracy during training and validation. (B) The model loss during training and validation.
Figure 7. The confusion matrix of the model when applied to the test set.
Performance of the U-Net for segmentation of fecal matter in meat surface images.
| IoU (%) | Precision (%) | Recall (%) | Specificity (%) | F-score (%) | AUC (%) |
|---|---|---|---|---|---|
| 89.34 | 92.95 | 95.84 | 99.79 | 94.37 | 99.89 |
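The IoU (Jaccard index) used to score the segmentation above is computed from the predicted and ground-truth binary masks. A minimal NumPy sketch (function names are illustrative; the empty-mask convention is an assumption):

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over union (Jaccard index) of two binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return np.logical_and(pred, gt).sum() / union
```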
Figure 8. Performance of the semantic segmentation method on randomly selected test frames. (A) Input frames to the semantic segmentation model. (B) Segmented images output by the model. (C) Ground-truth segmentations by human experts.