Agus Pratondo, Arif Bramantoro.
Abstract
Zophobas Morio and Tenebrio Molitor are larvae widely used as feed ingredients by animal lovers to feed reptiles, songbirds, and other poultry. The two larvae share a similar appearance; however, their nutritional content differs significantly. Zophobas Morio is more nutritious and has a higher economic value than Tenebrio Molitor. Due to limited knowledge, many animal lovers find it difficult to distinguish between the two. This study aims to build a machine learning model able to distinguish between the two species. The model is trained on images taken with a standard mobile phone camera. Training is carried out using a deep learning algorithm, adopting two architectures through transfer learning, namely VGG-19 and Inception v3. Experimental results on the datasets show model accuracy rates of 94.219% and 96.875%, respectively. The results are promising for practical use and can be improved in future work. © 2022 Pratondo and Bramantoro.
Keywords: VGG-19; Classification; Tenebrio Molitor; Transfer learning; Zophobas Morio
Year: 2022 PMID: 35494845 PMCID: PMC9044276 DOI: 10.7717/peerj-cs.884
Source DB: PubMed Journal: PeerJ Comput Sci ISSN: 2376-5992
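The transfer-learning setup described in the abstract (a pretrained VGG-19 convolutional base with a new binary classification head) can be sketched with Keras. This is a minimal illustration, not the authors' exact configuration: the pooling layer, head, and optimizer here are assumptions, and `weights=None` keeps the example offline (actual transfer learning would use `weights="imagenet"`).

```python
# Transfer-learning sketch for the two-class larva classifier (assumed
# Keras setup, not the authors' exact configuration).
from tensorflow import keras

base = keras.applications.VGG19(
    weights=None,           # use "imagenet" for actual transfer learning
    include_top=False,      # drop the original 1000-class head
    input_shape=(224, 224, 3),
)
base.trainable = False      # freeze the pretrained convolutional base

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1, activation="sigmoid"),  # Zophobas vs Tenebrio
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 1)
```

The same pattern applies to Inception v3 via `keras.applications.InceptionV3` (which expects 299×299 inputs by default).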
Nutrition ingredients for the two larvae (Benzertiha et al., 2019).

| Item | Zophobas Morio | Tenebrio Molitor |
|---|---|---|
| Dry matter (DM, %) | 95.58 | 96.32 |
| Crude protein (% of DM) | 47.0 | 49.3 |
| Ether extract (% of DM) | 29.6 | 33.6 |
| Chitin (% of DM) | 89.1 | 45.9 |
Figure 1. Images of Zophobas Morio (upper) and Tenebrio Molitor (lower).
Figure 2. A single layer perceptron.
Figure 3. The architecture of VGG-19.
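The single-layer perceptron of Figure 2 can be sketched in a few lines of plain Python, assuming a binary step activation and the classic perceptron learning rule. The AND-gate data below is illustrative only, not from the paper.

```python
# Minimal single-layer perceptron: step activation over a weighted sum,
# trained with the perceptron learning rule.

def predict(weights, bias, x):
    """Step activation over the weighted sum of inputs."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge weights toward each mistake."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - predict(weights, bias, x)
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # logical AND, a linearly separable toy problem
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])  # [0, 0, 0, 1]
```

Deep networks such as VGG-19 (Figure 3) stack many learned layers of this basic weighted-sum-plus-nonlinearity unit.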
Image distribution for 10-CV.

| Species | Total | Training per CV | Testing per CV |
|---|---|---|---|
| Zophobas Morio | 320 | 288 | 32 |
| Tenebrio Molitor | 320 | 288 | 32 |
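The 288/32 split per fold follows directly from 10-fold cross-validation over 320 images per species. A minimal sketch of the fold bookkeeping in plain Python (the contiguous index layout is illustrative, not the authors' exact split):

```python
# 10-fold cross-validation index bookkeeping for 320 images:
# each fold holds out 32 images for testing and trains on the other 288.

def k_fold_indices(n_samples, k):
    """Yield (train_indices, test_indices) for each of k contiguous folds."""
    fold_size = n_samples // k
    indices = list(range(n_samples))
    for fold in range(k):
        test = indices[fold * fold_size:(fold + 1) * fold_size]
        train = indices[:fold * fold_size] + indices[(fold + 1) * fold_size:]
        yield train, test

folds = list(k_fold_indices(320, 10))
print(len(folds))        # 10 folds
print(len(folds[0][0]))  # 288 training images per fold
print(len(folds[0][1]))  # 32 testing images per fold
```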
Experimental results using k-NN (accuracy, %).

| Testing fold | Training folds | k=1 | k=3 | k=5 | k=7 | k=9 | Average |
|---|---|---|---|---|---|---|---|
| 1 | 2-10 | 90.625 | 84.375 | 82.812 | 84.375 | 76.562 | 83.750 |
| 2 | 1,3-10 | 84.375 | 81.250 | 79.688 | 76.562 | 71.875 | 78.750 |
| 3 | 1-2,4-10 | 81.250 | 81.250 | 76.562 | 76.562 | 75.000 | 78.125 |
| 4 | 1-3,5-10 | 81.250 | 82.812 | 82.812 | 76.562 | 75.000 | 79.687 |
| 5 | 1-4,6-10 | 82.812 | 76.562 | 71.875 | 73.438 | 68.750 | 74.687 |
| 6 | 1-5,7-10 | 82.812 | 78.125 | 75.000 | 75.000 | 71.875 | 76.562 |
| 7 | 1-6,8-10 | 75.000 | 73.438 | 68.750 | 67.188 | 65.625 | 70.000 |
| 8 | 1-7,9-10 | 92.188 | 81.250 | 75.000 | 73.438 | 71.875 | 78.750 |
| 9 | 1-8,10 | 89.062 | 82.812 | 79.688 | 79.688 | 76.562 | 81.562 |
| 10 | 1-9 | 75.000 | 78.125 | 76.562 | 75.000 | 70.312 | 75.000 |
| Average | | 83.437 | 80.000 | 76.875 | 75.781 | 72.344 | 77.687 |
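As a baseline, k-NN classifies each test image by majority vote among its k closest training samples in feature space. A minimal sketch in plain Python (Euclidean distance; the toy 2-D features are illustrative, not the paper's image features):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Label x by majority vote among its k nearest training points."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(p, x)), label)
        for p, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D features: class 0 clusters near the origin, class 1 near (5, 5).
train_X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
train_y = [0, 0, 0, 1, 1, 1]
print(knn_predict(train_X, train_y, (0.5, 0.5)))  # 0
print(knn_predict(train_X, train_y, (5.5, 5.5)))  # 1
```

The table above shows the same pattern one would expect from such a vote: accuracy generally degrades as k grows, with k=1 performing best on this dataset.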
Accuracy of various models (%).

| Testing fold | Training folds | k-NN | SVM | VGG-19 | Inception v3 |
|---|---|---|---|---|---|
| 1 | 2-10 | 90.625 | 96.875 | 89.625 | 98.438 |
| 2 | 1,3-10 | 84.375 | 95.313 | 95.313 | 98.438 |
| 3 | 1-2,4-10 | 81.250 | 92.188 | 98.438 | 96.875 |
| 4 | 1-3,5-10 | 81.250 | 92.188 | 95.313 | 100.000 |
| 5 | 1-4,6-10 | 82.812 | 90.625 | 96.875 | 92.188 |
| 6 | 1-5,7-10 | 82.812 | 95.313 | 95.313 | 98.438 |
| 7 | 1-6,8-10 | 75.000 | 93.750 | 92.188 | 89.438 |
| 8 | 1-7,9-10 | 92.188 | 95.313 | 90.625 | 89.063 |
| 9 | 1-8,10 | 89.062 | 87.500 | 90.625 | 98.438 |
| 10 | 1-9 | 75.000 | 90.625 | 95.313 | 98.438 |
Figure 4. Classification results using k-NN and SVM.
Figure 5. Classification results using VGG-19 and Inception v3.
Performance metrics for binary classification.
| Method | TP | FP | TN | FN | Precision | Recall | Accuracy (%) |
|---|---|---|---|---|---|---|---|
| k-NN | 319 | 105 | 215 | 1 | 0.752 | 0.997 | 83.438 |
| SVM | 310 | 35 | 285 | 10 | 0.899 | 0.969 | 92.969 |
| VGG-19 | 298 | 15 | 305 | 22 | 0.952 | 0.931 | 94.219 |
| Inception v3 | 309 | 9 | 311 | 11 | 0.972 | 0.966 | 96.875 |
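The precision, recall, and accuracy columns follow directly from the confusion-matrix counts, as a quick check of the Inception v3 row shows:

```python
def binary_metrics(tp, fp, tn, fn):
    """Precision, recall, and accuracy (%) from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = 100.0 * (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy

# Inception v3 row from the table above.
p, r, a = binary_metrics(tp=309, fp=9, tn=311, fn=11)
print(round(p, 3), round(r, 3), round(a, 3))  # 0.972 0.966 96.875
```

Note that all four rows sum to the same 640 test images (2 species × 320), i.e. the metrics aggregate the predictions across all ten folds.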
Related works comparison.

| Work | Larva types | Methods | Results |
|---|---|---|---|
| ( | | Google Inception model | 99.77–99.98% accuracy, 0.21–5.13% cross-entropy error |
| ( | | Convolutional neural network | 0.7–73% accuracy |
| ( | | VGG16, VGG-19, ResNet-50, InceptionV3 | 77.31–85.10% accuracy, 0.31–0.66% loss |
| ( | | GoogLeNet, VGG-19, AlexNet | 91–100% accuracy |
| ( | Oyster | coordinate system of PyTorch | 82.4% precision, 90.8% recall, 86.4% F-score |
| ( | House flies | Convolutional neural network | 88.44–92.95% precision, 88.23–94.10% recall, 87.56–92.89% accuracy, 88.08–93.02% F-score |
| Ours | Zophobas Morio, Tenebrio Molitor | VGG-19, Inception v3 | 97.2% precision, 96.6% recall, 96.875% accuracy |