Jiachen Yang¹, Xiaolan Guo¹, Yang Li², Francesco Marinello³, Sezai Ercisli⁴, Zhuo Zhang¹.
Abstract
With the rise of artificial intelligence, deep learning is gradually being applied to agriculture and plant science. However, the excellent performance of deep learning rests on massive numbers of labeled samples, which are not easy to obtain in plant science and biology. Few-shot learning addresses this problem: imitating the human ability to learn rapidly, it can learn a new task from only a small number of labeled samples, greatly reducing time and financial cost. Current advanced few-shot learning methods fall into four categories, based respectively on data augmentation, metric learning, external memory, and parameter optimization, each tackling the over-fitting problem from a different viewpoint. This review comprehensively expounds few-shot learning in smart agriculture: it introduces the definition of few-shot learning, the four kinds of learning methods, publicly available few-shot datasets, various applications in smart agriculture, and the challenges for future development.
Keywords: Data augmentation; Deep learning; Few-shot learning; Metric learning
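The "small number of labeled samples" setting described in the abstract is usually formalized as episodic N-way K-shot training (the strategy the review depicts in Fig. 5): each episode samples N classes, K labeled support examples per class, and a query set for evaluation. A minimal sketch of episode construction, assuming a hypothetical `{class: [samples]}` dictionary dataset (the function name and toy data are illustrative, not from the paper):

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=5, seed=None):
    """Sample one N-way K-shot episode from a {class: [samples]} dataset.

    Returns a support set (k_shot labeled examples per class, used for
    adaptation) and a disjoint query set (q_queries per class, used for
    evaluation), with class labels re-indexed 0..n_way-1 per episode.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = rng.sample(dataset[cls], k_shot + q_queries)
        support += [(x, label) for x in picks[:k_shot]]
        query += [(x, label) for x in picks[k_shot:]]
    return support, query

# Toy dataset: 10 classes with 20 samples each (integers stand in for images).
toy = {f"class_{c}": list(range(c * 100, c * 100 + 20)) for c in range(10)}
support, query = sample_episode(toy, n_way=5, k_shot=1, q_queries=5, seed=0)
# 5-way 1-shot: 5 support examples, 25 query examples per episode.
```

Training on many such episodes, rather than on a fixed label set, is what lets the methods in the table below generalize to unseen classes at test time.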
Year: 2022 PMID: 35248105 PMCID: PMC8897954 DOI: 10.1186/s13007-022-00866-2
Source DB: PubMed Journal: Plant Methods ISSN: 1746-4811 Impact factor: 4.993
Fig. 1 Fitting and over-fitting curves
Fig. 2 Actual number and normalized percentage of publications related to the few-shot learning topic in the last 5 years
Fig. 3 Fields of interest of applied-science documents
Fig. 4 Partition of scientific papers based on the applied approach
Fig. 5 The training strategy of few-shot learning
Fig. 6 The methods based on data augmentation
Fig. 7 The methods based on metric learning
Fig. 8 The methods based on external memory
Fig. 9 The methods based on parameter optimization
Table 1 Various few-shot datasets

| Dataset | Source | Number of classes | Number of images | Image size |
|---|---|---|---|---|
| Omniglot | – | 1623 | 32,460 | 28 × 28 |
| CUB | – | 200 | 11,788 | 84 × 84 |
| mini-ImageNet | ImageNet | 100 | 60,000 | 84 × 84 |
| tiered-ImageNet | ImageNet | 608 | 779,165 | 84 × 84 |
| Fewshot-CIFAR100 | CIFAR100 | 100 | 60,000 | 32 × 32 |
| CIFAR-FS | CIFAR100 | 100 | 60,000 | 32 × 32 |
Table 2 Performance of different methods on benchmarks (accuracy, %)

| Category | Method | Backbone | Omniglot 1-shot | Omniglot 5-shot | Mini-ImageNet 1-shot | Mini-ImageNet 5-shot | Tiered-ImageNet 1-shot | Tiered-ImageNet 5-shot |
|---|---|---|---|---|---|---|---|---|
| Data augmentation | AFHN [ | ResNet-18 | – | – | 62.38 ± 0.72 | 78.16 ± 0.56 | – | – |
| Data augmentation | ∆-encoder [ | ResNet-18 | – | – | 59.9 | 69.7 | – | – |
| Metric learning | MatchingNet [ | ResNet-12 | 97.9 | 98.7 | 63.08 ± 0.80 | 75.99 ± 0.60 | 68.50 ± 0.92 | 80.60 ± 0.71 |
| Metric learning | RelationNet [ | Conv-4 | 99.6 ± 0.2 | 99.8 ± 0.1 | 50.44 ± 0.82 | 65.32 ± 0.70 | 54.48 ± 0.93 | 71.32 ± 0.78 |
| Metric learning | ProtoNet [ | ResNet-12 | 98.8 | 99.7 | 60.37 ± 0.83 | 78.02 ± 0.57 | – | – |
| Metric learning | DeepEMD [ | ResNet-12 | – | – | 65.91 ± 0.82 | 82.41 ± 0.56 | 71.16 ± 0.87 | 86.03 ± 0.58 |
| External memory | MetaNet [ | ResNet-12 | 99.9 | – | 49.21 ± 0.96 | – | – | – |
| External memory | MMNet [ | CNN + LSTM | 99.28 ± 0.08 | 99.77 ± 0.1 | 53.37 ± 0.48 | 66.97 ± 0.35 | – | – |
| External memory | [ | ResNet-10 | – | – | 55.45 ± 0.89 | 70.13 ± 0.68 | – | – |
| Parameter optimization | MAML [ | Conv-4 | 98.7 ± 0.4 | 99.9 ± 0.1 | 48.70 ± 1.75 | 63.11 ± 0.92 | – | – |
| Parameter optimization | Reptile [ | Conv-4 | 95.39 ± 0.09 | 98.90 ± 0.1 | 47.07 ± 0.26 | 62.74 ± 0.37 | – | – |
| Parameter optimization | MetaNAS [ | CNN | – | – | 63.1 ± 0.3 | 79.5 ± 0.2 | – | – |
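Among the metric-learning entries above, ProtoNet is the simplest to state: embed the support examples, average each class's embeddings into a prototype, and classify each query by its nearest prototype. A minimal sketch of that decision rule in plain Python; in the real method a trained CNN backbone (e.g. ResNet-12, as in the table) produces the embeddings, whereas the 2-D vectors and class names here are made up for illustration:

```python
import math

def prototypes(support):
    """Average the embeddings of each class in the support set.

    `support` is a list of (embedding, label) pairs; the class prototype
    is the mean embedding, as in Prototypical Networks.
    """
    sums, counts = {}, {}
    for emb, label in support:
        acc = sums.setdefault(label, [0.0] * len(emb))
        for i, v in enumerate(emb):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def classify(protos, emb):
    """Assign a query embedding to the nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda lbl: math.dist(protos[lbl], emb))

# Toy 2-way 2-shot episode with hand-made 2-D "embeddings".
support = [([0.0, 0.0], "healthy"), ([0.0, 2.0], "healthy"),
           ([4.0, 4.0], "diseased"), ([4.0, 6.0], "diseased")]
protos = prototypes(support)          # healthy -> [0, 1], diseased -> [4, 5]
label = classify(protos, [0.5, 1.0])  # nearest prototype: "healthy"
```

Because classification reduces to a distance comparison against per-class means, adding a new class at test time only requires a few support embeddings, which is exactly what makes such methods attractive for data-scarce agricultural tasks.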
Fig. 10 Some images in Omniglot, CUB, and mini-ImageNet