Ke Lin, Liang Gong, Yixiang Huang, Chengliang Liu, Junsong Pan.
Abstract
Powdery mildew is a common plant disease and one of the main diseases affecting cucumber (Cucumis sativus) in the middle and late stages of growth. Powdery mildew on plant leaves impairs photosynthesis, which may reduce yield, so automatic identification of powdery mildew is of great significance. Currently, most image-based models treat powdery mildew identification as a binary classification problem, yielding a true-or-false assertion. However, quantitative assessment of disease-resistance traits plays an important role in breeders' screening of plant varieties. There is therefore an urgent need to quantify the extent to which leaves are infected, which can be derived from the area of the diseased regions. To tackle these challenges, we propose a semantic segmentation model based on convolutional neural networks (CNNs) that segments powdery mildew in cucumber leaf images at the pixel level, achieving an average pixel accuracy of 96.08%, intersection over union (IU) of 72.11%, and Dice accuracy of 83.45% on twenty test samples. This outperforms the existing K-means, Random Forest, and GBDT segmentation methods. In conclusion, the proposed model is capable of segmenting powdery mildew on cucumber leaves at the pixel level, making it a valuable tool for cucumber breeders to assess the severity of powdery mildew.
Keywords: convolutional neural network; cucumber leaf; deep-learning; image segmentation; powdery mildew
Year: 2019 PMID: 30891048 PMCID: PMC6413718 DOI: 10.3389/fpls.2019.00155
Source DB: PubMed Journal: Front Plant Sci ISSN: 1664-462X Impact factor: 5.753
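The pixel-level metrics quoted in the abstract (pixel accuracy, intersection over union, and Dice) can be computed from a predicted binary mask and a ground-truth annotation mask. The sketch below is a minimal NumPy illustration of these standard definitions, not the authors' implementation; the function name `segmentation_metrics` is ours.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Pixel accuracy, IoU, and Dice for two binary masks
    (1 = powdery-mildew pixel, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # pixels both masks mark as diseased
    union = np.logical_or(pred, target).sum()  # pixels either mask marks as diseased
    denom = pred.sum() + target.sum()
    pixel_acc = (pred == target).mean()        # fraction of all pixels classified correctly
    iou = tp / union if union else 1.0
    dice = 2 * tp / denom if denom else 1.0
    return pixel_acc, iou, dice

# Toy example: 2x2 masks that agree on 3 of 4 pixels.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
pa, iou, dice = segmentation_metrics(pred, target)
# pixel accuracy 0.75, IoU 0.5, Dice ≈ 0.667
```

Note that pixel accuracy is dominated by the (majority) healthy background pixels, which is why it sits far above IoU and Dice in the tables below.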
FIGURE 1. In vitro Cucumber Fruit/Leaf Phenotyping platform.
FIGURE 2. (A) Two samples of cucumber leaves, (B) their disease areas, (C) annotation of infected areas.
FIGURE 3. The structure of the proposed model.
FIGURE 4. Image augmentation of four samples (images and their annotations).
TABLE 1. Accuracy of our model and the K-means method on 20 test samples.
| No. | Our model IU acc. | Our model Dice acc. | Our model Pixel acc. | K-means IU acc. | K-means Dice acc. | K-means Pixel acc. |
|---|---|---|---|---|---|---|
| 1 | 69.93% | 82.31% | 97.76% | 36.07% | 53.02% | 93.55% |
| 2 | 81.92% | 90.06% | 95.65% | 46.96% | 63.91% | 89.17% |
| 3 | 53.98% | 70.12% | 99.24% | 14.55% | 25.41% | 96.64% |
| 4 | 83.41% | 90.95% | 94.73% | 44.89% | 61.96% | 84.93% |
| 5 | 82.35% | 90.32% | 96.88% | 66.46% | 79.85% | 94.75% |
| 6 | 73.04% | 84.42% | 96.17% | 57.79% | 73.25% | 94.73% |
| 7 | 82.68% | 90.52% | 95.78% | 62.88% | 77.21% | 92.01% |
| 8 | 83.11% | 90.77% | 95.60% | 49.55% | 66.26% | 88.34% |
| 9 | 63.33% | 77.55% | 96.79% | 40.80% | 57.95% | 95.61% |
| 10 | 71.71% | 83.53% | 96.67% | 51.00% | 67.55% | 94.79% |
| 11 | 73.00% | 84.40% | 96.43% | 58.14% | 73.53% | 95.24% |
| 12 | 79.20% | 88.39% | 96.98% | 59.01% | 74.22% | 94.50% |
| 13 | 64.31% | 78.28% | 97.76% | 39.77% | 56.91% | 95.72% |
| 14 | 85.65% | 92.27% | 94.42% | 45.33% | 62.38% | 81.33% |
| 15 | 65.78% | 79.36% | 93.18% | 45.82% | 62.84% | 90.47% |
| 16 | 67.21% | 80.39% | 95.14% | 46.27% | 63.27% | 92.79% |
| 17 | 54.09% | 70.20% | 95.34% | 32.46% | 49.01% | 91.85% |
| 18 | 72.71% | 84.20% | 93.90% | 51.34% | 67.85% | 91.20% |
| 19 | 64.99% | 78.78% | 96.33% | 49.41% | 66.14% | 95.27% |
| 20 | 69.76% | 82.19% | 96.80% | 42.51% | 59.65% | 93.76% |
FIGURE 5. Situations when the Dice and IU accuracies are 0.8 (left) and 0.7 (right).
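For a single prediction/ground-truth pair, Dice and IU are algebraically linked (Dice = 2·IoU / (1 + IoU)), which is why the two columns in the table above rise and fall together. A purely illustrative check:

```python
def dice_from_iou(iou):
    """For one mask pair, Dice = 2*IoU / (1 + IoU)."""
    return 2 * iou / (1 + iou)

# Sample 1 in the table: IoU 69.93% -> Dice ≈ 82.3%,
# matching the reported 82.31% up to rounding.
d = dice_from_iou(0.6993)
```

The reported averages (IU 72.11%, Dice 83.45%) do not satisfy the relation exactly, because it holds per image and the average of a nonlinear function differs from the function of the average.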
TABLE 2. Precision, recall, and F2 score of our model and the K-means method.
| No. | Our model Precision | Our model Recall | Our model F2 Score | K-means Precision | K-means Recall | K-means F2 Score |
|---|---|---|---|---|---|---|
| 1 | 70.69% | 98.48% | 91.30% | 43.08% | 68.92% | 61.54% |
| 2 | 82.10% | 99.74% | 95.63% | 93.61% | 48.52% | 53.69% |
| 3 | 56.90% | 91.32% | 81.46% | 16.20% | 58.86% | 38.56% |
| 4 | 83.57% | 99.77% | 96.04% | 94.00% | 46.21% | 51.44% |
| 5 | 82.80% | 99.34% | 95.52% | 91.17% | 71.04% | 74.32% |
| 6 | 73.86% | 98.50% | 92.34% | 78.70% | 68.51% | 70.33% |
| 7 | 83.10% | 99.39% | 95.64% | 91.50% | 66.78% | 70.60% |
| 8 | 83.55% | 99.37% | 95.74% | 89.44% | 52.63% | 57.35% |
| 9 | 64.47% | 97.30% | 88.31% | 63.78% | 53.10% | 54.94% |
| 10 | 72.42% | 98.66% | 91.99% | 72.42% | 63.30% | 64.93% |
| 11 | 73.32% | 99.42% | 92.81% | 79.89% | 68.11% | 70.18% |
| 12 | 79.50% | 99.53% | 94.76% | 81.04% | 68.47% | 70.66% |
| 13 | 66.39% | 95.35% | 87.70% | 49.58% | 66.79% | 62.45% |
| 14 | 85.88% | 99.68% | 96.58% | 95.43% | 46.34% | 51.65% |
| 15 | 71.35% | 89.38% | 85.08% | 73.48% | 54.89% | 57.82% |
| 16 | 68.33% | 97.63% | 89.91% | 65.84% | 60.89% | 61.82% |
| 17 | 59.08% | 86.48% | 79.14% | 40.65% | 61.70% | 55.90% |
| 18 | 73.13% | 99.22% | 92.61% | 84.52% | 56.67% | 60.67% |
| 19 | 65.46% | 98.89% | 89.72% | 65.23% | 67.06% | 66.69% |
| 20 | 70.04% | 99.44% | 91.74% | 57.37% | 62.13% | 61.12% |
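The F2 score in the table above weights recall more heavily than precision (the F-beta measure with beta = 2), which suits the breeding use case: missing diseased pixels is costlier than a false alarm. A minimal sketch under the same binary-mask assumptions as above; the helper name is ours, not from the paper.

```python
import numpy as np

def precision_recall_f2(pred, target, beta=2.0):
    """Precision, recall, and F-beta (beta=2 favours recall) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / target.sum() if target.sum() else 0.0
    b2 = beta ** 2
    f = ((1 + b2) * precision * recall / (b2 * precision + recall)
         if (precision + recall) else 0.0)
    return precision, recall, f

# Toy example: pred catches 1 of 2 diseased pixels and adds 1 false positive,
# so precision = recall = 0.5 and F2 = 0.5.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [1, 0]])
```

Because F2 emphasizes recall, the proposed model's near-perfect recall in the table translates into F2 scores well above its precision values.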
TABLE 3. The performance of our method and the three other methods.
| Method | Precision | Recall | F2 score | IU acc. | Dice acc. | Pixel acc. |
|---|---|---|---|---|---|---|
| The proposed method | 73.30% | 97.34% | 91.20% | 72.11% | 83.45% | 96.08% |
| GBDT | 73.90% | 70.81% | 70.86% | 56.96% | 71.44% | 94.33% |
| Random Forest | 70.99% | 69.33% | 69.20% | 54.84% | 69.46% | 93.95% |
| K-means | 71.35% | 60.55% | 60.83% | 47.05% | 63.11% | 92.33% |
FIGURE 6. (A) Original images, (B) annotation images, (C–F) recognition results of the proposed model, K-means, Random Forest, and GBDT methods.
FIGURE 7. (A) Input image; (B–E) feature maps of the proposed model given this input image; (F) output image.
FIGURE 8. Loss and IU accuracy over the training period.