Bifta Sama Bari, Md Nahidul Islam, Mamunur Rashid, Md Jahid Hasan, Mohd Azraai Mohd Razman, Rabiu Muazu Musa, Ahmad Fakhri Ab Nasir, Anwar P P Abdul Majeed.
Abstract
Rice leaf diseases often threaten the sustainable production of rice, affecting many farmers around the world. Early diagnosis and appropriate treatment of rice leaf infections are crucial to facilitating the healthy growth of rice plants and to ensuring an adequate food supply and food security for a rapidly increasing population. Machine-driven disease diagnosis systems can therefore mitigate the limitations of conventional leaf disease diagnosis techniques, which are often time-consuming, inaccurate, and expensive. Computer-assisted rice leaf disease diagnosis systems are becoming increasingly popular. However, several limitations mar their efficacy and usage: complex image backgrounds, vague symptom edges, dissimilar weather conditions during image capture, a lack of real-field rice leaf image data, variation in symptoms from the same infection, multiple infections producing similar symptoms, and the lack of an efficient real-time system. To mitigate these problems, a faster region-based convolutional neural network (Faster R-CNN) was employed in the present research for the real-time detection of rice leaf diseases. The Faster R-CNN algorithm introduces a region proposal network (RPN) architecture that locates objects very precisely to generate candidate regions. The robustness of the Faster R-CNN model was enhanced by training it on both a publicly available online dataset and our own real-field rice leaf dataset. The proposed deep-learning-based approach proved effective in the automatic diagnosis of three discriminative rice leaf diseases, namely rice blast, brown spot, and hispa, with accuracies of 98.09%, 98.85%, and 99.17%, respectively. Moreover, the model identified healthy rice leaves with an accuracy of 99.25%.
The results obtained herein demonstrate that the Faster R-CNN model offers a high-performing rice leaf infection identification system that can diagnose the most common rice diseases precisely in real time.
Keywords: Deep learning; Faster R-CNN; Image processing; Object detection; Rice leaf disease detection
Year: 2021 PMID: 33954231 PMCID: PMC8049121 DOI: 10.7717/peerj-cs.432
Source DB: PubMed Journal: PeerJ Comput Sci ISSN: 2376-5992
Figure 1. Complete architecture of the proposed study.
Total number of images collected from each database.
| Leaf condition | Kaggle dataset (publicly available) | Own dataset (on-field dataset) |
|---|---|---|
| Rice blast | 500 | 100 |
| Brown spot | 500 | 150 |
| Hispa | 500 | – |
| Healthy | 500 | 150 |
| Total | 2,000 | 400 |
| Grand total | 2,400 | |
Figure 2. Data augmentation of rice leaf disease images: (A) original image, (B) image rotated by 180 degrees, (C) high brightness, (D) Gaussian noise, (E) horizontal flip, (F) low brightness, (G) vertical flip.
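The augmentations in Figure 2 are all standard, easily reproduced transforms. A minimal sketch of them with numpy is shown below; the transform names follow the figure panels, and the brightness factors and noise level are illustrative assumptions, since the paper does not state its exact parameters.

```python
import numpy as np

def augment(image, rng=None):
    """Return the augmented variants shown in Figure 2 for one image.

    `image` is an H x W x 3 uint8 array. Factors 1.4 / 0.6 and the noise
    standard deviation of 15 are assumed values for illustration only.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    noise = rng.normal(0.0, 15.0, image.shape)  # (D) additive Gaussian noise
    return {
        "rotate_180":  image[::-1, ::-1],                                 # (B)
        "bright_high": np.clip(image * 1.4, 0, 255).astype(np.uint8),     # (C)
        "gauss_noise": np.clip(image + noise, 0, 255).astype(np.uint8),   # (D)
        "flip_h":      image[:, ::-1],                                    # (E)
        "bright_low":  np.clip(image * 0.6, 0, 255).astype(np.uint8),     # (F)
        "flip_v":      image[::-1, :],                                    # (G)
    }
```

Each call thus turns one captured photograph into six additional training samples, which is how a modest field collection can be expanded into a much larger training set.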
Figure 3. The image annotation outcome in an XML file.
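The XML annotation files shown in Figure 3 presumably follow the Pascal VOC layout produced by common labelling tools such as LabelImg (the paper does not name its exact schema, so that layout is an assumption here). A minimal parser for that layout, using only the standard library, might look like this:

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text):
    """Parse one Pascal VOC-style annotation into (label, box) pairs.

    Each <object> element carries a class <name> and a <bndbox> with
    pixel coordinates (xmin, ymin, xmax, ymax).
    """
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        box = tuple(int(bb.findtext(tag)) for tag in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((label, box))
    return boxes

# hypothetical annotation for one brown spot lesion
sample = """<annotation>
  <object><name>brown_spot</name>
    <bndbox><xmin>34</xmin><ymin>58</ymin><xmax>210</xmax><ymax>173</ymax></bndbox>
  </object>
</annotation>"""
```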
Steps of the Faster R-CNN technique.
| The Faster R-CNN technique: | |
|---|---|
| Step 1: | The entire rice disease image is fed into a CNN to obtain a feature map |
| Step 2: | The convolutional feature map is fed into the RPN to obtain the feature information of the candidate frames |
| Step 3: | The features from each candidate box are examined to recognize whether they belong to a specific rice disease category, and are then classified |
| Step 4: | For candidate frames belonging to a specific disease, the disease location is refined by a bounding-box regressor |
Steps of the RPN for candidate regions.
| RPN steps for candidate regions: | |
|---|---|
| Step 1: | A window is slid over the rice disease feature map |
| Step 2: | A neural network classifies the leaf infections and regresses the location of the frame |
| Step 3: | The position of the sliding window provides approximate location information of the leaf infection |
| Step 4: | Bounding-box regression achieves a more precise location of the leaf infection |
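At every sliding-window position (Step 1), the RPN evaluates a fixed set of reference boxes ("anchors") of several scales and aspect ratios. The sketch below enumerates anchors the way the original Faster R-CNN paper does (3 scales x 3 ratios = 9 per location, stride 16); the rice-disease model's exact anchor settings are not given in the text, so these defaults are assumptions.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Enumerate anchor boxes at every sliding-window position.

    Returns an (feat_h * feat_w * len(scales) * len(ratios), 4) array of
    (xmin, ymin, xmax, ymax) boxes in input-image coordinates.
    """
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # centre of the sliding window, mapped back to image coordinates
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    # ratio r = h / w, with the anchor area held at s * s
                    w, h = s / np.sqrt(r), s * np.sqrt(r)
                    anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(anchors)
```

The RPN then scores each anchor as infection/background and regresses offsets from it, which is what Steps 2-4 above describe.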
Phases of the training processes (Faster R-CNN training model).
| Phases of the Faster R-CNN training process: | |
|---|---|
| Phase 1: | The RPN structure is initialized with the pre-trained framework and then trained. The model's distinctive parameters and the RPN are updated when training finishes |
| Phase 2: | The Faster R-CNN detection network is formed. Proposals are computed with the trained RPN and fed into the Faster R-CNN network, which is then trained. The model and the Faster R-CNN-specific parameters are updated through this training |
| Phase 3: | The RPN network is re-initialized with the model formed in Phase 2, and a second round of RPN training is carried out. Only the RPN-specific parameters are altered during this training; the shared model parameters remain unchanged |
| Phase 4: | The model parameters from Phase 3 are kept fixed. The Faster R-CNN detection network is formed and trained a second time to fine-tune its specific parameters |
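The four phases above are the classic alternating-optimization schedule for Faster R-CNN: the RPN and the detection head take turns training while sharing (and eventually freezing) the backbone CNN. The sketch below encodes that schedule as data only, an illustrative summary rather than a training loop; the group names (`shared`, `rpn`, `detector`) are labels introduced here for clarity.

```python
def alternating_training_schedule():
    """Return the four-phase alternating schedule as (phase, init, trainable) rows.

    `shared` denotes the backbone CNN layers shared by the RPN and the
    detection head; `init` names the weights each phase starts from.
    """
    return [
        (1, "imagenet_pretrained", {"shared", "rpn"}),       # train RPN + backbone
        (2, "imagenet_pretrained", {"shared", "detector"}),  # proposals come from the phase-1 RPN
        (3, "phase2_model",        {"rpn"}),                 # backbone frozen, fine-tune RPN
        (4, "phase3_model",        {"detector"}),            # backbone frozen, fine-tune detector
    ]
```

After phase 2, the two networks share convolutional layers; phases 3 and 4 only fine-tune the parts that are unique to each, so the shared backbone stays consistent between them.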
Figure 4. Architecture of Faster R-CNN.
Figure 5. Activation visualization results: (A) rice blast, (B) brown spot, (C) healthy, (D) hispa.
Figure 6. Types of detection results (images collected online and captured in the lab).
(A) Brown Spot, (B) Healthy, (C) Hispa, (D) Rice Blast, (E) Brown Spot and Rice Blast, (F) Hispa and Rice blast, (G) Rice Blast, (H) Hispa and Healthy.
Figure 7. Types of detection results (real-field image).
(A) Rice Blast. (B) Healthy.
Figure 8. Performance comparison with other pre-trained models.
Figure 9. Confusion matrix of the proposed approach.
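Per-class figures such as those reported in the abstract (e.g. 98.09% for rice blast) are typically derived from a confusion matrix like the one in Figure 9 via one-vs-rest accuracy, (TP + TN) / total for each class. A minimal sketch of that computation follows; the example counts are hypothetical, not the paper's actual matrix.

```python
import numpy as np

def per_class_accuracy(cm):
    """One-vs-rest accuracy per class from a confusion matrix.

    `cm[i][j]` counts samples of true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    accs = []
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fp = cm[:, k].sum() - tp   # other classes predicted as k
        fn = cm[k, :].sum() - tp   # class k predicted as something else
        tn = total - tp - fp - fn
        accs.append((tp + tn) / total)
    return accs

# hypothetical 2-class matrix: rows = true, columns = predicted
cm = [[48, 2],
      [1, 49]]
```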
Figure 10. The classification loss of the proposed system.
Comparison of the proposed model with other related studies.
| Researchers | Methods | Dataset (own or publicly available) | Camera used to capture data | Number of observations | Learning rate | Number of iterations | Performance (%) |
|---|---|---|---|---|---|---|---|
| – | FCM-KM and Faster R-CNN fusion | Rice field of the Hunan Rice Research Institute, China | Canon EOS R (2,400 × 1,600 pixels) | 3,010 | 0.001 | 15,000 | Rice blast: 96.71 |
| – | Faster R-CNN | Farm field | Smartphone camera (48 megapixels) | 50 | 0.001 | 5 | Initial steps toward a prototype for automatic detection of rice false smut (RFS) |
| – | Bayes' and SVM classifiers | Rice field images of East Midnapur, India | Nikon COOLPIX P4 digital camera | 1,000 | – | – | Normal leaf image: 92 |
| – | Optimized Deep Neural Network with Jaya Optimization Algorithm (DNN_JOA) | Farm field | High-resolution digital camera | 650 | – | – | Rice blast: 98.9 |
| – | Faster R-CNN | Rice fields in Anhui, Jiangxi, and Hunan Provinces, China | Mobile phone cameras (iPhone 7 & HUAWEI P10) and Sony DSC-QX10 camera | 5,320 | 0.002 | 50,000 | Rice sheath blight: 90.9 |
| – | SVM | Farm field | Nikon D90 digital SLR (12.3 megapixels) | 120 | – | – | For SVM: |
| – | CNN | Kaggle dataset | – | 1,000 | – | – | Prediction accuracy: 99.61 (healthy and leaf blast) |
| – | Simple CNN | Rice fields of Bangladesh Rice Research Institute (BRRI) | Four different types of cameras | 1,426 | 0.0001 | 100 | Mean validation accuracy: 94.33 |
| Proposed model | Faster R-CNN | Both on-field data and the Kaggle dataset | Smartphone camera (Xiaomi Redmi 8) | 16,800 | 0.0002 | 50,965 | Rice blast: 98.09 |