Literature DB >> 32328258

Nondestructive classification of saffron using color and textural analysis.

Morteza Mohamadzadeh Moghadam1, Masoud Taghizadeh1, Hassan Sadrnia2, Hamid Reza Pourreza3.   

Abstract

Saffron classification based on machine vision techniques, rather than on expert opinion alone, is an objective and nondestructive method that can increase the accuracy of this process in real applications. Experts in Iran classify saffron into three classes, Pushal, Negin, and Sargol, based on its apparent characteristics. Four hundred and forty color images of saffron from the three classes were acquired using a mobile phone camera. Twenty-one color features and 99 textural features were extracted by image analysis, and 22 classifiers were trained on these features. The support vector machine (SVM) and Ensemble classifiers outperformed the others: the mean classification accuracy reached 83.9% with the Quadratic SVM and Subspace Discriminant classifiers.
© 2020 The Authors. Food Science & Nutrition published by Wiley Periodicals, Inc.


Keywords:  classification; image processing; saffron

Year:  2020        PMID: 32328258      PMCID: PMC7174224          DOI: 10.1002/fsn3.1478

Source DB:  PubMed          Journal:  Food Sci Nutr        ISSN: 2048-7177            Impact factor:   2.863


INTRODUCTION

Saffron is the most expensive spice in the world. It is cultivated in several countries, including Iran, India, Spain, Greece, Italy, and Morocco (Fernández, 2004). Iran is the largest producer, currently supplying 94% of the world's saffron (Masi et al., 2016). Traditionally, saffron is graded by experts on the basis of apparent qualitative parameters (Kiani & Minaei, 2016). In countries such as Iran, Spain, and India, pure saffron is graded into different categories according to its apparent characteristics. In the local Iranian market, saffron is divided into three types: Sargol, Negin, and Pushal (Peter, 2012; Shahdadi, Barati, Bahador, & Eteghadi, 2016). Another Iranian type, called Daste, Dokhtarpich, or Bunch (red stigmas plus a large amount of yellow style, presented in a small bundle), is less available on the Iranian market and is not traded internationally (Bonyadi, Yazdani, & Saadat, 2014; Azarabadi & Özdemir, 2018). To prepare Pushal, the stigmas (the aerial section of the pistil) are separated from the ends of the three filaments together with a small amount of style (the part of the pistil between the stigma and the ovary) and then dried. Negin consists only of the red part; the three filaments are separated, collected individually, and then dried. The difference between Sargol and Negin is that in Sargol the stigmas have been broken, whereas in Negin the stigmas are mostly whole, without crushed filaments (Figure 1) (Atefi, Akbari Oghaz, & Mehri, 2013; Kafi, Koocheki, & Rashed, 2006). Sargol saffron consists of only the red part of the stigma and has a strong coloring property (Azarabadi & Özdemir, 2018). Because expert opinions differ, errors arise that can be avoided with an objective approach such as image processing (Pourreza, Pourreza, Abbaspour‐Fard, & Sadrnia, 2012).
Advances in machine vision technology have produced accurate, robust, and low‐cost vision systems suitable for food quality inspection, so this technology can also be used to determine the quality of saffron (Kiani & Minaei, 2016).
Figure 1

Different types of saffron including Negin, Sargol, Pushal, and Daste

Kiani, Minaei, and Ghasemi‐Varnamkhasti (2018) proposed using E‐nose, E‐tongue, and computer vision systems (CVS) to evaluate saffron quality and to replace sensory assessment by human experts (Kiani et al., 2018). Minaei, Kiani, Ayyari, and Ghasemi‐Varnamkhasti (2017) demonstrated that the combination of a CVS and a multilayer perceptron (MLP) is a simple tool for evaluating the quality of saffron samples based on color strength; the MLP model recognized saffron color better than PLS and MLR models, with a classification success rate (CSR) of 96.67% (Minaei et al., 2017). Today, color computer vision systems are used in various food industries and agricultural sorting systems because they are reliable, fast, and inexpensive (Donis‐González & Guyer, 2016). Color computer vision has been used to grade or recognize the quality of agricultural products and foods, including dates (Muhammad, 2015), pistachios (Omid, Firouz, Nouri‐Ahmadabadi, & Mohtasebi, 2017), apples (Paulus & Schrevens, 1999), pizza (Sun, 2016), and wheat (Pourreza et al., 2012). A computer vision system is trained on specific patterns, such as texture, geometry, and color properties, extracted from a set of color images of the different classes; the trained system then determines to which class a new image belongs (Faucitano, Huff, Teuscher, Gariepy, & Wegner, 2005). The first step is to extract a large number of features from labeled images. These features must be able to separate the classes correctly so that, after training, the system can automatically categorize new images. Classification is performed by statistical and clustering algorithms that assign each image to the corresponding class (Donis‐González & Guyer, 2016).
The purpose of this study was to design a machine vision technique for detecting different types of saffron (Sargol, Negin, and Pushal) using mobile phone images of bulk samples. Texture properties, color properties, and the percentage of foreign matter (based on color) of saffron were extracted.

MATERIALS AND METHODS

Saffron samples

A total of 440 samples of the saffron kinds available on the market, free of any additives or adulterants, were obtained from various cities of Khorasan Province (Gonabad, Bajestan, Roshtkhar, Sabzevar, Mashhad, Torbat Heydarieh, and Kakhk), and the samples were then coded. Four experts with long experience in saffron trading were selected; they divided the specimens into three classes, Sargol, Negin, and Pushal, and the samples' information was recorded in a database (Zheng & Lu, 2012; Donis‐González & Guyer, 2016; Zhang, Lee, Lillywhite, & Tippetts, 2017).

Image acquisition

Images were acquired with a cellphone camera (Samsung Galaxy S7 Edge SM‐G935FD Dual SIM 32GB) placed on an imaging chamber at a distance of 9 cm from the sample. For lighting, SMD LED strip lights (4014 SMD LED modules) were mounted in the upper part of the imaging chamber, with a diffuser installed under the lamps to make the light uniform. A black background was used to create the best contrast. The shutter speed was 1/500 s without flash; the lens focal length, aperture, and ISO were 4.2 mm, f/1.7, and 100, respectively. Images were captured at the camera's maximum resolution (3,024 × 4,032 pixels) and saved in JPG format. After imaging, the images were transferred to a laptop equipped with MATLAB software (ver. 9.3, R2017b). The images were given to the experts, who classified the samples into three classes; based on the experts' averaged judgment, the 440 samples were divided into 195 Pushal, 129 Negin, and 116 Sargol samples, and this consensus was used as the criterion for labeling the samples.

Image preprocessing

The original sample image is presented in Figure 2a. In the first step, the image is filtered with a low‐pass filter to remove noise and smooth it; the result is shown in Figure 2b. The foreground is selected as the pixels with intensity greater than 20 (Figure 2c). Small objects are removed from the binary foreground image by a morphological opening in which all connected components (objects) with fewer than 3,000 pixels are discarded; the image is then eroded and dilated with a morphological structuring element of 5‐pixel radius. The final foreground is shown in Figure 2d. The saffron part of the image is cropped by selecting the area with nonzero values: the projections of the image onto the vertical and horizontal axes are calculated, and the area between their minimum and maximum nonzero positions is cropped. For the sample image, for example, the area between the two vertical and two horizontal lines shown in Figure 2e is selected; in general, four virtual lines define the cropped area. The cropped image is then used for further processing.
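The preprocessing steps above can be sketched as follows. This is an illustrative Python/SciPy sketch, not the authors' MATLAB implementation: the intensity threshold of 20, the 3,000‐pixel minimum object size, and the 5‐pixel structuring-element radius come from the text, while the choice of smoothing filter and function names are our assumptions.

```python
# Sketch of the preprocessing pipeline: smooth, threshold, remove small
# objects, open with a disk element, crop to the foreground bounding box.
import numpy as np
from scipy import ndimage

def preprocess(gray):
    """gray: 2-D uint8 array. Returns the cropped saffron region."""
    # 1. Low-pass (smoothing) filter to suppress noise.
    smooth = ndimage.uniform_filter(gray.astype(float), size=5)
    # 2. Foreground: pixels with intensity greater than 20.
    fg = smooth > 20
    # 3. Remove connected components with fewer than 3,000 pixels.
    labels, n = ndimage.label(fg)
    sizes = ndimage.sum(fg, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= 3000))
    # 4. Erode then dilate with a disk-shaped element of radius 5.
    yy, xx = np.mgrid[-5:6, -5:6]
    disk = (yy**2 + xx**2) <= 25
    clean = ndimage.binary_dilation(ndimage.binary_erosion(keep, disk), disk)
    # 5. Crop between the extreme nonzero rows/columns (axis projections).
    rows = np.flatnonzero(clean.any(axis=1))
    cols = np.flatnonzero(clean.any(axis=0))
    return gray[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
```

Applied to a real image, the returned array corresponds to the cropped region of Figure 2e.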
Figure 2

Image preprocessing: (a) original sample image, (b) smoothed image, (c) foreground of the image,(d) binary image, (e) selecting cropped area, (f) yellow and white parts, and (g) Pure saffron parts


Color image feature extraction

Color components were extracted from the color image of each saffron sample. The components of various color spaces, including R, G, B, H, S, r, L*, a*, b*, C, I, E, Y, Cb, Cr, Y, I, and Q, were extracted from the images. The yellow and white parts of the cropped original image were selected using the Color Thresholder app: the image is imported into the tool in HSI format, and a suitable hue threshold is chosen by visual inspection; in this case, the minimum and maximum hue values were 0.045 and 0.279, respectively. Figure 2f and Figure 2g show the result of applying these thresholds. Finally, the percentage of foreign matter (yellow and white parts, by pixel count) in the total mass, the percentage of stigmas (red parts, by pixel count) in the total mass, the ratio of foreign matter to stigmas as a percentage, and the components of the various color spaces were extracted from the images.
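The hue‐threshold step can be sketched as below. The hue limits 0.045 and 0.279 are the values quoted in the text; the RGB-to-hue conversion and the function name are our own illustrative choices, not the Color Thresholder app itself.

```python
# Sketch: classify foreground pixels as yellow/white style ("foreign matter",
# hue in [0.045, 0.279]) or red stigma, and return their fractions.
import numpy as np

def foreign_matter_ratio(rgb, fg_mask, h_min=0.045, h_max=0.279):
    """rgb: H x W x 3 float array in [0, 1]; fg_mask: boolean foreground."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    delta = np.where(mx - mn == 0, 1e-12, mx - mn)  # avoid divide-by-zero
    # Standard RGB -> hue conversion, hue normalized to [0, 1).
    h = np.select(
        [mx == r, mx == g],
        [((g - b) / delta) % 6, (b - r) / delta + 2],
        default=(r - g) / delta + 4) / 6.0
    yellow = fg_mask & (h >= h_min) & (h <= h_max)
    red = fg_mask & ~yellow
    total = fg_mask.sum()
    return yellow.sum() / total, red.sum() / total
```

For a pure-yellow pixel (hue 1/6 ≈ 0.167) this lands inside the threshold band, while a pure-red pixel (hue 0) is counted as stigma.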

Textural algorithm

Texture is one of the most important characteristics used to identify regions of interest in an image and has been widely applied in image processing. Textural features are attributes representing the spatial arrangement of the gray levels of pixels in a region of a digital image, and they provide measures of properties of a region such as smoothness, coarseness, and regularity (Wang, Zhang, & Wei, 2019). To analyze texture, the features extracted from the image were the local entropy of the grayscale image (entropy), the local standard deviation of the image (STD), local binary patterns (LBP), and the gray level co‐occurrence matrix (GLCM). The features extracted from the GLCM were contrast, homogeneity, correlation, and energy. Contrast shows the intensity of gray-level variation in the image. Correlation describes the linear dependence between the values of two different pixels; here, μ is the mean of the matrix and σ² its variance. Energy represents the order of the image (the repetition of pixel pairs) and hence the smoothness and uniformity of the sample surface. Homogeneity describes the similarity of a pixel to its neighbors and reflects the uniformity of the image. The statistics computed from the entropy, standard deviation, and local binary pattern matrices are given in Table 1. In addition, the histogram is a graphical representation of the number of pixels at each brightness level of the input image; 25 bins were defined in this study, and the pixel counts falling in each interval were accumulated. In total, 120 features were extracted from each image.
Table 1

Features extracted from entropy, standard deviation, and local binary patterns matrices

Feature               Equation
Mean                  μ = Σ_i i·p(i)
Standard deviation    σ = [Σ_i (i − μ)² p(i)]^(1/2)
Smoothness            1 − 1/(1 + σ²)
Third moment          Σ_i (i − μ)³ p(i)
Uniformity            Σ_i p(i)²
Entropy               −Σ_i p(i) log₂ p(i)
Gray level range      max{i : p(i) ≠ 0} − min{i : p(i) ≠ 0}
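The first-order statistics of Table 1 can be computed directly from a normalized gray-level histogram p(i). This is a minimal illustrative sketch; the function name is ours, and log base 2 is assumed for the entropy.

```python
# Sketch of the Table 1 statistics over a normalized histogram p,
# where p[i] is the probability of gray level i.
import numpy as np

def histogram_features(p):
    i = np.arange(len(p))
    mean = np.sum(i * p)                         # mean gray level
    var = np.sum((i - mean) ** 2 * p)            # variance
    std = np.sqrt(var)                           # standard deviation
    smoothness = 1 - 1 / (1 + var)               # 0 for constant regions
    third_moment = np.sum((i - mean) ** 3 * p)   # histogram skew
    uniformity = np.sum(p ** 2)                  # energy of the histogram
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    levels = i[p > 0]
    gray_range = levels.max() - levels.min()     # span of occupied levels
    return mean, std, smoothness, third_moment, uniformity, entropy, gray_range
```

For a two-level histogram p = [0.5, 0.5], for example, the mean is 0.5, the smoothness 0.2, and the entropy 1 bit.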

The local binary patterns (LBPs)

The local binary pattern is a widely used approach to texture analysis that encodes the neighborhood of each pixel as a binary label. The main advantages of LBP in practical applications are its robustness to monotonic grayscale changes and its computational efficiency, which allows images to be processed in complex real‐time environments. In the basic LBP, each 3 × 3 neighborhood is thresholded by the value of its central pixel; the thresholded neighborhood values are then multiplied by weights assigned to the corresponding pixels, and the resulting values are summed to obtain the number of that texture unit (Pantazi, Moshou, & Tamouridou, 2019).
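The basic operator can be sketched for a single pixel as follows. This is the textbook 3 × 3 LBP with powers-of-two weights; the neighbor ordering is our assumption and need not match the exact MATLAB variant used by the authors.

```python
# Sketch of the basic 3x3 LBP: threshold the 8 neighbors against the center
# pixel and sum the resulting bits weighted by powers of two (code 0..255).
import numpy as np

def lbp_code(patch):
    """patch: 3x3 array; returns the LBP code of the center pixel."""
    center = patch[1, 1]
    # Clockwise neighbor order starting at the top-left corner (assumed).
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbors]
    return sum(b << k for k, b in enumerate(bits))
```

A flat patch (all neighbors equal to the center) yields code 255, while a bright center surrounded by darker pixels yields code 0; an LBP image is built by applying this code to every pixel and histogramming the results.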

Classification model

The features described in the preceding sections were used for classification. Twenty-two different classifiers were employed, as follows:

Decision trees classifiers

A decision tree (DT) is a machine learning algorithm that recursively partitions the training data at each node so as to maximize the separation of the classes. A prediction starts at the root node and proceeds down to a leaf node, which contains the response (Kamiński, Jakubczyk, & Szufel, 2018). The models used in this group were Fine Tree, Medium Tree, and Coarse Tree.

Discriminant analysis classifiers

Discriminant analysis is a multivariate classification technique that assumes the different classes generate data from different Gaussian distributions (Riveiro‐Valiño, Álvarez‐López, & Marey‐Pérez, 2009). The models used in this group were linear discriminant analysis and quadratic discriminant analysis.

Support vector machine classifiers

The support vector machine (SVM) is an effective modeling tool for classification and has been used for regression, pattern classification, prediction, and problem detection (Nasirahmadi et al., 2019). In an SVM, the input space is mapped into a high‐dimensional feature space through a kernel function using minimal training data (Huang, Tang, Yang, & Zhu, 2016). The models used in this group were Linear SVM, Quadratic SVM, Cubic SVM, Fine Gaussian SVM, Medium Gaussian SVM, and Coarse Gaussian SVM.

Nearest neighbor classifiers

Nearest neighbor classifiers are good predictors in low dimensions, although they may lose this capability at larger scales. This classifier identifies the training samples that are nearest (most similar) to a query instance and then classifies the query based on those neighbors (Xie, Yang, & He, 2017). The models used in this group were Fine KNN, Medium KNN, Coarse KNN, Cosine KNN, Cubic KNN, and Weighted KNN.

Ensemble classifiers

An ensemble is a supervised learning approach, such as bagging, boosting, and their variants, that combines multiple models to achieve better predictive performance than could be obtained from any of the constituent models (Dutta et al., 2015). The models used in this group were Boosted Trees, Bagged Trees, Subspace Discriminant, Subspace KNN, and RUSBoost Trees.

Validation and performance evaluation indices

A fivefold stratified cross‐validation technique was used to validate the classification. In k‐fold cross‐validation, the original sample is randomly divided into k equal‐sized subsamples. Of the k subsamples, a single subsample is retained as validation data for testing the model, and the remaining k − 1 subsamples are used as training data. The process is repeated k times, with each of the k subsamples used exactly once as validation data, and the k results are averaged to produce a single estimate. The advantage of this method over repeated random subsampling is that all observations are used for both training and validation, and each observation is used for validation exactly once (Siedliska, Baranowski, & Mazurek, 2014). Accuracy, the confusion matrix, the true‐positive rate (TP rate), false‐negative rate (FN rate), positive predictive rate (PP rate), and false discovery rate (FD rate) were calculated (Xie et al., 2017). The receiver operating characteristic (ROC) was also computed in MATLAB from the true‐positive and false‐positive rates, and the area under the ROC curve, which ranges from 0.5 (no discrimination ability) to 1 (best discrimination ability), was calculated (Nasirahmadi et al., 2019). One‐way analysis of variance (ANOVA) and Duncan's test were used to determine significant differences between classifier accuracies; statistical analysis was performed with SPSS software (IBM Statistics version 23).
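The validation scheme can be sketched as below on synthetic data. This uses scikit-learn as a stand-in for the MATLAB classifiers used by the authors; the data, the polynomial-kernel SVM (a rough analogue of the "Quadratic SVM" preset), and all parameter values are illustrative assumptions.

```python
# Sketch: stratified fivefold cross-validation with accuracy averaged
# over folds, on a synthetic 3-class, 120-feature dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=440, n_features=120, n_informative=20,
                           n_classes=3, random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accs = []
for train_idx, test_idx in skf.split(X, y):
    # Degree-2 polynomial kernel, loosely analogous to a quadratic SVM.
    clf = SVC(kernel='poly', degree=2).fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))
print(round(float(np.mean(accs)), 3))  # mean accuracy over the 5 folds
```

Stratification keeps the Pushal/Negin/Sargol class proportions roughly constant across folds, which matters here because the classes are unbalanced (195/129/116).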

RESULTS AND DISCUSSION

The 440 color photographs of saffron samples, comprising 195 Pushal, 129 Negin, and 116 Sargol samples, were used in this study. The feature set defined for the classifiers, 21 color features and 99 texture features, was extracted from the 440 samples. Each classifier was then evaluated using fivefold cross‐validation: the samples were randomly partitioned into five groups, four of which were used as training data for developing the model while the remaining group was retained as validation data, and the process was repeated five times so that each group was used once for validation (Kuo, Chung, Chen, Lin, & Kuo, 2016).

Classification when features of color were used in the classifiers

Table 2 lists the accuracy of the 22 classifiers using the 21 color features. The Linear Discriminant, Linear SVM, Bagged Trees, and RUSBoost Trees classifiers had higher average accuracy than the other classifiers, and the average accuracies of these four did not differ significantly from one another at the .05 level. The Linear SVM classifier reached a classification accuracy of 82.27% (±0.70%).
Table 2

Average classification accuracies (%) for 10 times running of fivefold cross‐validation using 21 color features for saffron classification

No.  Classifier              Average accuracy (%)  SD (%)
1    Fine Tree               79.65                 1.43
2    Medium Tree             80.86                 1.68
3    Coarse Tree             79.58                 0.77
4    Linear Discriminant     82.23                 0.66
5    Quadratic Discriminant  58.17                 0.66
6    Linear SVM              82.27                 0.70
7    Quadratic SVM           80.73                 0.69
8    Cubic SVM               78.89                 0.80
9    Fine Gaussian SVM       79.34                 0.68
10   Medium Gaussian SVM     81.11                 0.58
11   Coarse Gaussian SVM     76.70                 0.39
12   Fine KNN                77.33                 1.26
13   Medium KNN              77.71                 0.81
14   Coarse KNN              73.50                 0.40
15   Cosine KNN              77.69                 1.18
16   Cubic KNN               78.02                 0.87
17   Weighted KNN            79.39                 0.85
18   Boosted Trees           81.09                 0.97
19   Bagged Trees            82.18                 1.04
20   Subspace Discriminant   80.65                 0.55
21   Subspace KNN            60.71                 3.12
22   RUSBoost Trees          81.83                 1.19

Classification when features of texture were used in the classifiers

Table 3 lists the accuracy of the 22 classifiers using the 99 texture features. The average classification accuracy of the Subspace Discriminant classifier, 82.83% (±0.85%), was significantly higher than that of the other classifiers (p < .05).
Table 3

Average classification accuracies (%) for 10 times running of fivefold cross‐validation using 99 texture features for saffron classification

No.  Classifier              Average accuracy (%)  SD (%)
1    Fine Tree               69.93                 2.00
2    Medium Tree             72.16                 1.56
3    Coarse Tree             71.42                 0.82
4    Linear Discriminant     72.26                 0.91
5    Quadratic Discriminant  44.30                 0
6    Linear SVM              80.30                 0.69
7    Quadratic SVM           79.53                 1.16
8    Cubic SVM               78.67                 1.41
9    Fine Gaussian SVM       78.02                 1.05
10   Medium Gaussian SVM     79.06                 0.97
11   Coarse Gaussian SVM     74.01                 0.83
12   Fine KNN                75.99                 1.08
13   Medium KNN              75.55                 0.93
14   Coarse KNN              74.82                 0.85
15   Cosine KNN              76.60                 0.98
16   Cubic KNN               76.09                 0.79
17   Weighted KNN            77.75                 0.68
18   Boosted Trees           76.76                 1.05
19   Bagged Trees            77.61                 1.81
20   Subspace Discriminant   82.83                 0.85
21   Subspace KNN            46.12                 1.55
22   RUSBoost Trees          75.94                 1.21

Classification when combinations of all features were used in the classifier

Table 4 shows the average accuracy of the classifiers for classifying saffron into the three classes Sargol, Negin, and Pushal. ANOVA and Duncan's test showed that the average accuracies of the Linear SVM (LSVM), Quadratic SVM (QSVM), Cubic SVM (CSVM), Medium Gaussian SVM (MGSVM), Boosted Trees (BoT), Bagged Trees (BaT), and Subspace Discriminant (SDT) classifiers did not differ significantly from one another at the .05 level; these seven classifiers are therefore all qualified to separate saffron into the three classes. The SVM and Ensemble classifiers were again better than the other classifiers. Classification with all features gave higher average accuracy than with the color or texture features separately: the Quadratic SVM reached 83.9% (±0.69%) and the Subspace Discriminant classifier 83.9% (±0.36%). These results show that the accuracy of saffron class identification increases when color features are used in combination with textural features.
Table 4

Average classification accuracies (%) for 10 times running of fivefold cross‐validation using 120 color and texture features for saffron classification

No.  Classifier              Average accuracy (%)  SD (%)
1    Fine Tree               80.00                 1.33
2    Medium Tree             80.50                 1.18
3    Coarse Tree             78.00                 0.92
4    Linear Discriminant     72.30                 0.93
5    Quadratic Discriminant  44.30                 0
6    Linear SVM              83.10                 0.83
7    Quadratic SVM           83.90                 0.69
8    Cubic SVM               82.70                 1.06
9    Fine Gaussian SVM       79.30                 0.69
10   Medium Gaussian SVM     83.50                 0.47
11   Coarse Gaussian SVM     77.50                 0.54
12   Fine KNN                79.10                 0.46
13   Medium KNN              80.50                 0.46
14   Coarse KNN              77.05                 0.87
15   Cosine KNN              80.20                 0.64
16   Cubic KNN               81.40                 0.44
17   Weighted KNN            81.40                 0.37
18   Boosted Trees           83.85                 0.57
19   Bagged Trees            83.30                 1.29
20   Subspace Discriminant   83.90                 0.36
21   Subspace KNN            45.80                 0.71
22   RUSBoost Trees          81.75                 0.71
Figure 3 shows the confusion matrices for the seven classifiers mentioned above, and a detailed accuracy analysis is reported in Table 5. A high TP rate and PP rate together with a low FN rate and FD rate indicate a good classification model. These values were better for Pushal than for the other classes: the FN and FD rates show that the classification error for Sargol and Negin is larger than for Pushal. Such errors occur when feature values are close to each other and the classes are hard to separate; in appearance, Negin and Sargol are very similar and difficult to distinguish. In Pushal, the three filaments of the stigma are connected and end in a bit of style, whereas in Negin and Sargol the three filaments are separated.
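The per-class indices of Table 5 can be derived directly from a confusion matrix; a minimal sketch (function name ours) follows, assuming C[i, j] counts class-i samples predicted as class j.

```python
# Sketch: per-class TP, FN, PP, and FD rates from a confusion matrix.
import numpy as np

def per_class_rates(C):
    C = np.asarray(C, dtype=float)
    tp = np.diag(C)                  # correctly classified counts
    tp_rate = tp / C.sum(axis=1)     # recall: share of each true class found
    fn_rate = 1 - tp_rate            # share of each true class missed
    pp_rate = tp / C.sum(axis=0)     # precision: share of predictions correct
    fd_rate = 1 - pp_rate            # share of predictions that were wrong
    return tp_rate, fn_rate, pp_rate, fd_rate
```

For a two-class matrix [[8, 2], [1, 9]], for example, the TP rates are 0.8 and 0.9 and the PP rates 8/9 and 9/11.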
Figure 3

Confusion matrices of seven classifiers for distinguishing Pushal, Negin, and Sargol samples (Confusion matrices of the classification models for cultivars as an independent variable). Each model has a specific color representation and the diagonal cells (in blue) present the correct classifications

Table 5

Detailed accuracy analysis using class of the studied seven classifiers

Model   TP rate (%)     FN rate (%)     PP rate (%)     FD rate (%)
        Pu  Ne  Sa      Pu  Ne  Sa      Pu  Ne  Sa      Pu  Ne  Sa
LSVM    92  81  70       8  19  30      90  72  84      10  28  16
QSVM    91  78  81       9  22  19      92  78  79       8  22  21
CSVM    88  78  81      12  22  19      91  76  78       9  24  22
MGSVM   93  82  69       7  18  31      88  75  88      13  25  12
BoT     95  78  72       5  22  28      91  76  81       9  24  19
BaT     95  75  75       5  25  25      87  79  84      13  21  16
SDT     91  82  75       9  18  25      92  75  82       8  25  18
Note: Pu = Pushal, Ne = Negin, Sa = Sargol.
The receiver operating characteristic (ROC) provides an additional way to evaluate the performance of the classification models. An ROC graph illustrates the relative trade‐off between true positives and false positives: its x‐axis is the false‐positive rate and its y‐axis the true‐positive rate of the model (Siedliska et al., 2014). The area under the ROC curve (AUC) is an important statistical parameter for evaluating classifier performance; the closer the AUC is to 1, the better the overall diagnostic performance of the classifier (Hu, Dong, & Liu, 2016). Figure 4 shows the ROC curves, with their AUC values, obtained with the Subspace Discriminant classifier for the Pushal, Negin, and Sargol classes; the AUC values were 0.96 for Pushal, 0.91 for Negin, and 0.93 for Sargol. Table 6 shows that the AUC values for Pushal were better than those for the other classes and that the SDT had the highest AUC values for identifying all three classes. The overall AUC values of the LSVM, QSVM, CSVM, MGSVM, BoT, BaT, and SDT classifiers were 0.936, 0.933, 0.916, 0.936, 0.930, 0.943, and 0.95, respectively. These results further show that the Subspace Discriminant classifier succeeded in classifying the saffron classes using the textural features and the combined features. According to Tables 5 and 6, Pushal saffron was classified better by these models than the other two classes.
Figure 4

The ROC along with AUC values of the Subspace Discriminant classifier: (a) Pushal, (b) Negin, and (c) Sargol

Table 6

The AUC (area under the ROC curve) of the seven classifiers for different saffron classes

Model   Pushal  Negin  Sargol  Overall
LSVM    0.97    0.91   0.93    0.936
QSVM    0.96    0.91   0.93    0.933
CSVM    0.95    0.90   0.90    0.916
MGSVM   0.96    0.91   0.94    0.936
BoT     0.96    0.90   0.93    0.930
BaT     0.97    0.92   0.94    0.943
SDT     0.97    0.93   0.95    0.950
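The one-vs-rest AUC computation described above can be sketched as follows. This uses scikit-learn on synthetic scores purely for illustration; the labels, score distribution, and seed are our assumptions, not the study's data.

```python
# Sketch: ROC curve and AUC for one class ("Pushal" vs. rest) from
# classifier scores, using roc_curve/auc from scikit-learn.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)                  # 1 = target class, 0 = rest
scores = y_true * 0.6 + rng.normal(0, 0.3, 200)   # informative scores
fpr, tpr, _ = roc_curve(y_true, scores)           # false/true positive rates
print(round(auc(fpr, tpr), 3))                    # area under the ROC curve
```

An AUC of 0.5 corresponds to no discrimination ability and 1.0 to perfect separation, matching the range quoted in the validation section.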
The results of this study show that color images acquired with a mobile phone camera were well suited to this experiment. The technique classifies saffron into three classes by measuring different color and textural features of the color images. Local traders in Iran set the price according to the type of saffron (Pushal, Negin, and Sargol), because this classification is strongly related to saffron quality (Azarabadi & Özdemir, 2018). Few studies have examined the classification of saffron with machine learning methods; the method described here can be a valuable tool for increasing the accuracy of pricing and for assuring customers of the product they purchase. Using other machine learning methods and the morphological characteristics of saffron could improve the technique used in this study, and further work could lead to application software usable by end users for saffron classification.

CONCLUSIONS

In summary, these results showed that visual texture and color indices can serve as good indices for separating Pushal, Negin, and Sargol saffron. Saffron samples were collected from cities of Khorasan Province, and a commercially available mobile phone was used to capture the saffron images. The images were given to experts, who classified the samples into three classes: Sargol, Negin, and Pushal. A total of 120 features were extracted from the saffron images, and the texture and color features were used as inputs to 22 different classifiers. The SVM and Ensemble classifiers performed better than the other classifiers, and a classification accuracy of 83.9% was achieved with both the Quadratic SVM and the Subspace Discriminant classifier. Future studies on morphological features and machine learning techniques (e.g., deep learning) could further improve the accuracy of saffron class identification.

CONFLICT OF INTEREST

None declared.

AUTHORS' CONTRIBUTIONS

The first author was responsible for carrying out most of the work, searching the literature, and writing the paper. The second author contributed to manuscript preparation, standardized the paper, and supervised the whole research work. The third and fourth authors also contributed to manuscript preparation. All authors approved the final manuscript for publication.

ETHICAL STATEMENT

This study does not involve any human or animal testing.
References (5 in total)

1.  PTR-TOF-MS and HPLC analysis in the characterization of saffron (Crocus sativus L.) from Italy and Iran.

Authors:  E Masi; C Taiti; D Heimler; P Vignolini; A Romani; S Mancuso
Journal:  Food Chem       Date:  2015-07-02       Impact factor: 7.514

2.  Application of computer image analysis to measure pork marbling characteristics.

Authors:  L Faucitano; P Huff; F Teuscher; C Gariepy; J Wegner
Journal:  Meat Sci       Date:  2004-11-13       Impact factor: 5.209

3.  Potential application of machine vision technology to saffron (Crocus sativus L.) quality characterization.

Authors:  Sajad Kiani; Saeid Minaei
Journal:  Food Chem       Date:  2016-04-29       Impact factor: 7.514

4.  A framework for sensitivity analysis of decision trees.

Authors:  Bogumił Kamiński; Michał Jakubczyk; Przemysław Szufel
Journal:  Cent Eur J Oper Res       Date:  2017-05-24       Impact factor: 2.345

5.  The ocular hypotensive effect of saffron extract in primary open angle glaucoma: a pilot study.

Authors:  Mohammad Hossein Jabbarpoor Bonyadi; Shahin Yazdani; Saeed Saadat
Journal:  BMC Complement Altern Med       Date:  2014-10-15       Impact factor: 3.659

