
A Novel Method for the Separation of Overlapping Pollen Species for Automated Detection and Classification.

Santiago Tello-Mijares1, Francisco Flores2.   

Abstract

The identification of pollen in an automated way will accelerate different tasks and applications of palynology, aiding, among others, climate change studies, medical allergy calendars, and forensic science. The aim of this paper is to develop a system that automatically captures a hundred microscopic images of pollen and classifies them into the 12 different species from the Lagunera Region, Mexico. Pollen grains often overlap in the microscopic images, which increases the difficulty of automated identification and classification. This paper focuses on a method to segment such overlapping pollen. First, the proposed method segments the overlapping pollen. Second, the method separates the pollen based on the mean shift process (100% segmentation) and erosion by H-minima based on the Fibonacci series. The pollen is then characterized by its shape, color, and texture for training and evaluating the performance of three classification techniques: random tree forest, multilayer perceptron, and Bayes net. Using the newly developed system, we obtained segmentation results of 100% and precision and recall of 96.2% and 96.1%, respectively, using the multilayer perceptron with twofold cross validation.


Year:  2016        PMID: 27034710      PMCID: PMC4806277          DOI: 10.1155/2016/5689346

Source DB:  PubMed          Journal:  Comput Math Methods Med        ISSN: 1748-670X            Impact factor:   2.238


1. Introduction

In allergic rhinitis and asthmatic exacerbations, the major allergens are dust mites, followed by pollen. Martínez Ordaz et al. [1] found a significant correlation between the concentration of environmental pollen and the frequency of asthmatic exacerbations in the Lagunera Region in north-central Mexico, with a Pearson's r of 0.63 and a coefficient of determination r² of 0.39 (p < 0.01). This is due to the pollution of a metropolitan area of 1.5 million inhabitants and an arid climate where rain seldom falls to cleanse the air of pollutants. Additionally, Campos et al. [2] observed a significant relationship between the concentration of Chenopodiaceae and Amaranthaceae pollen particles and peak expiratory flow in the same region. Over the past decade, interest has increased in automatic systems to identify pollen, which can be helpful for palynologists and medical specialists. Such a system could facilitate climate change studies, medical allergy calendars, and forensic science, and could help reduce errors in manual palynological scanning. In developing such an automated system, a new challenge is presented: the segmentation of overlapping pollen for species classification. This study is part of a project that aims to develop an automated system that captures hundreds of microscopic images of pollen and selects and classifies these images for calendaring. In this paper, we focus on the first stage of the system, which aims to segment and classify 12 species of pollen into 12 palynological categories (Table 1). In real samples, pollen grains can be found segregated or overlapping (Figure 1, Case I or Case II, resp.), which increases the detection difficulty. In the background of the images we also found Vaseline and other unwanted material. Pollen grains are 10–60 μm diameter structures that contain reproductive matter. The pollen is covered with two membranes (exine and intine).
The exine membrane can have porous and/or elongated grooves, or none, to permit transfer of genetic material in the pollination process [3]. Thus, we proposed a dataset composed of 12 pollen species divided into 12 palynological classes, as can be seen in Table 1 and Figure 1, in order to apply a segmentation method for pollen detection and evaluation of species by three different classification methods.
Table 1

Palynological classification of pollen species images dataset.

Class  Pollen common name (genus/species)            Pollen  Images
1      Huisache (Acacia farnesiana)                    49      45
2      Alfalfa (Medicago sativa)                       47      29
3      Nettle-leaved goosefoot (Chenopodium murale)    60      31
4      Chicken foot grass (Cynodon dactylon)           37      34
5      Grass bitter (Helianthus ciliaris)              43      34
6      White mulberry (Morus alba)                     57      24
7      Pecan (Carya illinoensis)                       32      31
8      Olive (Olea europaea)                           28      28
9      Honey mesquite (Prosopis glandulosa)            86      50
10     Willow (Salix spp.)                             64      30
11     Pepper tree (Schinus molle)                     78      23
12     Johnson grass (Sorghum halepense)               34      31

Total public pollen species images dataset            618     390
Figure 1

Dataset of examples of pollen species images classified according to Table 1. Case I, classes 1–12, and Case II, classes 1–12.

Besides the quality of detection and the expressivity of the visual descriptors used, the estimated accuracy rate of a recognition system is directly related to the number of classes in the dataset and the statistical significance of the proposed evaluation schemes. Several proposals to detect pollen grains can be found in the literature (Rodriguez-Damian et al. [3]; Ranzato et al. [4]; Mitsumoto et al. [5]; Kaya et al. [6]; Dell'Anna et al. [7]), and we have compared them to the method presented in this paper. Rodriguez-Damian et al. [3] and Mitsumoto et al. [5] assume that pollen grains have a circular shape; since the pollen species they analyzed did present this shape, it was a reasonable restriction. Ranzato et al. [4] presented work using a generic filtering approach. Regarding the characterization of detected grains, different visual descriptors have also been proposed. Rodriguez-Damian et al. [3] computed shape and texture descriptors over the segmented grains, but only three pollen species are under consideration, and the same data are used for training and testing. Ranzato et al. [4] used local texture descriptors and worked with twelve classes separated into 10 and 100 randomly generated experiments, with 90% of the set used for training and 10% for testing. Mitsumoto et al. [5] applied a very simple descriptor (pollen size and the ratio of the blue to red pollen autofluorescence spectra). They concluded that, using their descriptors, two pollen species can be separated, although they present no accuracy rates or evaluation schemes. Kaya et al. [6] used 11 different microscopic and morphological characteristic features in a rough set-based expert system for the classification of twenty different pollen grains from the genus Onopordum, obtaining a 90.625% (145/160) rate of success. Finally, Dell'Anna et al. [7] used spectral reflectance from Fourier transform infrared (FT-IR) microspectroscopy with unsupervised (hierarchical cluster analysis, HCA) and supervised (k-NN classifier) learning methods to discriminate and automatically classify pollen grains from 11 different allergy-relevant species belonging to 7 different families (5 single pollen grains per species). The k-NN classifier they built achieved an overall accuracy of 84% and, for nine of the 11 considered plant species, an accuracy greater than or equal to 80%. In this work, we propose a generic approach for grain detection based on mean shift segmentation, applying Otsu's method, morphological erosion and dilation using the H-minima method based on the Fibonacci series, and a gradient vector flow snake (GVFS) when the grains overlap. Moreover, as opposed to Rodriguez-Damian et al. [3] and Mitsumoto et al. [5], the shape of the grains does not have to be circular. Also, a significant evaluation of current visual descriptors is performed, applied to three classification techniques (random tree forest, multilayer perceptron, and Bayes net) using three s-fold cross validation schemes (s = 10, s = 5, and s = 2): a 2-fold setup (2 iterations; 50% training, 50% test), 5-fold (5 iterations; 80% training, 20% test), and 10-fold (10 iterations; 90% training, 10% test). Thus, we propose a novel approach that automates the imaging and classification of pollen.

2. Materials and Methods

Figure 2 illustrates the stages of the proposed segmentation algorithm and classification techniques, which are detailed in the following subsections. First, we explain the slide preparation and image acquisition and, later, the proposed system. The proposed system comprises four stages: image preprocessing (RGB to Lab conversion); pollen image segmentation (mean shift and Otsu's method, followed by the H-minima process based on the Fibonacci series to identify overlapping pollen, and finally a gradient vector flow snake (GVFS)); feature extraction (shape, color, and texture); and classification (into 12 pollen species).
Figure 2

Overall method description.

2.1. Slide Preparation and Image Acquisition

Pollen grains were acquired by air sampling and the sediment technique; we used a Hirst-type pollen collector, commercial brand Burkard (Burkard Mfg. Co. Ltd., Rickmansworth, Hertfordshire, England), and the pollen slides were prepared according to the Wodehouse technique for light microscopy [8]. A small amount of pollen, about as much as can be picked up on the flat end of a toothpick, is placed on the center of a microscope slide, and a drop of alcohol is added and allowed to partly evaporate (one to four drops were added). The alcohol spreads out as it evaporates and leaves the oily and resinous substances of the pollen deposited in ring form around the specimen. The oily ring is wiped off with cotton moistened with alcohol and, before the specimen dries completely, a small drop of hot melted methyl green glycerin jelly is added; the pollen is stirred in with a needle and evenly distributed. During the process the jelly is kept hot by passing the slide over a small flame. A cover glass (#1) is then placed over the specimen and the slide gently heated. For image acquisition, an Axioskop 40 microscope with an AxioCam MR series digital camera (Carl Zeiss Microscopy, LLC, USA) was used; the camera has an 8.9 × 6.7 mm (2/3′′) sensor with 1.4 megapixels and 12-bit digitization. Available exposure times range from 1 millisecond to 20 seconds, and the camera is capable of video capture at 38 frames per second with a resolution of 276 × 208 pixels.

2.2. Preprocessing

Input RGB pollen images (see Figure 2) are converted to the Lab color space. Its perceptual linearity makes it more suitable for the distance-based pollen region segmentation performed in the following step.

2.3. Segmentation

Mean shift segmentation refers to the process of partitioning an image into multiple segments [9]. We propose the use of the mean shift algorithm [10] to segment the pollen region from the microscopic images. Mean shift is a nonparametric technique for analyzing multimodal data, with multiple applications in pattern analysis [11], including image segmentation. We start from the observation that the pollen in a region, either individual (Figure 3(a)) or overlapping (Figure 3(b)), is characterized by similar color values (always close to purple, Figure 3), unlike the area around the pollen (such as the background, Vaseline, and unwanted material). We characterize every pixel of the image by a vector x = [a, b], with [L, a, b] being its color. We then run the mean shift algorithm over this 2-dimensional distribution with a bandwidth value h = 25, selected so that the pollen cell is segmented into two or three regions, which is required for the subsequent binarization with Otsu's method [12] to be effective and yield only two regions. Once the pollen region is segmented, the next step is to separate the overlapping pollen using an effective erosion method inspired by the H-minima transform and based on the Fibonacci series, which identifies whether the region belongs to Case I or Case II.
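A minimal sketch of this stage, assuming scikit-learn's MeanShift and scikit-image's rgb2lab and threshold_otsu as stand-ins for the implementation used in the paper; the synthetic purple-disc test image and the name segment_pollen are illustrative:

```python
import numpy as np
from sklearn.cluster import MeanShift
from skimage.color import rgb2lab
from skimage.filters import threshold_otsu

def segment_pollen(rgb, bandwidth=25.0):
    """Cluster pixels by their (a, b) chromaticity with mean shift, then
    binarize the cluster-mean image with Otsu's threshold."""
    ab = rgb2lab(rgb)[:, :, 1:].reshape(-1, 2)          # keep only a and b
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(ab)
    # Replace every pixel by its cluster's mean a value, then threshold
    means = np.array([ab[labels == k, 0].mean()
                      for k in range(labels.max() + 1)])
    smoothed = means[labels].reshape(rgb.shape[:2])
    return smoothed > threshold_otsu(smoothed)          # pollen is high-a (purple)

# Synthetic test image: a purple "pollen grain" on a pale background
img = np.ones((64, 64, 3)) * np.array([0.9, 0.9, 0.85])
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = [0.5, 0.2, 0.6]
mask = segment_pollen(img)
```

With h = 25 the purple and background chromaticities fall into separate modes, so the label map has exactly two regions and Otsu's threshold cleanly separates them.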
Figure 3

Segmentation of the pollen images. Shown left to right are original image, mean shift, and Otsu's method. (a) Case I and (b) Case II.

2.4. H-Minima Based on the Fibonacci Series

After segmentation of the pollen region, the next step is to separate the overlapping pollen using the proposed erosion method, inspired by the H-minima transform and based on the Fibonacci series, which identifies whether the pollen region belongs to Case I or Case II. Morphological erosion with a disk-shaped structuring element can be used to identify the type of region (Case I or II). This erosion is performed n times for n = 1, …, 21, defining the radius R of the disk from the Fibonacci series Fn = 0, 1, 1, 2, 3, 5, …, 6765, as seen in Figure 4. The pollen region becomes smaller as it is eroded until it disappears altogether. When the pollen region belongs to Case I, it consists of the same object or number of objects (separate pollen grains, delineated in blue in Figure 4) both at the start and near the very end of the erosion process. On the other hand, when the region belongs to Case II, as in the green line in Figure 4, at the beginning there are two objects that become three objects just before they disappear (overlapping pollen grains). Because of the disk structuring element, the pollen region can be separated, as the pollen grains have an elliptical or circular shape. Under this condition, when the final count of binary objects differs from the initial count, the region belongs to Case II; when the initial and final counts are the same, it belongs to Case I. Figure 2 shows the results of applying this morphological operation. After an image is classified as belonging to Case II, a morphological dilation is applied until the number of elements (separate pollen grains) remains the same. These objects will be the initial seeds for the GVFS application.
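The count-change test above can be sketched as follows, assuming scipy and scikit-image; the function names and the synthetic disc masks are illustrative, and the loop stops as soon as the region vanishes, so the larger Fibonacci radii are never instantiated:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import binary_erosion, disk

def fibonacci_radii(n=21):
    """The first n positive Fibonacci numbers: 1, 1, 2, 3, 5, ..."""
    fib = [1, 1]
    while len(fib) < n:
        fib.append(fib[-1] + fib[-2])
    return fib[:n]

def is_case_ii(mask):
    """Erode the binary pollen region with disks of Fibonacci radii and
    compare the object count just before the region vanishes with the
    initial count; a change signals overlapping grains (Case II)."""
    initial = ndimage.label(mask)[1]
    last = initial
    for r in fibonacci_radii():
        count = ndimage.label(binary_erosion(mask, disk(r)))[1]
        if count == 0:          # region fully eroded; stop before huge radii
            break
        last = count
    return last != initial

# One isolated grain (Case I) versus two overlapping grains (Case II)
yy, xx = np.mgrid[:80, :120]
single = (yy - 40) ** 2 + (xx - 40) ** 2 < 18 ** 2
pair = single | ((yy - 40) ** 2 + (xx - 72) ** 2 < 18 ** 2)
```

For the overlapping pair, erosion with increasing radii first severs the neck between the grains (raising the object count) before everything disappears, which is exactly the signature the method looks for.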
Figure 4

H-minima based erosion by the Fibonacci series for binary seeds image.

2.5. Pollen Separation by Gradient Vector Flow Snakes

Once an image is identified as Case I, we proceed to extract the features based on the initial segmented object (Figure 5).
Figure 5

Feature extraction, Case I.

For Case II, we can observe the numerical difference between objects, as seen in Figure 4. Therefore, to recover the pollen separated from the pollen region, we used dilated objects (Figure 6). These objects can work like seeds and the gradient vector flow snakes (GVFS) may be applied.
Figure 6

Feature extraction of mean GVFS, Case II.

Traditional snakes are curves (x(s) = [x(s), y(s)], s ∈ [0,1]) defined within the domain of an image that move under the influence of internal forces, coming from within the curve itself, and external forces, computed from the image data, as first introduced by Kass et al. [13]. The GVFS improves the capture range of the contours obtained from the binary image. Xu and Prince [14] proposed this improved snake to obtain better performance in image segmentation (Figure 6). The formulation of a GVFS is valid for gray images as well as binary images; here, we used binary images, as seen in Figure 6. To compute the GVFS, an edge-map function is first calculated using a Gaussian function. The edge-map function and an approximation of its gradient are then given, and the GVF field is computed to guide the deformation of the snake toward the boundary edges.
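The external-force field can be sketched with the iterative diffusion scheme of Xu and Prince, which spreads the edge-map gradient into flat regions; the parameter values (mu, iteration count) below are illustrative rather than those used by the authors:

```python
import numpy as np
from scipy import ndimage

def gvf(edge_map, mu=0.2, iters=100):
    """Gradient vector flow (Xu & Prince): diffuse the edge-map gradient
    (fx, fy) into a smooth field (u, v) with the explicit update
    u <- u + mu*Laplacian(u) - (u - fx)*|grad f|^2 (and likewise for v)."""
    fy, fx = np.gradient(edge_map)
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2            # anchors the field near strong edges
    for _ in range(iters):
        u += mu * ndimage.laplace(u) - (u - fx) * mag2
        v += mu * ndimage.laplace(v) - (v - fy) * mag2
    return u, v

# Edge map of a binary disc (the dilated Case II seeds would play this role)
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
edge = ndimage.gaussian_gradient_magnitude(disc, sigma=2)
u, v = gvf(edge)
```

Outside the disc the resulting field points toward the boundary, which is what lets the snake be attracted from a distance even where the raw image gradient is zero.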

2.6. Feature Extraction

This stage aims to characterize the separated pollen grains with a feature vector that helps classify the 12 species. The pollen regions are mapped onto the Lab color model image for feature extraction. The selected features can be grouped into three categories, summarized in Table 2.
Table 2

Summary of descriptors.

Shape

Area: A = number of pixels in the region
Perimeter: P = Σ_i √((x_i − x_{i−1})² + (y_i − y_{i−1})²)
Roundness: R = 4πA/P²
Compactness: C = A/P²

First-order texture

Average: μ = (1/N) Σ_{i,j} p(i, j), with N the number of pixels
Median: m = L + ((N/2 − F)/f)·I (grouped-data median)
Variance: σ² = (1/N) Σ_{i,j} (p(i, j) − μ)²
Standard deviation: σ = √(σ²)
Entropy: S = −Σ_{i,j} p(i, j) log p(i, j)

Second-order texture

Contrast descriptor: CM = Σ_{i,j} (i − j)² c(i, j)
Correlation: r = Σ_{i,j} (i − μ_ci)(j − μ_cj) c(i, j)/(σ_ci σ_cj)
Energy: e = Σ_{i,j} c(i, j)²
Local homogeneity: HL = Σ_{i,j} c(i, j)/(1 + |i − j|)

2.6.1. Shape Descriptors

Length and width of the bounding box (MA = length and mA = width) are obtained from the separated pollen grains. The area (A) of the grain is determined by counting the number of pixels contained within the border. Perimeter (P) is the length of the border. The obtained A and P can be used as descriptors because the size of the pollen grain is a palynological parameter of interest. Roundness (R) is defined as 4π times A over squared P; if R = 1, the object is circular. Compactness (C) is defined as A over squared P.

2.6.2. First-Order Texture Descriptors

One way to discriminate between different textures is to compare the L, a, and b levels using first-order statistics. First-order statistics are calculated from the probability of observing a particular pixel value at a randomly chosen location in the image. They depend only on individual pixel values and not on the interaction of neighboring pixel values. Average (μ) is the mean of all intensity values in the image. Median (m) is the value at the central position in the sorted set of pixels. Variance (σ²) is a dispersion measure defined as the mean squared deviation of the variable from its mean. Standard deviation (σ) is a measure of the dispersion of the variable around its center. Entropy (S) of the object in the image is a measure of its information content.

2.6.3. Second-Order Texture Descriptors

Haralick's gray level cooccurrence matrices [15] have been used very successfully for biomedical image classification [16, 17]. Of the 14 features outlined, we considered 4 texture features suitable for our experiment. We propose to use the cooccurrence matrix over the whole Lab color model. The contrast descriptor (CM) is a measure of local variation in the image; it is high when the region within the window has high contrast. Correlation (r) of the texture measures the relationship between the different color intensities. Mathematically, the correlation increases when the variance is low, meaning that the matrix elements are not far from the main diagonal. Energy (e) is the sum of the squared elements of the gray level cooccurrence matrix, also known as uniformity or the angular second moment. Local homogeneity (HL) provides information on the local regularity of the texture; when the elements of the cooccurrence matrix are closer to the main diagonal, the local homogeneity is higher.

2.7. Classification

The main contribution of this work is the proposed segmentation and characterization method for the classification of pollen. In order to classify the segmented pollen into 12 species and obtain final classification results, we explored three different classification approaches implemented in Weka (Waikato Environment for Knowledge Analysis) [18, 19]: random tree forests (RTF) [20], a multilayer perceptron (MLP), and a Bayesian network (BN). Experimental results have been obtained for these three classification techniques (see the confusion matrices in Tables 3–5 and Section 3).
Table 3

Confusion matrixes of multilayer perceptron results.

[12 × 12 confusion matrices for the MLP under 10-, 5-, and 2-fold cross validation, printed side by side; rows give the actual pollen class (a = 1, …, l = 12) and columns the predicted pollen class. Entries concentrate on the main diagonal.]

3. Results and Discussion

For pollen classification in this work, the quantitative and qualitative evaluation of three different classification techniques, namely, random tree forest (RTF, Table 4), multilayer perceptron (MLP, Table 3), and Bayes net (BN, Table 5), was measured using three different s-fold schemes (s = 2, s = 5, and s = 10) in cross validation. The first choice was 2-fold cross validation, where the dataset was divided into two equal parts (50% training set, 50% test set); the other two were 5-fold (80% training set, 20% test set) and 10-fold (90% training set, 10% test set). Tables 3 to 5 show the confusion matrix results for the twelve pollen classes (a = 1, b = 2, …, l = 12, according to Table 1) of the classification techniques (RTF, MLP, and BN). A confusion matrix contains information about the actual and predicted classifications made by a classification technique [21].
Table 4

Confusion matrixes of random tree forests results.

[12 × 12 confusion matrices for the random tree forests under 10-, 5-, and 2-fold cross validation, printed side by side; rows give the actual pollen class (a = 1, …, l = 12) and columns the predicted pollen class. Entries concentrate on the main diagonal.]
Table 5

Confusion matrixes of Bayesian network results.

[12 × 12 confusion matrices for the Bayesian network under 10-, 5-, and 2-fold cross validation, printed side by side; rows give the actual pollen class (a = 1, …, l = 12) and columns the predicted pollen class. Entries concentrate on the main diagonal.]

3.1. Dataset Description

The dataset and associated ground-truth are composed of 389 images from 12 different pollen species (Table 1). The images are acquired at high magnification and resolution (1388 × 1040 pixels). The experimental and resulting images are 278 × 208 × 3 pixels, as the application works on video. To obtain the ground-truth, the contours of the pollen grains were manually identified by an expert palynologist. The Supplementary Materials for download contain the entire pollen image database (classes 1–12), the ground-truth (as binary bmp images), the segmentation result images (as Matlab figures), Excel files with the feature descriptors of every pollen grain, the Weka features and associated class for each experiment (as an ARFF data file), and the segmentation method (as a Matlab interface) (see Supplementary Material available online at http://dx.doi.org/10.1155/2016/5689346).

3.2. Quality Indicators for Pollen Classification

In order to quantitatively assess the pollen classification results and the performance of the RTF, MLP, and BN techniques, several quality indicators were obtained. They are divided into final or external quality indicators, which evaluate the final classification results and are useful for comparison with other works, and internal quality indicators, which are useful for evaluating the internal behavior of the proposed classification options. For the external indicators, let P be the number of pollen substances in the dataset, and let TP, FP, and FN be the number of true positives, false positives, and false negatives, respectively. We then define the TP rate, recall, or sensitivity as TPR = TP/P = TP/(TP + FN) and the precision or positive predictive value as PPV = TP/(TP + FP). As the proposed algorithm first selects the pollen that is then characterized and separated into 12 pollen classes, we can further evaluate the classification performance of the three selected classification schemes via the internal indicators. Here, let N be the number of nonpollen candidates resulting from the application of the proposed method to the complete dataset and let TN be the number of true negatives after classification. We can then define the false positive rate or fallout as FPR = FP/N = FP/(FP + TN) and the F-measure or F1-score, the harmonic mean of TPR and PPV, as HM = 2TP/(2TP + FP + FN).
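These definitions translate directly into code; the function name is illustrative:

```python
def quality_indicators(tp, fp, fn, tn):
    """External (TPR, PPV) and internal (FPR, HM) quality indicators
    computed from raw classification counts."""
    tpr = tp / (tp + fn)               # recall / sensitivity
    ppv = tp / (tp + fp)               # precision / positive predictive value
    fpr = fp / (fp + tn)               # fallout / false positive rate
    hm = 2 * tp / (2 * tp + fp + fn)   # F-measure, harmonic mean of TPR and PPV
    return {"TPR": tpr, "PPV": ppv, "FPR": fpr, "HM": hm}
```

For instance, 90 true positives with 10 false positives, 10 false negatives, and 890 true negatives give TPR = PPV = HM = 0.9 and a fallout of about 0.011.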

3.3. Quantitative and Qualitative Evaluation of Pollen Identification

The results of the pollen selection and feature extraction phases over the described dataset are a collection of 618 pollen candidate regions divided into 12 classes according to the ground-truth. Each pollen substance is characterized by a 33-dimension feature vector. As previously mentioned, three classification techniques have been explored (RTF, MLP, and BN), using three different s-fold cross validation schemes (s = 10, s = 5, and s = 2) as well as the full set for training and testing. Table 6 summarizes all quantitative results. We observed that, in the toughest but most realistic classification experiment, 2-fold cross validation (i.e., the dataset is divided into two equal parts, one used for training and the other for testing), MLP achieves the best results, which shows that this is a reasonable scheme for pollen classification. As expected, as the value of s increases in the s-fold cross validation, results improve, until the classification techniques obtain full precision and recall in the case of the full set.
Table 6

Quantitative classification results.

Technique   Experiment (s-fold)   TPR     PPV     HM      FPR
MLP         s = 10                0.974   0.975   0.974   0.003
MLP         s = 5                 0.961   0.962   0.961   0.004
MLP         s = 2                 0.961   0.962   0.961   0.004

RTF         s = 10                0.955   0.955   0.955   0.005
RTF         s = 5                 0.955   0.956   0.955   0.005
RTF         s = 2                 0.955   0.955   0.955   0.005

BN          s = 10                0.935   0.940   0.935   0.006
BN          s = 5                 0.937   0.941   0.937   0.005
BN          s = 2                 0.929   0.935   0.929   0.006

TPR, PPV, and HM are external quality indicators; FPR is an internal quality indicator.

BN: Bayesian network; FPR: fallout or false positive rate; HM: harmonic mean; MLP: multilayer perceptron; PPV: precision; TPR: sensitivity, recall, or true positive rate; RTF: random tree forests.

3.4. Comparative Discussion

Table 6 shows that the MLP classification method produces very strong results on the toughest classification task, 2-fold cross validation. In particular, MLP outperformed RTF and BN with an average precision of 96.2% against 95.5% and 93.5%, respectively. In addition, the average recall of MLP outperformed RTF and BN with 96.1% against 95.5% and 92.9%, respectively. Finally, the average F-measure of MLP reached 96.1%, compared to 95.5% and 92.9% for RTF and BN, respectively. Table 7 presents the methods that have appeared in the literature for the segmentation of pollen images. As can be observed, several methods do not take advantage of the color information of the pollen images, instead converting the color image to its gray-scale counterpart [3, 4], thereby discarding color information. Also, the problem of overlapping pollen grains is not considered in many methods, which identify borders of pollen in palynological images that contain only one pollen grain or isolated pollen grains [3, 4, 6]. The work of Dell'Anna et al. [7] presents an original way to identify and classify pollen grains by FT-IR and cluster features. The results obtained by Kaya et al. [6] show a precision over 90% (145 detected images out of 160) when training with 440 images and evaluating with 160. In terms of the general image-processing approach, our method accounts for the shape information of pollen substances, in contrast with previously reported techniques [3, 5] that only worked with circular pollen grains.
Table 7

Dataset comparison of the proposed method and other methods, as appeared in the literature.

Proposed
Detection: a database of 12 pollen species is generated, and an MS-Otsu filter is applied to separate and regroup overlapping pollen grains using morphological operations and GVFS
Description: shape, first- and second-order texture
Classification: multilayer perceptron (MLP)
Results (RE/PR/FM): 10-fold cross validation (0.9–0.1): 0.974/0.975/0.974; 5-fold (0.8–0.2): 0.961/0.962/0.961; 2-fold (0.5–0.5): 0.961/0.962/0.961

Kaya et al. [6]
Detection: classifies 20 different pollen types from the genus Onopordum L. (Asteraceae) with a rough set-based expert system; 30 different images were photographed for each pollen type (600 total)
Description: microscopic features: polar axis (P), equatorial axis (E), P/E, exine, intine, tectine, nexine, columella, colpus L, and colpus W
Classification: 440 samples used for training and the remaining 160 samples used for testing (600 total)
Results: overall success of the RS method in recognizing the pollen grains, PR = 90.625% (145/160 pollen samples)

Dell'Anna et al. [7]
Detection: discriminates and automatically classifies pollen grains from 11 different allergy-relevant species belonging to 7 different families
Description: Fourier transform infrared (FT-IR) patterns
Classification: statistical analysis with unsupervised (hierarchical cluster analysis, HCA) and supervised (k-NN classifier) learning methods in the pollen discrimination process
Results: overall accuracy of 84%, with accuracy ≥ 80% for 9 of the 11 species

Mitsumoto et al. [5]
Detection: autofluorescence images, splitting the pollen into RGB channels and assuming circularity of the particles
Description: particle size
Classification: presents the relationship between the grain diameter and the B/R ratio of the pollen grains
Results: values for the pollen grains of a given species tend to cluster within a limited area of the graph (no quantitative results)

Ranzato et al. [4]
Detection: blurring the image at two bandwidths f 1 and f 2 to compute Gaussian differences
Description: local jets (shape descriptor and texture information)
Classification: Bayesian classifier (others were tried with very little improvement); train-test (90%–10%) random selection, after exploiting false classifications in the training data
Results: train-test (90%–10%) random selection, 10 times (100%–6.8%), 100 times (100%–23.2%)

Rodriguez-Damian et al. [3]
Detection: looks for circular grains, as most pollen grains present have this shape; tests several edge detection techniques to find a good shape for each pollen grain
Description: shape: common geometrical features (CGF), statistical moments, and Fourier descriptors; texture: Haralick's coefficients and gray level run length statistics
Classification: minimum distance classifier using preselected attributes; SVM
Results: texture (88%); boundary features (80%); fusing classifiers improves the result (89%)
This paper proposes a scheme for segmenting images of 12 pollen species. To separate the pollen ROI from the background, we proposed a combination of MS and Otsu segmentation. After segmenting the ROI, morphological erosion is applied to the remaining subimage components to emphasize and separate the overlapping grains. Finally, the separated regions are refined by applying boundary removal rules, dilation, and a GVFS. Experimental results confirmed that the proposed method can efficiently segment the pollen ROI of the samples, individual and overlapping, with a 100% success rate. We compared our method against the manual ground-truth; experiments show that the performance of the proposed algorithm is close to that obtained by human segmentation.

4. Conclusion

This paper presents a new segmentation technique for the detection and classification of individual and overlapping pollen grains. The first advantage of the new system is the detection of 100% of the ROIs and the separation of overlapping pollen grains by the proposed segmentation method. The best classification algorithm for the pollen images was the MLP, with a precision of 96.2%; the total time taken to build the model was 16.02 seconds under twofold cross validation. The MLP and RTF classifiers achieved the lowest average error, 0.002, compared with 0.027 for BN when the full set was used for training and evaluation. These results suggest that, among the machine learning algorithms tested, the MLP classifier has the potential to improve significantly on the conventional classification methods used in medical and bioinformatics applications. Future work will involve the development of automated pollen video counters that segment the pollen, extract features from each grain, and classify the grains into the twelve pollen categories; this may enable the production of a calendar-based disease diagnosis tool. In comparison with results from other work, our methodology yields precision (PR), recall (RE), and F-measure (FM) above 96% using the MLP to classify the twelve pollen species with minimal loss in hit rate (Tables 6 and 7). These results may serve as a platform for more complex systems able to chart pollen schedules. The dataset and associated ground truth comprise 389 images from 12 different pollen species (Table 1). The images were captured at high magnification and resolution (1388×1040 pixels); for the experiments, the images were resized to 278×208×3 pixels, as the application targets video. To obtain the ground truth, the contours of the pollen grains were manually traced by an expert palynologist.
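The PR, RE, and FM figures quoted above come from comparing predicted and true class labels. A minimal sketch of how these macro-averaged metrics can be computed from a confusion matrix follows; this is a generic reconstruction, not the Weka pipeline the authors used, and the function name is illustrative:

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Macro-averaged precision, recall, and F-measure from label arrays."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                  # rows: true, cols: predicted
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)     # column sums = predictions per class
    recall = tp / np.maximum(cm.sum(axis=1), 1)        # row sums = true instances per class
    f = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision.mean(), recall.mean(), f.mean()
```

Under twofold cross validation, these metrics would be computed on each held-out half and then averaged.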
The Supplementary Materials available for download contain the entire pollen image database (classes 1–12), the ground truth (as binary BMP images), the segmentation result images (as Matlab figures), an Excel file with the feature descriptors of every pollen grain, the Weka features and associated classes for the experiments (as an ARFF data file), and the segmentation method (as a Matlab interface).
  8 in total

1.  Snakes, shapes, and gradient vector flow.

Authors:  C Xu; J L Prince
Journal:  IEEE Trans Image Process       Date:  1998       Impact factor: 10.856

2.  Pollen discrimination and classification by Fourier transform infrared (FT-IR) microspectroscopy and machine learning.

Authors:  R Dell'Anna; P Lazzeri; M Frisanco; F Monti; F Malvezzi Campeggi; E Gottardini; M Bersani
Journal:  Anal Bioanal Chem       Date:  2009-04-25       Impact factor: 4.142

3.  Classification of pollen species using autofluorescence image analysis.

Authors:  Kotaro Mitsumoto; Katsumi Yabusaki; Hideki Aoyagi
Journal:  J Biosci Bioeng       Date:  2009-01       Impact factor: 2.894

4.  [Respiratory function in allergic asthmatic children and its relation to the environmental pollen concentration].

Authors:  Celsa López Campos; Cuauhtémoc B Rincón Castañeda; Víctor Borja Aburto; Aristides Gómez Muñoz; Oswaldo Téllez Valdés; Verónica Martínez Ordaz; Pedro Cano Ríos; Elia Ramírez Arriaga; Enrique Martínez Hernández; Salvador Martínez-Cairo Cueto; Arnulfo Albores Medina
Journal:  Rev Alerg Mex       Date:  2003 Jul-Aug

5.  Asthmatic exacerbations and environmental pollen concentration in La Comarca Lagunera (Mexico).

Authors:  V A Ordaz; C B Castaneda; C L Campos; V M Rodríguez; J G Saenz; P C Ríos
Journal:  Rev Alerg Mex       Date:  1998 Jul-Aug

6.  Nominated texture based cervical cancer classification.

Authors:  Edwin Jayasingh Mariarputham; Allwin Stephen
Journal:  Comput Math Methods Med       Date:  2015-01-14       Impact factor: 2.238

7.  Cirrhosis classification based on texture classification of random features.

Authors:  Hui Liu; Ying Shao; Dongmei Guo; Yuanjie Zheng; Zuowei Zhao; Tianshuang Qiu
Journal:  Comput Math Methods Med       Date:  2014-02-24       Impact factor: 2.238

8.  Smart spotting of pulmonary TB cavities using CT images.

Authors:  V Ezhil Swanly; L Selvam; P Mohan Kumar; J Arokia Renjith; M Arunachalam; K L Shunmuganathan
Journal:  Comput Math Methods Med       Date:  2013-12-03       Impact factor: 2.238

