
Data on cut-edge for spatial clustering based on proximity graphs.

Alper Aksac, Tansel Ozyer, Reda Alhajj.

Abstract

Cluster analysis plays a significant role in automating the knowledge discovery process in spatial data mining. A good clustering algorithm satisfies two essential conditions, namely high intra-cluster similarity and low inter-cluster similarity. Maximized intra-cluster (within-cluster) similarity produces low distances between data points inside the same cluster. Meanwhile, minimized inter-cluster (between-cluster) similarity increases the distance between data points in different clusters, pushing them further apart. We previously presented a graph-based spatial clustering algorithm, abbreviated CutESC (Cut-Edge for Spatial Clustering). The data presented in this article are related to and supportive of the research paper entitled "CutESC: Cutting edge spatial clustering technique based on proximity graphs" (Aksac et al., 2019) [1], where the interpretation of the research data presented here is available. In this article, we share the parametric version of our algorithm, named CutESC-P, the best parameter settings for the experiments, additional analyses, and some additional information related to the proposed algorithm (CutESC) in [1].
© 2019 The Authors. Published by Elsevier Inc.


Keywords:  Clustering; Graph theory; Proximity graphs; Spatial data mining

Year:  2019        PMID: 31890778      PMCID: PMC6931115          DOI: 10.1016/j.dib.2019.104899

Source DB:  PubMed          Journal:  Data Brief        ISSN: 2352-3409



Data

This article provides details about a novel algorithm (CutESC) for spatial clustering based on proximity graphs introduced in Ref. [1]. Moreover, the data in this article describe tables and figures in support of the article titled "CutESC: Cutting edge spatial clustering technique based on proximity graphs" [1]. CutESC performs clustering automatically for non-uniform densities, arbitrary shapes, and outliers without requiring any prior information or preliminary parameters. Besides, the parametric version of our algorithm (CutESC-P, see Algorithm 1 in 2.1) optionally allows interested users to tune the clustering process by setting two parameters for specific applications. Some additional information related to the CutESC algorithm is provided in 2.2. The 3 thresholding procedures are presented so as to form a hierarchy. Fig. 1 shows what happens when the second and third thresholding rules of the CutESC algorithm are applied in a flipped order. Fig. 2, Fig. 3 show that the CutESC algorithm obtains the optimal solution in the first iteration. The relation between levels is given in Table 1, where the number of clusters and the Calinski-Harabasz score are shown for each level. The best parameter settings for the experiments are given in 2.3. In the pre-processing step, features are standardized by subtracting the mean and scaling to unit variance, so all features are centered around zero. We scanned through combinations of values for each algorithm to find the best parameter settings. Table 2 shows selected parameters for the 3-spiral [5], Aggregation [6], Compound [7], D31 [8], and Zelnik4 [9] datasets. Table 3 shows selected parameters for the Chameleon [3] dataset. Table 4 shows selected parameters for the UCI (Dermatology, Ionosphere, Heart-Statlog, Cardiac-Arrhythmia, Thyroid-Allbp) [4] datasets. Table 5 shows selected parameters for the BSDS500 [10] dataset.
Table 6 shows selected parameters for the Histological [11] dataset. Other details on external clustering criteria are reported in Table 7, Table 8 of 2.4. The additional analysis for real-world datasets based on external clustering criteria is included in 2.5: Table 9 includes the comparison for real-world datasets based on external clustering criteria, and Table 10 includes the number of instances attributed to each cluster as compared with the ground truth. The external clustering criteria of the image segmentation datasets are given in Table 11, Table 12 of 2.6.
Fig. 1

Second and third thresholding rules of the algorithm applied in a flipped order. The algorithm mainly follows a top-down approach: it first removes global edges (large-scale effect) and later removes local edges (small-scale effect), i.e., global level → connected-components (sub-groups) level → neighborhood level. The third rule provides more detail by considering the second-order neighborhood; it is a pruning step for touching problems such as chains and necks. In the last stage of Fig. 1b, it can be seen that the touching problem (between the green connected component (CC) and the brown CC) could not be resolved.

Fig. 2

Our experiments with different cases show that one iteration is sufficient. It is also a trade-off between uniform (see Fig. 2a) and non-uniform (see Fig. 2b) scenarios. When the data become more chaotic, the useful information might be hidden in deeper levels and the algorithm needs to be run for more than one iteration. We also provide this option to users for their special applications (see Algorithm 1 in Section 2.1).

Fig. 3

Running 3 iterations on the synthetic dataset [2], which is used to describe the steps of the CutESC algorithm in the paper [1].

Table 1

Iterative/nested experiments for Fig. 2 and Fig. 3, respectively. High-density and high-dimensional datasets increase the execution time of clustering algorithms, as in our case; it is a trade-off between accuracy and speed. As shown in Fig. 2, Fig. 3, the CutESC algorithm obtains the optimal solution in the first iteration. However, meaningful or useful clusters in chaotic data might be hidden in deeper levels. Moreover, while branching to sub-clusters, the goodness of the resulting clusters should not decrease. Many cluster validation indices have been published in the literature; the CutESC algorithm uses the Calinski-Harabasz score to evaluate the goodness (see Algorithm 1). While this score is increasing, the iteration continues. Here, not only one index but also a combination of indices could be used. The Calinski-Harabasz score is in the range [0, +∞); a higher score indicates better clustering. It considers the quality of the within-cluster and between-cluster distributions to define the score. As seen in the table, the Calinski-Harabasz scores do not change when iterating in the first case (see Fig. 2a), but the number of clusters increases. In the second example, the score first increases and then decreases; the second level has better goodness than the other levels (see Fig. 2b). In the last example, the score is constantly decreasing, thus the iteration stops at the first step.

Case | Measure | Level 1 | Level 2 | Level 3
Fig. 2a | # of Clusters | 3 | 8 | 9
Fig. 2a | Calinski-Harabasz | 6 | 6 | 6
Fig. 2b | # of Clusters | 1 | 6 | 4
Fig. 2b | Calinski-Harabasz | 1 | 18 | 8
Fig. 3 | # of Clusters | 8 | 13 | 19
Fig. 3 | Calinski-Harabasz | 105 | 57 | 25
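The stop rule described above (continue to a deeper level only while the Calinski-Harabasz score improves) can be sketched in a few lines. This is an illustrative sketch only, not the published Algorithm 1: scikit-learn's KMeans stands in for one CutESC refinement level, and the blob data are hypothetical.

```python
# Illustrative sketch (not the published Algorithm 1): continue to a deeper
# "level" only while the Calinski-Harabasz score keeps increasing.
# KMeans stands in for one CutESC refinement step; the data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import calinski_harabasz_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

best_labels, best_score = None, -np.inf
for k in range(2, 10):  # each k plays the role of one level/iteration
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = calinski_harabasz_score(X, labels)
    if score <= best_score:  # score stopped increasing -> stop iterating
        break
    best_labels, best_score = labels, score
```

As in Table 1, a combination of indices could replace the single score in the comparison without changing the loop structure.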
Table 2

Selected Parameters for 3-spiral [5], Aggregation [6], Compound [7], D31 [8], Zelnik4 [9] datasets.

Dataset | HDBSCAN | DBSCAN | OPTICS
3-spiral | minClusterSize = 2 | eps = 0.1, minPoints = 4 | eps = 0.1, minPoints = 3
Aggregation | minClusterSize = 12 | eps = 0.05, minPoints = 3 | eps = 0.082, minPoints = 3
Compound | minClusterSize = 3 | eps = 0.05, minPoints = 3 | eps = 0.1, minPoints = 8
D31 | minClusterSize = 6 | eps = 0.016, minPoints = 3 | eps = 0.013, minPoints = 2
Zelnik4 | minClusterSize = 6 | eps = 0.075, minPoints = 7 | eps = 0.015, minPoints = 3
Scanning Range | (2:1:20) | (0.01:0.001:0.1), (3:1:10) | (0.01:0.001:0.1), (3:1:10)
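The "Scanning Range" rows use (start:step:stop) notation. A hedged sketch of how such a scan can be run for DBSCAN follows; the toy data and the use of ARI as the selection criterion are assumptions, since the record does not state which score picked the winning combination.

```python
# Hypothetical parameter scan in the spirit of the "Scanning Range" rows:
# try every (eps, minPoints) combination and keep the best by ARI against
# the ground truth. The data and the ARI criterion are assumptions.
import numpy as np
from itertools import product
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, y_true = make_blobs(n_samples=200, centers=3, random_state=0)

best_params, best_ari = None, -1.0
for eps, min_pts in product(np.arange(0.5, 2.0, 0.5), range(3, 11)):
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X)
    ari = adjusted_rand_score(y_true, labels)
    if ari > best_ari:
        best_params, best_ari = (eps, min_pts), ari
```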
Table 3

Selected Parameters for Chameleon [3] dataset.

Dataset | CutESC-P | HDBSCAN | DBSCAN | OPTICS
t4.8k | α = 1, β = 0.8 | minClusterSize = 9 | eps = 0.015, minPoints = 6 | eps = 0.013, minPoints = 1
t5.8k | α = 1, β = 0.7 | minClusterSize = 6 | eps = 0.013, minPoints = 10 | eps = 0.013, minPoints = 9
t7.10k | α = 0.7, β = 1 | minClusterSize = 12 | eps = 0.014, minPoints = 7 | eps = 0.02, minPoints = 3
t8.8k | α = 1, β = 1 | minClusterSize = 11 | eps = 0.013, minPoints = 3 | eps = 0.013, minPoints = 2
Scanning Range | (0.1:0.1:1), (0.1:0.1:1) | (2:1:20) | (0.01:0.001:0.2), (3:1:10) | (0.01:0.001:0.2), (3:1:10)
Table 4

Selected Parameters for UCI [4] datasets.

Dataset | HDBSCAN | DBSCAN | OPTICS
Dermatology | minClusterSize = 5 | eps = 0.5, minPoints = 5 | eps = 0.9, minPoints = 10
Ionosphere | minClusterSize = 10 | eps = 0.3, minPoints = 10 | eps = 0.1, minPoints = 5
Heart-Statlog | minClusterSize = 10 | eps = 0.5, minPoints = 9 | eps = 0.5, minPoints = 8
Cardiac-Arrhythmia | minClusterSize = 5 | eps = 0.3, minPoints = 5 | eps = 0.5, minPoints = 8
Thyroid-Allbp | minClusterSize = 10 | eps = 0.3, minPoints = 10 | eps = 0.2, minPoints = 10
Scanning Range | (2:1:10) | (0.1:0.1:1), (3:1:10) | (0.1:0.1:1), (3:1:10)
Table 5

Selected Parameters for BSDS500 [10] dataset.

Image Name | HDBSCAN | DBSCAN | OPTICS
8068 | minClusterSize = 5 | eps = 0.1, minPoints = 3 | eps = 0.1, minPoints = 3
42049 | minClusterSize = 7 | eps = 0.03, minPoints = 3 | eps = 0.03, minPoints = 3
108073 | minClusterSize = 7 | eps = 0.2, minPoints = 3 | eps = 0.2, minPoints = 4
260058 | minClusterSize = 4 | eps = 0.2, minPoints = 3 | eps = 0.2, minPoints = 4
300091 | minClusterSize = 9 | eps = 0.2, minPoints = 3 | eps = 0.2, minPoints = 3
Scanning Range | (2:1:20) | (0.01:0.01:0.2), (3:1:10) | (0.01:0.01:0.2), (3:1:10)
Table 6

Selected Parameters for Histological [11] dataset.

Image Name | HDBSCAN | DBSCAN | OPTICS
ih2ycmuhwrgalo | minClusterSize = 16 | eps = 0.1, minPoints = 3 | eps = 0.15, minPoints = 3
pbphl1xujdvyx | minClusterSize = 13 | eps = 0.3, minPoints = 3 | eps = 0.25, minPoints = 3
ebvubdfxocisgny | minClusterSize = 13 | eps = 0.5, minPoints = 3 | eps = 0.25, minPoints = 3
0anzqyibfuc | minClusterSize = 8 | eps = 0.65, minPoints = 3 | eps = 0.65, minPoints = 2
4nkj5wqcqj | minClusterSize = 10 | eps = 0.35, minPoints = 3 | eps = 0.3, minPoints = 6
Scanning Range | (2:1:20) | (0.1:0.05:1), (3:1:10) | (0.1:0.05:1), (3:1:10)
Table 7

Comparison for 3-spiral, Aggregation, Compound, D31, Zelnik4 based on external clustering criteria.

Each cell reports F-M / ARI / AMI.

Algorithm | 3-spiral | Aggregation | Compound | D31 | Zelnik4
CutESC | 1 / 1 / 1 | 0.859 / 0.802 / 0.798 | 0.976 / 0.968 / 0.937 | 0.620 / 0.571 / 0.809 | 1 / 1 / 1
HDBSCAN | 1 / 1 / 1 | 0.878 / 0.839 / 0.868 | 0.882 / 0.833 / 0.822 | 0.598 / 0.569 / 0.819 | 0.923 / 0.903 / 0.899
AUTOCLUST | 0.610 / 0.442 / 0.476 | 0.865 / 0.809 / 0.799 | 0.946 / 0.927 / 0.905 | 0.665 / 0.628 / 0.813 | 0.872 / 0.836 / 0.649
GDD | 1 / 1 / 1 | 0.865 / 0.809 / 0.799 | 0.959 / 0.944 / 0.907 | 0.294 / 0.109 / 0.338 | 0.992 / 0.990 / 0.984
DBSCAN | 1 / 1 / 1 | 0.865 / 0.809 / 0.799 | 0.961 / 0.949 / 0.885 | 0.652 / 0.624 / 0.807 | 0.935 / 0.919 / 0.916
MeanShift | 0.330 / −0.005 / −0.005 | 0.888 / 0.847 / 0.818 | 0.851 / 0.778 / 0.742 | 0.587 / 0.525 / 0.725 | 0.870 / 0.833 / 0.618
OPTICS | 1 / 1 / 1 | 0.885 / 0.852 / 0.809 | 0.836 / 0.757 / 0.697 | 0.600 / 0.531 / 0.747 | 1 / 1 / 1
Table 8

Comparison for Chameleon datasets based on external clustering criteria.

Each cell reports F-M / ARI / AMI.

Algorithm | t4.8k | t5.8k | t7.10k | t8.8k
CutESC | 0.916 / 0.897 / 0.875 | 0.940 / 0.930 / 0.912 | 0.890 / 0.841 / 0.836 | 0.978 / 0.974 / 0.940
CutESC-P | 0.968 / 0.961 / 0.935 | 0.956 / 0.948 / 0.924 | 0.958 / 0.949 / 0.936 | 0.978 / 0.974 / 0.940
HDBSCAN | 0.958 / 0.950 / 0.908 | 0.926 / 0.913 / 0.876 | 0.953 / 0.944 / 0.933 | 0.937 / 0.924 / 0.901
AUTOCLUST | 0.939 / 0.926 / 0.759 | 0.909 / 0.893 / 0.720 | 0.890 / 0.868 / 0.759 | 0.797 / 0.746 / 0.687
GDD | 0.407 / 0.007 / 0.021 | 0.369 / 0.011 / 0.063 | 0.405 / 0.006 / 0.988 | 0.401 / 0.009 / 0.022
DBSCAN | 0.955 / 0.946 / 0.889 | 0.651 / 0.595 / 0.657 | 0.982 / 0.978 / 0.958 | 0.959 / 0.950 / 0.865
MeanShift | 0.604 / 0.512 / 0.550 | 0.814 / 0.777 / 0.788 | 0.534 / 0.440 / 0.575 | 0.538 / 0.402 / 0.438
OPTICS | 0.952 / 0.943 / 0.832 | 0.650 / 0.594 / 0.657 | 0.963 / 0.955 / 0.831 | 0.959 / 0.950 / 0.868
Table 9

Comparison for real-world datasets based on external clustering criteria. At the bottom of the table, the number of groups detected by the proposed algorithm (CutESC) after each of the 3 cut-edge criteria, which are global edges, local edges, and local inner edges, respectively.

Each cell reports Jaccard / Precision / Recall.

Algorithm | Dermatology | Ionosphere | Heart-Statlog | Cardiac-Arrhythmia | Thyroid-Allbp
CutESC | 0.555 / 0.585 / 0.915 | 0.570 / 0.612 / 0.892 | 0.495 / 0.505 / 0.959 | 0.356 / 0.360 / 0.967 | 0.335 / 0.399 / 0.675
HDBSCAN | 0.417 / 0.511 / 0.693 | 0.379 / 0.577 / 0.526 | 0.384 / 0.537 / 0.575 | 0.323 / 0.323 / 1 | 0.061 / 0.485 / 0.066
DBSCAN | 0.199 / 0.199 / 1 | 0.496 / 0.529 / 0.887 | 0.384 / 0.504 / 0.617 | 0.323 / 0.323 / 1 | 0.173 / 0.494 / 0.211
MeanShift | 0.199 / 0.199 / 1 | 0.538 / 0.538 / 1 | 0.494 / 0.508 / 0.949 | 0.323 / 0.323 / 1 | 0.319 / 0.389 / 0.637
OPTICS | 0.269 / 0.279 / 0.888 | 0.538 / 0.538 / 1 | 0.403 / 0.503 / 0.671 | 0.323 / 0.323 / 1 | 0.265 / 0.452 / 0.390
AUTOCLUST | (not reported) | (not reported) | (not reported) | (not reported) | (not reported)
GDD | (not reported) | (not reported) | (not reported) | (not reported) | (not reported)
CutESC # of groups (Step 1 / Step 2 / Step 3) | 4 / 4 / 4 | 2 / 2 / 2 | 2 / 2 / 2 | 2 / 2 / 2 | 4 / 4 / 4
Table 10

The number of instances attributed to each cluster as compared with the ground truth. In this table, rows represent the true class while columns are the predicted class. The values are reported using the contingency matrix, which is used in statistics to describe the association between two partitions. In a clustering problem, true label names and predicted ones do not need to match, and no correspondence between them is assumed; the number of clusters might not even be the same as the number of true classes. According to this table, the Cardiac-Arrhythmia dataset has 13 true classes, although 16 are reported in the UCI repository. The reason is that 3 classes (1st-degree atrioventricular block, 2nd-degree AV block, 3rd-degree AV block) contain 0 instances in the dataset.

True ClassDermatology
Ionosphere
Heart-Statlog
Cardiac-Arrhythmia
Thyroid-Allbp
12341212121234

1601060438321482243183122815467
22590002254116124256510
34006803826511
4049000212910
5250008138718312
620000545
704
805
9220
10638
11510
12015
13310
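The contingency matrix described in the Table 10 caption can be computed directly with scikit-learn. The tiny label vectors below are hypothetical, chosen only to show the row/column convention.

```python
# The counts in Table 10 form a contingency matrix: C[i, j] is the number of
# instances of true class i that landed in predicted cluster j. The label
# vectors here are hypothetical; cluster names need not match class names.
from sklearn.metrics.cluster import contingency_matrix

truth = [0, 0, 1, 1, 1, 2]
pred = [0, 0, 0, 1, 1, 1]
C = contingency_matrix(truth, pred)
# rows = 3 true classes, columns = 2 predicted clusters
```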
Table 11

Comparison for 5 selected images from BSDS500 dataset based on external clustering criteria.

Each cell reports Dice / Precision / Recall / ARI / AMI.

Algorithm | 8068 | 42049 | 108073 | 260058 | 300091
CutESC | 0.933 / 0.941 / 0.924 / 0.886 / 0.685 | 0.926 / 0.953 / 0.901 / 0.904 / 0.743 | 0.855 / 0.783 / 0.941 / 0.551 / 0.366 | 0.807 / 0.717 / 0.923 / 0.686 / 0.568 | 0.907 / 0.997 / 0.833 / 0.756 / 0.490
HDBSCAN | 0.846 / 0.815 / 0.880 / 0.730 / 0.550 | 0.532 / 0.407 / 0.768 / 0.316 / 0.283 | 0.835 / 0.729 / 0.976 / 0.430 / 0.267 | 0.783 / 0.653 / 0.976 / 0.631 / 0.420 | 0.681 / 0.928 / 0.538 / 0.362 / 0.294
AUTOCLUST | 0.735 / 0.612 / 0.919 / 0.475 / 0.416 | 0.474 / 0.318 / 0.934 / 0.177 / 0.222 | 0.836 / 0.781 / 0.899 / 0.511 / 0.375 | 0.854 / 0.784 / 0.937 / 0.767 / 0.613 | 0.905 / 0.980 / 0.840 / 0.743 / 0.534
GDD | 0.853 / 0.801 / 0.912 / 0.737 / 0.592 | 0.378 / 0.290 / 0.546 / 0.091 / 0.142 | 0.834 / 0.797 / 0.876 / 0.528 / 0.284 | 0.769 / 0.667 / 0.909 / 0.618 / 0.464 | 0.750 / 0.883 / 0.652 / 0.406 / 0.354
DBSCAN | 0.848 / 0.815 / 0.883 / 0.733 / 0.566 | 0.505 / 0.385 / 0.733 / 0.274 / 0.253 | 0.861 / 0.795 / 0.940 / 0.576 / 0.341 | 0.806 / 0.703 / 0.945 / 0.680 / 0.471 | 0.886 / 0.977 / 0.810 / 0.701 / 0.484
MeanShift | 0.840 / 0.818 / 0.863 / 0.723 / 0.522 | 0.525 / 0.389 / 0.807 / 0.294 / 0.304 | 0.839 / 0.744 / 0.963 / 0.465 / 0.284 | 0.708 / 0.718 / 0.697 / 0.558 / 0.456 | 0.623 / 0.903 / 0.475 / 0.288 / 0.209
OPTICS | 0.845 / 0.813 / 0.880 / 0.729 / 0.562 | 0.494 / 0.371 / 0.741 / 0.253 / 0.213 | 0.857 / 0.797 / 0.927 / 0.570 / 0.303 | 0.802 / 0.716 / 0.913 / 0.679 / 0.448 | 0.883 / 0.976 / 0.806 / 0.694 / 0.479
Table 12

Comparison for 5 selected images from Histological dataset based on external clustering criteria.

Each cell reports Dice / Precision / Recall / ARI / AMI.

Algorithm | ih2ycmuhwrgalo | pbphl1xujdvyx | ebvubdfxocisgny | 0anzqyibfuc | 4nkj5wqcqj
CutESC | 0.889 / 0.973 / 0.818 / 0.785 / 0.490 | 0.937 / 0.909 / 0.968 / 0.697 / 0.421 | 0.948 / 0.959 / 0.938 / 0.700 / 0.400 | 0.973 / 0.965 / 0.981 / 0.769 / 0.529 | 0.947 / 0.932 / 0.964 / 0.667 / 0.433
HDBSCAN | 0.870 / 0.877 / 0.863 / 0.725 / 0.562 | 0.876 / 0.959 / 0.805 / 0.582 / 0.359 | 0.953 / 0.943 / 0.963 / 0.692 / 0.453 | 0.973 / 0.962 / 0.985 / 0.765 / 0.510 | 0.899 / 0.937 / 0.864 / 0.509 / 0.292
AUTOCLUST | 0.681 / 0.539 / 0.925 / 0.032 / 0.026 | 0.906 / 0.888 / 0.925 / 0.563 / 0.313 | 0.929 / 0.936 / 0.922 / 0.578 / 0.324 | 0.971 / 0.969 / 0.973 / 0.758 / 0.527 | 0.913 / 0.889 / 0.938 / 0.421 / 0.309
GDD | 0.689 / 0.530 / 0.987 / −0.004 / 0.004 | 0.834 / 0.961 / 0.736 / 0.501 / 0.279 | 0.921 / 0.961 / 0.884 / 0.598 / 0.368 | 0.863 / 0.972 / 0.776 / 0.383 / 0.259 | 0.703 / 0.942 / 0.561 / 0.222 / 0.151
DBSCAN | 0.856 / 0.876 / 0.837 / 0.701 / 0.516 | 0.900 / 0.837 / 0.974 / 0.422 / 0.211 | 0.951 / 0.935 / 0.969 / 0.669 / 0.496 | 0.973 / 0.959 / 0.987 / 0.753 / 0.499 | 0.930 / 0.906 / 0.956 / 0.533 / 0.298
MeanShift | 0.894 / 0.881 / 0.906 / 0.770 / 0.626 | 0.799 / 0.950 / 0.689 / 0.431 / 0.244 | 0.949 / 0.955 / 0.942 / 0.694 / 0.519 | 0.957 / 0.969 / 0.945 / 0.679 / 0.464 | 0.937 / 0.896 / 0.982 / 0.530 / 0.284
OPTICS | 0.870 / 0.857 / 0.884 / 0.718 / 0.600 | 0.899 / 0.839 / 0.967 / 0.425 / 0.210 | 0.945 / 0.958 / 0.933 / 0.683 / 0.441 | 0.972 / 0.963 / 0.982 / 0.759 / 0.491 | 0.910 / 0.939 / 0.882 / 0.543 / 0.315

Experimental design, materials, and methods

The CutESC algorithm with optional configurations. The CutESC (Cut-Edge for Spatial Clustering) algorithm, a graph-based approach, is presented in [1]. This novel algorithm performs clustering automatically for outliers, complex shapes, and irregular densities without requiring any prior information or parameters. Additionally, users can tune the clustering process by setting two parameters for specific applications; CutESC-P refers to this parametric version of our algorithm (see Algorithm 1).

Pseudocode of the CutESC-P Algorithm.
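Algorithm 1 itself is not reproduced in this record. As a loose, hedged sketch of the general cut-edge idea only (build a proximity graph, cut statistically long edges, take connected components as clusters), the fragment below is NOT the published CutESC-P: the Delaunay graph, the single global mean-plus-alpha-times-std threshold, and the `alpha` parameter are simplifying assumptions made for illustration.

```python
# Minimal cut-edge sketch (NOT the published CutESC-P): build a Delaunay
# proximity graph, cut edges longer than mean + alpha * std of all edge
# lengths, and report connected components as clusters. The single global
# threshold and the alpha parameter are simplifying assumptions.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import Delaunay

def cut_edge_clusters(points, alpha=1.0):
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:  # collect the unique Delaunay edges
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    edges = np.array(sorted(edges))
    lengths = np.linalg.norm(points[edges[:, 0]] - points[edges[:, 1]], axis=1)
    keep = lengths <= lengths.mean() + alpha * lengths.std()  # cut long edges
    kept = edges[keep]
    graph = coo_matrix(
        (np.ones(len(kept)), (kept[:, 0], kept[:, 1])),
        shape=(len(points), len(points)),
    )
    _, labels = connected_components(graph, directed=False)
    return labels
```

On two well-separated point groups, the long inter-group edges exceed the threshold and are cut, leaving one connected component (one cluster) per group; CutESC additionally applies local and local-inner criteria that this sketch omits.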

Various experiments on the CutESC algorithm

In this section, some additional information related to the CutESC algorithm is provided in detail. The presented algorithm includes a 3-step thresholding procedure that should be applied hierarchically. In Fig. 1, the second and third thresholding rules of the CutESC algorithm are applied in a flipped order. Also, the CutESC algorithm can be computed iteratively. In Fig. 2, Fig. 3, the CutESC algorithm obtains the optimal solution in the first iteration (level 1). The relation between the levels/iterations is given in Table 1, where the number of clusters and the Calinski-Harabasz score are shown for each level/iteration.

Selected parameters for several datasets

The best parameter settings for the experiments are given in this section. To find the best parameters, we scanned through combinations of values for each algorithm. In the pre-processing step, features are standardized by subtracting the mean and scaling to unit variance, and all features are centered around zero. The best parameters for the 3-spiral [5], Aggregation [6], Compound [7], D31 [8], and Zelnik4 [9] datasets are given in Table 2. Table 3 shows the best parameters for the Chameleon [3] dataset. Table 4 shows the best parameters for the UCI (Dermatology, Ionosphere, Heart-Statlog, Cardiac-Arrhythmia, Thyroid-Allbp) [4] datasets. Table 5 shows the best parameters for the BSDS500 [10] dataset. Finally, the best parameters for the Histological [11] dataset are given in Table 6.
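The standardization step described above corresponds to scikit-learn's StandardScaler; the small matrix below is a hypothetical example.

```python
# Pre-processing as described above: subtract the per-feature mean and scale
# to unit variance, so every feature is centered around zero.
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # hypothetical data
X_std = StandardScaler().fit_transform(X)
# each column of X_std now has mean 0 and unit variance
```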

Additional experiments on external clustering criteria

External clustering criteria validate the experiments based on prior knowledge about the data: when the ground truth is known, the predicted clusters are compared to the true ones (see [1] for more details). Further details on external clustering criteria are reported in Table 7, Table 8. We can see that our method is highly competitive and outperforms other methods on some datasets in terms of external clustering criteria.
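The ARI and AMI columns of Table 7 and Table 8 are standard external criteria implemented in scikit-learn; both are invariant to how cluster labels are named. The toy labelings below are hypothetical.

```python
# External criteria compare a predicted partition against the ground truth.
# Label names are irrelevant: a permuted but identical partition scores 1.0.
from sklearn.metrics import adjusted_mutual_info_score, adjusted_rand_score

truth = [0, 0, 0, 1, 1, 1]
pred = [1, 1, 1, 0, 0, 0]  # same partition, labels swapped

ari = adjusted_rand_score(truth, pred)
ami = adjusted_mutual_info_score(truth, pred)
# both evaluate to 1.0 (up to floating point)
```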

Additional experiments on multidimensional datasets

In this section, the additional analysis for Real-World datasets based on external clustering criteria is included. The comparison for Real-World datasets based on external clustering criteria is included in Table 9. Table 10 includes the number of instances that were attributed to each cluster as compared with the ground truth for Real-World datasets.

External clustering criteria for selected images from BSDS500 and histological datasets

In this section, the external clustering criteria of some selected images from these image segmentation datasets are given in Table 11, Table 12, where our algorithm outperforms other methods.

Specifications Table

Subject: Computer Science (General)
Specific subject area: Spatial Data Mining, Clustering, Proximity Graphs, Graph Theory
Type of data: Table, Figure
How data was acquired: Clustering analysis
Data format: Raw and analyzed
Experimental factors: A preprocessing step is used for heterogeneous features. The features are standardized by subtracting the mean and scaling to unit variance; all features are centered around zero.
Experimental features: Several clustering algorithms were used to cluster various synthetic and real-world datasets from the UCI repository, as well as real data related to image segmentation problems.
Data source location: Institution: University of Calgary; City/Town/Region: Calgary, AB; Country: Canada
Data accessibility: The raw data files are provided in Mendeley Data, https://doi.org/10.17632/hkkbnxf4yp.1 [2]. All other data are with this article.
Related research article: Alper Aksac, Tansel Özyer, Reda Alhajj, "CutESC: Cutting edge spatial clustering technique based on proximity graphs", Pattern Recognition, https://doi.org/10.1016/j.patcog.2019.06.014
Value of the Data

The parametric version of our algorithm presented here may be useful for users to set two parameters to better adapt clustering solutions for particular problems.

This data file presents the best parameter settings used in the experiments, which are helpful for researchers to enhance reproducibility and/or reanalysis.

This data file will be helpful to understand the CutESC algorithm in detail by providing additional information and experiments.

This approach works without any prior information and preliminary parameter settings while automatically discovering clusters with non-uniform densities, arbitrary shapes, and outliers.

References (2 in total)

1. Pablo Arbeláez, Michael Maire, Charless Fowlkes, Jitendra Malik. Contour detection and hierarchical image segmentation. IEEE Trans Pattern Anal Mach Intell, 2011.

2. Luong Nguyen, Akif Burak Tosun, Jeffrey L Fine, Adrian V Lee, D Lansing Taylor, S Chakra Chennubhotla. Spatial statistics for segmenting histological structures in H&E stained tissue images. IEEE Trans Med Imaging, 2017.

Cited by (1 in total)

1. Alper Aksac, Tansel Ozyer, Douglas J Demetrick, Reda Alhajj. CACTUS: cancer image annotating, calibrating, testing, understanding and sharing in breast cancer histopathology. BMC Res Notes, 2020.
