
CoTrade: Confident Co-Training With Data Editing.


Abstract

Co-training is one of the major semi-supervised learning paradigms. It iteratively trains two classifiers on two different views and uses each classifier's predictions on the unlabeled examples to augment the other's training set. During the co-training process, especially in the initial rounds when the classifiers have only mediocre accuracy, one classifier can easily receive labels on unlabeled examples that were erroneously predicted by the other. The performance of co-training-style algorithms is therefore usually unstable. In this paper, the problem of how to reliably communicate labeling information between different views is addressed by a novel co-training algorithm named COTRADE. In each labeling round, COTRADE carries out the label communication process in two steps. First, the confidence of each classifier's predictions on the unlabeled examples is explicitly estimated with specific data editing techniques. Second, a number of each classifier's most confidently predicted labels are passed to the other classifier, under constraints that avoid introducing undesirable classification noise. Experiments on several real-world datasets across three domains show that COTRADE can effectively exploit unlabeled data to achieve better generalization performance.
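For intuition, the two-step label communication described above can be pictured as a generic co-training loop. The Python sketch below is not the authors' COTRADE implementation: where COTRADE estimates prediction confidence with data editing techniques and imposes constraints against classification noise, this sketch substitutes each classifier's raw predict_proba scores and a simple fixed threshold. The function and parameter names (co_train, per_round, threshold) are illustrative assumptions, not from the paper.

    # Minimal co-training sketch: each classifier passes its most confident
    # predictions on unlabeled examples to the other classifier's training set.
    # The probability threshold below is a crude stand-in for COTRADE's
    # data-editing-based confidence estimate.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def co_train(X1, X2, y_init, rounds=10, per_round=5, threshold=0.9):
        """X1, X2: the two feature views of the same examples.
        y_init: integer label array with -1 marking unlabeled examples."""
        y1, y2 = y_init.copy(), y_init.copy()      # per-view training labels
        h1, h2 = GaussianNB(), GaussianNB()
        for _ in range(rounds):
            h1.fit(X1[y1 != -1], y1[y1 != -1])
            h2.fit(X2[y2 != -1], y2[y2 != -1])
            # Each classifier hands its most confident labels to the other.
            for h, X_view, y_other in ((h1, X1, y2), (h2, X2, y1)):
                pool = np.flatnonzero(y_other == -1)      # still unlabeled
                if pool.size == 0:
                    continue
                conf = h.predict_proba(X_view[pool]).max(axis=1)
                top = np.argsort(conf)[::-1][:per_round]  # top-k by confidence
                top = top[conf[top] >= threshold]         # drop uncertain picks
                if top.size:
                    y_other[pool[top]] = h.predict(X_view[pool[top]])
        return h1, h2

A prediction for a new example would combine h1 and h2, e.g. by multiplying their class posteriors. The substantive contribution of the paper is the explicit, data-editing-based confidence estimate and the noise-control constraints, which the fixed threshold here does not reproduce.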

Year:  2011        PMID: 21708503     DOI: 10.1109/TSMCB.2011.2157998

Source DB:  PubMed          Journal:  IEEE Trans Syst Man Cybern B Cybern        ISSN: 1083-4419


Related articles (2 in total)

1.  Hessian-regularized co-training for social activity recognition.

Authors:  Weifeng Liu; Yang Li; Xu Lin; Dacheng Tao; Yanjiang Wang
Journal:  PLoS One       Date:  2014-09-26       Impact factor: 3.240

2.  Self-Trained LMT for Semisupervised Learning.

Authors:  Nikos Fazakis; Stamatis Karlos; Sotiris Kotsiantis; Kyriakos Sgarbas
Journal:  Comput Intell Neurosci       Date:  2015-12-29
