R. Anand, K. G. Mehrotra, C. K. Mohan, S. Ranka.
Abstract
The backpropagation algorithm converges very slowly for two-class problems in which most of the exemplars belong to one dominant class. An analysis shows that this occurs because the computed net error gradient vector is so dominated by the larger class that the net error for the exemplars of the smaller class increases significantly in the initial iterations. The subsequent rate of convergence of the net error is very low. A modified technique is presented for calculating a direction in weight-space that decreases the error for each class. Using this algorithm, learning on two-class classification problems is accelerated by an order of magnitude.
Year: 1993 PMID: 18276526 DOI: 10.1109/72.286891
Source DB: PubMed Journal: IEEE Trans Neural Netw ISSN: 1045-9227
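The core idea in the abstract, a weight-space direction that decreases the error for each class separately, can be sketched as below. This is a minimal illustration under assumed details (a one-layer sigmoid unit, squared error, and a balanced direction formed from the sum of unit-norm per-class gradients), not the authors' exact construction:

```python
import numpy as np

# Sketch of a class-balanced descent direction: compute the error
# gradient separately for each class, then combine the *normalized*
# per-class gradients so that the dominant class cannot swamp the update.
# (Illustrative only; the paper's precise method differs.)

rng = np.random.default_rng(0)

# Imbalanced two-class toy data: 95 exemplars of class 0, 5 of class 1.
X0 = rng.normal(loc=-1.0, size=(95, 2))
X1 = rng.normal(loc=+1.0, size=(5, 2))

w = np.zeros(2)   # weights of a single sigmoid unit
b = 0.0           # bias

def class_gradient(w, b, Xc, yc):
    """Mean squared-error gradient over the exemplars of one class."""
    z = Xc @ w + b
    p = 1.0 / (1.0 + np.exp(-z))        # sigmoid output
    delta = (p - yc) * p * (1.0 - p)    # dE/dz for E = 0.5 * (p - y)^2
    return np.concatenate([Xc.T @ delta / len(Xc), [delta.mean()]])

g0 = class_gradient(w, b, X0, np.zeros(len(X0)))   # class-0 gradient
g1 = class_gradient(w, b, X1, np.ones(len(X1)))    # class-1 gradient

# Balanced direction: sum of the unit-norm per-class gradients.
d = g0 / np.linalg.norm(g0) + g1 / np.linalg.norm(g1)

# Stepping along -d decreases the error of *both* classes to first
# order, as long as the two class gradients are not exactly opposed.
print(d @ g0 > 0, d @ g1 > 0)
```

Because each class gradient is normalized before summing, the direction makes equal-magnitude progress against both classes regardless of how many exemplars each contains, which is what lets the smaller class's error decrease from the very first iterations.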