Brian T Luke (1), Jack R Collins. (1) Advanced Biomedical Computing Center, Advanced Technology Program, SAIC-Frederick, Inc., NCI-Frederick, Frederick, MD 21702, USA. lukeb@ncifcrf.gov
Abstract
BACKGROUND: Experimental examinations of biofluids that measure the concentrations of proteins, protein fragments, or metabolites are being explored as a means of early disease detection, of distinguishing diseases with similar symptoms, and of assessing drug treatment efficacy. Many studies have produced classifiers with high sensitivity and specificity, and it has been argued that accurate results necessarily imply some underlying biology-based features in the classifier. The simplest test of this conjecture is to apply classifiers used in many published studies to datasets designed to contain no information.

RESULTS: The classification accuracy of two fingerprint-based classifiers, a decision tree (DT) algorithm and a medoid classification algorithm (MCA), is examined. These methods are used to examine 30 artificial datasets that contain random concentration levels for 300 biomolecules. Each dataset contains between 30 and 300 Cases and Controls, and since the 300 observed concentrations are randomly generated, these datasets are constructed to contain no biological information. A modest search of decision trees containing at most seven decision nodes finds a large number of unique decision trees with an average sensitivity and specificity above 85% for datasets containing 60 Cases and 60 Controls or fewer, and for datasets with 90 Cases and 90 Controls many DTs have an average sensitivity and specificity above 80%. Even for the largest dataset (300 Cases and 300 Controls), the MCA procedure finds several unique classifiers with an average sensitivity and specificity above 88% using only six or seven features.

CONCLUSION: While it has been argued that accurate classification results must imply some biological basis for the separation of Cases from Controls, our results show that this is not necessarily true. The DT and MCA classifiers are sufficiently flexible to produce good results from datasets that are specifically constructed to contain no information. This means that a chance fitting to the data is possible. All datasets used in this investigation are available on the web.
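The chance-fitting effect the abstract describes can be illustrated with a minimal sketch. This is not the paper's DT or MCA implementation; it is a simplified stand-in that searches single-feature threshold rules (one-node "trees") over a dataset of the smallest size used in the study (30 Cases, 30 Controls, 300 random features). Even this trivial search finds a rule well above chance accuracy on purely random data, because the best of 300 candidate features is selected after the fact.

```python
# Simplified illustration (NOT the authors' algorithm): pick the best
# single-feature threshold rule from 300 random features. With many
# features and few samples, some feature separates arbitrary labels
# well above 50% purely by chance.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_controls, n_features = 30, 30, 300  # smallest dataset in the study

# Random "concentrations" and arbitrary Case/Control labels
X = rng.random((n_cases + n_controls, n_features))
y = np.array([1] * n_cases + [0] * n_controls)

def best_stump_accuracy(x, y):
    """Best training accuracy of a one-threshold rule on feature x."""
    order = np.argsort(x)
    ys = y[order]
    best = 0.5
    for cut in range(1, len(ys)):
        # Rule: predict Case below the cut, Control above (and the reverse)
        acc = (ys[:cut].sum() + (1 - ys[cut:]).sum()) / len(ys)
        best = max(best, acc, 1 - acc)
    return best

accs = [best_stump_accuracy(X[:, j], y) for j in range(n_features)]
print(f"best single-feature training accuracy: {max(accs):.2f}")
```

Deeper trees (the paper searches up to seven decision nodes) and multi-feature medoid classifiers have far more flexibility than this one-node rule, so the overfitting shown here only understates the effect reported in the study.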