
Evaluating machine learning approaches for aiding probe selection for gene-expression arrays.

J B Tobler, M N Molla, E F Nuwaysir, R D Green, J W Shavlik.

Abstract

MOTIVATION: Microarrays are a fast and cost-effective method of performing thousands of DNA hybridization experiments simultaneously. DNA probes are typically used to measure the expression level of specific genes. Because probes vary greatly in the quality of their hybridizations, choosing good probes is a difficult task. If one could accurately choose probes that are likely to hybridize well, then fewer probes would be needed to represent each gene in a gene-expression microarray, and, hence, more genes could be placed on an array of a given physical size. Our goal is to empirically evaluate how successfully three standard machine-learning algorithms (naïve Bayes, decision trees, and artificial neural networks) can be applied to the task of predicting good probes. Fortunately, it is relatively easy to get training examples for such a learning task: place various probes on a gene chip, add a sample in which the corresponding genes are highly expressed, and then record how well each probe measures the presence of its corresponding gene. With such training examples, it is possible that an accurate predictor of probe quality can be learned.
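The training-example construction described above can be sketched as follows. The function name `label_probes` and the use of a top-2.5% intensity cutoff as the positive label are our illustrative assumptions (the paper reports rankings against a top-2.5% criterion, but this is not necessarily its exact training protocol):

```python
def label_probes(intensities, top_fraction=0.025):
    """Turn recorded hybridization intensities for one gene's candidate
    probes into binary training labels: the brightest `top_fraction` of
    probes are labeled good. The 2.5% default is an assumption borrowed
    from the paper's top-2.5% evaluation criterion, not its stated
    training procedure."""
    ranked = sorted(intensities, key=intensities.get, reverse=True)
    n_pos = max(1, int(len(ranked) * top_fraction))
    good = set(ranked[:n_pos])
    return {probe: probe in good for probe in intensities}
```

With per-probe intensities in hand, each labeled probe sequence becomes one training example for the learners compared below.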
RESULTS: Two of the learning algorithms we investigate (naïve Bayes and neural networks) learn to predict probe quality surprisingly well. For example, among the top ten predicted probes for a given gene not used for training, on average about five rank in the top 2.5% of that gene's hundreds of possible probes. Decision-tree induction and the simple approach of using predicted melting temperature to rank probes perform significantly worse than these two algorithms. The features we use to represent probes are very easily computed, and the time taken to score each candidate probe after training is minor. Training the naïve Bayes algorithm takes very little time, and while it takes over 10 times as long to train a neural network, that time is still not very substantial (on the order of a few hours on a desktop workstation). We also report on the information content of the features we use to describe the probes. We find the fraction of cytosine in the probe to be the most informative feature. We also find, not surprisingly, that the nucleotides in the middle of the probe's sequence are more informative than those at the ends of the sequence.
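A minimal sketch of the kind of easily computed sequence features the abstract describes: base-composition fractions (including the fraction of cytosine, reported as the most informative feature) plus the identity of the nucleotide at each position, so a learner can weight mid-sequence positions more heavily. The feature names here are ours, not the paper's exact set:

```python
from collections import Counter

def probe_features(seq):
    """Featurize a candidate probe sequence with cheap features:
    - frac_A..frac_T: base-composition fractions (frac_C was the most
      informative feature per the paper);
    - pos{i}_{base}: a one-hot indicator of which base occupies each
      position, letting a learner discover that middle positions carry
      more information than the ends."""
    seq = seq.upper()
    counts = Counter(seq)
    feats = {f"frac_{b}": counts.get(b, 0) / len(seq) for b in "ACGT"}
    for i, base in enumerate(seq):
        feats[f"pos{i}_{base}"] = 1.0
    return feats
```

Such dictionaries of named features can feed any of the three compared learners; scoring a candidate probe after training is just one feature computation plus one model evaluation.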

Year:  2002        PMID: 12169544     DOI: 10.1093/bioinformatics/18.suppl_1.s164

Source DB:  PubMed          Journal:  Bioinformatics        ISSN: 1367-4803            Impact factor:   6.937


  5 in total

1.  Transcriptional firing helps to drive NETosis.

Authors:  Meraj A Khan; Nades Palaniyar
Journal:  Sci Rep       Date:  2017-02-08       Impact factor: 4.379

2.  Engineering and screening of novel β-1,3-xylanases with desired hydrolysate type by optimized ancestor sequence reconstruction and data mining.

Authors:  Bo Zeng; ShuYan Zhao; Rui Zhou; YanHong Zhou; WenHui Jin; ZhiWei Yi; GuangYa Zhang
Journal:  Comput Struct Biotechnol J       Date:  2022-06-27       Impact factor: 6.155

3.  The identification of informative genes from multiple datasets with increasing complexity.

Authors:  S Yahya Anvar; Peter A C 't Hoen; Allan Tucker
Journal:  BMC Bioinformatics       Date:  2010-01-15       Impact factor: 3.169

4.  Transcriptome analysis of spermatogenically regressed, recrudescent and active phase testis of seasonally breeding wall lizards Hemidactylus flaviviridis.

Authors:  Mukesh Gautam; Amitabh Mathur; Meraj Alam Khan; Subeer S Majumdar; Umesh Rai
Journal:  PLoS One       Date:  2013-03-11       Impact factor: 3.240

5.  iPcc: a novel feature extraction method for accurate disease class discovery and prediction.

Authors:  Xianwen Ren; Yong Wang; Xiang-Sun Zhang; Qi Jin
Journal:  Nucleic Acids Res       Date:  2013-06-12       Impact factor: 16.971

