| Literature DB >> 16927432 |
Christian Stephan, Kai A Reidegeld, Michael Hamacher, André van Hall, Katrin Marcus, Chris Taylor, Philip Jones, Michael Müller, Rolf Apweiler, Lennart Martens, Gerhard Körting, Daniel C Chamrad, Herbert Thiele, Martin Blüggel, David Parkinson, Pierre-Alain Binz, Andrew Lyall, Helmut E Meyer.
Abstract
The newly available techniques for sensitive proteome analysis, and the resulting volume of data, require a new bioinformatics focus on automatic methods for spectrum reprocessing and peptide/protein validation. Manual validation of results in such studies is neither feasible nor objective enough for quality-relevant interpretation. Tools enabling automatic quality control are therefore essential to produce reliable and comparable data in large consortia such as the Human Proteome Organization Brain Proteome Project, where standards and well-defined processing pipelines are important. We show a workflow that begins with choosing the right database model, proceeds through collecting data and processing them against a decoy database, and ends with a quality-controlled protein list merged from several search engines, reported with a known false-positive rate.
Year: 2006 PMID: 16927432 DOI: 10.1002/pmic.200600294
Source DB: PubMed Journal: Proteomics ISSN: 1615-9853 Impact factor: 3.984
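The decoy-database quality control described in the abstract rests on a simple idea: searching spectra against a database of reversed or shuffled (decoy) sequences estimates how many target matches above a score threshold are random. A minimal sketch of this target-decoy false-positive-rate estimate (a generic illustration, not the paper's specific pipeline; function names and scores are hypothetical):

```python
def decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimate the false-positive rate at a score threshold as the
    ratio of decoy hits to target hits passing that threshold."""
    n_target = sum(1 for s in target_scores if s >= threshold)
    n_decoy = sum(1 for s in decoy_scores if s >= threshold)
    return n_decoy / n_target if n_target else 0.0

def threshold_for_fdr(target_scores, decoy_scores, max_fdr=0.05):
    """Find the lowest score threshold whose estimated FDR
    does not exceed max_fdr; None if no threshold qualifies."""
    for t in sorted(set(target_scores)):
        if decoy_fdr(target_scores, decoy_scores, t) <= max_fdr:
            return t
    return None

# Hypothetical search-engine scores for target and decoy matches.
targets = [10, 9, 8, 7, 6, 5]
decoys = [6, 5, 4]
print(decoy_fdr(targets, decoys, 7))          # no decoys score >= 7
print(threshold_for_fdr(targets, decoys, 0.1))  # lowest acceptable cutoff
```

In a consortium setting, the same fixed FDR cutoff can be applied to the output of each search engine before merging, so the combined protein list carries a known, comparable false-positive rate.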