
Reproducible quantitative proteotype data matrices for systems biology.

Hannes L Röst, Lars Malmström, Ruedi Aebersold.

Abstract

Historically, many mass spectrometry-based proteomic studies have aimed at compiling an inventory of protein compounds present in a biological sample, with the long-term objective of creating a proteome map of a species. However, to answer fundamental questions about the behavior of biological systems at the protein level, accurate and unbiased quantitative data are required in addition to a list of all protein components. Fueled by advances in mass spectrometry, the proteomics field has thus recently shifted focus toward the reproducible quantification of proteins across a large number of biological samples. This provides the foundation to move away from pure enumeration of identified proteins toward quantitative matrices of many proteins measured across multiple samples. It is argued here that data matrices consisting of highly reproducible, quantitative, and unbiased proteomic measurements across a high number of conditions, referred to here as quantitative proteotype maps, will become the fundamental currency in the field and provide the starting point for downstream biological analysis. Such proteotype data matrices, for example, are generated by the measurement of large patient cohorts, time series, or multiple experimental perturbations. They are expected to have a large effect on systems biology and personalized medicine approaches that investigate the dynamic behavior of biological systems across multiple perturbations, time points, and individuals.
© 2015 Röst et al. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).


Year:  2015        PMID: 26543201      PMCID: PMC4710225          DOI: 10.1091/mbc.E15-07-0507

Source DB:  PubMed          Journal:  Mol Biol Cell        ISSN: 1059-1524            Impact factor:   4.138


INTRODUCTION

For quantitative systems biology, accurate and precise measurements of analyte concentrations across multiple conditions constitute a crucial requirement. This allows researchers to study human disease across large cohorts, compare multiple perturbations, or describe the dynamics of a transformation in a biological system. The data output of a typical systems biology experiment is generally a two-dimensional data matrix containing quantitative measurement values of specific analytes (first dimension) across multiple samples (second dimension; Figure 1a). For proteomic measurements, the analytes are typically peptides, modified peptides, or proteins inferred from peptide measurements. The comprehensiveness and accuracy of the data matrix mostly determine the success of the downstream data analysis, where both dimensions are of equal importance: the number of measured compounds, as well as the number of analyzed samples.
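As an illustration, such a two-dimensional data matrix and its completeness can be sketched in a few lines of Python (all peptide names and intensity values below are invented for illustration, not taken from the article):

```python
# Minimal sketch of a proteotype data matrix: rows are analytes
# (peptides or proteins), columns are samples or conditions.
# None marks a missing value, as commonly produced by shotgun workflows.

samples = ["ctrl", "perturb_1", "perturb_2", "patient_A"]

matrix = {
    "PEPTIDE_1": [1520.0, 1490.0, 2100.0, 1505.0],  # complete row
    "PEPTIDE_2": [310.0, None, 450.0, None],        # missing values
    "PEPTIDE_3": [88.0, 92.0, None, 85.0],
}

def completeness(matrix):
    """Fraction of matrix cells that carry a quantitative value."""
    cells = [v for row in matrix.values() for v in row]
    return sum(v is not None for v in cells) / len(cells)

def complete_rows(matrix):
    """Analytes quantified in every sample -- usable without imputation."""
    return [analyte for analyte, row in matrix.items() if None not in row]

print(f"matrix completeness: {completeness(matrix):.2f}")  # 0.75
print("complete rows:", complete_rows(matrix))             # ['PEPTIDE_1']
```

Downstream analyses typically operate only on the complete (or imputed) portion of such a matrix, which is why both dimensions matter equally.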
FIGURE 1:

The proteotype data matrix as often found in proteomics experiments. (a) The data matrix contains quantitative values for different analytes (peptides or proteins) measured across multiple samples. One major goal in proteomics is to achieve high throughput (high number of quantified analytes) consistently quantified across many samples (experimental conditions, perturbations, or patient samples). (b) Sample-centric workflows (such as discovery proteomics or shotgun proteomics) place heavy emphasis on a high number of identifications in a single sample, which is achieved by data-dependent acquisition. However, the resulting data matrices often contain missing values due to undersampling issues, and in large studies, not all analytes can be quantified in every single sample. (c) In analyte-centric workflows (such as SRM and other low-throughput targeted proteomics techniques), the major focus is on achieving highly consistent quantification across many samples. The resulting data matrices are often devoid of missing values but only cover a few, carefully selected analytes.

Measurements primarily focusing on the first dimension (many analytes, one or few samples or conditions) may provide a useful overview of the sample and can generate an inventory of analytes present in the sample. These enumeration-oriented approaches, however, often lack the statistical power, number of conditions, or temporal resolution to observe subtle and nontrivial biological effects. For example, multiple consistent and reproducible measurements during a time-dependent system transformation are critical to understanding the time evolution of biological systems. To describe such a system’s response not only qualitatively but also quantitatively, dense sampling during the transition phase is important. Furthermore, to estimate confounding sources of error and variation in quantitative measurements and model them appropriately, repeat measurements of high reproducibility are required. 
In clinical studies, for example, large patient cohorts are critical to uncovering biological signal against a background of individual variation, which means that measurements need to be performed on dozens to hundreds of patient samples with high reproducibility. Conversely, measurements focusing on the second dimension alone (few analytes, many samples) may suffer from bias and potentially miss important parts of the system’s behavior if they are not included in the data collection scheme. In proteomics, the proteins selected for measurement are often chosen based on the availability of measurement assays (frequently, assays based on affinity reagents) and the previous literature, leading to many experimental studies focusing on a few “popular” targets while leaving out a number of potentially crucial system components (Edwards ; Reker and Malmström, 2012). Therefore these types of experiments are only suitable for later stages of a study, when the proteins that best describe a system and its behavior are well characterized. In practice, however, the optimal set of such target proteins can often be defined only by exactly the types of large-scale studies that generate a complete data matrix across conditions. This leads to a catch-22 situation in which, in order to perform large-scale proteomics studies, the targets need to be known in advance, but they can only be identified by such large-scale studies. Historically, for lack of methods to generate large-scale data matrices by direct proteomic measurement, target protein sets for systems studies were frequently extracted from the literature or inferred from surrogate measurements, for example, at the transcript level, with various levels of success. For a truly comprehensive systems approach, both dimensions of the data matrix need to be given equal consideration. 
This would allow researchers to perform a single experiment to obtain information about which proteins are involved and the manner in which they participate in specific biological processes and their quantitative behavior. Specifically, proteomics could be used to study protein–protein interaction networks in their native and perturbed states and reveal how complex diseases such as cancer or diabetes rewire these networks (Lage ; Collins ). Furthermore, improved proteomic profiling could facilitate the search for new protein biomarkers in tissue and blood, since more samples and a larger number of proteins could be quantitatively compared across many patients (Liu , 2015). Applying proteomics techniques to signaling networks would require dense temporal sampling and accurate quantification of posttranslational modifications to capture fast-acting changes in, for example, phosphorylation states (Bodenmiller ). This could improve our capacity to model the dynamics of these cellular signaling networks and lead to potential points for intervention to modulate these networks in disease states (Sabidó ). Furthermore, accurate data matrices would allow a multitude of tools from statistics and machine learning to draw inferences about causal interactions among different proteomic compounds (Swan ; Libbrecht and Noble, 2015). Applying such data-driven methods to biological problems might uncover important regulatory mechanisms and implicate novel proteins in well-studied biological processes, which could help researchers to better determine the behavior of the system. Finally, such matrices could foster integration with high-throughput data from other fields (such as genomics and other sequencing-based fields) in which comprehensive data matrices are already a standard experimental output. However, obtaining high-quality data matrices from proteomics data has historically been highly challenging.

CURRENT APPROACHES IN PROTEOMICS

One of the primary objectives in the field of proteomics in recent decades has been the identification of peptide and protein species in complex biological samples (Sabidó ). In contrast to nucleic acid sequencing–based approaches, particularly by next-generation sequencing (NGS), in proteomics, the analyte cannot be amplified, the dynamic range of protein abundances is substantially larger than that of transcripts (Schwanhäusser ), and the number of analytes (peptides) from a complex sample by far exceeds the available sequencing cycles of even the most advanced instruments. Therefore most proteomics approaches rely on extensive biochemical fractionation methods that produce a (mostly) pure form of the analyte and then subsequently use highly sensitive analysis techniques to determine the nature of and quantify the analyte. Initially, fractionation was achieved on whole proteins using two-dimensional biochemical separation (2D-PAGE) by isoelectric focusing and apparent molecular mass separation, and subsequent identification of separated species was performed by Edman sequencing or mass spectrometry (MS). This approach was supplanted by a number of strategies based on online chromatographic peptide separation and subsequent gas-phase separation or isolation of selected peptide ions (precursor ions) in the gas phase.

Shotgun proteomics

Most high-throughput proteomics studies use so-called “bottom-up” liquid chromatography (LC) coupled to tandem mass spectrometry (LC-MS/MS), in which proteins are enzymatically cleaved into a mixture of peptides that is then separated by online LC and analyzed by MS/MS. In an effort to subject as many peptide precursors (molecular ions of a specific peptide entity) as possible to sequencing, the mass spectrometer selects the most intense peptide precursors for fragmentation at each point in time, a process known as data-dependent acquisition or “shotgun proteomics.” This strategy is highly efficient in obtaining the fragment ion information necessary to identify the amino acid sequence of the respective peptide, since it samples precursor ions at positions of high MS1 intensity and thus has an increased likelihood of obtaining a high-quality fragment ion spectrum (Aebersold and Mann, 2003; Domon and Aebersold, 2006). When applied to whole-cell lysates, shotgun proteomics provides fast enumeration of the most abundant protein species present in the sample, which enables exploratory data analysis and identification of previously unknown peptides. However, whereas shotgun proteomics allows discovery-driven research and offers high throughput, its sensitivity is strongly sample dependent, and it suffers from inconsistent identification reproducibility across samples. This is mainly because, for complex samples, the number of peptides by far exceeds the number of sequencing cycles provided by the mass spectrometer, leading to undersampling of the proteome (Figure 1b; Michalski ; Bruderer ). The severity of these challenges also depends strongly on the chosen sample preparation and quantification strategy. 
The undersampling issue can be alleviated by sample fractionation before LC-MS/MS analysis, albeit at the cost of sample throughput and increased complications in quantitative cross-run comparisons, because several repeat analyses are required per sample to achieve maximal coverage (Domon and Aebersold, 2010). Furthermore, each quantification strategy comes with its own challenges and provides different quantitative accuracy and throughput. Isotopic labeling approaches such as isotope-coded affinity tag (ICAT), stable isotope labeling with amino acids in cell culture (SILAC), or dimethyl N-terminal labeling deliver high quantitative accuracy but increase sample complexity and further exacerbate the undersampling problem. On the other hand, isobaric labeling approaches such as iTRAQ and TMT can increase multiplexing and decrease cross-sample variability on the MS1 level but at the cost of coupling quantification to fragmentation and thus accepting missing values for cases in which no fragmentation was triggered. Even though isotopic and isobaric labeling methods support multiplexing, the capacity is limited to a few (two to ten) channels per MS run, which still poses a substantial challenge in large-scale analyses, in which hundreds of samples may be analyzed. Finally, label-free approaches do not increase sample complexity but still suffer from undersampling, as well as from reduced quantitative accuracy due to the lack of an internal standard. In the context of the systems biology data matrix, the data produced by shotgun proteomics thus pose significant challenges, since measurements are performed with high throughput and coverage but generally low comprehensiveness. Often the resulting data matrices are only complete for the most intense peptides of high-abundance proteins but contain missing values for proteins of lower abundance (Figure 1b; Sabidó ). 
In addition, the more samples are analyzed and the more biologically diverse they are, the smaller the number of complete rows becomes; owing to the intensity dependence of the sampling and the undersampling of complex samples, the missing values are generally not missing completely at random (Bruderer ). Specifically, proteins that vary across the experimental conditions will likely contain more missing values (remaining unquantified in exactly those conditions in which their abundance is low), whereas highly abundant, invariant proteins are faithfully sampled. Paradoxically, the very efficiency with which shotgun proteomics extracts maximal information from a single sample is thus detrimental to producing highly informative data matrices across multiple samples: sampling occurs most often at noninformative positions, whereas information-rich, highly variable processes are sparsely sampled.
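This intensity-dependent dropout can be made concrete with a small sketch (all abundance values and the detection threshold below are invented for illustration): a regulated protein loses exactly its most informative measurements, while an invariant, abundant protein yields a complete but uninformative row.

```python
# Illustrative sketch of why shotgun missing values are not "missing
# completely at random": data-dependent acquisition preferentially samples
# intense precursors, so low-abundance measurements drop out first.

def shotgun_sample(true_abundances, detection_threshold=100.0):
    """Observed values per condition; anything below the threshold is
    missed, mimicking intensity-dependent undersampling."""
    return [a if a >= detection_threshold else None for a in true_abundances]

# A regulated protein dips below the threshold in exactly the conditions
# in which it is down-regulated; a housekeeping protein never does.
regulated    = [500.0, 80.0, 600.0, 60.0]
housekeeping = [5000.0, 5100.0, 4900.0, 5050.0]

obs_regulated    = shotgun_sample(regulated)     # missing where it is low
obs_housekeeping = shotgun_sample(housekeeping)  # complete, but invariant

print("regulated:   ", obs_regulated)
print("housekeeping:", obs_housekeeping)
```

The complete rows of the resulting matrix are biased toward invariant, abundant proteins, which is precisely the opposite of what a systems analysis needs.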

Targeted proteomics

To address these problems, proteomics researchers have developed techniques that allow deterministic sampling across multiple conditions (Sabidó ). The most prominent ones are “targeted proteomics” approaches, specifically selected reaction monitoring (SRM) and, more recently, parallel reaction monitoring, both of which can target multiple proteins (which need to be selected before the measurement) consistently across multiple conditions (Lange ; Domon and Aebersold, 2010). In SRM mode, the mass spectrometer is programmed to deterministically record the signal at fixed coordinates across the chromatographic retention time. These coordinates (the assay) are specific to a peptide analyte and will reliably detect the analyte signal if present, similarly to a classical biochemical assay such as an antibody-based method. The acquisition of signal for multiple fragment ions (transitions) ensures high specificity (Sherman ; Röst ) and sensitivity. This deterministic acquisition strategy increases reproducibility and quantification consistency compared to shotgun approaches, where sampling is semistochastic and data acquisition for each single peptide depends on a multitude of factors. However, SRM is limited in throughput and can only monitor dozens to hundreds of peptides per run, since the deterministic sampling strategy implies acquiring signal even at time points at which no analyte elutes in order to collect complete chromatographic traces (Picotti ). Thus the data matrices obtained from SRM are much more complete than those produced by shotgun proteomics but generally contain one to two orders of magnitude fewer proteins (Figure 1c). Because the proteins to be measured have to be preselected, the measurements tend to be biased by prior hypotheses and may not cover all biologically relevant cellular processes and pathways. 
Therefore SRM has mostly been used in studies in which large sample numbers are required and only few proteins are under investigation (such as clinical biomarker studies; Cima ; Hüttenhain ; Drabovich ; Li, 2013; Surinova , b), for protein quantitative trait analysis, in which sets of proteins are quantified across genetic reference strain collections (Picotti ; Wu ), or for systems biology studies, in which the response of a biological system to perturbations is measured (Sabidó ).
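The notion of "fixed measurement coordinates" described above can be sketched as a small scheduled-SRM assay table (all peptide sequences, m/z values, and retention-time windows below are hypothetical):

```python
# Illustrative sketch of an SRM-style assay: each target peptide is defined
# by fixed coordinates (precursor m/z, several fragment m/z "transitions",
# and an expected retention-time window). The instrument records signal at
# these coordinates in every run, regardless of what actually elutes.

assays = [
    {"peptide": "ELVISLIVESK", "precursor_mz": 614.8,
     "transitions": [716.4, 829.5, 942.6], "rt_window": (22.5, 24.5)},
    {"peptide": "SEQVENCEK", "precursor_mz": 517.7,
     "transitions": [605.3, 704.4], "rt_window": (31.0, 33.0)},
]

def scheduled_targets(assays, rt):
    """(peptide, fragment m/z) pairs monitored at a given retention time.
    The schedule is identical for every sample, which is what makes the
    resulting data matrix complete across runs."""
    return [(a["peptide"], frag)
            for a in assays
            if a["rt_window"][0] <= rt <= a["rt_window"][1]
            for frag in a["transitions"]]

print(scheduled_targets(assays, rt=23.0))
# at minute 23, all three transitions of ELVISLIVESK are monitored
```

Because the schedule, not the sample content, determines what is measured, every run yields a value (or a confident non-detection) for every target, at the cost of a hard limit on the number of targets per run.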

PROTEOMICS FOR SYSTEMS BIOLOGY

For systems biology investigations, neither SRM nor shotgun approaches are fully satisfactory to generate the desired complete data matrix. Whereas shotgun proteomics places heavy emphasis on the analyte dimension and successfully identifies many protein species, it is often challenging to trace analytes across the sample dimension (Figure 1b). Conversely, SRM is well able to quantify analytes across many MS runs but suffers from low throughput in the analyte dimension (Figure 1c). To allow proteomics to become a true systems science, efforts should be directed toward improving proteomics measurement with regard to both dimensions of the data matrix, which means that future improvements in measurement technology and analysis strategy should be evaluated by the quality of the data matrices they are able to produce. Although the field has been highly successful in compiling extensive protein inventories in the past, future efforts should turn toward the generation of fully quantitative, high-quality data matrices. This challenge has been recognized by the field, and multiple efforts toward this aim have been presented recently or are under way. In particular, recent advances in acquiring and analyzing data-independent acquisition mass-spectrometric data, such as SWATH-MS data, constitute a promising advance toward this goal (Gillet ; Röst ). In SWATH-MS, the mass spectrometer performs deterministic acquisition of fragment ion spectra but does not aim to target specific peptides explicitly by their intensity (as shotgun does) or by prior hypothesis (as SRM does). Instead, SWATH-MS records the complete fragment ion signal in a single experiment, essentially creating a complete digital representation of all fragment ion signals in a biological sample. This digitized sample can then be used to extract quantitative information for individual peptides after data acquisition. 
SWATH-MS features the same characteristics as SRM regarding specificity, reproducibility, and sensitivity but allows for high throughput and coverage of the analyzed proteome (Table 1; Gillet ). Similar to SRM, in the sample dimension, SWATH-MS is able to reproducibly measure protein analytes across hundreds of samples. However, unlike SRM, SWATH-MS is capable of high throughput in the analyte dimension and achieves substantial proteomic coverage; in microbial samples, coverage reaches almost saturation even with single MS injections (Röst ; Schubert ). However, one of the main limitations of SWATH-MS is the complexity of the resulting data, which consist of highly multiplexed fragment ion spectra that require novel algorithmic approaches for deconvolution. To assign signal to individual peptides and quantify analytes, multiple open-source tools using complementary algorithms are available, but further research is required to improve the underlying analysis approaches and fully exploit the potential of SWATH-MS.
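The post-acquisition extraction step described above can be sketched in simplified form (the recorded spectra, m/z values, and tolerance below are invented; real DIA data are organized in precursor isolation windows and analyzed with dedicated tools):

```python
# Sketch of post-acquisition extraction in a DIA/SWATH-style setup: the
# instrument has already recorded fragment spectra at every time point;
# a peptide's chromatogram is extracted afterwards by looking up its
# fragment m/z (within a tolerance) in each spectrum.

# recorded data: list of (retention_time, {fragment_mz: intensity})
run = [
    (10.0, {300.15: 120.0, 450.22: 40.0}),
    (10.5, {300.16: 900.0, 450.22: 60.0}),
    (11.0, {300.15: 1500.0, 512.30: 20.0}),
    (11.5, {300.14: 400.0}),
]

def extract_chromatogram(run, target_mz, ppm_tol=50.0):
    """Sum intensity within a ppm tolerance of target_mz at each time point,
    yielding a fragment ion chromatogram for one assay coordinate."""
    tol = target_mz * ppm_tol / 1e6
    trace = []
    for rt, spectrum in run:
        intensity = sum(i for mz, i in spectrum.items()
                        if abs(mz - target_mz) <= tol)
        trace.append((rt, intensity))
    return trace

trace = extract_chromatogram(run, target_mz=300.15)
apex_rt = max(trace, key=lambda point: point[1])[0]
print("chromatogram:", trace)
print("apex at rt =", apex_rt)
```

Because the full fragment ion map is stored, any peptide with known coordinates can be queried in this way after the measurement, which is what makes the "digitized sample" reusable.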
TABLE 1:

Comparison of MS-based proteomics methods.

Criterion                  | Shotgun                           | SRM                          | Data-independent acquisition (SWATH-MS)
Throughput                 | High                              | Low to medium                | Medium to high
Reproducibility            | Low                               | High                         | High
Identification specificity | High                              | Medium                       | Medium
Sensitivity                | Low to medium                     | High to very high            | Medium to high
Quantitative accuracy      | Medium to high                    | High to very high            | High
Acquisition method         | Fragment spectra                  | Fragment chromatograms       | Fragment spectra and chromatograms
Application                | Protein enumeration and discovery | Reproducible quantification  | Reproducible quantification in high throughput
Analysis software          | Well established                  | Visual (manual)              | Multiple tools available

This table compares three major techniques used in mass spectrometry–based proteomics according to different performance criteria: shotgun proteomics, targeted proteomics or SRM, and data-independent acquisition or SWATH-MS. All three techniques have unique benefits and disadvantages; therefore different techniques need to be applied for different tasks.

Thus, SWATH-MS is a technology that addresses both dimensions of the data matrix at the same time and allows true systems analysis on protein measurements. It provides a valuable addition to the set of tools available to proteomics researchers and strikes a balance between throughput and reproducibility, making it an interesting option next to shotgun and targeted proteomics. Recent studies have shown the applicability of SWATH-MS to a multitude of problems in systems biology and medicine. These studies include investigations of the dynamics of microbial virulence with high proteomic coverage (Röst ; Schubert ), the interrogation of the dynamics of the human interactome (Collins ; Lambert ), and the quantification of >2000 proteins in human and mouse tissue across multiple patient samples and experimental conditions (Bruderer ; Guo ). In addition, SWATH-MS measurements allowed the investigation of the abundance of 342 human plasma proteins across >200 individuals, uncovering considerable variation of blood plasma protein levels across genetically identical twins and quantifying the relative contributions of heredity and environmental factors to the overall observed variability (Liu ). 
Analysis of SWATH-MS samples was further facilitated by the recent development of multiple software tools to analyze the generated data sets (MacLean ; Bernhardt ; Röst , 2015a, b; Teleman ; Tsou ), the development of a step-by-step protocol to generate high-quality assay libraries (Schubert ), and the publication of SWATH-compatible assay libraries containing the measurement coordinates for >10,000 human proteins (Rosenberger ). SWATH-MS is thus a promising technology that could help to provide the proteomics field with complete and accurate data matrices and may play a key role in investigating systems biology questions on the protein level.

CONCLUSION

When evaluating proteomics techniques from the viewpoint of the quantitative proteotype data matrix, we can obtain a much clearer picture of data utility for systems biology studies. It becomes apparent that neither patchy matrices littered with missing values nor highly consistent measurements of a few proteins are sufficient for systems approaches to biology. Although shotgun and SRM are valuable for a multitude of purposes, new paradigms need to be developed in order to be able to apply unbiased, data-driven systems approaches in proteomics. The field should embrace this realization and increase efforts to establish novel experimental and computational methods able to produce data matrices with extensive proteome coverage and high comprehensiveness suitable for quantitative biology approaches. Current technology and analysis software have now matured enough to tackle the next major challenge in proteomics, namely the proteotype data matrix. Next-generation proteomics technologies, such as SWATH-MS, present promising solutions to address this challenge. They combine the strength of SRM (high reproducibility and quantitative accuracy) with the high throughput of shotgun proteomics, thus focusing on both the analyte and the sample dimension of the data matrix at the same time. Using SWATH-MS, proteomics technology can produce quantitatively accurate and qualitatively complete data matrices, allowing researchers to track protein quantities across many samples. These advances in the field will allow proteomics researchers to ask novel questions about ensembles of proteins and their behavior across many experimental conditions, time points, and individuals. Thus proteomics is expected to contribute significantly to the emerging fields of precision and personalized medicine, high-throughput screening and analysis, as well as to systems biology and systems medicine.
REFERENCES (40 in total)

1.  Options and considerations when selecting a quantitative proteomics strategy.

Authors:  Bruno Domon; Ruedi Aebersold
Journal:  Nat Biotechnol       Date:  2010-07-09       Impact factor: 54.908

2.  Mass spectrometry and protein analysis.

Authors:  Bruno Domon; Ruedi Aebersold
Journal:  Science       Date:  2006-04-14       Impact factor: 47.728

3.  Too many roads not taken.

Authors:  Aled M Edwards; Ruth Isserlin; Gary D Bader; Stephen V Frye; Timothy M Willson; Frank H Yu
Journal:  Nature       Date:  2011-02-10       Impact factor: 49.962

4.  More than 100,000 detectable peptide species elute in single shotgun proteomics runs but the majority is inaccessible to data-dependent LC-MS/MS.

Authors:  Annette Michalski; Juergen Cox; Matthias Mann
Journal:  J Proteome Res       Date:  2011-02-28       Impact factor: 4.466

5.  Skyline: an open source document editor for creating and analyzing targeted proteomics experiments.

Authors:  Brendan MacLean; Daniela M Tomazela; Nicholas Shulman; Matthew Chambers; Gregory L Finney; Barbara Frewen; Randall Kern; David L Tabb; Daniel C Liebler; Michael J MacCoss
Journal:  Bioinformatics       Date:  2010-02-09       Impact factor: 6.937

6.  Global quantification of mammalian gene expression control.

Authors:  Björn Schwanhäusser; Dorothea Busse; Na Li; Gunnar Dittmar; Johannes Schuchhardt; Jana Wolf; Wei Chen; Matthias Selbach
Journal:  Nature       Date:  2011-05-19       Impact factor: 49.962

7.  Phosphoproteomic analysis reveals interconnected system-wide responses to perturbations of kinases and phosphatases in yeast.

Authors:  Bernd Bodenmiller; Stefanie Wanka; Claudine Kraft; Jörg Urban; David Campbell; Patrick G Pedrioli; Bertran Gerrits; Paola Picotti; Henry Lam; Olga Vitek; Mi-Youn Brusniak; Bernd Roschitzki; Chao Zhang; Kevan M Shokat; Ralph Schlapbach; Alejandro Colman-Lerner; Garry P Nolan; Alexey I Nesvizhskii; Matthias Peter; Robbie Loewith; Christian von Mering; Ruedi Aebersold
Journal:  Sci Signal       Date:  2010-12-21       Impact factor: 8.192

8.  Cancer genetics-guided discovery of serum biomarker signatures for diagnosis and prognosis of prostate cancer.

Authors:  Igor Cima; Ralph Schiess; Peter Wild; Martin Kaelin; Peter Schüffler; Vinzenz Lange; Paola Picotti; Reto Ossola; Arnoud Templeton; Olga Schubert; Thomas Fuchs; Thomas Leippold; Stephen Wyler; Jens Zehetner; Wolfram Jochum; Joachim Buhmann; Thomas Cerny; Holger Moch; Silke Gillessen; Ruedi Aebersold; Wilhelm Krek
Journal:  Proc Natl Acad Sci U S A       Date:  2011-02-07       Impact factor: 11.205

9.  Dissecting spatio-temporal protein networks driving human heart development and related disorders.

Authors:  Kasper Lage; Kjeld Møllgård; Steven Greenway; Hiroko Wakimoto; Joshua M Gorham; Christopher T Workman; Eske Bendsen; Niclas T Hansen; Olga Rigina; Francisco S Roque; Cornelia Wiese; Vincent M Christoffels; Amy E Roberts; Leslie B Smoot; William T Pu; Patricia K Donahoe; Niels Tommerup; Søren Brunak; Christine E Seidman; Jonathan G Seidman; Lars A Larsen
Journal:  Mol Syst Biol       Date:  2010-06-22       Impact factor: 11.429

10.  Selected reaction monitoring for quantitative proteomics: a tutorial.

Authors:  Vinzenz Lange; Paola Picotti; Bruno Domon; Ruedi Aebersold
Journal:  Mol Syst Biol       Date:  2008-10-14       Impact factor: 11.429

