
Assessment of pharmacogenomic agreement.

Zhaleh Safikhani1, Nehme El-Hachem2, Rene Quevedo1, Petr Smirnov3, Anna Goldenberg4, Nicolai Juul Birkbak5, Christopher Mason6, Christos Hatzis7, Leming Shi8, Hugo Jwl Aerts9, John Quackenbush10, Benjamin Haibe-Kains11.   

Abstract

In 2013 we published an analysis demonstrating that drug response data and gene-drug associations reported in two independent large-scale pharmacogenomic screens, Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE), were inconsistent. The GDSC and CCLE investigators recently reported that their respective studies exhibit reasonable agreement and yield similar molecular predictors of drug response, seemingly contradicting our previous findings. Reanalyzing the authors' published methods and results, we found that their analysis failed to account for variability in the genomic data and more importantly compared different drug sensitivity measures from each study, which substantially deviate from our more stringent consistency assessment. Our comparison of the most updated genomic and pharmacological data from the GDSC and CCLE confirms our published findings that the measures of drug response reported by these two groups are not consistent. We believe that a principled approach to assess the reproducibility of drug sensitivity predictors is necessary before envisioning their translation into clinical settings.

Keywords:  Bioinformatics; Biomarkers; Cancer Cell Lines; Drug Response; Experimental Design; High-Throughput Screening; Pharmacogenomics; Statistics

Year:  2016        PMID: 27408686      PMCID: PMC4926729          DOI: 10.12688/f1000research.8705.1

Source DB:  PubMed          Journal:  F1000Res        ISSN: 2046-1402


Introduction

Pharmacogenomic studies correlate genomic profiles with sensitivity to drug exposure in a collection of samples to identify molecular predictors of drug response. Successful validation of such predictors depends on the level of noise in both the pharmacological and genomic data. The groundbreaking release of the Genomics of Drug Sensitivity in Cancer [1] (GDSC) and Cancer Cell Line Encyclopedia [2] (CCLE) datasets enables the assessment of pharmacogenomic data consistency, a necessary requirement for developing robust drug sensitivity predictors. Below we briefly describe the fundamental analytical differences between our initial comparative study [3] and the recent assessment of pharmacogenomic agreement published by the GDSC and CCLE investigators [4].

Which pharmacological drug response data should one use?

The first GDSC and CCLE studies were published in 2012, and the investigators of both studies have continued to generate data and release them publicly. One would imagine that any comparative study would use the most current versions of the data. However, the authors of the reanalysis used old releases of the GDSC (July 2012) and CCLE (February 2012) pharmacological data, resulting in the use of outdated IC50 values and missing approximately 400 new drug sensitivity measurements for the 15 drugs screened in both GDSC and CCLE. Assessing data that are three years old and that have been replaced by the very same authors with more recent data seems a substantial missed opportunity. It raises the question as to whether the current data would be considered to be in agreement, and which data should be used for further analysis.

Comparison of drug sensitivity predictors

Given the complexity and high dimensionality of pharmacogenomic data, the development of drug sensitivity predictors is prone to overfitting and requires careful validation. In this context, one would expect the most significant predictors derived in GDSC to accurately predict drug response in CCLE, and vice versa. This will be the case if both studies independently produce consistent measures of both genomic profiles and drug response for each cell line. In our comparative study [3], we directly compared the same measurements generated independently in both studies, taking into account the noise in both the genomic and pharmacological data (Figure 1a). By investigating the authors’ code and methods, we identified key shortcomings in their analysis protocol, which have contributed to the authors’ assertion of consistency between drug sensitivity predictors derived from GDSC and CCLE.
Figure 1.

Analysis designs used to compare pharmacogenomic studies.

(a) Analysis design used in our comparative study (Haibe-Kains et al., Nature 2013), where the data generated by GDSC and CCLE are compared independently to avoid information leakage and a biased assessment of consistency. (b) Analysis design used by the GDSC and CCLE investigators for their ANOVA analysis, where the mutation data generated by GDSC were duplicated for use in the CCLE study. (c) Analysis design for the ElasticNet analysis, where the molecular profiles from CCLE were duplicated in the GDSC study and the GDSC IC50 values were compared to CCLE AUC data. Differences between our analysis design and those used by the GDSC and CCLE investigators are indicated by yellow warning signs with an exclamation mark.

For their ANOVA analyses, the authors used drug activity area (1−AUC) values independently generated in GDSC and CCLE, but used the same GDSC mutation data across the two datasets (Figure 1b; see Methods). By using the same mutation calls for both GDSC and CCLE, the authors disregarded the noise in the molecular profiles while creating an information leak between the two studies. For their ElasticNet analysis, the authors followed a similar design, reusing the CCLE genomic data across the two datasets but comparing different drug sensitivity measures: IC50 in GDSC vs. AUC in CCLE (Figure 1c; see Methods). We are puzzled by these seemingly arbitrary analytical design choices, which raise the question as to whether the use of different genomic data and drug sensitivity measures would yield the same level of agreement. Moreover, by ignoring the (inevitable) noise and biological variation in the genomic data, the authors’ analyses are likely to yield over-optimistic estimates of data consistency, as opposed to our more stringent analysis design [3].
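The effect of reusing one study's genomic data can be illustrated with a small simulation. This is a sketch, not the authors' actual analysis: two hypothetical studies re-measure the same cell lines with independent noise, per-gene drug associations are computed in each, and cross-study agreement of those associations is compared when each study uses its own genomic measurements versus when study 1's measurements are reused for both. All numbers below are invented for illustration.

```python
import math, random

rng = random.Random(0)

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

n_cells, n_genes = 60, 150
# latent "true" genomic profiles and a sparse gene-drug effect
G = [[rng.gauss(0, 1) for _ in range(n_genes)] for _ in range(n_cells)]
beta = [1.0] * 10 + [0.0] * (n_genes - 10)
y_true = [sum(g * b for g, b in zip(row, beta)) for row in G]

def noisy(mat, sd):
    """Independent measurement noise added per study."""
    return [[v + rng.gauss(0, sd) for v in row] for row in mat]

X1, X2 = noisy(G, 1.0), noisy(G, 1.0)            # each study assays the genome
y1 = [v + rng.gauss(0, 2.0) for v in y_true]     # each study assays response
y2 = [v + rng.gauss(0, 2.0) for v in y_true]

def associations(X, y):
    """Per-gene univariate gene-drug association (correlation)."""
    return [pearson([row[j] for row in X], y) for j in range(n_genes)]

a1 = associations(X1, y1)
a2_indep = associations(X2, y2)   # honest design: both data types independent
a2_leaky = associations(X1, y2)   # leaky design: study 1's genomics reused

print("agreement, independent design:", round(pearson(a1, a2_indep), 2))
print("agreement, leaky design:      ", round(pearson(a1, a2_leaky), 2))
```

Because the leaky design shares the same realization of genomic measurement noise between the two association vectors, its apparent agreement is systematically inflated relative to the honest design.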

What constitutes agreement?

In examining correlation, there is no universally accepted standard for what constitutes agreement. However, the FDA/MAQC consortium guidelines define good correlation for inter-laboratory reproducibility [5–8] to be ≥0.8. The authors of the present study used two measures of correlation, the Pearson correlation (ρ) and Cohen’s kappa (κ) coefficients, but never clearly defined a priori thresholds for consistency, instead referring to ρ>0.5 as “reasonable consistency” in their discussion. Of the 15 drugs that were compared, their analysis found only two (13%) with ρ>0.6 for AUC and three (20%) above that threshold for IC50. This raises the question of whether ρ~0.5–0.6 for one third of the compared drugs should be considered “good agreement.” If one applies the FDA/MAQC criterion, only one drug (nilotinib) passes the threshold for consistency. Similarly, the authors referred to the results of their new Waterfall analysis as reflective of “high consistency,” even though only 40% of drugs had κ≥0.4, with five drugs yielding moderate agreement and only one drug (lapatinib) yielding substantial agreement according to the accepted standards [9]. Based on these results, the authors concluded that 67% of the evaluable compounds showed reasonable pharmacological agreement, which is misleading as only 8/15 (53%) and 6/15 (40%) drugs yielded ρ>0.5 for IC50 and AUC, respectively. Taking the union of consistency tests is bad practice; adding more sensitivity measures (even at random) would ultimately bring the union to 100% without providing objective evidence of actual data agreement.
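For reference, Cohen’s κ and its conventional verbal labels are straightforward to compute. The sketch below uses invented binary sensitivity calls (not GDSC or CCLE data) and the Landis–Koch labels referred to in [9]; the category boundaries are the usual approximate ones.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two sequences of binary calls
    (e.g. sensitive = 1 / insensitive = 0 from two studies)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    p1a, p1b = sum(a) / n, sum(b) / n
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)                # chance agreement
    return (po - pe) / (1 - pe)

def landis_koch(kappa):
    """Conventional verbal label for a kappa value (boundaries approximate)."""
    for cut, label in [(0.8, "almost perfect"), (0.6, "substantial"),
                       (0.4, "moderate"), (0.2, "fair"), (0.0, "slight")]:
        if kappa > cut:
            return label
    return "poor"

# hypothetical sensitivity calls for one drug in two studies
gdsc = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
ccle = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
k = cohens_kappa(gdsc, ccle)
print("kappa = %.2f (%s)" % (k, landis_koch(k)))  # → kappa = 0.58 (moderate)
```

Note that 80% raw agreement reduces to κ≈0.58 once chance agreement is discounted, which is why raw concordance alone overstates consistency.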

Consistency in pharmacological data

The authors acknowledged that the consistency of the pharmacological data is not perfect due to methodological differences between the protocols used by CCLE and GDSC, further stating that standardization will certainly improve correlation metrics. To test this important assertion, the authors could have analyzed the replicated experiments performed by the GDSC, which used identical protocols to screen camptothecin and AZD6482 against the same panel of cell lines at the Wellcome Trust Sanger Institute and the Massachusetts General Hospital. Our re-analyses [3, 10] of the sensitivity data for these drugs found correlations between GDSC sites on par with the correlations observed between GDSC and CCLE (ρ=0.57 and 0.39 for camptothecin and AZD6482, respectively; Figure 2a,b). These results suggest that the intrinsic technical and biological noise of pharmacological assays is likely to play a major role in the lack of reproducibility observed in high-throughput pharmacogenomic studies, which cannot be attributed solely to the use of different experimental protocols.
Figure 2.

Consistency of sensitivity profiles between replicated experiments across GDSC sites.

(a) Camptothecin and (b) AZD6482. PCC: Pearson correlation coefficient; MGH: Massachusetts General Hospital (Boston, MA, USA); WTSI: Wellcome Trust Sanger Institute (Hinxton, UK).


Consistency in genomic data

In their comparative study, the authors did not assess the consistency of the genomic data between GDSC and CCLE [4]. In our re-analysis, the consistency of gene copy number and expression data was significantly higher than that of the drug sensitivity data (one-sided Wilcoxon rank-sum test p-value=3×10⁻⁵; Figure 3), while mutation data exhibited poor consistency, as reported previously [11]. The very high consistency of the copy number data is quite remarkable (Figure 3a) and could be partly attributed to the fact that the CCLE investigators used their SNP array data to compare cell line fingerprints with those of the GDSC project prior to publication and removed the discordant cases from their dataset [2].
Figure 3.

Consistency of molecular profiles between GDSC and CCLE.

(a) Continuous values for gene copy number ratio (CNV), gene expression (EXPRESSION), AUC and IC50, and (b) binary values for presence/absence of mutations (MUTATION) and insensitive/sensitive calls based on AUC ≥ 0.2 and IC50 > 1 μM. PCC: Pearson correlation coefficient; Kappa: Cohen's kappa coefficient.


Conclusions

We agree with the authors that their and our observations “[…] raise important questions for the field about how best to perform comparisons of large-scale data sets, evaluate the robustness of such studies, and interpret their analytical outputs.” We believe that a principled approach using objective measures of consistency and an appropriate analysis strategy for assessing the independent datasets is essential. An investigation of both the methods described in the manuscript and the software code used by the authors to perform their analysis [4] identified fundamental differences in analysis design compared to our previous published study [3]. By taking into account variations in both the pharmacological and genomic data, our assessment of pharmacogenomic agreement is more stringent and closer to the translation of drug sensitivity predictors in preclinical and clinical settings, where zero-noise genomic information cannot be expected. Our stringent re-analysis of the most updated data from the GDSC and CCLE confirms our 2013 finding that the measures of drug response reported by these two groups are not consistent and have not improved substantially as the groups have continued generating data since 2012 [10]. While the authors make arguments suggesting consistency, it is difficult to imagine using these post hoc methods to drive discovery or precision medicine applications. The observed inconsistency between early microarray gene expression studies served as a rallying cry for the field, leading to an improvement and standardization of experimental and analytical protocols, resulting in the agreement we see between studies published today. We are looking forward to the establishment of new standards for large-scale pharmacogenomic studies to realize the full potential of these valuable data for precision medicine.

Methods

The authors’ software source code. By the authors’ source code, we refer to the ‘CCLE.GDSC.compare’ (version 1.0.4 from December 18, 2015) and DRANOVA (version 1.0 from October 21, 2014) R packages available from http://www.broadinstitute.org/ccle/Rpackage/.

Pharmacogenomic data

As evidenced in the authors' code (lines 20 and 29 of CCLE.GDSC.compare::PreprocessData.R), they used the GDSC and CCLE pharmacological data released in July 2012 and February 2012, respectively. However, the GDSC released an updated set of pharmacological data (release 5) in June 2014, and gene expression arrays (E-MTAB-3610) and SNP arrays (EGAD00001001039) in July 2015. CCLE released updated pharmacological data in February 2015, the mutation and SNP array data in October 2012, and the gene expression data in March 2013. These updates substantially increased the overlap in genomic features between the two studies, thus providing new opportunities to investigate the consistency between GDSC and CCLE [10].

ANOVA analysis

In the authors’ ANOVA analyses, identical mutation data were used for both GDSC and CCLE studies as can be seen in the authors’ analysis code in lines 20, 25–35 of CCLE.GDSC.compare::plotFig2A_biomarkers.R.

ElasticNet (EN) analysis

In their EN analyses, the authors compared different drug sensitivity measures, using IC50 in GDSC and AUC in CCLE, as described in their Supplementary Data 5 and stated in the Methods section of their published study: “Since the IC50 is not reported in CCLE when it exceeds the tested range of 8 μM, we used the activity area for the regression as in the original CCLE publication. We also used the values considered to be the best in the original GDSC study: the interpolated log(IC50) values.” This was confirmed by inspecting the authors’ analysis code, lines 83 and 102 of CCLE.GDSC.compare::ENcode/prepData.R. Moreover, identical genomic data were used for both the GDSC and CCLE studies, as described in the Methods section of the published study: “In order to compare features between the two studies, we used the same genomic data set (CCLE).” This was confirmed by inspecting the authors’ analysis code, lines 17, 38, 51, and 70 of CCLE.GDSC.compare::ENcode/genomic.data.R, and lines 10–11 of CCLE.GDSC.compare::plotFigS6_ENFeatureVsExpected.R.
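Why mixing IC50 and AUC matters can be seen with a toy Hill-curve example. The parameters below are hypothetical and the simple one-site model is a sketch, not the curve-fitting used by either study: the two summaries weight potency and maximal effect differently, so they need not even rank two cell lines the same way.

```python
# doses roughly spanning a typical screening range (µM), 2-fold dilutions
doses = [0.008 * 2 ** k for k in range(11)]   # 0.008 ... ~8 µM

def inhibition(d, emax, ec50):
    """Unit-slope Hill curve: fraction of cells killed at dose d."""
    return emax * d / (d + ec50)

def ic50(emax, ec50):
    """Dose giving 50% inhibition (slope 1); only defined if emax > 0.5."""
    return ec50 * 0.5 / (emax - 0.5)

def auc(emax, ec50):
    """Mean inhibition across the tested doses (a discrete 'activity area')."""
    return sum(inhibition(d, emax, ec50) for d in doses) / len(doses)

# cell line A: partial responder, very potent; B: full responder, less potent
A = dict(emax=0.55, ec50=0.01)
B = dict(emax=1.00, ec50=0.20)

print("IC50: A=%.2f  B=%.2f (A looks more sensitive)" % (ic50(**A), ic50(**B)))
print("AUC:  A=%.2f  B=%.2f (B looks more sensitive)" % (auc(**A), auc(**B)))
```

Here IC50 ranks A as the more sensitive line while AUC ranks B higher, so correlating one study's IC50 against the other's AUC conflates a genuine consistency question with a difference of summary statistic.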

Statistical analysis

All analyses were performed using the most updated version of the GDSC and CCLE pharmacogenomic data based on our PharmacoGx package [12] (version 1.1.4).

Research replicability

PharmacoGx [12] (version 1.1.4) provides intuitive functions to download, intersect and compare large pharmacogenomic datasets. The PharmacoSet objects for the GDSC and CCLE datasets are available from pmgenomics.ca/bhklab/sites/default/files/downloads/ via the downloadPSet() function. The code and data used to generate all the results and figures are available as Data Files 1 and 2. The code is also available on GitHub: github.com/bhklab/cdrug-rebuttal.

The Waterfall approach

In their Methods, the authors use all cell lines to optimally identify the inflection point in the response distribution curves. The authors stated: “This is a major difference to the Haibe-Kains et al. analysis, as that analysis only considered the cell-lines in common between the studies when generating response distribution curves.” This is not correct. As can be seen in our publicly available R code, we performed the sensitivity calling (using the Waterfall approach as published in the CCLE study [2]) before restricting our analysis to the common cell lines, for the obvious reasons that the authors mentioned in their manuscript. See lines 308 and 424 in https://github.com/bhklab/cdrug/blob/master/CDRUG_format.R.
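One common reading of the Waterfall rule (sort the responses, take the point farthest from the chord joining the two extremes as the inflection) can be sketched as follows. The response values are invented, and this is an illustration of the idea rather than the exact published procedure; it also shows why the cutoff depends on which cell lines enter the distribution, which is precisely why the order of cutoff calling and cell-line intersection matters.

```python
import math

def waterfall_cutoff(values):
    """Pick a sensitivity cutoff from a 'waterfall' plot: sort the drug
    responses, then take the point farthest from the straight line joining
    the two extremes as the inflection (one common reading of the rule)."""
    ys = sorted(values, reverse=True)
    n = len(ys)
    x0, y0, x1, y1 = 0, ys[0], n - 1, ys[-1]
    norm = math.hypot(x1 - x0, y1 - y0)

    def dist(i):
        """Perpendicular distance of point (i, ys[i]) from the chord."""
        return abs((y1 - y0) * i - (x1 - x0) * (ys[i] - y0)) / norm

    k = max(range(n), key=dist)
    return ys[k]

# hypothetical AUC-like responses: a flat insensitive tail plus a few responders
responses = [0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.12, 0.45, 0.60, 0.80]
cut = waterfall_cutoff(responses)
sensitive = [r for r in responses if r >= cut]
print("cutoff:", cut, "sensitive lines:", len(sensitive))
# → cutoff: 0.12 sensitive lines: 4
```

Dropping or adding cell lines changes the sorted curve and hence the inflection, so a cutoff derived on all screened lines and one derived on the common subset can differ.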

Data and software availability

Open Science Framework: Dataset: Assessment of pharmacogenomic agreement, doi: 10.17605/osf.io/47rfh [13]

Open Peer Review

Referee Report 1

The paper highlights the curious lack of rigorous standards for what constitutes ‘agreement’ or ‘consistency’ between genomic studies, and more generally the fundamental issues of ‘validation’ and ‘reproducibility’. The problem is even more serious for results based on high-throughput omics data, as the potential for false positives is substantial. The persistent lack of consensus or standards may partly indicate that these issues are not so straightforward. The main problem is that when we say we ‘validate’ a result, this can be done at different strengths. For example, consider the commonly performed ‘cross-validation’ in statistical analyses, where we split our total sample into training and validation sets. If the split is done randomly, then we have only a ‘soft validation’, since it applies to the same sample (or same lab, same population, same measurement method, etc.), so the ‘validation’ is internal and corresponds to statistical significance only. In contrast, a scientist may wish for something stronger, namely an external validation: for example, for the ‘biological truth’ to apply to other populations; thus, one study may be performed in a European population, but the external validation is done in an Asian population. The latter is a stronger validation than the random-split validation, giving a more compelling and general biological story. What is relevant here is that both validations are commonly done in practice, and both are valid, but they carry different levels of information. What matters in practice is that the implication of the validation should always be clear (or clarified), so that the user of the information can judge its relevance.

The key point of Safikhani et al. is that their 2013 validation study of the genomic predictors of drug sensitivity was more stringent than the 2015 validation studies by the GDSC and CCLE investigators. This is clearly highlighted in Figure 1, where the latter used the same molecular data, so the ‘validation’ covers only the pharmacological data and perhaps (it is not clear to me) the method of analysis. Which level of validation is more relevant here? Let us imagine how the results (e.g., the genomic predictors) are to be used in patients. The molecular data are likely to be generated and analyzed in a diversity of labs, so the genomic predictors should really be robust to the actual heterogeneity in the molecular data. The results (the genomic predictors) may not survive such stringent requirements, but that is what we need to know. So, overall, I agree with Safikhani et al. that a more stringent validation allowing for variability in both molecular and pharmacological data is more relevant in this context of drug prediction. (However, reading Haibe-Kains et al., there seemed to be an emphasis that the failure of agreement was due to the high variability in the pharmacological data, so it is possible that the later studies by the GDSC and CCLE investigators responded to this concern only.) Regarding specific issues in the paper: I do not consider the use of the most recent data a key issue. I agree that the choice of IC50 in GDSC vs. AUC in CCLE is puzzling and only raises a question mark over the results. Arbitrary cutoffs in defining what constitutes ‘agreement’ are unnecessary if authors can refrain from using judgmental words like ‘high consistency’, especially as a summary statement across distinct drugs. It would be better to just report the actual performance for each drug or for each cancer type, since it is still not clear how these statistics would translate in terms of clinical cost-benefit balance.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Referee Report 2

I found the title appropriate, and the abstract a suitable summary of the work. I believe that the design, methods and analysis of results are appropriate for the topic being studied, and that for the most part they were clearly explained. A couple of perceived shortcomings are itemized here. p.3, column 2, line 2: the “but” would be better replaced by “and”. p.5, Figure 2: the dotted and solid diagonal lines on these plots are not identified in either the caption or the text. p.5, Figure 3: it is nowhere explained whose Pearson correlations (PCC) are summarized in these box plots. I suppose that some number (to be stated) of cell lines were profiled in both GDSC and CCLE, and that in all cases the PCC in the box plots are calculated from molecular data on pairs consisting of the same cell line profiled in GDSC and in CCLE. A clear statement along these lines would be helpful. p.6, column 1, lines 1-4: this assertion would have more force if the authors told the reader how many cell lines could have contributed PCC to the box plot of Figure 3a, and how many did do so. Further, I do believe that the conclusions are sensible, balanced and justified on the basis of the results of the study. Finally, I understand that all the data used in this study are available, as is the code used to generate all the results and figures.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Referee Report 3

It is a lot to digest this manuscript, so I break the story into three parts. In 2012, both GDSC and CCLE released drug sensitivity data (both pharmacological and genomic). In 2013, the authors compared the two studies using the drugs in common between the two; their analysis was carried out in a direct fashion that accounts for variation in both the genomic and pharmacological data from the same site (GDSC or CCLE), and found that the results of the two did not agree. Recently, the GDSC and CCLE investigators carried out an independent analysis and reported that the agreement between the two is actually higher (using ANOVA) than what the authors reported, concluding that the results of GDSC and CCLE were consistent. However, that comparison was focused only on the pharmacological data, because the genomic data actually came from one site; that means their analysis did not include the noise introduced by both sites. The authors, in the present paper, reanalyzed the data including pharmacological and genomic data from both sites, and the conclusions remain the same as reported in 2013. I have no problem with their analysis and support their conclusions. With that said, I found the paper could flow better by moving two sections into a Discussion. These are: “Which pharmacological drug response data should one use?”, since it seems odd that GDSC/CCLE used the data published in 2012 and ignored the most current data, which could be due to many different reasons, so such speculation is not necessarily “results” and would be better placed in a discussion; and “What constitutes agreement?”, since this is a difficult call and I believe there is no single baseline that can be used to justify consistency, so most of the text in this section would sit better in a discussion. Overall, I support its indexation with revision focusing on the flow of the story and the structure of the manuscript.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
References (11 in total)

1.  Multi-platform assessment of transcriptome profiling using RNA-seq in the ABRF next-generation sequencing study.

Authors:  Sheng Li; Scott W Tighe; Charles M Nicolet; Deborah Grove; Shawn Levy; William Farmerie; Agnes Viale; Chris Wright; Peter A Schweitzer; Yuan Gao; Dewey Kim; Joe Boland; Belynda Hicks; Ryan Kim; Sagar Chhangawala; Nadereh Jafari; Nalini Raghavachari; Jorge Gandara; Natàlia Garcia-Reyero; Cynthia Hendrickson; David Roberson; Jeffrey Rosenfeld; Todd Smith; Jason G Underwood; May Wang; Paul Zumbo; Don A Baldwin; George S Grills; Christopher E Mason
Journal:  Nat Biotechnol       Date:  2014-08-24       Impact factor: 54.908

2.  The kappa statistic in reliability studies: use, interpretation, and sample size requirements.

Authors:  Julius Sim; Chris C Wright
Journal:  Phys Ther       Date:  2005-03

3.  The MicroArray Quality Control (MAQC) project shows inter- and intraplatform reproducibility of gene expression measurements.

Authors:  Leming Shi; Laura H Reid; Wendell D Jones; Richard Shippy; Janet A Warrington; Shawn C Baker; Patrick J Collins; Francoise de Longueville; Ernest S Kawasaki; Kathleen Y Lee; Yuling Luo; Yongming Andrew Sun; James C Willey; Robert A Setterquist; Gavin M Fischer; Weida Tong; Yvonne P Dragan; David J Dix; Felix W Frueh; Frederico M Goodsaid; Damir Herman; Roderick V Jensen; Charles D Johnson; Edward K Lobenhofer; Raj K Puri; Uwe Schrf; Jean Thierry-Mieg; Charles Wang; Mike Wilson; Paul K Wolber; Lu Zhang; Shashi Amur; Wenjun Bao; Catalin C Barbacioru; Anne Bergstrom Lucas; Vincent Bertholet; Cecilie Boysen; Bud Bromley; Donna Brown; Alan Brunner; Roger Canales; Xiaoxi Megan Cao; Thomas A Cebula; James J Chen; Jing Cheng; Tzu-Ming Chu; Eugene Chudin; John Corson; J Christopher Corton; Lisa J Croner; Christopher Davies; Timothy S Davison; Glenda Delenstarr; Xutao Deng; David Dorris; Aron C Eklund; Xiao-hui Fan; Hong Fang; Stephanie Fulmer-Smentek; James C Fuscoe; Kathryn Gallagher; Weigong Ge; Lei Guo; Xu Guo; Janet Hager; Paul K Haje; Jing Han; Tao Han; Heather C Harbottle; Stephen C Harris; Eli Hatchwell; Craig A Hauser; Susan Hester; Huixiao Hong; Patrick Hurban; Scott A Jackson; Hanlee Ji; Charles R Knight; Winston P Kuo; J Eugene LeClerc; Shawn Levy; Quan-Zhen Li; Chunmei Liu; Ying Liu; Michael J Lombardi; Yunqing Ma; Scott R Magnuson; Botoul Maqsodi; Tim McDaniel; Nan Mei; Ola Myklebost; Baitang Ning; Natalia Novoradovskaya; Michael S Orr; Terry W Osborn; Adam Papallo; Tucker A Patterson; Roger G Perkins; Elizabeth H Peters; Ron Peterson; Kenneth L Philips; P Scott Pine; Lajos Pusztai; Feng Qian; Hongzu Ren; Mitch Rosen; Barry A Rosenzweig; Raymond R Samaha; Mark Schena; Gary P Schroth; Svetlana Shchegrova; Dave D Smith; Frank Staedtler; Zhenqiang Su; Hongmei Sun; Zoltan Szallasi; Zivana Tezak; Danielle Thierry-Mieg; Karol L Thompson; Irina Tikhonova; Yaron Turpaz; Beena Vallanat; Christophe Van; Stephen J Walker; Sue Jane Wang; Yonghong Wang; Russ 
Wolfinger; Alex Wong; Jie Wu; Chunlin Xiao; Qian Xie; Jun Xu; Wen Yang; Liang Zhang; Sheng Zhong; Yaping Zong; William Slikker
Journal:  Nat Biotechnol       Date:  2006-09       Impact factor: 54.908

4.  The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models.

Authors:  Leming Shi; Gregory Campbell; Wendell D Jones; Fabien Campagne; Zhining Wen; Stephen J Walker; Zhenqiang Su; Tzu-Ming Chu; Federico M Goodsaid; Lajos Pusztai; John D Shaughnessy; André Oberthuer; Russell S Thomas; Richard S Paules; Mark Fielden; Bart Barlogie; Weijie Chen; Pan Du; Matthias Fischer; Cesare Furlanello; Brandon D Gallas; Xijin Ge; Dalila B Megherbi; W Fraser Symmans; May D Wang; John Zhang; Hans Bitter; Benedikt Brors; Pierre R Bushel; Max Bylesjo; Minjun Chen; Jie Cheng; Jing Cheng; Jeff Chou; Timothy S Davison; Mauro Delorenzi; Youping Deng; Viswanath Devanarayan; David J Dix; Joaquin Dopazo; Kevin C Dorff; Fathi Elloumi; Jianqing Fan; Shicai Fan; Xiaohui Fan; Hong Fang; Nina Gonzaludo; Kenneth R Hess; Huixiao Hong; Jun Huan; Rafael A Irizarry; Richard Judson; Dilafruz Juraeva; Samir Lababidi; Christophe G Lambert; Li Li; Yanen Li; Zhen Li; Simon M Lin; Guozhen Liu; Edward K Lobenhofer; Jun Luo; Wen Luo; Matthew N McCall; Yuri Nikolsky; Gene A Pennello; Roger G Perkins; Reena Philip; Vlad Popovici; Nathan D Price; Feng Qian; Andreas Scherer; Tieliu Shi; Weiwei Shi; Jaeyun Sung; Danielle Thierry-Mieg; Jean Thierry-Mieg; Venkata Thodima; Johan Trygg; Lakshmi Vishnuvajjala; Sue Jane Wang; Jianping Wu; Yichao Wu; Qian Xie; Waleed A Yousef; Liang Zhang; Xuegong Zhang; Sheng Zhong; Yiming Zhou; Sheng Zhu; Dhivya Arasappan; Wenjun Bao; Anne Bergstrom Lucas; Frank Berthold; Richard J Brennan; Andreas Buness; Jennifer G Catalano; Chang Chang; Rong Chen; Yiyu Cheng; Jian Cui; Wendy Czika; Francesca Demichelis; Xutao Deng; Damir Dosymbekov; Roland Eils; Yang Feng; Jennifer Fostel; Stephanie Fulmer-Smentek; James C Fuscoe; Laurent Gatto; Weigong Ge; Darlene R Goldstein; Li Guo; Donald N Halbert; Jing Han; Stephen C Harris; Christos Hatzis; Damir Herman; Jianping Huang; Roderick V Jensen; Rui Jiang; Charles D Johnson; Giuseppe Jurman; Yvonne Kahlert; Sadik A Khuder; Matthias Kohl; Jianying Li; Li Li; Menglong Li; Quan-Zhen Li; Shao Li; Zhiguang Li; Jie 
Liu; Ying Liu; Zhichao Liu; Lu Meng; Manuel Madera; Francisco Martinez-Murillo; Ignacio Medina; Joseph Meehan; Kelci Miclaus; Richard A Moffitt; David Montaner; Piali Mukherjee; George J Mulligan; Padraic Neville; Tatiana Nikolskaya; Baitang Ning; Grier P Page; Joel Parker; R Mitchell Parry; Xuejun Peng; Ron L Peterson; John H Phan; Brian Quanz; Yi Ren; Samantha Riccadonna; Alan H Roter; Frank W Samuelson; Martin M Schumacher; Joseph D Shambaugh; Qiang Shi; Richard Shippy; Shengzhu Si; Aaron Smalter; Christos Sotiriou; Mat Soukup; Frank Staedtler; Guido Steiner; Todd H Stokes; Qinglan Sun; Pei-Yi Tan; Rong Tang; Zivana Tezak; Brett Thorn; Marina Tsyganova; Yaron Turpaz; Silvia C Vega; Roberto Visintainer; Juergen von Frese; Charles Wang; Eric Wang; Junwei Wang; Wei Wang; Frank Westermann; James C Willey; Matthew Woods; Shujian Wu; Nianqing Xiao; Joshua Xu; Lei Xu; Lun Yang; Xiao Zeng; Jialu Zhang; Li Zhang; Min Zhang; Chen Zhao; Raj K Puri; Uwe Scherf; Weida Tong; Russell D Wolfinger
Journal:  Nat Biotechnol       Date:  2010-07-30       Impact factor: 54.908

5.  PharmacoGx: an R package for analysis of large pharmacogenomic datasets.

Authors:  Petr Smirnov; Zhaleh Safikhani; Nehme El-Hachem; Dong Wang; Adrian She; Catharina Olsen; Mark Freeman; Heather Selby; Deena M A Gendoo; Patrick Grossmann; Andrew H Beck; Hugo J W L Aerts; Mathieu Lupien; Anna Goldenberg; Benjamin Haibe-Kains
Journal:  Bioinformatics       Date:  2015-12-09       Impact factor: 6.937

6.  Inconsistency in large pharmacogenomic studies.

Authors:  Benjamin Haibe-Kains; Nehme El-Hachem; Nicolai Juul Birkbak; Andrew C Jin; Andrew H Beck; Hugo J W L Aerts; John Quackenbush
Journal:  Nature       Date:  2013-11-27       Impact factor: 49.962

7.  Pharmacogenomic agreement between two cancer cell line data sets.

Authors:  The Cancer Cell Line Encyclopedia Consortium; The Genomics of Drug Sensitivity in Cancer Consortium
Journal:  Nature       Date:  2015-11-16       Impact factor: 49.962

8.  The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity.

Authors:  Jordi Barretina; Giordano Caponigro; Nicolas Stransky; Kavitha Venkatesan; Adam A Margolin; Sungjoon Kim; Christopher J Wilson; Joseph Lehár; Gregory V Kryukov; Dmitriy Sonkin; Anupama Reddy; Manway Liu; Lauren Murray; Michael F Berger; John E Monahan; Paula Morais; Jodi Meltzer; Adam Korejwa; Judit Jané-Valbuena; Felipa A Mapa; Joseph Thibault; Eva Bric-Furlong; Pichai Raman; Aaron Shipway; Ingo H Engels; Jill Cheng; Guoying K Yu; Jianjun Yu; Peter Aspesi; Melanie de Silva; Kalpana Jagtap; Michael D Jones; Li Wang; Charles Hatton; Emanuele Palescandolo; Supriya Gupta; Scott Mahan; Carrie Sougnez; Robert C Onofrio; Ted Liefeld; Laura MacConaill; Wendy Winckler; Michael Reich; Nanxin Li; Jill P Mesirov; Stacey B Gabriel; Gad Getz; Kristin Ardlie; Vivien Chan; Vic E Myer; Barbara L Weber; Jeff Porter; Markus Warmuth; Peter Finan; Jennifer L Harris; Matthew Meyerson; Todd R Golub; Michael P Morrissey; William R Sellers; Robert Schlegel; Levi A Garraway
Journal:  Nature       Date:  2012-03-28       Impact factor: 49.962

9.  Systematic identification of genomic markers of drug sensitivity in cancer cells.

Authors:  Mathew J Garnett; Elena J Edelman; Sonja J Heidorn; Chris D Greenman; Anahita Dastur; King Wai Lau; Patricia Greninger; I Richard Thompson; Xi Luo; Jorge Soares; Qingsong Liu; Francesco Iorio; Didier Surdez; Li Chen; Randy J Milano; Graham R Bignell; Ah T Tam; Helen Davies; Jesse A Stevenson; Syd Barthorpe; Stephen R Lutz; Fiona Kogera; Karl Lawrence; Anne McLaren-Douglas; Xeni Mitropoulos; Tatiana Mironenko; Helen Thi; Laura Richardson; Wenjun Zhou; Frances Jewitt; Tinghu Zhang; Patrick O'Brien; Jessica L Boisvert; Stacey Price; Wooyoung Hur; Wanjuan Yang; Xianming Deng; Adam Butler; Hwan Geun Choi; Jae Won Chang; Jose Baselga; Ivan Stamenkovic; Jeffrey A Engelman; Sreenath V Sharma; Olivier Delattre; Julio Saez-Rodriguez; Nathanael S Gray; Jeffrey Settleman; P Andrew Futreal; Daniel A Haber; Michael R Stratton; Sridhar Ramaswamy; Ultan McDermott; Cyril H Benes
Journal:  Nature       Date:  2012-03-28       Impact factor: 49.962

10.  A comprehensive assessment of RNA-seq accuracy, reproducibility and information content by the Sequencing Quality Control Consortium.

Authors:  SEQC/MAQC-III Consortium
Journal:  Nat Biotechnol       Date:  2014-08-24       Impact factor: 54.908

Cited by:  18 in total

1.  Computational Analyses Connect Small-Molecule Sensitivity to Cellular Features Using Large Panels of Cancer Cell Lines.

Authors:  Matthew G Rees; Brinton Seashore-Ludlow; Paul A Clemons
Journal:  Methods Mol Biol       Date:  2019

2.  (Review) A review of connectivity map and computational approaches in pharmacogenomics.

Authors:  Aliyu Musa; Laleh Soltan Ghoraie; Shu-Dong Zhang; Galina Glazko; Olli Yli-Harja; Matthias Dehmer; Benjamin Haibe-Kains; Frank Emmert-Streib
Journal:  Brief Bioinform       Date:  2018-05-01       Impact factor: 11.622

3.  (Review) Predictive approaches for drug combination discovery in cancer.

Authors:  Seyed Ali Madani Tonekaboni; Laleh Soltan Ghoraie; Venkata Satya Kumar Manem; Benjamin Haibe-Kains
Journal:  Brief Bioinform       Date:  2018-03-01       Impact factor: 11.622

4.  Measuring Cancer Drug Sensitivity and Resistance in Cultured Cells.

Authors:  Mario Niepel; Marc Hafner; Mirra Chung; Peter K Sorger
Journal:  Curr Protoc Chem Biol       Date:  2017-06-19

5.  Evaluating the consistency of large-scale pharmacogenomic studies.

Authors:  Raziur Rahman; Saugato Rahman Dhruba; Kevin Matlock; Carlos De-Niz; Souparno Ghosh; Ranadip Pal
Journal:  Brief Bioinform       Date:  2019-09-27       Impact factor: 11.622

6.  Revisiting inconsistency in large pharmacogenomic studies.

Authors:  Zhaleh Safikhani; Petr Smirnov; Mark Freeman; Nehme El-Hachem; Adrian She; Rene Quevedo; Anna Goldenberg; Nicolai J Birkbak; Christos Hatzis; Leming Shi; Andrew H Beck; Hugo J W L Aerts; John Quackenbush; Benjamin Haibe-Kains
Journal:  F1000Res       Date:  2016-09-16

7.  Disruption of the anaphase-promoting complex confers resistance to TTK inhibitors in triple-negative breast cancer.

Authors:  K L Thu; J Silvester; M J Elliott; W Ba-Alawi; M H Duncan; A C Elia; A S Mer; P Smirnov; Z Safikhani; B Haibe-Kains; T W Mak; D W Cescon
Journal:  Proc Natl Acad Sci U S A       Date:  2018-01-29       Impact factor: 11.205

8.  CellMiner Cross-Database (CellMinerCDB) version 1.2: Exploration of patient-derived cancer cell line pharmacogenomics.

Authors:  Augustin Luna; Fathi Elloumi; Sudhir Varma; Yanghsin Wang; Vinodh N Rajapakse; Mirit I Aladjem; Jacques Robert; Chris Sander; Yves Pommier; William C Reinhold
Journal:  Nucleic Acids Res       Date:  2021-01-08       Impact factor: 16.971

9.  Tissue specificity of in vitro drug sensitivity.

Authors:  Fupan Yao; Seyed Ali Madani Tonekaboni; Zhaleh Safikhani; Petr Smirnov; Nehme El-Hachem; Mark Freeman; Venkata Satya Kumar Manem; Benjamin Haibe-Kains
Journal:  J Am Med Inform Assoc       Date:  2018-02-01       Impact factor: 4.497

10.  Dose-response modeling in high-throughput cancer drug screenings: an end-to-end approach.

Authors:  Wesley Tansey; Kathy Li; Haoran Zhang; Scott W Linderman; Raul Rabadan; David M Blei; Chris H Wiggins
Journal:  Biostatistics       Date:  2022-04-13       Impact factor: 5.279

