
The rise and fall of machine learning methods in biomedical research.

Hashem Koohy

Abstract

In an era of exploding biological data, machine learning techniques are becoming increasingly popular in the life sciences, including biology and medicine. This research note examines the rise and fall of the most commonly used machine learning techniques in the life sciences over the past three decades.


Keywords:  deep neural network; hierarchical clustering; linear regression; machine learning; principal component; random forest; support vector machine; t-SNE

Year:  2017        PMID: 29375816      PMCID: PMC5760972          DOI: 10.12688/f1000research.13016.2

Source DB:  PubMed          Journal:  F1000Res        ISSN: 2046-1402


Introduction

Over the past three decades, biological data have grown dramatically in both size and complexity. The major contributors to the growth in size of computational biology data include, but are not limited to, the ability of biologists to sequence complex genomes such as the human genome (1990–2003) ( Lander et al., 2001), the advent of new high-throughput sequencing techniques (around 2008) ( Marx, 2013), and most recently the very rapid advances in single-cell technologies, introduced in 2009 ( Wang & Navin, 2015). The complexity of biological data has been growing even faster, and does not seem to depend linearly on the size of the data. Examples of complexity in the field of computational genomics include multiple diverse sources of technical noise, low signal-to-noise ratios, low numbers of biological replicates in comparative approaches, rare and usually hardly detectable mutations in non-coding regions, and rare and barely identifiable cell types in complex heterogeneous systems such as the immune system and the brain. At the intersection of mathematics, statistics and computer science sits machine learning (ML), the de facto toolbox in data science for deciphering the relationship between inputs and outputs, as well as for detecting significant patterns within large, complex data sets. These quantitative approaches have been shown to be effective and are becoming increasingly popular in addressing challenges such as those outlined above. Highlights of their successful applications in functional genomics include, but are not limited to, learning and characterizing chromatin states with unsupervised approaches such as chromHMM ( Ernst & Kellis, 2012), predicting the sequence specificities of DNA- and RNA-binding proteins with convolutional neural networks such as DeepBind ( Alipanahi et al., 2015), and employing a combination of supervised and unsupervised approaches to determine the genetic and epigenetic contributors to antibody repertoire diversity ( Bolland et al.).
Nowadays it is almost impossible to publish a study on single-cell assays without using dimensionality reduction methods such as Principal Component Analysis or t-SNE. One indirect measure of the success of these techniques in extracting scientific insights from biological data is the popularity and usage of machine learning algorithms in life sciences research over time ( Jensen & Bateman, 2011). Motivated by Jensen et al., I therefore set out to provide an update on machine learning usage in the life sciences. For this, I quantified what fraction of published papers in the PubMed database mention a particular technique and how these numbers change from year to year (see Methods).

Methods

For this analysis, I used the R RISmed package ( Kovalchik, 2015) to parse the publication data from NCBI. I examined publications in PubMed from 1990 to 2017 using a metric that measures the proportion of publications per year that mention a technique in the full text (Hits Per Year per Million articles published, or HPYM). The Popularity Rate (PR) of a technique was then defined as the difference between its HPYMs in two consecutive years: a positive PR shows an increase in popularity, whereas a negative PR reflects a decrease. I limited this note to the 12 models listed in Table 1, which have either been the most common or showed a sharp change in popularity rate at a particular time. However, the R code is available, with which the popularity of any particular model over a specific period of time can easily be measured.
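The two metrics above are simple arithmetic, and the sketch below illustrates them. Note that this is an illustrative Python sketch with hypothetical counts, not the author's R/RISmed code; the function names and numbers are invented for demonstration only.

```python
# Illustrative sketch (hypothetical data): the HPYM and PR metrics
# defined in the Methods section.

def hpym(hits_per_year, totals_per_year):
    """Hits Per Year per Million: the fraction of that year's articles
    mentioning the technique, scaled to one million articles."""
    return {year: hits_per_year[year] / totals_per_year[year] * 1_000_000
            for year in hits_per_year}

def popularity_rate(hpym_by_year):
    """PR: the difference between HPYM values of consecutive years.
    Positive PR = rising popularity, negative PR = falling popularity."""
    years = sorted(hpym_by_year)
    return {y2: hpym_by_year[y2] - hpym_by_year[y1]
            for y1, y2 in zip(years, years[1:])}

# Hypothetical counts: mentions of one technique vs. all PubMed articles.
hits   = {2014: 120, 2015: 180, 2016: 300}
totals = {2014: 1_100_000, 2015: 1_150_000, 2016: 1_200_000}

h  = hpym(hits, totals)      # e.g. 2014 -> ~109.1 hits per million
pr = popularity_rate(h)      # both years positive: rising popularity
```

The normalization by total articles per year matters: raw mention counts grow simply because PubMed itself grows, whereas HPYM isolates the technique's share of the literature.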
Table 1.

Common Machine Learning Techniques in Life Sciences.

This table shows the 12 machine learning techniques whose popularity in the life sciences has been investigated in this study. Technical note: supervised means that the model requires training data to learn its parameters; a supervised model is used to predict future instances. An unsupervised model does not require any training data and is used to detect patterns within a dataset. Dimensionality reduction models are used to project high-dimensional datasets into a lower-dimensional space where the new variables are more interpretable.

Technique                                     Abbreviation  Category
Random Forest                                 RF            Supervised
Support Vector Machine                        SVM           Supervised
Artificial Neural Network                     ANN           Supervised
Deep Neural Network                           DNN           Supervised & Unsupervised
Principal Component Analysis                  PCA           Dimensionality Reduction
Linear Regression                             LR            Supervised
Markov Model                                  MM            Unsupervised
Decision Tree                                 DT            Supervised
Hierarchical Clustering                       HC            Unsupervised
t-Distributed Stochastic Neighbour Embedding  t-SNE         Dimensionality Reduction
Logistic Regression Model                     LogReg        Supervised
Naïve Bayes Classifier                        NBC           Supervised


Results

This analysis demonstrates that the overall popularity of machine learning methods in biomedical research increased linearly from 1990 to 2017, but with two different slopes. From 1990 to 2000 the slope is 0.02, meaning that popularity increased by only 2% per year. In 2001 (when sequencing large genomes became possible) the slope increased to 0.06, and it has remained constant since then. At most 1.2% of all papers published in PubMed in any calendar year mention one of the machine learning methods investigated in this study ( Figure 1). I was expecting to see higher usage of ML in the life sciences, but without a gold-standard set to compare against, I cannot judge whether this figure is too high, too low, or about right.
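The two-segment trend described above can be sketched as follows. This is an illustrative Python sketch fitted to synthetic values (the real data accompany the article); `ols_slope` and the series are invented for demonstration.

```python
# Sketch of the two-slope fit: ordinary least-squares slopes for the
# 1990-2000 and 2001-2017 segments, on a synthetic "percent of papers"
# series built to mimic the reported trend.

def ols_slope(xs, ys):
    """Slope of the ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic series: ~0.02/year growth before 2001, ~0.06/year after.
years  = list(range(1990, 2018))
values = [0.2 + 0.02 * (y - 1990) if y <= 2000
          else 0.4 + 0.06 * (y - 2000) for y in years]

early = ols_slope([y for y in years if y <= 2000],
                  [v for y, v in zip(years, values) if y <= 2000])
late  = ols_slope([y for y in years if y >= 2001],
                  [v for y, v in zip(years, values) if y >= 2001])
# early ~ 0.02 and late ~ 0.06: the post-2001 slope is three times steeper.
```

Fitting the two segments separately, rather than one line to the whole period, is what exposes the 2001 change-point in the growth rate.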
Figure 1.

Cumulative usage of all 12 machine-learning techniques used in this manuscript.

Two linear regression models have been fitted to these data. The first covers the years 1990 to 2000; the second, whose slope is three times steeper, covers 2001 to 2017. The y-axis shows the number of hits per 100 publications.

Linear Regression (LR) models have been the most dominant machine learning technique in the life sciences over the past three decades ( Figure 2A). It is interesting to see that LR models are still heavily used despite the recent appearance of sophisticated ML techniques such as ensemble-based approaches and Support Vector Machines, and even of very recent, state-of-the-art deep learning techniques. However, the popularity rate of LR has plateaued over the past few years ( Figure 3), meaning that its usage has increased linearly with a constant slope. With a constant increase of 300 HPYM, and considering its higher intercept in 1990, linear regression is predicted to remain one of the most popular techniques over the next few years.
Figure 2.

A: Trends of individual machine-learning techniques, shown as hits per million on the y-axis. B: As in A, but without the two most heavily used techniques, Linear Regression and Principal Component Analysis, to enhance clarity for the less commonly used techniques that are overshadowed by LR and PCA.

Figure 3.

An illustration of popularity rate of all 12 techniques used in this manuscript.

The PR is defined as the difference between the HPYMs of each two consecutive years for each model. These numbers have been further rescaled to vary only between -1 and 1.

Perhaps a very surprising observation of this study is the rise and fall of Principal Component Analysis (PCA). PCA became very fashionable from 2000 to 2013. In fact, 3329 per million papers published in 2013 mentioned PCA, its peak usage. Since then it has been used less, although it is still the second most popular tool ( Figure 2A). In the early 2000s, unsupervised Hierarchical Clustering, alongside the newly introduced supervised techniques Support Vector Machines (SVMs) and Random Forests (RFs), showed a sharp rise in usage, mainly associated with microarray data analysis. Usage of hierarchical clustering plateaued shortly after its sharp rise in popularity in 2000. SVMs kept their popularity longer, for almost a decade in fact, but subsequently dropped to an almost negligible popularity rate ( Figure 3). RFs, on the other hand, were less popular on their arrival, but later (after 2013) ranked second in popularity after Deep Neural Networks (DNNs) ( Figure 2A, 2B and Figure 3). During the period 1990–2017, Artificial Neural Networks (ANNs) have shown considerable fluctuations in popularity ( Figure 2B and Figure 3). In the early 1990s, ANNs were, after Linear Regression and PCA, the most commonly used technique, until the early 2000s, when they lost their popularity to MMs, HC and SVMs, and even later to RFs. However, since 2013, a sub-family of ANNs known as Deep Neural Networks (DNNs) has made its way into the life sciences, and their usage has since increased remarkably, so that DNNs currently have the highest popularity rate ( Figure 3). The dimensionality reduction technique t-distributed Stochastic Neighbour Embedding (t-SNE), published in 2008, has quickly been adapted to all sorts of single-cell techniques.
It is therefore not surprising to see that t-SNE usage has also been growing very rapidly over the past few years ( Figure 2B).

Discussion

I have illustrated the rise and fall of ML techniques in the life sciences from 1990 to the present day. I chose this period because I believe it is the transition period in which life scientists joined the big-data club. With the same R code used in this study to parse the publication data from NCBI, it would be possible to examine any period of time. It was not very surprising to see LR models as the most commonly used models in the field, since: a) LR models are among the oldest ML methods and have been in use in almost every field; b) parameters in LR models can be learned from training data with just a few samples; c) many other models can be placed under this umbrella, for instance by first applying a transformation function. It was, however, surprising to see the sharp rise and fall of PCA. Perhaps a contributing factor to PCA being the most dominant dimensionality reduction method available in this period was its easy-to-use implementation in R. The question remains as to why its popularity decreased from 2008 onwards. Perhaps the arrival of more versatile models such as RFs and SVMs, which are very capable of handling high dimensionality and dealing with co-linearity in biological data, eased the need for PCA. Additionally, t-SNE, a tremendously fast-growing dimensionality reduction model in the field, is establishing itself as a strong competitor to PCA. ANNs were fairly popular from the 1990s until around 2004, when more readily usable and less complex techniques became available, such as SVMs, RFs and MMs. However, with the huge investments of giant information companies such as Google leading to very impressive applications of DNNs and other sub-families of ANNs in various disciplines, DNNs currently have the sharpest popularity rate ( Figure 3). I appreciate that there are limitations to this study.
For instance, for the majority of comparative analyses of gene expression, researchers use differential expression software and/or packages, but cite only the package name and not the underlying statistical or ML technique used in the package. These cases have not been covered in this study. However, this study can be considered an approximation of the extent to which machine learning techniques are used in the life sciences. This note can be considered an update of a similar study by Jensen et al. ( Jensen & Bateman, 2011), in which the authors investigated the rise and fall of a number of supervised machine learning techniques in the life sciences. Here, I have gone beyond the abstracts and searched the full text of each paper, for the usage of both supervised and unsupervised ML techniques.

Data and software availability

Dataset 1: The text file contains the raw data underlying the results presented in this study, i.e. the number of publications in PubMed mentioning each machine learning technique from 1990–2017. These data are further normalized per million for downstream analysis. DOI: 10.5256/f1000research.13016.d184022 ( Koohy, 2017).

R code used to parse the publication data from NCBI is available at: https://github.com/hkoohy/Machine_Learning_in_Life_Sciences

Archived source code as at the time of publication: http://doi.org/10.5281/zenodo.1039642 ( hkoohy, 2017). License: GNU General Public License.


Open peer review

Referee report (second round)

I thank Dr. Koohy for addressing my raised issues adequately. I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Referee report (Konrad Forstner)

In the manuscript "The rise and fall of machine learning methods in biomedical research" the author has generated a quantitative perspective on the usage of machine learning methods in the life sciences. For some of the methods, a hypothesis about the underlying reason for an increase or decrease in popularity is discussed. The code for performing the analysis is available on GitHub and, like the retrieved PubMed data, has been deposited at Zenodo. I have several major objections / questions / suggestions for the author:

I tried to reproduce the analysis using RStudio 1.1.383 with the deposited RStudio project but got the following error when executing the R chunks in the file Machine_Learning_Trends.Rmd: "Error in library(informationRetrieval) : there is no package called ‘informationRetrieval’". The file informationRetrieval.R is located in another subfolder and I guess this just needs proper referencing inside of the project.

The author states that he has selected widely used machine learning methods in life sciences. I would have expected Naive Bayes classifiers in the list of most popular methods.
A simple PubMed search for '"naive bayes classifier" OR "naive bayesian classifier"' returns twice as many hits as for "deep neural networks" (but over a longer time span): https://www.ncbi.nlm.nih.gov/pubmed/?term=%22naive+bayes+classifier%22+OR+%22naive+bayesian+classifier%2 https://www.ncbi.nlm.nih.gov/pubmed?term=%22deep+neural+networks%22

Similar issue for logistic regression: the analysis in the provided file Machine_Learning_Trends.Rmd actually contains the counting of publications mentioning logistic regression, which shows a large (206,619 at the time of writing) and growing number of hits, but this method has not been discussed in the manuscript and is not displayed in the plots. https://www.ncbi.nlm.nih.gov/pubmed?term=%22logistic%20regression%22

The counting of hits for deep neural networks (DNN) is not done properly. Looking at the code to count the number of hits of the different search terms shows that the author uses "artificial neural networks", "deep neural networks" and "deep learning" as search terms for DNN (see code selection at the bottom of this section). I think using the search term "artificial neural network" for both ANN and DNN is not sound and changes the story of DNN (a special form of ANN) significantly. Either DNN is treated as a subset of ANN and only ANNs are plotted, or DNN and ANN are treated separately and the search term "artificial neural network" is not used for DNN. Furthermore, the search term "deep learning" results in numerous unrelated hits before 2010 (e.g. PMID: 8936230, 9165817, 9487168, 10463930): https://www.ncbi.nlm.nih.gov/pubmed/?term=%22deep+learning%22 (then click on the "Result by year" histogram).

The author tries to explain the underlying reasons for the gain or loss of certain ML methods. In Figure 1 one of the publications of the human genome is placed in the year 2000 without any citation.
The human draft genome was published in 2001 (International Human Genome Sequencing Consortium, Nature 409, 860–921, 2001, https://doi.org/10.1038/35057062) and it would be interesting to see what the author is referring to.

The Popularity Rate (PR) introduced here is not plotted anywhere directly but is the slope of the edges between the data points of two consecutive years. The author should consider visualizing this measurement of change as well.

The curve plotted in Fig 1A is nearly resembled by the LRM curve in Fig 1B. Is the observation in Fig 1A maybe only an observation of the dominating LRM method? I do not understand why Fig 1A can look nearly exactly like the LRM curve considering the other methods, e.g. the PCA curve.

Code selection regarding ANN and DNN:

```
ANN_hits <- get_normliazed_number_of_hits(years = YEARs, query = "artificial neural network[tw]",
                                          db = "pubmed", normalization_value = 1000000)
NN_term  <- "(artificial neural networks[tw] OR deep neural networks[tw] OR deep learning[tw])"
DNN_hits <- get_normliazed_number_of_hits(years = YEARs, query = NN_term,
                                          db = "pubmed", normalization_value = 1000000)
```

Minor issues:

Figure 1 style: the different lines are hard to distinguish by color only; maybe consider an additional discriminator (e.g. dashed lines for a subset). Next to Fig 1C is a lot of white space; placing the t-SNE subplot in a different location (e.g. the middle of Fig 1C) would make it possible to use this space more efficiently. Maybe think about rearranging the whole figure: Figure 1C is a subplot of Figure 1B, like the t-SNE plot is a subplot of Figure 1C.

"de facto" should be written in italic font.

The link to RISmed should use the link indicated at the page itself, which says "Please use the canonical form https://CRAN.R-project.org/package=RISmed to link to this page."
For Linear Regression Model sometimes "LRM" and sometimes "LR" is used in the manuscript.

In order to understand which biological approaches / questions are influencing the usage of the different ML methods, the association of those methods with certain MeSH terms would be interesting, either as part of this manuscript or a future one.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.


Author response

I thank Dr Konrad Forstner for his time in evaluating the manuscript and for his very detailed comments/suggestions, which I believe will immensely enhance the quality of the manuscript. In the following I address the issues raised by Konrad.
On the missing Naive Bayes classifiers and logistic regression: I had logistic regression models in my initial analysis and, for the same reason, took them out of the final submission, leaving it to the reader to test them if they wish. They have been added again.

On the issues in the code: I really apologize for this. I have corrected the code and changed the manuscript accordingly.

On the human genome publication being placed in the year 2000: I apologize again for the confusion. I was in fact referring to the 2001 IHGSC paper, as cited in the manuscript. I have changed the figure to make this clear.

On visualizing the Popularity Rate: a very valid point. I have restructured the manuscript and the figures; there is now a separate figure for this.

On Fig 1A resembling the LRM curve: a great observation. Both figures are very similar, though with different slopes and intercepts. To check whether the cumulative figure is dominated by LRM, in a separate task I filtered out LRM and remade the cumulative figure. Although in both the full and the filtered model we can see two different slopes (from 1990 to 2000, and from 2001 to 2017), not surprisingly, the full model fits better. I think what happens is that around the time PCA starts declining, we have an almost exponential increase from other models such as RF, SVM and, later on, DNN; these collectively dilute the effect of the PCA decline.
On the figure style: as suggested, I have restructured the manuscript and the figures. The manuscript now has three main figures, which are hopefully clearer than the previous version.

On the italicization of "de facto", the RISmed link, and the LRM/LR inconsistency: I have corrected these.

On associating the methods with MeSH terms: this is a very interesting point, though, as suggested, it is beyond this manuscript.


Referee report (Alex Bateman)

I should firstly point out that I was co-author on the 2011 editorial published in Bioinformatics titled "The rise and fall of supervised machine learning techniques" [1]. Therefore I was momentarily surprised to be invited to review a paper with such a similar title.
That editorial was only a page and a half long and only really scratched the surface of this interesting topic of the prevalence of machine learning use in the biosciences. The author cites our 2011 paper and mentions that the current article can be considered an update of it. However, that is only mentioned in the very final paragraph of the paper. It would seem reasonable to me to make that one of the first things mentioned in the paper. Of course, I am far from a neutral observer on this point.

Overall I think that the article presents sound and interesting science and should be published in F1000Research. I think it provides a timely update to the 2011 editorial and expands it with some nice extra details. The article increases the number of ML methods investigated from 5 to 10. Most notably, linear regression models are included, which top the league table.

I noticed an inconsistency in the data presented for ANNs in this new paper compared to the 2011 paper. Why is that? The numbers for ANN are considerably lower in this article. Is that because DNNs are split out from ANNs?

Throughout the paper it says that ANNs have become known as DNNs. That is not correct. DNNs are a subtype of ANNs: all DNNs are ANNs, but not all ANNs are DNNs. That needs correction throughout.

The following statement does not read well: "The sharp increase usage in popularity rate of DNNs over the past few years (Figure 1C) suggests that DNNs will take the PR lead again in the coming years." After multiple readings I would presume that "PR lead" means it has the highest popularity rate, i.e. DNNs would have more than 300 more mentions per million papers per year. Firstly, that sentence is very confusing for a reader. For the first two readings I thought you were saying that DNNs would take the lead from LRMs, which would seem unlikely. On third reading I thought you meant that the slope of DNNs would overtake LRMs, but clearly it has already done that.
I think you should rethink that sentence or take it out.

Minor points:
Page 2: "At he intersection" -> "At the intersection".
Page 2: You mention that a surprising maximum of 1.2% of all papers mention one of the 10 ML techniques. Why is that surprising? Is it too low, too high? Please explain.
Page 2: The NCBI database is mentioned. NCBI has a lot of databases; please specify which one.
Page 3: "less used less" -> "used less".

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.


Author response

I thank Dr. Alex Bateman for his time in evaluating the manuscript as well as for his very valuable comments. In fact, I was inspired by Alex’s commentary; I therefore apologize for not appropriately mentioning this earlier in the manuscript. I have made this change to the manuscript and I hope that it is clear enough now. As Alex suggested, I have made the distinction between ANNs and DNNs clear in the corresponding paragraph and changed the manuscript accordingly. I have also addressed the minor points. I hope the current version of the manuscript meets Alex’s standards and is consequently clearer for the readers.
References (8 in total)

1.  Initial sequencing and analysis of the human genome.

Authors:  E S Lander; L M Linton; B Birren; C Nusbaum; M C Zody; et al. (International Human Genome Sequencing Consortium)
Journal:  Nature       Date:  2001-02-15       Impact factor: 49.962

2.  ChromHMM: automating chromatin-state discovery and characterization.

Authors:  Jason Ernst; Manolis Kellis
Journal:  Nat Methods       Date:  2012-02-28       Impact factor: 28.547

3.  Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning.

Authors:  Babak Alipanahi; Andrew Delong; Matthew T Weirauch; Brendan J Frey
Journal:  Nat Biotechnol       Date:  2015-07-27       Impact factor: 54.908

4.  Biology: The big challenges of big data.

Authors:  Vivien Marx
Journal:  Nature       Date:  2013-06-13       Impact factor: 49.962

Review 5.  Advances and applications of single-cell sequencing technologies.

Authors:  Yong Wang; Nicholas E Navin
Journal:  Mol Cell       Date:  2015-05-21       Impact factor: 17.970

6.  The rise and fall of supervised machine learning techniques.

Authors:  Lars Juhl Jensen; Alex Bateman
Journal:  Bioinformatics       Date:  2011-11-17       Impact factor: 6.937

7.  Two Mutually Exclusive Local Chromatin States Drive Efficient V(D)J Recombination.

Authors:  Daniel J Bolland; Hashem Koohy; Andrew L Wood; Louise S Matheson; Felix Krueger; Michael J T Stubbington; Amanda Baizan-Edge; Peter Chovanec; Bryony A Stubbs; Kristina Tabbada; Simon R Andrews; Mikhail Spivakov; Anne E Corcoran
Journal:  Cell Rep       Date:  2016-06-02       Impact factor: 9.423

8.  The rise and fall of machine learning methods in biomedical research.

Authors:  Hashem Koohy
Journal:  F1000Res       Date:  2017-11-10
  8 in total

Cited by

1.  Machine Learning for Data-Driven Discovery: The Rise and Relevance.

Authors:  Partho P Sengupta; Sirish Shrestha
Journal:  JACC Cardiovasc Imaging       Date:  2018-12-12

2.  Using DICOM Metadata for Radiological Image Series Categorization: a Feasibility Study on Large Clinical Brain MRI Datasets.

Authors:  Romane Gauriau; Christopher Bridge; Lina Chen; Felipe Kitamura; Neil A Tenenholtz; John E Kirsch; Katherine P Andriole; Mark H Michalski; Bernardo C Bizzo
Journal:  J Digit Imaging       Date:  2020-06       Impact factor: 4.056

Review 3.  Artificial Intelligence in Cardiovascular Medicine.

Authors:  Karthik Seetharam; Sirish Shrestha; Partho P Sengupta
Journal:  Curr Treat Options Cardiovasc Med       Date:  2019-05-14

4.  Artificial Intelligence in Pharmacoepidemiology: A Systematic Review. Part 1-Overview of Knowledge Discovery Techniques in Artificial Intelligence.

Authors:  Maurizio Sessa; Abdul Rauf Khan; David Liang; Morten Andersen; Murat Kulahci
Journal:  Front Pharmacol       Date:  2020-07-16       Impact factor: 5.810

Review 5.  A Structure-Based Drug Discovery Paradigm.

Authors:  Maria Batool; Bilal Ahmad; Sangdun Choi
Journal:  Int J Mol Sci       Date:  2019-06-06       Impact factor: 5.923

6.  The rise and fall of machine learning methods in biomedical research.

Authors:  Hashem Koohy
Journal:  F1000Res       Date:  2017-11-10

7.  The ability to classify patients based on gene-expression data varies by algorithm and performance metric.

Authors:  Stephen R Piccolo; Avery Mecham; Nathan P Golightly; Jérémie L Johnson; Dustin B Miller
Journal:  PLoS Comput Biol       Date:  2022-03-11       Impact factor: 4.475

  7 in total
