Literature DB >> 26535114

The khmer software package: enabling efficient nucleotide sequence analysis.

Michael R Crusoe1, Hussien F Alameldin2, Sherine Awad3, Elmar Boucher4, Adam Caldwell5, Reed Cartwright6, Amanda Charbonneau7, Bede Constantinides8, Greg Edvenson9, Scott Fay10, Jacob Fenton11, Thomas Fenzl12, Jordan Fish11, Leonor Garcia-Gutierrez13, Phillip Garland14, Jonathan Gluck15, Iván González16, Sarah Guermond17, Jiarong Guo18, Aditi Gupta1, Joshua R Herr1, Adina Howe19, Alex Hyer20, Andreas Härpfer21, Luiz Irber11, Rhys Kidd22, David Lin23, Justin Lippi24, Tamer Mansour25, Pamela McA'Nulty26, Eric McDonald11, Jessica Mizzi27, Kevin D Murray28, Joshua R Nahum29, Kaben Nanlohy30, Alexander Johan Nederbragt31, Humberto Ortiz-Zuazaga32, Jeramia Ory33, Jason Pell11, Charles Pepe-Ranney34, Zachary N Russ35, Erich Schwarz36, Camille Scott11, Josiah Seaman37, Scott Sievert38, Jared Simpson39, Connor T Skennerton40, James Spencer41, Ramakrishnan Srinivasan42, Daniel Standage43, James A Stapleton44, Susan R Steinman45, Joe Stein46, Benjamin Taylor11, Will Trimble47, Heather L Wiencko48, Michael Wright11, Brian Wyss11, Qingpeng Zhang11, En Zyme49, C Titus Brown50.   

Abstract

The khmer package is a freely available software library for working efficiently with fixed-length DNA words, or k-mers. khmer provides implementations of a probabilistic k-mer counting data structure, a compressible De Bruijn graph representation, De Bruijn graph partitioning, and digital normalization. khmer is implemented in C++ and Python, and is freely available under the BSD license at https://github.com/dib-lab/khmer/.

Keywords:  bioinformatics; dna sequencing analysis; k-mer; khmer; kmer; low-memory; online; streaming

Year:  2015        PMID: 26535114      PMCID: PMC4608353          DOI: 10.12688/f1000research.6924.1

Source DB:  PubMed          Journal:  F1000Res        ISSN: 2046-1402


Introduction

DNA words of a fixed length k, or “k-mers”, are a common abstraction in DNA sequence analysis that enables alignment-free sequence analysis and comparison. With the advent of second-generation sequencing and the widespread adoption of De Bruijn graph-based assemblers, k-mers have become even more widely used in recent years. However, the dramatically increased rate of sequence data generation from Illumina sequencers continues to challenge the basic data structures and algorithms for k-mer storage and manipulation. This has led to the development of a wide range of data structures and algorithms that explore possible improvements to k-mer-based approaches.

Here we present version 2.0 of the khmer software package, a high-performance library implementing memory- and time-efficient algorithms for the manipulation and analysis of short-read data sets. khmer contains reference implementations of several approaches, including a probabilistic k-mer counter based on the Count-Min Sketch [1], a compressible De Bruijn graph representation built on top of Bloom filters [2], a streaming lossy compression approach for short-read data sets termed “digital normalization” [3], and a generalized semi-streaming approach for k-mer spectral analysis of variable-coverage shotgun sequencing data sets [4].

khmer is both research software and a software product for users: it has been used in the development of novel data structures and algorithms, and it is also immediately useful for certain kinds of data analysis (discussed below). We continue to develop research extensions while maintaining existing functionality.

The khmer software consists of a core library implemented in C++, a CPython library wrapper implemented in C, and a set of Python “driver” scripts that use the library to perform various sequence analysis tasks. The software is currently developed on GitHub at https://github.com/dib-lab/khmer, and it is released under the BSD License.
There is greater than 87% statement coverage under automated tests, measured on both C++ and Python code but primarily executed at the Python level.

Methods

Implementation

The core k-mer counting data structures and graph traversal code are implemented in C++ and then wrapped for Python in hand-written C code, for a total of 10.5k lines of C/C++ code. The command-line API and all of the tests are written in 13.7k lines of Python code. The C++ FASTQ and FASTA parsers come from the SeqAn library [5]. Documentation is written in reStructuredText, compiled with Sphinx, and hosted on ReadTheDocs.org. We develop khmer on GitHub as a community open source project focused on sustainable software development [6], and encourage contributions of any kind. As an outcome of several community events, we have comprehensive documentation on contributing to khmer at https://khmer.readthedocs.org/en/latest/dev/ [7]. Most development decisions are discussed and documented publicly as they happen.

Operation

khmer is primarily developed on Linux for Python 2.7 and 64-bit processors, and several core developers use Mac OS X. The project is tested regularly using the Jenkins continuous integration system running on Ubuntu 14.04 LTS and Mac OS X 10.10; the current development branch is also tested under Python 3.3, 3.4, and 3.5. Releases are tested against many Linux distributions, including Red Hat Enterprise Linux, Debian, Fedora, and Ubuntu. khmer should work on most UNIX derivatives with little modification. Windows is explicitly not supported.

Memory requirements for using khmer vary with the complexity of the data and are user configurable. Several core data structures can trade memory for false positives, and we have explored these details in several papers, most notably Pell et al. 2012 [2] and Zhang et al. 2014 [1]. For example, most single-organism mRNAseq data sets can be processed in under 16 GB of RAM [3, 8], while memory requirements for metagenome data sets may vary from dozens of gigabytes to terabytes of RAM.

The user interface for khmer is the command line. The command-line interface consists of approximately 25 Python scripts; they are documented at http://khmer.readthedocs.org/ under User Documentation. Changes to the interface are managed with semantic versioning [9], which guarantees command-line compatibility between releases with the same major version. khmer also has an unstable developer interface via its Python and C++ libraries, on which the command-line scripts are built.
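The memory/false-positive tradeoff can be made concrete with the textbook Bloom filter approximation. This is illustrative only: khmer's actual tables differ in detail (see [1, 2]), and the function name and example figures below are invented for this sketch.

```python
import math

def bloom_false_positive_rate(mem_bytes, n_kmers, n_hashes):
    """Classic Bloom filter false-positive estimate: (1 - e^(-h*n/m))^h,
    where m is the number of bits, n the number of distinct k-mers stored,
    and h the number of hash functions."""
    m_bits = mem_bytes * 8
    return (1.0 - math.exp(-n_hashes * n_kmers / m_bits)) ** n_hashes

# Hypothetical example: a 16 GB table holding 10 billion distinct k-mers
# with 4 hash functions gives roughly a 0.4% false-positive rate under
# this model; doubling the memory drops it well below 0.1%.
rate = bloom_false_positive_rate(16 * 2**30, 10_000_000_000, 4)
```

As the papers cited above discuss, a modest false-positive rate is often acceptable for these workloads, which is what allows the large memory savings.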

Use cases

khmer has several complementary feature sets, all centered on short-read manipulation and filtering. The most common use of khmer is for preprocessing short read Illumina data sets prior to de novo sequence assembly, with the goals of decreasing compute requirements for the assembler as well as potentially improving the assembly results.

Prefiltering sequence data for de novo assembly with digital normalization

We provide an implementation of a novel streaming “lossy compression” algorithm in khmer that performs abundance normalization of shotgun sequence data. This “digital normalization” algorithm eliminates redundant short reads while retaining sufficient information to generate a contig assembly [3]. The algorithm takes advantage of the online k-mer counting functionality in khmer to estimate per-read coverage as reads are examined; reads can then be accepted as novel or rejected as redundant. This is a form of error reduction, because the net effect is to decrease not only the total number of reads considered for assembly, but also the total number of errors considered by the assembler. Digital normalization decreases the amount of memory needed for de novo assembly of high-coverage data sets with little to no change in the assembled contigs.

Digital normalization is implemented in the script normalize-by-median.py. This script takes as input a list of FASTA or FASTQ files, which it then filters by abundance as described above; see [3] for details. The output of the digital normalization script is a downsampled set of reads, with no modifications to the individual reads.

The three key parameters for the script are the k-mer size, the desired coverage level, and the amount of memory to be used for k-mer counting. The interaction between these three parameters and the filtering process is complex and depends on the data set being processed, but higher coverage levels and longer k-mer sizes result in less data being removed. Lower memory allocation increases the rate at which reads are removed due to erroneous estimates of their abundance, but this process is very robust in practice [1]. The output of normalize-by-median.py can be assembled using a de novo assembler such as Velvet [10], IDBA [11], Trinity [12] or SPAdes [13].

K-mer counting and read trimming

Using a memory-efficient Count-Min Sketch data structure, khmer provides an interface for online counting of k-mers in streams of reads. The basic functionality includes calculating the k-mer frequency spectrum in sequence data sets and trimming reads at low-abundance k-mers. This functionality is explored and benchmarked in [1].

Basic read trimming is performed by the script filter-abund.py, which takes as arguments a k-mer countgraph (created by khmer’s load-into-counting.py script) and one or more sequence data files. The script examines each sequence to find k-mers below the given abundance cutoff, and truncates the sequence at the first such k-mer. This truncates reads at the location of substitution errors produced by the sequencing process. When processing sequences from variable-coverage data sets, filter-abund.py can also be configured to ignore reads that have low estimated abundance.

K-mer abundance distributions can be calculated using the script abundance-dist.py, which takes as arguments a k-mer countgraph, a sequence data file, and an output filename. This script determines the abundance of each distinct k-mer in the data file according to the k-mer countgraph, and summarizes the abundances in a histogram output.

We recently extended digital normalization to provide a generalized semi-streaming approach for k-mer spectral analysis [4]. Here, we examine read coverage on a per-locus basis in the De Bruijn graph and, once a particular locus has sufficient coverage, call errors or trim bases for all following reads belonging to that graph locus. The approach is “semi-streaming” [4] because some reads must be examined twice. This semi-streaming approach enables few-pass analysis of high-coverage data sets. Moreover, it makes it possible to apply k-mer spectral analysis to data sets with uneven coverage such as metagenomes, transcriptomes, and whole-genome amplified samples.
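The truncation rule used by filter-abund.py can be illustrated with a short sketch. The function below is hypothetical: it uses a plain dict of k-mer counts in place of a khmer countgraph, and the exact truncation coordinate shown (keep everything before the final base of the first low-abundance k-mer) is one plausible convention, not necessarily khmer's.

```python
def trim_at_low_abundance(read, counts, k=20, cutoff=2):
    """Truncate a read just before the first k-mer whose abundance falls
    below the cutoff; a substitution error typically creates a run of
    such low-abundance k-mers."""
    for i in range(len(read) - k + 1):
        if counts.get(read[i:i + k], 0) < cutoff:
            return read[:i + k - 1]  # drop the suspect base and the rest
    return read  # every k-mer is solidly supported
```

Run over a whole data set, this removes most error-containing suffixes while leaving well-covered reads untouched.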
Because our core data structure sizes are preallocated based on estimates of the unique k-mer content of the data, we also provide fast and low-memory k-mer cardinality estimation via the script unique-kmers.py. This script uses the HyperLogLog algorithm to provide a probabilistic estimate of the number of unique k-mers in a data set with a guaranteed upper bound [14]. A manuscript on this implementation is in progress (Irber and Brown, unpublished).
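To show the flavor of the cardinality-estimation approach, here is a minimal, self-contained HyperLogLog sketch in pure Python. It is not khmer's implementation (which is in C++ with further refinements); the register count, hash function, and small-range correction below are textbook choices, and the function name is invented for the example.

```python
import hashlib
import math

def hll_estimate(items, p=10):
    """Minimal HyperLogLog: hash each item to 64 bits, use the top p bits
    as a register index, and track the maximum 'rank' (leading-zero count
    plus one) of the remaining bits per register. Estimate cardinality as
    alpha * m^2 / sum(2**-M[j]) over the m = 2**p registers."""
    m = 1 << p
    registers = [0] * m
    for item in items:
        h = int.from_bytes(
            hashlib.blake2b(item.encode(), digest_size=8).digest(), "big")
        idx = h >> (64 - p)                      # top p bits -> register
        rest = h & ((1 << (64 - p)) - 1)
        rank = (64 - p) - rest.bit_length() + 1  # leading zeros + 1
        registers[idx] = max(registers[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)
    raw = alpha * m * m / sum(2.0 ** -r for r in registers)
    if raw <= 2.5 * m:                           # small-range correction
        zeros = registers.count(0)
        if zeros:
            return m * math.log(m / zeros)
    return raw
```

With 2**10 registers the relative error is roughly 1.04/sqrt(1024), about 3%, using only a kilobyte of register state regardless of how many k-mers are streamed through.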

Partitioning reads into disconnected assembly graphs

We have also built a De Bruijn graph representation on top of a Bloom filter, and implemented this in khmer. The primary use for this so far has been to enable memory efficient graph partitioning, in which reads contributing to disconnected subgraphs are placed into different files. This can lead to an approximately 20-fold decrease in the amount of memory needed for metagenome assembly [2], and may also separate reads into species-specific bins [15].
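The effect of partitioning can be illustrated independently of the Bloom-filter graph: reads that share at least one k-mer end up in the same connected component. The sketch below uses a union-find structure over exact k-mers, which is far more memory-hungry than khmer's graph traversal but produces the same kind of grouping; all names are invented for the example.

```python
def partition_reads(reads, k=20):
    """Group reads into disconnected components, where two reads are
    connected if they share any k-mer (a stand-in for sharing a path
    in the De Bruijn graph)."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    kmer_owner = {}  # index of the first read seen containing each k-mer
    for i, read in enumerate(reads):
        parent[("read", i)] = ("read", i)
        for j in range(len(read) - k + 1):
            km = read[j:j + k]
            if km in kmer_owner:
                union(("read", kmer_owner[km]), ("read", i))
            else:
                kmer_owner[km] = i

    groups = {}
    for i in range(len(reads)):
        groups.setdefault(find(("read", i)), []).append(i)
    return list(groups.values())
```

Each returned group corresponds to one disconnected subgraph, and in khmer each group's reads would be written to a separate file for independent assembly.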

Reformatting collections of short reads

In support of the streaming nature of this project, our preferred paired-read format is with pairs interleaved in a single file. As an extension of this, we automatically support a “broken-paired” read format where orphaned reads and pairs coexist in a single file. This enables single input/output streaming connections between tools, while leaving our tools compatible with fully paired read files as well as files containing only orphaned reads. For converting to and from this format, we supply the scripts extract-paired-reads.py, interleave-reads.py, and split-paired-reads.py to respectively extract fully paired reads from sequence files, interleave two files containing read pairs, and split an interleaved file into two files containing read pairs. In addition, we supply several utility scripts that we use in our own work. These include sample-reads-randomly.py for performing reservoir sampling of reads and readstats.py for summarizing sequence files.
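The interleaved-format conversions are simple enough to sketch directly. The helpers below operate on in-memory lists of records rather than files, and their names are invented; khmer's actual scripts stream FASTA/FASTQ from disk and additionally handle orphaned reads in the broken-paired format.

```python
def interleave(r1_records, r2_records):
    """Sketch of interleave-reads.py: merge two paired record streams
    into one, alternating R1 and R2."""
    out = []
    for a, b in zip(r1_records, r2_records):
        out.append(a)
        out.append(b)
    return out

def split_interleaved(records):
    """Sketch of split-paired-reads.py: the inverse operation, assuming
    every record is properly paired."""
    return records[0::2], records[1::2]
```

Keeping pairs adjacent in one stream is what allows a pipeline of tools to be connected with a single input and output per stage.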

Summary

The khmer project is an increasingly mature open source scientific software project that provides several efficient data structures and algorithms for analyzing short-read nucleotide sequencing data. khmer emphasizes online analysis, low memory data structures and streaming algorithms. khmer continues to be useful for both advancing bioinformatics research and analyzing biological data.

Software availability

Software available from

https://khmer.readthedocs.org/en/v2.0/

Link to source code

https://github.com/dib-lab/khmer/releases/tag/v2.0

Link to archived source code as at time of publication

http://dx.doi.org/10.5281/zenodo.31258 [16]

Software license

Copyright 2010–2015, Michigan State University. Copyright 2015, The Regents of the University of California.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of the Michigan State University nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Open peer review reports

Referee report 1

This paper describes version 2 of the khmer software suite. The software is developed to provide both a set of directly usable tools (e.g. normalize-by-median for digital normalization) and an experimental framework for developers looking to design new algorithms and methods. It has proven very useful on both of these fronts.
The repository is highly watched and starred on GitHub, the developers are very responsive (see more below), and both the senior author's group and other researchers seem to be leveraging this framework to build new tools and algorithms.

The paper itself does a good job of describing the software at a high level, including the overall design and goals. I would have appreciated slightly more detail about the motivation behind the design decisions and the tradeoffs they entail (e.g. why have a Python front end? Why use hand-written binding code rather than a binding generator, like SWIG, that would allow interfaces to other languages as well?). I understand that a comprehensive description is not feasible in a manuscript of this length. It would be very interesting to know, however, the cost paid for using the high-level interface rather than the C++ library directly. When the underlying computation is trivial, simply having to iterate over an enormous number of things in Python could add non-trivial overhead. Despite these desiderata, I find that the paper is generally well written and does a good job of describing what a new user might want to know about khmer, and so I approve of this manuscript.

Like Daniel, I also downloaded and built the software using the instructions provided in the ReadTheDocs documentation. The process was simple and worked well, with the exception of a minor glitch running the tests. After debugging the cause of the problem, I posted an issue to the GitHub repository and received a response in less than a day. I bring this up because, while not an aspect of the paper itself, good developer support is crucial to the long-term survival and utility of a software package, and khmer seems to have this.

This brings me to my final point, about the (currently) controversial authorship policy on this paper, which is ancillary to the quality of the paper (and software) itself.
At this point, I must reserve judgement on whether I think the authorship policy adopted by this paper is "good" or "bad" (for science, the community, etc.); incidentally, this is a dichotomy that does not capture the subtlety or importance of the issue well. In the manuscript, the authors state "We develop khmer on github.com as a community open source project focused on sustainable software development, and encourage contributions of any kind." Thus, contributions to khmer are of a potentially wide variety in character (and also, I believe, not simply related to improving or maintaining the code). Those who contribute to the design, improve the usability, work on documentation, support new and existing users, and develop and propagate best practices are all contributing something valuable to the khmer software "ecosystem". It is unreasonable to expect a piece of software that is ~25k lines of code (and growing) to be actively developed, maintained, and supported by only a small contingent of people, many of whom may be graduate students soon to graduate and move on. Thus, if we are interested in the long-term viability and quality of such software, we must adopt a system of credit that values and recognizes a variety of different types of contribution. On the other hand, I do share the concern that, under the current authorship system, bestowing that recognition in the form of authorship may have the adverse effect of diminishing the public perception of the very credit one is trying to grant. Perhaps there is a solution along the lines adopted by this paper, or perhaps something drastically different needs to be considered.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Referee report 2

Regarding the paper, it is a fairly straightforward description of a software package, containing all the things that such a paper should have: a description of the goals, the implemented methods, the hardware and software dependencies (systems on which the software has been tested), some guidance on usage, pointers to the software and documentation, and references.

Regarding the software, I did download and build it, which seemed to work, other than a fair number of warnings. I was not able to successfully test the software, however, due to issues in https://khmer.readthedocs.org/en/v2.0/user/install.html#run-the-tests. Does this mean I should not approve the article? Or should I ask the authors for help in understanding the error and hold off on submitting this report?

I would have liked to have chosen "Approved with reservations" for the status of this review, but my reservations are with the F1000 system for this type of paper, not with this specific paper. In fairness to the authors, given the lack of clarity about what I should be doing as a reviewer for a software paper, I approve this paper based on its quality as a good description of the software, and not on the quality of the software (and related documentation) itself.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

In addition to my report, regarding software papers in general under F1000, I believe that much more should be required from their reviewers, and what is required should be made clear. Software journals (e.g., Ubiquity Press's Journal of Open Research Software, Elsevier's SoftwareX) have specific statements of what a reviewer should do, which say a lot about the quality of the review. For JORS, this is defined on a web page (http://openresearchsoftware.metajnl.com/about/editorialPolicies/).
For SoftwareX, the criteria are not on the web (as far as I know) but are embedded in the review form/process, and are roughly equivalent.

Referee report 3

This is an update of a widely used tool, khmer, which is in broad use in the technical community working with de Bruijn graphs and short reads built on Bloom filters. It is a good update, provides a link to the code, and is sensibly written with tests. I have no concern about the scientific aspects of this paper.

I do find the author inclusion list takes a concept to the extreme, and I don't think it is sensible to have an anonymised author ("En Zyme") on the list, with in effect no way to attribute this contribution to a person. Science's openness in publication is also about attribution. Although I understand Titus' consistency in having all git committers as authors, I think it is sensible to make a distinction for substantial/scientific changes, which the vast majority of the authors have made. Acknowledgements are precisely there to handle the other cases. I believe it is uncontroversial to appropriately trim the author list, to use the acknowledgements for anonymous improvements (which happen regularly in science) and small details (again, a commonplace practice).

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, I have significant reservations, as outlined above.
References (9 in total)

1.  SPAdes: a new genome assembly algorithm and its applications to single-cell sequencing.

Authors:  Anton Bankevich; Sergey Nurk; Dmitry Antipov; Alexey A Gurevich; Mikhail Dvorkin; Alexander S Kulikov; Valery M Lesin; Sergey I Nikolenko; Son Pham; Andrey D Prjibelski; Alexey V Pyshkin; Alexander V Sirotkin; Nikolay Vyahhi; Glenn Tesler; Max A Alekseyev; Pavel A Pevzner
Journal:  J Comput Biol       Date:  2012-04-16       Impact factor: 1.479

2.  Velvet: algorithms for de novo short read assembly using de Bruijn graphs.

Authors:  Daniel R Zerbino; Ewan Birney
Journal:  Genome Res       Date:  2008-03-18       Impact factor: 9.043

3.  Scaling metagenome sequence assembly with probabilistic de Bruijn graphs.

Authors:  Jason Pell; Arend Hintze; Rosangela Canino-Koning; Adina Howe; James M Tiedje; C Titus Brown
Journal:  Proc Natl Acad Sci U S A       Date:  2012-07-30       Impact factor: 11.205

4.  Tackling soil diversity with the assembly of large, complex metagenomes.

Authors:  Adina Chuang Howe; Janet K Jansson; Stephanie A Malfatti; Susannah G Tringe; James M Tiedje; C Titus Brown
Journal:  Proc Natl Acad Sci U S A       Date:  2014-03-14       Impact factor: 11.205

5.  De novo transcript sequence reconstruction from RNA-seq using the Trinity platform for reference generation and analysis.

Authors:  Brian J Haas; Alexie Papanicolaou; Moran Yassour; Manfred Grabherr; Philip D Blood; Joshua Bowden; Matthew Brian Couger; David Eccles; Bo Li; Matthias Lieber; Matthew D MacManes; Michael Ott; Joshua Orvis; Nathalie Pochet; Francesco Strozzi; Nathan Weeks; Rick Westerman; Thomas William; Colin N Dewey; Robert Henschel; Richard D LeDuc; Nir Friedman; Aviv Regev
Journal:  Nat Protoc       Date:  2013-07-11       Impact factor: 13.491

6.  These are not the k-mers you are looking for: efficient online k-mer counting using a probabilistic data structure.

Authors:  Qingpeng Zhang; Jason Pell; Rosangela Canino-Koning; Adina Chuang Howe; C Titus Brown
Journal:  PLoS One       Date:  2014-07-25       Impact factor: 3.240

7.  Walking the Talk: Adopting and Adapting Sustainable Scientific Software Development processes in a Small Biology Lab.

Authors:  Michael R Crusoe; C Titus Brown
Journal:  J Open Res Softw       Date:  2016-11-29

8.  Channeling Community Contributions to Scientific Software: A Sprint Experience.

Authors:  Michael R Crusoe; C Titus Brown
Journal:  J Open Res Softw       Date:  2016-07-19

9.  SeqAn an efficient, generic C++ library for sequence analysis.

Authors:  Andreas Döring; David Weese; Tobias Rausch; Knut Reinert
Journal:  BMC Bioinformatics       Date:  2008-01-09       Impact factor: 3.169

