
SPRISS: Approximating Frequent K-mers by Sampling Reads, and Applications.

Diego Santoro, Leonardo Pellegrina, Matteo Comin, Fabio Vandin

Abstract

MOTIVATION: The extraction of k-mers is a fundamental component in many complex analyses of large next-generation sequencing datasets, including reads classification in genomics and the characterization of RNA-seq datasets. The extraction of all k-mers and their frequencies is extremely demanding in terms of running time and memory, owing to the size of the data and to the exponential number of k-mers to be considered. However, in several applications, only frequent k-mers, which are k-mers appearing in a relatively high proportion of the data, are required by the analysis.
RESULTS: In this work we present SPRISS, a new efficient algorithm to approximate frequent k-mers and their frequencies in next-generation sequencing data. SPRISS employs a simple yet powerful read sampling scheme, which allows it to extract a representative subset of the dataset that can be used, in combination with any k-mer counting algorithm, to perform downstream analyses in a fraction of the time required by the analysis of the whole data, while obtaining comparable answers. Our extensive experimental evaluation demonstrates the efficiency and accuracy of SPRISS in approximating frequent k-mers, and shows that it can be used in various scenarios, such as the comparison of metagenomic datasets, the identification of discriminative k-mers, and SNP (single nucleotide polymorphism) genotyping, to extract insights in a fraction of the time required by the analysis of the whole dataset.
AVAILABILITY: SPRISS is available at https://github.com/VandinLab/SPRISS.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
© The Author(s) 2022. Published by Oxford University Press.


Year:  2022        PMID: 35583271      PMCID: PMC9237683          DOI: 10.1093/bioinformatics/btac180

Source DB:  PubMed          Journal:  Bioinformatics        ISSN: 1367-4803            Impact factor:   6.931


1 Introduction

The study of substrings of length k, or k-mers, is a fundamental task in the analysis of large next-generation sequencing datasets. The extraction of k-mers, and of the frequencies with which they appear in a dataset of reads, is a crucial step in several applications, including the comparison of datasets and reads classification in metagenomics (Wood and Salzberg, 2014), the characterization of variation in RNA-seq data (Audoux et al.), the analysis of structural changes in genomes (Liu et al.; Li and Waterman, 2003), RNA-seq quantification (Patro et al.; Zhang and Wang, 2014), fast search-by-sequence over large high-throughput sequencing repositories (Solomon and Kingsford, 2016), genome comparison (Sims et al.) and error correction for genome assembly (Kelley et al.; Salmela et al.). k-mers and their frequencies can be obtained with a linear scan of a dataset. However, due to the massive size of modern datasets and the growth, exponential in k, of the number of distinct k-mers, the extraction of k-mers is an extremely computationally intensive task, both in terms of running time and memory (Elworth et al.), and several algorithms have been proposed to reduce the running time and memory requirements (see Section 1.2). Nonetheless, the extraction of all k-mers and their frequencies from a reads dataset is still highly demanding in terms of time and memory [e.g. KMC 3 (Kokot et al.), one of the currently best performing tools for k-mer counting, requires more than 2.5 hours, 34 GB of memory and 500 GB of disk space on a sequence of 729 Gbases (Kokot et al.), and, from our experiments, more than 30 minutes, 300 GB of memory and 97 GB of disk space for counting k-mers from the Mo17 dataset (using k = 31, 32 workers, and a maximum RAM of 350 GB; see Supplementary Table S2 for the size of Mo17)].
While some applications, such as error correction (Kelley et al.; Salmela et al.) or reads classification (Wood and Salzberg, 2014), require identifying all k-mers, even the ones that appear only once or a few times in a dataset, other analyses, such as the comparison of abundances in metagenomic datasets (Benoit et al.; Danovaro et al.; Dickson et al.; Pellegrina et al.) or the discovery of k-mers discriminating between two datasets (Liu et al.; Ounit et al.), hinge on the identification of frequent k-mers, which are k-mers appearing with a (relatively) high frequency in a dataset. For the latter analyses, tools capable of efficiently extracting only the frequent k-mers would be extremely beneficial and much more efficient than tools reporting all k-mers (given that a large fraction of k-mers appear with extremely low frequency). However, the efficient identification of frequent k-mers and their frequencies is still relatively unexplored (see Section 1.2). A natural approach to speed up the identification of frequent k-mers is to analyze only a sample of the data, since frequent k-mers appear with high probability in a sample, while infrequent k-mers appear with lower probability. A major challenge in sampling approaches is how to rigorously relate the results obtained by analyzing the sample to the results that would be obtained by analyzing the whole dataset. Tackling this challenge requires identifying a minimum sample size which guarantees that the results on the sample well represent the results that would be obtained on the whole dataset. An additional challenge in the use of sampling for the identification of frequent k-mers is that, for values of k of interest in modern applications, even the most frequent k-mers appear in a relatively low proportion of the data (e.g. 10^-7 to 10^-5). The net effect is that the application of standard sampling techniques to rigorously approximate frequent k-mers results in sample sizes larger than the initial dataset.

1.1 Our contributions

In this work, we study the problem of approximating frequent k-mers in a dataset of reads. In this regard, our contributions are:
- We propose SPRISS, a SamPling Reads algorIthm to eStimate frequent k-merS. SPRISS is based on a simple yet powerful read sampling approach, which makes it very flexible and suitable to be used in combination with any k-mer counter. In fact, the read sampling scheme of SPRISS returns a subset of a dataset of reads, which can be used to obtain representative results for down-stream analyses based on frequent k-mers.
- We prove that SPRISS provides rigorous guarantees on the quality of the approximation of the frequent k-mers. In this regard, our main technical contribution is the derivation of the sample size required by SPRISS, obtained through the study of the pseudodimension (Pollard, 1984), a key concept from statistical learning theory, of k-mers in reads.
- We show on several real datasets that SPRISS approximates frequent k-mers with high accuracy, while requiring a fraction of the time needed by approaches that analyze all k-mers in a dataset.
- We show the benefits of using the approximation of frequent k-mers obtained by SPRISS in three applications: the comparison of metagenomic datasets, the extraction of discriminative k-mers and SNP genotyping. In all these applications, SPRISS significantly speeds up the analysis, while providing the same insights obtained by the analysis of the whole data.

1.2 Related works

The problem of exactly counting k-mers in datasets has been extensively studied, with several methods proposed for its solution (Audano and Vannberg, 2014; Kokot et al.; Kurtz et al.; Marçais and Kingsford, 2011; Melsted and Pritchard, 2011; Pandey et al.; Rizk et al.; Roy et al.). Such methods are typically highly demanding in terms of time and memory when analyzing large high-throughput sequencing datasets (Elworth et al.). For this reason, many methods have recently been developed to compute approximations of the k-mer abundances in order to reduce the computational cost of the task (e.g. Chikhi and Medvedev, 2014; Melsted and Halldórsson, 2014; Mohamadi et al.; Pandey et al.; Sivadasan et al.; Zhang et al.). However, such methods do not provide guarantees on the accuracy of their approximations that are simultaneously valid for all (or the most frequent) k-mers. In recent years, other problems closely related to the task of counting k-mers have been studied, including how to efficiently index (Harris and Medvedev, 2020; Marchet et al.; Pandey et al.), represent (Almodaresi et al.; Chikhi et al.; Dadi et al.; Guo et al.; Holley and Melsted, 2020; Marchet et al.; Rahman and Medvedev, 2020), query (Bradley et al.; Marchet et al.; Solomon and Kingsford, 2016, 2018; Sun et al.; Yu et al.) and store (Hernaez et al.; Hosseini et al.; Numanagić et al.; Rahman et al.) the massive collections of sequences or of k-mers that are extracted from the data. See also Chikhi et al. for a unified presentation of methods to store and query a set of k-mers. A natural approach to reduce computational demands is to analyze a small sample instead of the entire dataset. To this end, methods that perform a downsampling of massive datasets have recently been proposed (Brown et al.; Coleman et al.; Wedemeyer et al.). These methods focus on discarding reads that are very similar to the reads already included in the sample, computing approximate similarity measures (e.g. the Jaccard similarity) as each read is considered. Such measures are designed to maximize the diversity of the content of the reads in the sample.
This approach is well suited for applications where rare k-mers are important, but it is less relevant for the analyses of interest to this work, where the most frequent k-mers carry the major part of the information. Furthermore, these methods have a heuristic nature, and do not provide guarantees on the accuracy of the analysis performed on the sample w.r.t. the analysis performed on the entire dataset. SAKEIMA (Pellegrina et al.) is the first sampling method that provides an approximation of the set of frequent k-mers (together with their estimated frequencies) with rigorous guarantees, based on counting only a subset of all occurrences of k-mers, chosen at random. SAKEIMA performs a full scan of the entire dataset, in a streaming fashion, and processes each k-mer occurrence according to the outcome of its random choices. SPRISS, the algorithm we present in this work, is instead the first sampling algorithm to approximate frequent k-mers (and their frequencies), with rigorous guarantees, by sampling reads from the dataset. In fact, SPRISS does not need to receive, and scan, the entire dataset in input; instead, it needs only a small sample of reads drawn from the dataset, a sample that may be obtained, for example, at the time of the physical creation of the whole dataset. While the sampling strategy of SAKEIMA could be analyzed using the concept of VC dimension (Vapnik, 1998), the reads-sampling strategy of SPRISS requires the more sophisticated concept of pseudodimension (Pollard, 1984) for its analysis. In this work, we consider the use of SPRISS to speed up the computation of the Bray-Curtis distance between metagenomic datasets, the identification of discriminative k-mers and the SNP genotyping process. Computational tools for these problems have been recently proposed (Benoit et al.; Saavedra et al.; Sun and Medvedev, 2018). These tools are based on exact k-mer counting strategies, and the approach we propose with SPRISS could be applied to such strategies as well.

2 Preliminaries

Let Σ be an alphabet of σ symbols. A dataset D is a bag of n reads, where a read r is a string over Σ. For a given integer k, a k-mer K is a string of length k over Σ, that is K ∈ Σ^k. Given a k-mer K, a read r of D and a position j in r, we define the indicator function φ_{r,j}(K) to be 1 if K appears in r at position j, and 0 otherwise. The size of the multiset of k-mers that appear in D is t_D = Σ_{r ∈ D} (|r| − k + 1). The average number of k-mers that appear in a read of D is t_D/n, while the maximum such number over the reads of D is denoted t_max. The support o_D(K) of k-mer K in dataset D is the number of distinct positions of D where K appears, that is o_D(K) = Σ_{r ∈ D} Σ_j φ_{r,j}(K). The frequency f_D(K) of a k-mer K in D is the fraction of all positions in D where K appears, that is f_D(K) = o_D(K)/t_D. The task of finding frequent k-mers (FKs) is defined as follows: given a dataset D, a positive integer k and a minimum frequency threshold θ ∈ (0, 1], find the set of all the k-mers whose frequency in D is at least θ, together with their frequencies, that is FK(D, k, θ) = {(K, f_D(K)) : f_D(K) ≥ θ}. The set of frequent k-mers can be computed by scanning the dataset and counting the number of occurrences of each k-mer. However, when dealing with a massive dataset D, the exact computation of the set FK(D, k, θ) requires a large amount of time and memory. For this reason, one could instead focus on finding an approximation of FK(D, k, θ) with rigorous guarantees on its quality. In this work, we consider the following approximation, introduced in Pellegrina et al. Given a dataset D, a positive integer k, a frequency threshold θ ∈ (0, 1], and an accuracy parameter ε < θ, an ε-approximation C of FK(D, k, θ) is a set of pairs (K, f_K) with the following properties: C contains a pair (K, f_K) for every (K, f_D(K)) ∈ FK(D, k, θ); C contains no pair (K, f_K) such that f_D(K) < θ − ε; for every (K, f_K) ∈ C, it holds |f_K − f_D(K)| ≤ ε/2. Intuitively, the approximation C contains no false negatives (i.e. all the frequent k-mers of D are in C) and no k-mer whose frequency in D is much smaller than θ. In addition, the frequencies in C are good approximations of the actual frequencies in D, i.e. within a small error ε/2.
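As a concrete reference for these definitions, the following Python sketch computes supports and frequencies by brute force; the function names are ours for illustration, not part of the SPRISS implementation, and the approach is only viable for small datasets.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Support o_D(K): occurrences of every k-mer over all positions of all reads."""
    counts = Counter()
    for r in reads:
        for j in range(len(r) - k + 1):
            counts[r[j:j + k]] += 1
    return counts

def total_positions(reads, k):
    """t_D: the total number of k-mer positions in the dataset."""
    return sum(max(len(r) - k + 1, 0) for r in reads)

def frequent_kmers(reads, k, theta):
    """FK(D, k, theta): k-mers with frequency >= theta, with their frequencies."""
    t = total_positions(reads, k)
    counts = kmer_counts(reads, k)
    return {K: c / t for K, c in counts.items() if c / t >= theta}
```

For example, on the toy dataset `["ACGTACGT", "ACGTTT"]` with k = 3 there are 10 k-mer positions, and the 3-mers ACG and CGT each have support 3, hence frequency 0.3.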
Given a dataset D of n reads, we define a reads sample S of D as a bag of m reads, sampled independently and uniformly at random, with replacement, from the bag of reads in D. A natural way to compute an approximation of the set of frequent k-mers is by processing a sample, i.e. a small portion of the dataset D, instead of the whole dataset. While previous work (Pellegrina et al.) considered samples obtained by drawing k-mers independently from D, we consider samples obtained by drawing entire reads. Note that the development of an efficient scheme to effectively approximate the frequency of all frequent k-mers by sampling reads is highly non-trivial, due to dependencies among k-mers appearing in the same read. As explained in Section 1.1, our approach has several advantages, including the fact that it can be combined with any efficient k-mer counting procedure, and that it can be used to extract a representative subset of the data on which to conduct down-stream analyses, obtaining the same insights in a fraction of the time required to process the whole dataset. Such representative subsets could be stored and used for exploratory analyses, with a gain in terms of space and time requirements compared to using the whole dataset. In addition, note that SPRISS can approximate both canonical and non-canonical k-mers.
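A reads sample as defined above is simple to draw. The sketch below (our own helper names, not the paper's C++ scripts) samples m reads with replacement and estimates a k-mer's frequency from the sample; the sample frequency is, in expectation, the dataset frequency f_D(K).

```python
import random

def sample_reads(reads, m, seed=0):
    """Reads sample S: m reads drawn independently and uniformly
    at random, with replacement, from the dataset."""
    rng = random.Random(seed)
    return [reads[rng.randrange(len(reads))] for _ in range(m)]

def sample_frequency(sample, kmer):
    """Fraction of all k-mer positions in the sample where `kmer` occurs,
    an estimate of the dataset frequency f_D(kmer)."""
    k = len(kmer)
    occ = sum(r[j:j + k] == kmer
              for r in sample
              for j in range(len(r) - k + 1))
    positions = sum(max(len(r) - k + 1, 0) for r in sample)
    return occ / positions if positions else 0.0
```

The dependency problem mentioned above shows up here directly: the positions inside one sampled read are not independent draws, which is why the analysis in Section 3 needs the pseudodimension machinery rather than standard i.i.d. bounds.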

3 Method and algorithm

In this section, we develop and analyze our algorithm SPRISS, the first efficient algorithm to approximate frequent k-mers by read sampling. Let D be a bag of n reads. We define a bag B as a bag of ℓ indexes of reads of D chosen uniformly at random, with replacement, from the set {1, …, n}. Then we define an (m, ℓ)-reads sample S as a collection of m such bags of reads. Let k be a positive integer. Define the domain X as the set of bags of ℓ indexes of reads of D. Then define a family of real-valued functions on X, one function f_K for every k-mer K, where f_K(B) depends on o_B(K), the number of occurrences of K in all the reads of bag B. Note that, for a given bag B, the function f_K takes the same value whether K appears once or more than once in the reads of B, thus ignoring multiple occurrences of K in the bag. We define the frequency f_S(K) of a k-mer K obtained from the sample S of bags of reads, which is an unbiased estimator of f_D(K) (i.e. E[f_S(K)] = f_D(K)). While the unbiased estimate f_S(K) is the frequency reported by SPRISS as the estimated frequency of a k-mer K, SPRISS selects the k-mers to produce in output using a different estimate, namely a 'biased' version of f_S(K) in which multiple occurrences of K in a bag are ignored. For the technical motivation to use the biased frequency, see the analysis in Supplementary Section S3. Our algorithm SPRISS (Algorithm 1) starts by computing the number m of bags of reads as in Equation (1), based on the input parameters and on the characteristics of dataset D. It then draws a sample S of exactly m·ℓ reads, uniformly and independently at random, with replacement, from D. Next, it computes for each k-mer K the number of occurrences of K in sample S, using any exact k-mer counting algorithm. We denote the call of this method by exact_counting(S, k), which returns a collection T of pairs (k-mer, count). The sample S is then randomly partitioned into m bags, where each bag contains exactly ℓ reads. For each k-mer K, SPRISS computes the biased frequency and the unbiased frequency f_S(K), reporting in output only the k-mers whose biased frequency is at least a suitably lowered threshold derived in the analysis (Supplementary Section S3).
Note that the estimated frequency of a k-mer K reported in output is always given by the unbiased frequency f_S(K). SPRISS (Algorithm 1) is motivated by our main technical result, Proposition 1, which establishes a rigorous relation between the number m of bags of reads and the guarantees obtained by approximating the frequency of a k-mer K with its (biased) estimate (the full analysis is in Supplementary Section S3; see Supplementary Proposition S13). Let k and ℓ be two positive integers, and consider a sample of m bags of ℓ reads from D. For a fixed frequency threshold θ, error parameter ε and confidence parameter δ, Proposition 1 states that if m is at least the value given by Equation (1) then, with probability at least 1 − δ, the biased and unbiased estimates computed from the sample are, simultaneously for all k-mers, close to the true frequencies in D (the four precise guarantees, (i)-(iv), are stated in Supplementary Proposition S13). SPRISS builds on Proposition 1, and returns as approximation of FK(D, k, θ) the set A of reported k-mers with their unbiased frequencies. Therefore, with probability at least 1 − δ, the output of SPRISS provides the guarantees stated in Proposition 1. Note that, given a sample of m bags of reads from D, with m satisfying the condition of Proposition 1, the set A is almost an ε-approximation of FK(D, k, θ): Proposition 1 ensures that, with probability at least 1 − δ, no k-mer in A has frequency much smaller than θ, but it does not guarantee that all k-mers with frequency at least θ will be in the output. However, we show in Section 4.2 that, in practice, almost all of them are reported in output by SPRISS. Furthermore, we remark that it is possible to obtain different guarantees on the approximation computed by SPRISS by modifying the criteria used to report k-mers in output; for example, in some applications, perfect recall may be particularly important. To this aim, we note that by reporting all k-mers whose upper confidence bound on the frequency (given by (iv) in Proposition 1) is at least θ, we obtain that all frequent k-mers are in the approximation, with relaxed guarantees on the precision (i.e. some k-mers with frequency below θ may be in the output).
Moreover, in applications in which obtaining tight confidence intervals on all exact frequencies is important, an approximation scheme based on using multiple values of θ, analogous to the one described in Section 3.3 of Pellegrina et al., is directly applicable to SPRISS.

Algorithm 1:
  Data: dataset D, k, θ, ε, δ, integer ℓ
  Result: approximation A of FK(D, k, θ), with probability at least 1 − δ
  m ← number of bags given by Equation (1);
  S ← sample of exactly m·ℓ reads drawn, uniformly at random with replacement, from D;
  partition S at random into m bags of ℓ reads each;
  T ← exact_counting(S, k);
  for all k-mers K in T do
    S(K) ← number of bags of S where K appears;
    if the biased frequency of K is at least the lowered threshold then
      add (K, f_S(K)) to A;
  return A;

In practice, in Algorithm 1, the partition of S into m bags and the computation of the values S(K) could be highly demanding in terms of running time and space, since one has to compute and store, for each k-mer K, the exact number S(K) of bags where K appears at least once among all reads of the bag. We now describe a much more efficient approach to approximate the values S(K), without the need to explicitly compute the bags. The number of reads in a given bag where K appears is well approximated by a Poisson distribution with rate r_S(K)/m, where r_S(K) is the number of reads of S where the k-mer K appears at least once. Therefore, the number S(K) of bags where K appears at least once is approximated by a binomial distribution Binomial(m, 1 − e^{−r_S(K)/m}). Thus, one can avoid explicitly creating the bags and exactly counting S(K), by replacing the exact computation of S(K) with a draw from this binomial distribution. Corollary 5.11 of Mitzenmacher and Upfal (2017) guarantees that, by using this Poisson approximation of S(K), the output of SPRISS satisfies the properties of Proposition 1 with a slightly reduced probability, which leads to a corresponding adjustment of the confidence parameter in the computation of m. However, the approach described above requires computing, for each k-mer K, the number r_S(K) of reads of S where K appears at least once. We believe such computation can be obtained with minimal effort within the implementation of most k-mer counters, but we now describe a simple way to approximate r_S(K).
Since most k-mers appear at most once in a read, the number r_S(K) of reads where a k-mer K appears is well approximated by the number o_S(K) of occurrences of K in the sample S. Thus, instead of using r_S(K) we can use o_S(K), which only requires the counts obtained from the exact counting procedure exact_counting(S, k) (see Algorithm S2 in Supplementary Material). Note that approximating r_S(K) with o_S(K) leads to overestimating the frequencies of the few k-mers that reside in very repetitive sequences, e.g. k-mers composed of the same k consecutive nucleotides, for which o_S(K) can be much larger than r_S(K). However, since the majority of k-mers reside in non-repetitive sequences, we can assume r_S(K) ≈ o_S(K).
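The Poisson shortcut above can be sketched in a few lines: given the occurrence count of K in the sample (standing in for r_S(K)) and the number of bags m, the number of bags containing K is drawn as a Binomial(m, 1 − e^(−occ/m)) variable instead of being counted from explicit bags. This is a sketch under our own naming, not the actual SPRISS/KMC modification.

```python
import math
import random

def approx_bag_count(occ, m, rng=None):
    """Approximate S(K), the number of bags (out of m) containing k-mer K
    at least once, without materializing the bags: under the Poisson
    approximation each bag contains K independently with probability
    p = 1 - exp(-occ/m), so S(K) ~ Binomial(m, p)."""
    rng = rng or random.Random()
    p = 1.0 - math.exp(-occ / m)
    return sum(rng.random() < p for _ in range(m))  # one Binomial(m, p) draw
```

The two extremes behave as expected: a k-mer absent from the sample (occ = 0) lands in no bag, while an extremely abundant one lands in essentially every bag.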

4 Experimental evaluation

In this section, we present the results of our experimental evaluation. In particular:
- We assess the performance of SPRISS in approximating the set of frequent k-mers from a dataset of reads. In particular, we evaluate the accuracy of the estimated frequencies and the false negatives in the approximation, and compare SPRISS with the state-of-the-art sampling algorithm SAKEIMA (Pellegrina et al.) in terms of sample size and running time.
- We evaluate SPRISS's performance for the comparison of metagenomic datasets. We use SPRISS's approximations to estimate abundance-based distances (e.g. the Bray-Curtis distance) between metagenomic datasets, and show that the estimated distances can be used to obtain informative clusterings of metagenomic datasets from the Sorcerer II Global Ocean Sampling Expedition (Rusch et al.) (https://www.imicrobe.us) in a fraction of the time required by the exact distance computation (i.e. based on exact k-mer frequencies).
- We test SPRISS for discovering discriminative k-mers between pairs of datasets. We show that SPRISS identifies almost all discriminative k-mers from pairs of metagenomic datasets from Liu et al. and the Human Microbiome Project (HMP) (https://hmpdacc.org/HMASM/), with a significant speed-up compared to standard approaches.
- We evaluate SPRISS for approximate SNP genotyping, by combining the sampling scheme of SPRISS with previously proposed genotyping algorithms. We show that we achieve accurate approximations of the most common performance measures (precision, sensitivity and F-measure), obtaining a significant speed-up of the genotyping process.

4.1 Implementation, datasets, parameters and environment

We implemented SPRISS as a combination of C++ scripts, which perform the read sampling and save the sample to a file, and a modification of KMC 3 (Kokot et al.) (available at https://github.com/refresh-bio/KMC), a fast and efficient k-mer counting algorithm. We used KMC 3 with the default option to count canonical k-mers. Note that our flexible sampling technique can be combined with any k-mer counting algorithm. [See Supplementary Material for results, e.g. Supplementary Figure S1, obtained using Jellyfish v. 2.3 (available at https://github.com/gmarcais/Jellyfish) as the k-mer counter in SPRISS.] We use the variant of SPRISS that employs the Poisson approximation for computing the bag counts S(K) (see the end of Section 3). The SPRISS implementation, information about how to retrieve the data used in this work, and scripts for reproducing all results are publicly available (at https://github.com/VandinLab/SPRISS). We compared SPRISS with the exact k-mer counter KMC and with SAKEIMA (Pellegrina et al.) (available at https://github.com/VandinLab/SAKEIMA), the state-of-the-art sampling-based algorithm for approximating frequent k-mers. In all experiments we fix the accuracy parameter ε and the confidence parameter δ. If not stated otherwise, we considered k = 31 in our experiments. For SAKEIMA, as suggested in Pellegrina et al., we set the number g of k-mers in a bag so that a bag of reads of SPRISS contains the same (expected) number of k-mer positions as a bag of SAKEIMA; this guarantees that both algorithms provide outputs with the same guarantees, thus making the comparison between the two methods fair. To assess SPRISS in approximating frequent k-mers, we considered six large metagenomic datasets from HMP (see Supplementary Table S1 for the number of reads and the average read lengths). For the evaluation of SPRISS in comparing metagenomic datasets, we also used 37 small metagenomic datasets from the Sorcerer II Global Ocean Sampling Expedition (Rusch et al.) (see Supplementary Table S4).
For the assessment of SPRISS in the discovery of discriminative k-mers we used two large datasets from Liu et al., B73 and Mo17 (see Supplementary Table S2 for their sizes), and we also experimented with the HMP datasets. To evaluate the benefits of using SPRISS for SNP genotyping, we used an Illumina WGS dataset from NA12878 (see Supplementary Table S3), available from the Genome In A Bottle (GIAB) consortium (Zook et al.). All experiments have been performed on a machine with 512 GB of RAM and 2 Intel(R) Xeon(R) CPU E5-2698 v3 processors at 2.3 GHz, using one worker if not stated otherwise. All reported results are averages over five runs.

4.2 Approximation of frequent k-mers

In this section, we first assess the quality of the approximation of FK(D, k, θ) provided by SPRISS, and then compare SPRISS with SAKEIMA. We use SPRISS to extract approximations of frequent k-mers on six datasets from HMP, for several values of the minimum frequency threshold θ. The output of SPRISS satisfied the guarantees from Proposition 1 in all five runs of every combination of dataset and θ. In all cases the estimated frequencies provided by SPRISS are close to the exact ones (see Fig. 1a for an example). In fact, the average (across all reported k-mers) absolute deviation of the estimated frequency w.r.t. the true frequency is always small, i.e. one order of magnitude smaller than θ (Fig. 1b), and the maximum deviation is very small as well (Supplementary Fig. S2B). In addition, SPRISS achieves a very low false negative rate (i.e. the fraction of k-mers of FK(D, k, θ) not reported by SPRISS), which has always been below 0.012 in our experiments.
Fig. 1.

(a) k-mers exact frequency and frequency estimated by for dataset SRS024075 and . (b) Average deviations between exact frequencies and frequencies estimated by () and SAKEIMA (), for various datasets and values of θ. (c) Running time of (), SAKEIMA () and the exact computation ()—see also legend of (b). (d) Fraction of the dataset analyzed by () and by SAKEIMA ()

In terms of running time, SPRISS required at most 64% of the time required by the exact approach KMC (Fig. 1c). In addition, SPRISS used at most 30% of the RAM required by KMC. This is due to SPRISS analyzing at most 34% of the entire dataset (Fig. 1d). Note that the use of collections of bags of reads is crucial to achieve a useful sample size, i.e. one smaller than the whole dataset. In fact, the sample sizes obtained from less sophisticated statistical tools, e.g. Hoeffding's inequality combined with a union bound (see Supplementary Section S1), or the pseudodimension without collections of bags (see Supplementary Section S2), are much greater than the dataset size, and are therefore useless for datasets of reads. These results show that SPRISS obtains very accurate approximations of frequent k-mers in a fraction of the time required by exact counting approaches. We then compared SPRISS with SAKEIMA. In terms of quality of approximation, SPRISS reports approximations with an average deviation lower than SAKEIMA's approximations, while SAKEIMA's approximations have a lower maximum deviation. However, the ratio between the maximum deviation of SPRISS and that of SAKEIMA is always below 2. Overall, the qualities of the approximations provided by SPRISS and SAKEIMA are, thus, comparable. In terms of running time, SPRISS significantly improves over SAKEIMA (Fig. 1c), and processes slightly smaller portions of the dataset compared to SAKEIMA (Fig. 1d).
Summarizing, SPRISS is able to report most of the frequent k-mers and estimate their frequencies with small errors, by analyzing small samples of the datasets and with significant improvements in running time compared to exact approaches and to state-of-the-art sampling algorithms.

4.3 Comparing metagenomic datasets

We evaluated SPRISS for comparing metagenomic datasets by computing an approximation of the Bray-Curtis (BC) distance between pairs of datasets of reads, and using such approximations to cluster datasets. Let D1 and D2 be two datasets of reads. Let FK1 and FK2 be the sets of frequent k-mers of D1 and D2, respectively, where θ is a minimum frequency threshold. The BC distance between D1 and D2 considering only frequent k-mers is defined as BC(D1, D2) = 1 − 2·I(D1, D2)/U(D1, D2), where I(D1, D2) = Σ_K min(o_{D1}(K), o_{D2}(K)) and U(D1, D2) = Σ_K o_{D1}(K) + Σ_K o_{D2}(K), with the sums taken over the frequent k-mers. Conversely, the BC similarity is defined as 1 − BC(D1, D2). We considered six datasets from HMP, and estimated the BC distances among them by using SPRISS to approximate the sets of frequent k-mers, for the values of θ as in Section 4.2. We compared these estimated distances with the exact BC distances and with the estimates obtained using SAKEIMA. Both SPRISS and SAKEIMA provide accurate estimates of the BC distances (Fig. 2a and Supplementary Fig. S3), which can be used to assess the relative similarity of pairs of datasets. However, to obtain such approximations SPRISS requires at most 40% of the time required by SAKEIMA and usually 30% of the time required by the exact computation with KMC (Fig. 2b). Therefore SPRISS provides accurate estimates of metagenomic distances in a fraction of the time required by other approaches.
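Given exact or SPRISS-estimated counts for the frequent k-mers of two datasets, the BC distance above reduces to a few lines. This sketch takes plain dictionaries mapping k-mers to counts (our own interface for illustration, not the paper's pipeline).

```python
def bray_curtis(counts1, counts2):
    """Bray-Curtis distance between two k-mer count dictionaries:
    1 - 2 * (sum of shared counts) / (total counts)."""
    keys = set(counts1) | set(counts2)
    shared = sum(min(counts1.get(K, 0), counts2.get(K, 0)) for K in keys)
    total = sum(counts1.values()) + sum(counts2.values())
    return 1.0 - 2.0 * shared / total if total else 0.0

def bc_similarity(counts1, counts2):
    """BC similarity is one minus the BC distance."""
    return 1.0 - bray_curtis(counts1, counts2)
```

Identical count profiles give distance 0 and disjoint ones give distance 1, so sampling error in the estimated counts translates directly into a bounded perturbation of the distance.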
Fig. 2.

(a) Comparison of the approximations of the Bray-Curtis (BC) distances using approximations of frequent k-mers provided by (×) and by SAKEIMA (), and the exact distances, for . (b) Running time to approximate BC distances for all pairs of datasets with , with SAKEIMA and the exact approach. (c) Average linkage hierarchical clustering of GOS datasets using Jaccard similarity. (d) Same as (c), using estimated BC similarity from with 50% of the data (see also larger Supplementary Figs S4–S6 for better readability of datasets’ labels and computed clusters)

As an example of the impact of accurately estimating distances among metagenomic datasets, we used the sampling approach of SPRISS to approximate all pairwise BC distances among 37 small datasets from the Sorcerer II Global Ocean Sampling Expedition (GOS) (Rusch et al.), and used such distances to cluster the datasets with average linkage hierarchical clustering. The k-mer-based clustering of metagenomic datasets is often performed using presence-based distances, such as the Jaccard distance (Ondov et al.), which estimates the similarity between two datasets as the fraction of k-mers they have in common. Abundance-based distances, such as the BC distance (Benoit et al.; Danovaro et al.; Dickson et al.), provide more detailed measures that also account for k-mer abundances, but are often avoided due to the heavy computational cost of extracting all k-mer counts. However, the sampling approach of SPRISS can significantly speed up the computation of all BC distances and, thus, the entire clustering analysis. In fact, for this experiment, the use of SPRISS reduces the time required to analyze the datasets (i.e. obtain k-mer frequencies, compute all pairwise distances and obtain the clustering) by 62%. We then compared the clustering obtained using the Jaccard distance (Fig. 2c) with the clustering obtained using the estimates of the BC distances (Fig. 2d) computed from only 50% of the reads in the GOS datasets, which are assigned to groups and macro-groups according to the origin of the sample (Rusch et al.). Even though the BC distance is computed using only a sample of each dataset, while the Jaccard distance is computed on the entire datasets, the approximate BC distances lead to a better clustering in terms of correspondence of clusters to groups, and to the correct cluster separation for macro-groups. In addition, the similarities among datasets in the same group and the dissimilarities among datasets in different groups are more accentuated with the approximate BC distance: the ratio between the average BC similarity among datasets in the same group and the analogous average Jaccard similarity is greater than 1 for all groups, and the ratio between (i) the difference of the average BC similarity within the tropical macro-group and the average BC similarity between the tropical and temperate groups, and (ii) the analogous difference using the Jaccard similarity, is also greater than 1. These results tell us that the approximate BC distances, computed using only half of the reads in each dataset, amplify the similarity signal inside all groups defined by the original study (Rusch et al.), as well as the dissimilarity between the two macro-groups (tropical and temperate). To conclude, the estimates of the BC similarities obtained with the sampling scheme of SPRISS allow metagenomic datasets to be clustered better than the Jaccard similarity does, while requiring less than 40% of the time needed for the exact computation of BC similarities, even for fairly small metagenomic datasets.
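The two similarity notions compared above can be computed directly from k-mer count tables. Below is a minimal Python sketch; the function names and toy counts are ours for illustration, not part of SPRISS (in practice the counts would come from a k-mer counter run on the sampled reads):

```python
from collections import Counter

def jaccard_similarity(counts_a, counts_b):
    """Presence-based: fraction of distinct k-mers shared by the two datasets."""
    keys_a, keys_b = set(counts_a), set(counts_b)
    union = keys_a | keys_b
    return len(keys_a & keys_b) / len(union) if union else 1.0

def bray_curtis_similarity(counts_a, counts_b):
    """Abundance-based: shared counts relative to total counts (1 - BC distance)."""
    shared = sum(min(counts_a[k], counts_b[k]) for k in counts_a.keys() & counts_b.keys())
    total = sum(counts_a.values()) + sum(counts_b.values())
    return 2.0 * shared / total if total else 1.0

# Toy k-mer counts for two small datasets
a = Counter({"ACGT": 10, "CGTA": 5})
b = Counter({"ACGT": 4, "TTTT": 6})
print(jaccard_similarity(a, b))      # 1 shared k-mer out of 3 distinct -> 0.333...
print(bray_curtis_similarity(a, b))  # 2*4 / (15+10) = 0.32
```

The Jaccard similarity ignores abundances entirely, which is why the abundance-aware BC similarity can separate groups that Jaccard cannot.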

4.4 Approximation of discriminative k-mers

In this section, we assess SPRISS for approximating discriminative k-mers in metagenomic datasets. In particular, we consider the following definition of discriminative k-mers (Liu et al.). Given two datasets A and B and a minimum frequency threshold θ, we define the set of (A, B, θ)-discriminative k-mers as the collection of k-mers K for which both of the following conditions hold: (i) f_A(K) ≥ θ; (ii) f_A(K) ≥ ρ f_B(K), with ρ = 2, where f_A(K) and f_B(K) denote the frequency of K in A and B, respectively. Note that the exact computation of this set requires extracting all k-mer counts of A and B. SPRISS can be used to approximate the set by computing approximations of the sets of frequent k-mers in A and B, and then reporting a k-mer K as (A, B, θ)-discriminative if both conditions hold on the approximate frequencies: (i) the approximate frequency of K in A is at least θ; (ii) the approximate frequency of K in A is at least ρ times its approximate frequency in B, or K does not appear in the sample of B. To evaluate this approach, we considered two datasets from (Liu et al.), with θ and ρ = 2 set as in (Liu et al.). We used the sampling approach of SPRISS with two sample sizes, resulting in the analysis of 5% and 10% of all reads, to approximate the sets of discriminative k-mers in both directions. When 5% of the reads are used, the false negative rate is <0.028, while when 10% of the reads are used, the false negative rate is <0.018. In both cases SPRISS is substantially faster than the exact computation of the discriminative k-mers with KMC (we used 32 workers for both SPRISS and KMC). Similar results are obtained when analyzing pairs of HMP datasets, for various values of θ (Supplementary Fig. S7). These results show that SPRISS can identify discriminative k-mers with small false negative rates while providing a remarkable improvement in running time compared to the exact approach.
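The approximate rule described above (report K when its sampled frequency in A passes the threshold and dominates its sampled frequency in B, treating k-mers unseen in B's sample as frequency 0) can be sketched as follows; the function name and the toy frequencies are ours, not from the paper:

```python
def approx_discriminative_kmers(freq_a, freq_b, theta, rho=2.0):
    """Approximate discriminative k-mers from sampled frequency estimates.

    freq_a, freq_b: dicts mapping k-mer -> estimated frequency in each sample.
    A k-mer absent from freq_b is treated as having frequency 0 in B.
    """
    result = set()
    for kmer, fa in freq_a.items():
        if fa < theta:                    # condition (i): frequent in A
            continue
        fb = freq_b.get(kmer, 0.0)
        if fb == 0.0 or fa >= rho * fb:   # condition (ii): dominates B
            result.add(kmer)
    return result

freq_a = {"ACG": 0.02, "CGT": 0.001, "GTA": 0.03}
freq_b = {"ACG": 0.005, "GTA": 0.02}
print(approx_discriminative_kmers(freq_a, freq_b, theta=0.01))  # {'ACG'}
```

Since the inputs are sampled estimates, both false positives and false negatives are possible; the evaluation above reports the false negative rate.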

4.5 SNP genotyping

In this section, we evaluate SPRISS for approximate SNP genotyping. In particular, we assess the use of SPRISS in combination with previously proposed algorithms for SNP genotyping, in terms of precision, sensitivity and F-measure. The genotyping algorithms we used are the standard pipeline [BWA (Li and Durbin, 2009) as aligner, and BCFtools (Li, 2011) as variant caller] and VarGeno (Sun and Medvedev, 2018). We considered hg19 as the reference genome and dbSNP (Sherry, 2001) as the reference SNP database, and used the gold standard for the NA12878 individual provided by the Genome In A Bottle (GIAB) consortium (Zook et al.). We used a high-coverage Illumina WGS dataset of reads from NA12878, and applied the sampling scheme of SPRISS to create samples of 12.5%, 25%, 50% and 75% of its reads. The standard pipeline was run with 64 threads. When evaluating the running time, we do not include the time to obtain the sample, since once the sample is created it can be reused several times; moreover, the time to obtain the sample is always a small fraction of the overall execution time (e.g. even for a sample containing 75% of the reads, the required time is <3000 s). The performance of the standard pipeline on the whole dataset is the following: 0.961 precision, 0.959 sensitivity and 0.960 F-measure. Figure 3 reports the running times and the performance measures of the standard pipeline on the samples obtained with SPRISS. Considering a sample of just 25% of the reads, the sensitivity and the F-measure decrease by 0.02 and 0.004, respectively, while the precision increases by 0.012. The increase in precision is due to a decrease in the number of false positive calls, achieved because the read sampling of SPRISS filters out low-coverage regions and erroneous k-mers. Using a sample of 25% of the reads instead of the entire dataset also provides a substantial speed-up.
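For intuition only, the sample-creation step can be sketched as independent per-read sampling. This is a simplification (SPRISS's actual scheme samples bags of reads and comes with rigorous guarantees derived from pseudodimension), and `sample_reads` is an illustrative name:

```python
import random

def sample_reads(reads, rate, seed=0):
    """Keep each read independently with probability `rate`.

    Illustrative only: SPRISS's real sampling scheme differs, but the key point
    holds for both: the sample is built once, cheaply, and can then be reused
    by any downstream genotyping pipeline.
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return [r for r in reads if rng.random() < rate]

reads = ["ACGTACGT", "TTGCAATG", "GGGCCCAA", "ATATATAT"]
subset = sample_reads(reads, 0.25)  # about a quarter of the reads, in expectation
```

Because the sample is a plain collection of reads, it can be fed unchanged to BWA+BCFtools or VarGeno, which is what makes the approach a drop-in accelerator.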
Fig. 3.

As a function of the sample rate, experimental results of combining SPRISS with VarGeno and with the standard pipeline in the SNP genotyping process: VarGeno's precision (a), sensitivity (c), F-measure (e) and running time (g); standard pipeline's precision (b), sensitivity (d), F-measure (f) and running time (h)

On the whole dataset, VarGeno achieves 0.585 sensitivity and 0.731 F-measure. With a sample of just 25% of the reads, the performance of VarGeno decreases by 0.003 in precision, 0.015 in sensitivity and 0.013 in F-measure, with a substantial speed-up with respect to the time required to analyze the entire dataset. The results for the other sample sizes are reported in Figure 3. To conclude, the sampling scheme of SPRISS is very useful to remarkably speed up genotyping algorithms, with very small decrements in the performance measures, and even an improvement in precision in some cases.
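The performance measures reported throughout this section can be computed from a call set and the gold standard as follows; this sketch and the tuple encoding of a variant call are ours, not part of any of the tools above:

```python
def genotyping_metrics(called, truth):
    """Precision, sensitivity and F-measure of SNP calls vs a gold standard.

    `called` and `truth` are sets of hashable call records,
    e.g. (chromosome, position, alternate allele) tuples.
    """
    tp = len(called & truth)                        # true positive calls
    precision = tp / len(called) if called else 0.0
    sensitivity = tp / len(truth) if truth else 0.0
    denom = precision + sensitivity
    f_measure = 2 * precision * sensitivity / denom if denom else 0.0
    return precision, sensitivity, f_measure

called = {("chr1", 100, "A"), ("chr1", 200, "G")}
truth = {("chr1", 100, "A"), ("chr1", 300, "T")}
print(genotyping_metrics(called, truth))  # (0.5, 0.5, 0.5)
```

Note how dropping a false positive call raises precision while leaving sensitivity unchanged, which is exactly the effect observed when sampling filters out low-coverage regions.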

5 Discussion

We presented SPRISS, an efficient algorithm to compute rigorous approximations of frequent k-mers and their frequencies by sampling reads. SPRISS builds on pseudodimension, an advanced concept from statistical learning theory. Our extensive experimental evaluation shows that SPRISS provides high-quality approximations and can be employed to speed up exploratory analyses in various applications, such as the analysis of metagenomic datasets, the identification of discriminative k-mers and SNP genotyping. Overall, the sampling approach used by SPRISS provides an efficient way to obtain a representative subset of the data that can be used to perform complex analyses more efficiently than examining the whole data, while obtaining representative results.

Funding

Part of this work was supported by the Italian Ministry of Education, University and Research (MIUR), under PRIN Project No. 20174LF3T8 AHeAD (Efficient Algorithms for HArnessing Networked Data) and the initiative 'Departments of Excellence' (Law 232/2016); and by the University of Padova under project SEED 2020 RATED-X.

Conflict of Interest: none declared.
References (51 in total)

1. Li, H. (2011) A statistical framework for SNP calling, mutation discovery, association mapping and population genetical parameter estimation from sequencing data. Bioinformatics.

2. Numanagić, I.; Bonfield, J.K.; Hach, F.; Voges, J.; Ostermann, J.; Alberti, C.; Mattavelli, M.; Sahinalp, S.C. (2016) Comparison of high-throughput sequencing data compression tools. Nat Methods.

3. Roy, R.S.; Bhattacharya, D.; Schliep, A. (2014) Turtle: identifying frequent k-mers with cache-efficient algorithms. Bioinformatics.

4. Danovaro, R.; Canals, M.; Tangherlini, M.; Dell'Anno, A.; Gambi, C.; Lastras, G.; Amblas, D.; Sanchez-Vidal, A.; Frigola, J.; Calafat, A.M.; Pedrosa-Pàmies, R.; Rivera, J.; Rayo, X.; Corinaldesi, C. (2017) A submarine volcanic eruption leads to a novel microbial habitat. Nat Ecol Evol.

5. Harris, R.S.; Medvedev, P. (2020) Improved representation of sequence bloom trees. Bioinformatics.

6. Elworth, R.A.L.; Wang, Q.; Kota, P.K.; Barberan, C.J.; Coleman, B.; Balaji, A.; Gupta, G.; Baraniuk, R.G.; Shrivastava, A.; Treangen, T.J. (2020) To Petabytes and beyond: recent advances in probabilistic and signal processing algorithms and their application to metagenomics. Nucleic Acids Res.

7. Rahman, A.; Medvedev, P. (2020) Representation of k-mer sets using spectrum-preserving string sets. J Comput Biol.

8. Zhang, Z.; Wang, W. (2014) RNA-Skim: a rapid method for RNA-Seq quantification at transcript level. Bioinformatics.

9. Marchet, C.; Boucher, C.; Puglisi, S.J.; Medvedev, P.; Salson, M.; Chikhi, R. (2020) Data structures based on k-mers for querying large collections of sequencing data sets. Genome Res. (Review)

10. Li, H.; Durbin, R. (2009) Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics.
