Umberto Ferraro Petrillo, Mara Sorella, Giuseppe Cattaneo, Raffaele Giancarlo, Simona E Rombo.
Abstract
BACKGROUND: Distributed approaches based on the MapReduce programming paradigm have started to be proposed in the Bioinformatics domain, due to the large amount of data produced by next-generation sequencing techniques. However, the use of MapReduce and related Big Data technologies and frameworks (e.g., Apache Hadoop and Spark) does not necessarily produce satisfactory results, in terms of both efficiency and effectiveness. We discuss how the development of distributed and Big Data management technologies has affected the analysis of large datasets of biological sequences. Moreover, we show how the choice of different parameter configurations and the careful engineering of the software with respect to the specific framework under consideration may be crucial in order to achieve good performance, especially on very large amounts of data. We choose k-mer counting as a case study for our analysis, and Spark as the framework to implement FastKmer, a novel approach for the extraction of k-mer statistics from large collections of biological sequences, with arbitrary values of k.
Keywords: Apache Spark; Distributed computing; Performance evaluation; k-mer counting
Year: 2019 PMID: 30999863 PMCID: PMC6471689 DOI: 10.1186/s12859-019-2694-8
Source DB: PubMed Journal: BMC Bioinformatics ISSN: 1471-2105 Impact factor: 3.169
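The abstract's case study, k-mer counting, reduces in Spark to a shuffle-heavy aggregation. As context for the pipeline figures and tables below, here is a minimal sketch of the naive Spark formulation in Scala; it is not FastKmer itself (which adds super-kmer packing, binning, and custom partitioning on top of this scheme), and the input path and one-sequence-per-line format are assumptions:

```scala
import org.apache.spark.sql.SparkSession

// Naive distributed k-mer counting: a baseline sketch, not FastKmer.
object NaiveKmerCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("naive-kmer-count").getOrCreate()
    val sc = spark.sparkContext
    val k = 28 // one of the two k values used in the experiments below

    // "reads.txt" is a hypothetical input with one sequence per line;
    // a real pipeline would parse FASTA/FASTQ records instead.
    val counts = sc.textFile("reads.txt")
      .flatMap(seq => seq.sliding(k).filter(_.length == k)) // all k-length substrings
      .map(kmer => (kmer, 1L))
      .reduceByKey(_ + _) // shuffles every k-mer occurrence across the cluster

    counts.saveAsTextFile("kmer-counts")
    spark.stop()
  }
}
```

Emitting one record per k-mer occurrence is what makes this baseline shuffle-bound; FastKmer's super-kmers (Fig. 2) exist precisely to cut that shuffle volume.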
Fig. 1: Stages of the pipeline implementing FastKmer
Fig. 2: Extraction of super-kmers from input sequences, using signatures (k = 10, m = 3)
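The super-kmer extraction of Fig. 2 can be sketched as follows, assuming (as a simplification) that the signature of a k-mer is its lexicographically smallest m-mer; KMC2-style signatures add further restrictions, and a real implementation computes signatures incrementally rather than rescanning every k-mer:

```scala
// Signature of a k-mer: its lexicographically smallest m-mer
// (a simplified stand-in for KMC2-style signatures).
def signature(kmer: String, m: Int): String =
  (0 to kmer.length - m).map(i => kmer.substring(i, i + m)).min

// Split a sequence into super-kmers: maximal substrings whose
// constituent k-mers all share the same signature. Returns
// (signature, super-kmer) pairs, the shuffle unit of stage 1.
def superkmers(seq: String, k: Int, m: Int): Seq[(String, String)] = {
  if (seq.length < k) return Seq.empty
  val sigs = (0 to seq.length - k).map(i => signature(seq.substring(i, i + k), m))
  val out = scala.collection.mutable.ArrayBuffer.empty[(String, String)]
  var start = 0
  for (i <- 1 until sigs.length if sigs(i) != sigs(start)) {
    out += ((sigs(start), seq.substring(start, i - 1 + k))) // covers k-mers start..i-1
    start = i
  }
  out += ((sigs(start), seq.substring(start, seq.length))) // last run reaches the end
  out.toSeq
}
```

With k = 10 and m = 3 as in the caption, a run of consecutive 10-mers sharing the same 3-mer signature is shipped once as a single string rather than as overlapping 10-mers, which is where the shuffle savings come from.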
Number of distinct and total k-mers for our datasets
| k-mers | 32 GB (k = 28) | 125 GB (k = 28) | 32 GB (k = 55) | 125 GB (k = 55) |
|---|---|---|---|---|
| Distinct | 12,551,234 K | 37,337,258 K | 14,203,028 K | 47,830,662 K |
| Total | 22,173,612 K | 86,674,803 K | 18,722,642 K | 73,209,044 K |
Fig. 3: Algorithm execution times on the 32GB dataset with k ∈ {28, 55} and an increasing number of bins (B) and parallelism level. Best combination shown in bold
Fig. 4: Comparison of execution times of FastKmer on the 32GB dataset for k ∈ {28, 55}, using the default (left) or custom (right) MPS-based partitioning method, for an increasing parallelism level. The number of bins is set to 8192. For the custom partitioning scheme, the performance at signature granularity is also shown, marked with a dashed line
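The custom MPS-based partitioning named in Fig. 4 treats bin-to-partition assignment as a multiprocessor scheduling problem. Below is a minimal sketch of that idea as a Spark Partitioner, using the greedy Longest-Processing-Time heuristic; the bin-load map (e.g., estimated from a sample of the input) and the fallback rule are assumptions, not FastKmer's actual code:

```scala
import org.apache.spark.Partitioner

// Assigns bins to partitions by greedy LPT bin packing instead of
// default hash partitioning: heaviest bin first, each placed on the
// currently lightest partition. Assumes non-negative integer bin ids.
class BinPackingPartitioner(numParts: Int, binLoads: Map[Int, Long])
    extends Partitioner {

  private val assignment: Map[Int, Int] = {
    val loads = Array.fill(numParts)(0L)
    binLoads.toSeq.sortBy(-_._2).map { case (bin, load) =>
      val p = loads.indexOf(loads.min) // lightest partition so far
      loads(p) += load
      bin -> p
    }.toMap
  }

  override def numPartitions: Int = numParts

  override def getPartition(key: Any): Int = key match {
    case bin: Int => assignment.getOrElse(bin, bin % numParts) // fallback for unseen bins
    case _        => 0
  }
}
```

Applied as `binnedRdd.partitionBy(new BinPackingPartitioner(parallelism, loads))`, this evens out per-partition work when bin loads are skewed, which is plausibly the effect the right-hand side of Fig. 4 measures.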
Running time (minutes) for various distributed k-mer counting algorithms, with a time limit of 10 hours
| Algorithm | 32 GB (k = 28) | 125 GB (k = 28) | 32 GB (k = 55) | 125 GB (k = 55) |
|---|---|---|---|---|
| FK | 23 | 82 | 38 | 119 |
| KCH | 28 | 196 | – | – |
| BioPig | 122 | Out of time | 450 | Out of time |
| ADAM | Out of mem | Out of mem | Out of mem | Out of mem |
Dash symbols represent combinations where the value of k is not supported by the algorithm
Fig. 5: Running time comparison of FK and other k-mer counting algorithms for various values of k and an increasing number of workers
Running time breakdown, in seconds, of the two FastKmer stages on the 32GB dataset with k fixed to 28 and a decreasing number of workers
| Metric | 64 Workers | 32 Workers | 16 Workers | 8 Workers |
|---|---|---|---|---|
| Stage 1 | | | | |
| Scheduler Delay time | 0.07 | 0.08 | 0.1 | 0.26 |
| Executor Deserialization time | 0.93 | 1.01 | 2.15 | 3.98 |
| Executor Compute time | 351.4 | 580.9 | 1112.51 | 2655.48 |
| Shuffle Read time | 0 | 0 | 0 | 0 |
| Shuffle Write time | 1.22 | 2.33 | 4.59 | 10.42 |
| Shuffle Read local (MB) | 0 | 0 | 0 | 0 |
| Shuffle Read remote (MB) | 0 | 0 | 0 | 0 |
| Shuffle Write (MB) | 504.7 | 1009.5 | 2018.7 | 4542 |
| Stage 2 | | | | |
| Scheduler Delay time | 0.08 | 0.14 | 0.07 | 0.11 |
| Executor Deserialization time | 0.19 | 0.44 | 1.05 | 1.82 |
| Executor Compute time | 773.52 | 868.59 | 1648.76 | 3859.24 |
| Shuffle Read time | 0.06 | 0 | 0 | 0.01 |
| Shuffle Write time | 0 | 0 | 0 | 0 |
| Shuffle Read local (MB) | 15.6 | 62.5 | 250.6 | 1125.9 |
| Shuffle Read remote (MB) | 484.4 | 937.9 | 1749.9 | 3375.3 |
| Shuffle Write (MB) | 0 | 0 | 0 | 0 |
The table also reports the size, in megabytes, of the corresponding shuffle reads and writes
Running time breakdown, in seconds, of the two FastKmer stages on the 32GB dataset with k fixed to 55 and a decreasing number of workers
| Metric | 64 Workers | 32 Workers | 16 Workers | 8 Workers |
|---|---|---|---|---|
| Stage 1 | | | | |
| Scheduler Delay time | 0 | 0.1 | 0.1 | 0.2 |
| Executor Deserialization time | 0 | 0.4 | 0.8 | 2.7 |
| Executor Compute time | 293.4 | 569.7 | 1152.8 | 2575.2 |
| Shuffle Read time | 0 | 0 | 0 | 0 |
| Shuffle Write time | 0.8 | 1.7 | 3.3 | 7.4 |
| Shuffle Read local (MB) | 0 | 0 | 0 | 0 |
| Shuffle Read remote (MB) | 0 | 0 | 0 | 0 |
| Shuffle Write (MB) | 504.7 | 1009.5 | 2018.7 | 4542 |
| Stage 2 | | | | |
| Scheduler Delay time | 0 | 0 | 0.1 | 0.1 |
| Executor Deserialization time | 0.2 | 0.44 | 0.4 | 1.4 |
| Executor Compute time | 1083 | 1171.2 | 2060.2 | 4556.4 |
| Shuffle Read time | 0 | 2.3 | 0 | 0 |
| Shuffle Write time | 0 | 0 | 0 | 0 |
| Shuffle Read local (MB) | 15.6 | 62.5 | 250.6 | 1125.9 |
| Shuffle Read remote (MB) | 484.4 | 937.9 | 1749.9 | 3375.3 |
| Shuffle Write (MB) | 0 | 0 | 0 | 0 |
The table also reports the size, in megabytes, of the corresponding shuffle reads and writes
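The per-stage figures in the two breakdown tables can be gathered programmatically. The paper does not state how its measurements were taken, so the following SparkListener is only one plausible route; Spark's web UI and event logs expose the same metrics:

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageCompleted}

// Prints, for each completed stage, the aggregate metrics reported
// in the breakdown tables (times in seconds, shuffle sizes in MB).
class StageMetricsListener extends SparkListener {
  override def onStageCompleted(ev: SparkListenerStageCompleted): Unit = {
    val info = ev.stageInfo
    val m = info.taskMetrics // aggregated over the stage's tasks; may be null
    if (m != null) {
      println(s"Stage ${info.stageId} (${info.name})")
      println(f"  executor compute time : ${m.executorRunTime / 1000.0}%.2f s")
      println(f"  deserialization time  : ${m.executorDeserializeTime / 1000.0}%.2f s")
      println(f"  shuffle read (local)  : ${m.shuffleReadMetrics.localBytesRead / 1e6}%.1f MB")
      println(f"  shuffle read (remote) : ${m.shuffleReadMetrics.remoteBytesRead / 1e6}%.1f MB")
      println(f"  shuffle write         : ${m.shuffleWriteMetrics.bytesWritten / 1e6}%.1f MB")
    }
  }
}
// Registered once per application:
//   sc.addSparkListener(new StageMetricsListener)
```

Scheduler delay is not part of TaskMetrics; the Spark UI derives it from task launch and finish timestamps, which is presumably how the first rows of the tables were obtained.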