Daniele Dall'Olio1, Nico Curti2, Gastone Castellani2, Enrico Giampieri2, Eugenio Fonzi3, Claudia Sala4, Daniel Remondini1. 1. Department of Physics and Astronomy, University of Bologna, 40127, Bologna, BO, Italy. 2. Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138, Bologna, BO, Italy. 3. Istituto Scientifico Romagnolo per lo Studio e la Cura dei Tumori (IRST) IRCCS, 47014, Meldola, Italy. 4. Department of Physics and Astronomy, University of Bologna, 40127, Bologna, BO, Italy. claudia.sala3@unibo.it.
Abstract
BACKGROUND: Current high-throughput technologies (e.g. whole genome sequencing, RNA-Seq, ChIP-Seq) generate large volumes of data, and their usage grows more widespread with each passing year. Complex analysis pipelines involving several computationally intensive steps have to be applied to an increasing number of samples. Workflow management systems allow parallelization and more efficient usage of computational power. Nevertheless, this mostly happens by assigning all available cores to the pipeline of a single sample, or of a few samples, at a time. We refer to this approach as the naive parallel strategy (NPS). Here, we discuss an alternative approach, which we refer to as the concurrent execution strategy (CES), which distributes the available processors equally across every sample's pipeline. RESULTS: Theoretically, we show that under loose conditions the CES yields a substantial speedup, with an ideal gain ranging from 1 up to the number of samples. We also observe that the CES yields even faster executions in practice, since parallelizable tasks scale sub-linearly with the number of cores. Practically, we tested both strategies on a whole exome sequencing pipeline applied to three publicly available matched tumour-normal sample pairs of gastrointestinal stromal tumour. The CES achieved speedups in latency of up to 2-2.4 compared to the NPS. CONCLUSIONS: Our results suggest that if resource distribution is further tailored to fit specific situations, an even greater performance gain in multi-sample pipeline execution could be achieved. For this to be feasible, benchmarking of the tools included in the pipeline would be necessary. In our opinion, such benchmarks should be consistently performed by the tools' developers. Finally, these results suggest that concurrent strategies might also lead to energy and cost savings by making the use of clusters of low-power machines feasible.
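The claimed gain range (from 1 up to the number of samples) can be illustrated with a minimal cost model. The sketch below assumes a single power-law scaling model, time = serial_time / cores^alpha with alpha between 0 and 1, which is a simplifying assumption and not the model from the paper; under it, the CES/NPS speedup reduces to n_samples^(1 - alpha), giving 1 at alpha = 1 (perfect linear scaling) and n_samples at alpha = 0 (no scaling benefit).

```python
def task_time(serial_time, cores, alpha=0.8):
    """Runtime of one sample's pipeline on `cores` cores, under an
    assumed sub-linear power-law scaling (alpha=1 means linear)."""
    return serial_time / cores ** alpha

def nps_latency(n_samples, serial_time, total_cores, alpha=0.8):
    """Naive parallel strategy: all cores work on one sample at a
    time, so samples run back to back."""
    return n_samples * task_time(serial_time, total_cores, alpha)

def ces_latency(n_samples, serial_time, total_cores, alpha=0.8):
    """Concurrent execution strategy: cores are split equally, so all
    samples run simultaneously on total_cores / n_samples cores each."""
    return task_time(serial_time, total_cores / n_samples, alpha)

if __name__ == "__main__":
    n, t, c = 3, 100.0, 12
    for alpha in (1.0, 0.8, 0.5):
        speedup = nps_latency(n, t, c, alpha) / ces_latency(n, t, c, alpha)
        print(f"alpha={alpha}: CES speedup over NPS = {speedup:.2f}")
```

With alpha = 1 the two strategies tie (speedup 1.0), and as alpha decreases the CES advantage grows toward n_samples, which is consistent with the ideal gain range stated above.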