Sparse Tensor Decomposition for Haplotype Assembly of Diploids and Polyploids.

Abolfazl Hashemi, Banghua Zhu, Haris Vikalo.

Abstract

BACKGROUND: Haplotype assembly is the task of reconstructing haplotypes of an individual from a mixture of sequenced chromosome fragments. Haplotype information enables studies of the effects of genetic variations on an organism's phenotype. Most of the mathematical formulations of haplotype assembly are known to be NP-hard, and haplotype assembly becomes even more challenging as sequencing technology advances and the lengths of paired-end reads and inserts increase. Assembly of haplotypes of polyploid organisms is considerably more difficult than in the case of diploids. Hence, scalable and accurate schemes with provable performance are desired for haplotype assembly of both diploid and polyploid organisms.
RESULTS: We propose a framework that formulates haplotype assembly from sequencing data as a sparse tensor decomposition. We cast the problem as that of decomposing a tensor having special structural constraints, and missing a large fraction of its entries, into a product of two factors, U and V; tensor V reveals haplotype information while U is a sparse matrix encoding the origin of erroneous sequencing reads. An algorithm, AltHap, which reconstructs haplotypes of either diploid or polyploid organisms by iteratively solving this decomposition problem, is proposed. The performance and convergence properties of AltHap are theoretically analyzed and, in doing so, guarantees on the achievable minimum error correction scores and correct phasing rate are established. The developed framework is applicable to diploid as well as biallelic and polyallelic polyploid species. The code for AltHap is freely available from https://github.com/realabolfazl/AltHap.
CONCLUSION: AltHap was tested in a number of different scenarios and was shown to compare favorably to state-of-the-art methods in applications to haplotype assembly of diploids, and to significantly outperform existing techniques when applied to haplotype assembly of polyploids.

Keywords:  Haplotype assembly; Iterative algorithm; Tensor decomposition

Year:  2018        PMID: 29589554      PMCID: PMC5872563          DOI: 10.1186/s12864-018-4551-y

Source DB:  PubMed          Journal:  BMC Genomics        ISSN: 1471-2164            Impact factor:   3.969


Background

Fast and accurate DNA sequencing has enabled unprecedented studies of genetic variations and their effect on human health and medical treatments. Complete information about variations in an individual’s genome is given by haplotypes, the ordered lists of single nucleotide polymorphisms (SNPs) on the individual’s chromosomes [1]. Haplotype information is of fundamental importance for a wide range of applications. For instance, when the corresponding genes on a homologous pair of chromosomes contain multiple variants, they could exhibit different gene expression patterns. In humans, this may affect an individual’s susceptibility to diseases and response to therapeutic drugs, and hence suggest directions for medical and pharmaceutical research [2]. Haplotype information also enables whole genome association studies that focus on the so-called tag SNPs [3], representative SNPs in a region of the genome characterized by strong correlation between alleles (i.e., by high linkage disequilibrium). Moreover, haplotype sequences can be used to infer recombination patterns and identify genes under positive selection [4]. In addition to the SNPs and minor structural variations found in a healthy individual’s genome, complex chromosomal aberrations such as translocations and nonreciprocal structural changes – including aneuploidy – are present in cancer cells. Cancer haplotype assembly enables identification of “driver” mutations and thus helps in understanding the mechanisms behind the disease and in discovering its genetic signatures. Haplotype assembly from short reads obtained by high-throughput DNA sequencing requires partitioning (either directly or indirectly) the reads into K clusters (K=2 for diploids, K=3 for triploids, etc.), each collecting the reads corresponding to one of the chromosomes. If the reads were free of sequencing errors, this task would be straightforward.
However, sequencing is erroneous – state-of-the-art platforms have error rates on the order of 10^-3 to 10^-2. This leads to ambiguities regarding the origin of a read and therefore renders haplotype assembly challenging. For this reason, the vast majority of haplotype assembly techniques attempt to remove the aforementioned ambiguities by either discarding or altering sequencing data; this has led to the minimum fragment removal, minimum SNP removal [5], maximum fragments cut [6], and minimum error correction formulations of the assembly problem [7]. Most of the recent haplotype assembly methods (see, e.g., [8-12]) focus on the minimum error correction (MEC) formulation, where the goal is to find the smallest number of nucleotides in reads that need to be changed so that any read partitioning ambiguities are resolved. It has been shown that finding an optimal solution to the MEC formulation of the haplotype assembly problem is NP-hard [5, 12, 13]. In [14], the authors used a branch-and-bound scheme to minimize the MEC objective over the space of reads; to reduce the search space, they relied on a bound on the objective obtained by a random partition of the reads. Unfortunately, the exponential growth of the complexity of this scheme makes it computationally infeasible even for moderate haplotype lengths. Integer linear programming techniques have been applied to haplotype assembly in [15], but that approach fails at computationally difficult instances of the problem. More recently, fixed parameter tractable (FPT) algorithms with runtimes exponential in the number of variants per read [16, 17] were proposed; these methods are well-suited for short reads but become infeasible for long ones. A dynamic programming scheme for haplotype assembly of diploids proposed in [18] is likewise exponential in the length of the longest read.
A probabilistic dynamic programming algorithm that optimizes a likelihood function generalizing the MEC objective was developed in [10]; this method is characterized by high accuracy but is significantly slower than the previous heuristics. The authors of [9, 11] aim to process long reads by developing algorithms for the exact optimization of weighted variants of the MEC score that scale well with read length but are exponential in the sequencing coverage. These methods, along with ProbHap [10], struggle to remain accurate and practically feasible at high coverages (e.g., higher than 12 [10]). The computational challenges of optimizing the MEC score have motivated several polynomial-time heuristics. In a pioneering work [19], a greedy algorithm seeking the most likely haplotypes was used to assemble haplotypes of the first complete diploid individual genome obtained via high-throughput sequencing. To compute posterior joint probabilities of consecutive SNPs, Bayesian methods relying on MCMC and Gibbs sampling schemes were proposed in [20] and [21], respectively; unfortunately, the slow convergence of the Markov chains that these schemes rely on limits their practical feasibility. Following the observation that haplotype assembly can be interpreted as a clustering problem, a max-cut formulation was proposed in [22]; an efficient algorithm (HapCUT, recently upgraded to HapCUT2 [23]) that solves it and significantly outperforms the method in [19] was developed and has been widely used. A flow-graph based approach in [24], HapCompass, re-examined the fragment removal strategy and demonstrated superior performance over HapCUT.
Other recent diploid haplotype assembly methods include a greedy max-cut approach in [25], a convex optimization program for minimizing the MEC score in [26], a communication-theoretic interpretation of the problem solved via belief propagation (BP) in [27], and methods that use external reference panels such as the 1000 Genomes panel to improve the accuracy of haplotype assembly in [28, 29]. Note that the deep sequencing coverage provided by state-of-the-art high-throughput sequencing platforms and the emergence of very long insert sizes in recent technologies (e.g., fosmid [25]) may enable assembly of extremely long haplotype blocks, but they also impose a significant computational burden on the methods above. Increased affordability, the capability to provide deep coverage, and longer sequencing read lengths have also enabled studies of genetic variations of polyploid organisms. However, haplotype assembly for polyploid genomes is considerably more challenging than that for diploids; to illustrate this, note that for a polyploid genome with k haplotype sequences of length m, under the all-heterozygous assumption there are (k−1)^m different genotypes and at least 2^((k−1)m) different haplotype phasings. Partly for this reason, relatively few methods for solving the haplotype assembly problem in polyploids have been developed. In fact, with the exception of HapCompass [24], SDhaP [26] and BP [27], the above listed methods are restricted to diploid genomes. Other techniques capable of reconstructing haplotypes for both diploid and polyploid genomes include HapTree [30], a Bayesian method for finding the maximum likelihood haplotypes shown to be superior to HapCompass and SDhaP (see, e.g., [31] for a detailed comparison); H-PoP [8], the state-of-the-art dynamic programming method that significantly outperforms the schemes developed in [24, 26, 30] in terms of accuracy, memory consumption, and speed; and the recently proposed matrix factorization schemes in [32, 33].
In this paper, we propose a unified framework for haplotype assembly of diploid and polyploid genomes based on sparse tensor decomposition; the framework essentially solves a relaxed version of the NP-hard MEC formulation of the haplotype assembly problem. In particular, read fragments are organized in a sparse binary tensor which can be thought of as being obtained by multiplying a matrix that contains information about the origin of erroneous sequencing reads and a tensor that contains the haplotype information of an organism. The problem is then recast as that of decomposing a tensor having special structural constraints and missing a large fraction of its entries. Based on a modified gradient descent method and after unfolding the observed and haplotype-information-bearing tensors, an iterative procedure for finding the decomposition is proposed. The algorithm exploits underlying structural properties of the factors to perform the decomposition at a low computational cost. In addition, we analyze the performance and convergence properties of the proposed algorithm and determine bounds on the minimum error correction (MEC) score and correct phasing rate (CPR) – also referred to as reconstruction rate – that the algorithm achieves for a given sequencing coverage and data error rate. To the best of our knowledge, this is the first polynomial-time approximation algorithm for haplotype assembly of diploids and polyploids having explicit theoretical guarantees on its achievable MEC score and CPR. The proposed algorithm, referred to as AltHap, is tested in applications to haplotype assembly for both diploid and polyploid genomes (synthetic and real data) and compared with several state-of-the-art methods. Our extensive experiments reveal that AltHap outperforms the competing techniques in terms of accuracy, running time, or both.
It should be noted that while state-of-the-art haplotype assembly methods for polyploids assume haplotypes may only have biallelic sites, AltHap is capable of reconstructing polyallelic haplotypes, which are common in many plants and some animals, are of particular importance for applications such as crop cultivation [34], and may help in the reconstruction of viral quasispecies [35]. Moreover, while many state-of-the-art haplotype assembly methods are computationally intensive (e.g., [10, 15]), our extensive numerical experiments demonstrate the efficacy of AltHap in a variety of practical settings.

Methods

Problem formulation

We briefly summarize the notation used in the paper. Bold capital letters refer to matrices and bold lowercase letters represent vectors. Tensors are denoted by underlined capital letters, e.g., M̲; M_::1 and M_(1) denote the first frontal slice and the mode-1 unfolding of a third-order tensor M̲, respectively. For a positive integer n, [n] denotes the set {1,…,n}. The condition number of a rank-k matrix M is defined as κ = σ_1/σ_k, where σ_1 ≥ ⋯ ≥ σ_k > 0 are the singular values of M. SVD_k(M) denotes the rank-k approximation (compact SVD) of M computed by the power iteration method [36, 37]. Let H = {h_1,…,h_k} denote the set of haplotype sequences of a k-ploid organism, and let R be an n×m SNP fragment matrix, where n denotes the number of sequencing reads and m is the length of the haplotype sequences. R is an incomplete matrix that can be thought of as being obtained by sampling, with errors, a matrix M that consists of n rows; each row of M is a sequence randomly selected from among the k haplotype sequences. Since each SNP is one of four possible nucleotides, we encode alleles using the standard basis vectors {e_1, e_2, e_3, e_4} of R^4; the mapping between nucleotides and basis vectors follows an arbitrary convention. The reads can now be organized into an n×m×4 SNP fragment tensor, which we denote by R̲. The (i,j,:) fiber of R̲, i.e., the one-dimensional slice obtained by fixing the first and second indices of the tensor, represents the value of the j-th SNP in the i-th read. Let Ω denote the set of informative fibers of R̲, i.e., the set of triples (i,j,:) such that the i-th read covers the j-th SNP, and let P_Ω(·) denote the sampling operator that retains the fibers indexed by Ω and sets all others to zero. Then R̲ is a tensor obtained by sampling, with errors, a tensor M̲ whose n horizontal slices are copies of the k encoded haplotype sequences.
More specifically, we can write M̲ as the product of a matrix U and a tensor V̲, in the sense that their mode-1 unfoldings satisfy M_(1) = U V_(1). Here V̲ ∈ {0,1}^(k×m×4) contains the haplotype information, i.e., its j-th slice is the encoded sequence of the j-th haplotype, while U ∈ {0,1}^(n×k) is a matrix that assigns each of the n horizontal slices of M̲ to one of the k haplotype sequences, i.e., the i-th row of U, u_i, is an indicator of the origin of the i-th read. Let Φ = {e_1,…,e_k}, where e_l is the l-th standard basis vector having 1 in the l-th position and 0 elsewhere. The rows of U are standard unit basis vectors in R^k, i.e., u_i ∈ Φ for all i ∈ [n]. This representation is illustrated in Fig. 1, where the (1,1,:) fiber of R̲ specified with dashed lines is mapped to the (1,1,:) fiber of M̲, which in turn implies that in the example described in Fig. 1 we have u_1 = e_1.
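As a concrete illustration of this representation, the toy NumPy sketch below builds a one-hot haplotype tensor, a read-origin matrix, and the error-free fragment tensor, and checks that the factorization survives unfolding. All sizes and variable names are our own illustrative choices, not taken from the AltHap implementation.

```python
import numpy as np

# Toy sketch of the representation above (sizes and names are illustrative only).
rng = np.random.default_rng(0)
k, m, n = 2, 6, 10                       # ploidy, number of SNPs, number of reads

haps = rng.integers(0, 4, size=(k, m))   # alleles over {A,C,G,T} encoded as {0,1,2,3}
V = np.eye(4)[haps]                      # haplotype tensor, shape (k, m, 4), one-hot fibers
origin = rng.integers(0, k, size=n)      # haplotype of origin of each read
U = np.eye(k)[origin]                    # read-origin matrix, rows in {e_1, ..., e_k}

# Error-free fragment tensor: the (i, j, :) fiber is the one-hot allele of read i at SNP j.
M = np.einsum('ik,kjl->ijl', U, V)       # shape (n, m, 4)

# The factorization survives mode-1 unfolding: flattening each read's fibers into a
# row of length 4m turns the tensor product into a matrix product M_(1) = U V_(1).
M1, V1 = M.reshape(n, 4 * m), V.reshape(k, 4 * m)
assert np.array_equal(M1, U @ V1)
```

Any fixed ordering of the 4m unfolded columns works here, as long as the same ordering is used for both factors.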
Fig. 1

Representing haplotype sequences and sequencing reads using tensors. Tensor V̲ contains haplotype information while matrix U ∈ {0,1}^(n×k) assigns each of the n horizontal slices of M̲ to one of the k haplotype sequences, i.e., the i-th row of U is an indicator of the origin of the i-th read

DNA sequencing is erroneous and hence we assume a model in which the informative fibers of R̲ are perturbed versions of the corresponding fibers of M̲ with data error rate p: if the fiber (i,j,:) ∈ Ω of M̲ takes value e_l, then the corresponding fiber of R̲ equals e_l with probability 1−p, and with probability p takes one of the other three possibilities, selected uniformly at random. Thus, the observed SNP fragment tensor can be modeled as R̲ = P_Ω(M̲ + N̲), where N̲ is an additive noise tensor whose fiber (i,j,:) is either zero (with probability 1−p) or the difference between a uniformly chosen incorrect allele vector and the true one (with probability p).

The goal of haplotype assembly can now be formulated as follows: given the SNP fragment tensor, find the tensor of haplotype sequences that minimizes the MEC score. Next, we formalize the MEC score as well as the correct phasing rate, also known as the reconstruction rate, the two metrics used to characterize the performance of haplotype assembly schemes (see, e.g., [15, 18, 38, 39]). For two alleles a_1 and a_2, we define the dissimilarity function d(a_1,a_2) = 0 if a_1 = a_2, and d(a_1,a_2) = 1 otherwise. The MEC score is the smallest number of fibers of R̲ that need to be altered so that the resulting modified data is consistent with the reconstructed haplotype tensor V̂, i.e.,

MEC = Σ_{i=1}^{n} min_{l∈[k]} Σ_{j∈Ω_i} d(R_{ij:}, V̂_{lj:}),

where Ω_i denotes the set of SNPs covered by the i-th read. The correct phasing rate (CPR), also referred to as the reconstruction rate, can conveniently be written using the dissimilarity function d(·,·). Let V̲* denote the tensor of true haplotype sequences. Then

CPR = 1 − (1/(km)) Σ_{l=1}^{k} Σ_{j=1}^{m} d(V̂_{π(l)j:}, V*_{lj:}),

where π is a one-to-one mapping from the slices of V̂ to those of V̲*, i.e., a one-to-one mapping from the set of reconstructed haplotypes to the set of true haplotypes, chosen to maximize the agreement. We now describe our proposed relaxation of the MEC formulation of the haplotype assembly problem. For each i ∈ [n], let p_i ∈ [k] denote the index of the haplotype to which the i-th read is assigned. Notice that for any j such that d(R_{ij:}, V_{p_i j:}) = 1, it holds that ‖R_{ij:} − V_{p_i j:}‖_2^2 = 2, since the two fibers are distinct standard basis vectors.
Therefore, denoting by Ω_i the set of informative fibers of the i-th read, the MEC score of an assignment {p_i} can be written as

MEC = (1/2) Σ_{i=1}^{n} Σ_{j∈Ω_i} ‖R_{ij:} − V_{p_i j:}‖_2^2,   (6)

where the factor 1/2 follows from the definition of the Frobenius norm applied to one-hot fibers. Let e_p be the p-th standard unit vector for p ∈ [k], and note that (6) can equivalently be written in terms of R_(1), the mode-1 unfolding of R̲: the vectorization vec(R_{i::}) of the i-th horizontal slice of R̲ is precisely the i-th row of R_(1). Hence, letting U ∈ {0,1}^(n×k) be the matrix whose i-th row is u_i = e_{p_i}, the definition of the Frobenius norm implies that minimizing the MEC score is equivalent to

min_{U, V_(1)} ‖P_Ω(R_(1) − U V_(1))‖_F^2   s.t.   u_i ∈ Φ for all i ∈ [n],   V_(1) ∈ {0,1}^(k×4m).   (7)

The optimization problem in (7) is NP-hard since the entries of V_(1) are binary and the objective function is non-convex. Relaxing the binary constraint on the entries of V_(1) to the interval [0,1], i.e., replacing V_(1) by a matrix W with entries W_{ij} ∈ [0,1], results in the following relaxation of the MEC formulation:

min_{U, W} ‖P_Ω(R_(1) − U W)‖_F^2   s.t.   u_i ∈ Φ for all i ∈ [n],   W ∈ [0,1]^(k×4m).   (8)

The new formulation can be summarized as follows. We start by finding the mode-1 unfoldings R_(1) and M_(1) of the tensors R̲ and M̲ and write the decomposition M_(1) = U V_(1), as illustrated in Fig. 2. As implied by the figure, after unfolding, the entries of the (1,1,:) fiber are mapped to four blocks of R_(1) and M_(1) that correspond to the frontal slices of the tensors R̲ and M̲, respectively. Then, to determine the haplotype sequence that minimizes the MEC score, one needs to solve (8) and find the optimal tensor decomposition.
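The two evaluation metrics formalized above, MEC and CPR, can be computed directly from their definitions. The sketch below operates on allele-index matrices with −1 marking SNPs a read does not cover; the function names and encoding are our own illustrative choices, not the paper's.

```python
import numpy as np
from itertools import permutations

def mec_score(reads, haps):
    """MEC: each read (-1 marks uncovered SNPs) is charged the number of
    mismatches against its best-fitting haplotype; the score sums over reads."""
    total = 0
    for r in reads:
        cov = r >= 0
        total += min(int(np.sum(r[cov] != h[cov])) for h in haps)
    return total

def cpr(est, true):
    """CPR: allele-level agreement between reconstructed and true haplotypes,
    maximized over one-to-one mappings between the two sets."""
    k, m = true.shape
    best = max(sum(int(np.sum(est[i] == true[perm[i]])) for i in range(k))
               for perm in permutations(range(k)))
    return best / (k * m)

# Error-free reads phased perfectly: MEC = 0; a label-swapped but otherwise
# exact reconstruction still achieves CPR = 1.
haps = np.array([[0, 1, 2, 3], [3, 2, 1, 0]])
reads = np.array([[0, 1, -1, -1], [-1, -1, 1, 0]])
print(mec_score(reads, haps), cpr(haps[::-1].copy(), haps))  # -> 0 1.0
```

The brute-force search over permutations in `cpr` is adequate here because the ploidy k is small, mirroring the exhaustive search over Φ used later by the algorithm itself.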
Fig. 2

Representing haplotype sequences and sequencing reads using unfolded tensors. Matrix V_(1) contains haplotype information while matrix U ∈ {0,1}^(n×k) assigns each of the n rows of M_(1) to one of the k haplotype sequences, i.e., the i-th row of U is an indicator of the origin of the i-th read


The AltHap algorithm

Although the objective function in (8), f(U,W) = ‖P_Ω(R_(1) − U W)‖_F^2, is convex in each of the factors when the other factor is fixed, it is generally non-convex in (U,W) jointly. To facilitate a computationally efficient search for the solution of (8), we rely on a modified gradient search algorithm which exploits the special structures of U and W and iteratively updates the estimates starting from some initial point (U^(0), W^(0)). More specifically, given the current estimates (U^(t), W^(t)), the update rules are

u_i^(t+1) = argmin_{u∈Φ} ‖P_{Ω_i}(r_i − u W^(t))‖_2^2,  i ∈ [n],   (9)
W^(t+1) = P_D(W^(t) − α ∇_W f(U^(t+1), W^(t))),   (10)

where r_i denotes the i-th row of R_(1), ∇_W f denotes the partial derivative of f with respect to W evaluated at (U^(t+1), W^(t)), α is a judiciously chosen step size, and P_D denotes the projection onto the feasible set D = [0,1]^(k×4m). Notice that the optimization in (9) is done by exhaustively searching over the k vectors in Φ. Since the number of haplotypes k is relatively small, the complexity of the exhaustive search in (9) is low. The proposed scheme is formalized as Algorithm 1. Note that AltHap differs from the previously proposed SCGD algorithm in [32] as follows: (i) AltHap’s novel representation of haplotypes and sequencing reads using binary tensors provides a unified framework for haplotype assembly of diploids as well as biallelic and polyallelic polyploids; the method in [32] is not capable of performing haplotype assembly of polyallelic polyploid genomes. (ii) Unlike [32], AltHap exploits the fact that V_(1) is composed of binary entries by imposing the constraint W ∈ D in the MEC relaxation (8). As our results in the Results and discussion section demonstrate, this leads to significant performance improvements of AltHap over SCGD in a variety of settings. (iii) Lastly, in the convergence analysis section we provide an analysis of the global convergence of AltHap and derive explicit analytical bounds on its achievable performance; such performance guarantees do not exist for the method in [32].
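A minimal sketch of one such alternation is given below, under simplifying assumptions: a dense observation mask, and a plain projected-gradient step with the constant factor of the gradient absorbed into the step size. The names are ours and this is a sketch, not the reference AltHap code.

```python
import numpy as np

def althap_step(R1, mask, U, W, alpha):
    """One alternation in the spirit of the updates above (a sketch):
    a gradient step on the unfolded haplotype factor W for
    f(U, W) = ||P_Omega(R1 - U W)||_F^2, projection of W onto [0, 1],
    then exhaustive re-assignment of each read to its best haplotype."""
    k = W.shape[0]
    residual = mask * (U @ W - R1)
    # Gradient step + projection; the factor 2 of the gradient is absorbed into alpha.
    W = np.clip(W - alpha * (U.T @ residual), 0.0, 1.0)
    # U-update: for each read, exhaustively try the k basis vectors and keep the best fit.
    for i in range(R1.shape[0]):
        costs = [np.sum((mask[i] * (W[l] - R1[i])) ** 2) for l in range(k)]
        U[i] = np.eye(k)[int(np.argmin(costs))]
    return U, W
```

Because each half-step can only decrease f for a small enough α (the projection onto the convex box cannot increase the distance to the unconstrained minimizer, and the U-update is an exact per-read minimization), iterating this update drives the objective down monotonically.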

Convergence analysis of AltHap

In this section, we analyze the convergence properties of AltHap and provide performance guarantees in different scenarios. In Additional file 1 we show that a judicious choice of the step size α, specified there in terms of a constant C ∈ (0,2), guarantees that the value of the objective function in (8) decreases as one alternates between (9) and (10), which in turn implies that AltHap converges. The key observation that leads to this result is that f is a convex function of each factor matrix when the other is fixed and that the feasible set D = [0,1]^(k×4m) is convex; hence the projection in (10) leads to a reduction of f in each iteration t. It is important, however, to determine the conditions under which the stationary point of AltHap coincides with the global optimum of (8). To this end, we first provide the definition of incoherence of matrices [40].

Definition 1

A rank-k matrix M with singular value decomposition M = UΣV^T is incoherent with parameter μ if for every 1 ≤ i ≤ n and 1 ≤ j ≤ m it holds that ‖e_i^T U‖_2^2 ≤ μk/n and ‖e_j^T V‖_2^2 ≤ μk/m.

Let each fiber of M̲ be observed uniformly at random with probability p. Let C_snp ≜ mp denote the expected number of SNPs covered by each read, and C_seq ≜ np denote the expected coverage of each haplotype sequence. Theorem 1, built upon the results of [41-43], states that with an adequate number of covered SNPs, the solution found by AltHap reconstructs V̲ up to an error term that stems from the existence of errors in the sequencing reads.
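Definition 1 can be checked numerically. The helper below computes the incoherence parameter μ under the standard matrix-incoherence definition (the smallest μ for which the row-norm bounds hold); the code is our own illustration, not part of AltHap.

```python
import numpy as np

def incoherence(M, k):
    """Incoherence parameter mu of a rank-k matrix M: the largest squared
    row norm of the left/right singular factors, scaled by n/k and m/k."""
    n, m = M.shape
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    U, V = U[:, :k], Vt[:k, :].T
    mu_u = (n / k) * np.max(np.sum(U ** 2, axis=1))
    mu_v = (m / k) * np.max(np.sum(V ** 2, axis=1))
    return max(mu_u, mu_v)

# A "flat" matrix achieves the ideal mu = 1; spiky matrices are far more incoherent.
print(round(incoherence(np.ones((4, 6)), 1), 6))  # -> 1.0
```

Small μ (together with a small condition number κ) is exactly the well-conditioned regime discussed in Remark 2 below, where recovery is easiest.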

Theorem 1

Assume M_(1) is μ-incoherent and let κ denote its condition number. Then there exist numerical constants C_0, C_1 > 0 such that if Ω is generated uniformly at random and the sampling condition (13) on C_snp holds, then with high probability the solution found by AltHap satisfies the error bound (14).

The proof of Theorem 1 relies on a coupled perturbation analysis to establish a certain type of local convexity of the objective function around the global optimum. Thus, under (13) there is no other stationary point near the global optimum and hence, starting from a good initial point, AltHap converges globally. We employ the initialization procedure suggested by [42] – summarized in the initialization step of Algorithm 1 – which is based on a low-cost singular value decomposition of R_(1) computed by the power method [36, 37] and which, with high probability, lies in the described region of convexity of f.

Remark 1

Under the assumptions of Theorem 1, condition (13) specifies a lower bound on the expected number of covered SNPs, C_snp, that is required for the exact recovery of V̲ in the idealistic error-free scenario, i.e., for p = 0. With higher sequencing coverage, more SNPs are covered by the reads, and hence the C_snp required for accurate haplotype assembly scales with C_seq along with the other parameters. Moreover, the term on the right-hand side of (14) is the bound on the error of the solution generated by AltHap; it increases with the sequencing error rate p and the ploidy k, and decreases with C_snp and the number of reads n, as expected.

Remark 2

If M_(1) is well-conditioned, i.e., characterized by a small incoherence parameter μ and a small condition number κ, the recovery becomes easier; this is reflected in a less strict sufficient condition (13) and improved achievable performance (14). In fact, as we verified in our simulation studies, with the proposed framework for haplotype assembly the parameters μ and κ associated with M_(1) are close to 1 (the ideal case). Theorem 2 provides theoretical bounds on the expected MEC score and CPR achieved by AltHap (see Additional file 1 for the proof).

Theorem 2

Under the conditions of Theorem 1, the expected MEC score achieved by AltHap satisfies, with high probability, the upper bound (15). Moreover, if the reads sample the haplotype sequences uniformly, then with high probability the expected CPR satisfies the lower bound (16).

Remark 3

The bound established in (15) suggests that the expected MEC score increases with the length of the haplotype sequences, the sequencing error rate, the number of haplotype sequences, and the sequencing coverage. A higher sequencing coverage results in a larger amount of fragment data, which in turn leads to higher MEC scores.

Remark 4

As intuitively expected, the bound (16) suggests that AltHap’s achievable expected CPR improves with the number of sequencing reads and the SNP coverage; on the other hand, the CPR deteriorates at higher data error rates. Finally, assuming the same sequencing parameters, (16) implies that reconstruction of polyploid haplotypes is more challenging than that of diploids.

Results and discussion

We evaluated the performance of the proposed method on both experimental and simulated data, as described next. AltHap was implemented in Python and MATLAB, and the simulations were conducted on a single core of an Intel Xeon E5-2690 v3 (Haswell) processor running at 2.6 GHz with 64 GB of DDR4-2133 RAM. The benchmarked algorithms include Belief Propagation (BP) [27], a communication-inspired method capable of performing haplotype assembly of diploid and biallelic polyploid species; HapTree [30]; the integer linear programming (ILP) technique of [15]; SCGD [32]; and H-PoP [8], the state-of-the-art dynamic programming algorithm for haplotype assembly of diploid and biallelic polyploid species, shown to be superior to HapTree [30], HapCompass [24], and SDhaP [26] in terms of both accuracy and speed [8, 31]. Following prior work on haplotype assembly (see, e.g., [15, 18, 38, 39]), we use the MEC score and CPR to assess the quality of the reconstructed haplotypes. For clarity, the tables report the CPR percentage, i.e., CPR × 100.

Experimental data

We first tested the performance of AltHap in an application to haplotype reconstruction of a data set from the 1000 Genomes Project – in particular, the sample NA12878 sequenced at high coverage using the 454 sequencing platform. In this work, we take the trio-phased variant calls from the GATK resource bundle [44] as the true haplotype sequences. We compare the MEC score, CPR, and running time achieved by AltHap to those of H-PoP, BP, HapTree, SCGD, and ILP. All the algorithms used in the benchmarking study were executed with their default settings. The results are given in Table 1. As seen there, among the considered algorithms AltHap achieves the highest CPR for the majority of the chromosomes (nine), followed by H-PoP and ILP (five each). As expected, ILP achieves the lowest MEC scores among all the methods, but this comes at a computational cost much higher than that of AltHap. Notice that a lower MEC score does not necessarily imply a better CPR: MEC is the error evaluated on observed SNP positions, i.e., the training data points, while CPR is related to the generalization error calculated on unobserved SNP positions, i.e., the test data points. Since the sequencing reads are erroneous, an algorithm might overfit while trying to minimize the MEC score.
Table 1

Performance comparison of AltHap, H-PoP, BP, HapTree, SCGD, and ILP applied to haplotype reconstruction of the CEU NA12878 data set in the 1000 Genomes Project

              AltHap                  H-PoP                   BP
Chromosome    CPR    MEC   t(sec)    CPR    MEC   t(sec)    CPR    MEC   t(sec)
1             97.4   2011  11.26     95.7   2264   5.22     99.1   2321   8.17
2             95.3   2562  12.22     95.6   2971   5.65     89.5   2897   9.83
3             93.3   2084  10.38     91.2   2312   6.99     74.3   2367   8.30
4             96.9   2368  12.16     97.0   2648   5.24     74.8   2613   6.76
5             97.2   1924   9.96     96.6   2103   4.67     88.2   2185   4.76
6             94.9   3687  14.17     95.2   3343   4.93     88.7   3588   6.94
7             97.0   1846  11.19     92.4   1986   4.24     81.1   2073   7.88
8             96.2   1634   9.63     94.7   1848   4.14     88.5   1857   8.01
9             97.1   1272   6.42     91.0   1462   3.36     89.8   1491   6.13
10            96.8   1584   7.97     94.5   1683   3.67     90.8   1839   7.18
11            93.3   1394   7.45     91.5   1553   3.71     75.6   1586   6.69
12            92.1   1423   7.12     90.3   1570   3.46     74.4   1589   6.48
13            97.0   1269   4.42     94.1   1440   2.89     89.1   1409   5.38
14            90.3    857   9.53     97.1    974   2.54     70.0    995   4.53
15            97.2    941   9.42     97.4   1039   2.40     74.6   1063   3.92
16            96.7   1198   5.40     93.5   1192   2.47     79.7   1269   4.42
17            97.5   1146   4.58     91.1   1244   1.98     92.4   1234   3.15
18            91.0    860   4.54     97.6    893   2.51     82.0    942   3.79
19            97.6    618   3.32     97.8    695   1.82     98.0   1060   2.47
20            97.3    703   3.53     95.0    719   2.00     97.1    796   2.74
21            97.4    470   2.51     97.0    512   1.70     97.5    532   1.86
22            97.3    367   1.98     98.3    427   1.44     90.7    438   1.72
Mean          95.8   1464   7.69     94.8   1585   3.50     85.0   1643   5.51
Sd             2.27   780   3.54      2.54   790   1.49      8.94   793   2.32
# best        9      0     0         5      0     5         3      0     0
              HapTree                 SCGD                    ILP
Chromosome    CPR    MEC   t(sec)    CPR    MEC   t(sec)    CPR    MEC   t(sec)
1             84.1   2305  15.43     92.5   2456   3.62     95.6   1741  173.68
2             84.5   2875  17.59     92.6   3509   4.41     95.3   2219  190.37
3             85.2   2363  15.06     91.9   2498   3.40     95.6   1788  152.09
4             83.5   2604  18.67     92.7   3754   5.47     97.1   2048  168.56
5             84.8   2171  16.95     93.9   2750   3.54     95.4   1691  147.72
6             84.6   3583  23.86     93.0   5612   8.70     95.7   2643  181.51
7             84.7   2070  13.06     93.5   2826   3.95     95.4   1590  133.36
8             84.2   1838  14.81     90.7   1692   2.18     95.6   1472  136.60
9             85.1   1479  14.90     97.1   1885   2.94     95.2   1125  105.34
10            85.7   1823  12.13     92.6   1876   2.56     95.7   1354  120.89
11            83.6   1577  11.33     93.2   2265   2.95     95.2   1206  104.74
12            84.8   1589   9.97     92.3   1612   2.03     95.4   1214  103.88
13            82.8   1405   9.55     97.0   2947   3.31     95.5   1105   93.33
14            85.4    987   7.79     91.1    904   1.36     95.3    752   65.07
15            83.6   1061   7.43     99.1   1041   1.21     94.1    809   66.52
16            85.1   1273   8.13     93.0   1305   1.79     95.5    920   77.81
17            84.8   1230   6.34     96.7   2123   2.61     96.1    943   47.99
18            84.1    941   7.13     90.3    933   1.16     95.2    720   71.49
19            84.6    765   5.26     97.2   1290   3.25     96.6    533   44.32
20            86.9    795   6.08     96.8    949   1.38     95.8    612   54.30
21            86.3    528   5.05     94.3    499   0.63     95.2    415   31.82
22            86.9    436   4.65     94.1    422   0.74     95.2    316   31.89
Mean          84.8   1623  11.42     93.9   2052   2.87     95.5   1237  104.69
Sd             1.03   802   5.23      2.3   1222   1.80      0.57   612   50.37
# best        0      0     0         1      0    17         5     22     0

The best results for each chromosome and over all chromosomes are in boldface font

Fosmid pool-based sequencing provides very long fragments and is characterized by a much higher ratio of the number of SNPs to the number of reads than the standard data sets generated by high-throughput sequencing platforms. We consider the fosmid sequence data for the chromosomes of HapMap sample NA12878 and again take the trio-phased variant calls from the GATK resource bundle [44] as the true haplotype sequences. We compare the performance of AltHap to those of H-PoP, BP, HapTree, SCGD, and ILP and report the results in Table 2. As can be seen from Table 2, AltHap achieves the best CPR for most of the chromosomes (thirteen out of 22), followed by H-PoP (four). As with the 1000 Genomes Project data, ILP achieves the best MEC scores but is much slower and significantly inferior to AltHap in terms of CPR. Note that since HapTree could not finish assembling the haplotype of chromosome 6 within 48 hours, that result is missing from the table.
Table 2

Performance comparison of AltHap, H-PoP, BP, HapTree, SCGD, and ILP applied to the fosmid data set. HapTree could not finish assembling the haplotype of chromosome 6 within 48 hours

              AltHap                  H-PoP                   BP
Chromosome    CPR    MEC   t(sec)    CPR    MEC   t(sec)    CPR    MEC   t(sec)
1             95.5   9731  18.38     84.8   9845   2.13     87.6   9567  40.18
2             95.5   9589  38.89     90.4   9444   2.16     84.8   9698  42.90
3             91.7   7311  29.40     91.7   7182   1.79     84.7   7587  30.61
4             92.7   5508  26.69     92.6   5775   1.76     86.9   6288  31.10
5             92.0   6711  27.39     93.9   6910   1.95     86.3   6975  36.94
6             90.9   7213  33.68     88.5   7505   2.40     85.0   7590  41.20
7             90.7   6151  28.60     91.9   6829   1.68     85.8   6091  36.94
8             91.2   5927  23.82     90.2   6143   1.89     87.3   6282  38.87
9             91.8   5347  19.40     91.8   5719   1.76     85.1   5493  26.13
10            90.1   6044  24.07     92.4   6328   1.48     86.4   6503  27.65
11            90.8   5424  21.73     90.3   6432   1.68     85.8   5579  20.56
12            91.5   5456  24.25     91.4   5653   1.46     85.0   5706  24.19
13            90.4   3646  14.23     90.1   3708   1.54     82.7   3976  17.33
14            89.5   4156  18.64     89.1   4261   1.21     87.0   4004  14.84
15            90.0   4079  14.67     72.9   4001   1.06     82.3   4022  14.35
16            88.5   6197  26.28     71.5   6119   1.20     84.4   5112  29.51
17            89.7   4507  16.35     88.3   4911   1.22     87.6   4749  18.29
18            93.0   3080  12.68     90.8   3315   1.14     85.5   3457  13.31
19            85.7   4212  13.40     86.3   4115   0.84     83.5   3928  13.44
20            90.3   3512  13.64     90.0   4121   0.85     84.9   3814  15.97
21            92.7   1871   6.20     91.9   1974   0.68     87.2   1953   8.18
22            85.1   3654  17.24     87.8   3757   0.62     86.7   3910  14.72
Mean          90.9   5424  21.35     88.6   5639   1.48     85.6   5558  25.33
Sd             2.5   1950   7.79      5.7   1934   0.50      1.5   1948  10.84
# best        13     0     0         4      0     9         0      0     0
              HapTree                 SCGD                    ILP
Chromosome    CPR    MEC   t(sec)    CPR    MEC    t(sec)   CPR    MEC   t(sec)
1             91.5   9676  6501      95.1   10127   2.59    79.0   6889  80.33
2             92.3   9802  7196      94.5    9721   2.41    76.1   6700  76.60
3             90.7   7705  4847      88.6    7410   1.83    76.9   5122  79.50
4             90.8   6500  8392      87.6    5494   1.48    77.0   4072  51.49
5             90.8   7094  5670      89.6    7058   1.71    76.0   4637  54.39
6             -      -     -         90.4    7843   2.14    75.7   5248  63.37
7             91.5   6169  5589      89.4    6189   1.73    77.9   4174  46.85
8             91.2   6379  8316      87.4    5996   1.47    76.3   4301  53.57
9             91.7   5513  4465      90.0    5592   1.20    76.8   3974  42.41
10            88.9   6553  4838      92.8    6027   1.60    76.8   4508  59.25
11            90.5   5625  5183      90.1    5662   1.34    79.0   3903  45.45
12            91.3   5770  5654      90.5    5731   1.55    77.5   3907  48.76
13            89.8   4029  5367      87.6    3727   0.79    77.1   2669  32.09
14            90.6   4038  4103      92.9    4859   1.12    75.4   2814  39.61
15            90.7   4116  3357      87.8    4442   0.88    78.7   2903  33.80
16            94.2   5142  9683      95.5    6474   1.60    79.8   3844  62.44
17            93.1   4806  3003      97.1    4843   1.01    80.8   3448  42.00
18            91.9   3493  2303      88.3    3478   0.71    76.9   2337  32.27
19            92.8   3953  1984      82.5    4204   0.87    78.6   2707  33.68
20            90.1   3886  1529      94.6    3790   0.83    78.7   2783  31.78
21            92.1   1979  1410      90.7    2042   0.36    77.2   1367  16.42
22            92.4   3307  1351      90.6    3495   1.06    77.0   2422  60.62
Mean          91.4   5502  4797.19   90.6    5645   1.38    77.6   3851  49.39
Sd             1.2   1998  2392.54    3.4    1977   0.56     1.39  1360  16.82
# best        4      0     0         4      0    13         0     22     0

The best results in each chromosome and across all chromosomes are in boldface font


Simulated data: the diploid case

To further benchmark the performance of the proposed scheme, we test it on the synthetic data from [39], which is often used to compare methods for haplotype assembly of diploids. These data sets emulate haplotype assembly under varied coverage, sequencing error rates, and haplotype block lengths. We constrain our study to the assembly of haplotype blocks of length m=700 bp (the longest blocks in the data set). The results, averaged over 100 instances of the problem, are given in Table 3. As evident from this table, AltHap outperforms the other algorithms for nearly all combinations of data error rate and sequencing coverage, and is also much faster than SCGD, ILP, BP, and HapTree while being slightly slower than H-PoP. Note that ILP could only finish assembling haplotypes within 48 hours for two settings, namely p=0.1 with coverages of 5 and 8. Hence, the results for the other settings are missing from the table.
Table 3

Performance comparison of AltHap, H-PoP, BP, HapTree, SCGD, and ILP on a simulated diploid data set from [39] with haplotype block length m=700. ILP could only finish assembly of haplotypes for two settings in 48 hours

| Error rate | Coverage | AltHap CPR | AltHap MEC | AltHap t(s) | H-PoP CPR | H-PoP MEC | H-PoP t(s) | BP CPR | BP MEC | BP t(s) |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.1 | 5 | 99.6 | 477 | 0.043 | 99.3 | 402 | 0.012 | 86.7 | 698 | 1.421 |
| 0.1 | 8 | 99.9 | 759 | 0.128 | 99.8 | 780 | 0.035 | 87.2 | 861 | 4.627 |
| 0.1 | 10 | 99.9 | 954 | 0.404 | 99.9 | 903 | 0.109 | 87.3 | 1130 | 13.58 |
| 0.2 | 5 | 90.9 | 941 | 0.061 | 87.7 | 1021 | 0.027 | 81.2 | 953 | 2.671 |
| 0.2 | 8 | 98.1 | 1458 | 0.141 | 88.9 | 1532 | 0.098 | 86.1 | 1847 | 6.897 |
| 0.2 | 10 | 99.1 | 1836 | 0.394 | 91.5 | 2023 | 0.201 | 86.7 | 2485 | 10.13 |
| 0.3 | 5 | 60.7 | 1228 | 0.069 | 61.8 | 1331 | 0.041 | 53.7 | 1677 | 3.235 |
| 0.3 | 8 | 67.7 | 2022 | 0.145 | 65.7 | 2250 | 0.098 | 57.2 | 2469 | 7.982 |
| 0.3 | 10 | 75.0 | 2558 | 0.375 | 71.2 | 2979 | 0.217 | 59.6 | 3114 | 15.32 |
| Error rate | Coverage | HapTree CPR | HapTree MEC | HapTree t(s) | SCGD CPR | SCGD MEC | SCGD t(s) | ILP CPR | ILP MEC | ILP t(s) |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.1 | 5 | 88.6 | 491 | 2.13 | 96.6 | 523 | 0.66 | 98.8 | 467 | 471 |
| 0.1 | 8 | 88.4 | 767 | 3.82 | 99.8 | 772 | 0.84 | 99.7 | 760 | 2004 |
| 0.1 | 10 | 87.3 | 963 | 4.03 | 99.9 | 965 | 0.97 | - | - | - |
| 0.2 | 5 | 76.2 | 988 | 9.36 | 76.1 | 979 | 0.72 | - | - | - |
| 0.2 | 8 | 80.8 | 1562 | 6.69 | 91.3 | 1531 | 1.18 | - | - | - |
| 0.2 | 10 | 82.7 | 1943 | 4.20 | 95.4 | 1902 | 1.50 | - | - | - |
| 0.3 | 5 | 64.6 | 1170 | 10.21 | 57.8 | 1136 | 0.73 | - | - | - |
| 0.3 | 8 | 65.7 | 2021 | 6.17 | 63.7 | 1998 | 1.14 | - | - | - |
| 0.3 | 10 | 65.1 | 2597 | 5.74 | 67.9 | 2574 | 1.44 | - | - | - |

The best results in each simulation setting are in boldface font


Simulated data: the biallelic polyploid case

The performance of AltHap in applications to haplotype assembly for polyploids was tested using simulations; in particular, we studied how AltHap's accuracy depends on coverage and sequencing error rate. The generated data sets consist of paired-end reads with long inserts that emulate the scenario where long connected haplotype blocks need to be assembled. We simulate sampling of the entire genome using paired-end reads and generate SNPs along the genome with probability 1 in 300; in other words, the distance between pairs of adjacent SNPs is a geometric random variable with parameter 1/300 (the SNP rate). To emulate a sequencing process capable of facilitating reconstruction of long haplotype blocks, we randomly generate paired-end reads of length 2×250 with an average insert length of 10,000 bp and a standard deviation of 10%; sequencing errors are inserted using realistic error profiles [45] and genotyping is performed using a Bayesian approach [44]. At such read and insert lengths, the generated haplotype blocks are nearly fully connected. Each experiment is repeated 10 times. AltHap is compared with H-PoP, BP, and SCGD. We also attempted to run HapTree; however, it could not finish the simulations for the considered block size within 48 hours. Table 4 compares the CPR, MEC score, and running times of AltHap with those of H-PoP, BP, and SCGD for biallelic triploid genomes with haplotype block length m=1000 for several combinations of sequencing coverage and data error rate. As can be seen there, in terms of CPR AltHap outperforms all other methods in all the scenarios, while in terms of MEC score it outperforms the other methods in the vast majority of the scenarios. Note that, unlike that of H-PoP, the complexity of AltHap scales gracefully with coverage (i.e., although H-PoP is very fast at low coverages, at the highest coverage AltHap becomes much faster than H-PoP).
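The SNP-placement step described above can be sketched as follows; the function name, seed, and genome length are illustrative, not from the paper's simulation code:

```python
import random

# Sketch of the SNP-placement simulation: inter-SNP distances are drawn from
# a geometric distribution with parameter 1/300 (the SNP rate), so each base
# is a SNP with probability 1 in 300.

def simulate_snp_positions(genome_length, snp_rate=1.0 / 300, seed=0):
    rng = random.Random(seed)
    positions, pos = [], 0
    while True:
        gap = 1                          # geometric gap: trials until first success
        while rng.random() >= snp_rate:
            gap += 1
        pos += gap
        if pos > genome_length:
            return positions
        positions.append(pos)

snps = simulate_snp_positions(300_000)
print(len(snps))   # on the order of 1000 SNPs at this rate
```

At a rate of 1/300 over a 300 kb region, roughly a thousand SNP positions are produced, with gaps averaging about 300 bp.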
As can be seen in Table 4, the overall CPR (MEC score) of all algorithms decreases (increases) as the probability of error increases. This is expected – and also reflected in our theoretical results – since haplotype assembly becomes more challenging at higher data error rates. Additionally, MEC scores increase with coverage, since higher coverage implies more sequencing reads; the total number of observed SNP positions therefore increases, which in turn results in higher MEC scores.
Table 4

Performance comparison of AltHap, H-PoP, BP, and SCGD on a simulated biallelic triploid data set with haplotype block length m=1000. HapTree could not finish the simulations in 48 hours

| Err | Cov | AltHap CPR | AltHap MEC | AltHap t(s) | H-PoP CPR | H-PoP MEC | H-PoP t(s) | BP CPR | BP MEC | BP t(s) | SCGD CPR | SCGD MEC | SCGD t(s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.002 | 10 | 98.2 | 322 | 30 | 71.5 | 3642 | 14 | 68.9 | 4210 | 132 | 69.7 | 11988 | 159 |
| 0.002 | 20 | 95.1 | 1986 | 59 | 73.1 | 7728 | 41 | 72.9 | 7762 | 416 | 51.8 | 35660 | 283 |
| 0.002 | 30 | 98.4 | 2412 | 109 | 70.8 | 12865 | 265 | 69.7 | 14751 | 1310 | 52.1 | 53248 | 422 |
| 0.01 | 10 | 91.7 | 1379 | 30 | 70.0 | 3786 | 14 | 68.1 | 4092 | 138 | 68.4 | 12108 | 161 |
| 0.01 | 20 | 97.7 | 1597 | 60 | 70.9 | 8375 | 42 | 68.9 | 8601 | 460 | 52.0 | 35606 | 295 |
| 0.01 | 30 | 98.9 | 3143 | 110 | 71.8 | 11769 | 266 | 68.1 | 15124 | 1301 | 52.7 | 53185 | 422 |
| 0.05 | 10 | 97.1 | 2802 | 31 | 70.1 | 3978 | 14 | 66.9 | 4227 | 135 | 67.5 | 13037 | 158 |
| 0.05 | 20 | 94.9 | 8222 | 59 | 70.3 | 9276 | 41 | 70.1 | 9484 | 460 | 51.7 | 35693 | 285 |
| 0.05 | 30 | 82.6 | 17284 | 110 | 71.3 | 13778 | 268 | 67.6 | 16876 | 1315 | 52.1 | 52499 | 431 |

The best results in each simulation setting are in boldface font

Table 6

Performance of AltHap on simulated biallelic triploid data set with haplotype block length m=1000, data error rate p=0.002, and different read lengths

| Read length | Cov | CPR | MEC | t(s) |
|---|---|---|---|---|
| 2 × 250 | 10 | 98.2 | 322 | 30.74 |
| 2 × 250 | 20 | 95.1 | 1986 | 59.65 |
| 2 × 250 | 30 | 98.4 | 2412 | 109.73 |
| 2 × 300 | 10 | 93.0 | 856 | 34.83 |
| 2 × 300 | 20 | 97.9 | 1410 | 66.50 |
| 2 × 300 | 30 | 97.7 | 3216 | 117.62 |
| 2 × 500 | 10 | 95.5 | 682 | 39.36 |
| 2 × 500 | 20 | 92.4 | 2605 | 66.37 |
| 2 × 500 | 30 | 93.0 | 5869 | 116.69 |
The results of tests conducted on simulated biallelic tetraploid genomes are summarized in Table 5, where we observe that AltHap outperforms the competing schemes in terms of both accuracy and running time. To investigate how the performance and complexity of AltHap vary with coverage and read length, in Table 6 we report its CPR, MEC, and runtimes obtained by simulating assembly of biallelic triploid haplotypes using paired-end reads of length 2×250, 2×300, and 2×500 and coverages 10, 20, and 30 (the block length is set to m=1000 and the data error rate is p=0.002). The results imply that AltHap's runtime scales approximately linearly with both read length and coverage over the considered range of these two parameters. Additionally, as we see in Table 6, the MEC score slightly increases with read length. The impact of read length is similar to that of sequencing coverage: longer sequencing reads provide more observed SNP positions and hence the MEC score might increase, as also predicted by our theoretical results.
Table 5

Performance comparison of AltHap, H-PoP, BP, and SCGD on a simulated biallelic tetraploid data set with haplotype block length m=1000. HapTree could not finish the simulations in 48 hours

| Err | Cov | AltHap CPR | AltHap MEC | AltHap t(s) | H-PoP CPR | H-PoP MEC | H-PoP t(s) | BP CPR | BP MEC | BP t(s) | SCGD CPR | SCGD MEC | SCGD t(s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.002 | 10 | 91.1 | 1113 | 43 | 70.7 | 3366 | 43 | 69.8 | 4568 | 290 | 67.1 | 14839 | 208 |
| 0.002 | 20 | 95.0 | 2113 | 87 | 73.4 | 7359 | 113 | 71.2 | 9434 | 540 | 51.7 | 41241 | 419 |
| 0.002 | 30 | 99.9 | 674 | 163 | 72.6 | 11693 | 598 | 71.5 | 12745 | 1496 | 51.8 | 61885 | 653 |
| 0.01 | 10 | 98.2 | 938 | 44 | 69.3 | 3511 | 46 | 66.4 | 6475 | 296 | 67.1 | 14819 | 213 |
| 0.01 | 20 | 99.3 | 1668 | 87 | 70.3 | 7882 | 114 | 66.9 | 10213 | 552 | 51.5 | 41712 | 414 |
| 0.01 | 30 | 95.3 | 6518 | 164 | 71.0 | 12392 | 597 | 68.4 | 13245 | 1485 | 51.5 | 61981 | 652 |
| 0.05 | 10 | 93.7 | 3905 | 44 | 67.7 | 4110 | 46 | 64.5 | 6869 | 306 | 65.0 | 15861 | 213 |
| 0.05 | 20 | 95.8 | 9645 | 89 | 69.1 | 9109 | 118 | 68.5 | 11477 | 623 | 51.9 | 41042 | 408 |
| 0.05 | 30 | 81.5 | 18690 | 165 | 70.0 | 14212 | 601 | 67.5 | 17681 | 1504 | 51.7 | 62261 | 643 |

The best results in each simulation setting are in boldface font


Simulated data: the polyallelic polyploid case

We further studied the performance of AltHap on triploid and tetraploid organisms having polyallelic sites; the results are summarized in Tables 7 and 8, respectively. Notice that none of the competing schemes is capable of handling polyallelic genomes. Evidently, AltHap is able to reconstruct the underlying haplotype sequences with competitive accuracy at a low computational cost.
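One plausible way to accommodate polyallelic sites in a tensor formulation of this kind is to encode each (read, SNP) observation as a one-hot vector over the allele alphabet, with an all-zero fiber marking an unobserved entry. This is an illustrative assumption for intuition, not necessarily the paper's exact construction:

```python
# Illustrative one-hot tensor encoding of polyallelic SNP observations:
# tensor[i][pos][allele] = 1 if read i observed that allele at that SNP,
# and the fiber tensor[i][pos] is all zeros if read i does not cover it.

def reads_to_tensor(reads, n_snps, n_alleles):
    """reads: list of dicts mapping SNP index -> allele in {0, ..., n_alleles-1}."""
    tensor = [[[0] * n_alleles for _ in range(n_snps)] for _ in reads]
    for i, read in enumerate(reads):
        for pos, allele in read.items():
            tensor[i][pos][allele] = 1
    return tensor

reads = [{0: 2, 1: 0}, {1: 1, 2: 3}]   # two reads over 3 SNPs, 4 alleles
T = reads_to_tensor(reads, n_snps=3, n_alleles=4)
print(T[0][0])  # [0, 0, 1, 0] -- read 0 observed allele 2 at SNP 0
print(T[1][0])  # [0, 0, 0, 0] -- read 1 does not cover SNP 0
```

With such an encoding, binary (biallelic) data is simply the special case n_alleles=2, which is one reason a single framework can cover diploid, biallelic, and polyallelic polyploid settings.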
Table 7

Performance of AltHap on simulated polyallelic triploid data set with haplotype block length m=1000. H-PoP, BP, HapTree, and SCGD cannot assemble polyallelic polyploid haplotypes

| Error rate | Cov | CPR | MEC | t(s) |
|---|---|---|---|---|
| 0.002 | 5 | 83.2 | 1377 | 43.05 |
| 0.002 | 10 | 93.2 | 897 | 115.13 |
| 0.002 | 15 | 93.5 | 1799 | 173.55 |
| 0.002 | 20 | 95.2 | 2346 | 232.07 |
| 0.01 | 5 | 74.7 | 2341 | 58.13 |
| 0.01 | 10 | 94.4 | 1269 | 115.41 |
| 0.01 | 15 | 90.9 | 3755 | 173.38 |
| 0.01 | 20 | 85.5 | 7272 | 235.86 |
| 0.05 | 5 | 79.9 | 3076 | 57.77 |
| 0.05 | 10 | 89.4 | 3925 | 116.33 |
| 0.05 | 15 | 93.1 | 6100 | 174.37 |
| 0.05 | 20 | 93.9 | 9120 | 236.73 |
Table 8

Performance of AltHap on simulated polyallelic tetraploid data set with haplotype block length m=1000. H-PoP, BP, HapTree, and SCGD cannot assemble polyallelic polyploid haplotypes

| Error rate | Cov | CPR | MEC | t(s) |
|---|---|---|---|---|
| 0.002 | 5 | 79.4 | 2380 | 109.00 |
| 0.002 | 10 | 86.5 | 2043 | 220.6 |
| 0.002 | 15 | 93.8 | 2148 | 328.49 |
| 0.002 | 20 | 96.3 | 2388 | 432.28 |
| 0.01 | 5 | 79.7 | 2398 | 113.08 |
| 0.01 | 10 | 84.1 | 2927 | 220.33 |
| 0.01 | 15 | 82.8 | 5787 | 327.10 |
| 0.01 | 20 | 99.2 | 2319 | 432.85 |
| 0.05 | 5 | 74.6 | 4721 | 113.38 |
| 0.05 | 10 | 89.0 | 5146 | 211.43 |
| 0.05 | 15 | 92.3 | 7555 | 327.20 |
| 0.05 | 20 | 92.0 | 13704 | 435.15 |
The results of these extensive simulations imply that, as expected, haplotype assembly becomes more challenging as the number of haplotype sequences (i.e., the ploidy) increases. Nevertheless, in all the conducted studies, AltHap consistently reconstructs haplotype sequences accurately and at a practically feasible computational cost. In addition, the results in Tables 4 and 5 demonstrate that the runtime of AltHap grows significantly more slowly with coverage than that of the competing schemes. In particular, for the high coverages characteristic of high-throughput sequencing technologies, AltHap is the most efficient among the considered algorithms.

CPR lower bound

Finally, we use the results obtained by running AltHap on simulated biallelic triploid data (i.e., the results summarized in Table 4) to examine the tightness of the theoretical bounds on the CPR stated in Theorem 2. In particular, the theoretical bounds on CPR are compared to the CPRs computed empirically for various combinations of coverage and data error rate (averaged over 10 independent problem instances). In Fig. 3, the theoretical bound and the experimental CPR are shown as functions of the data error rate for coverage 15. We observe that the bound is reasonably close to the experimental results over the considered range of data error rates. In Fig. 4, the theoretical bound and the experimental CPR are plotted against sequencing coverage for data error rate p=0.002. This figure, too, shows that the theoretical CPR bound is relatively close to the experimental results. Notice that, as Fig. 3 shows, both the CPR and the lower bound derived in Theorem 2 decrease as the sequencing error rate increases. On the other hand, Fig. 4 depicts that higher coverage improves AltHap's CPR, which again is reflected in our theoretical results.
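For completeness, the CPR compared against the bound here is the fraction of haplotype entries recovered correctly, maximized over permutations of the reconstructed haplotypes (their ordering is not identifiable). A brute-force sketch, feasible for small ploidy; the exact normalization in the paper may differ:

```python
from itertools import permutations

# Correct phasing rate (CPR) sketch: fraction of haplotype entries recovered
# correctly, maximized over all permutations of the reconstructed haplotypes.

def cpr(true_haps, est_haps):
    k, m = len(true_haps), len(true_haps[0])
    best = 0
    for perm in permutations(range(k)):
        correct = sum(
            true_haps[i][j] == est_haps[perm[i]][j]
            for i in range(k)
            for j in range(m)
        )
        best = max(best, correct)
    return best / (k * m)

true_haps = [[0, 1, 0, 1], [1, 0, 1, 0], [1, 1, 0, 0]]   # a triploid truth
est_haps  = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 1]]   # two swapped, one error
print(round(cpr(true_haps, est_haps), 3))  # 0.917  (11 of 12 entries correct)
```

The permutation search costs k! comparisons, which is negligible for triploids and tetraploids; for large ploidy one would instead solve a bipartite matching.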
Fig. 3

Comparison of the theoretical and experimental results. Comparison of the theoretical bound on CPR with the experimental results when Cseq=15 obtained by applying AltHap to the problem of reconstructing biallelic triploid haplotypes (synthetic data)

Fig. 4

Comparison of the theoretical and experimental results. Comparison of the theoretical bound on CPR with the experimental results when p=0.002 obtained by applying AltHap to the problem of reconstructing biallelic triploid haplotypes (synthetic data)


Conclusion

In this paper, we developed a novel haplotype assembly framework for both diploid and polyploid organisms that relies on sparse tensor decomposition. The proposed algorithm, referred to as AltHap, exploits structural properties of the problem to efficiently find tensor factors and thus assemble haplotypes in an iterative fashion, alternating between two computationally tractable optimization tasks. If the iterations start from an appropriately selected initial point, AltHap converges to a stationary point that is, with high probability, in close proximity to the solution that is optimal in the MEC sense. In addition, we analyzed the performance and convergence properties of AltHap and established bounds on its achievable MEC score and correct phasing rate. Unlike the majority of existing methods for haplotype assembly of polyploids, AltHap is capable of reconstructing haplotypes with polyallelic sites, making it useful in a number of applications involving plant genomes. Our extensive tests on real and simulated data demonstrate that AltHap compares favorably to competing methods in applications to haplotype assembly of diploids, and significantly outperforms existing techniques when applied to haplotype assembly of polyploids. As part of future work, it is of interest to extend the sparse tensor decomposition framework to viral quasispecies reconstruction and to the recovery of bacterial haplotypes from metagenomic data.

Additional file 1: Supplement for "Sparse Tensor Decomposition for Haplotype Assembly of Diploids and Polyploids". Provides details on the derivation of the proposed step size and the derivations of the MEC and CPR bounds. (PDF 210 kb)
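The alternating structure described above can be conveyed by a deliberately simplified toy scheme; it is NOT the paper's tensor decomposition, and the initialization and data structures are illustrative. Step 1 plays the role of the read-membership factor U (assign each read to its closest haplotype), and step 2 re-estimates the haplotypes by per-SNP majority vote within each group:

```python
# Toy alternating scheme for the binary diploid case (illustrative only,
# not AltHap itself). Reads are dicts mapping SNP index -> allele.

def assemble_diploid(reads, m, iters=10):
    # Initialize from the first read (missing positions default to 0)
    # and its complement.
    h0 = [reads[0].get(p, 0) for p in range(m)]
    haps = [h0, [1 - a for a in h0]]
    for _ in range(iters):
        # Step 1: assign each read to the haplotype it matches best
        # (the role of the membership factor U).
        groups = [[], []]
        for read in reads:
            dists = [sum(hap[p] != a for p, a in read.items()) for hap in haps]
            groups[dists.index(min(dists))].append(read)
        # Step 2: per-SNP majority vote within each group
        # (the role of the haplotype factor).
        for i, group in enumerate(groups):
            for pos in range(m):
                votes = [r[pos] for r in group if pos in r]
                if votes:
                    haps[i][pos] = 1 if 2 * sum(votes) > len(votes) else 0
    return haps

reads = [
    {0: 0, 1: 0, 2: 1, 3: 1},  # full-length read from haplotype A
    {0: 0, 1: 0}, {2: 1, 3: 1},
    {0: 1, 1: 1, 2: 0, 3: 0},  # full-length read from haplotype B
    {0: 1, 1: 1}, {2: 0, 3: 0},
    {0: 0, 1: 1, 2: 1, 3: 1},  # read from A with one sequencing error
]
print(assemble_diploid(reads, m=4))  # [[0, 0, 1, 1], [1, 1, 0, 0]]
```

Note how the erroneous read is absorbed by the majority vote; each alternation step is cheap, which is the appeal of this family of methods, while the guarantees discussed above concern the quality of the stationary point reached.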