
A chaotic viewpoint-based approach to solve haplotype assembly using hypergraph model.

Mohammad Hossein Olyaee1, Alireza Khanteymoori2, Khosrow Khalifeh3,4.   

Abstract

The decreasing cost of high-throughput DNA sequencing technologies provides a huge amount of data that enables researchers to determine haplotypes for diploid and polyploid organisms. Although various methods have been developed to reconstruct haplotypes in diploid form, their accuracy is still a challenge, and most current methods cannot be applied to the polyploid form. In this paper, an iterative method is proposed that employs a hypergraph model to reconstruct haplotypes. By utilizing a chaotic viewpoint, the proposed method can enhance the obtained haplotypes. For this purpose, a haplotype set is randomly generated as an initial estimate, and its consistency with the input fragments is described by constructing a weighted hypergraph. Partitioning the hypergraph specifies those positions in the haplotype set that need to be corrected. This procedure is repeated until no further improvement can be achieved. Each element of the finalized haplotype set is then mapped to a line by chaos game representation, and a coordinate series is defined based on the positions of the mapped points. Positions with low quality can then be reassessed by applying a local projection. Experimental results on both simulated and real datasets demonstrate that this method outperforms most other approaches and is a promising approach to haplotype assembly.


Year:  2020        PMID: 33120403      PMCID: PMC7595403          DOI: 10.1371/journal.pone.0241291

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Improvements in high-throughput DNA sequencing technologies have dramatically decreased the cost of genome sequencing. This achievement helps researchers to understand the variation in individuals' genomic data and paves the way toward individualized strategies for diagnostic or therapeutic decision-making [1]. The most frequent type of genetic variation is the single nucleotide polymorphism (SNP). Each SNP is a mutation at a distinctive position on the DNA sequences of a homologous pair of chromosomes in an individual, and among the corresponding DNA sequences of the whole population. Similarly, the term “allele” refers to the different forms of a gene at one locus. Accordingly, four different alleles are possible for a given SNP site. Nonetheless, most SNPs are bi-allelic, containing only two kinds of alleles, which can simply be denoted by ‘0’ and ‘1’ [2]. Each SNP contains valuable information about genomic alterations. Experimental studies have revealed that SNPs are clustered across the human genome rather than randomly distributed [3]. In line with this observation, linkage disequilibrium (LD) demonstrates that there are correlations and spatial dependencies among neighboring SNPs. A sequence of SNPs on the same strand of DNA is known as a haplotype. In other words, a haplotype can be considered as a combination of marker alleles that are positioned closely together on the same strand of DNA and tend to be inherited together from parents to offspring [4]. It has been shown that some diseases such as sickle-cell anemia [5], cystic fibrosis [6], and hemochromatosis [7] are more common in specific ethnic populations due to unique genetic mutations in their genomes, but are rarely found in others. There are also reports indicating that different populations may respond differently to drugs [8-10].
These findings demonstrate that haplotypes in human genomic data can be a useful and informative tool in mapping genes that are involved in representative diseases, as well as in personalized medicine [11]. Haplotypes can also be used to investigate patterns of inheritance over evolution, human migration, and the genetic structure of populations [12-14]. Genetic association analysis for gene mapping can also be improved by haplotype analysis [15]. Furthermore, it is possible to detect errors and missing data in experimental DNA sequencing using haplotype information [16]. It is worth mentioning that the experimental analysis of haplotypes is labor-intensive and expensive; moreover, it can be used only for constructing local haplotypes. In practice, human haplotypes are provided as sequencing reads or fragments. Obtaining haplotype information from these numerous fragments is a vital task due to its profound impact on different aspects of medicine and molecular biology [15, 17–19]. However, computational detection of genetic variations has critical limitations compared with molecular approaches. According to the type of input data, the existing methods of haplotype reconstruction are divided into two main categories: single individual haplotyping (SIH) and haplotype inference. SIH methods receive several fragments that have been sequenced from a given chromosome. It should be noted that most fragments contain gaps and are usually disrupted by noise. To cope with these problems, the input fragments are clustered based on their similarities, and the haplotypes are then reconstructed from the center of each cluster [4]. Haplotype inference methods receive genotype information of several individuals as input and infer their related haplotype sequences [20]. It is worth noting that each genotype represents a combination of the haplotypes on the homologous chromosomes.
With the increasing size of sequencing data, a growing number of researchers have tried to solve the haplotype assembly problem. Several computational models, including minimum fragment removal (MFR), minimum error correction (MEC), minimum SNP removal (MSR), and longest haplotype reconstruction (LHR), have been developed to cope with the SIH problem. The MEC is one of the most popular and successful models among those mentioned above [4, 21–28]. This model attempts to cluster the input fragments such that all fragments belonging to a given cluster are compatible; otherwise, they are made compatible by applying the minimum number of alterations. The current approaches can be divided into exact and heuristic methods. Since finding the optimal minimum error correction is NP-hard, the exact approaches have exponential complexity [21]. Among exact solutions, WhatsHap [29] is regarded as a pioneering method; it is based on dynamic programming and utilizes a weighted variant of the MEC. Experimental results demonstrate that it can process long reads at coverages up to 20×. In [30], the authors proposed a parallel version of WhatsHap which is able to process higher coverages, up to 25×. AROHap [24] is a recently published evolutionary method that exploits the asexual reproduction optimization algorithm to solve the SIH problem; its fitness function is designed based on the MEC model. In [26], a heuristic method named FastHap was developed, which builds a weighted fuzzy conflict graph based on the MEC model and uses the constructed graph to cluster the input fragments. A fuzzy C-means (FCM) approach was applied in [25] to enhance the performance of fragment clustering; however, this method performs poorly on noisy fragments. Some popular methods, including MCMC [31], HapCUT [27], and HapCUT2 [32], construct the graph differently.
These methods start with a set of arbitrary sequences as initial haplotypes and improve it step by step with respect to the input fragments. They build a similar weighted graph in their distinctive models; however, instead of fragments, SNPs are used as the vertices of the graph. Each pair of SNPs is connected if they are covered by at least one input fragment, and the weight of each edge reflects the amount of consistency with the corresponding positions in the current haplotypes. Although this model efficiently determines the consistency of the current haplotype with the input fragments, the existing gaps and noise lead to a loss of accuracy in determining the edge weights. In [33], it was shown that a hypergraph can describe the distances among input fragments more precisely. Although various methods have been developed to solve the SIH problem, most of them can only be applied to diploid organisms and fail to handle polyploid organisms. It should be noted that haplotype reconstruction for a polyploid is more complicated than for a diploid. Suppose that P is the number of ploids and m is the length of the haplotype sequences; in this case, there are at least 2(P − 1) different solutions for phasing the haplotypes [23]. Recently, several studies, such as [23, 34–36], have been conducted on polyploid organisms. AltHap [23] and SCGD [36] are two recently developed methods based on matrix factorization for solving the SIH problem. H-PoP [34] is a heuristic method that divides the input fragments into P clusters, such that the members of each cluster have the minimum distance from each other and are maximally distant from the fragments of other clusters. Belief propagation (BP) [35] is another method addressing the SIH problem by mapping the MEC model to a decoding mechanism involving message transmission over a noisy channel. In this context, it has been reported that haplotype blocks of sufficient length can exhibit chaotic behavior.
This feature has recently been used to improve the reconstruction rate in the single individual haplotyping problem [37]. Considering the chaotic nature of haplotype sequences, in this paper, an iterative algorithm is proposed to reconstruct haplotypes using the hypergraph model. The method includes two main steps. First, an iterative mechanism is applied to the SNP matrix to construct the haplotype set, and the consistency between SNPs is modeled with a hypergraph; the parts of the haplotypes to be corrected are then determined by partitioning the hypergraph. Second, the obtained haplotypes are transformed into a line using the chaos game representation, where a coordinate series is defined based on the positions of the mapped points, and a local projection (LP) method is applied to refine the remaining ambiguous values and increase the quality of the reconstructed haplotypes. The significant contributions of the proposed method are as follows. The similarity between the input fragments can be described more accurately by utilizing the hypergraph model, which helps to overcome challenges originating from the huge number of gaps and sequencing errors. A quality score for each position of the reconstructed haplotypes can be calculated to predict the remaining erroneous values. The chaotic-nature hypothesis is used to refine the reconstructed haplotypes; to this end, we concentrate only on the neighboring dependencies between SNPs. The proposed method can be applied effectively to both diploid and polyploid organisms. The rest of the paper is organized as follows. Section 2 provides a brief review of the problem statement. In Section 3, the proposed method is described in detail. Experimental results are presented in Section 4. Finally, Section 5 concludes the paper.

Preliminaries and assumptions

The challenge of the SIH problem in polyploid organisms is the reconstruction of the whole set H = {h1, h2, …, hP} containing P haplotype sequences, based on the available aligned input fragments. Similar to the diploid case, the input fragments can be represented in a standard form. Let X be the SNP matrix in which each row corresponds to an input fragment and each column indicates a specified SNP. In bi-allelic haplotypes, it is assumed that x_fs ∈ {0, 1, '−'} indicates the allele obtained for fragment f at SNP s. Also, each haplotype h_i (i = 1, 2, …, P) is a sequence over {0, 1}. In the diploid case, there are some positions, called homozygote sites, at which h1 equals h2; the sites with different values are called heterozygote positions. Homozygote sites are usually removed from the input matrix, as they do not provide useful information for the haplotype assembly problem. It is worth noting that the '−' sign indicates information missing during the sequencing process. For two fragments originating from different haplotypes, some dissimilarities between them are expected. Several relations have been developed to describe the differences between two fragments. The Hamming distance (HD) is the most practical one, which for two input fragments f_i and f_j can be written as

HD(f_i, f_j) = Σ_{k=1}^{N} d(x_ik, x_jk),

where d(a, b) equals 1 if a ≠ '−', b ≠ '−', and a ≠ b, and equals 0 otherwise. In the case where the SNP matrix is error-free, two fragments sequenced from the same haplotype are compatible, as their distance equals zero. On the other hand, when dealing with a noisy SNP matrix, for two arbitrary fragments f_i and f_j it is not possible to simply interpret their dissimilarity, as it can originate either from noise or from the fragments having been sequenced from different haplotypes. In the error-free case, the fragments can be clustered into P clusters such that the members of each cluster are compatible with each other.
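As a minimal illustration (in Python; the paper's own implementation is in MATLAB), the gap-aware Hamming distance described above can be sketched as:

```python
def hamming_distance(f1, f2):
    """Gap-aware Hamming distance between two aligned fragments.

    Entries are '0', '1', or '-' (missing); positions where either
    fragment has a gap contribute nothing to the distance.
    """
    return sum(1 for a, b in zip(f1, f2)
               if a != '-' and b != '-' and a != b)

# Two fragments sequenced from the same haplotype are compatible (HD = 0).
assert hamming_distance("01-10", "01110") == 0
assert hamming_distance("0110-", "01011") == 2
```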
Fig 1 represents an example of the SIH problem at a given ploidy level. The rows of matrix X indicate the sequenced fragments, and the rows of matrix H contain the obtained haplotypes.
Fig 1

An example of SNP matrices X and H relevant to the resulting haplotypes.

The red measures in X indicate sequencing errors. Each row of H demonstrates a specified haplotype sequence.

In the diploid case, several models have been proposed to solve the SIH problem based on the input fragments; extending these models to the polyploid form is a difficult task [38]. Recently, several MEC-based approaches have been developed to solve this problem. In this regard, the input fragments are organized into P clusters, and the haplotypes are taken as the centers of the constructed clusters; in fact, each cluster contains the fragments that have the same provenance. The optimized result of the clustering algorithm is obtained by minimizing

MEC(H) = Σ_{p=1}^{P} Σ_{f ∈ C_p} HD(f, h_p).

In the optimal case, if the SNP matrix is error-free, the MEC measure equals zero, and each fragment f belonging to C_p is compatible with h_p. However, when dealing with a noisy SNP matrix, some fragments are expected to conflict with their corresponding haplotypes. It should be noted that finding the optimal MEC measure is an NP-hard problem. On the other hand, the huge number of gaps in the input fragments negatively affects the distance measurement between pairs of input fragments. Therefore, the current work aims to address these challenges through a better description of the similarity between the input fragments, using a heuristic method with a favorable runtime based on the hypergraph model.
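The MEC objective just described can be sketched in a few lines of Python (the cluster memberships and haplotypes below are toy values for illustration):

```python
def hamming_distance(f, h):
    # Gap entries ('-') in the fragment are ignored.
    return sum(1 for a, b in zip(f, h) if a != '-' and a != b)

def mec_score(clusters, haplotypes):
    """Sum, over all clusters, of each member fragment's gap-aware
    Hamming distance to that cluster's haplotype (its center)."""
    return sum(hamming_distance(f, h)
               for cluster, h in zip(clusters, haplotypes)
               for f in cluster)

# Error-free case: every fragment matches its haplotype, so MEC = 0.
clusters = [["01-0", "0110"], ["10-1"]]
haplotypes = ["0110", "1001"]
assert mec_score(clusters, haplotypes) == 0
```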

The proposed method

This section presents a Haplotype Reconstruction approach based on the Chaotic viewpoint and Hypergraph model (HRCH). The proposed method is briefly described below:
(i) a set of haplotype sequences is randomly generated;
(ii) the input fragments are assigned to the haplotype sequences based on their similarities;
(iii) a weighted SNP hypergraph is built, using the similarity measure between the haplotype sequences and the assigned input fragments;
(iv) the constructed hypergraph is used to find a set called the CutSet, containing the SNPs that should be modified.
This procedure is repeated for a predefined number of iterations to minimize the MEC score. Next, by exploiting the chaotic properties of haplotype sequences, the results are further improved. A high-level overview of the method is demonstrated in Fig 2.
Fig 2

The workflow of the proposed method.

Data preprocessing

As described in the preliminaries section, X is a matrix containing M reads of length N. It is essential to note that homozygote columns can be ignored in the diploid case. Removing the homozygote positions is performed as described in [33]: the most frequent value in each column is found, and if its frequency is higher than 0.8, the column is identified as a homozygote site. The output of this step is thus a matrix with M fragments and N′ columns, where N′ ≤ N. Finally, H0 = {h1, h2, …, hP} is randomly generated as an initial set of haplotypes.
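This preprocessing step can be sketched as follows (a Python sketch; computing the frequency over non-gap entries only is an assumption, as the paper does not specify how gaps are counted):

```python
from collections import Counter

def remove_homozygote_columns(X, threshold=0.8):
    """Drop columns whose most frequent value exceeds `threshold`.

    Assumption (not stated in the paper): frequencies are computed
    over non-gap entries only.
    """
    kept = []
    for j in range(len(X[0])):
        column = [row[j] for row in X if row[j] != '-']
        if not column:
            continue  # a fully gapped column carries no information
        _, count = Counter(column).most_common(1)[0]
        if count / len(column) <= threshold:
            kept.append(j)  # heterozygote column: keep it
    return [''.join(row[j] for j in kept) for row in X], kept

X = ["010-", "0110", "0-11"]
reduced, kept = remove_homozygote_columns(X)
assert kept == [2, 3]  # columns 0 and 1 are homozygote sites
assert reduced == ["0-", "10", "11"]
```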

Pair-SNP consistency

Let ⋈ be a binary operator that concatenates two variables; for example, if a and b are two variables with values ‘0’ and ‘1’, respectively, a ⋈ b equals ‘01’. Given two variables c, d ∈ {′00′, ′01′, ′10′, ′11′}, the operator ⨁ is defined as follows. Definition 1 (Pair-SNP consistency). Given the matrix X of input fragments, the pair-SNP consistency ω_ij is defined between s_i and s_j, two arbitrary SNPs covered by fragments f_k (k = 1, 2, …, M), as follows, where T is the number of fragments covering both SNPs s_i and s_j. By applying this measure, ω_ij is normalized such that its value ranges between -1 and +1 (i.e., −1 ≤ ω_ij ≤ +1). Moreover, cov(s_i, s_j) denotes the set of fragments covering both SNPs s_i and s_j. Finally, as mentioned above, c(f_k) identifies the origin of f_k. The pair-SNP consistency metric is used to evaluate the compatibility of each pair of SNPs with the current haplotype set H. The intuition behind Eq 5 is as follows. For given SNPs s_i and s_j, ω_ij describes the amount of similarity between the pair of SNPs and their corresponding values in H. This measure equals -1 if the current haplotype is entirely identical to the covering fragments in columns i and j; it takes the value 1 if they are completely different in those columns; and ω_ij equals 0 when the SNPs are not covered by any fragment. It is noticeable that for high values of ω_ij, the SNPs s_i and s_j are expected to be placed in different clusters upon partitioning. The complexity of this step is O(MN²), where M and N are the numbers of fragments and SNPs, respectively.
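The intuition above can be sketched in Python. This is an illustrative reconstruction only: the exact normalization of Eq 5 is not reproduced here, and the sign convention (-1 for full agreement, +1 for full disagreement, 0 for no coverage) is taken from the surrounding text:

```python
def pair_snp_consistency(X, i, j, H, origin):
    """Illustrative sketch of the pair-SNP consistency omega_ij.

    For every fragment covering both SNP columns i and j, the fragment's
    allele pair (the ⋈ concatenation) is compared with the corresponding
    pair in the haplotype the fragment is assigned to (origin[k] = c(f_k)).
    Agreement contributes -1, disagreement +1; the mean lies in [-1, +1],
    and 0 is returned when no fragment covers both SNPs.
    """
    terms = []
    for k, frag in enumerate(X):
        if frag[i] == '-' or frag[j] == '-':
            continue  # fragment does not cover both SNPs
        h = H[origin[k]]
        agree = (frag[i] + frag[j]) == (h[i] + h[j])
        terms.append(-1 if agree else +1)
    return sum(terms) / len(terms) if terms else 0.0

# A pair of SNPs fully consistent with H -> omega = -1.
X, H, origin = ["010", "01-"], ["010"], [0, 0]
assert pair_snp_consistency(X, 0, 1, H, origin) == -1.0
```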

Hypergraph construction

To construct the weighted hypergraph based on the obtained ω matrix, for each SNP s_i its K nearest neighbors are found using the following Eq., where l is the index of a K-nearest SNP of s_i and KNN(s_i) is the set containing the indices of the K nearest neighbors of s_i. More specifically, each set represents the K SNPs that have the most consistent relationship with s_i. In this case, each hyperedge can connect more than two vertices. Applying K nearest neighbors is a common approach to determining the hyperedges. However, it is necessary to specify the hyperedges more precisely due to the existing noise and the sparsity of the SNP matrix. Therefore, the connectivity of vertices is defined by finding frequent itemsets; in other words, the hyperedges are determined from the shared K nearest neighbors, which can be defined as follows. In Eq 7, E contains several SNPs, and SKNN(E) provides the set of SNPs that are shared among the nearest-neighbor lists of all members of E. If the number of shared KNNs is more than a predefined threshold, called the minimum support count (sc), then E is defined as a frequent itemset. In the proposed model, each frequent itemset defines a specified hyperedge e, and the number of shared KNNs is assigned as its weight w_e. Among the existing methods, frequent pattern (FP)-growth [39] has gained much attention due to its ability to find frequent itemsets. FP-growth is a tree-based method that uses a depth-first strategy to mine frequent itemsets: the database is modeled as a prefix tree, and a depth-first search is recursively applied to generate all maximal frequent itemsets. The runtime of this algorithm increases linearly with the number of SNPs [40].
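The shared-KNN construction can be sketched as follows (a Python sketch; brute-force enumeration of candidate SNP sets stands in for the FP-growth mining used in the paper, and is only practical for small set sizes):

```python
from itertools import combinations

def shared_knn_hyperedges(knn, size=2, sc=2):
    """Candidate SNP sets of the given size become hyperedges when the
    SNPs shared by all members' K-nearest-neighbour lists number at
    least `sc` (the minimum support count); that count is the weight."""
    edges = {}
    for E in combinations(sorted(knn), size):
        shared = set.intersection(*(set(knn[s]) for s in E))
        if len(shared) >= sc:
            edges[E] = len(shared)  # hyperedge weight w_e
    return edges

knn = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
edges = shared_knn_hyperedges(knn, size=2, sc=2)
assert edges[(0, 1)] == 2  # SNPs 2 and 3 are shared neighbours of 0 and 1
```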

Improving H by partitioning the hypergraph

As can be seen in Fig 3, in the constructed hypergraph the SNPs correspond to vertices, and each hyperedge corresponds to an obtained frequent itemset; in other words, it contains a set of SNPs that are highly consistent with their corresponding positions in H. It is noteworthy that hyperedges with higher weights indicate higher similarity between the constituent SNPs and their relevant positions in H. The vertices can be divided into two clusters by partitioning the hypergraph. The objective of the partitioning is to minimize the sum of the weights of the hyperedges located between the clusters. To this end, hmetis, a popular algorithm, was used. The algorithm includes three steps: (i) a number of small hypergraphs are built in several layers; (ii) the hypergraph at the lowest level is partitioned; (iii) the resulting partitions are extended to the upper levels through successive mapping.
Fig 3

An example of constructing and partitioning the hypergraph.

S_i corresponds to the i-th SNP, and the curves demonstrate the hyperedges. C1 and C2 denote the clusters obtained by hypergraph partitioning.

The computational complexity of the algorithm is O(|E|), where E is the set of hyperedges. Suppose that C1 and C2 are the two clusters obtained by the hmetis algorithm. As can be seen in Fig 4, the partial haplotypes originating from the resulting clusters are obtained.
Fig 4

Partitioning H of a three ploid genome.

The yellow parts and the green parts indicate the two partial haplotype sets obtained from the resulting clusters; together they compose H.

In the diploid case, as can be seen in Fig 5, H is improved as in the HapCUT method: first, C1 or C2 is selected as the CutSet; next, the updated haplotype set is obtained from H by flipping the values of the SNPs in the CutSet.
Fig 5

An example of updating the current haplotype based on the partitioning of the hypergraph.
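The diploid update illustrated in Fig 5 amounts to flipping both haplotypes at every SNP column in the CutSet; a minimal Python sketch:

```python
def flip_cutset(H, cutset):
    """Flip the alleles of every haplotype at each SNP column in the
    CutSet (the diploid update of Fig 5)."""
    flip = {'0': '1', '1': '0'}
    return [''.join(flip[a] if j in cutset else a
                    for j, a in enumerate(h))
            for h in H]

H = ["0110", "1001"]
assert flip_cutset(H, {1, 2}) == ["0000", "1111"]
```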

For the polyploid case, improving H is accomplished based on the algorithm shown in Fig 6. In the first step, one of the two partial haplotype sets is randomly assigned to the CutSet. This set involves the parts of H that should be corrected.
Fig 6

The algorithm of improving H.

As shown in Fig 7, all combinations of the CutSet are evaluated to find a new set of haplotypes with a lower MEC score.
Fig 7

Two combinations of six possible combinations of the CutSet in three ploid form.

Moreover, in order to evaluate more allelic combinations of SNPs, for a predefined percentage of the SNPs belonging to the CutSet, two arbitrary SNPs are nominated at a time; one of their various genotype combinations is then randomly selected and substituted at the corresponding positions in H. This step is repeated for the predefined percentage of SNPs. Since H is randomly generated, its MEC score is poor in the early iterations, so finding the hyperedges with lower weights is not difficult. However, as the quality of H improves and the consistencies between SNPs increase, the MEC measure decreases more slowly.
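One simplified way to realize the combination search of Figs 6 and 7 is to try every permutation of the haplotype rows restricted to the CutSet columns and keep the lowest-MEC arrangement. The sketch below is illustrative only: the `mec` scorer is supplied by the caller, and only row permutations are tried rather than all allelic combinations:

```python
from itertools import permutations

def improve_polyploid(H, cut_cols, mec):
    """Try every permutation of the haplotype rows restricted to the
    CutSet columns; keep the arrangement with the lowest MEC score.
    `mec` is a caller-supplied scoring function."""
    best_H, best_score = H, mec(H)
    n = len(H[0])
    for perm in permutations(range(len(H))):
        cand = [''.join(H[perm[r]][j] if j in cut_cols else H[r][j]
                        for j in range(n))
                for r in range(len(H))]
        score = mec(cand)
        if score < best_score:
            best_H, best_score = cand, score
    return best_H, best_score

# Toy scorer: distance of the two rows to a pair of target haplotypes.
def toy_mec(H):
    return (sum(a != b for a, b in zip(H[0], "000"))
            + sum(a != b for a, b in zip(H[1], "111")))

H = ["011", "100"]  # rows swapped at columns 1 and 2
best, score = improve_polyploid(H, {1, 2}, toy_mec)
assert best == ["000", "111"] and score == 0
```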

Refinement of H

Computing confidence score

Upon completing the iterative procedure of the proposed method, the haplotype set H = {h1, h2, …, hP} is obtained. It is possible to define a confidence measure for each locus of the reconstructed haplotypes. For the diploid case, we used the emission probability P(X | h, R) defined in [41], which is used to identify errors in the reconstructed haplotype. This measure is calculated for each position j as in Eq 8, where h is a haplotype sequence belonging to H, h_j ∈ {0, 1} denotes the allele at position j, and PO(i) contains the columns covered by f_i, the i-th fragment. Furthermore, R is the set of fragments, such as f_i, covering position j. Finally, p(x | h, f_i) is calculated as in Eq 9, where Q is an M × N matrix whose element corresponding to x ∈ X gives the probability of a sequencing error, and h_j(f_i), the j-th locus of the reconstructed haplotype, is computed according to Eq 10, where C1 and C2 are the obtained clusters of similar fragments that indicate the provenance of f_i. Eq (10) provides more information at each locus, as it is based on the fact that the two diploid haplotypes are complementary; therefore, the confidence score can be calculated more precisely. On the other hand, there is no such relationship between the haplotype sequences in the polyploid form; hence, Eq (8) is not applicable. In this case, we used genotype information: suppose that g_i is the genotype information at position i and H_i is the reconstructed value at this position. The sorted values of g_i and H_i are compared, and position i is selected for refinement if the two sets are not equivalent.
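The polyploid genotype check just described can be sketched as follows (a Python sketch; encoding each position's genotype as a string of P alleles is an assumption made for illustration):

```python
def positions_to_refine(H, genotype):
    """Polyploid refinement check: flag position i when the sorted
    multiset of alleles in column i of H disagrees with the sorted
    genotype at i. `genotype[i]` is a string of P alleles (an assumed
    encoding for illustration)."""
    flagged = []
    for i in range(len(H[0])):
        column = sorted(h[i] for h in H)
        if column != sorted(genotype[i]):
            flagged.append(i)
    return flagged

H = ["01", "10", "11"]     # three ploids, two SNP positions
genotype = ["011", "001"]  # expected allele multisets per position
assert positions_to_refine(H, genotype) == [1]
```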

Applying chaos game representation

Chaos game representation (CGR) is a graphical tool that maps an arbitrary sequence to a 2-dimensional form. This map is reversible, and all the information in the sequence is preserved; moreover, it depicts the hidden dependencies among the letters. CGR was initially introduced by Barnsley [42] to evaluate random sequences. Afterwards, Jeffrey [43] developed the method for visualizing genomic sequences. To this aim, a regular polygon is considered according to the number of distinct letters in the input alphabet. For example, since DNA sequences are constructed from the four nucleotides ‘a’, ‘t’, ‘c’, and ‘g’, a square with unit side length is considered and each distinct letter is assigned to one vertex. Each letter of the given sequence is iteratively mapped to a point inside the square. The process starts by locating the first point half-way between the center of the square and the corner corresponding to the first letter; it continues such that the i-th point is placed half-way between the previous point and the vertex corresponding to the i-th letter. Using this procedure, many attempts have been made to extract novel features from biological sequences by exploiting CGR [44-48]. Recently, CGR was used to reveal the chaotic properties of haplotypes [37]. Since haplotypes are represented in binary form, the resulting map is a dotted line whose two vertices are labeled 0 and 1. In this step, CGR is utilized to improve the reconstructed haplotypes as follows. For loci whose quality scores are less than θ, a predefined threshold, the values may be disrupted by noise or missing information; they are therefore refined based on the existing dependencies between SNPs. For this purpose, each h ∈ H is mapped to a line by applying CGR. The positions assigned to the points constitute a coordinate series, namely cs.
The route of chaos helps to refine the ambiguous values at the low-quality positions. To this end, the points in cs that correspond to alleles with low confidence are marked by ‘-’. The values of the ambiguous positions can then be determined by applying a local projection (LP) method. After filling in the removed values with the LP method, the refined coordinate series is transformed back into the final haplotype according to Eq 11. It should be noted that extracting cs and applying the LP method are accomplished in linear time.
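The one-dimensional chaos game mapping of a binary haplotype described above can be sketched as:

```python
def cgr_series(h):
    """Map a binary haplotype onto the unit interval by the chaos game:
    start at the centre (0.5) and move half-way towards the vertex
    (0 or 1) named by each allele, recording the coordinate series cs."""
    x, cs = 0.5, []
    for allele in h:
        vertex = 1.0 if allele == '1' else 0.0
        x = (x + vertex) / 2.0  # half-way towards the chosen vertex
        cs.append(x)
    return cs

assert cgr_series("01") == [0.25, 0.625]
```

Because each step halves the distance to a vertex, the map is reversible: the sequence can be recovered from cs, which is what allows the refined series to be transformed back into a haplotype.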

Results

In the following section, the performance of the proposed method is compared with several state-of-the-art approaches in diploid and polyploid forms. The method was implemented in MATLAB, and all results were obtained on a Windows 10 PC with a 3.6 GHz CPU and 16 GB RAM. The parameters of the algorithm were set to t = 100, k = 5, sc = 2, and nc = 20%. The reconstruction rate (RR) [4], a conventional metric, was used to evaluate the quality of the obtained haplotypes. In the diploid case, RR is defined as

RR = 1 − min(HD(h1, ĥ1) + HD(h2, ĥ2), HD(h1, ĥ2) + HD(h2, ĥ1)) / (2m),

where HD denotes the Hamming distance between h_i and ĥ_j (i, j = 1, 2), the target and reconstructed haplotypes, respectively, and m is the haplotype length. For the polyploid case, this formula is generalized in Eq 13 as

RR = 1 − (1 / (P·m)) · min_σ Σ_{i=1}^{P} HD(h_i, ĥ_σ(i)),

where σ is a one-to-one mapping from the set of reconstructed haplotypes to the set of target haplotypes.
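The RR metric can be computed directly by minimizing over all pairings of reconstructed to target haplotypes; a Python sketch (brute-force over permutations, practical only for small P):

```python
from itertools import permutations

def hd(a, b):
    return sum(x != y for x, y in zip(a, b))

def reconstruction_rate(targets, reconstructed):
    """RR = 1 - (minimum total Hamming distance over all one-to-one
    pairings of reconstructed to target haplotypes) / (P * m)."""
    P, m = len(targets), len(targets[0])
    best = min(sum(hd(t, r) for t, r in zip(targets, perm))
               for perm in permutations(reconstructed))
    return 1 - best / (P * m)

# Diploid example: correct up to swapping the two haplotypes.
assert reconstruction_rate(["0110", "1001"], ["1001", "0110"]) == 1.0
```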

Diploid case

The experiments were carried out on two widely used and well-known datasets: Geraci's dataset [49] and a dataset from the 1000 Genomes Project, which are representative simulated and experimental datasets, respectively.

Simulated data

Geraci's dataset involves three parameters: coverage c = {3, 5, 8, 10}, haplotype length l = {100, 350, 700}, and error rate e = {0, 0.1, 0.2, 0.3}. For each combination of these parameters, there are 100 samples. The output of the proposed method was compared with a set of state-of-the-art and well-known methods including SCGD [36], H-pop [34], ARO [24], HG [33], FCM [25], FastHap [26], DGS [50], SHR [51], MLF [52], HapCut [27], 2d [22], Fast [53], and SPH [54]. All of these methods were run with their default parameter settings. In accordance with the existing methods, the reconstruction rate (RR) was used to assess the results of the current method. Tables 1–3 compare the reconstruction rate of the proposed method with those of the other approaches for haplotype blocks of length 100, 350, and 700. Note that in the last column of each table, the highest measures are boldfaced; moreover, the second-highest measures are highlighted. The results demonstrate that the current method has an acceptable level of performance and outperforms the others in most cases.
Table 1

Average of reconstruction rate for haplotypes with length 100.

e    C    SCGD   H-pop  SPH    Fast   2d     Cut    MLF    SHR    DGS    Fasthap  FCM    HG     ARO    HRCH
0%   3    1.000  1.000  0.999  0.999  0.990  1.000  0.973  0.816  1.000  0.916    1.000  0.999  0.992  1.000
     5    0.999  1.000  1.000  0.999  0.997  1.000  0.992  0.861  1.000  0.953    1.000  1.000  1.000  0.999
     8    0.999  1.000  1.000  1.000  1.000  1.000  0.997  0.912  1.000  0.956    1.000  1.000  1.000  0.999
     10   1.000  1.000  1.000  1.000  1.000  1.000  0.998  0.944  1.000  1.000    1.000  1.000  1.000  0.997
10%  3    0.918  0.921  0.895  0.913  0.911  0.928  0.889  0.696  0.930  0.823    0.882  0.941  0.844  0.957
     5    0.944  0.919  0.967  0.964  0.951  0.920  0.969  0.738  0.985  0.917    0.948  0.989  0.922  0.987
     8    0.948  0.900  0.989  0.993  0.983  0.901  0.985  0.758  0.989  0.955    0.971  0.994  0.945  0.991
     10   0.959  0.892  0.990  0.998  0.988  0.892  0.995  0.762  0.997  0.926    0.972  0.997  0.920  0.995
20%  3    0.806  0.836  0.623  0.715  0.738  0.782  0.725  0.615  0.725  0.806    0.739  0.752  0.711  0.851
     5    0.825  0.865  0.799  0.797  0.793  0.838  0.836  0.655  0.813  0.834    0.772  0.899  0.736  0.926
     8    0.861  0.873  0.852  0.881  0.873  0.864  0.918  0.681  0.878  0.849    0.793  0.966  0.760  0.941
     10   0.886  0.878  0.865  0.915  0.894  0.871  0.938  0.699  0.917  0.899    0.835  0.981  0.788  0.956
30%  3    0.671  0.717  0.480  0.617  0.623  0.602  0.618  0.557  0.611  0.578    0.629  0.621  0.627  0.695
     5    0.676  0.784  0.637  0.639  0.640  0.629  0.653  0.599  0.647  0.711    0.648  0.698  0.638  0.798
     8    0.740  0.835  0.667  0.661  0.675  0.673  0.697  0.632  0.663  0.700    0.664  0.790  0.649  0.861
     10   0.798  0.855  0.676  0.675  0.678  0.709  0.715  0.632  0.688  0.732    0.675  0.856  0.653  0.881
Table 3

Average of reconstruction rate for haplotypes with length 700.

e    C    SCGD   H-pop  SPH    Fast   2d     Cut    MLF    SHR    DGS    Fasthap  FCM    HG     ARO    HRCH
0%   3    1.000  1.000  0.999  0.988  0.946  1.000  0.782  0.781  1.000  0.992    1.000  0.983  1.000  0.986
     5    1.000  1.000  1.000  0.999  0.976  1.000  0.854  0.832  1.000  0.993    1.000  0.989  1.000  0.999
     8    1.000  1.000  1.000  1.000  0.992  1.000  0.919  0.868  1.000  0.994    1.000  0.994  1.000  0.999
     10   1.000  1.000  1.000  0.999  0.997  1.000  0.933  0.898  1.000  0.991    1.000  1.000  1.000  1.000
10%  3    0.934  0.919  0.705  0.829  0.786  0.927  0.698  0.668  0.931  0.917    0.834  0.934  0.801  0.928
     5    0.951  0.923  0.947  0.941  0.880  0.916  0.809  0.716  0.977  0.872    0.881  0.990  0.862  0.972
     8    0.956  0.945  0.985  0.986  0.948  0.896  0.863  0.743  0.987  0.945    0.883  0.987  0.899  0.983
     10   0.973  0.951  0.986  0.995  0.965  0.889  0.884  0.726  0.997  0.983    0.996  0.997  0.912  0.992
20%  3    0.796  0.811  0.199  0.652  0.647  0.753  0.624  0.591  0.669  0.703    0.652  0.677  0.644  0.797
     5    0.829  0.854  0.681  0.712  0.697  0.825  0.682  0.617  0.741  0.681    0.672  0.910  0.662  0.869
     8    0.832  0.868  0.801  0.808  0.751  0.856  0.747  0.653  0.818  0.916    0.686  0.884  0.695  0.885
     10   0.860  0.869  0.813  0.872  0.778  0.861  0.765  0.675  0.861  0.896    0.746  0.894  0.698  0.900
30%  3    0.652  0.600  0.095  0.581  0.583  0.552  0.570  0.536  0.573  0.627    0.592  0.592  0.588  0.602
     5    0.659  0.733  0.523  0.591  0.596  0.555  0.594  0.562  0.595  0.682    0.599  0.621  0.598  0.699
     8    0.662  0.804  0.616  0.615  0.613  0.597  0.614  0.611  0.614  0.741    0.606  0.646  0.613  0.729
     10   0.714  0.844  0.627  0.616  0.622  0.645  0.625  0.625  0.622  0.805    0.606  0.696  0.618  0.759
The performance of the refinement phase is considered in Table 4. Since evaluating the chaotic feature requires long coordinate series, this phase was performed only for sequences of length 700. The LP method was applied to each coordinate series with embedding dimension (em) equal to 1 and 2, individually. The first column reports the quality of the haplotypes obtained after the first phase terminates; the next two columns report the reconstruction rates for em = 1 and em = 2, respectively. The results demonstrate that exploiting the chaotic nature of haplotype sequences can significantly improve the reconstruction rate.
Table 4

The effect of refinement phase for haplotypes with length 700 in diploid case.

| e | C | Hypergraph | em = 1 | em = 2 |
|---|---|------------|--------|--------|
| 10% | 3 | 0.916 | 0.918 | 0.928 |
| | 5 | 0.966 | 0.968 | 0.972 |
| | 8 | 0.980 | 0.981 | 0.983 |
| | 10 | 0.989 | 0.991 | 0.992 |
| 20% | 3 | 0.786 | 0.787 | 0.797 |
| | 5 | 0.854 | 0.857 | 0.869 |
| | 8 | 0.870 | 0.880 | 0.885 |
| | 10 | 0.886 | 0.896 | 0.900 |
| 30% | 3 | 0.600 | 0.601 | 0.602 |
| | 5 | 0.688 | 0.689 | 0.699 |
| | 8 | 0.716 | 0.720 | 0.729 |
| | 10 | 0.748 | 0.754 | 0.759 |
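The mapping and refinement steps behind this phase can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the interval map used for the chaos game representation, the zeroth-order (neighbour-averaging) variant of local projection, and the function names are all assumptions.

```python
import numpy as np

def cgr_series(haplotype: str) -> np.ndarray:
    """Map a binary haplotype ('0'/'1' string) to a 1-D coordinate
    series with a chaos-game-style iteration on the unit interval:
    x_{n+1} = (x_n + s_n) / 2, starting from x_0 = 0.5."""
    x, series = 0.5, []
    for s in haplotype:
        x = (x + int(s)) / 2.0
        series.append(x)
    return np.array(series)

def lp_predict(series: np.ndarray, em: int, k: int = 5) -> float:
    """Local-projection prediction of the next value: embed the series
    with delay vectors of dimension `em`, find the k nearest neighbours
    of the most recent delay vector, and average their successors."""
    # delay vectors v_i = (x_i, ..., x_{i+em-1}), each with known successor x_{i+em}
    vecs = np.array([series[i:i + em] for i in range(len(series) - em)])
    succ = series[em:]
    query = series[-em:]                     # the most recent delay vector
    dists = np.linalg.norm(vecs - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(succ[nearest].mean())
```

A position flagged as low quality could then be re-estimated by comparing its observed image under the map with the locally projected value.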

Experimental dataset

The second dataset used to evaluate the proposed algorithm consists of experimental data provided by the 1000 Genomes Project. The data belong to the individual NA12878, which is often used to assess the performance of existing haplotype assembly methods. The sample was sequenced with the 454 sequencing method. Based on the overlap of the obtained fragments, they are represented as multiple matrices. In this experiment, the first 500 matrices of each chromosome were selected. In each matrix, the rows have an average length of ~90 and cover the genome at a depth of ~3×. Furthermore, the trio-phased variant calls from the GATK resource bundle [55] were used as the target haplotypes. The reconstruction rates of the proposed method are compared to those of the H-pop [34], SCGD [36], HG [33], ARO [24], and FCM [25] approaches. The results for all 22 autosomes are listed in Table 5 and show that in most cases the proposed method achieves higher reconstruction rates than the others. The last row of the table gives the mean RR of each method over all chromosomes; according to these results, the proposed method outperforms the other approaches.
Table 5

The reconstruction rate and running time of the proposed method, H-pop, SCGD, HG, ARO, and FCM applied to the experimental NA12878 dataset provided by the 1000 Genomes Project.

| Chr | H-pop RR | H-pop t (sec) | SCGD RR | SCGD t (sec) | HG RR | HG t (sec) | ARO RR | ARO t (sec) | FCM RR | FCM t (sec) | HRCH RR | HRCH t (sec) |
|-----|----------|---------------|---------|--------------|-------|------------|--------|-------------|--------|-------------|---------|--------------|
| 1 | 0.957 | 5.22 | 0.925 | 3.62 | 0.937 | 1.54 | 0.935 | 20.28 | 0.913 | 1.09 | 0.954 | 10.40 |
| 2 | 0.956 | 5.65 | 0.926 | 4.41 | 0.929 | 1.30 | 0.943 | 18.03 | 0.908 | 1.04 | 0.943 | 12.34 |
| 3 | 0.912 | 6.99 | 0.919 | 3.40 | 0.928 | 1.17 | 0.940 | 18.45 | 0.913 | 1.91 | 0.944 | 12.75 |
| 4 | 0.970 | 5.24 | 0.927 | 5.47 | 0.923 | 1.20 | 0.949 | 18.06 | 0.923 | 1.68 | 0.960 | 13.07 |
| 5 | 0.966 | 4.67 | 0.939 | 3.54 | 0.932 | 1.24 | 0.942 | 15.09 | 0.912 | 1.27 | 0.952 | 14.98 |
| 6 | 0.952 | 4.93 | 0.930 | 8.70 | 0.935 | 1.22 | 0.948 | 15.60 | 0.929 | 1.04 | 0.958 | 13.58 |
| 7 | 0.924 | 4.24 | 0.935 | 3.95 | 0.925 | 1.26 | 0.951 | 16.34 | 0.904 | 1.03 | 0.954 | 12.53 |
| 8 | 0.947 | 4.14 | 0.907 | 2.18 | 0.906 | 1.25 | 0.934 | 16.62 | 0.903 | 1.07 | 0.949 | 13.03 |
| 9 | 0.910 | 3.36 | 0.971 | 2.94 | 0.901 | 1.30 | 0.966 | 15.25 | 0.937 | 1.04 | 0.921 | 12.63 |
| 10 | 0.945 | 3.67 | 0.926 | 2.56 | 0.940 | 1.21 | 0.945 | 15.73 | 0.913 | 1.28 | 0.954 | 13.14 |
| 11 | 0.915 | 3.71 | 0.932 | 2.95 | 0.939 | 1.17 | 0.942 | 14.34 | 0.923 | 1.18 | 0.963 | 10.46 |
| 12 | 0.903 | 3.46 | 0.923 | 2.03 | 0.945 | 1.19 | 0.935 | 14.26 | 0.908 | 1.14 | 0.954 | 11.33 |
| 13 | 0.941 | 2.89 | 0.970 | 3.31 | 0.930 | 1.22 | 0.935 | 15.72 | 0.925 | 1.43 | 0.946 | 14.12 |
| 14 | 0.971 | 2.54 | 0.911 | 1.36 | 0.917 | 1.52 | 0.934 | 15.42 | 0.932 | 1.11 | 0.949 | 14.03 |
| 15 | 0.974 | 2.40 | 0.991 | 1.21 | 0.920 | 1.02 | 0.937 | 16.65 | 0.905 | 1.04 | 0.951 | 12.24 |
| 16 | 0.935 | 2.47 | 0.930 | 1.79 | 0.932 | 1.11 | 0.946 | 15.27 | 0.924 | 1.35 | 0.962 | 11.01 |
| 17 | 0.911 | 1.98 | 0.967 | 2.61 | 0.931 | 1.25 | 0.951 | 15.86 | 0.920 | 1.11 | 0.963 | 11.35 |
| 18 | 0.976 | 2.51 | 0.903 | 1.16 | 0.924 | 1.86 | 0.949 | 15.66 | 0.919 | 1.01 | 0.954 | 13.02 |
| 19 | 0.978 | 1.82 | 0.972 | 3.25 | 0.949 | 1.60 | 0.942 | 14.58 | 0.923 | 1.40 | 0.960 | 10.46 |
| 20 | 0.950 | 2.00 | 0.968 | 1.38 | 0.945 | 1.90 | 0.946 | 15.49 | 0.922 | 1.12 | 0.957 | 11.31 |
| 21 | 0.970 | 1.70 | 0.943 | 0.63 | 0.933 | 1.52 | 0.941 | 15.12 | 0.915 | 1.08 | 0.960 | 12.77 |
| 22 | 0.983 | 1.44 | 0.941 | 0.74 | 0.951 | 1.16 | 0.941 | 14.34 | 0.914 | 1.33 | 0.964 | 9.64 |
| Mean | 0.948 | 3.50 | 0.939 | 2.87 | 0.931 | 1.33 | 0.943 | 16.00 | 0.918 | 1.22 | 0.953 | 12.28 |
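For reference, the reconstruction rate reported in these tables can be computed as follows, assuming the usual definition (one minus the normalized Hamming distance under the best pairing of estimated to true haplotypes, since haplotype labels are arbitrary). The function name and the brute-force matching over permutations are illustrative choices, adequate for the small ploidies considered here.

```python
from itertools import permutations

def reconstruction_rate(true_haps, est_haps):
    """RR = 1 - D / (k * l): D is the minimum total Hamming distance
    over all pairings of the k estimated haplotypes (each of length l)
    with the true ones."""
    k, l = len(true_haps), len(true_haps[0])
    hd = lambda a, b: sum(x != y for x, y in zip(a, b))
    best = min(sum(hd(t, e) for t, e in zip(true_haps, perm))
               for perm in permutations(est_haps))
    return 1.0 - best / (k * l)
```

For example, an estimate that swaps the two haplotypes of a diploid pair but reconstructs every SNP correctly still scores RR = 1.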

Polyploid case

Here, the proposed method is compared with three recent approaches developed to solve haplotype assembly in polyploid form: AltHap [23], H-POP [34], and SCGD [36]. The source code of each competing method is available. To assess the quality of the reconstructed haplotypes, the reconstruction rate (RR) and the MEC measure of the methods were compared. The benchmark dataset was generated by simulation, using the source code available upon request from [23]. Its input parameters are coverage (c), error rate (e), and haplotype length (l). In this experiment we set c ∈ {5,10,15,20}, e ∈ {0.1,0.2,0.3}, and l ∈ {100,350,700}; for each combination of these parameters, 10 samples were generated. Each sample is an SNP matrix containing a large number of gaps. As shown in Tables 6–8, the proposed method is compared with the other algorithms in terms of both RR and MEC.
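The MEC measure used alongside RR can be sketched as follows, assuming its standard definition: each fragment is charged the mismatch count against its closest haplotype, ignoring gap positions, and the charges are summed. The function name and the gap symbol are assumptions for illustration.

```python
def mec(fragments, haplotypes, gap='-'):
    """Minimum error correction score of a haplotype set against a
    fragment matrix. Each fragment is a string over {'0', '1', gap};
    mismatches are counted only at covered (non-gap) positions."""
    return sum(
        min(sum(f != h for f, h in zip(frag, hap) if f != gap)
            for hap in haplotypes)
        for frag in fragments)
```

A lower MEC means fewer fragment entries must be flipped for every fragment to be consistent with some haplotype.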
Table 6

Average of reconstruction rate for haplotypes with length 100.

| e | C | SCGD RR | SCGD MEC | H-pop RR | H-pop MEC | AltHap RR | AltHap MEC | HRCH RR | HRCH MEC |
|---|---|---------|----------|----------|-----------|-----------|------------|---------|----------|
| 10% | 5 | 0.609 | 1289 | 0.745 | 269 | 0.736 | 315 | 0.830 | 260 |
| | 10 | 0.567 | 2917 | 0.813 | 534 | 0.783 | 598 | 0.846 | 523 |
| | 15 | 0.567 | 4282 | 0.828 | 805 | 0.754 | 1095 | 0.859 | 773 |
| | 20 | 0.564 | 5846 | 0.844 | 1004 | 0.747 | 1864 | 0.839 | 1020 |
| 20% | 5 | 0.596 | 1367 | 0.667 | 467 | 0.657 | 478 | 0.717 | 447 |
| | 10 | 0.548 | 2862 | 0.692 | 1009 | 0.730 | 1050 | 0.768 | 942 |
| | 15 | 0.549 | 3047 | 0.706 | 1539 | 0.666 | 2241 | 0.797 | 1443 |
| | 20 | 0.547 | 5894 | 0.740 | 2016 | 0.631 | 3559 | 0.781 | 1920 |
| 30% | 5 | 0.587 | 1373 | 0.596 | 554 | 0.630 | 548 | 0.661 | 538 |
| | 10 | 0.550 | 2857 | 0.599 | 1244 | 0.633 | 1338 | 0.689 | 1190 |
| | 15 | 0.548 | 4267 | 0.619 | 1916 | 0.596 | 2640 | 0.730 | 1849 |
| | 20 | 0.550 | 5916 | 0.633 | 2591 | 0.572 | 4051 | 0.730 | 2512 |
Table 8

Average of reconstruction rate for haplotypes with length 700.

| e | C | SCGD RR | SCGD MEC | H-pop RR | H-pop MEC | AltHap RR | AltHap MEC | HRCH RR | HRCH MEC |
|---|---|---------|----------|----------|-----------|-----------|------------|---------|----------|
| 10% | 5 | 0.641 | 9259 | 0.702 | 2549 | 0.772 | 2282 | 0.720 | 2392 |
| | 10 | 0.582 | 20313 | 0.736 | 5438 | 0.924 | 3971 | 0.751 | 4850 |
| | 15 | 0.514 | 30996 | 0.795 | 8062 | 0.948 | 6086 | 0.823 | 6860 |
| | 20 | 0.514 | 41555 | 0.817 | 10205 | 0.889 | 10257 | 0.870 | 8389 |
| 20% | 5 | 0.614 | 8987 | 0.634 | 3488 | 0.656 | 3409 | 0.698 | 3281 |
| | 10 | 0.520 | 20500 | 0.650 | 7781 | 0.722 | 7280 | 0.711 | 7107 |
| | 15 | 0.511 | 30936 | 0.659 | 11785 | 0.788 | 13672 | 0.763 | 10705 |
| | 20 | 0.517 | 40781 | 0.701 | 15833 | 0.768 | 23738 | 0.764 | 14509 |
| 30% | 5 | 0.578 | 9547 | 0.594 | 3882 | 0.600 | 3864 | 0.681 | 3813 |
| | 10 | 0.523 | 20494 | 0.600 | 8770 | 0.607 | 8680 | 0.679 | 8442 |
| | 15 | 0.514 | 31211 | 0.614 | 13737 | 0.566 | 17385 | 0.692 | 15393 |
| | 20 | 0.514 | 41155 | 0.611 | 18620 | 0.545 | 27813 | 0.699 | 17841 |
The results demonstrate that the proposed method outperforms the other approaches in most cases with respect to both RR and MEC. As in the previous section, to highlight the efficiency of the refinement phase, the RRs of haplotypes with length 700 are reported in Table 9. As in the diploid case, the improvement in RR reveals the role of the chaotic viewpoint in efficiently reducing the amount of noise remaining in the constructed haplotypes. The proposed method is naturally slower than its competitors, since it starts from a random estimate and proceeds iteratively; nevertheless, it solves the problem in a reasonable amount of time.
Table 9

The effect of refinement phase for haplotypes with length 700 in polyploid case.

| e | C | Hypergraph | em = 1 | em = 2 |
|---|---|------------|--------|--------|
| 10% | 5 | 0.712 | 0.715 | 0.720 |
| | 10 | 0.749 | 0.750 | 0.751 |
| | 15 | 0.823 | 0.823 | 0.823 |
| | 20 | 0.870 | 0.870 | 0.870 |
| 20% | 5 | 0.678 | 0.687 | 0.698 |
| | 10 | 0.705 | 0.708 | 0.711 |
| | 15 | 0.762 | 0.763 | 0.763 |
| | 20 | 0.763 | 0.764 | 0.764 |
| 30% | 5 | 0.643 | 0.662 | 0.681 |
| | 10 | 0.664 | 0.672 | 0.679 |
| | 15 | 0.681 | 0.688 | 0.692 |
| | 20 | 0.692 | 0.697 | 0.699 |
Since the sequencing coverage of the benchmark datasets used above is relatively low, we further evaluated the performance of HRCH on high-coverage data. Using the source code provided in [23], several samples were generated for the diploid and polyploid cases individually: for each combination of l = 500, e = 0.3, and c ∈ {30,40,50}, 10 samples were generated. As shown in Fig 8, the reconstruction rates of the proposed method are compared to those of AltHap [23], SCGD [36], and H-pop [34]. The results demonstrate that HRCH achieves encouraging accuracy compared to the competing schemes in both diploid and polyploid forms.
Fig 8

Comparison of the reconstruction rates of the methods over high-coverage data: a) diploid, b) polyploid.

Conclusion

The high amount of noise, as well as the gaps in the input fragments, are the main challenges in solving the SIH problem. In this study, we established a sampling-based method that starts from an initial set of haplotypes and iteratively improves it by correcting SNPs with wrong values. The proposed method involves two main steps. First, it utilizes the hypergraph model to cope with the sparsity and high noise level of the input, and reconstructs haplotypes iteratively. Positions with low confidence are then rectified by mapping the haplotype sequences to coordinate series and applying a chaotic viewpoint. The method can handle genomic data of both diploid and polyploid organisms. The promising results on diploid and polyploid data highlight that it is competitive with existing approaches, and that the two phases play complementary roles. Finally, the source code of the proposed method is available at https://github.com/mholyaee/HRCH.
Table 2

Average of reconstruction rate for haplotypes with length 350.

| e | C | SCGD | H-pop | SPH | Fast | 2d | Cut | MLF | SHR | DGS | Fasthap | FCM | HG | ARO | HRCH |
|---|---|------|-------|-----|------|----|-----|-----|-----|-----|---------|-----|----|-----|------|
| 0% | 3 | 0.999 | 1.000 | 0.999 | 0.989 | 0.965 | 1.000 | 0.864 | 0.830 | 1.000 | 0.985 | 1.000 | 0.996 | 0.999 | 0.999 |
| | 5 | 0.999 | 1.000 | 1.000 | 0.999 | 0.993 | 1.000 | 0.929 | 0.829 | 1.000 | 0.983 | 1.000 | 0.997 | 1.000 | 0.999 |
| | 8 | 1.000 | 1.000 | 1.000 | 1.000 | 0.998 | 1.000 | 0.969 | 0.895 | 1.000 | 0.983 | 1.000 | 0.998 | 1.000 | 0.996 |
| | 10 | 1.000 | 1.000 | 1.000 | 1.000 | 0.999 | 1.000 | 0.981 | 0.878 | 1.000 | 0.998 | 1.000 | 1.000 | 1.000 | 0.999 |
| 10% | 3 | 0.941 | 0.921 | 0.819 | 0.871 | 0.839 | 0.930 | 0.752 | 0.682 | 0.926 | 0.872 | 0.873 | 0.939 | 0.844 | 0.939 |
| | 5 | 0.945 | 0.912 | 0.959 | 0.945 | 0.913 | 0.913 | 0.858 | 0.724 | 0.978 | 0.927 | 0.919 | 0.979 | 0.892 | 0.983 |
| | 8 | 0.950 | 0.896 | 0.984 | 0.985 | 0.964 | 0.896 | 0.933 | 0.742 | 0.996 | 0.977 | 0.934 | 0.988 | 0.908 | 0.991 |
| | 10 | 0.952 | 0.889 | 0.984 | 0.995 | 0.978 | 0.888 | 0.962 | 0.728 | 0.998 | 0.947 | 0.935 | 0.995 | 0.910 | 0.994 |
| 20% | 3 | 0.813 | 0.813 | 0.439 | 0.684 | 0.675 | 0.771 | 0.642 | 0.591 | 0.691 | 0.763 | 0.671 | 0.712 | 0.659 | 0.813 |
| | 5 | 0.817 | 0.860 | 0.729 | 0.746 | 0.728 | 0.831 | 0.728 | 0.632 | 0.769 | 0.811 | 0.719 | 0.905 | 0.691 | 0.897 |
| | 8 | 0.832 | 0.871 | 0.825 | 0.853 | 0.791 | 0.862 | 0.798 | 0.670 | 0.842 | 0.912 | 0.728 | 0.899 | 0.709 | 0.922 |
| | 10 | 0.838 | 0.873 | 0.855 | 0.877 | 0.817 | 0.867 | 0.831 | 0.668 | 0.878 | 0.923 | 0.733 | 0.907 | 0.719 | 0.937 |
| 30% | 3 | 0.637 | 0.629 | 0.251 | 0.590 | 0.593 | 0.565 | 0.581 | 0.548 | 0.578 | 0.575 | 0.597 | 0.602 | 0.595 | 0.640 |
| | 5 | 0.661 | 0.744 | 0.578 | 0.602 | 0.606 | 0.582 | 0.606 | 0.557 | 0.609 | 0.720 | 0.614 | 0.632 | 0.609 | 0.737 |
| | 8 | 0.690 | 0.830 | 0.629 | 0.626 | 0.623 | 0.621 | 0.634 | 0.604 | 0.628 | 0.790 | 0.626 | 0.675 | 0.628 | 0.788 |
| | 10 | 0.700 | 0.850 | 0.638 | 0.644 | 0.634 | 0.664 | 0.641 | 0.619 | 0.641 | 0.833 | 0.631 | 0.742 | 0.635 | 0.821 |
Table 7

Average of reconstruction rate for haplotypes with length 350.

| e | C | SCGD RR | SCGD MEC | H-pop RR | H-pop MEC | AltHap RR | AltHap MEC | HRCH RR | HRCH MEC |
|---|---|---------|----------|----------|-----------|-----------|------------|---------|----------|
| 10% | 5 | 0.585 | 4925 | 0.596 | 1236 | 0.746 | 1016 | 0.737 | 1048 |
| | 10 | 0.559 | 10059 | 0.698 | 2225 | 0.835 | 2055 | 0.809 | 2146 |
| | 15 | 0.546 | 15717 | 0.670 | 3611 | 0.724 | 6583 | 0.844 | 2922 |
| | 20 | 0.547 | 19824 | 0.727 | 4381 | 0.686 | 11103 | 0.786 | 4307 |
| 20% | 5 | 0.576 | 4819 | 0.517 | 1771 | 0.656 | 1716 | 0.661 | 1700 |
| | 10 | 0.552 | 10159 | 0.565 | 3784 | 0.651 | 5344 | 0.696 | 3450 |
| | 15 | 0.539 | 15078 | 0.588 | 5776 | 0.602 | 9952 | 0.751 | 5201 |
| | 20 | 0.538 | 20874 | 0.589 | 7801 | 0.592 | 14345 | 0.755 | 6962 |
| 30% | 5 | 0.555 | 4845 | 0.470 | 2020 | 0.588 | 2016 | 0.631 | 1998 |
| | 10 | 0.542 | 10108 | 0.508 | 4455 | 0.560 | 6208 | 0.646 | 4317 |
| | 15 | 0.540 | 15164 | 0.511 | 6891 | 0.558 | 10841 | 0.665 | 6631 |
| | 20 | 0.538 | 21403 | 0.518 | 9330 | 0.546 | 15422 | 0.658 | 8946 |
References (10 of 47 shown)

1. Douglas JA, Boehnke M, Gillanders E, Trent JM, Gruber SB. Experimentally-derived haplotypes substantially increase the efficiency of linkage disequilibrium studies. Nat Genet. 2001.
2. Genovese LM, Geraci F, Pellegrini M. SpeedHap: an accurate heuristic for the single individual SNP haplotyping problem with many gaps, high reading error rate and low coverage. IEEE/ACM Trans Comput Biol Bioinform. 2008.
3. Ruano G, Kidd KK, Stephens JC. Haplotype of multiple polymorphisms resolved by enzymatic amplification of single DNA molecules. Proc Natl Acad Sci U S A. 1990.
4. Varner RV, Ruiz P, Small DR. Black and white patients response to antidepressant treatment for major depression. Psychiatr Q. 1998.
5. Xie M, Wu Q, Wang J, Jiang T. H-PoP and H-PoPG: heuristic partitioning algorithms for single individual haplotyping of polyploids. Bioinformatics. 2016.
6. Lee CY. A model for the clustered distribution of SNPs in the human genome. Comput Biol Chem. 2016.
7. Puljiz Z, Vikalo H. Decoding genetic variations: communications-inspired haplotype assembly. IEEE/ACM Trans Comput Biol Bioinform. 2016.
8. Exner DV, Dries DL, Domanski MJ, Cohn JN. Lesser response to angiotensin-converting-enzyme inhibitor therapy in black as compared with white patients with left ventricular dysfunction. N Engl J Med. 2001.
9. Loggetto SR. Sickle cell anemia: clinical diversity and beta S-globin haplotypes. Rev Bras Hematol Hemoter. 2013.
10. Hashemi A, Zhu B, Vikalo H. Sparse tensor decomposition for haplotype assembly of diploids and polyploids. BMC Genomics. 2018.
