Literature DB >> 30483597

Evaluating the performance of tools used to call minority variants from whole genome short-read data.

Khadija Said Mohammed1,2, Nelson Kibinge2, Pjotr Prins2,3, Charles N Agoti1,2, Matthew Cotten4, D J Nokes2,5, Samuel Brand5, George Githinji2.   

Abstract

Background: High-throughput whole genome sequencing facilitates investigation of minority virus sub-populations from virus positive samples. Minority variants are useful in understanding within and between host diversity, population dynamics and can potentially assist in elucidating person-person transmission pathways. Several minority variant callers have been developed to describe low frequency sub-populations from whole genome sequence data. These callers differ based on bioinformatics and statistical methods used to discriminate sequencing errors from low-frequency variants.
Methods: We evaluated the diagnostic performance and concordance between published minority variant callers used in identifying minority variants from whole-genome sequence data from virus samples. We used the ART-Illumina read simulation tool to generate three artificial short-read datasets of varying coverage and error profiles from an RSV reference genome. The datasets were spiked with nucleotide variants at predetermined positions and frequencies. Variants were called using FreeBayes, LoFreq, VarDict, and VarScan2. The variant callers' agreement in identifying known variants was quantified using two measures: concordance accuracy and inter-caller concordance.
Results: The variant callers reported differences in identifying minority variants from the datasets. Concordance accuracy and inter-caller concordance were positively correlated with sample coverage. FreeBayes identified the majority of variants, although it was characterised by variable sensitivity and precision, in addition to a high false positive rate relative to the other minority variant callers, which varied with sample coverage. LoFreq was the most conservative caller.
Conclusions: We conducted a performance and concordance evaluation of four minority variant calling tools used to identify and quantify low-frequency variants. Inconsistency in the quality of sequenced samples impacts the sensitivity and accuracy of minority variant callers. Our study suggests that combining at least three tools when identifying minority variants is useful in filtering errors when calling low-frequency variants.

Entities:  

Keywords:  RSV; concordance; minority variants; performance; variant calling

Year:  2018        PMID: 30483597      PMCID: PMC6234735          DOI: 10.12688/wellcomeopenres.13538.2

Source DB:  PubMed          Journal:  Wellcome Open Res        ISSN: 2398-502X


Introduction

RNA viruses have been described as populations of closely related sequences that arise from rapid genomic evolution coupled with high replication and mutation rates ( Domingo ; Eigen ; Holland ). Genetic changes in RNA viruses result from genetic drift, erroneous replication processes and mutagenic agents, upon which natural selection acts ( Moya ). Rapid replication and mutation generate an ensemble of mutant genomes comprising both dominant and low-frequency variants. This diversity has been shown to affect the virus fitness landscape, transmission, colonization and replication ( Henn ; Stack ; Vignuzzi ).

Many recent studies ( Henn ; Poon ; Stack ) have demonstrated the potential application of virus diversity to inform person-to-person transmission during virus outbreaks. A number of methods that incorporate both genomic and epidemiologic data to infer pathogen transmission have recently been developed ( Worby ). These approaches rely partly on the accurate detection and quantification of minority variant populations from genomic samples.

Several tools have been developed to identify and quantify minority variants from short-read data ( Koboldt ; Koboldt ; Lai ; Macalalad ; Wilm ; Yang ). Nonetheless, these tools do not fully account for discrepancies that arise from sample collection, pre-processing and sequencing, in addition to errors introduced during downstream bioinformatic analysis. Rigorous quality control in sample processing and analysis is often suggested to distinguish true biological variants from artefactual variants ( Zhang & Flaherty, 2017). In some cases, sequencing errors can be reduced by developing high-fidelity protocols and laboratory quality control measures ( Kinde ; McCrone & Lauring, 2016; Watson ). Additionally, the uncertainty resulting from random sequencing errors can be countered by sequencing larger populations at higher coverage ( Zukurov ).

A number of studies have extensively explored variants from somatic or tumour samples ( Hofmann ; Koboldt ; Krøigård ; Lai ; Pabinger ) and their application in clinical genomics, but only a limited number have explored the nature of variants from patient-derived samples that target viral populations ( Henn ; Macalalad ; Wilm ; Yang ; Zukurov ), especially when calling variants from respiratory viruses such as the respiratory syncytial virus (RSV).

In this study, we evaluated four published minority variant detection tools using artificial short-read data with different error profiles. We explored the tools' ability to detect and quantify minority variants and assessed their overall agreement, which we defined using two metrics: concordance accuracy, which measures the combined accuracy of the variant callers, and inter-caller concordance, which is the size of the largest set of variant callers that agree at each position. We show that concordance metrics are dependent on sample coverage and are influenced by the quality of input data.

Methods

Overall, we considered ten published, open-source tools with presumed ability to call minority variants from virus deep-sequence data. A number of callers were excluded from the analysis for various reasons; for example, the GATK HaplotypeCaller primarily targets germline calling from human samples rather than variant calling from viral samples. We experienced technical difficulties in setting up the Platypus caller, and even after setup, Platypus did not provide calls across all levels of coverage in our datasets. SAMtools mpileup did not provide direct allele frequencies, while V-Phaser was superseded by V-Phaser 2, which has reported bugs and could not handle reads aligned with BWA-MEM. Therefore, the following four tools were evaluated: FreeBayes version 1.1.0-3-g961e5f3, LoFreq version 2.1.2, VarDict version 30.3.17 and VarScan version 2.4.2. A schematic diagram showing the overall approach is shown in Figure 1.
Figure 1.

A schematic diagram showing the variant calling workflow.

The artificial datasets (BAM files) were generated using ART-Illumina based on an RSV reference genome. BAMSurgeon was used to spike the resulting BAM files by inserting known variants at known locations across the artificial BAM file.


Artificial datasets

Artificial datasets were generated based on an RSV reference sequence (GenBank accession number KX510245.1) using ART-Illumina version 2.5.8 ( Huang ). ART-Illumina took the reference RSV genome sequence as input and generated artificial reads using data-derived error models to mimic sequence data. Each dataset comprised eight samples with varying depth of coverage (20, 50, 100, 500, 1000, 2000, 5000, 10000) and was generated using the methods described in Supplementary File 1, section S1.1. The first dataset did not incorporate an error profile. Error profile models (empirical error models based on the distribution of base quality scores) were generated from uncompressed raw FASTQ reads derived from sequenced RSV whole-genome samples, referred to as a "good" and a "bad" sample based on FastQC metrics ( Supplementary File 2 and Supplementary File 3), and were used to generate artificial reads in datasets 2 and 3. ART-Illumina-generated artificial SAM files were converted to the BAM format, sorted and indexed using SAMtools version 1.3.1 for each dataset. A set of 166 randomly and uniformly generated nucleotide mutations at different frequencies ( Supplementary Table 1) was inserted into each of the artificial datasets using BAMSurgeon ( Ewing ), such that a base change was made amongst the reads at each alignment position, as described in Supplementary File 1, section S1.2. This process was repeated with a separate set of 155 positions that comprised a set of mutations with frequencies below 0.5 ( Supplementary Table 1).
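The simulation-and-spiking pipeline above can be summarised in code. The following is a minimal sketch under stated assumptions: the file names (KX510245.1.fasta, variants.bed), read-length and fragment-size values are hypothetical placeholders, not the authors' exact invocations (those are given in Supplementary File 1).

```python
# Minimal sketch of the dataset-generation pipeline; file names and flag
# values are illustrative assumptions, not the authors' exact commands.
import subprocess

REF = "KX510245.1.fasta"      # RSV reference genome (hypothetical file name)
COVERAGES = [20, 50, 100, 500, 1000, 2000, 5000, 10000]

for cov in COVERAGES:
    prefix = f"sample_{cov}x"
    # 1. Simulate paired-end Illumina reads at the given fold coverage.
    #    For datasets 2 and 3, custom empirical quality profiles would be
    #    supplied via ART's -1/-2 read-profile options.
    subprocess.run(["art_illumina", "-i", REF, "-p", "-l", "150",
                    "-f", str(cov), "-m", "300", "-s", "30",
                    "-sam", "-o", prefix], check=True)
    # 2. Convert the SAM output to a sorted, indexed BAM with SAMtools.
    subprocess.run(f"samtools view -b {prefix}.sam"
                   f" | samtools sort -o {prefix}.bam -",
                   shell=True, check=True)
    subprocess.run(["samtools", "index", f"{prefix}.bam"], check=True)
    # 3. Spike the known variants (position, frequency, alternate base)
    #    into the alignment with BAMSurgeon's addsnv.py.
    subprocess.run(["addsnv.py", "-v", "variants.bed",
                    "-f", f"{prefix}.bam", "-r", REF,
                    "-o", f"{prefix}.spiked.bam"], check=True)
```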

Variant calling

The BAM files from each of the three datasets were used as input to each of the four variant callers (FreeBayes, LoFreq, VarDict and VarScan2). The default parameter options used in each tool are explicitly provided in Supplementary File 4. All output files were provided in the variant call format (VCF) or, in the case of VarDict, as a tabular file. The output from the VCF and tabular files was parsed and written to a comma-separated values (CSV) file.
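As an illustration of this parsing step, the sketch below flattens VCF output into a single CSV. It is a simplification under assumptions: the file names are hypothetical, the VarDict tabular branch is omitted, and allele-frequency encoding differs between tools (LoFreq writes an AF tag in INFO, whereas VarScan2 reports a FREQ percentage in the sample column), so only an INFO AF tag is read here.

```python
# Hedged sketch: flatten VCF calls from several callers into one CSV.
import csv

def vcf_records(path, caller):
    """Yield (caller, position, ref, alt, allele_frequency) per VCF record."""
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue                        # skip meta and header lines
            fields = line.rstrip("\n").split("\t")
            chrom, pos, _id, ref, alt, _qual, _filter, info = fields[:8]
            af = None
            for kv in info.split(";"):          # e.g. "AF=0.034" (LoFreq-style)
                if kv.startswith("AF="):
                    af = float(kv.split("=", 1)[1].split(",")[0])
            yield caller, int(pos), ref, alt, af

with open("calls.csv", "w", newline="") as out:  # hypothetical output name
    writer = csv.writer(out)
    writer.writerow(["caller", "pos", "ref", "alt", "af"])
    for caller, path in [("freebayes", "freebayes.vcf"),
                         ("lofreq", "lofreq.vcf"),
                         ("varscan", "varscan.vcf")]:
        writer.writerows(vcf_records(path, caller))
```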

Performance measures

To evaluate the performance of the variant calling algorithms, we compared the sequence generated by each variant caller $vc$, denoted $S^{vc}$, to the gold standard "spiked" sequence, denoted $S$, at each of $N = 15205$ nucleotide positions. The accuracy of each variant caller is one minus the normalized Hamming distance from the gold standard sequence,

$$\mathrm{accuracy}(vc) = 1 - \frac{1}{N} \sum_{i=1}^{N} d(S^{vc}_i, S_i),$$

where $d(x, y)$ is the standard discrete metric, giving 0 when $x = y$ and 1 otherwise. By distinguishing between the sets of positions where variants did and did not occur in the gold standard sequence we calculated sensitivity, specificity, precision and accuracy ( Table 1).
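The metrics in Table 1 follow the standard confusion-matrix formulas. The snippet below is a direct transcription of those formulas (not the authors' script), checked against the FreeBayes row for sample 1.

```python
# Standard confusion-matrix metrics as used in Table 1.
def performance(tp, tn, fp, fn):
    return {
        "sensitivity": tp / (tp + fn),              # spiked variants recovered
        "specificity": tn / (tn + fp),              # invariant sites left alone
        "precision":   tp / (tp + fp) if tp + fp else 1.0,
        "fpr":         fp / (fp + tn),              # false positive rate
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
    }

# Example: the FreeBayes row for sample 1 (20X) in Table 1 reproduces
# sensitivity 0.5241, specificity 0.9986, precision 0.8056,
# FPR 0.0014 and accuracy 0.9934.
print(performance(tp=87, tn=15018, fp=21, fn=79))
```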
Table 1.

A breakdown of performance metrics of the variant callers evaluated on the first dataset, which did not incorporate an error profile.

The samples represent simulated datasets of varying depth of coverage. True positive (TP), true negative (TN), false positive (FP) and false negative (FN) counts were used to calculate performance metrics for each caller. FPR – false positive rate.

Sample | Caller | TP | TN | FP | FN | Sensitivity | Specificity | Precision | FPR | Accuracy
1 (20X) | freebayes | 87 | 15018 | 21 | 79 | 0.5241 | 0.9986 | 0.8056 | 0.0014 | 0.9934
1 (20X) | lofreq | 29 | 15039 | 0 | 137 | 0.1747 | 1 | 1 | 0 | 0.991
1 (20X) | vardict | 72 | 15039 | 0 | 94 | 0.4337 | 1 | 1 | 0 | 0.9938
1 (20X) | varscan | 11 | 15039 | 0 | 155 | 0.0663 | 1 | 1 | 0 | 0.9898
2 (50X) | freebayes | 118 | 14901 | 138 | 47 | 0.7152 | 0.9908 | 0.4609 | 0.00918 | 0.9878
2 (50X) | lofreq | 67 | 15039 | 0 | 99 | 0.40361 | 1 | 1 | 0 | 0.9935
2 (50X) | vardict | 108 | 15039 | 0 | 58 | 0.6506 | 1 | 1 | 0 | 0.9962
2 (50X) | varscan | 22 | 15039 | 0 | 144 | 0.13253 | 1 | 1 | 0 | 0.9905
3 (100X) | freebayes | 127 | 14454 | 585 | 38 | 0.7697 | 0.9611 | 0.1784 | 0.0389 | 0.959
3 (100X) | lofreq | 57 | 15039 | 0 | 109 | 0.3434 | 1 | 1 | 0 | 0.9928
3 (100X) | vardict | 104 | 15038 | 1 | 62 | 0.6265 | 0.9999 | 0.9905 | 6.65E-05 | 0.9959
3 (100X) | varscan | 40 | 15039 | 0 | 126 | 0.241 | 1 | 1 | 0 | 0.9917
4 (500X) | freebayes | 131 | 12559 | 2480 | 30 | 0.8137 | 0.8351 | 0.0502 | 0.1649 | 0.8349
4 (500X) | lofreq | 60 | 15039 | 0 | 106 | 0.3614 | 1 | 1 | 0 | 0.993
4 (500X) | vardict | 110 | 15029 | 10 | 56 | 0.6627 | 0.9993 | 0.9167 | 6.65E-04 | 0.9957
4 (500X) | varscan | 73 | 15039 | 0 | 93 | 0.4398 | 1 | 1 | 0 | 0.9939
5 (1000X) | freebayes | 146 | 14414 | 625 | 20 | 0.8795 | 0.9584 | 0.1894 | 0.0416 | 0.9576
5 (1000X) | lofreq | 57 | 15039 | 0 | 109 | 0.3434 | 1 | 1 | 0 | 0.9928
5 (1000X) | vardict | 109 | 15036 | 3 | 57 | 0.6567 | 0.9998 | 0.9732 | 1.99E-04 | 0.9961
5 (1000X) | varscan | 79 | 15039 | 0 | 87 | 0.4759 | 1 | 1 | 0 | 0.9943
6 (2000X) | freebayes | 146 | 14923 | 116 | 20 | 0.8795 | 0.9923 | 0.5571 | 0.0077 | 0.9911
6 (2000X) | lofreq | 70 | 15039 | 0 | 96 | 0.4217 | 1 | 1 | 0 | 0.9937
6 (2000X) | vardict | 120 | 15039 | 0 | 46 | 0.7229 | 1 | 1 | 0 | 0.997
6 (2000X) | varscan | 83 | 15039 | 0 | 83 | 0.5 | 1 | 1 | 0 | 0.9945
7 (5000X) | freebayes | 149 | 15020 | 19 | 17 | 0.8976 | 0.9987 | 0.8869 | 0.0013 | 0.9976
7 (5000X) | lofreq | 67 | 15039 | 0 | 99 | 0.40361 | 1 | 1 | 0 | 0.9935
7 (5000X) | vardict | 117 | 15036 | 3 | 49 | 0.7048 | 0.9998 | 0.975 | 1.99E-04 | 0.9966
7 (5000X) | varscan | 78 | 15039 | 0 | 88 | 0.4699 | 1 | 1 | 0 | 0.9942
8 (10000X) | freebayes | 145 | 15022 | 17 | 21 | 0.8735 | 0.9989 | 0.8951 | 0.0011 | 0.9975
8 (10000X) | lofreq | 72 | 15039 | 0 | 94 | 0.4337 | 1 | 1 | 0 | 0.9938
8 (10000X) | vardict | 118 | 15038 | 1 | 48 | 0.7108 | 0.9999 | 0.9916 | 6.65E-05 | 0.9968
8 (10000X) | varscan | 97 | 15039 | 0 | 69 | 0.5843 | 1 | 1 | 0 | 0.9955


Concordance analysis

We defined two concordance metrics to present the level of agreement between different callers in detecting the same variant positions in the sequence. The first concordance metric is concordance accuracy, which measures the combined accuracy of the variant callers: at each true variant position $i$ we counted the number of callers whose call matched the gold standard, giving a score $C_i$ that can be 0, 1, 2, 3 or 4 for each true variant position. The second concordance metric is inter-caller concordance, which is the size of the largest set of variant callers that agree at each position $i$, without reference to any gold standard sequence. We used both bar plots and heat maps to visualize the effect of coverage on $C_i$. Visualization of inter-caller concordance for variant sets was achieved using a bar plot and expounded by UpSet plots ( Lex ) in R version 3.4.2.
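The two metrics can be stated compactly in code. This is a minimal sketch assuming each caller's output has been reduced to the set of positions at which it made a call; the data layout and names are illustrative, not the authors' implementation.

```python
# Hedged sketch of the two concordance metrics defined above.

def concordance_accuracy(calls_by_caller, true_positions):
    """For each true variant position, count the callers (0..4) that
    identified it: the per-position concordance score C_i."""
    return {pos: sum(pos in calls for calls in calls_by_caller.values())
            for pos in true_positions}

def inter_caller_concordance(calls_by_caller, pos):
    """Size of the largest set of callers agreeing at a position, without
    reference to the gold standard; with binary called/not-called verdicts
    this reduces to the majority count."""
    called = sum(pos in calls for calls in calls_by_caller.values())
    return max(called, len(calls_by_caller) - called)
```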

Results

We used three artificial datasets of varying coverage and error profiles to assess the concordance accuracy and inter-caller concordance of four minority variant callers. The first dataset comprised artificial reads based on an RSV genome; the second dataset comprised a similar simulated set of reads incorporating an error profile from the set of reads used to assemble the reference genome; the third dataset was generated using an error profile from a poorly sequenced sample. Overall, concordance accuracy improved with increasing sample coverage ( Figure 2), and the proportion of positions that could not be identified by any variant caller decreased with increasing coverage ( Supplementary Figure 1). For all three datasets, and at each coverage level, fully concordant variants were below 50% of the total variants, suggesting that considering only fully concordant positions would eliminate a substantial number of variant positions. There were marginal improvements in the number of concordant variants in the second dataset compared to the first and third. Across all datasets, there was little improvement in detecting fully concordant positions beyond a coverage of 2000 ( Figure 2). We utilized UpSetR plots to provide a visual summary of the combinations of variant callers that contributed to the observed concordance accuracy ( Supplementary Figure 4).
Figure 2.

Proportion of fully concordant positions with respect to sample coverage.

Each plot A– C represents the proportion (y-axis) of fully concordant variants with respect to read coverage (x-axis) for the first, second and third dataset. Concordant positions were defined as positions that were identified by all the four variant callers.

FreeBayes identified the majority of variants ( Figure 3) across all the datasets, although it was characterised by a substantial trade-off between sensitivity and precision in artificial dataset 1 ( Figure 4), in addition to a high false positive rate relative to the other minority variant callers observed in datasets 1 and 3. Regardless, FreeBayes reported comparatively better sensitivity than the rest of the tools. LoFreq was the most conservative of the evaluated callers and missed the majority of variants across all three datasets. In addition, LoFreq's sensitivity did not differ substantially at coverages above 100 ( Figure 4). VarDict's performance increased with read coverage, though not by a great magnitude compared to FreeBayes and VarScan, and its performance across different datasets was more consistent relative to the other tools. Overall, we observed lower reported frequencies in called minority variants compared with the spiked frequencies ( Figure 5). This observation was consistent across all datasets and all variant callers.
Figure 3.

Heat maps illustrating tool specific concordance for the first artificial dataset.

The red tiles represent variants detected by each caller from the list of 166 variant positions. The panels are arranged left to right A– H in the order of increasing sample coverage (20,50,100,500,1000,2000,5000 and 10,000). The “not called” column in each panel represents the variants that were not identified by any of the variant callers.

Figure 4.

A summary of the relationship between sample coverage and sensitivity ( A– C) and between sample coverage and precision ( D– F). The x-axis shows the sample coverage and the y-axis represents the sensitivity and precision respectively. Sensitivity of the callers rose gradually from low- to high-coverage samples. In contrast, precision was variable for FreeBayes calls while relatively high for the other three callers.

Figure 5.

Box plots showing the distribution of frequencies between the spiked variants and the corresponding called variant for each variant caller at each coverage ( A– H) for the first dataset.


Discussion

Detecting and reporting minority variant calls is challenging, given that low-frequency calls occur at frequencies similar to errors generated by sequencing and PCR reactions. Recent studies have linked the sharing of minority variants with transmission patterns, and hence it is important to distinguish actual minor variants from spurious variant calls. Minority variant callers use different detection algorithms and statistics, each of which attempts to optimize an aspect of the variant calling process. Therefore, there can be disparities between what is reported by a given minority variant caller, given datasets of varying sequencing depths and error profiles.

This study aimed to identify the proportion of positions that were recognized as variants by a set of tools using three artificial datasets of varying coverage and error profiles. Concordance accuracy and inter-caller concordance measures were dependent on the sample coverage and error profile. Sensitivity for the majority of the tools was positively correlated with depth of coverage, as observed previously ( Spencer ) in a study that investigated the performance of methods used to detect low-frequency variants. It is important to note that the tools provide different performance metrics depending on the variant frequency threshold ( Supplementary File 5).

In the first artificial dataset, VarDict detected true positive variants with comparably good performance (sensitivity 43.4% – 72.3%), though it was marginally invariant to changes in average coverage above 20. VarDict has in-built features that could contribute to its efficient performance; it can activate an "amplicon calling mode" that filters out amplicon-biased variants and mispaired primers as PCR artefacts. A similar pattern was observed with LoFreq, where sensitivity was not significantly affected by depth of coverage. VarScan2 was more affected by coverage and maintained average sensitivity (6.6% – 58.4%). Applying filters in VarScan has been reported to improve sensitivity by reducing the number of false positives ( Hofmann ; Koboldt ). FreeBayes' trade-off between sensitivity and precision was also reported by other studies ( Hwang ; Sandmann ). The use of caller-specified filters could have enhanced the sensitivity of some callers, but adopting default parameters allows an equivalent assessment of tool performance. Moreover, exploring all possible combinations of tuning parameters is challenging, time-consuming and sometimes impractical.

Based on artificial reads from the second dataset, FreeBayes performed comparatively better than the other tools, with a very low false positive rate and better sensitivity (46.4% – 94.6%). This suggests that FreeBayes is potentially useful in identifying minority variants when sample data come from reads with a low error profile, and implies that the reported error rates reflect tool performance rather than input quality alone. Specificity of a caller is its ability to correctly predict the absence of a variant. The variant callers rely on high specificity to minimize the number of false positive calls, thereby reducing post-call filtering, but this can consequently filter out true low-frequency variants. Moreover, high accuracy measures demonstrate the reliability of a variant caller in correctly identifying true variants.

All the minority variant callers reported slightly lower frequencies in called variants compared to the frequencies of the original spiked variants ( Figure 5). This could be explained by the fact that many of the callers are tuned to report lower differences in the calls owing to stringent pre-processing criteria; a thorough investigation of this observation is therefore required.

In the absence of an explicit error model from samples of heterogeneous sequencing quality, combining at least three tools when identifying minority variants could potentially assist in filtering out errors from low-frequency variants. Given that there are no definitive data and next-generation sequencing pipeline standards for variant calling approaches specific to viruses, there are opportunities to develop robust methods and tools that strike a balance between detecting errors and true minority variants from field virus samples of differing sequencing quality.
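As a concrete reading of the "combine at least three tools" suggestion, the sketch below keeps a candidate variant only when three or more of the four callers report it. The call sets shown are hypothetical examples, not data from this study.

```python
# Hedged sketch of a simple >=3-of-4 ensemble filter for candidate variants.
from collections import Counter

def consensus_calls(calls_by_caller, min_callers=3):
    votes = Counter()
    for calls in calls_by_caller.values():
        votes.update(set(calls))          # one vote per caller per variant
    return {variant for variant, n in votes.items() if n >= min_callers}

# Hypothetical (position, alternate base) call sets for the four tools:
calls = {
    "freebayes": {(1204, "A"), (3310, "T"), (8891, "C")},
    "lofreq":    {(1204, "A"), (8891, "C")},
    "vardict":   {(1204, "A"), (3310, "T"), (8891, "C")},
    "varscan":   {(1204, "A"), (8891, "C")},
}
print(consensus_calls(calls))             # {(1204, 'A'), (8891, 'C')}
```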

Data availability

The data analysis scripts and datasets used in the analysis are available from our institutional Dataverse repository: http://dx.doi.org/10.7910/DVN/ZIO43M ( Mohammed & Githinji, 2018). Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
References (30 in total)

1.  Quasispecies diversity determines pathogenesis through cooperative interactions in a viral population.

Authors:  Marco Vignuzzi; Jeffrey K Stone; Jamie J Arnold; Craig E Cameron; Raul Andino
Journal:  Nature       Date:  2005-12-04       Impact factor: 49.962

2.  RNA virus populations as quasispecies.

Authors:  J J Holland; J C De La Torre; D A Steinhauer
Journal:  Curr Top Microbiol Immunol       Date:  1992       Impact factor: 4.291

3.  VarScan: variant detection in massively parallel sequencing of individual and pooled samples.

Authors:  Daniel C Koboldt; Ken Chen; Todd Wylie; David E Larson; Michael D McLellan; Elaine R Mardis; George M Weinstock; Richard K Wilson; Li Ding
Journal:  Bioinformatics       Date:  2009-06-19       Impact factor: 6.937

4.  Performance of common analysis methods for detecting low-frequency single nucleotide variants in targeted next-generation sequence data.

Authors:  David H Spencer; Manoj Tyagi; Francesco Vallania; Andrew J Bredemeyer; John D Pfeifer; Rob D Mitra; Eric J Duncavage
Journal:  J Mol Diagn       Date:  2013-11-05       Impact factor: 5.568

5.  UpSet: Visualization of Intersecting Sets.

Authors:  Alexander Lex; Nils Gehlenborg; Hendrik Strobelt; Romain Vuillemot; Hanspeter Pfister
Journal:  IEEE Trans Vis Comput Graph       Date:  2014-12       Impact factor: 4.579

6.  Measurements of Intrahost Viral Diversity Are Extremely Sensitive to Systematic Errors in Variant Calling.

Authors:  John T McCrone; Adam S Lauring
Journal:  J Virol       Date:  2016-07-11       Impact factor: 5.103

7.  Detailed simulation of cancer exome sequencing data reveals differences and common limitations of variant callers.

Authors:  Ariane L Hofmann; Jonas Behr; Jochen Singer; Jack Kuipers; Christian Beisel; Peter Schraml; Holger Moch; Niko Beerenwinkel
Journal:  BMC Bioinformatics       Date:  2017-01-03       Impact factor: 3.169

8.  Evaluation of variant detection software for pooled next-generation sequence data.

Authors:  Howard W Huang; James C Mullikin; Nancy F Hansen
Journal:  BMC Bioinformatics       Date:  2015-07-29       Impact factor: 3.169

9.  Evaluation of Nine Somatic Variant Callers for Detection of Somatic Mutations in Exome and Targeted Deep Sequencing Data.

Authors:  Anne Bruun Krøigård; Mads Thomassen; Anne-Vibeke Lænkholm; Torben A Kruse; Martin Jakob Larsen
Journal:  PLoS One       Date:  2016-03-22       Impact factor: 3.240

10.  VarDict: a novel and versatile variant caller for next-generation sequencing in cancer research.

Authors:  Zhongwu Lai; Aleksandra Markovets; Miika Ahdesmaki; Brad Chapman; Oliver Hofmann; Robert McEwen; Justin Johnson; Brian Dougherty; J Carl Barrett; Jonathan R Dry
Journal:  Nucleic Acids Res       Date:  2016-04-07       Impact factor: 16.971

Citing articles (4 in total)

1.  Evaluating assembly and variant calling software for strain-resolved analysis of large DNA viruses.

Authors:  Zhi-Luo Deng; Akshay Dhingra; Adrian Fritz; Jasper Götting; Philipp C Münch; Lars Steinbrück; Thomas F Schulz; Tina Ganzenmüller; Alice C McHardy
Journal:  Brief Bioinform       Date:  2021-05-20       Impact factor: 11.622

2.  Detection of minor variants in Mycobacterium tuberculosis whole genome sequencing data.

Authors:  Sander N Goossens; Tim H Heupink; Elise De Vos; Anzaan Dippenaar; Margaretha De Vos; Rob Warren; Annelies Van Rie
Journal:  Brief Bioinform       Date:  2022-01-17       Impact factor: 11.622

3.  Genomic epidemiology of SARS-CoV-2 under an elimination strategy in Hong Kong.

Authors:  Haogao Gu; Ruopeng Xie; Dillon C Adam; Joseph L-H Tsui; Daniel K Chu; Lydia D J Chang; Sammi S Y Cheuk; Shreya Gurung; Pavithra Krishnan; Daisy Y M Ng; Gigi Y Z Liu; Carrie K C Wan; Samuel S M Cheng; Kimberly M Edwards; Kathy S M Leung; Joseph T Wu; Dominic N C Tsang; Gabriel M Leung; Benjamin J Cowling; Malik Peiris; Tommy T Y Lam; Vijaykrishna Dhanasekaran; Leo L M Poon
Journal:  Nat Commun       Date:  2022-02-08       Impact factor: 14.919

4.  Tool evaluation for the detection of variably sized indels from next generation whole genome and targeted sequencing data.

Authors:  Ning Wang; Vladislav Lysenkov; Katri Orte; Veli Kairisto; Juhani Aakko; Sofia Khan; Laura L Elo
Journal:  PLoS Comput Biol       Date:  2022-02-17       Impact factor: 4.475

