
Discovery of Variants Underlying Host Susceptibility to Virus Infection Using Whole-Exome Sequencing.

Gabriel A Leiva-Torres1,2,3, Nestor Nebesio1,2,3, Silvia M Vidal4,5,6.   

Abstract

The clinical course of any viral infection differs greatly among individuals. This variation results from viral, host, and environmental factors. The identification of host genetic factors influencing inter-individual variation in susceptibility to several pathogenic viruses has tremendously increased our understanding of the mechanisms and pathways required for immunity. Next-generation sequencing of whole exomes represents a powerful tool in biomedical research. In this chapter, we briefly introduce whole-exome sequencing in the context of genetic approaches to identify host susceptibility genes to viral infections. We then describe general aspects of the workflow for whole-exome sequence analysis, together with the tools and online resources that can be used to identify and annotate variant calls and then prioritize them for their potential association with phenotypes of interest.

Keywords:  Antiviral immunity; Exome; Gene annotation; Host genetics; Read depth; Sequence alignment; Variant annotation; Variant calling; Whole-exome sequencing

Year:  2017        PMID: 28808973      PMCID: PMC7120756          DOI: 10.1007/978-1-4939-7237-1_14

Source DB:  PubMed          Journal:  Methods Mol Biol        ISSN: 1064-3745


Introduction

Value and Genetic Approaches to Identify Host Susceptibility Genes to Virus Infection

A characteristic feature of human infections, including virus infections, is that only a proportion of exposed individuals develop clinical disease. Even during the 1918 influenza pandemic, the more recent human immunodeficiency virus (HIV) epidemic, or the severe acute respiratory syndrome coronavirus (SARS-CoV) epidemic, only a proportion of individuals succumbed to infection [1, 2]. Conversely, widespread pathogens that are innocuous for most of the population, such as herpes simplex virus type 1 (HSV-1), can be fatal to a very few [3]. It is now well established that host genetic variation is an important component of the varied onset, severity, and outcome of infectious disease. Such data have provided important insights into the pathogenesis of virus infections, shedding light on antiviral mechanisms required for host defense. Several different, yet complementary, approaches to the identification of genetic variation important in infectious disease progression have been taken. By far the most common approach has been to look for association in candidate genes using case–control studies. These studies have highlighted a few common, high-penetrance (see Table 1 for definitions of terms) human genetic variants associated with resistance to infection and disease due to virus receptor polymorphisms. A homozygous 32 base-pair deletion in the chemokine receptor 5 gene (CCR5 Δ32) provides near-complete protection against HIV-1 infection [4], whereas individuals homozygous for nonsense mutations in the fucosyltransferase 2 (FUT2) gene are almost completely protected from experimental and natural infections with norovirus [5]. In a second approach, genome-wide linkage analyses paired with candidate-gene approaches have led to the identification of rare, large-effect genetic variants in susceptibility to infection with pathogens segregating in families.
An excellent example is the dissection of the genetic architecture of childhood herpes simplex encephalitis (HSE), a rare, life-threatening complication of primary infection with HSV-1 [6]. A body of elegant studies has revealed that children with mutations in the TLR3-UNC93B-TRIF-TBK1-TRAF3-IRF3 pathway are particularly susceptible to HSE [7], owing to impaired CNS-intrinsic, TLR3-dependent IFN-α/β and IFN-λ immunity to HSV-1 [8]. Candidate-gene approaches, however, have been limited by their reliance on hypotheses based on often incomplete biological knowledge.
Table 1

Definition of terms (in alphabetic order)

Term: Meaning
Haplotype: A set of alleles that commonly segregate together, defined as a region of extended linkage disequilibrium, which in humans is often up to 100 kb in length.
Indel: Insertion or deletion in a genome; the second most common type of variation after SNPs.
Minor allele frequency (MAF): The frequency at which the second most common allele occurs in a population.
Penetrance: The proportion of individuals with a mutation or risk variant who have the disease. Penetrance is incomplete when individuals carrying pathogenic mutations manifest no disease phenotype.
Rare allele: Allele present with MAF <1% (PMID: 19293820).
SNP: Single nucleotide polymorphism. Variation of a single nucleotide base, with the minor allele present in at least 1% of alleles in the population.
SNV: Single nucleotide variant. Minor allele frequency undefined.
The sequencing of the human genome and the international HapMap project [9-11] led the way to Genome-Wide Association Studies (GWAS) [12]. This approach does not require a prior hypothesis. Using large, well-characterized cohorts of cases and controls, the whole genome is interrogated with a large set of genetic variants to detect possible associations between a variant and the disease trait. One of the most remarkable successes of GWAS in infectious diseases was the identification of IFNL3 variants associated with the clearance of hepatitis C virus (HCV) following treatment (ribavirin and IFN-α) [13-15] or with spontaneous HCV clearance [16, 17], highlighting the importance of IFN-λ3 signaling in innate control of HCV [18]. GWAS applied to other viral infections have confirmed a major role for HLA genes in host susceptibility to HIV, dengue, and hepatitis B viruses and identified several new risk loci [19-21]. However, except for the HCV example above, non-HLA loci often span numerous linked genes and have modest effect sizes, challenging the identification of causal genes. Interestingly, these loci seem to behave in a pathogen-specific fashion, possibly delineating host-pathogen interactions that are specific to a given virus infection.

Power and Constraints of Whole-Exome Sequencing

In the past few years, the advent of next-generation sequencing (NGS) technologies, such as whole-exome sequencing (WES), has revolutionized the biomedical field, including the discovery of many new mutations in patients with unexplained infections often seen at the immunodeficiency clinic [22-24]. WES provides one-step, simultaneous interrogation of virtually all exonic and adjacent intronic sequences, and has been remarkably successful both in a diagnostic setting (clinical exome sequencing) and as a discovery tool (research exome sequencing) [25, 26]. These studies have been most effective for the discovery of rare, high-penetrance protein-coding variants underlying presumed monogenic disorders. A recent report counted that, of about 300 primary immunodeficiencies characterized at the single-gene level, close to one-third have been identified by NGS in the past 5 years [27]. WES discoveries have provided fresh insights into the mechanisms that control the development, function, and regulation of immune cells during the response to infection (recently reviewed in [26, 28]). Notably, they have highlighted (1) pathways that are required for general protection against infection, generally involving genetic blocks in the T/B-lymphocyte differentiation program or resulting in the absence of specific immune cells, and (2) pathways that are required for the response to narrow groups of pathogens, somewhat reminiscent of the infection-specific risk loci mapped by GWAS. An example of the latter was the discovery of compound heterozygous mutations in IRF7 in a child suffering from life-threatening influenza [29]. Each parent was heterozygous for a single mutated allele, indicating autosomal-recessive segregation of the IRF7 deficiency. Detailed biochemical analysis indicated that both alleles were loss-of-function mutations, consistent with this mode of inheritance.
Mechanistically, IRF7 deficiency was linked both to the lack of IFN-α production by the patient's plasmacytoid cells challenged with influenza virus and to the lack of intrinsic antiviral immunity in patient-specific fibroblasts and pulmonary epithelial cells derived from induced pluripotent stem cells (iPSC). This study represented the first demonstration of a genetic cause for severe influenza in humans and may well pave the way for the discovery of other influenza susceptibility genes in the IRF7 pathway, akin to the mutations in the TLR3 pathway underlying HSE. The example above illustrates critical requirements for the successful application of WES, including variant prioritization and variant validation. The study design requires a substantial body of previous knowledge about the phenotype, including its prevalence in the general population and its penetrance, to help in surmising the mode of inheritance [27, 30]. This will dictate the selection of samples (see Note). For situations in which there is a single affected case and no family history, sequencing the unaffected parents (as for the IRF7 deficiency) permits efficient discovery of de novo mutations and compound heterozygous genotypes. The availability of multiple families with very similar clinical phenotypes substantially increases the power for gene discovery. However, prioritization of disease-causing variants remains one of the main challenges of WES due to the sheer number of variants found in individual exomes. The exome has traditionally been defined as the sequence encompassing all exons of protein-coding genes in the genome and covers between 1 and 2% of the genome [31-33]. Yet this portion houses 85% of known disease-causing variants [34, 35]. An individual exome typically harbors thousands of variants, relative to a reference genome, that are predicted to lead to nonsynonymous amino acid substitutions, alterations of conserved splice-site residues, or small insertions or deletions.
As presented below, various methods exist to identify which variants deleteriously affect the function of individual proteins. However, each genome is thought to harbor about 100 genuine loss-of-function variants, with about 20 genes completely inactivated [36, 37]. Hence, rigorous criteria, including the absence of the candidate variant genotype in individuals without the clinical phenotype together with robust experimental validation, have been proposed to validate disease-causing variants [38]. Whereas study design and experimental approaches need to be developed on a case-by-case basis, below we present the reagents and methodology for the discovery and validation of candidate genetic variants in a typical exome-sequencing pipeline.

Materials

In addition to DNA samples from cases, their families, and the appropriate controls, the materials required for WES are a well-annotated reference genome, whole-exome capture DNA libraries, and computing facilities.

Annotated Reference Genome

The human reference assembly defines a standard upon which other whole-genome studies are based. The latest build of the human reference genome provided by the Genome Reference Consortium comprises ~3 × 10^9 bases of coding and noncoding sequence. The exome is defined as all the exons of the ~20,000 protein-coding genes in the human genome plus all the exons pertaining to microRNA, small nucleolar RNA, and large intergenic noncoding RNA genes [39]. This information is not static, and projects such as GENCODE [40] and RefSeq [41] continue to provide comprehensive annotation of both protein-coding genes and noncoding transcripts. The latest assembly of the human reference genome (GRCh38) can be accessed via the European Bioinformatics Institute and Wellcome Trust Sanger Institute (Ensembl) [42] or the University of California Santa Cruz (UCSC) [43] genome browsers.

Whole-Exome Capture Library

Exome capture essentially consists of fragmenting a DNA sample and hybridizing the fragments to complementary oligonucleotide baits whose sequences have been designed to target exonic regions. After binding to genomic DNA, these probes are pulled down and the captured fragments are PCR amplified through the addition of adapters, allowing exon regions to be selectively sequenced. The most common and efficient strategies are the in-solution capture methods offered by Roche/NimbleGen's SeqCap EZ Human Exome Library and Agilent's SureSelect Human All Exon. Several publications have compared the specificity and sensitivity of these platforms [44-46]. The NimbleGen kit has the greatest bait density of any platform and uses short (55–105 bp), overlapping baits to cover the target region [46]. This approach covers the target region efficiently, detects variants sensitively, and has a high level of specificity; indeed, the NimbleGen kit shows fewer off-target reads than other platforms [46]. Importantly, this bait design has been found to show greater genotype sensitivity than the other platforms in difficult-to-sequence regions, such as areas of high GC content [44]. The Agilent kit is the only platform to use RNA probes. The baits are longer than those used in the NimbleGen platform (114–126 bp), and the corresponding target sequences are adjacent to one another rather than overlapping. This design has been found to be good at identifying insertions and deletions (indels), because longer baits can tolerate larger mismatches [45].

High-Performance Computing Facility/Network for Data Storage and Maintenance of Pipelines for WES Analysis

Massively parallel short-read sequencing on NGS platforms typically results in the production of ~50–100 million reads per exome. This large volume of reads needs to be analyzed and stored. Moreover, software packages work best when tools and sequencing data are immediately available on the same network, as accessing an external storage location for sequencing data slows down the process. High-performance computing (HPC) infrastructure and IT professionals are needed for access to and storage of the generated and analyzed data. The most common infrastructure components include HPC resources ranging from high-performance computing clusters to cloud computing resources, equipped with batch (queuing) systems and commonly connected to shared network-attached storage. Academic researchers have access to these services through national infrastructures, which provide HPC, storage, ultra-high-speed network connectivity, and remote access to research data. These systems are equipped with actively maintained bioinformatics suites for automation of WES analysis. The most widely used variant callers include the Sequence Alignment/Map (SAM) tools [47] and the Genome Analysis Tool Kit (GATK) [48, 49] developed by the Broad Institute; the latter was found to be the most efficient NGS variant caller in comparative studies [50] (see Table 2 for commonly used WES tools and their weblinks).
Table 2

Commonly used tools and weblinks for the whole-exome sequence data analysis pipeline

Tool: Weblink
Genome browsers
Ensembl: www.ensembl.org
UCSC: http://genome.ucsc.edu
Quality control
FastQC: http://www.bioinformatics.babraham.ac.uk/projects/fastqc/
Short-read mapping
Bowtie: http://bowtie-bio.sourceforge.net/index.shtml
Bfast: http://bfast.sourceforge.net
Mosaik: https://github.com/wanpinglee/MOSAIK
BWA: http://bio-bwa.sourceforge.net/
NGS data manipulation (mark duplicates, merge files)
Picard tools: https://broadinstitute.github.io/picard/index.html
SAMTools: http://www.htslib.org/doc/samtools.html
Variant calling
GATK: https://software.broadinstitute.org/gatk/
SAMTools: http://www.htslib.org/doc/samtools.html
Variant annotation: (1) Coding effect prediction
SnpEff: http://snpeff.sourceforge.net/
VEP: http://ensembl.org/info/docs/tools/vep/index.html
SIFT: http://sift.jcvi.org/
PolyPhen2: http://genetics.bwh.harvard.edu/pph2/
Variant annotation: (2) Conservation
PhyloP: http://compgen.bscb.cornell.edu/phast
GERP++: http://gvs.gs.washington.edu/GVS147/
CADD: http://cadd.gs.washington.edu/
Variant annotation: (3) Gene-level
MSC: http://lab.rockefeller.edu/casanova/MSC
GAVIN: https://molgenis20.gcc.rug.nl/
Variant annotation: (4) Integrative
ANNOVAR: http://annovar.openbioinformatics.org/en/latest/user-guide/download/
Knowledge-based annotation
HGPS: http://hgc.rockefeller.edu/
KEGG: www.genome.jp/kegg/
REACTOME: www.reactome.org/
MPO: www.informatics.jax.org/humanDisease.shtml
GEO: www.ncbi.nlm.nih.gov/geoprofiles
GXA: www.ebi.ac.uk/gxa
BioGPS: http://biogps.org
STRING: http://string-db.org
ToppGene: https://toppgene.cchmc.org
GeneMania: http://genemania.org

Methods

A typical WES analysis pipeline consists of the following main steps: (1) raw data QC and preprocessing, (2) sequence alignment (mapping), (3) post-alignment processing, (4) variant analysis, (5) variant prioritization, and (6) variant validation (Fig. 1).
Fig. 1

Basic workflow and tools for a whole-exome sequencing project. Following sequencing, reads undergo quality assessment and alignment against a reference genome, followed by variant identification. Detected variants are annotated to infer their biological relevance, then filtered based on read quality and frequency in the population, and finally prioritized based on the genetic hypothesis for the trait under study and knowledge about the candidate gene/protein. Ultimately, experimental validation is required to confirm variant discovery. File format outputs are indicated on the right.

Raw Data Quality Control (QC) and Preprocessing

Effective QC is critical for reliable analysis, since poor-quality raw data affect downstream results. The raw sequence output format for NGS is the FASTQ format (see Table 3 for commonly used file formats in WES), which incorporates (1) a text-based representation of the sequence (FASTA format) and (2) a per-base quality score for the read provided by the sequencing instrument. The latter is a Phred-like score [51] assigned by an algorithm of the sequencing instrument that estimates the probability that a base is called incorrectly. Several tools have been developed for QC of raw sequence data. The most commonly used is the Java program FastQC [52]; it generates diagnostic plots such as distributions of base quality scores, GC content, N content, and sequence duplication levels. Standard preprocessing procedures, including adapter removal and trimming of low-quality bases at the ends of reads, are then performed before alignment.
Table 3

Description of commonly used file formats in WES workflows

Format: Characteristics
FASTQ file (.fastq): Text file that stores nucleotide sequences and quality scores for downstream analysis. Each record has four lines: (1) a sequence identifier initialized with "@"; (2) the nucleotide sequence of the read (ACGT); (3) a separator line initialized with "+"; (4) the quality score of the corresponding read, encoded with ASCII characters.
Sequence alignment/map (SAM) file (.sam): Text file that stores alignment information for short reads against the reference genome. The SAM file contains header lines initialized with "@" and multiple lines for the sequence alignments.
Binary alignment/map (BAM) file (.bam): Binary file (stored in a format that is only computer readable) containing the same information as the SAM file, compressed to reduce storage disk space and increase performance.
Browser extensible data (BED) file (.bed): Tab-delimited text file in which each line represents a single genomic region, such as an exon. BED files provide the coordinates of those regions, including chromosome and start and end positions; additional fields can be added.
Variant call format (VCF) file (.vcf): Text file containing meta-information lines (i.e., file format, date, or other information about the overall experiment), a header line naming the columns (chromosome, position, ID, reference allele, alternative allele, quality, filter, info), and then data lines each describing a position in the genome. It is the standardized text format for representing SNP, indel, and structural variation calls.
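The FASTQ quality encoding described above can be sketched in a few lines of Python. The read and its quality string below are invented for illustration, and real pipelines rely on FastQC and dedicated trimmers rather than hand-rolled parsers; this sketch only shows how the Phred+33 encoding works.

```python
# Minimal sketch: parse one FASTQ record and decode Phred+33 quality scores.
# The example read is made up for illustration.

def parse_fastq_record(lines):
    """Return (read_id, sequence, qualities) from the four lines of a FASTQ record."""
    header, seq, plus, qual = lines
    assert header.startswith("@") and plus.startswith("+")
    # Phred+33 encoding: a quality Q is stored as the ASCII character chr(Q + 33)
    phred = [ord(c) - 33 for c in qual]
    return header[1:], seq, phred

def error_probability(q):
    """Probability that the base call is wrong: P = 10^(-Q/10)."""
    return 10 ** (-q / 10)

record = ["@read1", "ACGT", "+", "II5!"]
rid, seq, quals = parse_fastq_record(record)
print(rid, seq, quals)                        # read1 ACGT [40, 40, 20, 0]
print(error_probability(quals[0]))            # 0.0001 (Q40 = 1 error in 10,000)
```

A Q20 base (1% error probability) is a common trimming threshold, which is why low-quality read ends are clipped before alignment.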

Sequence Alignment Mapping

After raw data QC and preprocessing, the next step is to map the reads to the reference genome. This is arguably the most crucial, and the most time-consuming, operation of most WES analysis pipelines. The computational challenge resides in finding an alignment algorithm that tolerates imperfect matches, where genomic variation may occur, while being able to align millions of reads at reasonable speed. To achieve high speed, most alignment algorithms are based on an effective compression algorithm, the Burrows–Wheeler transformation (BWT) [53]. Many short-read aligners have been developed using this method: Bowtie [54], Bfast [55], Mosaik [56], and BWA [57]. They vary considerably in speed and accuracy, which is likely to affect the identification of structural variation and influence variant calling. BWA is the most common choice for WES alignment [58]. It allows gapped alignment while using very little memory, performs separate alignments for both reads of a paired-end lane in multi-threaded execution, and unifies the results in a single mapping file in the Sequence Alignment/Map (SAM) format [47].
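To illustrate the compression trick these aligners share, here is a deliberately naive Python sketch of the Burrows–Wheeler transform and its inverse. Production aligners such as BWA build the transform via suffix arrays and pair it with an FM-index for fast backward search; none of that machinery is shown here, only the transform itself.

```python
# Illustrative sketch of the Burrows-Wheeler Transform (BWT) underlying
# aligners such as BWA and Bowtie. Naive O(n^2 log n) construction; real
# aligners use suffix arrays plus an FM-index, omitted here for brevity.

def bwt(text):
    """Sort all rotations of text+'$' and return the last column."""
    s = text + "$"                                # '$' marks the end and sorts first
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last_col):
    """Reconstruct the original text by repeated sort-and-prepend."""
    table = [""] * len(last_col)
    for _ in range(len(last_col)):
        table = sorted(last_col[i] + table[i] for i in range(len(last_col)))
    return next(r for r in table if r.endswith("$"))[:-1]

transformed = bwt("ACAACG")
print(transformed)                                # GC$AAAC
assert inverse_bwt(transformed) == "ACAACG"       # the transform is reversible
```

The transformed string groups similar characters together, which is what makes it both highly compressible and searchable with the FM-index backward-search used by BWA and Bowtie.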

Post-Alignment Processing

To enhance the quality of the alignments for more accurate variant detection, the pipeline carries out three "cleanup" procedures: read duplicate removal, base quality score recalibration (BQSR), and indel realignment. A final step provides important metrics to assess the quality of the data.

Read Duplicate Removal

Many of the reads from massively parallel sequencing instruments are identical (same sequence, start site, and orientation), indicating PCR artefacts [59]. These duplicates may bias the estimation of variant allele frequencies, so it is advisable to remove them prior to variant calling. Programs such as the rmdup function from SAMTools [47] or MarkDuplicates from Picard Tools [49] apply optimal fragment-based duplicate identification and provide unique identifiers for each read group, i.e., the set of reads generated from a single run of an instrument. This minimizes experimental noise, reduces the number of false calls, and improves the accuracy of the search for variants.
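The duplicate-marking logic can be sketched as follows. The tuple-based read representation and the quality tie-breaking rule are simplifications introduced for this example; Picard MarkDuplicates operates on full SAM records and considers unclipped positions and mate information.

```python
# Hedged sketch of fragment-based duplicate marking, in the spirit of Picard
# MarkDuplicates: reads sharing chromosome, start position, and orientation
# are treated as PCR duplicates, and only the highest-quality copy is kept.
# Reads here are simplified tuples, not real SAM records.

from collections import defaultdict

def mark_duplicates(reads):
    """reads: list of (name, chrom, pos, strand, sum_base_quality).
    Returns the set of read names flagged as duplicates."""
    groups = defaultdict(list)
    for read in reads:
        name, chrom, pos, strand, qual = read
        groups[(chrom, pos, strand)].append(read)
    duplicates = set()
    for group in groups.values():
        # Keep the read with the highest summed base quality, flag the rest
        group.sort(key=lambda r: r[4], reverse=True)
        duplicates.update(r[0] for r in group[1:])
    return duplicates

reads = [
    ("r1", "chr1", 100, "+", 3500),
    ("r2", "chr1", 100, "+", 3650),   # same position/orientation, higher quality
    ("r3", "chr1", 250, "-", 3400),
]
print(sorted(mark_duplicates(reads)))   # ['r1']
```

Note that modern pipelines usually mark duplicates with a SAM flag rather than deleting them, so downstream tools can choose to ignore or include them.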

Indel Re-Alignment

Small insertions or deletions (indels) in coding regions have been strongly associated with human diseases, but accurate indel calling remains difficult [60, 61]. Local realignment around indels is therefore an important step. This process searches for a consensus alignment among all reads spanning a deletion, an insertion, or both, (1) to improve indel detection sensitivity and accuracy and (2) to reduce false variant calls due to misalignment of flanking bases. The alignment is improved because each read is evaluated together with the other sequences in its local context. The HaplotypeCaller program from GATK offers an efficient solution to indel detection by generating a local de novo assembly of aligned reads prior to indel calling, improving indel detection [62]. As presented in Subheading 4, HaplotypeCaller can call SNVs and indels simultaneously, which improves indel detection while producing more accurate variant calls.

BQSR

The per-base quality scores (Phred scores), which convey the probability that the called base in the read is the true sequenced base [51], can be quite inaccurate and co-vary with features such as sequencing technology, machine cycle, and sequence context. Inaccurate quality scores propagate into faulty SNP discovery [51]. BQSR is a process in which machine learning tools are applied to model these errors empirically and adjust the quality scores accordingly. One of the most commonly used BQSR programs is BaseRecalibrator from the GATK suite, which takes alignment files and calculates a recalibrated quality score for each base, to be used in variant calling. Recalibrated scores better reflect the empirical probability of mismatches to the reference genome and thereby provide more accurate quality scores [48, 62].
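The intuition behind recalibration can be shown with a toy empirical model. Binning on reported quality alone is an assumption made for brevity; GATK BaseRecalibrator additionally models covariates such as machine cycle and dinucleotide context, and excludes known polymorphic sites from the error counts.

```python
import math

# Toy illustration of the idea behind base quality score recalibration:
# observed mismatch rates against the reference replace the machine-reported
# Phred score with an empirical one. This sketch bins on reported quality
# only; real BQSR (GATK BaseRecalibrator) models further covariates.

def empirical_quality(observations):
    """observations: list of (reported_q, is_mismatch) per aligned base.
    Returns {reported_q: recalibrated_q}."""
    bins = {}
    for q, mism in observations:
        total, errors = bins.get(q, (0, 0))
        bins[q] = (total + 1, errors + int(mism))
    recal = {}
    for q, (total, errors) in bins.items():
        # Add-one smoothing avoids an infinite quality when no errors are seen
        p_err = (errors + 1) / (total + 2)
        recal[q] = round(-10 * math.log10(p_err))
    return recal

# 1000 bases reported at Q30, but 10 mismatch the reference (~1% error, i.e. ~Q20)
obs = [(30, i < 10) for i in range(1000)]
print(empirical_quality(obs))   # {30: 20}
```

The example shows the typical failure mode BQSR corrects: an instrument optimistically reporting Q30 for bases that empirically behave like Q20.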

Metrics

Biases in sample preparation, sequencing, genomic alignment, and assembly can result in genomic regions lacking coverage (i.e., gaps) or in regions with much higher coverage than theoretically expected. Hence, to evaluate whether the data are of sufficient quality to discover variants with reasonable confidence, two important metrics are the breadth and the depth of coverage of the target genome. Breadth of coverage denotes the percentage of bases that are sequenced a given number of times. Depth of coverage represents the number of reads that align at a given position and is often quoted as average raw or aligned read depth. For example, a genome sequencing study may sequence a genome to 50× average depth and achieve 95% breadth of coverage of the reference genome at a minimum depth of ten reads. The flagstat command from SAMtools [47] or DepthOfCoverage from GATK [48, 62] calculates the fraction of reads that successfully mapped to the reference, with numbers and percentages of mapped and unmapped reads.
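Both metrics are straightforward to compute from alignment intervals, as the sketch below shows. The coordinates are invented for the example; a real pipeline would obtain per-position depths from a BAM file via samtools or GATK rather than from Python tuples.

```python
# Sketch of depth and breadth-of-coverage calculations over a toy target
# region, illustrating the two metrics described above. Alignment intervals
# are invented for the example.

def coverage_metrics(read_intervals, target_start, target_end, min_depth=1):
    """read_intervals: list of (start, end) half-open alignment coordinates.
    Returns (mean_depth, breadth) over the target region."""
    length = target_end - target_start
    depth = [0] * length
    for start, end in read_intervals:
        for pos in range(max(start, target_start), min(end, target_end)):
            depth[pos - target_start] += 1
    mean_depth = sum(depth) / length
    # Breadth: fraction of target bases covered at least min_depth times
    breadth = sum(1 for d in depth if d >= min_depth) / length
    return mean_depth, breadth

reads = [(0, 8), (2, 10), (4, 10)]
mean_depth, breadth = coverage_metrics(reads, 0, 10, min_depth=2)
print(mean_depth, breadth)   # 2.2 0.8
```

The distinction matters in practice: a 50× mean depth can hide poorly covered exons, which is exactly what the breadth-at-minimum-depth figure exposes.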

Variant Analysis

Following these read-processing steps, variant analysis consists of three steps: variant calling, annotation, and prioritization. Several open-source tools and variant databases are available to support these steps (Table 4).
Table 4

Databases of human genetic variation

Name Weblink and description
Combined annotation dependent depletion database (CADD)

http://cadd.gs.washington.edu/

Catalog of precomputed scores for all possible SNPs or small Indels of the reference genome and the 1000 Genomes obtained by combining 63 annotations (e.g., SIFT, GERP, others) through a machine-learning framework.

Single nucleotide polymorphism database (dbSNP)

https://www.ncbi.nlm.nih.gov/projects/SNP/

Broad collection of SNPs and Indels submitted by investigators worldwide and curated by NCBI.

Human gene mutation database (HGMD)

http://www.hgmd.org

A catalog of all published gene lesions responsible for human inherited disease.

Exome aggregation consortium (ExAC)

http://exac.broadinstitute.org/

Catalogue of exome variation in 60,706 individuals, some with adult-onset diseases (type 2 diabetes, schizophrenia); patients presenting severe pediatric diseases have been excluded.

1000 Genomes project

http://www.internationalgenome.org/

Catalogue of genome variation with at least 1% frequency in the population based on whole-genome sequencing of 2504 individuals from 26 populations (including study cohorts for adult onset diseases).

NHLBI exome sequencing project (ESP6500)

http://evs.gs.washington.edu/EVS/

Catalogue of variation within 6500 exomes from well-phenotyped populations from various projects, e.g. Severe Asthma Research Project; Pulmonary Arterial Hypertension population; Acute Lung Injury cohort; Cystic Fibrosis cohort.


Variant Calling

Variant calling means identifying the sites in the sample that statistically differ from the reference genomic sequence. Single nucleotide polymorphisms (SNPs) and indels are detected where the reads collectively provide evidence of variation (see Note). As with alignment tools, several open-source tools are available to identify a high-quality set of variants in WES projects [63]. SAMtools [47] and GATK HaplotypeCaller [48, 62] are widely used in genomic variant analyses. HaplotypeCaller has been found to have high sensitivity for SNP detection and to outperform other pipelines for indels [50, 63]. HaplotypeCaller runs a "reading window" along the reference genome, comparing the reference to the sequenced reads and counting mismatches and indels. These variations from the reference are used as a measure of entropy, or disorder, in the read data. If the level of entropy within the reading window surpasses a cutoff score (the default value can be changed), the window is marked as an active region, which is inspected to generate plausible haplotypes. HaplotypeCaller then uses a Bayesian statistical model to calculate the probability of each genotype, estimating the accuracy of the call with a Phred-like quality score. The results are reported in a standard Variant Call Format (VCF) file.
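A greatly simplified sketch of the Bayesian genotype model at a single site follows. Uniform genotype priors and independent pileup bases are assumptions made for brevity; HaplotypeCaller actually evaluates locally reassembled haplotypes, not individual bases, and its quality scores are derived from the full model.

```python
import math

# Toy Bayesian genotype caller at one diploid site: given pileup bases and a
# per-base error rate, compute likelihoods for the three genotypes and report
# the most probable one with a Phred-scaled confidence margin. Uniform priors
# and base independence are simplifying assumptions.

def call_genotype(ref, alt, pileup, error_rate=0.01):
    def base_likelihood(base, genotype):
        # P(observed base | genotype): each allele contributes with weight 1/2
        p = 0.0
        for allele in genotype:
            p += 0.5 * ((1 - error_rate) if base == allele else error_rate)
        return p

    genotypes = [(ref, ref), (ref, alt), (alt, alt)]
    likelihoods = []
    for gt in genotypes:
        loglik = sum(math.log10(base_likelihood(b, gt)) for b in pileup)
        likelihoods.append((loglik, gt))
    likelihoods.sort(reverse=True)
    best, second = likelihoods[0], likelihoods[1]
    qual = round(10 * (best[0] - second[0]))   # Phred-scaled margin over runner-up
    return "/".join(best[1]), qual

# 10 reads covering the site: 5 support the reference A, 5 support G
gt, qual = call_genotype("A", "G", pileup="AAGAGAGAGG")
print(gt, qual)   # A/G 70
```

With a balanced 5/5 pileup, the heterozygous genotype wins decisively over either homozygous model, mirroring how real callers emit a genotype plus a Phred-scaled quality into the VCF.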

Variant Annotation

Annotation of disease-causing variants involves determining (1) the effect they have on the protein-coding sequence, including synonymous and nonsynonymous changes, stop-gain or stop-loss, consensus splice-site changes for SNPs, and frame shifts or other structural impacts on transcript structure for indels, and (2) the frequency of the variant in the population, as disease-causing variants are expected to be rare. Three major tools are used to classify variants functionally: SnpEff (SNP Effects) [64], VEP (Variant Effect Predictor) [65], and ANNOVAR (Annotate Variation) [66, 67]. SnpEff annotates variants based on their genomic location and predicts coding effects [64], as does VEP, a tool available from the Ensembl genome browser [65]. Besides annotating the functional effects of variants with respect to genes, ANNOVAR has many additional functionalities, such as integrating information from up to 4000 different databases and external resources to annotate variants [67]. For SNPs, these include (1) calculating predicted functional importance scores using SIFT (Sorting Intolerant From Tolerant) [68] and PolyPhen2 (Polymorphism Phenotyping v2) [69] and (2) reporting conservation levels from PhyloP (phylogenetic P-values) [70, 71] and GERP++ (Genomic Evolutionary Rate Profiling) [72]. The CADD (Combined Annotation Dependent Depletion) database is another useful external resource for predicting the deleteriousness of a variant; the CADD score combines information from several resources to score both protein-altering and regulatory variants [73]. New tools are being developed for variant annotation that consider gene-level metrics (e.g., conservation at the gene level, accumulated mutational load) and provide more sensitive scoring of variants [74]. GAVIN (Gene-Aware Variant INterpretation for medical sequencing) classifies variants as benign, pathogenic, or of uncertain significance [75].
The MSC (Mutation Significance Cutoff) [76] generates quantitative, gene-specific phenotypic-impact cutoff values above which a variant is considered pathogenic, with a 98% true positive detection rate. To determine variant frequency, ANNOVAR links to external databases such as the dbSNP database [77, 78] or the Human Gene Mutation Database [79] to identify the presence or absence of a variant (see Table 4 for commonly used databases of human genetic variation). Large-scale genomic studies such as the 1000 Genomes Project [36], the US National Institutes of Health-National Heart, Lung, and Blood Institute (NIH-NHLBI) ESP6500 exome-sequencing project [80], and the Exome Aggregation Consortium [37, 81] have catalogued sequence variants from thousands of exomes and genomes, which serve as a valuable resource for allele frequency estimation. These resources are integrated in ANNOVAR, which can report the alternative allele frequency for newly discovered variants in a WES project. The GATK pipeline also integrates ANNOVAR as an external option for variant annotation and offers the tool VariantAnnotator, which provides additional features such as gene set enrichment analysis for downstream analysis.
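The allele frequencies reported by these catalogues are simply alternate-allele counts divided by the total number of chromosomes sampled. A minimal sketch of that computation from diploid genotype counts (the function name and the example numbers are illustrative assumptions):

```python
def alt_allele_frequency(n_hom_ref, n_het, n_hom_alt):
    """Alternate-allele frequency from diploid genotype counts:
    each heterozygote carries one ALT allele, each ALT homozygote two,
    and each individual contributes two chromosomes to the denominator."""
    alt_alleles = n_het + 2 * n_hom_alt
    total_alleles = 2 * (n_hom_ref + n_het + n_hom_alt)
    return alt_alleles / total_alleles

# 9,900 REF homozygotes, 99 heterozygotes, and 1 ALT homozygote among
# 10,000 genotyped individuals: (99 + 2) / 20,000 = 0.00505
af = alt_allele_frequency(9900, 99, 1)
print(f"{af:.5f}")  # → 0.00505
```

A variant at this frequency (about 0.5%) would fall below the 1% threshold discussed in the next section and be retained as rare.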

Variant Filtration

There are two aspects to variant filtration: (1) filtering out low-quality variants; (2) filtering out common variants, which are present in the general population. Low-quality variants include those with low coverage, low base or mapping quality, or strand bias, as well as those mapping to low-complexity regions or incomplete regions of the reference genome [82]. GATK uses machine learning algorithms (VQSR, or variant quality score recalibration) to learn from each dataset the annotation profile of “good” and “bad” variants [48, 62]. The tool assigns scores (VQSLOD, for variant quality score log-odds) that can be used to set the threshold for filtering out “bad” variants. There is a tradeoff in this process: increasing the specificity decreases the sensitivity of the filtering. VQSR can be applied to SNPs or Indels. The availability of in-house databases of WES variants obtained with the same sequencing technology and analysis pipeline is recommended to exclude variants resulting from systematic errors (see Note ). Under the assumption that common variants are less likely to cause disease than rare ones, it is important to set a minor allele frequency (MAF) threshold based on the disease model of the study. A variant with a MAF greater than 1% is regarded as common; the remainder are considered rare or private to the subject or the kindred studied. Setting the MAF threshold at 1% is recommended and usually filters out over 70% of the variants [83].
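The two filtering passes can be combined in a few lines. The sketch below uses a minimal in-memory variant representation; the field names and thresholds are illustrative assumptions, not the VCF specification or the GATK API.

```python
def filter_variants(variants, min_qual=30.0, max_maf=0.01):
    """Keep variants passing both filters: Phred-scaled call quality at
    or above min_qual, and population minor allele frequency at or below
    max_maf (rare under the 1% convention described in the text)."""
    return [
        v for v in variants
        if v["qual"] >= min_qual and v["maf"] <= max_maf
    ]

variants = [
    {"id": "rs_common",   "qual": 80.0, "maf": 0.25},   # common: dropped
    {"id": "low_quality", "qual": 12.0, "maf": 0.001},  # low QUAL: dropped
    {"id": "candidate",   "qual": 60.0, "maf": 0.0005}, # rare, high QUAL
]
print([v["id"] for v in filter_variants(variants)])  # → ['candidate']
```

In a real pipeline the quality filter would use the VQSLOD tranches produced by VQSR rather than a fixed QUAL cutoff.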

Variant Prioritization

At this point the output is a subset of high-quality, low-frequency, predicted pathogenic variants, which require a customized filtering process depending on the disease trait. The more information gathered on both (1) the phenotype and (2) the gene in which the variant resides, the greater the likelihood of accurately assessing the functional significance of a variant. A deep knowledge of the clinical and cellular phenotype and the prevalence of the trait in the general population, together with an understanding of the familial segregation, are essential in the prioritization of gene variants. For example, a recessively inherited disease variant is likely homozygous, whereas a dominant disease variant is heterozygous. In general, a dominant allele should be absent from variant databases based on healthy controls, or exceedingly rare to allow for reduced penetrance. However, there can be exceptions to these rules. For instance, recessive disease variants can be compound heterozygous. In a cohort, the search for either identical variants or additional rare variants in the same gene can further strengthen the evidence for causality. Variants found in a gene in which other variants have already been associated with a certain phenotype are more likely to be associated with the same phenotype, although this is not always the case. Segregation of the variant with disease status is another key criterion for variant prioritization. This requires appropriate WES control data obtained with the same method from healthy subjects, ideally of the same ethnic origin as the patients. In the case of complete penetrance, the candidate disease-causing variants found in patients cannot be present in unaffected subjects. In the case of incomplete penetrance, the situation is more complex because these hypothetical disease-causing variants can also be present in asymptomatic subjects, including unaffected subjects of the same pedigree.
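The zygosity expectations under each inheritance model can be expressed as a simple genotype check. This single-site sketch (genotypes encoded as ALT-allele counts: 0 = homozygous reference, 1 = heterozygous, 2 = homozygous alternate) is an illustrative assumption, not a full segregation test.

```python
def consistent_with_model(affected_genotype, model):
    """Check whether an affected individual's genotype at a candidate
    variant fits the stated inheritance model. Genotype = ALT allele
    count. Note the exception from the text: compound heterozygotes
    (two different het variants in the same gene) satisfy a recessive
    model but are invisible to this single-site check."""
    if model == "recessive":
        return affected_genotype == 2   # homozygous ALT expected
    if model == "dominant":
        return affected_genotype >= 1   # one ALT copy suffices
    raise ValueError(f"unknown model: {model}")

print(consistent_with_model(2, "recessive"))  # → True
print(consistent_with_model(1, "recessive"))  # → False (see caveat)
print(consistent_with_model(1, "dominant"))   # → True
```

X-linked traits would additionally require sex-aware handling of hemizygous genotypes, which is omitted here.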
At the gene level, it is reasonable to first review variants found in genes that participate in pathways related to the phenotype. This is also true when a phenotypically similar disease exists and its related pathways are known. The HGC (Human Gene Connectome) ranks genes by their biological distance to core genes (those known to be associated with the phenotype), and provides the distances and all possible biological connections between all pairs of human genes based on protein-protein interaction prediction [74, 84]. Genes can be mapped online to KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways [85] or REACTOME pathways [86]. It is also useful to find information about candidate-gene knockout phenotypes. For this, the Mouse Genome Informatics database enables queries for human-mouse disease and MPO (Mammalian Phenotype Ontology) connections using gene symbols as input [87]. Expression of a candidate gene in the tissues or organs of interest is another important criterion for prioritization. GEO (Gene Expression Omnibus) profiles [88], the Expression Atlas [89], and the BioGPS gene annotation portal [90] are excellent resources for this purpose. Knowledge about protein structure, function, and interactions can also help rank candidate genes. UniProtKB (the UniProt Knowledgebase) collects information from several databases, including curated protein sequences and structures with links to annotations of genomic variants [91]. The STRING database and associated search tools [92] are powerful resources for identifying interacting partners of a candidate gene’s product or for identifying interactions between the products of a set of genes that bear functional variants. The ToppGene [93] and GeneMania [94] web portals are other resources that perform candidate gene prioritization based on the interactome.

Variant Validation

With all the tools available and new ones emerging monthly, variant filtration and prioritization are becoming more automated. A similar trend is also observed in other parts of variant analysis, such as detection and annotation. Regardless, a deep understanding of the biological questions being asked and of the etiology of the disease being studied is crucial for properly choosing the tools and parameters that best suit a study. Ultimately, variant validation requires experimental confirmation at the level of the protein, the cell and, if possible, an animal model to establish causality. This necessitates solid knowledge of the physiology and pathology of the phenotype under study for the design of appropriate experiments relevant to the nature of the protein. Recent breakthroughs in the genetic manipulation of human induced pluripotent stem cells [95] and in CRISPR genome-editing tools [96, 97] permit establishing the causal relationship between the candidate genotype and the clinical phenotype in relevant cell types [98] or organoids [99] representing relevant tissues, even for isolated cases.

Notes

Broadly, the mode of inheritance can be recessive, dominant, or X-linked. Recessive mutations are easier to identify, by filtering for homozygous or compound heterozygous mutations. Dominantly inherited mutations will either be inherited from one of the parents or be de novo; in both cases, dominant mutations should be absent in unaffected family members or matched unrelated controls. Joint application of variant calling software to multiple samples is recommended to reduce false positive variants. Variant calling in regions with fewer reads can also be improved by utilizing reads from multiple samples concurrently. This increases the confidence in any given variant call and makes allele bias and strand bias much easier to detect. The evaluation of family trios can also eliminate low-quality variants, as the majority of variants detected in the child and absent from the parents most likely result from sequence artifacts. Moreover, the accuracy of error detection and variant identification increases with the number of relatives and generations sequenced per family.
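The trio logic described above can be sketched as a small classifier over parental genotypes (ALT-allele counts). The function name and labels are illustrative assumptions; real trio callers model genotype likelihoods jointly rather than applying hard rules.

```python
def flag_trio_variant(child_gt, mother_gt, father_gt):
    """Classify a child's variant site against parental genotypes
    (0 = hom-ref, 1 = het, 2 = hom-alt). Sites where the child carries
    ALT alleles seen in neither parent are flagged: per the text, most
    such calls are sequencing artifacts, a minority true de novo
    mutations."""
    if child_gt == 0:
        return "no variant in child"
    if mother_gt == 0 and father_gt == 0:
        return "de novo candidate / likely artifact"
    return "inherited"

print(flag_trio_variant(1, 0, 0))  # child het, both parents ref
print(flag_trio_variant(1, 1, 0))  # ALT allele present in the mother
```

Extending this to more relatives and generations tightens the same logic, which is why accuracy improves with larger pedigrees.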
References (95 in total; partial list)

1. Loeb M. Genetic susceptibility to West Nile virus and dengue. Public Health Genomics. 2013.
2. Casanova JL. Severe infectious diseases of childhood as monogenic inborn errors of immunity. Proc Natl Acad Sci U S A. 2015.
3. Wu TD, Nacu S. Fast and SNP-tolerant detection of complex variants and splicing in short reads. Bioinformatics. 2010.
4. Takahashi K, Tanabe K, Ohnuki M, et al. Induction of pluripotent stem cells from adult human fibroblasts by defined factors. Cell. 2007.
5. Ciancanelli MJ, Huang SXL, Luthra P, et al. Infectious disease. Life-threatening influenza and impaired interferon amplification in human IRF7 deficiency. Science. 2015.
6. van der Velde KJ, de Boer EN, van Diemen CC, et al. GAVIN: Gene-Aware Variant INterpretation for medical sequencing. Genome Biol. 2017.
7. Li H, Durbin R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009.
8. Abecasis GR, Auton A, Brooks LD, et al. An integrated map of genetic variation from 1,092 human genomes. Nature. 2012.
9. Liu X, Han S, Wang Z, Gelernter J, Yang BZ. Variant callers for next-generation sequencing data: a comparison study. PLoS One. 2013.
10. Chilamakuri CSR, Lorenz S, Madoui MA, et al. Performance comparison of four exome capture systems for deep sequencing. BMC Genomics. 2014.
