
Improving fold resistance prediction of HIV-1 against protease and reverse transcriptase inhibitors using artificial neural networks.

Olivier Sheik Amamuddy, Nigel T Bishop, Özlem Tastan Bishop.

Abstract

BACKGROUND: Drug resistance in HIV treatment is still a worldwide problem. Predicting resistance to antiretrovirals (ARVs) before starting any treatment is important. Prediction accuracy is essential, as low-accuracy predictions increase the risk of prescribing sub-optimal drug regimens, leading to patients developing resistance sooner. Artificial Neural Networks (ANNs) are a powerful tool that can assist in drug resistance prediction. In this study, we constrained the dataset to subtype B, sacrificing generalizability for higher predictive performance, and demonstrated that the predictive quality of the ANN regression models shows a definite improvement for most ARVs.
RESULTS: Trained regression ANNs were optimized for eight protease inhibitors, six nucleoside reverse transcriptase (RT) inhibitors and four non-nucleoside RT inhibitors by experimenting with combinations of rare variant filtering (none versus one-residue occurrence) and ANN topologies (1-3 hidden layers with 2, 4, 6, 8 and 10 nodes per layer). Single hidden layers (5-20 nodes) were used for training where overfitting was detected. 5-fold cross-validation produced mean R2 values over 0.95 and standard deviations lower than 0.04 for all but two antiretrovirals.
CONCLUSIONS: Overall, higher accuracies and lower variances (compared to results published in 2016) were obtained by experimenting with various preprocessing methods, while focusing on the most prevalent subtype in the raw dataset (subtype B). We thus highlight the need to develop and make available subtype-specific datasets for developing higher-accuracy drug-resistance prediction methods.

Keywords:  Artificial neural network; Drug resistance prediction; HIV protease; HIV reverse transcriptase; HIV-1 subtype B; Subtype-specific training

Year:  2017        PMID: 28810826      PMCID: PMC5558779          DOI: 10.1186/s12859-017-1782-x

Source DB:  PubMed          Journal:  BMC Bioinformatics        ISSN: 1471-2105            Impact factor:   3.169


Background

HIV infection has come a long way from being a deadly disease to becoming a manageable chronic condition [1], mainly due to the development and use of antiretrovirals (ARVs). However, resistance to ARVs still prevails for multiple reasons, including non-adherence to treatment, use of sub-optimal regimens and delayed initiation of therapy [2, 3]. Thus, predicting resistance to ARVs before and during any treatment is important, and genotypic testing for prediction finds wide application due to its simplicity, speed and relatively low cost in comparison to the gold standard of phenotypic assays [4-6]. Furthermore, the prediction algorithms are continuously evaluated [7, 8], while mutation lists keep being updated to improve the predictability of drug resistance [9, 10]. Disparities between prediction methods have decreased, but as of 2015 discordances still existed between the different algorithms, especially for some ARVs [11], which motivates the need to further improve accuracy. Prediction accuracy is essential, as low-accuracy predictions increase the risk of prescribing sub-optimal drug regimens and missing the timing for regimen switches, leading to patients developing resistance sooner and needing recourse to less well-tolerated third-line ARV therapy. If left uncontrolled, the accumulation of resistance mutations may increase the probability of resistant strains spreading directly to drug-naive individuals, rendering therapy more difficult. To address these issues, different research groups have produced independent prediction algorithms, such as REGA [12], ANRS [13] and HIVdb [14], amongst others [15]. As stated in [17], to date the most widely used are the HIVdb algorithm [14] and the support vector machine-based geno2pheno tool [16, 18].
More recent work has applied different machine learning approaches to drug resistance prediction, for instance multi-label classification [17], K-Nearest Neighbor and Random Forests [19], sparse signal representations coupled to Delaunay triangulation [20, 21] and Support Vector Machine variants [22]; some of these are based on sequence information alone, while others also utilise protein structural information. The objective of this work was to develop prediction models that are as accurate as possible. This problem is usually treated as one of classification, since in a clinical context it is normally sufficient to predict the effectiveness (or not) of a given ARV. However, here we solve a regression problem, thereby making full use of all available data and so potentially improving the predictive accuracy of the model. We note that the model output may be transformed into a classification by setting cut-off values, and that the drug resistance score itself may be clinically useful if the value is borderline, i.e. very close to a cut-off value. Our method incorporated the following features: (a) the prediction algorithm used was a regression Artificial Neural Network (ANN); (b) because the great majority of publicly available data in the Stanford HIVdb is for subtype B HIV, only subtype B data from this database was used to train and test the network, so that the prediction algorithm is mainly applicable to subtype B sequence data; (c) in order to reduce data noise, various forms of data filtering, as described in the Methods section, were used. Our regression ANN models compared favourably against recent work by Shen and co-workers [19], for which similar metrics were used.
The ANN regression models were applied to the protease (PR) inhibitors fosamprenavir (FPV), atazanavir (ATV), indinavir (IDV), lopinavir (LPV), saquinavir (SQV), tipranavir (TPV), nelfinavir (NFV) and darunavir (DRV), and to the reverse transcriptase (RT) inhibitors lamivudine (3TC), abacavir (ABC), zidovudine (AZT), stavudine (D4T), didanosine (DDI), tenofovir (TDF), efavirenz (EFV), etravirine (ETR), nevirapine (NVP) and rilpivirine (RPV). Applying cut-offs, we obtain a classification output from our ANN models, which is then evaluated against HIVdb and SHIVA [17]. Our work resulted in drug-specific regression ANNs with high mean R2 values, low variance and competitive classification performance for each of the eight PR inhibitors (PIs), six nucleoside RT inhibitors (NRTIs) and four non-nucleoside RT inhibitors (NNRTIs), for predictions from subtype B HIV.

Methods

Dataset description

Unfiltered PhenoSense assay datasets were retrieved from Stanford HIVdb [23] for both PR and RT. The datasets are compactly organized relative to a consensus B sequence, with conserved positions coded as “-” and differing residues coded as the actual amino acids. Mixed residues are grouped together, while insertions and deletions are represented as “#” and “~” respectively, in a tab-separated file format. Drug resistance scores for the PR and RT inhibitors are present for each sequence entry as metadata.

Dataset pre-processing

Incomplete sequence entries (i.e. with missing fold resistance ratios for some ARVs) were retained to increase the sample size. Sequences containing the ambiguous residue ‘X’, indels or the characters ‘.’, ‘*’, ‘l’, ‘d’ and ‘^’ were flagged and then expanded to obtain all possible sequences consistent with the sequence data. The sequence expansion procedure thus yielded differing numbers of sequences for each ARV (Table 1). Non-B subtypes were filtered out from the dataset to improve predictability for the subtype B cluster only. RT sequences were truncated to 240 residues to conform to the format of the filtered RT PhenoSense dataset available from Stanford HIVdb. Several sequence entries yielded several thousand to millions of combinations of sequences, which made the initial design impractical in terms of running time and also potentially introduced bias into the resulting model. This uncertainty arises because such sequences may truly be mixed or may contain sequencing errors. Thus a filter was introduced that removed from the datasets any sequence whose expansion yielded more sequences than a user-chosen cut-off value.
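The expansion-and-filter step described above can be sketched as follows. This is a minimal Python illustration, not the authors' actual script: the `AMBIGUITY` table is a hypothetical stand-in for whatever mixed-residue code table was actually used, and only the cut-off semantics follow the description in the text.

```python
from itertools import product

# Hypothetical mapping from mixed-residue codes to candidate amino acids;
# the real code tables used in the paper are not reproduced here.
AMBIGUITY = {"Z": ("E", "Q"), "B": ("D", "N")}

def expand_sequence(seq, max_combinations=1000):
    """Enumerate all concrete sequences consistent with a mixed-residue
    sequence, or return None when the expansion reaches the cut-off
    (such sequences are removed from the dataset)."""
    options = [AMBIGUITY.get(res, (res,)) for res in seq]
    n_combinations = 1
    for opt in options:
        n_combinations *= len(opt)
    if n_combinations >= max_combinations:
        return None
    return ["".join(combo) for combo in product(*options)]
```

For example, a sequence with two ambiguous positions expands to four concrete sequences, while a sequence whose expansion reaches the cut-off is dropped entirely.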
Table 1

ANN topologies and filtering parameters for highest observed accuracies for the various ARVs

| ARV class | ARV | Topology | Unique sequence IDs / expanded sequences | Allowed combinations | Rare variant filtering | Outliers removed |
|-----------|-----|----------|------------------------------------------|----------------------|------------------------|------------------|
| PIs | ATV | 10x8x6 | 995 / 13,625 | < 1000 | no | 1 |
| PIs | DRV | 8x8 | 590 / 10,374 | < 1000 | no | 2 |
| PIs | FPV | 8x8x8 | 1429 / 17,501 | < 1000 | yes | none |
| PIs | IDV | 8x6x10 | 1459 / 16,977 | < 1000 | no | 1 |
| PIs | LPV | 10x8x10 | 1284 / 11,019 | < 300 | yes | none |
| PIs | NFV | 10x10x10 | 1524 / 11,929 | < 300 | yes | none |
| PIs | SQV | 10x10x8 | 1484 / 11,509 | < 300 | yes | none |
| PIs | TPV | 10x6x8 | 698 / 11,989 | < 1000 | no | 2 |
| NRTIs | 3TC | 10x10x6 | 1342 / 33,181 | < 1000 | no | none |
| NRTIs | ABC | 14 | 1401 / 34,016 | < 1000 | yes | none |
| NRTIs | AZT | 19 | 1358 / 33,818 | < 1000 | no | none |
| NRTIs | D4T | 10x4x4 | 1365 / 34,056 | < 1000 | no | none |
| NRTIs | DDI | 10x6x6 | 1368 / 34,062 | < 1000 | no | none |
| NRTIs | TDF | 10x2 | 1130 / 29,637 | < 1000 | yes | none |
| NNRTIs | EFV | 10x6x10 | 1400 / 33,906 | < 1000 | no | none |
| NNRTIs | ETR | 8x2x10 | 448 / 11,397 | < 1000 | yes | 2 |
| NNRTIs | NVP | 10x10x4 | 1414 / 20,348 | < 300 | yes | none |
| NNRTIs | RPV | 16 | 169 / 2977 | < 1000 | no | none |
The experiment was initially started by training machine learners with sequences that had fewer than 5, 10, 20, 50, 100, 200, 300 and 1000 combinations upon expansion. Thereafter only the 300 and 1000 filter levels were used as candidates for rare variant filtering, due to their higher performance and the larger number of unique sequence IDs they contained. Rare variant filtering here means that a sequence is removed if it contains, at a given position, a residue that occurs only once across all sequence samples; ANNs were constructed and tested both with and without this filtering. In order to process the sequence data, the amino acid letters were converted to integers using an ad hoc Python script with a simple integer encoding scheme, whereby the residues “A”, “R”, “N”, “D”, “B”, “C”, “E”, “Q”, “Z”, “G”, “H”, “I”, “L”, “K”, “M”, “F”, “P”, “S”, “T”, “W”, “Y” and “V” were converted to the positive integers 1 to 22 respectively. This is similar, but not identical, to the encoding approach of Araya and Hazelhurst [4], who instead applied codon-based integer encoding to a dataset used by Ravela and coworkers in 2003 [24]. Possible outliers were detected using (1) Principal Components Analysis of the input features and target values and (2) the distribution of prediction errors between actual and predicted scores, and were removed (Table 1).
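A minimal Python sketch of the integer encoding and the rare variant filter described above (the function names `encode` and `rare_variant_filter` are illustrative, not taken from the authors' actual script):

```python
# Integer encoding of the 22 residue letters, in the order listed in the
# text: "A" -> 1, "R" -> 2, ..., "V" -> 22.
RESIDUES = "ARNDBCEQZGHILKMFPSTWYV"
ENCODING = {aa: i + 1 for i, aa in enumerate(RESIDUES)}

def encode(seq):
    """Convert an amino-acid string to its integer feature vector."""
    return [ENCODING[aa] for aa in seq]

def rare_variant_filter(seqs):
    """Drop any sequence carrying a residue that occurs exactly once at
    its position across all sequence samples."""
    counts = [{} for _ in range(len(seqs[0]))]
    for seq in seqs:
        for pos, aa in enumerate(seq):
            counts[pos][aa] = counts[pos].get(aa, 0) + 1
    return [s for s in seqs
            if all(counts[p][aa] > 1 for p, aa in enumerate(s))]
```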

Neural network construction and architecture optimization

MATLAB’s (version 2016a) implementation of the Levenberg-Marquardt feed-forward algorithm with back-propagation, from the Neural Network Toolbox, was used for supervised training, utilizing the mean squared error (MSE) for weight adjustment. Absolutely conserved residue positions were filtered out in order to reduce computation time. The initial dataset was (pseudo) randomly split into training, testing and validation sets at rates of 70%, 15% and 15% respectively, setting random seed numbers for reproducibility in training and cross-validation. Training was stopped upon reaching any of: a maximum of 1000 epochs, a maximum of 6 successive failures of the validation error to decrease, or a performance gradient below the minimum of 1e-7. Input features were the 1-letter amino acid characters recoded as integers, while the target values were the individual fold drug resistance ratios. Initial runs using all drug target values at once for training the regression model produced large MSE values (not shown), which redirected the analysis towards building individually trained models for each drug target. As required by MATLAB’s newff function, both the feature vectors and their matching target values were transposed. The number of hidden layers was varied from 1 to 3, with nodes set at permutations of 2, 4, 6, 8 and 10 for each hidden layer. A single hidden layer of 5-20 nodes was re-evaluated in cases where high training performance was accompanied by significantly lower test performance or high variance.
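The training itself relied on MATLAB’s Neural Network Toolbox; as a language-neutral illustration, the seeded 70/15/15 data division can be sketched in Python (the `split_dataset` helper is hypothetical, not the authors' code):

```python
import random

def split_dataset(samples, seed=42, ratios=(0.70, 0.15, 0.15)):
    """Pseudo-randomly split samples into training, validation and test
    sets (70/15/15 by default), with a fixed seed so the split is
    reproducible across runs."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n_train = int(ratios[0] * len(samples))
    n_val = int(ratios[1] * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```

Fixing the seed is what makes the training/validation/test partition, and hence the reported performances, reproducible.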

Evaluation of training performance

Training performance was assessed both by regression and classification methods. For regression-based evaluation, the coefficient of determination (R2) was computed between the predicted (y) and actual (x) fold scores for the whole dataset, as the squared Pearson correlation:

R2 = [ Σ (xi − x̄)(yi − ȳ) ]² / [ Σ (xi − x̄)² · Σ (yi − ȳ)² ]

Further, the dataset was randomly divided into 5 subsets of approximately equal size; 5 different ANNs were trained, each on 4 of the 5 subsets, and 5 different R2 values were calculated, from which we report the mean and standard deviation. Regression performances were then compared against prediction models from the article published in 2016 by Shen and co-workers [19], in which regression machine learning models, namely the Random Forest and K-nearest neighbor algorithms, were used. The raw dataset used in this work and in ref. [19] is the same, i.e. the Stanford HIVdb dataset; however, the filtering used in this paper is as described above, whereas ref. [19] uses the filtering provided by Stanford HIVdb [23]. In order to further verify our models against overfitting, R2 values were calculated over different subsets of the data, namely the whole dataset, the validation set and finally the test set.

Furthermore, classification accuracy was evaluated against Stanford HIVdb and a recently-published approach implemented as the SHIVA web server [17]. We used the EMBOSS backtranseq tool [25] to back-translate protein sequences to one of their (DNA) codon permutations in FASTA format as input for Stanford HIVdb’s Sierra web service (GraphQL API), to obtain resistance predictions. SHIVA predictions were obtained by submitting FASTA-formatted protein sequences to the web server. Drug resistance classes (susceptible, resistant and intermediate) were coded as 0, 1 and 2 respectively. While Stanford HIVdb defines three classes, SHIVA defines two: susceptible and resistant.
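Assuming R2 here is the squared Pearson correlation between actual and predicted fold scores (an assumption, since the exact formula is not reproduced in this copy of the article), a minimal sketch is:

```python
def r_squared(actual, predicted):
    """Coefficient of determination as the squared Pearson correlation
    between actual (x) and predicted (y) fold scores."""
    n = len(actual)
    mx = sum(actual) / n
    my = sum(predicted) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(actual, predicted))
    vx = sum((x - mx) ** 2 for x in actual)
    vy = sum((y - my) ** 2 for y in predicted)
    return cov ** 2 / (vx * vy)
```

In the 5-fold procedure above, this function would be applied once per held-out fold, and the mean and standard deviation of the five values reported.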
Classification accuracies were evaluated by calculating misclassification rates, defined as the proportion of non-concordant pairs between PhenoSense assay classes and the independently-predicted classes for each of our ANN approach, Stanford HIVdb and SHIVA. Cut-offs from Stanford HIVdb, available at [26], were used for classifying our ANN predictions and those of the PhenoSense assay dataset. We did not define new binary cut-offs for evaluating SHIVA; for a limited number of ARVs binary cut-offs are available from the PhenoSense assay [27], and for the remaining ARVs we proceeded as follows. An upper and a lower bound misclassification rate were computed for SHIVA, as the conversion from a multiclass to a binary classification is ambiguous: an intermediate class may lie closer to either the resistant or the susceptible class. We set the number of truly misclassified pairs (0,1 or 1,0) as the lower bound, while the number of discordant pairs involving intermediate resistance calls (2,0 or 2,1) was added to that value to set the upper bound. All proportions were then expressed as percentages, as shown in Table 2.
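The lower/upper bound computation can be sketched as follows. This assumes, as described above, that classes are coded 0 (susceptible), 1 (resistant) and 2 (intermediate), with the reference phenotype class binary; the function name is illustrative.

```python
def misclassification_bounds(pheno, predicted):
    """Lower and upper bound misclassification rates (percentages) when
    comparing three-class predictions (0/1/2) against binary PhenoSense
    classes (0/1). Pairs (0,1) and (1,0) are truly misclassified;
    discordant pairs involving an intermediate call (class 2) are
    ambiguous and only counted in the upper bound."""
    n = len(pheno)
    lower = sum(1 for a, p in zip(pheno, predicted) if {a, p} == {0, 1})
    ambiguous = sum(1 for a, p in zip(pheno, predicted)
                    if p == 2 and a in (0, 1))
    return 100 * lower / n, 100 * (lower + ambiguous) / n
```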
Table 2

Comparison of misclassification rates (percentages) for our ANN approach, Stanford HIVdb and SHIVA

| ARV class | ARV | ANN | HIVdb | SHIVA |
|-----------|-----|-----|-------|-------|
| PIs | ATV | 26.61 | 28.57 | 84.53 |
| PIs | DRV | 2.98 | 22.57 | 32.41–53.49 |
| PIs | FPV | 16.08 | 36.97 | 67.0–79.74 |
| PIs | IDV | 34.29 | 26.19 | 81.92 |
| PIs | LPV | 9.79 | 36.82 | 68.05–83.51 |
| PIs | NFV | 25.23 | 20.36 | 80.84 |
| PIs | SQV | 30.37 | 38.75 | 67.25–88.16 |
| PIs | TPV | 9.07 | 39.88 | unavailable |
| NRTIs | 3TC | 3.87 | 12.09 | 90.21 |
| NRTIs | ABC | 6.53 | 33.78 | 50.76–72.25 |
| NRTIs | AZT | 36.19 | 29.88 | 90.38 |
| NRTIs | D4T | 7.31 | 44.07 | 79.15 |
| NRTIs | DDI | 8.05 | 57.52 | 34.14–92.44 |
| NRTIs | TDF | 5.39 | 37.2 | 37.36–66.53 |
| NNRTIs | EFV | 16.08 | 21.05 | 81.32 |
| NNRTIs | ETR | 6.58 | 13.21 | unavailable |
| NNRTIs | NVP | 24.87 | 9.4 | 73.97 |
| NNRTIs | RPV | 1.55 | 24.9 | 98.33 |

Results and discussion

Table 1 shows that differing numbers of sequences were obtained from the different filtering approaches. In general, allowing expansion of sequences to fewer than 1000 combinations, combined with rare variant filtering, produced the best results. Multiple (2-3) hidden layers were found to be required for all ARVs, with the exception of ABC, AZT and RPV. DRV, ETR and RPV have the lowest numbers of unique sequence IDs, and hence may suffer from a lack of generalizability compared to the other ARVs. In this study we attempted to find the optimal balance between the number of sequences and the possibility of retaining sequences containing sequencing errors. The procedure used to build our models is referred to as protocol A. Our results are compared to the models used by Shen and co-workers [19], namely the Random Forest (RF) and the K-nearest neighbor (KNN), which both utilise Delaunay triangulation for structural feature encoding (henceforth referred to as protocols B and C respectively in this paper).

Regression performances for HIV PIs

The results are presented in Fig. 1a and Additional file 1: Table S1. Overall, protocol A yielded better results than protocols B and C. Very low variances were generally observed using protocol A, except for ATV, IDV and LPV, where variances were comparable to those observed in protocols B and C. The largest improvements for PIs under protocol A were observed for FPV, SQV and TPV, with mean differences of 0.117, 0.116 and 0.219 respectively from the top-scoring of protocols B and C.
Fig. 1

The mean R2 values and their standard deviations for the protocols A, B, C, and the various ARVs


Regression performances for NRTIs

In the case of NRTIs (Fig. 1b and Additional file 1: Table S2), better predictability was observed for all drugs using protocol A, except for 3TC, where the performance, though high, was similar to that obtained with protocol B. Very high mean R2 values with very small variances were obtained for AZT, DDI and TDF. Their high degree of fit, combined with their low variability, suggests that the ANN model explains most of the observed variation, likely due to the higher sequence quality obtained after filtering.

Regression performances for NNRTIs

In the case of NNRTIs (Fig. 1c and Additional file 1: Table S3), protocol C outperformed protocol A by a narrow margin for EFV and NVP. Very high mean accuracies were attained for RPV and ETR, surpassing both protocols B and C. However, the smaller sample size for RPV (Table 1; 169 unique sequence IDs for a total of 2977 expanded sequences) suggests that, while appearing to perform exceptionally well, the model may not generalize well to more divergent sequences. ETR is supported by a comparatively higher number of unique sequence IDs, and should generalize slightly better than the model developed for RPV.

Overfitting assessment

As seen in Table 3, for all ARVs we verified that overfitting was minimized by checking that R2 values do not decline significantly in the test set relative to both the whole dataset and the validation set.
Table 3

R2 values (3 dp) obtained from individual subsets obtained after filtering

| ARV class | ARV | Whole dataset R2 | Validation set R2 | Test set R2 |
|-----------|-----|------------------|-------------------|-------------|
| PIs | ATV | 0.951 | 0.913 | 0.856 |
| PIs | DRV | 0.991 | 0.991 | 0.989 |
| PIs | FPV | 0.980 | 0.938 | 0.958 |
| PIs | IDV | 0.899 | 0.816 | 0.842 |
| PIs | LPV | 0.966 | 0.922 | 0.883 |
| PIs | NFV | 0.975 | 0.924 | 0.939 |
| PIs | SQV | 0.977 | 0.949 | 0.906 |
| PIs | TPV | 0.989 | 0.995 | 0.943 |
| NRTIs | 3TC | 0.995 | 0.988 | 0.985 |
| NRTIs | ABC | 0.984 | 0.956 | 0.954 |
| NRTIs | AZT | 0.994 | 0.979 | 0.985 |
| NRTIs | D4T | 0.995 | 0.996 | 0.979 |
| NRTIs | DDI | 0.997 | 0.997 | 0.992 |
| NRTIs | TDF | 0.999 | 1.000 | 0.992 |
| NNRTIs | EFV | 0.976 | 0.905 | 0.967 |
| NNRTIs | ETR | 0.996 | 0.993 | 0.982 |
| NNRTIs | NVP | 0.962 | 0.939 | 0.927 |
| NNRTIs | RPV | 0.982 | 0.956 | 0.915 |

Classification performance for all antiretrovirals

We provide additional support for our approach by comparing misclassification rates against Stanford HIVdb and SHIVA, all with respect to the PhenoSense assay data. Table 2 shows that our ANNs obtain lower misclassification rates, with the exception of NVP, AZT, NFV and IDV. An important point is that we used the entirety of the dataset filtered by our own procedure for developing the ANNs described in this paper, with the counts shown in Table 1. This was done so that only high-confidence sequences would be compared for each individual antiretroviral. Both Stanford HIVdb and SHIVA were developed using another dataset, the Stanford HIVdb pre-filtered data, and this factor may have affected their performance on the dataset used here.

Conclusions

This work focused on the pre-processing and optimization of ANN regression models for the prediction of fold resistance scores for HIV-1 subtype B, using RT and PR PhenoSense data available in the public domain from Stanford HIVdb. As expressed by Dahake and co-workers [28], there is a need to develop subtype-specific databases, and we made such an attempt by constraining the dataset for subtype specificity, sacrificing generalizability for higher predictive performance on subtype B. The results obtained show that the predictive quality of the ANN regression models is at least comparable to that of other methods, and for most ARVs is a definite improvement. The approach presented in this paper is applicable to subtype B, and an obvious question is whether it can be extended to the other subtypes. Previous studies [29, 30] involving the HIV-1 subtype A, B and C envelope glycoprotein V3 loop region suggest that subtypes B and C share similar co-receptor usage, as opposed to subtype A. Raymond and co-workers [31] also hinted that subtypes B and C share similar genotypic determinants; for this reason, by extrapolation, our method may extend to subtype C. However, a key difficulty is the paucity of publicly available phenotypic assay data for training and testing any extrapolation to other subtypes, so the development of a methodology that leads to accurate models will be challenging [32, 33]. It is hoped that our work will lead to more non-B subtype drug resistance data becoming available.