Literature DB >> 35711929

ProtTrans-Glutar: Incorporating Features From Pre-trained Transformer-Based Models for Predicting Glutarylation Sites.

Fatma Indriani1,2, Kunti Robiatul Mahmudah3, Bedy Purnama4, Kenji Satou5.   

Abstract

Lysine glutarylation is a post-translational modification (PTM) that plays a regulatory role in various physiological and biological processes. Identifying glutarylated peptides using proteomic techniques is expensive and time-consuming. Therefore, developing computational models and predictors can prove useful for rapid identification of glutarylation. In this study, we propose a model called ProtTrans-Glutar to classify a protein sequence into positive or negative glutarylation site by combining traditional sequence-based features with features derived from a pre-trained transformer-based protein model. The features of the model were constructed by combining several feature sets, namely the distribution feature (from composition/transition/distribution encoding), enhanced amino acid composition (EAAC), and features derived from the ProtT5-XL-UniRef50 model. Combined with random under-sampling and XGBoost classification method, our model obtained recall, specificity, and AUC scores of 0.7864, 0.6286, and 0.7075 respectively on an independent test set. The recall and AUC scores were notably higher than those of the previous glutarylation prediction models using the same dataset. This high recall score suggests that our method has the potential to identify new glutarylation sites and facilitate further research on the glutarylation process.
Copyright © 2022 Indriani, Mahmudah, Purnama and Satou.


Keywords:  binary classification; imbalanced data classification; lysine glutarylation; machine learning; post-translation modification; protein embedding; protein sequence; transformer-based models

Year:  2022        PMID: 35711929      PMCID: PMC9194472          DOI: 10.3389/fgene.2022.885929

Source DB:  PubMed          Journal:  Front Genet        ISSN: 1664-8021            Impact factor:   4.772


1 Introduction

Similar to the epigenetic modification of histones and nucleic acids, the post-translational modification (PTM) of amino acids dynamically changes the function of proteins and is actively studied in the field of molecular biology. Among various kinds of PTMs, lysine glutarylation is defined as the attachment of a glutaryl group to a lysine residue of a protein (Lee et al., 2014). This modification was first detected via immunoblotting and mass spectrometry analysis and later validated using chemical and biochemical methods. It has been suggested that this PTM may be a biomarker of aging and cellular stress (Harmel and Fiedler, 2018). Dysregulation of glutarylation is related to several metabolic diseases, including type 1 glutaric aciduria, diabetes, cancer, and neurodegenerative diseases (Tan et al., 2014; Osborne et al., 2016; Carrico et al., 2018). Since the identification of glutarylated peptides using proteomics techniques is expensive and time-consuming, it is important to investigate computational models and predictors that rapidly identify glutarylation. Based on a survey of previous research, various prediction models have been proposed to distinguish glutarylation sites. The earliest one, GlutPred (Ju and He, 2018), constructs features from amino acid factors (AAF), binary encoding (BE), and the composition of k-spaced amino acid pairs (CKSAAP). The authors selected 300 features using the mRMR method. To overcome the imbalance in this dataset, a biased version of the support vector machine (SVM) was employed to build the prediction model. Another predictor, iGlu-Lys (Xu et al., 2018), investigated four different feature sets: physicochemical properties (AAIndex), K-Space, Position-Specific Amino Acid Propensity (PSAAP), and Position-Specific Propensity Matrix (PSPM), in conjunction with an SVM classifier. The PSPM feature set performed best in the 10-fold cross-validation and was therefore applied to the model.
iGlu-Lys performed better than GlutPred in terms of accuracy and specificity scores, although its sensitivity score was lower. The next model proposed, MDDGlutar (Huang et al., 2019), divided the training set into six subsets using maximal dependence decomposition (MDD). Three feature sets were evaluated separately using SVM: amino acid composition (AAC), amino acid pair composition (AAPC), and CKSAAP. The best cross-validation score was obtained with the AAC feature set. Independent testing yielded a balanced result of 65.2% sensitivity and 79.3% specificity, but lower specificity and accuracy than the GlutPred model. The next two predictors added new glutarylated proteins from Escherichia coli and HeLa cells to their training and test sets. RF-GlutarySite (Al-barakati et al., 2019) utilizes features constructed from 14 feature sets, reduced with XGBoost. The model's reported performance on independent testing was balanced, with 71.3% accuracy, 74.1% sensitivity, and 68.5% specificity. However, it is worth noting that the test data were balanced by under-sampling, which does not represent a real-world scenario. iGlu_Adaboost (Dou et al., 2021) sought to fill this gap by using test data with no resampling. This model utilizes features from 188D, enhanced amino acid composition (EAAC), and CKSAAP. With the help of Chi2 feature selection, 37 features were selected to build the model using SMOTE-Tomek re-sampling and the Adaboost classifier. The test result showed good performance on the recall, specificity, and accuracy metrics, but a lower Area Under the Curve (AUC) score than previous models. Although many models have been built to distinguish between positive and negative glutarylation sites, their performance remains limited. One challenge is finding a set of features to represent the protein subsequence that enables correct classification of glutarylation sites.
BERT models (Devlin et al., 2019), and other transformer-based language models from natural language processing (NLP) research, show excellent performance on NLP tasks. These language models, having been adapted to biological sequences by treating them as sentences and then trained on large-scale protein corpora (Elnaggar et al., 2021), also show promise for various machine learning tasks in the bioinformatics domain. Previous studies have investigated the use of pre-trained BERT and BERT-like models, demonstrating their effectiveness as protein sequence representations for protein classification. For example, Ho et al. (2021) proposed a new approach to predict flavin adenine dinucleotide (FAD) binding sites in transport proteins based on pre-trained BERT, position-specific scoring matrix profiles (PSSM), and an amino acid index database (AAIndex). Their approach achieved an accuracy of 85.14%, an improvement over the scores of previous methods. Another study (Shah et al., 2021) extracted features using pre-trained BERT models to discriminate between three families of glucose transporters. Compared with two well-known feature extraction methods, AAC and DPC, this method improved average sensitivity and Matthews correlation coefficient (MCC) by more than 4%. In another study, Liu built a predictor for protein lysine glycation sites using features extracted from pre-trained BERT models, which showed improved performance in terms of accuracy and AUC score compared with previous methods (Liu et al., 2022). These studies demonstrate the suitability of BERT models for improving various protein classification tasks. Therefore, using embeddings from pre-trained BERT and BERT-like models has the potential to yield an improved glutarylation prediction model.
In this study, we proposed a new model to predict glutarylation sites (Figure 1) by incorporating features extracted from pre-trained protein models combined with handcrafted sequence-based features. A public dataset provided by Al-barakati et al. (2019) was used. It is an imbalanced dataset with 444 positive sites and 1906 negative sites, already separated into two sets for model building and independent testing. First, various feature sets were extracted from the dataset, consisting of two types of features: seven classic sequence-based feature sets and six embeddings from pre-trained protein language models. We evaluated the classifiers using 10-fold cross-validation for each individual feature set. The next step was to combine two or more feature sets to evaluate further models, such as AAC-EAAC, AAC-CTDC, and AAC-ProtBert; here, we limited each combination to at most one embedding feature set. Five classification algorithms were included in the experiments: Adaboost, XGBoost, SVM (with RBF kernel), random forest (RF), and multilayer perceptron (MLP). Our best model combines the features of CTDD, EAAC, and ProtT5-XL-UniRef50 with the XGBoost classification algorithm. This model, together with the best model from the sequence-based feature groups and the best model from the protein embedding feature group, was then evaluated on an independent dataset. For independent testing, the entire training set was used to develop the model. In both model building and independent testing, random under-sampling was used to balance the training dataset, while the testing dataset was not resampled, to reflect performance in the real-world unbalanced scenario.
FIGURE 1

Workflow strategy for the development of ProTrans-Glutar model.


2 Materials and Methods

2.1 Dataset

This study utilized the unbalanced benchmark datasets compiled by Al-barakati et al. (2019) to build their predictor, RF-GlutarySite. This dataset collected positive glutarylation sites from various sources, including PLMD (Xu et al., 2017) and Tan et al. (2014), covering four different species (Mus musculus, Mycobacterium tuberculosis, E. coli, and HeLa cells), for a total of 749 sites from 234 proteins. Homologous sequences showing ≥40% sequence identity were removed using the CD-HIT tool. The remaining proteins were converted into peptides with a fixed length of 23, with the glutarylated lysine as the central residue and 11 residues each upstream and downstream. Negative sites were generated in the same way, but with a non-glutarylated central lysine residue. After removing homologous sequences, the final dataset consisted of 453 positive and 2043 negative sites. The distributions of the training and testing datasets are listed in Table 1. This dataset was also used by Dou et al. (2021) to build their predictor model, iGlu_Adaboost.
TABLE 1

Number of positive and negative sites in training and test set.

                  Training set    Test set    Total
Positive sites    400             44          444
Negative sites    1703            203         1906
Total             2103            247

2.2 Feature Extraction

The extraction of numerical features from protein sequences or peptides is an important step before they can be utilized by machine learning algorithms. In this study, we investigated two types of features: classic sequence-based features and features derived from pre-trained transformer-based protein embeddings. Classic sequence-based features were extracted using the iFeature Python package (Chen et al., 2018). After preliminary experiments, seven feature sets were chosen for further investigation: AAC, EAAC, the three Composition/Transition/Distribution descriptors (CTDC, CTDT, and CTDD), pseudo-amino acid composition (PAAC), and amphiphilic pseudo-amino acid composition (APAAC). The second type of feature, embeddings from pre-trained transformer-based models, was extracted using models trained and provided by Elnaggar et al. (2021). It consists of six feature sets from six protein models: ProtBERT, ProtBert-BFD, ProtAlbert, ProtT5-XL-UniRef50, ProtT5-XL-BFD, and ProtXLNet. The data for all extracted features are provided in the Supplementary Material.

2.2.1 Amino Acid Composition and Enhanced Amino Acid Composition

The AAC method encodes a protein sequence based on the frequency of each amino acid (Bhasin and Raghava, 2004). For this type of feature, we used two variants. The first variant is the basic AAC, in which the protein sequence is converted into a vector of length 20, representing the frequency of the 20 amino acids ("ACDEFGHIKLMNPQRSTVWY"). Each element is calculated according to Eq. 1, as follows:

f(t) = N(t) / N,  t ∈ {A, C, D, …, Y}    (1)

where t is the amino acid type, N(t) is the number of amino acids of type t appearing in the sequence, and N is the length of the sequence. The second variant is EAAC, introduced by Chen et al. (2018). In this encoding, the AAC is calculated within a fixed-size window sliding along the sequence from left to right. The frequency of each amino acid in each window is given by Eq. 2:

f(t, win) = N(t, win) / N(win)    (2)

where N(t, win) is the number of amino acids of type t appearing in the window win and N(win) is the length of the window. To develop our model, a default window size of five was used. An example of how these methods are applied to a protein sequence is provided in Supplementary File S1.
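As an illustration, the two encodings can be sketched in a few lines of Python (a minimal sketch following the definitions above, not the iFeature implementation; the example peptide is hypothetical). For a 23-residue peptide and a window size of five, EAAC yields (23 − 5 + 1) × 20 = 380 values, matching the feature length reported in Table 3.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(seq):
    """Basic amino acid composition (Eq. 1): frequency of each amino acid."""
    counts = Counter(seq)
    return [counts[t] / len(seq) for t in AMINO_ACIDS]

def eaac(seq, window=5):
    """Enhanced AAC (Eq. 2): the AAC of each window sliding left to right."""
    return [f
            for start in range(len(seq) - window + 1)
            for f in aac(seq[start:start + window])]

# Hypothetical 23-residue peptide with the central lysine (K) at position 12
peptide = "AKLVSTGHKPMKQRDEFNWYACI"
```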

2.2.2 Composition/Transition/Distribution

The CTD method encodes a protein sequence based on various structural and physicochemical properties (Dubchak et al., 1995; Cai, 2003). Thirteen properties were used to build the features, and each property divides the amino acids into three groups (see Table 2). For example, the attribute "Hydrophobicity_PRAM900101" divides the amino acids into polar, neutral, and hydrophobic groups.
TABLE 2

Physicochemical attributes and their divisions of the amino acids.

Attribute                          Group 1                Group 2                      Group 3
Hydrophobicity_PRAM900101          Polar: RKEDQN          Neutral: GASTPHY             Hydrophobic: CLVIMFW
Hydrophobicity_ARGP820101          Polar: QSTNGDE         Neutral: RAHCKMV             Hydrophobic: LYPFIW
Hydrophobicity_ZIMJ680101          Polar: QNGSWTDERA      Neutral: HMCKV               Hydrophobic: LPFYI
Hydrophobicity_PONP930101          Polar: KPDESNQT        Neutral: GRHA                Hydrophobic: YMFWLCVI
Hydrophobicity_CASG920101          Polar: KDEQPSRNTG      Neutral: AHYMLV              Hydrophobic: FIWC
Hydrophobicity_ENGD860101          Polar: RDKENQHYP       Neutral: SGTAW               Hydrophobic: CVLIMF
Hydrophobicity_FASG890101          Polar: KERSQD          Neutral: NTPG                Hydrophobic: AYHWVMFLIC
Normalized van der Waals volume    0–2.78: GASTPD         2.95–4.0: NVEQIL             4.03–8.08: MHKFRYW
Polarity                           4.9–6.2: LIFWCMVY      8.0–9.2: PATGS               10.4–13.0: HQRKNED
Polarizability                     0–0.108: GASDT         0.128–0.186: GPNVEQIL        0.219–0.409: KMHFRYW
Charge                             Positive: KR           Neutral: ANCQGHILMFPSTWYV    Negative: DE
Secondary structure                Helix: EALMQKRH        Strand: VIYCWFT              Coil: GNPSD
Solvent accessibility              Buried: ALFCGIVW       Exposed: PKQEND              Intermediate: MPSTHY
The CTD feature comprises three parts: composition (CTDC), transition (CTDT), and distribution (CTDD). For composition, each attribute contributes three values, representing the global fraction of the amino acids in each of its three groups:

C(r) = N(r) / N,  r ∈ {group 1, group 2, group 3}

where N(r) is the number of occurrences of group-r amino acids in the sequence and N is the length of the sequence. For transition, each attribute also contributes three values, each representing the frequency of transitions between a pair of groups:

T(r, s) = (N(r, s) + N(s, r)) / (N − 1)

where N(r, s) is the number of times an amino acid of group r is immediately followed by one of group s (i.e., "rs" appears in the sequence) and N(s, r), conversely, is the number of "sr" occurrences. The distribution feature consists of five values per attribute group, each corresponding to the fraction of the sequence length at which the first, 25%, 50%, 75%, and 100% occurrences of that group are located.
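For a single attribute, the three descriptor parts can be sketched as follows (a simplified illustration of the definitions above using the charge attribute from Table 2; this is not the iFeature code, and boundary conventions may differ slightly from the package):

```python
def ctd_one_attribute(seq, groups):
    """Composition, transition, and distribution values of one attribute
    whose three amino acid groups are given as strings."""
    n = len(seq)
    # Map each residue to its group index (1, 2, or 3)
    label = [next(g for g, members in enumerate(groups, 1) if aa in members)
             for aa in seq]
    # Composition: fraction of residues falling in each group
    comp = [label.count(g) / n for g in (1, 2, 3)]
    # Transition: fraction of adjacent pairs crossing between two groups
    trans = [sum({label[i], label[i + 1]} == {a, b}
                 for i in range(n - 1)) / (n - 1)
             for a, b in ((1, 2), (1, 3), (2, 3))]
    # Distribution: position (as a fraction of n) of the first, 25%, 50%,
    # 75%, and last occurrence of each group
    dist = []
    for g in (1, 2, 3):
        pos = [i + 1 for i, lab in enumerate(label) if lab == g]
        for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
            idx = max(1, round(frac * len(pos))) if pos else 0
            dist.append(pos[idx - 1] / n if pos else 0.0)
    return comp, trans, dist

# Charge attribute from Table 2: positive, neutral, negative residues
charge = ("KR", "ANCQGHILMFPSTWYV", "DE")
```

Applied over all 13 attributes, this yields the 39 CTDC, 39 CTDT, and 195 CTDD values listed in Table 3.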

2.2.3 Pseudo Amino Acid Composition

The pseudo-amino acid composition (PAAC) feature was proposed by Chou (2001). For a protein sequence P with L amino acid residues, P = (R1R2R3…RL), the PAAC features can be formulated as

X_u = f_u / (Σ_{i=1..20} f_i + w Σ_{j=1..λ} θ_j),        1 ≤ u ≤ 20
X_u = w θ_{u−20} / (Σ_{i=1..20} f_i + w Σ_{j=1..λ} θ_j),  21 ≤ u ≤ 20 + λ

where f_i is the frequency of the i-th amino acid, w is the weight factor, and θ_k is the k-th-tier correlation factor, defined as

θ_k = (1 / (L − k)) Σ_{i=1..L−k} Θ(R_i, R_{i+k}),  1 ≤ k ≤ λ

and

Θ(R_i, R_j) = (1 / Γ) Σ_{q=1..Γ} [Φ_q(R_j) − Φ_q(R_i)]²

where Φ_q(R_i) is the q-th function of the amino acid R_i and Γ is the total number of functions. Here Γ = 3, and the functions used are the hydrophobicity value, hydrophilicity value, and side-chain mass of amino acid R_i. A variant of PAAC, called amphiphilic pseudo-amino acid composition (APAAC), was proposed by Chou (2005). A protein sample P with L amino acid residues, P = (R1R2R3…RL), is formulated analogously, with τ_j the j-tier sequence-correlation factor calculated using the equations

τ_{2k−1} = (1 / (L − k)) Σ_{i=1..L−k} H¹_{i,i+k}
τ_{2k}   = (1 / (L − k)) Σ_{i=1..L−k} H²_{i,i+k},  1 ≤ k ≤ λ

where H¹_{i,j} and H²_{i,j} are the hydrophobicity and hydrophilicity correlation functions of the i-th and j-th amino acids, described by

H¹_{i,j} = h¹(R_i) · h¹(R_j)
H²_{i,j} = h²(R_i) · h²(R_j)

with h¹ and h² the normalized hydrophobicity and hydrophilicity values.

2.2.4 Pre-Trained Transformer Protein Embeddings

Protein language models have been trained on large protein corpora using state-of-the-art transformer models from recent NLP research (Elnaggar et al., 2021). Six of these models were applied to extract features for our task of predicting glutarylation sites:
• ProtBERT and ProtBert-BFD are derived from the BERT model (Devlin et al., 2019), trained on the UniRef100 and BFD corpora, respectively.
• ProtT5-XL-UniRef50 and ProtT5-XL-BFD are derived from the T5 model (Raffel et al., 2020), trained on the UniRef50 and BFD corpora, respectively.
• ProtAlbert is derived from the Albert model (Lan et al., 2020), trained on the UniRef100 corpus.
• ProtXLNet is derived from the XLNet model (Yang et al., 2020), trained on the UniRef100 corpus.
Protein embeddings (features) were extracted from the last layer of each protein language model for subsequent supervised training. This layer is a two-dimensional array of size 1,024 × sequence length, except for the ProtAlbert model, whose array size is 4,096 × sequence length. For the glutarylation prediction problem, this output is simplified by summing the vectors along the length of the sequence; hence, each feature group becomes one-dimensional, with a length of 4,096 for ProtAlbert and 1,024 for the rest.
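The sum-pooling step can be illustrated with a stand-in array (random numbers replace the actual model output, since running ProtT5-XL-UniRef50 itself requires downloading the pre-trained weights):

```python
import numpy as np

seq_len, emb_dim = 23, 1024  # peptide length, ProtT5 per-residue embedding size

# Stand-in for the model's last layer: one 1024-d vector per residue
per_residue = np.random.default_rng(0).normal(size=(seq_len, emb_dim))

# Sum along the sequence length to obtain one fixed-size vector per peptide
peptide_embedding = per_residue.sum(axis=0)
print(peptide_embedding.shape)  # (1024,)
```

The same reduction applied to ProtAlbert output (4,096 per residue) gives a 4,096-dimensional vector.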

2.2.5 The Feature Space

The features collected were of different lengths, as summarized in Table 3. These feature groups were evaluated either individually or in various combinations of two or more. For example, under the combined feature group AAC-EAAC, a training sample has 20 + 380 = 400-dimensional features.
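Combining feature groups is plain vector concatenation; the AAC-EAAC example above can be sketched with placeholder vectors (illustrative values only):

```python
aac_vec = [0.05] * 20    # AAC features, length 20 (placeholder values)
eaac_vec = [0.0] * 380   # EAAC features, length 380 (placeholder values)

combined = aac_vec + eaac_vec  # the AAC-EAAC feature group
print(len(combined))  # 400
```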
TABLE 3

Features investigated for method development.

Group                                            Feature set             Length of features
Amino acid composition                           AAC                     20
                                                 EAAC                    380
C/T/D                                            CTDC                    39
                                                 CTDT                    39
                                                 CTDD                    195
Pseudo amino acid composition                    PAAC                    35
                                                 APAAC                   50
Embeddings from pre-trained transformer-based    ProtBERT                1,024
model                                            ProtBert-BFD            1,024
                                                 ProtAlbert              4,096
                                                 ProtT5-XL-UniRef50      1,024
                                                 ProtT5-XL-BFD           1,024
                                                 ProtXLNet               1,024

2.3 Imbalanced Data Handling

Class imbalance occurs when the number of samples is unevenly distributed across classes. The class with the larger number of samples is called the majority class (here, the negative class), whereas the class with the smaller number is called the minority class. In the glutarylation dataset, the number of negative samples was nearly four times that of positive samples. This imbalance may degrade classifier performance because a classifier is then more likely to predict a positive sample as negative (He and Garcia, 2009). A common strategy to address this problem is data re-sampling: either adding minority samples (over-sampling) or removing majority samples (under-sampling). In this study, we applied a random under-sampling strategy (He and Ma, 2013) after preliminary experiments with various re-sampling methods.
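Random under-sampling can be sketched as a minimal stand-alone routine (an illustrative sketch, not the authors' exact implementation; in practice a library routine such as imbalanced-learn's RandomUnderSampler does the same job):

```python
import random

def random_under_sample(X, y, seed=0):
    """Randomly drop majority-class samples so that both classes
    end up the same size as the minority class."""
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = sorted((pos, neg), key=len)
    rng = random.Random(seed)
    kept = minority + rng.sample(majority, len(minority))
    rng.shuffle(kept)
    return [X[i] for i in kept], [y[i] for i in kept]
```

Applied to the 400 positive / 1703 negative training set of Table 1, this leaves a balanced set of 400 samples per class.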

2.4 Machine Learning Methods

In this study, we used the XGBoost classifier (Chen and Guestrin, 2016) from the XGBoost package for Python (https://xgboost.ai), an implementation of a gradient-boosted tree classifier (Friedman, 2001). Gradient-boosted trees are an ensemble classifier built from multiple decision trees constructed one by one. XGBoost has been used successfully in various classification tasks, including bioinformatics (Mahmud et al., 2019; Chien et al., 2020; Zhang et al., 2020). In our experiments, several other popular classifiers were also compared and evaluated, including SVM, RF, MLP, and Adaboost, as provided by the scikit-learn package (https://scikit-learn.org).

2.5 Model Evaluation

To achieve the model with the best prediction performance, the model was evaluated using 10-fold cross-validation and an independent test. For cross-validation, the training dataset was randomly split into 10 folds of nearly equal size. Nine folds were combined and then randomly under-sampled for training, and the 10th fold was used for evaluation. This process was repeated for the remaining combinations of folds (nine for training and one for testing). To remove sampling bias, the cross-validation process was repeated three times, and the mean performance was reported as the CV result. For independent testing, the entire training set was randomly under-sampled, used to build the model, and then evaluated on the independent test set. Since the randomness of the under-sampling may affect the result, this testing was repeated five times, and the mean performance was reported as the independent test result. Cross-validation and independent test performance were evaluated using seven metrics: recall (Rec), specificity (Spe), precision (Pre), accuracy (Acc), MCC, F1-score (F1), and area under the ROC curve (AUC). These metrics were calculated as follows:

Rec = TP / (TP + FN)
Spe = TN / (TN + FP)
Pre = TP / (TP + FP)
Acc = (TP + TN) / (TP + TN + FP + FN)
MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
F1 = 2 × Pre × Rec / (Pre + Rec)

where TP is True Positive, TN is True Negative, FP is False Positive, and FN is False Negative. The AUC metric is obtained by plotting recall against (1 − specificity) at every threshold and then calculating the area under the curve.
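The threshold metrics are direct functions of the confusion-matrix counts, and can be transcribed as:

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    rec = tp / (tp + fn)                   # recall / sensitivity
    spe = tn / (tn + fp)                   # specificity
    pre = tp / (tp + fp)                   # precision
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    f1 = 2 * pre * rec / (pre + rec)
    return {"Rec": rec, "Spe": spe, "Pre": pre,
            "Acc": acc, "MCC": mcc, "F1": f1}
```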

3 Results

3.1 Models Based on Sequence-Based Feature Set

We calculated the cross-validation performance for each sequence-based feature set using five supervised classifiers: AdaBoost, MLP, RF, SVM, and XGBoost. The performances of these classifiers are shown in Table 4. It can be observed that no classifier is the best for all feature groups. For example, using AAC features, MLP performs best based on the AUC score, whereas using EAAC features, the RF model performs best and MLP the poorest. Among the seven feature sets, the best model used EAAC features combined with RF, with an AUC score of 0.6999. This model also had the best specificity, precision, and accuracy of all the models.
TABLE 4

Cross validation result of models from sequence-based features.

Feature group   Classifier      Rec      Spe      Pre      Acc      MCC      F1       AUC
AAC             Adaboost        0.6120   0.6013   0.2654   0.6033   0.1690   0.3700   0.6433
                MLP             0.6520   0.6192   0.2864   0.6255   0.2150   0.3977   0.6864
                Random Forest   0.6190   0.5809   0.2575   0.5881   0.1576   0.3635   0.6378
                SVM             0.6395   0.5969   0.2714   0.6050   0.1868   0.3808   0.6651
                XGBoost         0.5917   0.5482   0.2353   0.5565   0.1102   0.3362   0.6101
EAAC            Adaboost        0.5983   0.6015   0.2608   0.6009   0.1584   0.3629   0.6384
                MLP             0.5850   0.5946   0.2530   0.5928   0.1422   0.3529   0.6323
                Random Forest   0.6450   0.6598   0.3089   0.6570   0.2450   0.4171   0.6999
                SVM             0.5967   0.6434   0.2821   0.6345   0.1923   0.3827   0.6571
                XGBoost         0.6408   0.6385   0.2945   0.6389   0.2230   0.4030   0.6834
CTDC            Adaboost        0.7050   0.5518   0.2699   0.5809   0.2019   0.3901   0.6641
                MLP             0.6867   0.6034   0.2905   0.6193   0.2300   0.4073   0.6912
                Random Forest   0.6408   0.5676   0.2579   0.5815   0.1639   0.3676   0.6556
                SVM             0.6842   0.5657   0.2705   0.5882   0.1966   0.3874   0.6765
                XGBoost         0.6367   0.5754   0.2605   0.5871   0.1672   0.3693   0.6450
CTDT            Adaboost        0.6208   0.5762   0.2566   0.5847   0.1556   0.3627   0.6261
                MLP             0.6408   0.5756   0.2622   0.5880   0.1708   0.3717   0.6439
                Random Forest   0.6025   0.5982   0.2603   0.5990   0.1588   0.3633   0.6241
                SVM             0.6425   0.5841   0.2661   0.5952   0.1787   0.3760   0.6493
                XGBoost         0.5783   0.5668   0.2390   0.5690   0.1147   0.3378   0.6015
CTDD            Adaboost        0.6358   0.6046   0.2744   0.6106   0.1904   0.3831   0.6531
                MLP             0.5942   0.5365   0.2434   0.5475   0.1120   0.3297   0.6065
                Random Forest   0.6967   0.6164   0.2994   0.6316   0.2476   0.4185   0.6987
                SVM             0.6675   0.6111   0.2877   0.6218   0.2206   0.4017   0.6794
                XGBoost         0.6675   0.6201   0.2927   0.6291   0.2282   0.4064   0.6847
PAAC            Adaboost        0.5942   0.6052   0.2611   0.6031   0.1581   0.3626   0.6253
                MLP             0.5958   0.5717   0.2462   0.5763   0.1321   0.3482   0.6261
                Random Forest   0.6375   0.5809   0.2633   0.5917   0.1723   0.3723   0.6413
                SVM             0.6617   0.5905   0.2752   0.6041   0.1990   0.3885   0.6745
                XGBoost         0.6217   0.5731   0.2554   0.5823   0.1537   0.3615   0.6375
APAAC           Adaboost        0.6125   0.5976   0.2634   0.6004   0.1662   0.3682   0.6367
                MLP             0.5658   0.5904   0.2450   0.5857   0.1237   0.3416   0.6162
                Random Forest   0.6458   0.5831   0.2671   0.5950   0.1805   0.3776   0.6464
                SVM             0.6650   0.5970   0.2794   0.6099   0.2069   0.3932   0.6777
                XGBoost         0.6425   0.5694   0.2596   0.5833   0.1668   0.3695   0.6375

3.2 Models Based on Embeddings From Pre-trained Transformer Models

Based on the embeddings extracted from the pre-trained transformer models, we evaluated the same five supervised classifiers. The performance results of the models are presented in Table 5. The combination of the ProtBERT model and SVM matched the recall score of the classic sequence-based features; however, all other metrics were lower. In this experiment, the best model with respect to the AUC score was the combination of ProtAlbert features and the SVM classifier (AUC = 0.6744). This model also had the highest cross-validation scores for precision, MCC, and F1-score. It can also be noted that, of the six embedding feature sets, SVM performed best on four, compared with the other machine learning algorithms.
TABLE 5

Cross validation result of models from pre-trained transformer models.

Feature group        Classifier      Rec      Spe      Pre      Acc      MCC      F1       AUC
ProtBERT             Adaboost        0.5767   0.5680   0.2389   0.5697   0.1142   0.3374   0.5996
                     MLP             0.5892   0.5608   0.2395   0.5662   0.1187   0.3396   0.6128
                     Random Forest   0.5567   0.6426   0.2681   0.6262   0.1602   0.3616   0.6415
                     SVM             0.7042   0.4775   0.2420   0.5207   0.1475   0.3578   0.6275
                     XGBoost         0.6033   0.6007   0.2619   0.6012   0.1616   0.3649   0.6398
ProtBert-BFD         Adaboost        0.5433   0.5547   0.2231   0.5525   0.0773   0.3162   0.5776
                     MLP             0.5900   0.5645   0.2420   0.5694   0.1218   0.3430   0.6076
                     Random Forest   0.5383   0.6230   0.2510   0.6069   0.1289   0.3421   0.6122
                     SVM             0.6242   0.5819   0.2595   0.5899   0.1626   0.3662   0.6420
                     XGBoost         0.5908   0.5733   0.2453   0.5766   0.1295   0.3464   0.6142
ProtAlbert           Adaboost        0.5875   0.5753   0.2450   0.5776   0.1284   0.3456   0.6193
                     MLP             0.5858   0.6189   0.2657   0.6126   0.1646   0.3615   0.6407
                     Random Forest   0.5808   0.6316   0.2703   0.6220   0.1697   0.3687   0.6535
                     SVM             0.6283   0.6136   0.2767   0.6164   0.1919   0.3840   0.6744
                     XGBoost         0.6092   0.5927   0.2604   0.5958   0.1597   0.3646   0.6477
ProtT5-XL-UniRef50   Adaboost        0.5533   0.5655   0.2306   0.5632   0.0938   0.3254   0.5897
                     MLP             0.6192   0.5633   0.2501   0.5739   0.1439   0.3558   0.6296
                     Random Forest   0.5608   0.6171   0.2562   0.6064   0.1419   0.3515   0.6237
                     SVM             0.6583   0.5710   0.2653   0.5876   0.1807   0.3777   0.6600
                     XGBoost         0.5933   0.5807   0.2497   0.5831   0.1377   0.3509   0.6183
ProtT5-XL-BFD        Adaboost        0.5892   0.5600   0.2395   0.5656   0.1175   0.3405   0.5959
                     MLP             0.6000   0.5768   0.2502   0.5812   0.1396   0.3529   0.6188
                     Random Forest   0.5392   0.6163   0.2485   0.6017   0.1242   0.3399   0.6145
                     SVM             0.6550   0.5625   0.2604   0.5801   0.1711   0.3724   0.6548
                     XGBoost         0.5858   0.5862   0.2490   0.5862   0.1361   0.3489   0.6224
ProtXLNet            Adaboost        0.5125   0.5343   0.2057   0.5302   0.0369   0.2934   0.5421
                     MLP             0.5325   0.5248   0.2081   0.5262   0.0450   0.2991   0.5463
                     Random Forest   0.5050   0.5668   0.2152   0.5551   0.0568   0.3015   0.5511
                     SVM             0.4742   0.5770   0.2103   0.5575   0.0408   0.2900   0.5460
                     XGBoost         0.5642   0.5504   0.2274   0.5530   0.0902   0.3238   0.5652

3.3 Models Based on Combination of Sequence-Based Feature and Pre-trained Transformer Models Feature Set

To obtain the best model, we tested various combinations of two or more feature sets, such as AAC-EAAC, AAC-CTDC, and AAC-ProtBert, limiting each combination to at most one embedding feature set. As in the previous experiments, five classification algorithms were used: AdaBoost, XGBoost, SVM (RBF kernel), RF, and MLP. Our best model, ProtTrans-Glutar, uses a combination of the features CTDD, EAAC, and ProtT5-XL-UniRef50 with the XGBoost classification algorithm. Its performance is shown in Table 6, compared with the best model from the sequence-based features (EAAC with RF classifier) and the best model from the protein embeddings (ProtAlbert with SVM classifier). According to the cross-validation performance on the training data, this model has the best AUC and recall of the three. These three models were then evaluated on the independent dataset (Figure 2). The test results show that ProtTrans-Glutar outperformed the other two models in terms of AUC, recall, precision, MCC, and F1-score. However, it has notably lower specificity and slightly lower accuracy than the EAAC + RF model.
TABLE 6

Performance comparison of the best models in each group.

Evaluation                     Models                 Length   Rec      Spe      Pre      Acc      MCC      F1       AUC
10-fold CV on training data    ProtTrans-Glutar (a)   1,599    0.6783   0.6277   0.3004   0.6374   0.2433   0.4158   0.7093
                               ProtAlbert + SVM       4,096    0.6283   0.6136   0.2767   0.6164   0.1919   0.3840   0.6744
                               EAAC + RF              380      0.6450   0.6598   0.3089   0.6570   0.2450   0.4171   0.6999
Independent test set           ProtTrans-Glutar (a)   1,599    0.7864   0.6286   0.3147   0.6567   0.3196   0.4494   0.7075
                               ProtAlbert + SVM       4,096    0.6500   0.6286   0.2753   0.6324   0.2161   0.3866   0.6393
                               EAAC + RF              380      0.6409   0.6739   0.2989   0.6680   0.2479   0.4076   0.6574

(a) Model uses the combined features CTDD-EAAC-ProtT5-XL-UniRef50 with the XGBoost classifier.

FIGURE 2

Independent test evaluation of the best models from each group.

As shown in the ROC curves of the three models (Figure 3), EAAC + RF performed better at low values of FPR, but at higher values, ProtTrans-Glutar performed better. ProtAlbert + SVM performed worse over most of the FPR range. Overall, ProtTrans-Glutar was the best model, with an AUC of 0.7075.
FIGURE 3

ROC-Curve plot of best models in each group.


4 Discussion

Our study showed that building prediction models from traditional sequence-based features alone provided limited performance (Table 4). Using only embeddings from pre-trained protein models gave slightly worse results, except for recall, which was almost the same (Table 5). When we combined features from the two groups, the best performance was achieved by the combination of CTDD, EAAC, and ProtT5-XL-UniRef50 features with the XGBoost classifier (independent test AUC = 0.7075). Notably, ProtT5-XL-UniRef50 was not the best embedding model in the individual feature evaluation (see Table 5), but combined with CTDD and EAAC it outperformed the other models. It is worth mentioning that Elnaggar et al. (2021), who developed and trained the protein models, reported that the ProtT5 models outperformed other embedding models on protein classification tasks, namely prediction of localization (10-class classification) and prediction of membrane/other (binary classification). For further evaluation, we compared our model with previous glutarylation site prediction models (Table 7). The first three models, GlutPred, iGlu-Lys, and MDDGlutar, used datasets different from ours and are shown for reference. The other model, iGlu_Adaboost, utilized the same public dataset as our model, containing glutarylation sites from the same four species. ProtTrans-Glutar outperformed the other models in terms of recall (Rec = 0.7864 for unbalanced data). This high recall suggests that the model can be useful for uncovering new and potential glutarylation sites.
TABLE 7

Performance comparison of existing models.

Models             Resources                  Rec      Spe      Pre      Acc      MCC      F1       AUC
GlutPred           PLMD                       0.5179   0.7850   0.2397   0.7541   0.2238   n/a      0.7663
iGlu-Lys           PLMD                       0.5143   0.9531   n/a      0.8853   0.52     n/a      0.8842
MDDGlutar          PLMD                       0.652    0.739    n/a      0.71     0.38     n/a      n/a
iGlu_AdaBoost      PLMD, NCBI, Swiss-Prot     0.7273   0.7192   0.3596   0.7207   0.36     0.48     0.6300
ProtTrans-Glutar   PLMD, NCBI, Swiss-Prot     0.7822   0.6286   0.3147   0.6567   0.3196   0.4494   0.7075
Furthermore, we also evaluated our model on balanced training and testing data obtained by random under-sampling, for comparison with the RF-GlutarySite model (Table 8), which uses the same dataset but balances it before evaluating performance. Because the authors of RF-GlutarySite did not release their data after the resampling process, we repeated the experiment 10 times to account for the variance introduced by under-sampling. ProtTrans-Glutar achieved a higher recall (0.7864) than RF-GlutarySite (0.7410), along with slightly higher accuracy, MCC, and F1-score; however, its specificity and precision were lower.
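Random under-sampling drops majority-class (non-glutarylated) samples until both classes are equal in size; repeating it over several seeds and averaging the scores, as done here across 10 runs, smooths out the variance. A minimal sketch (the function name and seed handling are illustrative assumptions, not the authors' implementation):

```python
import random

def undersample(samples, labels, seed=0):
    """Randomly discard majority-class samples until both classes match in size."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    major, minor = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    kept = rng.sample(major, len(minor)) + minor  # keep all of the minority class
    rng.shuffle(kept)
    return [samples[i] for i in kept], [labels[i] for i in kept]

# To mirror the 10-run protocol, resample with seeds 0..9, train and score a
# classifier on each balanced set, and report the mean of the metrics.
```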
TABLE 8

Performance comparison with RF-GlutarySite using balanced train and test data.

Models                       Resources                Rec     Spe     Pre     Acc     MCC     F1      AUC
RF-GlutarySite (a)           PLMD, NCBI, Swiss-Prot   0.741   0.685   0.72    0.713   0.43    0.72    0.72
ProtTrans-Glutar (balanced)  PLMD, NCBI, Swiss-Prot   0.7864  0.6455  0.6955  0.7159  0.4388  0.7358  0.7159

(a) The RF-GlutarySite model balanced the training and testing datasets using under-sampling.

In summary, the model improved recall over the existing models but did not improve the other metrics. We note, however, that GlutPred, iGlu-Lys, and MDDGlutar built their glutarylation datasets from less diverse sources (two species only), whereas ProtTrans-Glutar, RF-GlutarySite, and iGlu_Adaboost used newer datasets covering four species. The more diverse source of glutarylation sites may make performance harder to improve, especially in terms of specificity and accuracy. Compared with iGlu_Adaboost, which used the same dataset, our model improved recall and AUC; its lower specificity, however, remains a challenge for future research.

5 Summary

In this study, we presented a new glutarylation site predictor, termed ProtTrans-Glutar, that incorporates embeddings from pre-trained protein models as features. It combines three feature sets: EAAC, CTDD, and ProtT5-XL-UniRef50. Random under-sampling was used in conjunction with the XGBoost classifier to train the model. The model's recall, specificity, and AUC on the independent test set are 0.7864, 0.6286, and 0.7075, respectively. Compared to other models trained on the same, more diverse dataset of glutarylation sites, it outperformed the existing models in terms of recall and AUC and could potentially complement previous models in revealing new glutarylated sites. In the future, refinements can be expected through further experiments, such as applying other feature selection methods, additional feature processing, and investigating deep learning models.
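Of the three feature sets, EAAC (enhanced amino acid composition) is the simplest to illustrate: amino-acid frequencies are computed in a fixed-size window sliding over the peptide centered on the candidate lysine, and the per-window frequency vectors are concatenated. A minimal sketch, assuming the common sub-window size of 5 (the exact window settings here are an assumption, not taken from the paper):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def eaac(peptide, window=5):
    """EAAC encoding: 20 amino-acid frequencies per sliding sub-window.

    Characters outside the 20 standard amino acids (e.g. '-' padding at
    sequence termini) are excluded from the frequency denominator.
    """
    features = []
    for start in range(len(peptide) - window + 1):
        sub = peptide[start:start + window]
        counted = sum(sub.count(a) for a in AMINO_ACIDS)
        for a in AMINO_ACIDS:
            features.append(sub.count(a) / counted if counted else 0.0)
    return features
```

For a peptide of length L this yields 20 * (L - window + 1) features, which are then concatenated with the CTDD descriptors and the ProtT5-XL-UniRef50 embedding before classification.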
References (20 in total)

1.  Classification of nuclear receptors based on amino acid composition and dipeptide composition.

Authors:  Manoj Bhasin; Gajendra P S Raghava
Journal:  J Biol Chem       Date:  2004-03-23       Impact factor: 5.157

2.  The role of mitochondrial sirtuins in health and disease.

Authors:  Brenna Osborne; Nicholas L Bentley; Magdalene K Montgomery; Nigel Turner
Journal:  Free Radic Biol Med       Date:  2016-05-06       Impact factor: 7.376

3.  GT-Finder: Classify the family of glucose transporters with pre-trained BERT language models.

Authors:  Syed Muazzam Ali Shah; Semmy Wellem Taju; Quang-Thai Ho; Trinh-Trung-Duong Nguyen; Yu-Yen Ou
Journal:  Comput Biol Med       Date:  2021-02-07       Impact factor: 4.589

4.  PLMD: An updated data resource of protein lysine modifications.

Authors:  Haodong Xu; Jiaqi Zhou; Shaofeng Lin; Wankun Deng; Ying Zhang; Yu Xue
Journal:  J Genet Genomics       Date:  2017-05-03       Impact factor: 4.275

5.  The Mitochondrial Acylome Emerges: Proteomics, Regulation by Sirtuins, and Metabolic and Disease Implications.

Authors:  Chris Carrico; Jesse G Meyer; Wenjuan He; Brad W Gibson; Eric Verdin
Journal:  Cell Metab       Date:  2018-03-06       Impact factor: 27.287

6.  Features and regulation of non-enzymatic post-translational modifications.

Authors:  Robert Harmel; Dorothea Fiedler
Journal:  Nat Chem Biol       Date:  2018-02-14       Impact factor: 15.040

7.  FAD-BERT: Improved prediction of FAD binding sites using pre-training of deep bidirectional transformers.

Authors:  Quang-Thai Ho; Trinh-Trung-Duong Nguyen; Nguyen Quoc Khanh Le; Yu-Yen Ou
Journal:  Comput Biol Med       Date:  2021-02-08       Impact factor: 4.589

8.  Akt-dependent metabolic reprogramming regulates tumor cell histone acetylation.

Authors:  Joyce V Lee; Alessandro Carrer; Supriya Shah; Nathaniel W Snyder; Shuanzeng Wei; Sriram Venneti; Andrew J Worth; Zuo-Fei Yuan; Hee-Woong Lim; Shichong Liu; Ellen Jackson; Nicole M Aiello; Naomi B Haas; Timothy R Rebbeck; Alexander Judkins; Kyoung-Jae Won; Lewis A Chodosh; Benjamin A Garcia; Ben Z Stanger; Michael D Feldman; Ian A Blair; Kathryn E Wellen
Journal:  Cell Metab       Date:  2014-07-03       Impact factor: 27.287

9.  Using amphiphilic pseudo amino acid composition to predict enzyme subfamily classes.

Authors:  Kuo-Chen Chou
Journal:  Bioinformatics       Date:  2004-08-12       Impact factor: 6.937

10.  ProtTrans: Toward Understanding the Language of Life Through Self-Supervised Learning.

Authors:  Ahmed Elnaggar; Michael Heinzinger; Christian Dallago; Ghalia Rehawi; Yu Wang; Llion Jones; Tom Gibbs; Tamas Feher; Christoph Angerer; Martin Steinegger; Debsindhu Bhowmik; Burkhard Rost
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2022-09-14       Impact factor: 9.322

