Literature DB >> 35462533

SNARER: new molecular descriptors for SNARE proteins classification.

Alessia Auriemma Citarella1, Luigi Di Biasi2, Michele Risi2, Genoveffa Tortora2.   

Abstract

BACKGROUND: SNARE proteins play an important role in different biological functions. This study aims to investigate the contribution of a new class of molecular descriptors (called SNARER) related to the chemical-physical properties of proteins in order to evaluate the performance of binary classifiers for SNARE proteins.
RESULTS: We constructed a SNARE proteins balanced dataset, D128, and an unbalanced one, DUNI, on which we tested and compared the performance of the new descriptors presented here in combination with the feature sets (GAAC, CTDT, CKSAAP and 188D) already present in the literature. The machine learning algorithms used were Random Forest, k-Nearest Neighbors and AdaBoost and oversampling and subsampling techniques were applied to the unbalanced dataset. The addition of the SNARER descriptors increases the precision for all considered ML algorithms. In particular, on the unbalanced DUNI dataset the accuracy increases in parallel with the increase in sensitivity while on the balanced dataset D128 the accuracy increases compared to the counterpart without the addition of SNARER descriptors, with a strong improvement in specificity. Our best result is the combination of our descriptors SNARER with CKSAAP feature on the dataset D128 with 92.3% of accuracy, 90.1% for sensitivity and 95% for specificity with the RF algorithm.
CONCLUSIONS: The performed analysis has shown how the introduction of molecular descriptors linked to the chemical-physical and structural characteristics of the proteins can improve the classification performance. Additionally, it was pointed out that performance can change based on using a balanced or unbalanced dataset. The balanced nature of training can significantly improve forecast accuracy.
© 2022. The Author(s).

Entities:  

Keywords:  AdaBoost; KNN; Machine learning; Protein classification; Random forest; SNARE

Mesh:

Substances:

Year:  2022        PMID: 35462533      PMCID: PMC9035248          DOI: 10.1186/s12859-022-04677-z

Source DB:  PubMed          Journal:  BMC Bioinformatics        ISSN: 1471-2105            Impact factor:   3.307


Background

SNARE (Soluble N-ethylmaleimide sensitive factor Attachment protein Receptor) is a protein superfamily involved in molecular trafficking between the different cellular compartments [1]. This evolutionarily conserved family includes members from yeast to mammalian cells. Vesicle-mediated transport is essential for basic cellular processes, such as the secretion of proteins and hormones, the release of neurotransmitters, the phagocytosis of pathogens by the immune system and the transport of molecules from one compartment of the cell to another. Vesicular transport involves membrane receptors responsible for vesicle recognition, the activation of membrane fusion and reorganization, and the consequent release of the vesicular content into the extracellular space (exocytosis) or inside the cell (endocytosis). Specifically, SNARE complexes mediate membrane fusion during these processes, providing a bridging bond between SNARE proteins associated with both membranes [2]. SNARE proteins consist of motifs of 60–70 amino acids containing hydrophobic heptad repeats which form coiled-coil structures. The core of the SNARE complex is a bundle of four α-helices, as evidenced by the available crystallographic structures [3]. The center of the bundle contains 16 stacked layers, all hydrophobic except the central layer “0”, called the ionic layer, which contains three highly conserved glutamine (Q) residues and a conserved arginine (R) residue (see Fig. 1).
Fig. 1

Visualization of the layers of the bundle of the fusion complex between the 4 parallel α-helices of the SNARE: 7 upstream layers (layers from − 1 to − 7) and 8 downstream layers (layers from + 1 to + 8) of the ionic layer (the layer 0) [4]

SNARE proteins were initially divided into two categories: vesicle or v-SNAREs, which are incorporated into the vesicle membranes, and target or t-SNAREs, which are associated with the target membranes. A more recent subdivision is based on their structural characteristics, dividing them into R-SNAREs and Q-SNAREs. The R-SNARE proteins contain an arginine (R) residue which contributes to the formation of the complex, while Q-SNARE proteins contain a glutamine (Q) residue and, according to their position in the bundle of four helices, are classified in turn as Qa, Qb or Qc [4]. In recent years, attention to SNARE proteins has increased due to scientific studies showing the implication of SNAREs in some neural disorders, given their crucial role in neuronal and neurosensory release at synaptic endings [5]. Neurotransmitter release is a temporally and spatially regulated process which occurs thousands of times per minute. In this context, SNARE complexes are continuously subject to tightly regulated assembly and disassembly. Impairment at any stage of this release can lead to hypo- or hyperactivity of neurotransmitter release, causing dysfunctions which compromise the balance of synaptic communication. There is evidence that these proteins are involved in the course of neurodegenerative diseases (such as Alzheimer's and Parkinson's), in neurodevelopmental disorders (autism) and in psychiatric disorders (such as bipolar disorder, schizophrenia and depression). Different studies have shown the involvement of mutated or improperly regulated SNARE genes in the development of these disorders [6-11]. Nowadays the collection of protein sequences is constantly growing.
There is a need for efficient classification systems able to define the functionality of a protein based on its chemical-physical properties and to label the sequence with greater precision. The more information we can gather about a protein, the better our ability to fit it into a more complex biological framework. This is especially evident and useful when considering a protein with an initially unknown function. The most common approach is to evaluate whether the protein contains functional motifs and domains that allow it to be characterized starting from its amino acid sequence, and to assess its membership in a protein family whose members have similar three-dimensional structures, similar functions and significant sequence similarities. Knowledge of the protein family representatives is therefore necessary to define their role and their mechanisms in specific physiological and pathological biological pathways. High-throughput sequencing techniques generate huge amounts of data belonging to different biological domains, including protein sequencing [12]. These data volumes (up to petabytes) must be computationally analyzed with ever newer techniques for the identification of different genomic and protein regions. The current challenge is to contribute to this post-sequencing analysis and classification and to ensure greater precision in discriminating the available protein sequences. The importance of the evolutionary SNARE superfamily is strictly connected to its role in different cellular functions and pathological conditions [13, 14], which pushes researchers to deepen its recognition in biological pathways.

Related works

Since SNARE proteins are involved in numerous biological processes, studies aiming to identify and classify these proteins have grown somewhat in recent decades, but papers dealing with this topic are still few. The literature contains works based on different techniques, ranging from statistical models to convolutional neural networks. Kloepper et al. [15] implemented a web-based interface which allows new sequences to be submitted to Hidden Markov Models (HMM) for the four main groups of the SNARE family, in order to classify SNARE proteins based on sequence alignment and reconstruction of the phylogenetic tree. For their study, a set of 150 SNARE proteins is used in conjunction with the highly conserved motif that is the sequence pattern signature of the SNARE protein family. For SNARE proteins, this motif is an extended segment arranged in heptad repeats, a structural motif consisting of a seven-amino-acid repeating pattern. The extraction of HMM profiles, which make it possible to identify evolutionary changes in a set of correlated sequences, returns information on the occupancy and position-specific frequency of each amino acid in the alignment. Using this method, the authors obtain a classification accuracy of at least 95% for nineteen of the twenty HMM profiles generated and perform a cluster analysis based on functional subgroups. Nguyen et al. [16] presented a model based on a two-dimensional convolutional network and position-specific scoring matrix profiles for SNARE protein identification. The authors used multiple hidden layers for their models, in particular 2D sub-layers such as zero padding, convolutional, max pooling and fully-connected layers with different numbers of filters. Their model achieves a sensitivity of 76.6%, an accuracy of 89.7% and a specificity of 93.5%.
More recently, in 2020, Guilin Li [17] proposed a hybrid model which combines the random forest algorithm with an oversampling filter and the 188D feature extraction method. His work proposes different combinations of feature extraction methods, filtering methods and classification algorithms such as KNN, RF and AdaBoost for the classification of SNARE proteins. Since those results are shown only graphically, a precise comparison with our results is not possible.

Methods

Dataset preparation

We constructed two datasets, named DUNI and D128. Both datasets were used to evaluate each classifier's robustness in unbalanced and balanced training environments, in order to avoid learning bias in classification training. In both datasets, SNARE proteins were downloaded from UNIPROT.1 For this purpose, we selected all the proteins with molecular function “SNAP receptor activity”, identified by the unique Gene Ontology [18] alphanumeric code GO: 0005484. The dataset DUNI consists of 276 SNAREs and 806 non-SNAREs. On this unbalanced dataset, we applied the subsampling and oversampling techniques used in [17]. The balanced dataset D128 is composed of 64 SNARE sequences from UNIPROT and 64 non-SNARE protein sequences downloaded from the PDB database.2 In order to create a balanced, non-redundant dataset and improve its quality, all SNARE protein sequences in FASTA format were processed with the CD-HIT (Cluster Database at High Identity with Tolerance)3 program, which returns a set of non-redundant representative sequences as output. CD-HIT uses an incremental clustering algorithm. First, it sorts the sequences in descending order of length and creates the first cluster, in which the longest sequence is the representative. Then each remaining sequence is compared with the cluster representatives: if its similarity to a representative is above a certain threshold, the sequence is grouped into that cluster; otherwise, a new cluster is created with that sequence as the representative [19]. The similarity threshold chosen was 25%. This step is very important, since it removes sequences which exceed the similarity threshold and could invalidate the analysis by causing unwanted bias. Sequence similarity is measured as the percentage of similar residues between two sequences.
The lower the sequence similarity, the greater the likelihood of having representative proteins in the dataset which consequently show no redundancy [20].
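The incremental clustering step described above can be sketched as follows. This is a minimal illustration only: CD-HIT itself uses short-word counting and banded alignment for speed, whereas Python's difflib stands in here as a toy similarity measure.

```python
from difflib import SequenceMatcher


def greedy_cluster(sequences, threshold=0.25):
    """Greedy incremental clustering in the spirit of CD-HIT:
    sort sequences by length (descending), assign each one to the
    first cluster whose representative it matches above `threshold`,
    otherwise open a new cluster with it as the representative."""
    clusters = []  # each cluster is a list; clusters[i][0] is its representative
    for seq in sorted(sequences, key=len, reverse=True):
        for cluster in clusters:
            # Toy similarity stand-in for CD-HIT's alignment-based identity.
            if SequenceMatcher(None, seq, cluster[0]).ratio() >= threshold:
                cluster.append(seq)
                break
        else:
            clusters.append([seq])
    return clusters
```

With a 25% threshold, only clearly dissimilar sequences end up as separate representatives, which is what keeps the final dataset non-redundant.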

Feature extraction methods

In order to analyze data derived from protein sequences with ML techniques, a numerical representation is required for each amino acid in the protein. For this reason, a series of numerical parameters is often used to act as chemical-physical and structural descriptors of proteins. Combining a set of carefully chosen descriptors increases classification efficiency and allows functional protein families to be predicted [21]. Identifying the right features for machine learning-based protein classification is one of the open issues in this field, and the right feature combination is important to ensure greater classifier accuracy [22]. In the literature, over the years, many amino acid indices and features have been identified for classification methods, such as amino acid composition (AAC), auto-correlation functions [23] or pseudo amino acid composition (PseAAC) [24]. We chose the following four descriptor sets to compare our SNARER descriptors with those currently used in SNARE protein classification. For the SNARER class itself, we selected 24 descriptors, 19 of which come from AAindex, i.e., the Amino Acid index database [28]. They were extracted manually, on the basis of the chemical-physical, electrical and energy charge characteristics of the SNARE proteins, according to their principal biological information already known in the literature. We chose features that consider the propensity of individual amino acids to form helices, sheets and coils. Since SNARE proteins mainly exhibit a helix structure, we opted to evaluate features related to this structure. Other features are related to solvent accessibility, to the ability to interact with the surrounding environment and to energy effects of amino acid residues in SNARE proteins.
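As an illustration of how an AAindex-style per-residue property can be turned into a single per-sequence value, the sketch below averages the index values over the residues of a sequence. Mean aggregation is our assumption for illustration only, and the index values shown are invented, not real AAindex entries.

```python
def sequence_descriptor(seq, index):
    """Collapse a per-residue AAindex-style property table into one
    value per sequence by averaging over the residues found in the
    table. (Mean aggregation is an illustrative assumption.)"""
    values = [index[aa] for aa in seq if aa in index]
    return sum(values) / len(values) if values else 0.0


# Hypothetical property values for a few residues (not real AAindex data).
toy_index = {"Q": 0.2, "R": 1.0, "L": -0.5}
```

Applied to each of the selected indices, this yields one numeric descriptor per index per protein.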
GAAC (Grouped Amino Acid Composition) groups the 20 amino acids into five groups based on their chemical-physical properties and calculates the frequency of each of the five groups in a protein sequence. Specifically, the five groups are: positive charge (K, R, H), negative charge (D, E), aromatic (F, Y, W), aliphatic (A, G, I, L, M, V) and uncharged (C, N, P, Q, S, T) [25]. CTDT (Composition/Transition/Distribution) represents the distribution of amino acid composition patterns of a specific chemical-physical or structural property in the protein sequence. The final T represents the transition between three types of patterns (neutral group, hydrophobic group and polar group), for which the percentage occurrence frequency is calculated [25]. CKSAAP features are sequence-based features which, given a sequence, count the occurrences of k-spaced amino acid pairs (adjacent pairs when k = 0). Since there are 20 amino acids, for each value of k (from 0 to 5) there are 400 possible pairs of amino acids, for a total of 2400 features [26]. The 188D features constitute a feature vector of which the first 20 represent the frequencies of each amino acid, while eight types of chemical-physical properties (such as hydrophobicity, polarizability, polarity, surface tension, etc.) yield the remaining 168 features: for each property, 21 features are extracted [27]. In this work, we opted for this subset of descriptors to assess their behavior in the presence of features that are already widely used in the literature. The remaining descriptors (Steric parameter, Polarizability, Volume, Isoelectric point, Helix probability, Sheet probability and Hydrophobicity) are the amino acid parameter sets defined by Fauchere et al. [29]. They are all listed in Table 1.
Table 1

The SNARER descriptors

Code                      Description                                                      Source
ARGP820102                Signal sequence helical potential                                AAindex [28]
CHAM830101                The Chou-Fasman parameter of the coil conformation
CHAM830107                A parameter of charge transfer capability
CHAM830108                A parameter of charge transfer donor capability
CHOP780204, CHOP780206    Normalized frequency of N-terminal helix / non-helical region
CHOP780205, CHOP780207    Normalized frequency of C-terminal helix / non-helical region
EISD860101                Solvation free energy
FAUJ880108                Localized electrical effect
FAUJ880111                Positive charge
FAUJ880112                Negative charge
GUYH850101                Partition energy
JANJ780101                Average accessible surface area
KRIW790101                Side chain interaction parameter
ZIMJ680102                Bulkiness
ONEK900102                Helix formation parameters (delta delta G)
Steric parameter                                                                           Fauchere et al. [29]
Polarizability
Volume
Isoelectric point
Helix probability
Sheet probability
Hydrophobicity
We used iFeature [25] for the feature extraction of GAAC, CKSAAP and CTDT, and MSFBinder [30] for 188D.
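The GAAC encoding is simple enough to sketch directly from its definition above. The group memberships below are exactly those given in the text; the function itself is our minimal illustration, not the iFeature implementation.

```python
from collections import Counter

# The five GAAC groups as defined in the text [25].
GAAC_GROUPS = {
    "positive":  set("KRH"),
    "negative":  set("DE"),
    "aromatic":  set("FYW"),
    "aliphatic": set("AGILMV"),
    "uncharged": set("CNPQST"),
}


def gaac(seq):
    """Return the frequency of each chemical-physical group in `seq`."""
    counts = Counter()
    for aa in seq:
        for name, members in GAAC_GROUPS.items():
            if aa in members:
                counts[name] += 1
                break
    n = len(seq)
    return {name: counts[name] / n for name in GAAC_GROUPS}
```

Each protein is thus reduced to a five-dimensional frequency vector, regardless of its length.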

Classification algorithms

The work of Guilin Li [17] is based on the descriptors GAAC, CTDT, 188D and CKSAAP together with subsampling and oversampling methods. That study compared three machine learning algorithms, AdaBoost, K-Nearest Neighbors and Random Forest, to predict SNARE proteins, achieving high accuracy in combination with all four feature extraction methods. In particular, the Random Forest algorithm with an oversampling filter and the 188D feature extraction approach had the best performance. Following [17], given the high performances reported, we used the same three classification algorithms to evaluate how accuracy varies when the SNARER descriptors are used. Thus, we compared three different ML algorithms: AdaBoost (ADA), K-Nearest Neighbors (KNN) and Random Forest (RF). AdaBoost is a machine learning meta-algorithm used in binary classification. It is an adaptive algorithm which generates a model that is overall better than the individual weak classifiers, adapting to the accuracy of the weak hypotheses and generating one weighted-majority hypothesis in which the weight of each weak hypothesis is a function of its accuracy. At each iteration, a new weak classifier is added which corrects its predecessor, until a final hypothesis with a low relative error is found [31]. KNN is a supervised learning algorithm used for predictive classification and regression problems. It classifies an object based on the similarity between data points, generally computed via the Euclidean distance, thereby partitioning the space into regions according to the similarity of the learning objects. The algorithm identifies a collection of k objects in the training set that are the most similar to the test object; the parameter k, chosen arbitrarily, sets the number of nearest neighbors considered, taking the k minimum distances. The prevailing class in this neighborhood determines the label assigned to the object [32]. RF is a supervised learning algorithm that combines many decision trees into one model by aggregation through bagging. The final result of the RF is the class returned by the largest number of decision trees. In particular, the random forest algorithm learns from random samples of the data and trains each tree on random subsets of the features when splitting the nodes [33].
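For orientation, the three classifiers compared here can be set up as in the following sketch. The actual experiments were run in Weka; scikit-learn, the random feature matrix and the labels below are stand-ins for illustration only.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 24))     # stand-in descriptor matrix (24 features)
y = rng.integers(0, 2, size=128)   # stand-in SNARE / non-SNARE labels

models = {
    "RF":  RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=1),  # Euclidean distance, k = 1
    "ADA": AdaBoostClassifier(n_estimators=10, random_state=0),
}
# Mean 10-fold cross-validated accuracy per model.
results = {name: cross_val_score(m, X, y, cv=10).mean()
           for name, m in models.items()}
```

On random labels the accuracies hover around chance; with real SNARE descriptors the same loop reproduces the kind of comparison reported in the tables below.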

Training and validation sessions

All training sessions were conducted with the Weka ML platform (Waikato Environment for Knowledge Analysis), a software environment written in Java which allows the application of machine learning and data mining algorithms [34]. In order to speed up the analysis, an ad-hoc grid based on the map/reduce paradigm was used to distribute the work across multiple slaves [35]. Both datasets were used as input for the training step of the AdaBoost, KNN and RF classifiers. There were only two possible output classes: SNARE / non-SNARE. For each training session, we used the following cross-validation values: k-fold from 10 to 100 and hold-out from 20% to 80%. As a result, the ratio of samples in the training and validation sets is variable. Moreover, in addition to other parameters configured as in [17], we set k = 1 and the Euclidean distance for the distanceFunction of KNN; for the AdaBoost algorithm, default values are weightThreshold = 100 and numIterations = 10, whilst for RF numIterations = 100. The complete working set was composed of four logical parts: i) DUNI non-filtered; ii) DUNI oversampled; iii) DUNI subsampled; iv) D128 non-filtered. For each training session, we generated 10 k-fold variants and 7 hold-out variants. Then, for each variant we computed 100 training sessions of each of the three classifiers for each of the four descriptors. Thus, we distributed up to 836,000 training sessions across the distributed computing environment.
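The experimental grid described above can be enumerated as in the sketch below. The step sizes (10 for k-fold, 10 percentage points for hold-out) are inferred from the stated ranges and variant counts, not spelled out in the text.

```python
from itertools import product

# The four logical parts of the working set and the per-session choices.
datasets    = ["DUNI", "DUNI_oversampled", "DUNI_subsampled", "D128"]
classifiers = ["RF", "KNN", "ADA"]
features    = ["GAAC", "CTDT", "CKSAAP", "188D"]
kfold_variants   = list(range(10, 101, 10))   # 10 k-fold settings (assumed step 10)
holdout_variants = list(range(20, 81, 10))    # 7 hold-out splits in % (assumed step 10)

variants = [("kfold", k) for k in kfold_variants] + \
           [("holdout", h) for h in holdout_variants]
grid = list(product(datasets, classifiers, features, variants))
```

Each grid cell is then repeated 100 times and farmed out to the map/reduce workers.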

Performance measurement

We evaluated the ML models (Random Forest, AdaBoost and KNN) on the unbalanced dataset DUNI and on the balanced dataset D128. In order to estimate the prediction performance of the three ML algorithms, accuracy (ACC), sensitivity (SN) and specificity (SP) were used. The chosen metrics are defined as follows:

ACC = (TP + TN) / (TP + TN + FP + FN)
SN = TP / (TP + FN)
SP = TN / (TN + FP)

where TP, TN, FP and FN represent the number of true positives, true negatives, false positives and false negatives, respectively. Sensitivity is the percentage of positive entities correctly identified. Specificity measures the proportion of negative entities that are correctly identified. In a biological sense, a TP in our experiment means that a protein cataloged as a SNARE is recognized by the classifier as a SNARE. The feature extraction methods were first evaluated separately (GAAC, CKSAAP, CTDT and 188D) on the datasets D128 and DUNI; these methods were then extended with the addition of the SNARER descriptors introduced in this work, identified here as extended classes “ext”.
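The three metrics follow directly from the confusion-matrix counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn)   # fraction of SNAREs correctly recognized
    sp = tn / (tn + fp)   # fraction of non-SNAREs correctly recognized
    return acc, sn, sp
```

For example, with 90 true positives, 95 true negatives, 5 false positives and 10 false negatives the function returns an accuracy of 0.925, a sensitivity of 0.9 and a specificity of 0.95.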

Results and discussion

We applied the SNARER descriptors and the three chosen ML algorithms to the unbalanced dataset DUNI and the balanced dataset D128. We first considered the four feature sets (GAAC, CTDT, CKSAAP and 188D) separately and then each one in combination with the SNARER descriptor class, identified with “.ext”. The classification performances were evaluated with three metrics: average accuracy (ACC), average sensitivity (SN) and average specificity (SP).

Results on the unbalanced dataset DUNI

Below, we report the experimental results obtained on the DUNI dataset. For the four protein feature extraction methods GAAC, CTDT, CKSAAP and 188D, the average ACCs of the ML algorithms fall in a range from 76% to 94.9%. In Fig. 2, histograms are used for the graphical comparison of the three ML techniques.
Fig. 2

Comparison between GAAC, CTDT, CKSAAP and 188D ACC with related extended classes with SNARER (on DUNI dataset)

As shown in Table 2, the introduction of the SNARER class brings a strong improvement in combination with all the considered protein feature extraction methods. Overall, the best average accuracy is achieved by the KNN model with the combination of the 188D feature set and the SNARER class. This combined model also achieves the best average SP, while the best average SN is obtained with the RF model trained using the GAAC and CTDT features separately (see Table 3).
Table 2

Performance of average ACC on the DUNI dataset

Accuracy
            RF (%)    KNN (%)   ADA (%)
GAAC         76.1      85.1      77.9
GAAC.ext     90.4      91        86.1
CTDT         76.1      83.1      76.7
CTDT.ext     91.5      92.6      81.6
CKSAAP       91.1      90.04     83.7
CKSAAP.ext   91.7      90.02     87.4
188D         93.9      94.8      88.1
188D.ext     94.1      94.9      88.7

The highest values are shown in bold

Table 3

Performance for average SN and SP on the DUNI dataset

            Sensitivity                      Specificity
            RF (%)    KNN (%)   ADA (%)     RF (%)    KNN (%)   ADA (%)
GAAC         99.8      90.3      83.6         7         7        61
GAAC.ext     97.2      94.5      94.8        70.7      80.6      60.6
CTDT         99.8      89.1      83.3         7.1      65.6      57.6
CTDT.ext     96.6      94.6      91.1        76.4      87        54
CKSAAP       97.8      98        89.9        71.7      66.7      65.5
CKSAAP.ext   97.8      98        92          74        66.7      74
188D         97        96.6      92          85        89.5      76.7
188D.ext     96.8      96.5      92.4        86.3      90.1      78

The highest values are shown in bold

For RF, SN decreases imperceptibly in the classes extended with the new descriptors, remaining unchanged for the CKSAAP method. In contrast, for RF, SP increases with the extended classes, notably for the GAAC and CTDT extraction methods. The SN of KNN increases significantly in the extended classes for GAAC and CTDT and remains substantially unchanged for CKSAAP and 188D. The same trend is shown for the SP of KNN, with a slight improvement for the extended 188D class. For the AdaBoost algorithm, we observe an increase in SN, mostly for the extended GAAC and CTDT classes, which however show a decrease in SP; the SP of ADA instead increases for the extended CKSAAP and 188D classes. Overall, on the unbalanced dataset the use of classes extended with our SNARER descriptors results in an improvement in accuracy for the GAAC, CTDT, CKSAAP and 188D classes for all three ML models, except for KNN trained with CKSAAP. All selected ML algorithms achieve an SN greater than 83%, with the best SN of 99.8% achieved by RF with GAAC and CTDT without extension. With the introduction of the SNARER class, the SN for all four feature sets settles in a range between 91.1% (ADA with the extended CTDT class) and 98% (KNN with the extended CKSAAP class). Regarding SP, without the SNARER descriptor extension the range extends from a minimum of 7% (RF and KNN with the GAAC class) to a maximum of 89.5% (KNN trained with the 188D feature set). With the SNARER class addition, SP ranges from a minimum of 54% (ADA with the CTDT feature set) to a maximum of 90.1% (KNN trained with the 188D feature set). More specifically, the KNN model using the 188D class extended with the SNARER descriptors achieves the best performance in all metrics except SN, where the RF model trained with the GAAC features obtains the highest value. In conclusion, on the unbalanced DUNI dataset, the new SNARER descriptor class yields an improvement in ACC in combination with all four tested feature sets and a clear improvement in SN and SP for some of the tested ML algorithms.

Results on the unbalanced dataset DUNI with oversampling and with subsampling

Because the dataset DUNI is unbalanced, we adopted subsampling and oversampling techniques. With the oversampling technique on the DUNI dataset, the SNARER class produces a strong improvement in accuracy, most markedly for the extended GAAC and CTDT classes across the three ML models RF, KNN and ADA, while the contribution to the CKSAAP and 188D feature sets remains substantially unchanged (as shown in Table 4). The same behavior holds for the average SN and average SP calculated for RF, KNN and ADA (see Table 5). Applying the subsampling technique to the DUNI dataset, we observe the same trend for SN but with a slight decrease, around 2-4%, of the values when considering the extended CKSAAP and 188D classes. The same decrease is also present for the average SPs of the same classes (see Table 6).
Table 4

Performance of the average ACC on the DUNI dataset with oversampling and subsampling

            Oversampling                     Subsampling
            RF (%)    KNN (%)   ADA (%)     RF (%)    KNN (%)   ADA (%)
GAAC         94.7      96.3      73.12       75.2      79.2      72.6
GAAC.ext     98.03     98.44     85.02       91.8      86.4      82.6
CTDT         93.9      96.1      70.4        74.6      78.1      71.7
CTDT.ext     98        98        86.3        90.6      89.7      86.4
CKSAAP       99.07     98.67     84          93.1      84.4      83.5
CKSAAP.ext   99.01     98.6      89.1        79        84.2      87.3
188D         98.5      98.90     89.5        93.1      95        86.6
188D.ext     98.5      98.95     89.6        93.5      94        89.3

The highest values are shown in bold

Table 5

Performance for average SN and SP on the DUNI dataset with oversampling

            Sensitivity                      Specificity
            RF (%)    KNN (%)   ADA (%)     RF (%)    KNN (%)   ADA (%)
GAAC         91.9      95        74.9        97.5      97.6      71.3
GAAC.ext     96.6      97.8      88.4        99.4      99.1      81.6
CTDT         88.9      94        68.8        98.8      98.2      72
CTDT.ext     96.4      96.9      78.4        99.5      99.2      94.3
CKSAAP       99        99.2      80.8        99.2      98.2      87.2
CKSAAP.ext   98.7      99.1      86.2        99.3      98.2      92
188D         97.5      98.3      90.2        99.7      99.5      88.8
188D.ext     97.7      98.3      89.4        99.3      99.7      89.9

The highest values are shown in bold

Table 6

Performance for average SN and SP on the DUNI dataset with subsampling

            Sensitivity                      Specificity
            RF (%)    KNN (%)   ADA (%)     RF (%)    KNN (%)   ADA (%)
GAAC         75.7      76.1      73.6        74.6      82.2      71.7
GAAC.ext     88.8      85.5      80.8        94.9      87.3      84.4
CTDT         78.3      76.4      73.9        71        79.7      69.6
CTDT.ext     86.6      88.4      81.9        94.6      90.9      90.9
CKSAAP       90.9      98.6      83.3        95.3      70.3      83.7
CKSAAP.ext   76.4      98.2      83.7        81.5      70.3      90.9
188D         90.9      95.3      88          95.3      94.6      85.1
188D.ext     92        93.1      88.8        94.9      94.9      89.9

The highest values are shown in bold


Results on the balanced dataset D128

Below we present the classification results obtained on the balanced dataset D128, with and without the addition of the SNARER descriptors. Table 7 reports the average accuracy performances of the ML algorithms without the SNARER descriptors on the balanced D128 dataset, and Fig. 3 depicts them as histograms: RF varies from a minimum of 71.1% with the GAAC class to a maximum of 95.4% with the 188D class; KNN settles between a minimum of 64.2% with GAAC and a maximum of 90% with the 188D class; ADA varies from a minimum of 70% with GAAC to a maximum of 90.2% trained on the 188D class. The classes extended with the SNARER descriptors shift these average ACC rates. In particular, RF varies from a minimum of 84% with the extended GAAC class to a maximum of 95.3% with the extended 188D class. KNN starts from a minimum of 65.4% with the extended GAAC class and reaches a maximum of 88.6% with the extended 188D class. ADA ranges from 84% with the GAAC.ext class to a maximum of 90% with the extended 188D class.
Table 7

Performance of average ACC for the D128 dataset

Accuracy
            RF (%)    KNN (%)   ADA (%)
GAAC         71.1      64.2      70
GAAC.ext     84        65.4      84
CTDT         73.4      66.4      70.3
CTDT.ext     88        68.7      84.1
CKSAAP       92.2      72.4      80.7
CKSAAP.ext   92.3      74.1      89.4
188D         95.4      90        90.2
188D.ext     95.3      88.6      90

The highest values are shown in bold

Fig. 3

Comparison between GAAC, CTDT, CKSAAP and 188D ACC with related extended classes with SNARER (on D128 dataset)

By comparing the evaluated average ACCs, the addition of the SNARER class improves classification performance for the GAAC, CKSAAP and CTDT feature extraction methods, while there is a slight decrease in average ACC for the 188D feature extraction class. Further analysis should be conducted to understand the reason for this decrease. In particular, the best classification results are obtained with the RF algorithm. With the extended feature extraction methods, we can note that for the RF algorithm SN increases with GAAC and CTDT while it remains fundamentally unchanged for the other two descriptor classes (see Table 8); SP increases with the same behavior. For the KNN algorithm, SN decreases for the GAAC, CTDT and 188D classes while it increases by about 2% for the CKSAAP class. The SP of KNN instead increases for all classes except 188D, where it decreases by about 2%. ADA improves in terms of SN for all extended classes, while its SP decreases by 0.8% on the extended 188D class.
Table 8

Performance for average SN and SP on the D128 dataset

            Sensitivity                      Specificity
            RF (%)    KNN (%)   ADA (%)     RF (%)    KNN (%)   ADA (%)
GAAC         80.1      65.7      74.5        62.2      63        65.4
GAAC.ext     84        62.2      88.6        83.9      69        79.2
CTDT         74.7      70.4      70          72.2      62.3      70.5
CTDT.ext     87.6      64.7      84.7        88.3      73        83.4
CKSAAP       89.7      55.4      80.2        95        89.4      81.3
CKSAAP.ext   90.1      57        89.5        95        91.2      89.4
188D         95.7      89        88.5        95.1      91        92
188D.ext     95.5      88        88.8        95.1      89.2      91.2

The highest values are shown in bold
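For reference, SN, SP and ACC are all derived from the binary confusion matrix. The sketch below is a minimal illustration, not the study's code, and the counts in the usage example are hypothetical:

```python
def confusion_metrics(tp: int, tn: int, fp: int, fn: int):
    """Compute sensitivity, specificity and accuracy from confusion counts.

    SN  = TP / (TP + FN): fraction of true SNARE proteins recovered.
    SP  = TN / (TN + FP): fraction of non-SNARE proteins correctly rejected.
    ACC = (TP + TN) / total: overall fraction of correct predictions.
    """
    sn = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return sn, sp, acc

# Hypothetical fold on a balanced 200-protein test split
# (100 SNARE / 100 non-SNARE); counts are made up for illustration.
sn, sp, acc = confusion_metrics(tp=90, tn=95, fp=5, fn=10)
print(f"SN={sn:.1%} SP={sp:.1%} ACC={acc:.1%}")  # SN=90.0% SP=95.0% ACC=92.5%
```

Note how, on a balanced split, ACC sits between SN and SP; on an unbalanced split it is pulled toward the metric of the majority class.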


Comparison between the DUNI and the D128 datasets

Carrying out experiments on unbalanced rather than balanced datasets affects the learning of the different ML algorithms. It has been observed that when tests are performed on an unbalanced dataset, greater accuracy is achieved simply because classification of each test sample towards the majority class prevails [36]. Consequently, choosing a balanced dataset for training can lead to higher-quality classification predictions. In binary classification, the correlation between the true class and the predicted class can be calculated by treating them as two binary variables. Since the ACC calculation is sensitive to class imbalance, we used the Matthews Correlation Coefficient (MCC) [37] to compare the DUNI and D128 datasets after the introduction of the SNARER descriptors. We started from the hypothesis that the proportion of correct predictions (accuracy) is not informative when the two classes have different sizes; in this case the MCC is useful, since it remains a meaningful quality measure when the class sizes differ. MCC varies in the range [-1, 1]: a value of 1 indicates a perfect prediction, -1 a perfect negative correlation, and 0 a classifier no better than random. MCC considers all four values of the confusion matrix (TP, TN, FP and FN), and a high value (close to 1) indicates that both classes are adequately covered, even if one is disproportionately under- (or over-) represented. In Table 9, we present the comparison between the MCC values for RF, KNN and ADA trained on the DUNI and D128 datasets with the extended descriptor classes.
Table 9

Comparison of MCC for the DUNI and D128 datasets

                        Matthews correlation coefficient
             Dataset    MCC RF    MCC KNN    MCC ADA
GAAC.ext     DUNI       0.74      0.76       0.61
             D128       0.69      0.32       0.70
CTDT.ext     DUNI       0.77      0.81       0.49
             D128       0.77      0.39       0.70
CKSAAP.ext   DUNI       0.77      0.73       0.69
             D128       0.86      0.53       0.80
188D.ext     DUNI       0.84      0.87       0.70
             D128       0.91      0.81       0.81

The highest values are shown in bold

The MCC of RF (see Fig. 4) improves on the balanced dataset, except for a decrease with the GAAC discriminant features and no change on the CTDT class. The MCC of KNN drops for all combined descriptors, significantly so for GAAC, CTDT and CKSAAP. In contrast, ADA's MCC improves significantly in all four conditions. The MCC values thus reflect the quality of the classifier's input data: MCC yields a high score only if the classifier correctly predicts the majority of both the positive and the negative instances. On DUNI, which is a negatively imbalanced dataset, we obtain high values of ACC, SN and SP compared to the balanced dataset (see Tables 2, 3, 7, 8); since accuracy ignores the proportion of positive and negative items, it can produce misleading values on unbalanced datasets [38]. Table 9 shows that many MCC values are higher when the algorithms are evaluated on a balanced dataset with no imbalance between positive and negative samples. In some circumstances the MCC values remain constant, owing to the classifier's ability to produce accurate predictions regardless of the ratio between classes. The MCC is lowest for the KNN algorithm, which reflects its worst performance as measured by the other metrics.
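To make the metric concrete, here is a minimal self-contained sketch of the MCC computation (illustrative only, with hypothetical counts; not the study's code):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from the four confusion-matrix counts.

    MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
    ranging from -1 (perfect inverse prediction) through 0 (random)
    to +1 (perfect prediction). By a common convention, a zero
    denominator is mapped to 0.0.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical case: a classifier that labels everything positive on a
# 90/10 imbalanced test set reaches 0.9 accuracy, yet its MCC is 0
# because no negative instance is ever recovered.
print(round(mcc(tp=90, tn=0, fp=10, fn=0), 2))  # 0.0
```

This is exactly the property exploited above: on the imbalanced DUNI dataset, ACC can look strong while the MCC exposes that one class is not actually being learned.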
Fig. 4

Graphic visualization of MCC for RF, KNN and ADA algorithms

Area under the receiver operating characteristic curve (AUC) and area under the precision-recall curve (AUPRC) were used to assess the performance across the folds of the conducted experiments. AUC is a widely used summary measure of the receiver operating characteristic (ROC) curve: a value between 0 and 1 equal to the area under the plot of SN versus 1-SP across thresholds, which represents the probability that the model ranks a random positive example higher than a random negative one. AUPRC is the area under the plot of precision versus SN across thresholds; it also varies from 0 to 1 and, for imbalanced data, is more informative than the AUC, making it a good measure for evaluating a classifier. In general, a classifier with high AUC and AUPRC values performs the given classification task better. In Tables 10 and 11, we report the average AUC and AUPRC values for the DUNI and D128 datasets, respectively. On both datasets, the AUC and AUPRC values for the extended classes are higher; the effect is more evident on the balanced D128 dataset, pointing out the importance of class balance. These results also reflect the previously observed lack of improvement for the extended 188D class.
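As an illustrative sketch (not the study's code), both areas can be computed directly from ranked scores; AUPRC is approximated here by average precision, one common estimator:

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-statistic (Mann-Whitney U) formulation:
    the probability that a random positive is scored above a random
    negative, counting ties as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    """AUPRC estimated as average precision: precision is accumulated each
    time a positive is encountered while sweeping scores downward.
    Assumes at least one positive label."""
    ranked = sorted(zip(scores, labels), reverse=True)
    tp, total = 0, 0.0
    for i, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            total += tp / i
    return total / tp

# Hypothetical scores: almost perfect ranking with one positive/negative swap.
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(round(roc_auc(labels, scores), 3))            # 0.889
print(round(average_precision(labels, scores), 3))  # 0.917
```

Because average precision weights errors by the positive class only, it reacts more strongly than AUC when positives are rare, which is why AUPRC is preferred on the imbalanced DUNI dataset.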
Table 10

Average AUC and AUPRC on the DUNI dataset

             AUC                      AUPRC
             RF     KNN    ADA       RF     KNN    ADA
GAAC         0.84   0.81   0.78      0.93   0.89   0.89
GAAC.ext     0.97   0.88   0.93      0.99   0.93   0.97
CTDT         0.84   0.79   0.79      0.94   0.88   0.89
CTDT.ext     0.97   0.91   0.90      0.99   0.95   0.96
CKSAAP       0.98   0.84   0.89      0.99   0.90   0.96
CKSAAP.ext   0.98   0.84   0.93      0.99   0.90   0.97
188D         0.98   0.94   0.94      0.99   0.96   0.98
188D.ext     0.98   0.94   0.95      0.99   0.96   0.98

The highest values are shown in bold

Table 11

Average AUC and AUPRC on the D128 dataset

             AUC                      AUPRC
             RF     KNN    ADA       RF     KNN    ADA
GAAC         0.76   0.64   0.75      0.76   0.61   0.74
GAAC.ext     0.91   0.66   0.92      0.92   0.62   0.92
CTDT         0.82   0.66   0.77      0.84   0.62   0.78
CTDT.ext     0.94   0.69   0.93      0.95   0.65   0.94
CKSAAP       0.97   0.72   0.90      0.98   0.70   0.91
CKSAAP.ext   0.97   0.74   0.96      0.98   0.72   0.96
188D         0.99   0.90   0.97      0.99   0.87   0.97
188D.ext     0.99   0.89   0.97      0.99   0.85   0.97

The highest values are shown in bold


Comparison with the state of the art

In Table 12, we present the comparison between the proposed method and the literature. The method of [15] is based on Hidden Markov Models (HMM), sequence alignment and phylogenetic tree reconstruction to classify SNARE proteins. Nguyen et al. [16] used a model with a 2D-CNN and position-specific scoring matrix profiles, while the study of Guilin Li [17] suggested a hybrid model that combines the random forest algorithm, an oversampling filter and the 188D feature extraction approach. As shown in Table 7, when comparing each extended class with its non-extended counterpart, our best result is the combination of the SNARER descriptors with the CKSAAP features on the dataset D128, with 92.3% accuracy, 90.1% sensitivity and 95% specificity using RF. On the other hand, considering the absolute results achieved on the balanced D128 dataset with our SNARER descriptors, the highest performance is obtained by the RF algorithm in combination with the 188D features.
Table 12

Comparison with reference literature

Authors             Methods               ACC       SP        SN
Kloepper et al.     HMM                   95%
Nguyen et al.       2D-CNN                89.7%     93.5%     76.6%
Guilin Li           188D-RF-oversample    90-95%    95-100%   75-80%
Our methods (dataset D128):
  (highest value)   RF-188D.ext           95.3%     95.1%     95.5%
  (best value)      RF-CKSAAP.ext         92.3%     95%       90.1%

The highest values are shown in bold

The 188D features include 20 features describing the frequency of each amino acid and 168 features based on eight types of chemical-physical properties. These features probably strengthen the representation of the biological properties of the proteins, allowing the tested classification algorithms to reach high performance levels. Further studies are needed to understand the intrinsic reasons for the improvement or decay of some parameters when the 188D features are used.

Conclusion

In recent decades, following the exponential increase of data from gene sequencing, it has become necessary to explore different ML techniques for protein identification, in support of traditional methods [39]. Recent studies on SNARE proteins have shown that their complexes are involved in the release of neurotransmitters and that their dysfunction underlies neurodegenerative, neurodevelopmental and neuropsychiatric disorders. Recognizing them with increasing precision therefore has a significant biological impact for identifying the aforementioned pathological conditions [40]. Classifying these proteins allows researchers to understand the biological pathways in which they are involved and, by increasing this knowledge, to improve possible therapeutic approaches. In this work, we tested different feature extraction methods on a balanced and an unbalanced dataset, with and without the addition of the new SNARER descriptors, in order to examine the role of balanced and unbalanced training in the binary classification of SNARE proteins. We then compared the behavior of three ML algorithms (RF, KNN and ADA) on the homogeneous and non-homogeneous datasets, evaluating the models by their average ACC, SN and SP values. Our results showed that extending the feature sets with the SNARER descriptors improved the performance of the ML algorithms on both datasets in terms of average ACC. This improvement is greater for the RF, KNN and ADA algorithms when the SNARER descriptors are combined with the 188D class. In particular, our best results on the balanced and non-redundant dataset D128 are 92.3% ACC, 90.1% SN and 95% SP, obtained with the RF algorithm on the extended class CKSAAP.ext. Evaluating the MCC for RF, KNN and ADA on both datasets trained with the extended feature sets, the ADA algorithm benefited most from the balanced dataset.
KNN, on the contrary, worsened in terms of performance, reaching a higher value only for the 188D class. Overall, the algorithms trained on the balanced dataset produce a better MCC, especially RF and, even more so, ADA, which improves in terms of ACC, SP and SN in all the considered tests; KNN, instead, shows a lower MCC than the other algorithms considered. As future work, the analysis could be extended to also identify the SNARE protein sub-categories, Q-SNAREs and R-SNAREs, based on their structural features. Furthermore, it would be useful to explore different classes of descriptors, also combined with each other, which could guarantee a better classification of the proteins under examination.