
Deep learning predicts short non-coding RNA functions from only raw sequence data.

Teresa Maria Rosaria Noviello, Francesco Ceccarelli, Michele Ceccarelli, Luigi Cerulo.

Abstract

Small non-coding RNAs (ncRNAs) are short non-coding sequences involved in gene regulation in many biological processes and diseases. The lack of a complete comprehension of their biological functionality, especially in a genome-wide scenario, has demanded new computational approaches to annotate their roles. It is widely believed that secondary structure is a key determinant of RNA function, and machine learning approaches based on secondary structure information have been successfully shown to predict RNA function. Here we show that RNA function can be predicted with good accuracy from a lightweight representation of sequence information alone, without computing secondary structure features, which is computationally expensive. This finding appears to go against the dogma of secondary structure being a key determinant of function in RNA. Compared to recent secondary structure based methods, the proposed solution is more robust to sequence boundary noise and drastically reduces the computational cost, allowing for large data volume annotations. Scripts and datasets to reproduce the results of the experiments proposed in this study are available at: https://github.com/bioinformatics-sannio/ncrna-deep.

Year:  2020        PMID: 33175836      PMCID: PMC7682815          DOI: 10.1371/journal.pcbi.1008415

Source DB:  PubMed          Journal:  PLoS Comput Biol        ISSN: 1553-734X            Impact factor:   4.475


This is a PLOS Computational Biology Methods paper.

Introduction

Recent advances in whole-transcriptome sequencing have led to the discovery of novel transcribed elements with no apparent functional or protein-coding potential. Once dismissed as "dark matter", these elements are now recognized to play key roles in gene expression regulation in many biological processes and diseases [1]. Several classes of non-coding RNAs (ncRNAs) have been discovered in recent years, underscoring their importance as regulators of cellular development and differentiation. Conventionally, ncRNAs are divided into two major classes according to their length: short (<200 nucleotides) and long (>200 nucleotides) ncRNAs. ncRNAs regulate gene expression at both the transcriptional and post-transcriptional levels, affect the organization and modification of chromatin, or have catalytic functions [2]. In particular, short ncRNAs include ribosomal RNAs (rRNAs) and transfer RNAs (tRNAs) involved in mRNA translation, small nuclear RNAs (snRNAs) involved in splicing, small nucleolar RNAs (snoRNAs) involved in the modification of rRNAs, and microRNAs (miRNAs) involved in targeted translational repression and gene silencing.

The functional characterization of ncRNAs on a genome-wide scale is currently one of the main challenges of modern genome biology because, compared to protein-coding RNAs, ncRNAs are usually less conserved and less expressed. One of the main efforts to classify ncRNAs systematically and automatically is the Rfam database [3], which groups ncRNA sequences into families on the assumption that, as for protein-coding genes, the members of a family have evolved from a common ancestor. Each family is built starting from at least one experimentally validated example from the published literature with a known functional classification. The family is then described by a multiple sequence alignment, called the seed alignment, from which a covariance model is built to search for other possible homologous sequences and expand the family. In this way, Rfam systematically annotates ncRNA sequences into families with a known function, a common ancestor, and, when available, a consensus secondary structure that can indicate the biological role of the family.

The consolidated evidence that the function of protein-coding sequences is strongly associated with the folded secondary and tertiary molecular structure has led to the assumption that secondary structure is a key factor in determining the function of non-coding RNA sequences [4]. Recently, several machine learning approaches have been shown to predict RNA function (according to the Rfam classification into families) from secondary structure information. Comparative sequence-based approaches, such as BLAST, are computationally very efficient but exhibit high false negative rates, as they are unable to detect conserved secondary structures. Folding approaches, such as GraPPLE [5], ignore nucleotide composition, are computationally expensive, and incur high false positive rates, as sequence information is not taken into account. Approaches that combine structural and sequence information offer a better trade-off between false positives and false negatives. To this aim, INFERNAL adopts a stochastic context-free grammar to capture position-specific conservation and incorporates RNA secondary structure information directly into the model [6].
A significant improvement over INFERNAL has been obtained with EDeN, a machine learning method that adopts a graph kernel to model the RNA secondary structure input representation [7]. Comparable results have been obtained with nRC, a deep learning approach based on features extracted from secondary structure [8], and with RNAGCN, based on a graph convolutional network built on RNA folding data [9]. All of these methods rely on structural features to predict ncRNA function according to the Rfam class. However, inferring the real secondary structure is still very challenging and computationally expensive, and even with well-consolidated folding tools, such as ViennaRNA [10] and iPknot [11], folding errors propagate through the pipeline and lower the final prediction accuracy.

Recently, deep learning has emerged as one of the best machine learning approaches for prediction and classification problems in a variety of contexts, such as image and speech recognition, computer vision, bioinformatics, and medical image analysis [12, 13]. Its greatest strength is that discriminative features, including features at high levels of abstraction, can be learned automatically from the input data regardless of their nature. In this paper, we show that small ncRNA function can be predicted with good accuracy from raw sequence information alone, without computing secondary structure features, which is known to be computationally demanding. Besides the advantage in computational time, this finding questions the dogma of secondary structure being a key determinant of function in RNA: with a 3-layer Convolutional Neural Network (CNN), the sequence alone is enough to predict the function of an RNA. Moreover, compared to recent secondary structure based methods, the proposed solution is more robust to sequence boundary noise and effectively rejects non-functional sequences. These two advantages, together with fast classification speed, are essential for large-scale genome annotation. CNNs have emerged as an approach to extract local feature patterns of high-level abstraction from diverse and sparsely preprocessed data [14, 15]; it is therefore likely that high-level functional RNA features are learned directly from sequences by a CNN architecture. How such features relate, if at all, to secondary structure features remains an open question.

Materials and methods

Datasets

We compare our deep learning architectures against EDeN, nRC, and RNAGCN, the current state of the art. We do not include INFERNAL, as its computational cost is prohibitive and it has been shown in the literature to be outperformed by EDeN [7]. We design the evaluation experiments around two datasets: i) a novel dataset composed of sequences extracted from the Rfam database [3]; and ii) a publicly available dataset of ncRNA sequences distributed among 13 functional macro-classes, previously adopted to evaluate RNAGCN and nRC [9], as the authors of RNAGCN do not provide a publicly available tool.

To build the novel dataset, we started with a set of 650790 sequences distributed among 2570 classes. Sequences containing letters other than the canonical A, T, C, or G were excluded to simplify computation; this is not a limitation, as they constitute a very small fraction of the total (∼9 out of 1000). To obtain a dataset of only small ncRNA sequences, we excluded classes annotated as long non-coding RNAs or with an average sequence length greater than 200 bases, obtaining a dataset of 371619 sequences among 177 Rfam classes. To avoid sequence length dependence, we removed classes that can be predicted strongly from sequence length alone. To detect such classes, we performed a 10-fold cross-validation of a C5.0 decision tree trained only on sequence lengths. The algorithm achieved an overall average accuracy of 0.46 (±0.0004) and a Kappa statistic of 0.44 (±0.0004), while per-class performance varied strongly (average F1-measure ranging between 0.07 and 0.95). Four classes, RF00032, RF00436, RF00951, and RF01990, showed a strong sequence length dependence (average F1-measure greater than 0.80) and were removed. This reduced the number of classes to 173 and the total number of sequences by 10%, to 333280. Moreover, to make each Rfam class sufficiently representative, we excluded classes with fewer than 400 samples. This resulted in a final set of 306016 sequences distributed among 88 different Rfam classes (Fig 1). Table 1 shows how the Rfam classes are distributed among non-coding macro-classes and Fig 2 shows how sequence lengths are distributed among Rfam classes. A sketch of this filtering pipeline is shown below.
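For illustration, the filtering steps described above can be sketched in a few lines of Python. This is a minimal sketch assuming an in-memory list of (class, sequence) pairs; function and variable names are ours, and the released scripts in the GitHub repository remain the authoritative implementation.

```python
# Sketch of the Rfam dataset filtering described in the text (assumed input format).
import re
from collections import defaultdict

CANONICAL = re.compile(r"^[ACGT]+$")

def filter_rfam(sequences):
    """sequences: list of (rfam_class, sequence) pairs."""
    # 1) Keep only sequences over the canonical A, C, G, T alphabet.
    seqs = [(c, s) for c, s in sequences if CANONICAL.match(s)]

    # 2) Group by Rfam class and keep classes with average length <= 200 nt.
    by_class = defaultdict(list)
    for c, s in seqs:
        by_class[c].append(s)
    by_class = {c: ss for c, ss in by_class.items()
                if sum(len(s) for s in ss) / len(ss) <= 200}

    # 3) Drop the classes found to be predictable from sequence length alone
    #    (identified in the text via a C5.0 decision tree on lengths only).
    for c in ("RF00032", "RF00436", "RF00951", "RF01990"):
        by_class.pop(c, None)

    # 4) Keep only classes with at least 400 samples.
    return {c: ss for c, ss in by_class.items() if len(ss) >= 400}
```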
Fig 1

Distribution of sequences among the 88 Rfam classes downloaded from the Rfam database.

Table 1

Distribution of downloaded Rfam classes among non-coding macro classes.

non-coding class | Rfam classes
snRNA / snoRNA | RF00003, RF00004, RF00007, RF00012, RF00015, RF00016, RF00020, RF00026, RF00066, RF00097, RF00149, RF00156, RF00191, RF00309, RF00321, RF00409, RF00432, RF00548, RF00560, RF00561, RF00619, RF01210
Cis-regulatory | RF00037, RF00050, RF00059, RF00080, RF00162, RF00167, RF00168, RF00174, RF00234, RF00379, RF00380, RF00391, RF00442, RF00485, RF00504, RF00515, RF00521, RF00524, RF00557, RF01051, RF01055, RF01057, RF01068, RF01073, RF01497, RF01726, RF01731, RF01734, RF01750, RF02271, RF02913, RF02914
miRNA | RF00104, RF00451, RF00639, RF00641, RF00643, RF00645, RF00865, RF00875, RF00876, RF00882, RF00886, RF00906, RF01059, RF01911, RF01942, RF02000, RF02096
sRNA | RF00519, RF01687, RF01690, RF01699, RF01705, RF02924, RF03064
Intron | RF00029, RF01998, RF01999, RF02001, RF02003, RF02012
rRNA | RF00001, RF00002
tRNA | RF00005, RF01852
Fig 2

Distribution of sequence lengths among the 88 Rfam classes downloaded from the Rfam database.

Input representation of ncRNA sequences

Data representation can strongly affect the performance of classical machine learning algorithms, which require a good set of hand-designed features to work effectively. The deep learning paradigm, in contrast, allows in principle a simple representation of raw data at the lowest (input) layer, which is progressively transformed into more abstract feature representations in subsequent layers. However, as deep learning evolved historically around image analysis, the input of a neural network is typically a matrix, which intrinsically preserves pixel locality. In genomics, where the input is a sequence, the typical k-mer representation captures the proximal composition of each nucleotide position. This makes it possible to learn local patterns of small nucleotide sequence motifs, such as binding sites, but in principle it may not be suited to detecting complex spatial patterns of RNA sequences that fold into a 3-dimensional structure, where distant nucleotides can also interact. It may therefore be necessary to introduce alternative input representations that map linear sequences into two- or three-dimensional structures where such patterns can be detected effectively. Current literature methods essentially rely on secondary structure features predicted with popular RNA folding tools, such as ViennaRNA [10] and iPknot [11]. Although such features have been proven to predict RNA function effectively, they come at a high computational cost (Table 2). Here, we investigate whether less computationally expensive sequence encodings are sufficient to predict RNA function. Specifically, we consider k-mer encodings and space-filling curves, lightweight input representations that largely preserve spatial locality.
Table 2

Computational cost required to build the input representations of a sequence of length N.

Input representation | Computational cost | Adopted in
Hilbert | O(N√N) | Noviello et al., 2020
Morton | O(N√N) | Noviello et al., 2020
Snake | O(N) | Noviello et al., 2020
k-mer | O(N) | Noviello et al., 2020
iPknot | O(N^5) | nRC [8]
ViennaRNA | O(N^7) | EDeN [7] and RNAGCN [9]
K-mer encoding is the most common and basic representation of genomic sequence data adopted in deep learning architectures. It consists of associating a binary vector with every consecutive non-overlapping window of k bases. The vector is all zeros except for the i-th entry, associated with the unique word obtained by concatenating k letters of the DNA alphabet (Fig 3). For example, a 2-mer encoding of a sequence of length 100 produces a sequence of 50 binary vectors of 4^2 = 16 entries. In our experiments we consider k varying from 1 to 3, since the input dimension grows as 4^k.
Fig 3

k-mer representation: Examples of 1-mer, 2-mer, and 3-mer encodings.
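A minimal sketch of the k-mer one-hot encoding described above; the function name and return layout are ours, not taken from the released scripts.

```python
import itertools
import numpy as np

def kmer_one_hot(seq, k=2):
    """One-hot encode consecutive non-overlapping k-mers of a DNA sequence.
    Returns an array of shape (len(seq) // k, 4**k)."""
    # Enumerate all 4**k possible words, e.g. 16 words for k = 2.
    words = ["".join(w) for w in itertools.product("ACGT", repeat=k)]
    index = {w: i for i, w in enumerate(words)}
    # Split the sequence into non-overlapping windows of k bases.
    chunks = [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]
    out = np.zeros((len(chunks), 4 ** k), dtype=np.float32)
    for row, chunk in enumerate(chunks):
        out[row, index[chunk]] = 1.0
    return out

# A 100-base sequence with k = 2 yields a (50, 16) matrix, as in the text.
```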

A space-filling curve is a way to traverse a multi-dimensional space of cell elements where every cell is visited exactly once [16]. A space-filling curve thus imposes a linear order on the points of the multi-dimensional space, which can be mapped to a linear sequence of elements. Different space-filling curves have been proposed, each differing in how it traverses the multi-dimensional space. We consider three types of 2D space-filling curves: Hilbert [17], Morton [18], and Snake (Fig 4). Each cell is then encoded as a length-four binary vector that is all zeros except for the entry associated with the cell's DNA letter.
Fig 4

Examples of bi-dimensional space-filling curves.

The raw linear 47-base sequence is encoded along the bi-dimensional space-filling curve depicted in blue. The padding necessary to fill the entire space is depicted in grey.

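As an illustration, the simplest of the three curves, the Snake (boustrophedon) traversal, can be sketched as follows. This is a minimal sketch under stated assumptions: the extra symbol 'N' stands in for the "new" padding symbol described later, and the helper name is ours.

```python
import numpy as np

ALPHABET = "ACGTN"  # A, C, G, T plus an assumed extra padding symbol

def snake_encode(seq, side=15):
    """Map a sequence onto a side x side grid following a snake path,
    one-hot encoded with len(ALPHABET) channels. Vacant cells are filled
    with the extra symbol (the 'new' padding criterion)."""
    assert len(seq) <= side * side, "sequence does not fit the grid"
    padded = seq + "N" * (side * side - len(seq))
    grid = np.zeros((side, side, len(ALPHABET)), dtype=np.float32)
    for pos, base in enumerate(padded):
        row, col = divmod(pos, side)
        if row % 2 == 1:            # odd rows run right-to-left
            col = side - 1 - col
        grid[row, col, ALPHABET.index(base)] = 1.0
    return grid
```

Hilbert and Morton traversals differ only in the order in which grid cells are visited; the one-hot channel encoding is the same.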

Deep network architecture

We adopt the standard deep learning CNN architecture depicted in Fig 5. The network is composed of multiple layers of parametrized kernel convolutions, each followed by: a rectified linear unit (ReLU) activation function to reduce the effect of vanishing gradients, a max-pooling layer to reduce the output size, and a 50% dropout layer to reduce overfitting [19]. We consider an increasing number of CNN layers (from 1 to 3); the convolutions are 1D for k-mer input encodings and 2D for space-filling curve encodings. Input sequence representations are first encoded into binary vectors, where each entry corresponds to a CNN channel, and then padded to the maximum dimension allowed for that representation (Table 3). We consider three padding criteria: i) random, where vacant cells are filled with random symbols; ii) constant, where vacant cells are filled with a constant symbol drawn from the DNA alphabet; and iii) new, where vacant cells are filled with a new symbol not belonging to the DNA alphabet.
Fig 5

A graphical representation of the deep learning architecture.

The raw RNA sequence is first encoded into an input layer representation (e.g., a Hilbert space-filling curve); then up to 3 convolution layers with ReLU activation, each followed by max-pooling, learn sub-sequences with functional properties. Finally, two dense layers of rectified linear units reduce the data dimension down to a softmax multi-class classification output layer.

Table 3

Maximum dimension allowed for each input representation, for sequences of at most 200 nucleotides.

The dimensions of the Hilbert and Morton spaces correspond to the lowest power of two greater than 200 (16 × 16 = 256 cells), while the dimension of the Snake space is simply the ceiling of √200 (15 × 15 = 225 cells).

Input representation | Maximum dimension allowed
Hilbert | 16 × 16
Morton | 16 × 16
Snake | 15 × 15
k-mer | 200/k

We set the kernel size empirically to 3 and the number of filters at the i-th layer to 32 · 2^i. The architecture is completed with a flatten layer, which turns spatial features into a vector, two dense layers (of 1000 and 500 nodes, respectively), and a softmax output for multi-class classification. For the training step, we adopt Adam [20] as the optimization algorithm and the categorical cross-entropy loss function, suitable for multi-class classification problems [21].

Moreover, to obtain a more comprehensive overview of the most suitable deep learning model for short ncRNA classification, we also consider a Recurrent Neural Network (RNN) architecture in the comparison with the state of the art on the dataset (training and test set) named "test13" provided by the authors of nRC [8]. We test three bidirectional Long Short-Term Memory (LSTM) RNN architectures with an increasing number of nodes (50, 100, 150). Since RNNs can process sequential data with no predetermined size limit, we apply these architectures to sequences encoded as k-mers without padding, rather than as space-filling curves. Each RNN configuration is composed of a sequence input layer and two bidirectional LSTM layers alternating with two 20% dropout layers; the architectures are completed with a dense layer and a softmax output for classification.

Experiment setup

We considered the ncRNA functional annotation task as a multi-class problem where each class is a collection of functionally related ncRNAs. Accuracy and the Kappa statistic are adopted to estimate the overall prediction performance, while per-class prediction capability is estimated with the weighted F1-measure, which is more informative on highly unbalanced datasets. To test the generalization capacity of the algorithm, we split each Rfam class into three random subsets: train (84%), validation (8%), and test (8%). The validation set was used only to tune the hyper-parameters of the learning algorithm, while the test set was used to estimate the predictive performance. To limit the bias due to an over-representation of very similar homologous sequences in random splits, we ensured that, for each class, all sequences in the validation and test sets have a similarity, computed in terms of normalized Hamming distance, of less than 0.50 with any sequence in the training set.

Following the experimental assessment conducted in [7], we also assess the prediction performance under uncertainty about where an ncRNA sequence starts and ends, as could happen, for example, with noise coming from next-generation sequencing. We added to each sequence a varying amount of boundary noise, consisting of a random number of nucleotides at the beginning and end of the sequence that preserve the nucleotide and di-nucleotide frequencies of the original sequence [22]. We consider noise lengths of 0%, 25%, 50%, 75%, 100%, 125%, 150%, 175%, and 200% of the original sequence length.

Moreover, we test the rejection capability of the algorithm, i.e., its behaviour when presented with non-functional RNA sequences (sequences randomly generated by shuffling the initial set while preserving the di-nucleotide composition of each original sequence) or with uncertain sequences. It has recently been shown that excluding uncertain samples from the test set can drastically improve model performance [23, 24]. To this aim, we adopted Monte Carlo Dropout to estimate the classification uncertainty of a test sample and to decide whether or not to reject it. We trained the 3-layer CNN architecture on the training set and performed Monte Carlo dropout at test time. Monte Carlo dropout consists of applying N different dropout versions of the trained model to the same test sample [23]. In each version, i = 1, …, N, a random set of nodes is dropped, yielding a discrete probability distribution p_{ik} over the class values k = 1, …, C. From such a distribution, the uncertainty of the classification can be estimated in different ways [23, 24]. In our experiments we adopted N = 50 and evaluated two uncertainty estimators: Information Entropy and Top Difference. Information Entropy is defined as

H = −∑_{k=1}^{C} p̄_k log(p̄_k + ϵ),

where p̄_k = (1/N) ∑_{i=1}^{N} p_{ik} is the mean over all predicted probabilities for class k, and ϵ is a small constant added for numerical stability. The Top Difference is defined as the difference between the two classes k1 and k2 with the highest mean predicted probability, calculated as

D = (p̄_{k1} − c·σ_{k1}) − (p̄_{k2} + c·σ_{k2}),

where σ_k is the standard deviation of the predicted probabilities p_{ik} over i, and c is a constant that we set to c = 0.6. We evaluated the capability to separate functional from non-functional RNA sequences by plotting the ROC curve of each estimator on a doubled test set, obtained by adding to each sequence of the original test set a shuffled version preserving the di-nucleotide distribution.
As an example, we evaluated the gain in classification performance on the original test set when uncertain sequences are filtered out using an empirically estimated decision threshold. The best threshold was D < 0 for the Top Difference estimator, together with a corresponding empirically estimated cutoff on H for the Information Entropy estimator.
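The two uncertainty estimators can be sketched as follows. This is a minimal sketch assuming a Keras model whose dropout layers can be kept active at inference time (training=True); the confidence-band form of the Top Difference follows the definition reconstructed above and should be treated as an assumption rather than the released implementation.

```python
import numpy as np

def mc_dropout_predict(model, x, n=50):
    """Run n stochastic forward passes with dropout active.
    x: a single input sample as a numpy array; returns (n, n_classes)."""
    return np.stack([model(x[None], training=True).numpy()[0]
                     for _ in range(n)])

def information_entropy(p, eps=1e-12):
    """H = -sum_k mean_k * log(mean_k + eps), over the MC predictions p."""
    p_mean = p.mean(axis=0)
    return -np.sum(p_mean * np.log(p_mean + eps))

def top_difference(p, c=0.6):
    """D = (mean_k1 - c*std_k1) - (mean_k2 + c*std_k2) for the two classes
    with the highest mean predicted probability."""
    p_mean, p_std = p.mean(axis=0), p.std(axis=0)
    order = np.argsort(p_mean)
    k1, k2 = order[-1], order[-2]
    return (p_mean[k1] - c * p_std[k1]) - (p_mean[k2] + c * p_std[k2])

# A sample is rejected as uncertain when D < 0 (the confidence bands of the
# two top classes overlap) or when H exceeds an empirical cutoff.
```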

Results and discussion

Padding with random symbols affects space-filling curve performance

First, we evaluated the impact of different input sequence representations and padding criteria on classification performance. To this aim, we adopted a 3-CNN-layer architecture and evaluated the prediction performance on the novel Rfam dataset. Fig 6 shows the results obtained in terms of Accuracy (ACC). K-mer encodings are not sensitive to the padding criterion, while space-filling curve encodings exhibit a significant accuracy drop (∼10-15%) with random padding.
Fig 6

Classification performance in terms of Accuracy obtained on the test set with different padding schemes.

The deep learning architecture is composed of 3 CNN layers. Confidence intervals are drawn assuming a normal distribution of the classification error.

Space-filling curves place both proximal and distal elements of a sequence next to each other, which becomes a disadvantage when vacant cells are filled with random padding; constant and new-symbol padding affect overall prediction performance much less.

CNN number of layers contributes to performance improvement

Fig 7 shows the impact of neural network depth, quantified by the number of CNN layers, on classification performance. A number of CNN layers equal to zero corresponds to a dense network. In line with the above results, a new symbol was used as the padding criterion, and performance was evaluated on the novel Rfam dataset.
Fig 7

Classification performance in terms of Accuracy obtained on the test set with different numbers of CNN layers, where inputs are padded with a new symbol.

Zero indicates a dense network. Confidence intervals are drawn assuming a normal distribution of classification error.

As expected, the absence of CNN layers strongly affects the learning step, resulting in low accuracy for all the tested input representations. In a dense network, the fully connected layers see the data as 1D vectors, so high-level (spatial) relationships and local patterns are likely not captured. Conversely, increasing the architecture's depth enhances, almost linearly, the learning of high-level abstract and spatially localized features supposedly connected to RNA function. Adding just one CNN layer roughly doubles the prediction accuracy, advancing it to the range 0.80–0.90 for all input representations. A further significant increment is registered for k-mer encodings, while adding more layers to space-filling curve representations does not significantly affect the prediction performance. The use of space-filling curves as a proxy for modelling long-range interactions between nucleotides shows the worst performance. This does not outright dismiss the importance of structural effects, but it questions the necessity of going through the RNA structure to learn RNA functions.

K-mer encodings are more robust to boundary noise

Fig 8 shows the impact of boundary noise on classification performance on the novel Rfam dataset for each considered input sequence representation, together with a comparison, in terms of accuracy, with the state-of-the-art methods EDeN and nRC. In line with the above results, a new symbol was used as the padding criterion and three CNN layers as the depth of the architecture. At 0% boundary noise, i.e., original sequences without added noise, all considered input representations reach their highest accuracy. EDeN and nRC show performance similar to the k-mer representations (0.87–0.90), while space-filling curve representations exhibit an accuracy between 0.82 and 0.83. As the percentage of boundary noise increases, performance decreases for all methods. The decrease is milder for k-mer representations and more pronounced for space-filling curves and the state-of-the-art methods EDeN and nRC. At 200% boundary noise, the accuracy of k-mer representations remains in the range 0.81–0.84, while for all the others it drops to the range 0.64–0.70.
Fig 8

Classification performance in terms of Accuracy obtained on the test set at different boundary noise levels.

The deep learning architecture is composed of 3 CNN layers and inputs are padded with a new symbol.

S1 and S2 Tables break down the classification performance of the 3-CNN-layer architecture at the class level in terms of F1-measure (F1), with macro and weighted averages, at the minimum and maximum noise levels (0% and 200%), respectively. At the 0% noise level all methods perform almost identically, in terms of both weighted and macro F1 averages. At the 200% noise level, instead, k-mer representations perform best, with a drop of only about 7% in both weighted and macro F1 averages. The state-of-the-art methods, EDeN and nRC, suffer a drop between 25% and 50%, while the performance reduction of space-filling curves lies between 12% and 24%. At the 200% noise level, a high concordance of per-class performance can be observed within the k-mer group, between EDeN and nRC, and within the space-filling curve group. There are 24 classes on which all methods fail in a similar way (F1 less than 0.60) and 40 classes on which the F1-measure of k-mer representations is on average 50% higher than that of the literature methods. The state of the art outperforms k-mer representations in only 6 classes (average F1-measure 50% higher).

Monte Carlo Dropout robustly recognizes non-functional RNA sequences and improves prediction performance on non-rejected sequences

Fig 9 shows the performance, estimated in terms of the area under the ROC curve, of the two classification uncertainty estimators, Information Entropy and Top Difference, in rejecting non-functional RNA sequences. Both estimators perform similarly: 0.92 for Information Entropy and 0.90 for Top Difference.
Fig 9

Recognizing non-functional RNA with Monte Carlo Dropout.

Sequences are encoded with 1-mer and performance is estimated in terms of the area under the ROC curve (left). The panels on the right show the distributions of functional and non-functional RNA sequences over Information Entropy (H) and Top Difference (D).

Table 4 and Fig 10 show, respectively, the overall and per-class performance after Monte Carlo Dropout rejection of uncertain samples, with no boundary noise. Overall performance is reported in terms of Accuracy, Kappa statistic, and Matthews Correlation Coefficient (MCC), while per-class performance is reported in terms of F1-measure. The percentage of rejected samples per class and overall is also shown.
Table 4

Overall performance improvement, in terms of Accuracy, Kappa, and MCC, after Monte Carlo Dropout rejection of uncertain samples.

Estimator | Approach | Accuracy | Kappa | MCC | % of rejected samples
Entropy | 3-mer | 0.99 | 0.99 | 0.99 | 24.28
Entropy | 2-mer | 0.99 | 0.99 | 0.99 | 24.32
Entropy | 1-mer | 0.99 | 0.99 | 0.99 | 18.79
Entropy | Snake | 0.98 | 0.98 | 0.98 | 41.04
Entropy | Morton | 0.98 | 0.97 | 0.97 | 38.50
Entropy | Hilbert | 0.98 | 0.98 | 0.98 | 37.76
Top Difference | 3-mer | 0.98 | 0.98 | 0.98 | 20.07
Top Difference | 2-mer | 0.98 | 0.98 | 0.98 | 21.55
Top Difference | 1-mer | 0.99 | 0.99 | 0.99 | 18.61
Top Difference | Snake | 0.97 | 0.97 | 0.97 | 31.17
Top Difference | Morton | 0.96 | 0.96 | 0.96 | 31.69
Top Difference | Hilbert | 0.97 | 0.97 | 0.97 | 31.09
Fig 10

The effect on per class prediction performance of rejecting uncertain samples (1-mer encoded) with Monte Carlo Dropout.

For all input representations, an overall increase in accuracy is registered. Comparing the results in Table 4 with S1 Table, the following increments can be observed for Information Entropy: 3-mer 11.23%, 2-mer 12.50%, 1-mer 13.79%, Snake and Morton 19.51%, Hilbert 18.07%; and for Top Difference: 3-mer 10.11%, 2-mer 11.36%, 1-mer 13.79%, Snake 18.29%, Morton 17.07%, Hilbert 16.86%. The highest percentage of rejected samples is registered for Snake with Information Entropy (41.04%), while the lowest is registered for 1-mer with Top Difference (18.61%). For Information Entropy, the worst per-class performance (F1 less than 0.60) is registered for between 8 and 13 classes; for Top Difference, the number of worst-performing classes ranges between 8 and 11 for k-mer encodings and between 11 and 18 for space-filling curves. For Information Entropy, a strong improvement in per-class performance (F1 at least 50% higher) is registered for between 23 and 26 classes for k-mer encodings and between 28 and 34 for space-filling curves; for Top Difference, the corresponding numbers are 24–26 for k-mer and 26–30 for space-filling curves.

Comparison with RNAGCN

The recently proposed RNAGCN method, based on a graph convolutional network, was evaluated on a dataset where ncRNA sequences are classified into 13 functional macro-classes [9]. As the authors of this method do not provide an executable tool, we could not evaluate it on our novel Rfam dataset. We therefore evaluated our approach on the publicly available datasets originally adopted by the authors of nRC [8]. In particular, we chose the dataset called test13, on which RNAGCN obtains its best results. Table 5 reports the results obtained. Surprisingly, EDeN exhibits a performance that is significantly lower, in terms of Accuracy, than that obtained on the novel Rfam dataset. Instead, with both the standard CNN architecture (Fig 5) and different configurations of bidirectional LSTM RNNs, we obtained performance similar to RNAGCN and nRC and broadly consistent with that obtained in previous experiments. To test whether there is room for improvement, we explored alternative CNN architectures. After a thorough empirical evaluation, we obtained an improved configuration that shows an increment between 5% and 10% in Accuracy with respect to the standard architecture adopted in previous experiments (Table 5); a sketch of this configuration follows the table. The improved architecture is composed of 5 CNN layers interleaved with batch normalization, Leaky ReLU activation, and max-pooling. A GaussianNoise layer, added every 2 CNN layers, reduces overfitting, and a 20% dropout layer follows the last CNN layer. The network is completed with two dense layers, of 128 and 64 units respectively, to reduce the input dimension, and a final softmax layer for the output class. AMSGrad optimization [25], with a learning rate of 0.0005, was adopted for the learning step.
Table 5

Summary of results on the dataset called test13 containing 13 non-coding classes.

Results for nRC and RNAGCN are taken from [9].

Architecture | Approach | Accuracy | Recall | Precision | F1-score | MCC
EDeN | - | 0.67 | 0.60 | 0.75 | 0.65 | 0.61
nRC | - | 0.82 | 0.82 | 0.81 | 0.82 | 0.80
RNAGCN | - | 0.86 | 0.86 | 0.86 | 0.86 | 0.85
RNN (50 nodes) | 1-mer | 0.86 | 0.86 | 0.86 | 0.86 | 0.85
RNN (50 nodes) | 2-mer | 0.77 | 0.77 | 0.77 | 0.77 | 0.75
RNN (50 nodes) | 3-mer | 0.77 | 0.77 | 0.77 | 0.77 | 0.75
RNN (100 nodes) | 1-mer | 0.88 | 0.88 | 0.88 | 0.87 | 0.87
RNN (100 nodes) | 2-mer | 0.79 | 0.79 | 0.79 | 0.79 | 0.77
RNN (100 nodes) | 3-mer | 0.78 | 0.78 | 0.79 | 0.78 | 0.75
RNN (150 nodes) | 1-mer | 0.89 | 0.90 | 0.90 | 0.90 | 0.89
RNN (150 nodes) | 2-mer | 0.80 | 0.80 | 0.80 | 0.79 | 0.78
RNN (150 nodes) | 3-mer | 0.79 | 0.79 | 0.79 | 0.79 | 0.77
CNN standard | 1-mer | 0.88 | 0.88 | 0.89 | 0.88 | 0.87
CNN standard | 2-mer | 0.83 | 0.83 | 0.84 | 0.83 | 0.82
CNN standard | 3-mer | 0.81 | 0.81 | 0.82 | 0.81 | 0.79
CNN standard | Morton | 0.78 | 0.78 | 0.79 | 0.78 | 0.77
CNN standard | Snake | 0.82 | 0.82 | 0.83 | 0.81 | 0.80
CNN standard | Hilbert | 0.81 | 0.81 | 0.84 | 0.82 | 0.80
CNN improved | 1-mer | 0.96 | 0.96 | 0.96 | 0.96 | 0.96
CNN improved | 2-mer | 0.92 | 0.92 | 0.92 | 0.92 | 0.91
CNN improved | 3-mer | 0.88 | 0.88 | 0.88 | 0.88 | 0.86
CNN improved | Morton | 0.86 | 0.86 | 0.88 | 0.86 | 0.85
CNN improved | Snake | 0.86 | 0.86 | 0.88 | 0.86 | 0.85
CNN improved | Hilbert | 0.86 | 0.87 | 0.89 | 0.87 | 0.86


Conclusion

Recent advances in high-throughput technologies have allowed the discovery of a large number of novel transcribed elements, called ncRNAs, previously considered to lack functional potential. ncRNAs are a very heterogeneous group of RNAs in terms of length, biogenesis, and function, conventionally divided into long and short non-coding RNAs. Due to their complex nature, a full comprehension of ncRNAs remains a great challenge, demanding computational approaches able to detect and annotate their biological functions according to family identity.

In this work, we proposed a deep learning approach to classify short ncRNA sequences into Rfam classes. A comparative assessment with state-of-the-art graph kernel methods shows that the deep learning approach is more robust to boundary noise when k-mer input representations are adopted. The number of CNN layers contributes to performance improvement, while a random padding scheme negatively affects space-filling curve performance. The deep learning architecture relies on input representations that are far cheaper to compute than the sequence-structure representations used by graph kernel methods. This allows the classification of large-scale genomic data and raises an interesting question about the dogma of secondary structure being a key determinant of function in RNA.

Moreover, since both standard CNN and RNN configurations can infer the functionality of RNAs from their sequences without structural information, a hybrid approach combining CNN and RNN layer blocks could be the best choice and further improve classification performance. Ideally, a first convolutional block would identify short sequence motifs correlated with the biological role of a short ncRNA family, and a subsequent recurrent block would learn long-range relationships between the inferred functional motifs; a sketch of this idea follows below. The empirical evaluation of deep learning models in this study, especially CNNs, suggests that abstract features associated with RNA function are effectively learned from simple input representations (i.e., k-mers) and that further structural encoding in the input representation, such as that carried by space-filling curves, does not contribute to performance improvement. To what extent such features are related to secondary structure features remains an open question.
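Purely for illustration, the hybrid CNN + RNN idea proposed above could look as follows. This is not a tested model from the study; all sizes are assumptions.

```python
# Illustrative hybrid: a convolutional block for short motifs feeding a
# bidirectional LSTM for long-range dependencies between motifs.
from tensorflow.keras import layers, models

def build_hybrid(input_shape=(200, 4), n_classes=88):
    net = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Bidirectional(layers.LSTM(100)),
        layers.Dense(n_classes, activation="softmax"),
    ])
    net.compile(optimizer="adam", loss="categorical_crossentropy",
                metrics=["accuracy"])
    return net
```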

Supporting information

S1 Table. Per class performance evaluated with a 3-layer CNN network, new padding symbol, and 0% of boundary noise. (XLSX)

S2 Table. Per class performance evaluated with a 3-layer CNN network, new padding symbol, and 200% of boundary noise. (XLSX)

References (13 in total)

1.  Non-coding RNAs in human disease.

Authors:  Manel Esteller
Journal:  Nat Rev Genet       Date:  2011-11-18       Impact factor: 53.242

2.  Know When You Don't Know: A Robust Deep Learning Approach in the Presence of Unknown Phenotypes.

Authors:  Oliver Dürr; Elvis Murina; Daniel Siegismund; Vasily Tolkachev; Stephan Steigele; Beate Sick
Journal:  Assay Drug Dev Technol       Date:  2018 Aug/Sep       Impact factor: 1.738

3.  An efficient graph kernel method for non-coding RNA functional prediction.

Authors:  Nicolò Navarin; Fabrizio Costa
Journal:  Bioinformatics       Date:  2017-09-01       Impact factor: 6.937

4.  IPknot: fast and accurate prediction of RNA secondary structures with pseudoknots using integer programming.

Authors:  Kengo Sato; Yuki Kato; Michiaki Hamada; Tatsuya Akutsu; Kiyoshi Asai
Journal:  Bioinformatics       Date:  2011-07-01       Impact factor: 6.937

5.  ViennaRNA Package 2.0.

Authors:  Ronny Lorenz; Stephan H Bernhart; Christian Höner Zu Siederdissen; Hakim Tafer; Christoph Flamm; Peter F Stadler; Ivo L Hofacker
Journal:  Algorithms Mol Biol       Date:  2011-11-24       Impact factor: 1.405

6.  Infernal 1.1: 100-fold faster RNA homology searches.

Authors:  Eric P Nawrocki; Sean R Eddy
Journal:  Bioinformatics       Date:  2013-09-04       Impact factor: 6.937

7.  nRC: non-coding RNA Classifier based on structural features.

Authors:  Antonino Fiannaca; Massimo La Rosa; Laura La Paglia; Riccardo Rizzo; Alfonso Urso
Journal:  BioData Min       Date:  2017-08-01       Impact factor: 2.522

8.  Opportunities and obstacles for deep learning in biology and medicine.

Authors:  Travers Ching; Daniel S Himmelstein; Brett K Beaulieu-Jones; Alexandr A Kalinin; Brian T Do; Gregory P Way; Enrico Ferrero; Paul-Michael Agapow; Michael Zietz; Michael M Hoffman; Wei Xie; Gail L Rosen; Benjamin J Lengerich; Johnny Israeli; Jack Lanchantin; Stephen Woloszynek; Anne E Carpenter; Avanti Shrikumar; Jinbo Xu; Evan M Cofer; Christopher A Lavender; Srinivas C Turaga; Amr M Alexandari; Zhiyong Lu; David J Harris; Dave DeCaprio; Yanjun Qi; Anshul Kundaje; Yifan Peng; Laura K Wiley; Marwin H S Segler; Simina M Boca; S Joshua Swamidass; Austin Huang; Anthony Gitter; Casey S Greene
Journal:  J R Soc Interface       Date:  2018-04       Impact factor: 4.293

9.  Identification and classification of ncRNA molecules using graph properties.

Authors:  Liam Childs; Zoran Nikoloski; Patrick May; Dirk Walther
Journal:  Nucleic Acids Res       Date:  2009-04-01       Impact factor: 16.971

10.  Rfam 13.0: shifting to a genome-centric resource for non-coding RNA families.

Authors:  Ioanna Kalvari; Joanna Argasinska; Natalia Quinones-Olvera; Eric P Nawrocki; Elena Rivas; Sean R Eddy; Alex Bateman; Robert D Finn; Anton I Petrov
Journal:  Nucleic Acids Res       Date:  2018-01-04       Impact factor: 16.971

Cited by (2 in total)

1.  Non-coding RNAs in malaria infection.

Authors:  Valeria Lodde; Matteo Floris; Maria Rosaria Muroni; Francesco Cucca; Maria Laura Idda
Journal:  Wiley Interdiscip Rev RNA       Date:  2021-10-14       Impact factor: 9.349

2.  PINC: A Tool for Non-Coding RNA Identification in Plants Based on an Automated Machine Learning Framework.

Authors:  Xiaodan Zhang; Xiaohu Zhou; Midi Wan; Jinxiang Xuan; Xiu Jin; Shaowen Li
Journal:  Int J Mol Sci       Date:  2022-10-05       Impact factor: 6.208

