
Comparative analysis and prediction of nucleosome positioning using integrative feature representation and machine learning algorithms.

Guo-Sheng Han1,2, Qi Li3,4, Ying Li3,4.   

Abstract

BACKGROUND: The nucleosome plays an important role in genome expression, DNA replication, DNA repair and transcription, so research on nucleosome positioning has received extensive attention. Considering the diversity of DNA sequence representation methods, we integrated multiple features and analyzed their effect on nucleosome positioning analysis. This also deepens our understanding of the theoretical analysis of nucleosome positioning.
RESULTS: We not only used frequency chaos game representation (FCGR) to construct DNA sequence features, but also integrated it with other features and adopted the principal component analysis (PCA) algorithm. Support vector machine (SVM), extreme learning machine (ELM), extreme gradient boosting (XGBoost), multilayer perceptron (MLP) and convolutional neural networks (CNN) were used as predictors for nucleosome positioning prediction. The prediction quality of the integrated feature vector is significantly superior to that of a single feature. After using PCA to reduce the feature dimension, the prediction quality on the H. sapiens dataset improved significantly.
CONCLUSIONS: Comparative analysis and prediction on the H. sapiens, C. elegans, D. melanogaster and S. cerevisiae datasets demonstrate that applying FCGR to nucleosome positioning is feasible, and that integrative feature representation performs better.

Keywords:  Convolutional neural networks; Extreme gradient boosting; Extreme learning machine; Frequency chaos game representation; Nucleosome classification; Support vector machine

Year:  2021        PMID: 34078256      PMCID: PMC8170966          DOI: 10.1186/s12859-021-04006-w

Source DB:  PubMed          Journal:  BMC Bioinformatics        ISSN: 1471-2105            Impact factor:   3.307


Background

The nucleosome is the basic structural unit of eukaryotic chromatin, formed by the combination of histones and DNA. Its core is an octamer of two copies each of histones H2A, H2B, H3 and H4, around which DNA is wound about 1.65 turns. The DNA wrapped around the octamer is called core DNA and is 147 base pairs long; the DNA connecting two adjacent nucleosomes is called linker DNA and ranges from 20 to 60 base pairs [1]. In eukaryotic cells, nucleosomes play a crucial role in genome expression, DNA replication, DNA repair and transcription [2-6]. In addition, studies have demonstrated that abnormal histone modifications in the nucleosome structure are directly related to diseases such as tumors [7] and lupus erythematosus [8]. Therefore, the mechanism of nucleosome positioning in DNA sequences has extremely important research value and is one of the hot spots in current epigenetics research.

The precise position of nucleosomes on the DNA sequence across the whole genome is called nucleosome positioning. Early experiments mainly used micrococcal nuclease digestion of chromatin to determine nucleosome positions [9]. In recent years, benefiting from the development and application of high-throughput experimental techniques such as chromatin immunoprecipitation-chip (ChIP-chip) and chromatin immunoprecipitation sequencing (ChIP-Seq), many breakthroughs have been made in nucleosome positioning experiments. Nucleosome positioning maps have been obtained for species such as Saccharomyces cerevisiae [10, 11], Homo sapiens [12], Caenorhabditis elegans [13] and Drosophila melanogaster [14], providing a large data basis for theoretical research and prediction. Much of the research in nucleosome positioning is based on DNA sequence analysis [15, 16]. The DNA sequence consists of four nucleotides: A, T, C and G.
Studies have shown that the affinity between genomic DNA sequences and histones clearly depends on sequence order, indicating that DNA sequence order does affect where nucleosomes form, although there is also support for nucleosome positioning being affected by multiple factors such as DNA sequence, ATP-dependent nucleosome remodeling enzymes and transcription factors [17, 18]. Many researchers have used sequence analysis methods to express nucleosome DNA sequence characteristics and then performed nucleosome positioning and recognition. In the past decade, with the popularity of machine learning algorithms, a multitude of computational models based on DNA sequence information have been proposed. Chen et al. proposed the "iNuc-Physchem" nucleosome prediction model using 12 physicochemical features of DNA, which identified the core DNA and linker DNA of yeast genome nucleosomes [19]. Later, the same group established a biophysical model based on the deformation energy of DNA sequences to predict nucleosome sequences [20]. Guo et al. used pseudo k-tuple nucleotide composition to express the feature vector of the DNA sequence and trained a support vector machine (SVM) classifier on H. sapiens, C. elegans and D. melanogaster [21]. The 3LS model used similar methods and combined the distributions of different numbers of nucleotide combinations in the sequence to further improve prediction accuracy [22]. The ZCMM model, based on Z-curve theory and the position weight matrix (PWM), achieves excellent prediction performance on D. melanogaster [23]. Deep learning has also been applied to nucleosome positioning and achieved good prediction quality; these deep learning models all used one-hot encoding. Gangi et al. [24] constructed a deep learning model that integrates convolutional layers and long short-term memory networks.
The LeNup model added the Inception module and gated convolutional networks to a convolutional neural network to improve nucleosome positioning [25]. In this work, we first use frequency chaos game representation (FCGR) to construct DNA sequence features; this feature representation method has not been used for nucleosome positioning before. Second, we integrate FCGR with other feature vectors and adopt the principal component analysis (PCA) algorithm for feature dimensionality reduction. Finally, various machine learning algorithms, including support vector machine (SVM), extreme learning machine (ELM), extreme gradient boosting (XGBoost), multilayer perceptron (MLP) and convolutional neural networks (CNN), are used to perform comparative analysis and prediction of nucleosome positioning.

Results

Rule of performance evaluation

Cross-validation is a statistical method for model validation. The basic idea is to divide the original data into a training set and a test set: the model is trained on the training set, and its classification or prediction performance is then measured on the test set. In this work, we used K-fold cross-validation and evaluated predictor performance through four metrics: sensitivity (Sn), specificity (Sp), accuracy (ACC) and the Matthews correlation coefficient (MCC), defined as

Sn = TP / (TP + FN)
Sp = TN / (TN + FP)
ACC = (TP + TN) / (TP + TN + FP + FN)
MCC = (TP × TN − FP × FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))

where TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives, respectively [25]. Sn is the true positive rate; Sn = 1 means that all nucleosome core DNA has been correctly predicted. Sp is the true negative rate; Sp = 1 means that all linker DNA has been correctly predicted. ACC reflects the ratio of correctly predicted samples to the total number of samples. MCC gives a comprehensive evaluation of the prediction: MCC ∈ [−1, 1], where MCC = 1 means the predictions correlate perfectly with the true categories, MCC = 0 means the prediction is completely random, and MCC = −1 means the correlation is completely opposite. The receiver operating characteristic (ROC) curve and the area under the curve (AUC) are often used to evaluate a binary classifier. The AUC is the area under the ROC curve, usually between 0.5 and 1; the larger the AUC, the better the classifier. For reasons of space, this paper only reports AUC values and does not plot every ROC curve.
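The four metrics follow directly from the confusion-matrix counts. A minimal sketch of their computation (standard definitions, not code from the paper):

```python
import math

def metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, accuracy and MCC from confusion-matrix counts."""
    sn = tp / (tp + fn)                      # true positive rate
    sp = tn / (tn + fp)                      # true negative rate
    acc = (tp + tn) / (tp + tn + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sn, sp, acc, mcc

# Example counts (illustrative, not from the paper's experiments)
sn, sp, acc, mcc = metrics(tp=90, tn=80, fp=20, fn=10)
```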

Performance of predictors

According to the characteristics of FCGR described above, the value of K (the K-mer length) affects the feature expression of the DNA sequence [26]. A large K means a high feature dimension (4^K), and high-dimensional features tend to be sparse, so the fitting quality may not be outstanding. Choosing an appropriate K therefore has a considerable impact on the classification performance of each classifier. Some studies have combined DNA sequence features [22, 23, 27, 28]; similarly, FCGR can use combinations of different K values as feature vectors.

Feasibility of FCGR

In this work, we flatten the FCGR matrix into a normalized 1-D vector of K-mer frequencies as the input to SVM and ELM [27]. The inputs to the MLP and CNN models are not only single-channel FCGR images (2-D) [26, 27] but also multi-K-value images, with an image size of 64 × 64. For multi-K-value input, we fed the combination of K values through multiple channels when training the model and used simple averaging to compute the final prediction. To find an appropriate K value or combination, we used 10-fold cross-validation. Figure 1 shows the classification accuracy of each classifier with different K values and combinations.
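The FCGR construction above can be sketched as follows. This is a minimal implementation assuming the common chaos-game corner convention A = (0,0), C = (0,1), G = (1,1), T = (1,0); the paper's exact orientation and normalization may differ:

```python
import numpy as np

# Corner coordinates of the chaos game square (one common convention)
CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def fcgr(seq, k):
    """Frequency chaos game representation: a 2^k x 2^k K-mer frequency matrix."""
    n = 2 ** k
    mat = np.zeros((n, n))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if any(c not in CORNERS for c in kmer):
            continue  # skip ambiguous bases such as N
        x = y = 0
        for c in kmer:                # each nucleotide halves the cell, as in the CGR map
            cx, cy = CORNERS[c]
            x = (x << 1) | cx
            y = (y << 1) | cy
        mat[y, x] += 1
    total = mat.sum()
    return mat / total if total else mat   # normalized K-mer frequencies

vec = fcgr("ACGTACGTAC", 2).ravel()        # flattened 1-D feature vector, length 4^k
```

The flattened vector has length 4^K, and the 2^K × 2^K matrix can be resized to a fixed image size (the paper uses 64 × 64) for the image-based models; combinations of K values stack such matrices as channels.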
Fig. 1

The histogram (a–d) shows the accuracy of using SVM, ELM, MLP and CNN with K = 1, 2, 3, 4, 5 or combinations

For SVM, the accuracy on H. sapiens and C. elegans reaches its peak with K = 1, 2 and 4; the accuracy on D. melanogaster is highest with K = 2 and 4. For ELM, the accuracy on D. melanogaster peaks at K = 2; the accuracy on H. sapiens peaks with K = 2 and 4; the classification accuracy on C. elegans is best with K = 1, 2 and 4, as with SVM. For MLP, the accuracy on H. sapiens and D. melanogaster peaks with K = 3, 4 and 5; the classification accuracy on C. elegans is best with K = 3 and 4. For CNN, H. sapiens has the best classification quality with the K = 4 FCGR image; the accuracy on C. elegans peaks with K = 4 and 5; the accuracy on D. melanogaster peaks with K = 3, 4 and 5. Table 1 shows the best prediction results for the four species via 10-fold cross-validation.
Table 1

The prediction results for four species via 10-fold cross-validation by SVM, ELM, MLP, CNN

Species | Method | K | ACC | Sn | Sp | MCC | AUC
H. sapiens | FCGR-SVM | 1 + 2 + 4 | 0.8708 | 0.8980 | 0.8439 | 0.7432 | 0.9300
 | FCGR-ELM | 2 + 4 | 0.8332 | 0.8773 | 0.7896 | 0.6695 | 0.8969
 | FCGR-MLP | 3 + 4 + 5 | 0.8565 | 0.8768 | 0.8365 | 0.7144 | 0.9186
 | FCGR-CNN | 4 | 0.8585 | 0.8746 | 0.8426 | 0.7185 | 0.9214
C. elegans | FCGR-SVM | 1 + 2 + 4 | 0.8603 | 0.8948 | 0.8263 | 0.7229 | 0.9295
 | FCGR-ELM | 1 + 2 + 4 | 0.8754 | 0.8944 | 0.8566 | 0.7515 | 0.9421
 | FCGR-MLP | 3 + 4 | 0.8537 | 0.8613 | 0.8462 | 0.7092 | 0.9225
 | FCGR-CNN | 4 + 5 | 0.8495 | 0.8839 | 0.8156 | 0.702 | 0.9181
D. melanogaster | FCGR-SVM | 2 + 4 | 0.8113 | 0.7831 | 0.8400 | 0.6241 | 0.8791
 | FCGR-ELM | 2 | 0.7910 | 0.7648 | 0.8175 | 0.5833 | 0.8595
 | FCGR-MLP | 3 + 4 + 5 | 0.8117 | 0.8000 | 0.8235 | 0.6238 | 0.8848
 | FCGR-CNN | 3 + 4 + 5 | 0.8108 | 0.8014 | 0.8204 | 0.6228 | 0.8854
S. cerevisiae | FCGR-SVM | 4 | 1 | 1 | 1 | 1 | 1
 | FCGR-ELM | 3 or 4 | 1 | 1 | 1 | 1 | 1
 | FCGR-MLP | 4 | 1 | 1 | 1 | 1 | 1
 | FCGR-CNN | 4 | 0.9997 | 1 | 0.9994 | 0.9995 | 1

Best values are in bold

For the S. cerevisiae dataset, SVM, ELM and MLP all achieve Sn = Sp = ACC = MCC = AUC = 1 via 10-fold cross-validation when K = 3 or 4. There may still be room for improvement in the prediction quality on the other three datasets.

Comparison of the results with integrative features

In addition, we integrated FCGR with other feature representations [29-32], such as DAC, TAC, DACC, TACC, PC-PseDNC and PC-PseTNC, and input them into SVM and ELM. We also added the extreme gradient boosting (XGBoost) algorithm. The comparative analysis results are shown in Tables 2, 3 and 4, respectively.
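Integrating FCGR with a correlation-based descriptor amounts to concatenating the feature vectors. A hedged sketch using an illustrative dinucleotide autocovariance (DAC) at lag 2, with a made-up property table (real DAC features use curated physicochemical dinucleotide scales and several properties):

```python
import numpy as np

def dac(seq, prop, lag=2):
    """Dinucleotide autocovariance at a given lag for one property profile.
    `prop` maps each dinucleotide to a (normalized) physicochemical value."""
    vals = np.array([prop[seq[i:i + 2]] for i in range(len(seq) - 1)])
    mean = vals.mean()
    n = len(vals) - lag
    return float(np.sum((vals[:n] - mean) * (vals[lag:lag + n] - mean)) / n)

# Illustrative property table; real descriptors use published dinucleotide scales
rng = np.random.default_rng(0)
prop = {a + b: float(rng.standard_normal()) for a in "ACGT" for b in "ACGT"}

fcgr_vec = np.ones(16) / 16                # stand-in for a flattened FCGR matrix
feature = np.concatenate([fcgr_vec, [dac("ACGTACGTACGT", prop, lag=2)]])
```

The integrated vector is simply the concatenation; with several properties and lags the DAC part grows accordingly, and the same pattern applies to TAC, DACC, TACC and the PseDNC/PseTNC descriptors.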
Table 2

The prediction results of integrative feature representation for H. sapiens via 10-fold cross-validation by SVM, ELM and XGBoost

Method | Feature | Parameter | ACC | Sn | Sp | MCC | AUC
SVM | FCGR + DAC | K = 4, lag = 2 | 0.8708 | 0.8896 | 0.8522 | 0.7425 | 0.9315
 | FCGR + TAC | K = 4, lag = 2 | 0.8679 | 0.8878 | 0.8483 | 0.7369 | 0.9288
 | FCGR + DACC | K = 4, lag = 2 | 0.8537 | 0.8531 | 0.8544 | 0.7079 | 0.9208
 | FCGR + TACC | K = 4, lag = 2 | 0.8415 | 0.8319 | 0.8509 | 0.6837 | 0.9113
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.8673 | 0.8936 | 0.8413 | 0.7359 | 0.9286
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.8708 | 0.8966 | 0.8452 | 0.7429 | 0.9273
 | All features | | 0.8137 | 0.7518 | 0.8748 | 0.6322 | 0.8996
ELM | FCGR + DAC | K = 4, lag = 2 | 0.8292 | 0.8539 | 0.8048 | 0.6598 | 0.9007
 | FCGR + TAC | K = 4, lag = 2 | 0.8297 | 0.8531 | 0.8065 | 0.6604 | 0.8977
 | FCGR + DACC | K = 4, lag = 2 | 0.8336 | 0.8627 | 0.8048 | 0.6689 | 0.9009
 | FCGR + TACC | K = 4, lag = 2 | 0.8325 | 0.8632 | 0.8022 | 0.6668 | 0.8983
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.8314 | 0.8658 | 0.7974 | 0.6648 | 0.8985
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.8248 | 0.8544 | 0.7957 | 0.6516 | 0.8947
 | All features | | 0.8356 | 0.8632 | 0.8083 | 0.6735 | 0.9013
XGBoost | FCGR | K = 1 + 2 + 4 | 0.8585 | 0.89309 | 0.8244 | 0.71934 | 0.9197
 | FCGR + DAC | K = 4, lag = 2 | 0.8450 | 0.87503 | 0.8152 | 0.69182 | 0.9160
 | FCGR + TAC | K = 4, lag = 2 | 0.8402 | 0.8733 | 0.8074 | 0.68221 | 0.9136
 | FCGR + DACC | K = 4, lag = 2 | 0.8423 | 0.86583 | 0.8191 | 0.68588 | 0.9127
 | FCGR + TACC | K = 4, lag = 2 | 0.8391 | 0.87287 | 0.8057 | 0.68059 | 0.9115
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.8559 | 0.88913 | 0.8230 | 0.71396 | 0.9207
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.8498 | 0.88254 | 0.8174 | 0.70168 | 0.9183
 | All features | | 0.8472 | 0.87374 | 0.8209 | 0.69581 | 0.9170

All features means the feature vector = FCGR + DACC + TACC + PC-PseDNC + PC-PseTNC, and the parameters are consistent with those of the corresponding features. Parameter K indicates the K value of the K-mers in FCGR; lag indicates the distance of the lag along the sequence; λ represents the highest counted rank (or tier) of the correlation along a DNA sequence; w is a weight factor ranging from 0 to 1

Best values are in bold

Table 3

The prediction results of integrative feature representation for C. elegans via 10-fold cross-validation by SVM, ELM and XGBoost

Method | Feature | Parameter | ACC | Sn | Sp | MCC | AUC
SVM | FCGR + DAC | K = 4, lag = 2 | 0.8574 | 0.8863 | 0.8290 | 0.7164 | 0.9283
 | FCGR + TAC | K = 4, lag = 2 | 0.8561 | 0.8824 | 0.8302 | 0.7137 | 0.9272
 | FCGR + DACC | K = 4, lag = 2 | 0.8471 | 0.8777 | 0.8171 | 0.6961 | 0.9122
 | FCGR + TACC | K = 4, lag = 2 | 0.8470 | 0.8641 | 0.8301 | 0.6949 | 0.9179
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.8576 | 0.8921 | 0.8236 | 0.7176 | 0.9275
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.8539 | 0.8839 | 0.8244 | 0.7096 | 0.9275
 | All features | | 0.8431 | 0.8461 | 0.8401 | 0.6867 | 0.9139
ELM | FCGR + DAC | K = 4, lag = 2 | 0.8707 | 0.8863 | 0.8555 | 0.7421 | 0.9355
 | FCGR + TAC | K = 4, lag = 2 | 0.8696 | 0.8890 | 0.8505 | 0.7400 | 0.9359
 | FCGR + DACC | K = 4, lag = 2 | 0.8684 | 0.8831 | 0.8539 | 0.7376 | 0.9358
 | FCGR + TACC | K = 4, lag = 2 | 0.8680 | 0.8917 | 0.8447 | 0.7371 | 0.9329
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.8624 | 0.8847 | 0.8405 | 0.7258 | 0.9318
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.8557 | 0.8847 | 0.8271 | 0.7132 | 0.9262
 | All features | | 0.8597 | 0.8863 | 0.8336 | 0.7210 | 0.9271
XGBoost | FCGR | K = 1 + 2 + 4 | 0.8487 | 0.8797 | 0.8182 | 0.6995 | 0.9202
 | FCGR + DAC | K = 4, lag = 2 | 0.8416 | 0.8652 | 0.8182 | 0.6842 | 0.9165
 | FCGR + TAC | K = 4, lag = 2 | 0.8433 | 0.8707 | 0.8163 | 0.6882 | 0.9169
 | FCGR + DACC | K = 4, lag = 2 | 0.8462 | 0.8703 | 0.8225 | 0.6938 | 0.9170
 | FCGR + TACC | K = 4, lag = 2 | 0.8417 | 0.8676 | 0.8163 | 0.6848 | 0.9162
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.8450 | 0.8749 | 0.8156 | 0.6917 | 0.9199
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.8493 | 0.8789 | 0.8202 | 0.7004 | 0.9178
 | All features | | 0.8481 | 0.8695 | 0.8271 | 0.6973 | 0.9195

All features means the feature vector = FCGR + DACC + TACC + PC-PseDNC + PC-PseTNC, and the parameters are consistent with those of the corresponding features. Parameter K indicates the K value of the K-mers in FCGR; lag indicates the distance of the lag along the sequence; λ represents the highest counted rank (or tier) of the correlation along a DNA sequence; w is a weight factor ranging from 0 to 1

Best values are in bold

Table 4

The prediction results of integrative feature representation for D. melanogaster via 10-fold cross-validation by SVM, ELM and XGBoost

Method | Feature | Parameter | ACC | Sn | Sp | MCC | AUC
SVM | FCGR + DAC | K = 4, lag = 2 | 0.8047 | 0.7862 | 0.8235 | 0.6103 | 0.8762
 | FCGR + TAC | K = 4, lag = 2 | 0.8089 | 0.7835 | 0.8347 | 0.6190 | 0.8747
 | FCGR + DACC | K = 4, lag = 2 | 0.7753 | 0.7772 | 0.7733 | 0.5509 | 0.8295
 | FCGR + TACC | K = 4, lag = 2 | 0.7560 | 0.6772 | 0.8361 | 0.5199 | 0.8247
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.8073 | 0.7797 | 0.8354 | 0.6162 | 0.8803
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.8057 | 0.7835 | 0.8284 | 0.6129 | 0.8769
 | All features | | 0.7510 | 0.6828 | 0.8204 | 0.5078 | 0.7987
ELM | FCGR + DAC | K = 2, lag = 2 | 0.7920 | 0.7779 | 0.8063 | 0.5847 | 0.8644
 | FCGR + TAC | K = 2, lag = 2 | 0.7917 | 0.7807 | 0.8028 | 0.5839 | 0.8651
 | FCGR + DACC | K = 2, lag = 2 | 0.7769 | 0.7617 | 0.7923 | 0.5544 | 0.8503
 | FCGR + TACC | K = 2, lag = 2 | 0.7694 | 0.7735 | 0.7653 | 0.5391 | 0.8460
 | FCGR + PCPseDNC | K = 2, λ = 8, w = 0.5 | 0.7896 | 0.7631 | 0.8165 | 0.5806 | 0.8651
 | FCGR + PCPseTNC | K = 2, λ = 8, w = 0.5 | 0.7595 | 0.7341 | 0.7853 | 0.5206 | 0.8400
 | All features | | 0.7847 | 0.7810 | 0.7884 | 0.5700 | 0.8576
XGBoost | FCGR | K = 1 + 2 + 4 | 0.7976 | 0.7797 | 0.8158 | 0.5959 | 0.8725
 | FCGR + DAC | K = 4, lag = 2 | 0.7873 | 0.7717 | 0.8032 | 0.5751 | 0.8613
 | FCGR + TAC | K = 4, lag = 2 | 0.7877 | 0.7624 | 0.8133 | 0.5768 | 0.8647
 | FCGR + DACC | K = 4, lag = 2 | 0.7724 | 0.7814 | 0.7632 | 0.5450 | 0.8532
 | FCGR + TACC | K = 4, lag = 2 | 0.7824 | 0.7693 | 0.7958 | 0.5658 | 0.8542
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.7997 | 0.7824 | 0.8172 | 0.6001 | 0.8725
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.7988 | 0.7790 | 0.8190 | 0.5989 | 0.8775
 | All features | | 0.7951 | 0.7793 | 0.8112 | 0.5909 | 0.8718

All features means the feature vector = FCGR + DACC + TACC + PC-PseDNC + PC-PseTNC, and the parameters are consistent with those of the corresponding features. Parameter K indicates the K value of the K-mers in FCGR; lag indicates the distance of the lag along the sequence; λ represents the highest counted rank (or tier) of the correlation along a DNA sequence; w is a weight factor ranging from 0 to 1

Best values are in bold

From the results in Tables 2, 3 and 4, the combination of FCGR and DAC as the feature vector gives better prediction quality. XGBoost performance is relatively stable, with little difference among its prediction results, especially when high-dimensional features are input. In contrast, when some high-dimensional feature vectors are input into SVM and ELM, the prediction results are relatively poor. This suggests that XGBoost is better suited to handling high-dimensional features.

Comparison of the results with dimensionality reduction

Considering the high dimensionality of the integrative feature vector, high-dimensional features may bring the curse of dimensionality and lead to overfitting. Therefore, we also adopted the principal component analysis (PCA) algorithm [33] for feature dimensionality reduction, and the reduced feature vectors were input into SVM, ELM and XGBoost, respectively. When using PCA, the cumulative contribution rate of the retained principal components directly affects the dimensionality reduction result. We therefore calculated the accuracy with retained principal components accounting for 95%, 93%, 90%, 88% and 85% of the cumulative contribution rate, respectively. Figures 2, 3 and 4 show the classification accuracy of each classifier at the different contribution rates, and the results at the optimal contribution rate for each predictor are shown in Tables 5, 6 and 7, respectively.
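Retaining principal components up to a target cumulative contribution rate can be sketched with a plain SVD (scikit-learn's PCA accepts a fractional n_components for the same selection). The data below is random and only illustrates the mechanics:

```python
import numpy as np

def pca_reduce(X, rate=0.95):
    """Project X onto the fewest leading principal components whose
    cumulative explained-variance ratio reaches `rate`."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; squared singular values give component variances
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S ** 2
    ratio = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(ratio, rate) + 1)  # smallest k reaching the target rate
    return Xc @ Vt[:k].T                       # reduced feature matrix

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))             # stand-in for an integrated feature matrix
X_red = pca_reduce(X, rate=0.95)
```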
Fig. 2

The histogram (a–c) shows the accuracy of using SVM, ELM and XGBoost with contributing rate of principal component = 0.95, 0.93, 0.9, 0.88, 0.85 for H. sapiens

Fig. 3

The histogram (a–c) shows the accuracy of using SVM, ELM and XGBoost with contributing rate of principal component = 0.95, 0.93, 0.9, 0.88, 0.85 for C. elegans

Fig. 4

The histogram (a–c) shows the accuracy of using SVM, ELM and XGBoost with contributing rate of principal component = 0.95, 0.93, 0.9, 0.88, 0.85 for D. melanogaster

Table 5

PCA dimensionality reduction results via 10-fold cross-validation for H. sapiens

Method | Feature | Parameters | PCA% | ACC | Sn | Sp | MCC | AUC
SVM | FCGR | K = 1 + 2 + 4 | 0.85 | 0.8758 | 0.8966 | 0.8552 | 0.7528 | 0.9288
 | FCGR + DAC | K = 4, lag = 2 | 0.9 | 0.8749 | 0.8856 | 0.8644 | 0.7507 | 0.9314
 | FCGR + TAC | K = 4, lag = 2 | 0.85 | 0.8752 | 0.8878 | 0.8626 | 0.7513 | 0.9306
 | FCGR + DACC | K = 4, lag = 2 | 0.95 | 0.8410 | 0.8236 | 0.8583 | 0.6825 | 0.9138
 | FCGR + TACC | K = 4, lag = 2 | 0.95 | 0.8369 | 0.8170 | 0.8565 | 0.6749 | 0.9099
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.88 | 0.8727 | 0.8896 | 0.8561 | 0.7463 | 0.9284
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.9 | 0.8719 | 0.8878 | 0.8561 | 0.7444 | 0.9281
 | All features | | 0.95 | 0.7746 | 0.6388 | 0.9087 | 0.5698 | 0.8906
ELM | FCGR | K = 1 + 2 + 4 | 0.88 | 0.8428 | 0.8636 | 0.8222 | 0.6866 | 0.9075
 | FCGR + DAC | K = 4, lag = 2 | 0.88 | 0.8461 | 0.8724 | 0.8200 | 0.6936 | 0.9128
 | FCGR + TAC | K = 4, lag = 2 | 0.85 | 0.8469 | 0.8698 | 0.8244 | 0.6952 | 0.9129
 | FCGR + DACC | K = 4, lag = 2 | 0.9 | 0.8458 | 0.8763 | 0.8157 | 0.6936 | 0.9095
 | FCGR + TACC | K = 4, lag = 2 | 0.88 | 0.8439 | 0.8808 | 0.8074 | 0.6902 | 0.9072
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.85 | 0.8454 | 0.8645 | 0.8265 | 0.6918 | 0.9107
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.88 | 0.8437 | 0.8627 | 0.8248 | 0.6882 | 0.9069
 | All features | | 0.88 | 0.8447 | 0.8843 | 0.8057 | 0.6923 | 0.9118
XGBoost | FCGR | K = 1 + 2 + 4 | 0.95 | 0.8513 | 0.8733 | 0.8296 | 0.7037 | 0.9175
 | FCGR + DAC | K = 4, lag = 2 | 0.85 | 0.8537 | 0.8667 | 0.8409 | 0.7080 | 0.9172
 | FCGR + TAC | K = 4, lag = 2 | 0.95 | 0.8859 | 0.9045 | 0.8674 | 0.7725 | 0.9491
 | FCGR + DACC | K = 4, lag = 2 | 0.93 | 0.8364 | 0.8601 | 0.8130 | 0.6741 | 0.9014
 | FCGR + TACC | K = 4, lag = 2 | 0.95 | 0.8395 | 0.8667 | 0.8126 | 0.6805 | 0.9050
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.95 | 0.8463 | 0.8711 | 0.8217 | 0.6937 | 0.9147
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.93 | 0.8498 | 0.8645 | 0.8352 | 0.7003 | 0.9155
 | All features | | 0.95 | 0.8423 | 0.8729 | 0.8122 | 0.6864 | 0.9051

“PCA%” means contributing rate of principal component

Best values are in bold

Table 6

PCA dimensionality reduction results via 10-fold cross-validation for C. elegans

Method | Feature | Parameters | PCA% | ACC | Sn | Sp | MCC | AUC
SVM | FCGR | K = 1 + 2 + 4 | 0.88 | 0.8551 | 0.8960 | 0.8148 | 0.7130 | 0.9242
 | FCGR + DAC | K = 4, lag = 2 | 0.9 | 0.8562 | 0.8870 | 0.8259 | 0.7142 | 0.9245
 | FCGR + TAC | K = 4, lag = 2 | 0.93 | 0.8558 | 0.8824 | 0.8297 | 0.7132 | 0.9245
 | FCGR + DACC | K = 4, lag = 2 | 0.95 | 0.8265 | 0.9147 | 0.7397 | 0.6642 | 0.9057
 | FCGR + TACC | K = 4, lag = 2 | 0.95 | 0.8336 | 0.8079 | 0.8589 | 0.6682 | 0.9052
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.85 | 0.8543 | 0.8913 | 0.8179 | 0.7112 | 0.9236
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.95 | 0.8516 | 0.8929 | 0.8110 | 0.7064 | 0.9243
 | All features | | 0.95 | 0.8249 | 0.8029 | 0.8466 | 0.6513 | 0.8823
ELM | FCGR | K = 1 + 2 + 4 | 0.95 | 0.8535 | 0.8882 | 0.8194 | 0.7093 | 0.9193
 | FCGR + DAC | K = 4, lag = 2 | 0.88 | 0.8489 | 0.8742 | 0.8240 | 0.6990 | 0.9124
 | FCGR + TAC | K = 4, lag = 2 | 0.93 | 0.8500 | 0.8742 | 0.8263 | 0.7012 | 0.9157
 | FCGR + DACC | K = 4, lag = 2 | 0.9 | 0.8476 | 0.8703 | 0.8252 | 0.6962 | 0.9159
 | FCGR + TACC | K = 4, lag = 2 | 0.9 | 0.8537 | 0.8808 | 0.8271 | 0.7090 | 0.9183
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.95 | 0.8466 | 0.8777 | 0.8160 | 0.6951 | 0.9158
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.85 | 0.8452 | 0.8679 | 0.8229 | 0.6915 | 0.9160
 | All features | | 0.93 | 0.8505 | 0.8816 | 0.8198 | 0.7030 | 0.9183
XGBoost | FCGR | K = 1 + 2 + 4 | 0.90 | 0.8458 | 0.8870 | 0.8052 | 0.6946 | 0.9175
 | FCGR + DAC | K = 4, lag = 2 | 0.90 | 0.8526 | 0.8831 | 0.8225 | 0.7068 | 0.9234
 | FCGR + TAC | K = 4, lag = 2 | 0.95 | 0.8508 | 0.8738 | 0.8282 | 0.7028 | 0.9195
 | FCGR + DACC | K = 4, lag = 2 | 0.85 | 0.8396 | 0.8570 | 0.8225 | 0.6800 | 0.9147
 | FCGR + TACC | K = 4, lag = 2 | 0.95 | 0.8385 | 0.8621 | 0.8152 | 0.6782 | 0.9110
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.93 | 0.8456 | 0.8808 | 0.8110 | 0.6934 | 0.9200
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.90 | 0.8472 | 0.8835 | 0.8114 | 0.6967 | 0.9191
 | All features | | 0.95 | 0.8400 | 0.8613 | 0.8190 | 0.6812 | 0.9143

“PCA%” means contributing rate of principal component

Best values are in bold

Table 7

PCA dimensionality reduction results via 10-fold cross-validation for D. melanogaster

Method | Feature | Parameters | PCA% | ACC | Sn | Sp | MCC | AUC
SVM | FCGR | K = 1 + 2 + 4 | 0.88 | 0.8108 | 0.7786 | 0.8435 | 0.6235 | 0.8785
 | FCGR + DAC | K = 4, lag = 2 | 0.95 | 0.8070 | 0.7855 | 0.8288 | 0.6152 | 0.8768
 | FCGR + TAC | K = 4, lag = 2 | 0.93 | 0.8115 | 0.7831 | 0.8404 | 0.6245 | 0.8766
 | FCGR + DACC | K = 4, lag = 2 | 0.95 | 0.7809 | 0.7931 | 0.7684 | 0.5621 | 0.8343
 | FCGR + TACC | K = 4, lag = 2 | 0.95 | 0.7678 | 0.6879 | 0.8491 | 0.5440 | 0.8363
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.9 | 0.8085 | 0.7752 | 0.8425 | 0.6190 | 0.8773
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.93 | 0.8097 | 0.7772 | 0.8428 | 0.6215 | 0.8761
 | All features | | 0.95 | 0.7593 | 0.70414 | 0.81544 | 0.52275 | 0.80283
ELM | FCGR + DAC | K = 2, lag = 2 | 0.95 | 0.7817 | 0.7690 | 0.7947 | 0.5642 | 0.8544
 | FCGR + TAC | K = 2, lag = 2 | 0.95 | 0.7859 | 0.7735 | 0.7986 | 0.5723 | 0.8552
 | FCGR + DACC | K = 2, lag = 2 | 0.9 | 0.7530 | 0.7524 | 0.7537 | 0.5064 | 0.8262
 | FCGR + TACC | K = 2, lag = 2 | 0.95 | 0.7365 | 0.7472 | 0.7256 | 0.4733 | 0.8018
 | FCGR + PCPseDNC | K = 2, λ = 8, w = 0.5 | 0.93 | 0.7837 | 0.7597 | 0.8081 | 0.5685 | 0.8587
 | FCGR + PCPseTNC | K = 2, λ = 8, w = 0.5 | 0.88 | 0.7678 | 0.7283 | 0.8081 | 0.5379 | 0.8448
 | All features | K = 2 | 0.95 | 0.7727 | 0.7714 | 0.7740 | 0.5455 | 0.8437
XGBoost | FCGR | K = 1 + 2 + 4 | 0.9 | 0.8037 | 0.7824 | 0.8253 | 0.6085 | 0.8772
 | FCGR + DAC | K = 4, lag = 2 | 0.9 | 0.7877 | 0.7683 | 0.8074 | 0.5763 | 0.8630
 | FCGR + TAC | K = 4, lag = 2 | 0.88 | 0.7930 | 0.7635 | 0.8232 | 0.5879 | 0.8671
 | FCGR + DACC | K = 4, lag = 2 | 0.93 | 0.7741 | 0.7690 | 0.7793 | 0.5486 | 0.8506
 | FCGR + TACC | K = 4, lag = 2 | 0.88 | 0.7654 | 0.7576 | 0.7733 | 0.5313 | 0.8461
 | FCGR + PCPseDNC | K = 4, λ = 8, w = 0.5 | 0.88 | 0.7988 | 0.7769 | 0.8211 | 0.5987 | 0.8753
 | FCGR + PCPseTNC | K = 4, λ = 8, w = 0.5 | 0.85 | 0.7974 | 0.7772 | 0.8179 | 0.5960 | 0.8727
 | All features | | 0.93 | 0.7647 | 0.7590 | 0.7705 | 0.5295 | 0.8406

“PCA%” means contributing rate of principal component

Best values are in bold

From Tables 5, 6 and 7, we notice that the prediction quality improved after PCA dimensionality reduction for H. sapiens: ACC, Sn, Sp, MCC and AUC increased by 4.57%, 3.12%, 6.00%, 9.03% and 3.56%, respectively, when FCGR was combined with TAC and used with XGBoost. However, the prediction quality did not improve significantly for C. elegans; in particular, with ELM it decreased slightly. For D. melanogaster, similarly, there is no significant improvement.

Comparison with other algorithms

To verify the effectiveness of our method, we compared the prediction results of the best-performing predictors in Tables 1, 2, 3 and 4 with other models on the same datasets. DLNN-5 [24] is a deep learning model with a convolution kernel size of 5, and ZCMM [23] is based on SVM. Tables 8, 9, 10 and 11 show that our methods perform prominently on the H. sapiens and S. cerevisiae datasets. For the S. cerevisiae dataset, SVM, ELM and MLP achieve Sn = Sp = ACC = MCC = AUC = 1 via 10-fold cross-validation when K = 3 or 4; compared with the model based on DNA deformation energy in the original paper [20], the prediction performance is obviously lifted. For H. sapiens, combining FCGR vectors with TAC and using XGBoost is higher than ZCMM in ACC, Sn, Sp, MCC and AUC by 10.87%, 15.58%, 5.23%, 21.25% and 8.81%, respectively; likewise, it is higher than DLNN-5 in ACC, Sn and Sp by 3.22%, 2.11% and 4.45%, respectively. The performance of CNN is slightly better than ZCMM and DLNN-5. For C. elegans, compared with ZCMM, ELM with the FCGR combination K = 1, 2 and 4 increases ACC, Sn, Sp, MCC and AUC by 2.20%, 10.64%, 1.56%, 13.15% and 3.01%, respectively. For D. melanogaster, our prediction accuracy is lower; ZCMM's accuracy (ACC) is the highest at 93.62%. These results imply that our approach is effective overall, performing unfavorably only on the D. melanogaster dataset.
Table 8

Comparison of our predictors with other models via 10-fold cross-validation for S. cerevisiae

Method | ACC | Sn | Sp | MCC | AUC
Deformation energy [20] | 0.981 | 0.982 | 0.980 | 0.963 | ~
FCGR-SVM | 1 | 1 | 1 | 1 | 1
FCGR-ELM | 1 | 1 | 1 | 1 | 1
FCGR-MLP | 1 | 1 | 1 | 1 | 1
FCGR-CNN | 0.9997 | 1 | 0.9994 | 0.9995 | 1
Table 9

Comparison of our predictors with other models via 10-fold cross-validation for H. sapiens

Method | Feature | ACC | Sn | Sp | MCC | AUC
DLNN-5 [24] | | 0.8537 | 0.8834 | 0.8229 | ~ | ~
ZCMM [23] | | 0.7772 | 0.7487 | 0.8151 | 0.5600 | 0.8610
SVM | FCGR | 0.8758 | 0.8966 | 0.8552 | 0.7528 | 0.9288
 | FCGR + DAC | 0.8749 | 0.8856 | 0.8644 | 0.7507 | 0.9314
 | FCGR + TAC | 0.8752 | 0.8878 | 0.8626 | 0.7513 | 0.9306
 | FCGR + DACC | 0.8537 | 0.8531 | 0.8544 | 0.7079 | 0.9208
 | FCGR + TACC | 0.8415 | 0.8319 | 0.8509 | 0.6837 | 0.9113
 | FCGR + PCPseDNC | 0.8727 | 0.8896 | 0.8561 | 0.7463 | 0.9284
 | FCGR + PCPseTNC | 0.8719 | 0.8878 | 0.8561 | 0.7444 | 0.9281
 | All features | 0.8137 | 0.7518 | 0.8748 | 0.6322 | 0.8996
ELM | FCGR | 0.8428 | 0.8636 | 0.8222 | 0.6866 | 0.9075
 | FCGR + DAC | 0.8461 | 0.8724 | 0.8200 | 0.6936 | 0.9128
 | FCGR + TAC | 0.8469 | 0.8698 | 0.8244 | 0.6952 | 0.9129
 | FCGR + DACC | 0.8458 | 0.8763 | 0.8157 | 0.6936 | 0.9095
 | FCGR + TACC | 0.8439 | 0.8808 | 0.8074 | 0.6902 | 0.9072
 | FCGR + PCPseDNC | 0.8454 | 0.8645 | 0.8265 | 0.6918 | 0.9107
 | FCGR + PCPseTNC | 0.8437 | 0.8627 | 0.8248 | 0.6882 | 0.9069
 | All features | 0.8447 | 0.8843 | 0.8057 | 0.6923 | 0.9118
XGBoost | FCGR | 0.8585 | 0.89309 | 0.8244 | 0.71934 | 0.9197
 | FCGR + DAC | 0.8537 | 0.8667 | 0.8409 | 0.708 | 0.9172
 | FCGR + TAC | 0.8859 | 0.9045 | 0.8674 | 0.7725 | 0.9491
 | FCGR + DACC | 0.8423 | 0.86583 | 0.8191 | 0.68588 | 0.9127
 | FCGR + TACC | 0.8395 | 0.8667 | 0.8126 | 0.6805 | 0.905
 | FCGR + PCPseDNC | 0.8559 | 0.88913 | 0.823 | 0.71396 | 0.9207
 | FCGR + PCPseTNC | 0.8498 | 0.8645 | 0.8352 | 0.7003 | 0.9155
 | All features | 0.8472 | 0.87374 | 0.8209 | 0.69581 | 0.917
MLP | FCGR | 0.8565 | 0.8768 | 0.8365 | 0.7144 | 0.9186
CNN | FCGR | 0.8585 | 0.8746 | 0.8426 | 0.7185 | 0.9214

The table shows the optimal results of each classifier, and the specific parameters are shown in the previous tables

Best values are in bold

Table 10

Comparison of our predictors with other models via 10-fold cross-validation for C. elegans

Method | Feature | ACC | Sn | Sp | MCC | AUC
DLNN-5 [24] | | 0.8962 | 0.9304 | 0.8634 | ~ | ~
ZCMM [23] | | 0.8534 | 0.7880 | 0.8410 | 0.6200 | 0.9120
SVM | FCGR | 0.8603 | 0.8948 | 0.8263 | 0.7229 | 0.9295
 | FCGR + DAC | 0.8574 | 0.8863 | 0.8290 | 0.7164 | 0.9283
 | FCGR + TAC | 0.8561 | 0.8824 | 0.8302 | 0.7137 | 0.9272
 | FCGR + DACC | 0.8471 | 0.8777 | 0.8171 | 0.6961 | 0.9122
 | FCGR + TACC | 0.8470 | 0.8641 | 0.8301 | 0.6949 | 0.9179
 | FCGR + PCPseDNC | 0.8576 | 0.8921 | 0.8236 | 0.7176 | 0.9275
 | FCGR + PCPseTNC | 0.8539 | 0.8839 | 0.8244 | 0.7096 | 0.9275
 | All features | 0.8431 | 0.8461 | 0.8401 | 0.6867 | 0.9139
ELM | FCGR | 0.8754 | 0.8944 | 0.8566 | 0.7515 | 0.9421
 | FCGR + DAC | 0.8707 | 0.8863 | 0.8555 | 0.7421 | 0.9355
 | FCGR + TAC | 0.8696 | 0.8890 | 0.8505 | 0.7400 | 0.9359
 | FCGR + DACC | 0.8684 | 0.8831 | 0.8539 | 0.7376 | 0.9358
 | FCGR + TACC | 0.8680 | 0.8917 | 0.8447 | 0.7371 | 0.9329
 | FCGR + PCPseDNC | 0.8624 | 0.8847 | 0.8405 | 0.7258 | 0.9318
 | FCGR + PCPseTNC | 0.8557 | 0.8847 | 0.8271 | 0.7132 | 0.9262
 | All features | 0.8597 | 0.8863 | 0.8336 | 0.7210 | 0.9271
XGBoost | FCGR | 0.8487 | 0.8797 | 0.8182 | 0.6995 | 0.9202
 | FCGR + DAC | 0.8526 | 0.8831 | 0.8225 | 0.7068 | 0.9234
 | FCGR + TAC | 0.8508 | 0.8738 | 0.8282 | 0.7028 | 0.9195
 | FCGR + DACC | 0.8462 | 0.8703 | 0.8225 | 0.6938 | 0.917
 | FCGR + TACC | 0.8417 | 0.8676 | 0.8163 | 0.6848 | 0.9162
 | FCGR + PCPseDNC | 0.8456 | 0.8808 | 0.811 | 0.6934 | 0.92
 | FCGR + PCPseTNC | 0.8493 | 0.8789 | 0.8202 | 0.7004 | 0.9178
 | All features | 0.8481 | 0.8695 | 0.8271 | 0.6973 | 0.9195
MLP | FCGR | 0.8537 | 0.8613 | 0.8462 | 0.7092 | 0.9225
CNN | FCGR | 0.8495 | 0.8839 | 0.8156 | 0.702 | 0.9181

The table shows the optimal results of each classifier, and the specific parameters are shown in the previous tables

Best values are in bold

Table 11

Comparison of our predictors with other models via 10-fold cross-validation for D. melanogaster

Method | Feature | ACC | Sn | Sp | MCC | AUC
DLNN-5 [24] | ~ | 0.8560 | 0.8781 | 0.8333 | ~ | ~
ZCMM [23] | ~ | 0.9362 | 0.9226 | 0.7964 | 0.7000 | 0.9110
SVM | FCGR | 0.8113 | 0.7831 | 0.84 | 0.6241 | 0.8791
 | FCGR + DAC | 0.8089 | 0.7835 | 0.8347 | 0.619 | 0.8747
 | FCGR + TAC | 0.8115 | 0.7831 | 0.8404 | 0.6245 | 0.8766
 | FCGR + DACC | 0.7809 | 0.7931 | 0.7684 | 0.5621 | 0.8343
 | FCGR + TACC | 0.7678 | 0.6879 | 0.8491 | 0.544 | 0.8363
 | FCGR + PCPseDNC | 0.8085 | 0.7752 | 0.8425 | 0.619 | 0.8773
 | FCGR + PCPseTNC | 0.8097 | 0.7772 | 0.8428 | 0.6215 | 0.8761
 | All features | 0.7593 | 0.70414 | 0.81544 | 0.52275 | 0.80283
ELM | FCGR | 0.791 | 0.7648 | 0.8175 | 0.5833 | 0.8595
 | FCGR + DAC | 0.792 | 0.7779 | 0.8063 | 0.5847 | 0.8644
 | FCGR + TAC | 0.7917 | 0.7807 | 0.8028 | 0.5839 | 0.8651
 | FCGR + DACC | 0.7769 | 0.7617 | 0.7923 | 0.5544 | 0.8503
 | FCGR + TACC | 0.7694 | 0.7735 | 0.7653 | 0.5391 | 0.846
 | FCGR + PCPseDNC | 0.7896 | 0.7631 | 0.8165 | 0.5806 | 0.8651
 | FCGR + PCPseTNC | 0.7678 | 0.7283 | 0.8081 | 0.5379 | 0.8448
 | All features | 0.7847 | 0.781 | 0.7884 | 0.57 | 0.8576
XGBoost | FCGR | 0.8037 | 0.7824 | 0.8253 | 0.6085 | 0.8772
 | FCGR + DAC | 0.7877 | 0.7683 | 0.8074 | 0.5763 | 0.863
 | FCGR + TAC | 0.793 | 0.7635 | 0.8232 | 0.5879 | 0.8671
 | FCGR + DACC | 0.7741 | 0.769 | 0.7793 | 0.5486 | 0.8506
 | FCGR + TACC | 0.7824 | 0.7693 | 0.7958 | 0.5658 | 0.8542
 | FCGR + PCPseDNC | 0.7997 | 0.7824 | 0.8172 | 0.6001 | 0.8725
 | FCGR + PCPseTNC | 0.7988 | 0.779 | 0.819 | 0.5989 | 0.8775
 | All features | 0.7951 | 0.7793 | 0.8112 | 0.5909 | 0.8718
MLP | FCGR | 0.8117 | 0.8000 | 0.8235 | 0.6238 | 0.8848
CNN | FCGR | 0.8108 | 0.8014 | 0.8204 | 0.6228 | 0.8854

The table shows the optimal results of each classifier, and the specific parameters are shown in the previous tables

Best values are in bold


Comparison with other advanced methods

In addition to the DLNN-5 and ZCMM models, there are other advanced methods for nucleosome prediction on the same datasets. The LeNup model utilizes an improved convolutional neural network that adds inception modules and gated convolutional layers [25]. 3LS is based on a linear regression model [22]. LeNup used 20-fold cross-validation and provided comparison data with 3LS for H. sapiens, C. elegans and D. melanogaster. Therefore, we used the results provided by LeNup for comparative analysis in Tables 12, 13 and 14.
Table 12

Comparison of our predictors with other advanced models via 20-fold cross-validation for H. sapiens

Method | Feature | ACC | Sn | Sp | MCC | AUC
LeNup [25] | ~ | 0.8889 | 0.9212 | 0.8562 | 0.7906 | 0.9412
3LS [22] | ~ | 0.9001 | 0.9169 | 0.8835 | 0.8006 | 0.9588
SVM | FCGR | 0.8760 | 0.8940 | 0.8583 | 0.7535 | 0.9288
 | FCGR + DAC | 0.8751 | 0.8874 | 0.8630 | 0.7513 | 0.9310
 | FCGR + TAC | 0.8754 | 0.8869 | 0.8639 | 0.7519 | 0.9318
 | FCGR + DACC | 0.8563 | 0.8544 | 0.8583 | 0.7138 | 0.9217
 | FCGR + TACC | 0.8423 | 0.8337 | 0.8509 | 0.6858 | 0.9114
 | FCGR + PseDNC | 0.8736 | 0.8883 | 0.8591 | 0.7481 | 0.9294
 | FCGR + PseTNC | 0.8740 | 0.8905 | 0.8578 | 0.7491 | 0.9280
 | All features | 0.8154 | 0.7545 | 0.8757 | 0.6355 | 0.8998
ELM | FCGR | 0.8456 | 0.8702 | 0.8213 | 0.6931 | 0.9092
 | FCGR + DAC | 0.8469 | 0.8707 | 0.8235 | 0.6952 | 0.9087
 | FCGR + TAC | 0.8478 | 0.8750 | 0.8209 | 0.6974 | 0.9142
 | FCGR + DACC | 0.8500 | 0.8772 | 0.8230 | 0.7017 | 0.9054
 | FCGR + TACC | 0.8454 | 0.8865 | 0.8048 | 0.6941 | 0.9104
 | FCGR + PseDNC | 0.8476 | 0.8640 | 0.8313 | 0.6968 | 0.9111
 | FCGR + PseTNC | 0.8439 | 0.8702 | 0.8178 | 0.6893 | 0.9111
 | All features | 0.8474 | 0.8909 | 0.8044 | 0.6980 | 0.9141
XGBoost | FCGR | 0.8602 | 0.897 | 0.8239 | 0.7235 | 0.9237
 | FCGR + DAC | 0.8561 | 0.8627 | 0.8496 | 0.7130 | 0.9186
 | FCGR + TAC | 0.8865 | 0.9035 | 0.8696 | 0.7734 | 0.9394
 | FCGR + DACC | 0.8439 | 0.8667 | 0.8213 | 0.6894 | 0.9136
 | FCGR + TACC | 0.8406 | 0.8711 | 0.8104 | 0.6831 | 0.9046
 | FCGR + PseDNC | 0.8563 | 0.8931 | 0.82 | 0.7152 | 0.9208
 | FCGR + PseTNC | 0.8504 | 0.8712 | 0.8300 | 0.7029 | 0.9185
 | All features | 0.8511 | 0.8755 | 0.827 | 0.7039 | 0.9193
MLP | FCGR | 0.8579 | 0.8839 | 0.8322 | 0.7172 | 0.9186
CNN | FCGR | 0.8616 | 0.8746 | 0.8487 | 0.7239 | 0.9222

Best values are in bold

Table 13

Comparison of our predictors with other advanced models via 20-fold cross-validation for C. elegans

Method | Feature | ACC | Sn | Sp | MCC | AUC
LeNup [25] | ~ | 0.9188 | 0.9339 | 0.9041 | 0.8444 | 0.9663
3LS [22] | ~ | 0.8786 | 0.8654 | 0.8921 | 0.7576 | 0.9605
SVM | FCGR | 0.8623 | 0.8946 | 0.8304 | 0.7268 | 0.9301
 | FCGR + DAC | 0.8578 | 0.8882 | 0.8278 | 0.7173 | 0.9274
 | FCGR + TAC | 0.8564 | 0.8804 | 0.8328 | 0.7143 | 0.9234
 | FCGR + DACC | 0.8483 | 0.8703 | 0.8266 | 0.6982 | 0.9170
 | FCGR + TACC | 0.8481 | 0.8726 | 0.824 | 0.6975 | 0.9201
 | FCGR + PseDNC | 0.8585 | 0.8948 | 0.8228 | 0.7198 | 0.9278
 | FCGR + PseTNC | 0.8551 | 0.8859 | 0.8248 | 0.7133 | 0.9285
 | All features | 0.8450 | 0.8359 | 0.8539 | 0.6900 | 0.9210
ELM | FCGR | 0.8757 | 0.8940 | 0.8577 | 0.7525 | 0.9419
 | FCGR + DAC | 0.8715 | 0.8882 | 0.8551 | 0.7444 | 0.9371
 | FCGR + TAC | 0.8711 | 0.8878 | 0.8547 | 0.7433 | 0.9356
 | FCGR + DACC | 0.8715 | 0.8901 | 0.8532 | 0.7442 | 0.9374
 | FCGR + TACC | 0.8692 | 0.8948 | 0.8439 | 0.7400 | 0.9342
 | FCGR + PseDNC | 0.8678 | 0.8875 | 0.8486 | 0.7369 | 0.9348
 | FCGR + PseTNC | 0.8563 | 0.8851 | 0.8279 | 0.7144 | 0.9293
 | All features | 0.8614 | 0.8921 | 0.8312 | 0.7251 | 0.9299
XGBoost | FCGR | 0.8520 | 0.8831 | 0.8213 | 0.7060 | 0.9207
 | FCGR + DAC | 0.8541 | 0.8855 | 0.8232 | 0.7108 | 0.9223
 | FCGR + TAC | 0.8537 | 0.8757 | 0.8321 | 0.709 | 0.9195
 | FCGR + DACC | 0.8465 | 0.8671 | 0.8255 | 0.694 | 0.9161
 | FCGR + TACC | 0.8471 | 0.8702 | 0.8244 | 0.6959 | 0.9188
 | FCGR + PseDNC | 0.8487 | 0.8781 | 0.8198 | 0.6996 | 0.9204
 | FCGR + PseTNC | 0.8501 | 0.8804 | 0.8202 | 0.7022 | 0.9224
 | All features | 0.8518 | 0.8851 | 0.8190 | 0.7059 | 0.9188
MLP | FCGR | 0.8589 | 0.8864 | 0.8318 | 0.7206 | 0.9281
CNN | FCGR | 0.8529 | 0.8778 | 0.8284 | 0.7076 | 0.9181

Best values are in bold

Table 14

Comparison of our predictors with other advanced models via 20-fold cross-validation for D. melanogaster

Method | Feature | ACC | Sn | Sp | MCC | AUC
LeNup [25] | ~ | 0.8847 | 0.8974 | 0.8713 | 0.7828 | 0.9401
3LS [22] | ~ | 0.8341 | 0.8407 | 0.8274 | 0.6682 | 0.9147
SVM | FCGR | 0.8117 | 0.7841 | 0.8396 | 0.6251 | 0.8782
 | FCGR + DAC | 0.8094 | 0.7876 | 0.8316 | 0.6201 | 0.8783
 | FCGR + TAC | 0.8118 | 0.8073 | 0.8163 | 0.6252 | 0.8863
 | FCGR + DACC | 0.7866 | 0.7997 | 0.7733 | 0.5738 | 0.8384
 | FCGR + TACC | 0.7767 | 0.7128 | 0.8418 | 0.5593 | 0.8412
 | FCGR + PseDNC | 0.8095 | 0.7992 | 0.8199 | 0.6209 | 0.8843
 | FCGR + PseTNC | 0.8108 | 0.8034 | 0.8183 | 0.6234 | 0.8848
 | All features | 0.7602 | 0.7014 | 0.8200 | 0.5255 | 0.8059
ELM | FCGR | 0.7912 | 0.7651 | 0.8204 | 0.5842 | 0.8601
 | FCGR + DAC | 0.7924 | 0.7752 | 0.8098 | 0.5862 | 0.8689
 | FCGR + TAC | 0.7932 | 0.7776 | 0.8091 | 0.5877 | 0.8619
 | FCGR + DACC | 0.7793 | 0.7710 | 0.7877 | 0.5599 | 0.8537
 | FCGR + TACC | 0.7697 | 0.7686 | 0.7709 | 0.5403 | 0.8456
 | FCGR + PseDNC | 0.7910 | 0.7659 | 0.8165 | 0.5837 | 0.8648
 | FCGR + PseTNC | 0.7691 | 0.7455 | 0.7930 | 0.5395 | 0.8433
 | All features | 0.7878 | 0.7859 | 0.7899 | 0.5763 | 0.8637
XGBoost | FCGR | 0.8037 | 0.7821 | 0.8257 | 0.6088 | 0.8771
 | FCGR + DAC | 0.7891 | 0.7741 | 0.8042 | 0.5791 | 0.8648
 | FCGR + TAC | 0.7948 | 0.7762 | 0.8137 | 0.5910 | 0.8690
 | FCGR + DACC | 0.7814 | 0.7790 | 0.7839 | 0.5634 | 0.8540
 | FCGR + TACC | 0.7706 | 0.7648 | 0.7765 | 0.5417 | 0.8508
 | FCGR + PseDNC | 0.8010 | 0.7786 | 0.8239 | 0.6036 | 0.8728
 | FCGR + PseTNC | 0.8074 | 0.7958 | 0.8193 | 0.6165 | 0.8831
 | All features | 0.7979 | 0.7745 | 0.8218 | 0.5972 | 0.8739
MLP | FCGR | 0.8127 | 0.8003 | 0.8253 | 0.6272 | 0.8893
CNN | FCGR | 0.8116 | 0.8036 | 0.8198 | 0.6252 | 0.8854

Best values are in bold

LeNup has the best overall prediction performance: its accuracy on C. elegans is 0.9188, and its accuracy on the other species is also over 0.88. The results of our method are relatively close to it on the H. sapiens dataset. For C. elegans, ELM with FCGR performs slightly worse than 3LS: ACC, Sp, MCC and AUC decrease by 0.29%, 3.44%, 0.51% and 1.86%, respectively.
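The tables above report ACC, Sn, Sp, MCC and AUC. As a reminder of how the first four are computed, here is a minimal sketch from confusion-matrix counts (the counts below are hypothetical, not taken from any table):

```python
import math

def metrics(tp, fn, tn, fp):
    """ACC, Sn, Sp and MCC from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn)   # sensitivity: fraction of nucleosome-forming sequences recovered
    sp = tn / (tn + fp)   # specificity: fraction of nucleosome-inhibiting sequences recovered
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, sn, sp, mcc

# Hypothetical counts for illustration only
acc, sn, sp, mcc = metrics(tp=90, fn=10, tn=80, fp=20)
```

AUC, by contrast, is computed from the ranking of predicted scores (the area under the ROC curve), not from a single confusion matrix.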

Discussion

Firstly, the results in Table 1 and Fig. 1 clearly show that FCGR features combining several K values outperform any single K value, and that the SVM produces better prediction results. When training the CNN and MLP models, we used multi-channel images with multiple K values as input, and the prediction accuracy improved. All of this indicates that combining FCGR features with different K values expresses sequence features better, thereby improving the models' prediction accuracy. Secondly, we integrated FCGR with other feature representations and compared prediction results across three types of machine learning algorithms (Tables 2, 3, 4). In addition, we applied PCA dimensionality reduction to the feature vectors to prevent high-dimensional features from causing overfitting (Tables 5, 6, 7). Although the overall prediction quality of the integrated features improved after PCA dimensionality reduction, the FCGR representation alone still obtained superior results, which further illustrates the advantages of the FCGR feature representation. Finally, we compared the proposed method with other advanced algorithms. Our algorithm achieves slightly superior results on the H. sapiens and S. cerevisiae datasets, but lags behind on the other two datasets. On the one hand, this demonstrates the feasibility of our method; on the other, it shows that our work has room for improvement.
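The PCA step mentioned above can be sketched with scikit-learn; the feature dimension and component count below are illustrative, not the values used in the paper:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for an integrated feature matrix (samples x features); sizes are illustrative
X = rng.random((200, 336))

pca = PCA(n_components=50)   # illustrative target dimension; tuned per dataset in practice
X_reduced = pca.fit_transform(X)
```

The reduced matrix `X_reduced` then replaces `X` as input to the downstream classifiers.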

Conclusions

In this work, we used FCGR to represent DNA sequence features and applied it to nucleosome positioning. Our experiments achieved positive results, especially when multiple features were used in combination, which improved prediction quality. The advantage of this representation is that constructing the features takes little time, and the features are clear and intuitive. The quality of the integrated feature representation is also acceptable; in particular, after PCA dimensionality reduction, the prediction quality on the H. sapiens dataset improved, demonstrating the feasibility of the method. We also tried a simple CNN model on FCGR images and obtained mediocre results. Since deep learning is increasingly used in bioinformatics, in further research on nucleosome positioning we will try to build a more efficient deep learning prediction model for DNA represented in the form of images, such as FCGR images.

Methods

Dataset descriptions

To compare the results of the predictors, the datasets for this work were downloaded from two published papers [20, 21]. The first group of datasets involves H. sapiens, C. elegans and D. melanogaster, from the paper by Guo et al. [21]; the length of each DNA sequence is 147 bp. The second dataset involves the S. cerevisiae genome, from the paper by Chen et al. [20]; the length of each DNA sequence is 150 bp. Both datasets contain two types of samples: nucleosome-forming sequences (positive data) and nucleosome-inhibiting sequences (negative data), and no included sequence has ≥ 80% pairwise sequence identity with any other. The details of the datasets are shown in Table 15.
Table 15

The quantity composition of the four species datasets

Species | N-f | N-i | Total
H. sapiens | 2273 | 2300 | 4573
C. elegans | 2567 | 2608 | 5175
D. melanogaster | 2900 | 2850 | 5750
S. cerevisiae | 1880 | 1740 | 3620

N-f indicates nucleosome-forming sequences (positive data) and N-i indicates nucleosome-inhibiting sequences (negative data)

DNA sequence feature representation

Except for the methods mentioned above, common DNA sequence representation methods include basic kmer (Kmer) [34] and reverse complementary kmer (RevKmer) [35], which are based on deoxyribonucleotide composition; methods based on correlations between nucleotide physicochemical indices, such as dinucleotide-based auto-covariance (DAC) and trinucleotide-based auto-covariance (TAC) [29]; and pseudo k-tuple nucleotide composition (PseKNC) [21], based on pseudo deoxyribonucleotide composition. These feature representations have specific calculation formulas and iterative functions, and some are complex and time-consuming to compute. This paper mainly uses a simple and intuitive feature representation. Chaos game representation (CGR) is a graphical representation of gene sequences based on chaos theory, proposed by Jeffrey in 1990 [36]. The method is as follows: the four nucleotides {A, T, G, C} are placed at the four vertices of the plane coordinate system, with A = (0, 0), T = (1, 0), G = (1, 1) and C = (0, 1), and the coordinate of each nucleotide in the DNA sequence is drawn according to formula (2): X_i = (X_{i-1} + N_i) / 2, i = 1, 2, ..., L, where X_0 is the given starting point (commonly the center (0.5, 0.5)), L is the length of the DNA sequence, N_i is the vertex assigned to the i-th nucleotide, and X_i is the corresponding coordinate of the i-th nucleotide. This method draws a corresponding image of a DNA sequence through the iterative function and makes the nucleotides in the sequence correspond one-to-one to points in the image [36-40]. Figure 5 shows the CGR graphical representation of the two types of sample sequences in the H. sapiens dataset.
Fig. 5

CGR of DNA sequences: a H. sapiens nucleosome-inhibiting sample and b H. sapiens nucleosome-forming sample

Dividing the CGR image into sub-blocks and counting the number of points that fall in each sub-block determines the frequency of K-nucleotide combinations and converts the CGR image into a matrix, called the frequency chaos game representation (FCGR) [39]. For example, we divided the CGR graph of Fig. 5a into an 8 × 8 grid (K = 3), counted the points in each sub-block, and obtained the frequency matrix shown in Table 16.
Table 16

The frequency matrix of CGR image on H. sapiens nucleosome-inhibiting sample

 1  3  1  1  0  0  0  0
 3  1  4  1  1  0  2  3
 3  2  0  1  1  2  2  0
 4  4  3  4  7  0  2  2
 5  1  0  4  2  3  0  0
 3  7  2  2  5  1  2  0
 8  4  5  0  4  0  4  1
12  2  2  1  3  2  2  2
FCGR can be used not only as a numerical matrix but also as a grayscale image. The original CGR image is divided into sub-blocks: the darker a sub-block, the more points it contains; the lighter a sub-block, the fewer points it contains. The pixel values of the image lie between 0 and 255 [39]. Figure 6 shows the FCGR images of the sample sequence with K = 3, 4 and 5, respectively.
Fig. 6

FCGR image of H. sapiens nucleosome-inhibiting sample with different K: a K = 3, b K = 4 and c K = 5

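The CGR iteration and FCGR counting described above can be sketched as follows (the starting point (0.5, 0.5) and the row/column orientation of the grid are assumptions of this sketch):

```python
import numpy as np

CORNERS = {'A': (0.0, 0.0), 'T': (1.0, 0.0), 'G': (1.0, 1.0), 'C': (0.0, 1.0)}

def cgr_points(seq, start=(0.5, 0.5)):
    """Iterate X_i = (X_{i-1} + N_i) / 2 over the sequence."""
    x, y = start
    pts = []
    for base in seq:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        pts.append((x, y))
    return pts

def fcgr(seq, k=3):
    """Count CGR points per sub-block of a 2^k x 2^k grid."""
    n = 2 ** k
    mat = np.zeros((n, n), dtype=int)
    for x, y in cgr_points(seq):
        i = min(int(y * n), n - 1)   # row index from y (orientation assumed)
        j = min(int(x * n), n - 1)   # column index from x
        mat[i, j] += 1
    return mat

m = fcgr("ACGTACGTACGT", k=2)
```

The entries of the matrix sum to the sequence length, since each nucleotide contributes exactly one CGR point.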

Support vector machine

Support vector machine (SVM) is a commonly used binary classification model. Compared with other classification algorithms, it has good classification performance and strong generalization ability on small datasets, and it can handle nonlinear classification problems through the kernel trick. Thus, support vector machines have been widely used in the field of bioinformatics [19, 21, 23]. The basic idea is to map samples from the original low-dimensional space to a high-dimensional feature space, in which a separating hyperplane with the largest margin can be found to separate samples of different categories. In this paper, we use the Python package scikit-learn 0.23, which can be downloaded from https://scikit-learn.org/stable/index.html. This package contains the SVM module, whose implementation is based on libsvm. We train the SVM with the radial basis function (RBF) kernel and consider two parameters: the penalty parameter C and the kernel coefficient gamma. During training, we used grid search to determine the best values of the two parameters.
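A minimal sketch of the RBF-kernel SVM with grid search over C and gamma, as described above (the data and parameter grid are illustrative, not the values used in the paper):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((100, 64))             # e.g. flattened 8x8 FCGR vectors (synthetic here)
y = rng.integers(0, 2, size=100)      # 1 = nucleosome-forming, 0 = nucleosome-inhibiting

# RBF-kernel SVM; grid search over the penalty parameter C and kernel coefficient gamma
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
    cv=3,
)
grid.fit(X, y)
best = grid.best_params_
```

`grid.best_estimator_` is then used as the final predictor on held-out folds.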

Extreme learning machine

Extreme learning machine (ELM), proposed by Guang-Bin Huang, is a machine learning algorithm based on single-hidden-layer feedforward neural networks (SLFNs). Compared with traditional algorithms, ELM learns faster while maintaining learning accuracy. Its core idea is to randomly select the input-layer weights and hidden-layer biases of the network and obtain the corresponding hidden-node outputs [41]; only the output weights are solved for. The network structure of the ELM model is shown in Fig. 7.
Fig. 7

Basic architecture of ELM

Our experiments used David Lambert's Python implementation of ELM. ELM resources can be downloaded from the ELM web portal (https://www.ntu.edu.sg/home/egbhuang/), and the code can be found at https://github.com/dclambert/Python-ELM.
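The core ELM idea, random hidden-layer weights with output weights solved by a pseudo-inverse, can be sketched in a few lines (this is a generic sketch, not David Lambert's implementation; data and sizes are illustrative):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """ELM: random input weights and biases, hidden activations H, output weights by pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)                 # random hidden-layer biases
    H = np.tanh(X @ W + b)                        # hidden-node outputs
    beta = np.linalg.pinv(H) @ y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.random((80, 16))
y = (X.sum(axis=1) > 8).astype(float)             # synthetic binary target
W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta)
acc = float(((pred > 0.5) == (y > 0.5)).mean())
```

Because only `beta` is learned, training reduces to a single linear least-squares solve, which is what makes ELM fast.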

Extreme gradient boosting

Extreme gradient boosting (XGBoost) is an open-source machine learning project developed by Tianqi Chen et al. [42]. It is one of the boosting algorithms and is efficient, flexible, accurate and portable; it has been applied in the field of biomedicine [43]. The idea of the XGBoost algorithm is to keep adding trees, performing feature splits to grow each tree. Each added tree learns a new function that fits the residual of the previous prediction. When training is complete, K trees have been obtained. To predict the score of a sample, the sample's features lead it to a leaf node in each tree, and each leaf node carries a score; the predicted value of the sample is the sum of the scores over all trees. In this experiment, we used the Python package xgboost 1.2.0, which can be downloaded from https://github.com/dmlc/xgboost.
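The additive-tree idea described above can be illustrated with scikit-learn's GradientBoostingClassifier, which implements the same residual-fitting scheme (the paper itself used the xgboost package; the data here are synthetic):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 64))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # synthetic two-class target

# Each new tree is fit to the residual of the ensemble built so far; the final
# prediction sums the leaf scores a sample reaches in every tree.
model = GradientBoostingClassifier(n_estimators=50, learning_rate=0.1, max_depth=3)
model.fit(X, y)
train_acc = model.score(X, y)
```

In xgboost itself the analogous estimator is `xgboost.XGBClassifier`, which adds regularization terms and a faster split-finding implementation on top of this scheme.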

Multilayer perceptron

Multilayer perceptron (MLP) is also called a deep neural network (DNN) [44]. MLP extends the perceptron: multiple hidden layers are introduced between the input layer and the output layer, and the neurons between adjacent layers are fully connected, so both the hidden layers and the output layer of an MLP are fully connected layers. For the MLP, we used the AI Studio (https://aistudio.baidu.com/aistudio/index) experimental platform and the PaddlePaddle (https://www.paddlepaddle.org.cn/) deep learning framework provided by Baidu (https://www.baidu.com/) to implement the experimental model in Python (https://www.python.org/). The MLP has three hidden layers with the ReLU activation function [45], each containing 50 neurons, and the output layer uses a softmax activation function. The MLP is trained for 5 epochs with the Adamax optimizer and a learning rate of 0.001. The Adamax algorithm is a variant of the Adam algorithm based on the infinity norm, which makes the learning-rate update more stable and simple [46]. We use cross entropy as our loss function.
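An approximate sketch of the described MLP using scikit-learn (the paper used PaddlePaddle; scikit-learn offers Adam rather than Adamax, so the optimizer differs, and the data here are synthetic):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 64))
y = (X[:, 0] > 0.5).astype(int)   # synthetic binary target

# Three ReLU hidden layers of 50 neurons each, as described in the text;
# Adam here approximates the Adamax optimizer used in the paper.
mlp = MLPClassifier(hidden_layer_sizes=(50, 50, 50), activation="relu",
                    solver="adam", learning_rate_init=0.001, max_iter=300)
mlp.fit(X, y)
acc = mlp.score(X, y)
```

The cross-entropy loss mentioned in the text corresponds to scikit-learn's default log-loss for classification.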

Convolutional neural network

Convolutional neural network (CNN) is a representative deep learning algorithm. It has demonstrated extraordinary advantages in computer vision and has also been widely used in bioinformatics [47, 48]. Convolutional neural networks automatically extract features from input data; compared with fully connected neural networks, they simplify model complexity and effectively reduce the number of model parameters [49]. Applied to images, a CNN is mainly composed of convolutional layers, activation functions, pooling layers and fully connected layers [49, 50]. Owing to the limited sample size, we need to prevent over-fitting during training, so we add a batch normalization (BN) layer [51] after each convolutional layer and a dropout layer [52] after the fully connected layer. In our network, the convolutional layers use 3 × 3 kernels; the first layer has 64 filters and the second has 32. The pooling layers use 2 × 2 max pooling with stride 2. The first fully connected layer has 100 neurons and the second has 50; the dropout probability of the subsequent dropout layer is 0.5. Except for the softmax activation in the output layer, all layers use the ReLU activation function. The CNN is trained for 20 epochs with the Adamax optimizer and a learning rate of 0.001; the loss function is cross entropy. Like the MLP, we used the AI Studio experimental platform and the PaddlePaddle deep learning framework provided by Baidu to implement the experimental model in Python. The specific network structure is shown in Fig. 8.
Fig. 8

The architecture of our CNN model

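The layer-by-layer output sizes of the architecture above can be traced with the standard convolution/pooling shape formulas (assuming a 32 × 32 input, i.e. K = 5, and no padding; the paper does not state its padding scheme):

```python
def conv2d_shape(h, w, k=3, pad=0, stride=1):
    """Output height/width of a convolution: (n - k + 2*pad) // stride + 1."""
    return (h - k + 2 * pad) // stride + 1, (w - k + 2 * pad) // stride + 1

def pool2d_shape(h, w, k=2, stride=2):
    """Output height/width of a pooling layer."""
    return (h - k) // stride + 1, (w - k) // stride + 1

h, w = 32, 32                 # FCGR image with K = 5 (illustrative input size)
h, w = conv2d_shape(h, w)     # conv 3x3, 64 filters -> 30 x 30
h, w = pool2d_shape(h, w)     # max pool 2x2, stride 2 -> 15 x 15
h, w = conv2d_shape(h, w)     # conv 3x3, 32 filters -> 13 x 13
h, w = pool2d_shape(h, w)     # max pool 2x2, stride 2 -> 6 x 6
flat = h * w * 32             # flattened input to the first 100-neuron dense layer
```

Under these assumptions the flattened feature vector entering the fully connected layers has 6 × 6 × 32 = 1152 elements.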