Mehmet Emin Bakir, Savas Konur, Marian Gheorghe, Natalio Krasnogor, Mike Stannett.
Abstract
Motivation: Formal verification is a computational approach that checks system correctness (in relation to a desired functionality). It has been widely used in engineering applications to verify that systems work correctly. Model checking, an algorithmic approach to verification, checks whether a system model satisfies its requirements specification. This approach has been applied to a large number of models in systems and synthetic biology, as well as in systems medicine. Model checking is, however, computationally very expensive and does not scale to large models and systems. Consequently, statistical model checking (SMC), which relaxes some of the constraints of model checking, has been introduced to address this drawback. Several SMC tools have been developed; however, the performance of each tool varies significantly with the system model in question and the type of requirements being verified. This makes it hard to know, a priori, which one to use for a given model and requirement, as choosing the most efficient tool for any biological application requires a significant degree of computational expertise, not usually available in biology labs. The objective of this article is to introduce a method, and provide a tool, leading to the automatic selection of the most appropriate model checker for the system of interest.
Year: 2018 PMID: 29688313 PMCID: PMC6137970 DOI: 10.1093/bioinformatics/bty282
Source DB: PubMed Journal: Bioinformatics ISSN: 1367-4803 Impact factor: 6.937
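The selection method described in the abstract amounts to a supervised-classification step: features of a biochemical model (e.g. numbers of species and reactions) are mapped to the SMC tool that verified it fastest. The following is a minimal nearest-neighbour sketch of that idea; the feature choice, training points and helper name are hypothetical illustrations, not the authors' code or data.

```python
# Illustrative sketch (not the authors' implementation): predict the fastest
# SMC tool for a model from simple topological features using a
# nearest-neighbour rule over hypothetical labelled examples.
import math

# Hypothetical training data: (log2 #species, log2 #reactions) -> fastest tool
TRAINING = [
    ((3.0, 4.0), "Ymer"),
    ((3.5, 4.5), "Ymer"),
    ((7.0, 8.0), "PLASMA-Lab"),
    ((8.0, 9.0), "PLASMA-Lab"),
    ((5.0, 5.5), "PRISM"),
]

def predict_fastest_tool(log_species: float, log_reactions: float) -> str:
    """Return the tool labelling the training point nearest to the query."""
    def dist(item):
        (fx, fy), _tool = item
        return math.hypot(fx - log_species, fy - log_reactions)
    _features, tool = min(TRAINING, key=dist)
    return tool

print(predict_fastest_tool(3.2, 4.1))  # a small model
print(predict_fastest_tool(7.5, 8.4))  # a large model
```

In the paper the classifiers are trained on many more features and models; this sketch only shows the shape of the prediction step.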
Fig. 1. Computational time and feature importance: average computational time and feature importance associated with model topological properties
The number of models verified against different property patterns
| Patterns | PRISM Verif. | PRISM Fast. | PLASMA-Lab Verif. | PLASMA-Lab Fast. | Ymer Verif. | Ymer Fast. | MRMC Verif. | MRMC Fast. | MC2 Verif. | MC2 Fast. |
|---|---|---|---|---|---|---|---|---|---|---|
| Eventually | 364 | 18 | 675 | 248 | 644 | 402 | 116 | 3 | 668 | 4 |
| Always | 480 | 80 | 675 | 132 | 644 | 457 | 118 | 2 | 668 | 4 |
| Follows | N/A | N/A | 675 | 575 | N/A | N/A | 116 | 39 | 664 | 61 |
| Precedes | 672 | 170 | 675 | 18 | 644 | 486 | 113 | 0 | 664 | 1 |
| Never | 542 | 103 | 675 | 147 | 644 | 422 | 116 | 1 | 668 | 2 |
| Steady state | N/A | N/A | 675 | 579 | N/A | N/A | 80 | 30 | 668 | 66 |
| Until | 592 | 125 | 675 | 82 | 644 | 465 | 112 | 0 | 664 | 3 |
| Infinitely often | N/A | N/A | 675 | 604 | N/A | N/A | N/A | N/A | 668 | 71 |
| Next | 658 | 581 | 675 | 17 | N/A | N/A | 118 | 36 | 675 | 41 |
| Release | 622 | 151 | 675 | 49 | 644 | 472 | 111 | 0 | 664 | 3 |
| Weak until | 591 | 126 | 675 | 82 | 644 | 465 | 112 | 0 | 664 | 2 |
Note: Columns labelled Verif. give the number of models each tool verified; columns labelled Fast. give the number of models for which that tool was the fastest. N/A (not applicable) indicates that the tool does not support the corresponding pattern.
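The property patterns in the table above correspond to standard temporal-logic templates. The formulations below are the conventional ones for these pattern names (the paper's exact encodings may differ):

```latex
\begin{align*}
\text{Eventually: } & F\,p & \text{Always: } & G\,p \\
\text{Never: } & G\,\neg p & \text{Next: } & X\,p \\
\text{Until: } & p\;U\;q & \text{Weak until: } & p\;W\;q \\
\text{Release: } & p\;R\;q & \text{Follows: } & G\,(p \rightarrow F\,q) \\
\text{Precedes: } & \neg q\;W\;p & \text{Infinitely often: } & G\,F\,p \\
\text{Steady state: } & F\,G\,p &&
\end{align*}
```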
Fig. 2. Fastest SMC tools verifying each model against each property pattern. The X-axis represents model size on a logarithmic scale; the Y-axis shows the property patterns. For each model a one-unit vertical line is drawn against each pattern; the line's colour shows the fastest SMC tool.
Fig. 3. Performance comparison. For each property pattern, each tool's performance is compared against the best performance. Here, X-axes represent the model size (species × reactions) on a logarithmic scale (log2), Y-axes show the relative performance of each SMC tool in comparison with the fastest one, and Z-axes show (log10 scale) the consumed time in nanoseconds
Fig. 4. Predictive accuracies: accuracies (S1) for fastest-SMC prediction with different algorithms
Accuracy values using first score (S1)
| Patterns | SVM | ERT | RF | LR | KNN | SD | RD |  |
|---|---|---|---|---|---|---|---|---|
| Eventually | 92.4% | 92.2% | 91.6% | 92.0% | 88.8% | 47.7% | 12.4% | 3.4e-09 |
| Always | 88.9% | 90.5% | 90.1% | 85.9% | 84.7% | 53.4% | 16.3% | 8.1e-09 |
| Follows | 95.0% | 93.6% | 93.9% | 92.4% | 92.6% | 70.5% | 29.1% | 3.5e-08 |
| Precedes | 95.4% | 97.2% | 97.0% | 93.5% | 94.4% | 63.3% | 27.4% | 1.1e-09 |
| Never | 88.5% | 91.0% | 89.8% | 85.6% | 85.2% | 48.8% | 19.1% | 3.5e-09 |
| Steady state | 94.2% | 93.2% | 92.6% | 93.0% | 92.7% | 70.7% | 29.6% | 1.1e-07 |
| Until | 91.0% | 92.8% | 92.2% | 87.8% | 88.0% | 54.7% | 28.7% | 4.5e-08 |
| Infinitely often | 91.6% | 95.0% | 94.7% | 95.0% | 93.6% | 81.5% | 61.2% | 7.4e-09 |
| Next | 94.3% | 93.5% | 92.9% | 92.6% | 93.5% | 72.9% | 36.9% | 1.9e-07 |
| Release | 94.2% | 93.8% | 93.1% | 89.8% | 91.3% | 58.8% | 28.4% | 1.9e-08 |
| Weak until | 90.8% | 92.3% | 91.7% | 88.8% | 87.1% | 57.4% | 27.3% | 2.4e-08 |
|  | 8.0e-08 | 2.9e-04 | 1.1e-05 | 1.7e-09 | 2.1e-09 | 2.4e-14 | 4.6e-13 |  |
Predictive accuracy with different score settings
| Classifier | Score | Eventually | Always | Follows | Precedes | Never | Steady state | Until | Infinitely often | Next | Release | Weak until |  |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SVM | S2 | 94.1% | 91.1% | 95.7% | 96.6% | 89.8% | 95.3% | 91.9% | 92.3% | 95.4% | 95.4% | 92.6% | 1.9e-07 |
| SVM | S3 | 98.7% | 96.4% | 99.1% | 98.1% | 94.4% | 99.3% | 95.1% | 100.0% | 97.2% | 97.9% | 96.7% | 2.8e-10 |
| ERT | S2 | 93.7% | 92.2% | 94.8% | 98.1% | 92.7% | 94.1% | 94.3% | 95.9% | 94.7% | 95.3% | 93.9% | 5.6e-04 |
| ERT | S3 | 98.4% | 96.9% | 98.5% | 99.6% | 96.1% | 99.0% | 97.8% | 100.0% | 96.6% | 97.8% | 97.6% | 2.8e-07 |
| RF | S2 | 93.4% | 91.8% | 95.0% | 98.7% | 91.5% | 93.6% | 93.6% | 95.4% | 94.3% | 95.4% | 93.2% | 2.6e-04 |
| RF | S3 | 99.0% | 97.0% | 99.3% | 99.9% | 96.0% | 99.1% | 97.3% | 100.0% | 96.3% | 97.6% | 97.0% | 2.8e-09 |
| LR | S2 | 93.8% | 88.6% | 93.6% | 95.1% | 87.5% | 94.2% | 90.0% | 95.9% | 93.8% | 91.9% | 90.4% | 3.2e-08 |
| LR | S3 | 99.0% | 95.4% | 99.1% | 97.3% | 93.8% | 99.7% | 95.1% | 100.0% | 96.1% | 95.1% | 95.1% | 4.9e-11 |
| KNN | S2 | 90.2% | 87.5% | 93.6% | 96.0% | 87.7% | 93.6% | 90.5% | 94.5% | 94.7% | 92.8% | 89.6% | 2.6e-08 |
| KNN | S3 | 96.7% | 94.5% | 98.1% | 98.7% | 93.2% | 98.5% | 95.4% | 100.0% | 96.0% | 95.1% | 94.4% | 1.3e-09 |
| SD | S2 | 49.3% | 55.7% | 70.5% | 64.9% | 52.3% | 71.4% | 56.9% | 82.4% | 78.1% | 60.7% | 59.4% | 3.3e-14 |
| SD | S3 | 71.7% | 71.9% | 90.5% | 75.0% | 72.4% | 90.4% | 70.8% | 100.0% | 86.8% | 72.4% | 74.8% | 9.9e-13 |
| RD | S2 | 13.6% | 17.3% | 30.5% | 31.8% | 19.7% | 30.7% | 30.9% | 61.5% | 38.4% | 31.7% | 28.9% | 1.4e-12 |
| RD | S3 | 32.9% | 35.0% | 65.0% | 49.5% | 39.4% | 64.3% | 48.0% | 99.3% | 56.0% | 47.6% | 47.9% | 8.6e-15 |
|  | S2 | 3.3e-09 | 2.8e-08 | 3.5e-08 | 2.4e-09 | 3.1e-09 | 8.3e-08 | 1.1e-08 | 5.0e-09 | 1.8e-07 | 2.1e-08 | 7.0e-09 |  |
|  | S3 | 2.2e-09 | 5.3e-08 | 1.6e-08 | 5.4e-10 | 2.4e-08 | 2.6e-08 | 7.3e-09 | 5.2e-04 | 2.4e-07 | 2.1e-08 | 1.1e-08 |  |
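The score settings S1-S3 progressively relax what counts as a correct prediction, which is why accuracies rise from the S1 table to the S2 and S3 rows. A hedged sketch of one such relaxed scoring scheme, assuming the relaxation accepts any predicted tool whose verification time is within a given factor of the fastest tool's time (the paper's exact S2/S3 definitions may differ); the example times echo values from the model-checking-time table:

```python
# Illustrative relaxed-accuracy scoring (the factor-based relaxation here is
# an assumption, not necessarily the paper's S2/S3 definition).
def score(predicted_times, best_times, factor=1.0):
    """Fraction of cases where the predicted tool's time is within
    `factor` times the best tool's time (factor=1.0 ~ exact-fastest)."""
    hits = sum(1 for p, b in zip(predicted_times, best_times) if p <= factor * b)
    return hits / len(best_times)

best = [0.09, 0.25, 1.49, 2.92]       # fastest tool's time per case (s)
pred = [0.09, 0.25, 2.14, 5.01]       # predicted tool's time per case (s)
print(score(pred, best, factor=1.0))  # strict scoring: 0.5
print(score(pred, best, factor=2.0))  # relaxed scoring: 1.0
```

Under the strict score only the exact-fastest predictions count; under the relaxed score a near-miss (e.g. 2.14 s against a best of 1.49 s) still counts as a hit.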
Fig. 5. Total time consumed for verifying all models
Fig. 6. The mean performance loss when the best classifiers predict incorrectly
Best, worst and predicted model checking time (seconds) for various models and patterns
| Model | Always Best | Always Worst | Always Predicted | Eventually Best | Eventually Worst | Eventually Predicted | Until Best | Until Worst | Until Predicted |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.09 | >3600 | 0.09 | 0.25 | >3600 | 0.25 | 0.08 | >3600 | 0.08 |
| 2 | 0.08 | >3600 | 0.08 | 0.26 | >3600 | 0.26 | 0.08 | >3600 | 0.08 |
| 3 | 0.07 | >3600 | 0.07 | 0.14 | >3600 | 0.14 | 0.08 | >3600 | 0.08 |
| 4 | 0.63 | >3600 | 0.63 | 1.26 | >3600 | 1.26 | 0.34 | >3600 | 0.34 |
| 5 | 0.67 | >3600 | 0.67 | 1.30 | >3600 | 1.30 | 0.34 | >3600 | 0.34 |
| 6 | 0.31 | >3600 | 0.31 | 0.58 | >3600 | 0.58 | 0.33 | >3600 | 0.33 |
| 7 | 1.49 | >3600 | 1.49 | 8.63 | >3600 | 8.63 | 1.49 | >3600 | 2.14 |
| 8 | 1.51 | >3600 | 1.51 | 6.01 | >3600 | 6.01 | 2.92 | >3600 | 5.01 |
| 9 | 1.54 | >3600 | 1.54 | 4.29 | >3600 | 4.29 | 1.50 | >3600 | 1.50 |
| 10 | 121.00 | >3600 | 121.00 | 195.00 | >3600 | 195.00 | 3.23 | >3600 | 3.23 |
| 11 | 118.00 | >3600 | 118.00 | 201.00 | >3600 | 201.00 | 6.24 | >3600 | 6.24 |
| 12 | 64.06 | >3600 | 64.06 | 385.00 | >3600 | 385.00 | 3.37 | >3600 | 3.37 |
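The performance loss of Fig. 6 can be illustrated directly from the table above: in the Until column, only models 7 and 8 have a predicted time that differs from the best (2.14 s vs. 1.49 s and 5.01 s vs. 2.92 s). A minimal computation of the mean slowdown on those mispredicted cases:

```python
# Mean slowdown factor on mispredicted cases, taking the (Best, Predicted)
# pairs from the Until column where the predicted tool was not the fastest.
mispredicted = [(1.49, 2.14), (2.92, 5.01)]  # (best, predicted) seconds
mean_loss = sum(pred / best for best, pred in mispredicted) / len(mispredicted)
print(round(mean_loss, 2))  # ~1.58x slower than the best tool on average
```

Even when the classifier is wrong, the predicted tool here remains within a small factor of the best, far from the >3600 s worst case.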