Stefan Rass, Sandra König, Stefan Schauer.
Abstract
Advanced persistent threats (APTs) combine a variety of different attack forms, ranging from social engineering to technical exploits. The diversity and usual stealthiness of APTs turn them into a central problem of contemporary practical system security, since information on attacks, the current system status or the attacker's incentives is often vague, uncertain and in many cases even unavailable. Game theory is a natural approach to model the conflict between the attacker and the defender, and this work investigates a generalized class of matrix games as a risk mitigation tool for APT defense. Unlike standard game and decision theory, our model is tailored to capture and handle the full uncertainty that is immanent to APTs, such as disagreement among qualitative expert risk assessments, unknown adversarial incentives and uncertainty about the current system state (in terms of how deeply the attacker may already have penetrated the system's protective shells). Practically, game-theoretic APT models can be derived straightforwardly from topological vulnerability analysis, together with risk assessments as they are done in common risk management standards like the ISO 31000 family. Theoretically, these models have different properties than classical game-theoretic models, and the technical solution presented in this work may be of independent interest.
Year: 2017 PMID: 28045922 PMCID: PMC5207710 DOI: 10.1371/journal.pone.0168675
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1. Infrastructure from [38] to illustrate game-theoretic APT modeling.
Fig 2. Example attack graph [38].
Security controls (selection).
| Countermeasure | Comment |
|---|---|
| deactivation of services (FTP, RSH, SSH) | these may not be permanently disabled, but could be temporarily turned off or be requested on demand (provided that either is feasible in the organizational structure and its workflows) |
| software patches | this may catch known vulnerabilities (but not necessarily all of them), but can be done only if a patch is currently available |
| reinstalling entire machines | this wipes out unknown malware but comes at the cost of a temporary outage of a machine (thus, causing potential trouble with the overall system services) |
| organizational precautions | for example, repeated security trainings for employees. These may have only a temporary effect: awareness is raised during the training but decays over time, so the training must be repeated to achieve a lasting effect. |
Example assessment of a security precaution.
| Aspect | Expert’s assessment |
|---|---|
| applicability | not always available |
| effectiveness | low or high (depending on the exploit) |
| cost | low to medium (e.g., if the system needs to be rebooted) |
Fig 3. Agreeing vs. disagreeing expert ratings.
Fig 4. Comparing different preference rules.
APT scenarios (adversary’s action set AS2, based on Fig 2): eight scenarios, numbered 1–8 (scenario descriptions not preserved in this extraction).
Fig 5. Example of ⪯-choosing between two empirical distributions (inconsistent expert opinions).
Correspondence of Attack Trees/Graphs and Extensive Form Games.
| Extensive form game | Attack tree/graph |
|---|---|
| start of the game | root of the tree/graph |
| stage of the gameplay | node in the tree/graph |
| allowed moves at each stage (for the adversary) | possible exploits at each node |
| end of the game | leaf node (attack target) |
| strategies | paths from the root to the leaf (= attack vectors) |
| information sets | uncertainty in the attacker’s current position and move |
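The last row of this correspondence (strategies = root-to-leaf paths) can be made concrete with a small sketch. The graph below is a made-up toy, not the infrastructure of Fig 1 or the attack graph of Fig 2; node names and the function are illustrative assumptions only:

```python
def attack_vectors(graph, root, targets, path=None):
    """Enumerate all root-to-target paths (= attacker pure strategies)
    in an attack graph given as an adjacency dictionary."""
    path = (path or []) + [root]
    if root in targets:
        return [path]
    vectors = []
    for nxt in graph.get(root, []):
        if nxt not in path:  # skip nodes already on the path (avoid cycles)
            vectors.extend(attack_vectors(graph, nxt, targets, path))
    return vectors

# toy attack graph: an edge points from a compromised node to the next
# node reachable via some exploit
graph = {
    "internet": ["webserver", "mailserver"],
    "webserver": ["fileserver"],
    "mailserver": ["fileserver", "workstation"],
    "fileserver": ["database"],
    "workstation": ["database"],
}
strategies = attack_vectors(graph, "internet", {"database"})
```

Each returned path is one attack vector, i.e., one pure strategy of the adversary in the matrix game.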
Possible mapping of graph distance to risk categories.
| Distance | Risk |
|---|---|
| 7…8 | low |
| 3…6 | medium |
| 0…2 | high |
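The mapping in this table is a simple threshold rule; a minimal sketch (function name chosen here for illustration, cut-offs exactly as in the table):

```python
def risk_category(distance):
    """Map the attacker's remaining graph distance to the target asset
    to a qualitative risk level, per the table: 0-2 high, 3-6 medium,
    7-8 low (the table only covers distances up to 8)."""
    if distance <= 2:
        return "high"
    if distance <= 6:
        return "medium"
    return "low"
```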
Fig 6. Loss assessment of counteraction vs. threat.
Fig 7. Specification of an APT game (example workflow snapshot).
Benefits of Distribution-Valued Game-Modeling over Classical Game-Modeling.
| Issue | Classical game-theoretic modelling | How this is handled in distribution-valued games |
|---|---|---|
| Payoff uncertainty | Either switching to special forms of equilibria (disturbed, trembling hands, etc.) or agreeing on a simultaneously representative value for all possible outcomes (“consolidation of different opinions”) | No consolidation or representation needed; we can simply work with the (normalized) histogram of all possible outcomes (or opinions on what could happen) |
| Non-realizable strategy | Separating out cases where a strategy can be played or not. This would amount to specifying two versions of the strategy (one that is successful and one that fails) | Since actions can by construction have many different outcomes, success and failure are just two realizations of the corresponding loss RV |
| Imperfect information | Working with hypotheses on expected moves in stages of the game where no precise information is available. The hypotheses can be learnt from past history and are taken into account when defining the optimal behavior (e.g., Bayesian perfect equilibrium) | Directly incorporated in the uncertainty of the outcome, since an unknown move corresponds to a perceived random payoff; thus, there is no intrinsic conceptual difference here |
| Random changes in the gameplay (stochastic games) | Resorting to special forms of equilibria, such as disturbed or trembling-hands equilibria, or to stochastic games | As long as the outcome remains identically (stationarily) distributed across several rounds of the gameplay, no specific treatment is required upon random changes in the gameplay. The known theory of Markov chains can be used here to analyze the gameplay for stationarity. |
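The "no consolidation needed" point of the first row can be made concrete: each strategy profile carries a normalized histogram of expert opinions, and histograms are compared directly rather than collapsed into one number. The sketch below uses a simplified tail-first comparison (prefer the histogram with less mass on the worst loss category); the paper's ⪯-order is defined differently, so treat this as an illustration only:

```python
def prefers(hist_a, hist_b):
    """Return True if loss histogram A is preferred over B.
    Simplified tail comparison: walking from the worst loss category
    downward, prefer the histogram with less probability mass there.
    (Illustrative only; not the paper's exact construction.)"""
    for a, b in zip(reversed(hist_a), reversed(hist_b)):
        if a != b:
            return a < b
    return False  # identical distributions: neither strictly preferred

# normalized histograms of expert opinions over [low, medium, high] loss
opinion_1 = [0.5, 0.3, 0.2]
opinion_2 = [0.2, 0.5, 0.3]
```

Here `opinion_1` is preferred: it puts less mass (0.2 vs. 0.3) on the high-loss category, and the disagreement among experts never has to be averaged away.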
Fig 8. Applying fictitious play.
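Fictitious play, as used for Fig 8, can be illustrated in its classical scalar-payoff form: each player repeatedly best-responds to the opponent's empirical mixed strategy, and the empirical frequencies converge to an equilibrium of the zero-sum matrix game. The paper generalizes this iteration to distribution-valued payoffs; the sketch below shows only the scalar baseline, with made-up variable names:

```python
def fictitious_play(A, rounds=2000):
    """Classical fictitious play on a zero-sum matrix game with loss
    matrix A (row player = defender, minimizing; column player =
    attacker, maximizing). Returns the empirical mixed strategies."""
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    i, j = 0, 0  # arbitrary initial pure strategies
    for _ in range(rounds):
        row_counts[i] += 1
        col_counts[j] += 1
        # defender: minimize cumulative loss against the attacker's history
        i = min(range(m),
                key=lambda r: sum(A[r][c] * col_counts[c] for c in range(n)))
        # attacker: maximize the defender's cumulative loss
        j = max(range(n),
                key=lambda c: sum(A[r][c] * row_counts[r] for r in range(m)))
    return ([r / rounds for r in row_counts],
            [c / rounds for c in col_counts])

# matching pennies: the unique equilibrium is (1/2, 1/2) for both players
x, y = fictitious_play([[1, -1], [-1, 1]])
```

Convergence of the frequencies is slow (roughly O(1/sqrt(t)) here), which is why the generalized variant in the paper likewise runs the iteration for many rounds.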
Fig 9. Equilibrium loss distribution for the example APT mitigation game.
Fig 10. Optimal tradeoffs (simple case).
Selected strategies for the example (attack strategies vs. defense actions; table entries not preserved in this extraction).
Example expert assessments (L = low, M = medium, H = high).
| Scenario ↓ / Expert → | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| | L | L | M | M | M | H |
| | H | H | H | M | | |
| | H | H | M | L | H | |
| | M | L | L | M | M | |
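Each row of such an assessment table becomes a distribution-valued payoff by counting the qualitative ratings into a normalized histogram; disagreeing or missing assessments are kept as-is rather than consolidated. A minimal sketch (function name is an illustrative assumption):

```python
from collections import Counter

def opinion_histogram(ratings, categories=("L", "M", "H")):
    """Turn a list of qualitative expert ratings (gaps allowed) into a
    normalized histogram over the risk categories, to be used as a
    distribution-valued payoff. Missing assessments are simply ignored,
    not imputed or averaged away."""
    counts = Counter(r for r in ratings if r in categories)
    total = sum(counts.values())
    return [counts[c] / total for c in categories]

# first scenario row of the table above: six experts rated L L M M M H
hist = opinion_histogram(["L", "L", "M", "M", "M", "H"])
```

The resulting histogram `[2/6, 3/6, 1/6]` is exactly the payoff object the distribution-valued game works with, preserving the spread of opinions that Fig 3 contrasts.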
Fig 11. R-plot of our example APT matrix game.