Zahid Maqbool, Palvi Aggarwal, V. S. Chandrasekhar Pammi, Varun Dutt.
Abstract
Cyber-attacks are deliberate attempts by adversaries to illegally access the online information of other individuals or organizations. Such attacks are likely to have severe monetary consequences for organizations and their workers. However, little is currently known about how the monetary consequences of cyber-attacks may influence the decision-making of defenders and adversaries. In this research, using a cyber-security game, we evaluate the influence of monetary penalties on decisions made by people performing in the roles of human defenders and adversaries via experimentation and computational modeling. In a laboratory experiment, participants were randomly assigned to the role of "hackers" (adversaries) or "analysts" (defenders) across three between-subjects conditions: equal payoffs (EQP), penalizing defenders for false alarms (PDF), and penalizing defenders for misses (PDM). The PDF and PDM conditions were 10 times costlier for defender participants than the EQP condition, which served as a baseline. Results revealed an increase (decrease) and decrease (increase) in attack (defend) actions in the PDF and PDM conditions, respectively. Also, both attack and defend decisions deviated from Nash equilibria. To understand the reasons for these results, we calibrated a model based on Instance-Based Learning Theory (IBLT) to the attack and defend decisions collected in the experiment. The model's parameters revealed an excessive reliance on recency, frequency, and variability mechanisms by both defenders and adversaries. We discuss the implications of our results for cyber-attack situations in which defenders are penalized for their misses and false alarms.
Keywords: adversaries; cybersecurity; decision-making; defenders; frequency; instance-based learning theory; monetary penalties; recency
Year: 2020 PMID: 32063872 PMCID: PMC6999552 DOI: 10.3389/fpsyg.2020.00011
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
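The recency and frequency mechanisms that the abstract attributes to IBLT can be sketched as follows. This is a minimal illustration assuming the standard IBLT activation and blending equations; the decay `d`, noise `sigma`, and temperature `tau` values below are illustrative defaults, not the parameters fitted in the paper.

```python
import math
import random

def activation(retrieval_times, t, d=0.5, sigma=0.25, rng=random.random):
    """ACT-R-style base-level activation of one instance.

    Recency and frequency both enter through the power-law sum over past
    retrievals; sigma adds logistic noise. Assumes every retrieval time
    is strictly earlier than the current trial t.
    """
    base = math.log(sum((t - ti) ** (-d) for ti in retrieval_times))
    if sigma == 0:
        return base  # deterministic activation, useful for testing
    u = rng()
    return base + sigma * math.log((1 - u) / u)

def blended_value(instances, t, d=0.5, sigma=0.25, tau=0.1):
    """Blend instance outcomes by retrieval probability.

    instances: list of (retrieval_times, outcome) pairs.
    Retrieval probabilities come from a Boltzmann softmax over
    activations with temperature tau.
    """
    acts = [activation(times, t, d, sigma) for times, _ in instances]
    weights = [math.exp(a / tau) for a in acts]
    total = sum(weights)
    return sum(w / total * outcome for w, (_, outcome) in zip(weights, instances))
```

Higher `d` discounts old observations faster (stronger recency); more retrievals of an instance raise its activation (frequency), which is the over-reliance the calibrated parameters point to.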
FIGURE 1 Set of actions and costs for adversaries and defenders. The first value in each cell corresponds to the adversary's cost [e.g., A(a, d)] and the second value corresponds to the defender's cost [e.g., D(a, d)]. A negative cost is a benefit.
FIGURE 2 Payoffs for both defenders and adversaries across different conditions. (A) Penalizing Defenders for False-alarms (PDF). (B) Penalizing Defenders for Misses (PDM). The Nash proportions of attack (p) and defend (q) actions are also shown.
FIGURE 3 The experimental interface presented to participants in the cyber-security game. The interface provided feedback on the actions taken and payoffs obtained in the last trial by both players, and showed the cost matrices to participants. Panels (A) and (B) show the interface seen in a trial by participants acting as hackers and analysts, respectively.
FIGURE 4 Attack and defend proportions across different conditions from human participants, the ACT-R model, and the IBL model. (A) Comparison between model and human adversaries. (B) Comparison between model and human defenders. The black line on each bar shows the corresponding Nash proportion and the error bars represent the 95% confidence interval.
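The Nash proportions shown in the figures follow from the indifference principle for a 2 × 2 game: each player mixes so that the opponent's two actions yield equal expected payoff. A minimal sketch of that computation (the payoff matrices in the test are the matching-pennies placeholders, not the experiment's actual cost matrices from Figure 1):

```python
def mixed_nash(A, D):
    """Mixed-strategy Nash proportions for a 2x2 game.

    A[a][d], D[a][d]: adversary and defender payoffs for action pair
    (a: 0 = attack, 1 = not-attack; d: 0 = defend, 1 = not-defend).
    Returns (p, q) = (P(attack), P(defend)). Assumes the game has a
    fully mixed equilibrium, so both denominators are nonzero.
    """
    # p makes the defender indifferent between defend and not-defend:
    # p*D[0][0] + (1-p)*D[1][0] == p*D[0][1] + (1-p)*D[1][1]
    p = (D[1][1] - D[1][0]) / (D[0][0] - D[0][1] - D[1][0] + D[1][1])
    # q makes the adversary indifferent between attack and not-attack:
    # q*A[0][0] + (1-q)*A[0][1] == q*A[1][0] + (1-q)*A[1][1]
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q
```

Shifting defender costs (as in PDF and PDM) changes the equilibrium proportions only through the opponent's indifference condition, which is why penalizing the defender moves the predicted attack proportion p.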
Parameters and RMSDs from the models across the EQP, PDF, and PDM conditions.
| Condition | Model | dA | dD | σA | σD | RMSDA | RMSDD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| EQP | Calibrated-IBL | 27.67 | 27.67 | 9.10 | 9.10 | 0.15 | 0.11 |
| EQP | ACT-R | 0.50 | 0.50 | 0.25 | 0.25 | 0.18 | 0.15 |
| PDF | Calibrated-IBL | 28.41 | 28.41 | 13.20 | 13.20 | 0.18 | 0.15 |
| PDF | ACT-R | 0.50 | 0.50 | 0.25 | 0.25 | 0.30 | 0.20 |
| PDM | Calibrated-IBL | 29.57 | 29.57 | 8.43 | 8.43 | 0.13 | 0.12 |
| PDM | ACT-R | 0.50 | 0.50 | 0.25 | 0.25 | 0.16 | 0.31 |
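The RMSDA and RMSDD columns measure the fit between model and human attack/defend proportions. A plain root-mean-squared-deviation sketch of that measure (the paper's exact aggregation over trials or blocks may differ):

```python
import math

def rmsd(model_props, human_props):
    """Root-mean-squared deviation between model-predicted and observed
    human action proportions, taken over trials (or trial blocks)."""
    assert len(model_props) == len(human_props), "series must align"
    n = len(model_props)
    return math.sqrt(sum((m - h) ** 2 for m, h in zip(model_props, human_props)) / n)
```

Lower values indicate a closer fit; by this measure the calibrated IBL model tracks human proportions at least as closely as ACT-R in every condition of the table above.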
FIGURE 5 Average proportion of attack and defend actions from adversaries and defenders across the three conditions: Equal Payoff (EQ), Rewarding Analyst (RA), and Rewarding Hacker (RH). The horizontal bars show the corresponding optimal/Nash proportions. The error bars show the 95% CI around the mean.
Generalization of the IBL model and its parameters to different conditions in Maqbool et al. (2017).
| Calibration condition | Model | EQ RMSDA | EQ RMSDD | RA RMSDA | RA RMSDD | RH RMSDA | RH RMSDD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| EQP | Calibrated IBL | 0.26 | 0.25 | 0.28 | 0.30 | | |
| EQP | ACT-R | 0.22 | 0.29 | 0.25 | 0.12 | 0.18 | 0.41 |
| PDF | Calibrated IBL | 0.48 | 0.23 | 0.24 | 0.54 | | |
| PDF | ACT-R | 0.29 | 0.40 | 0.24 | 0.20 | 0.44 | 0.46 |
| PDM | Calibrated IBL | 0.35 | 0.34 | 0.26 | 0.20 | | |
| PDM | ACT-R | 0.35 | 0.31 | 0.33 | 0.16 | 0.20 | 0.33 |
FIGURE 6 The generalization of the best-performing parameters in the calibrated model to different conditions in Maqbool et al. (2017) for the adversary role (A) and defender role (B).