A BEHAVIORAL APPROACH TO REPEATED BAYESIAN SECURITY GAMES
The prevalence of security threats to organizational defense demands models that support real-world policymaking. Security games are a potent tool in this regard; however, although canonical models effectively allocate limited resources, they generally do not consider adaptive, boundedly rational adversaries. Empirical findings suggest this characterization describes real-world human behavior, so the development of decision-support frameworks against such adversaries is a critical need. We examine a family of policies applicable to repeated games in which a boundedly rational adversary is modeled using a behavioral-economic theory of learning, namely experience-weighted attraction learning. These policies account for realistic uncertainty about the competition by adopting the perspective of adversarial risk analysis. Using Bayesian reasoning, these repeated games are decomposed into multi-armed bandit problems. A collection of cost-function approximation policies is given to solve these problems. The efficacy of our approach is shown via extensive computational testing on a defense-related case study.
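The adversary model named above, experience-weighted attraction (EWA) learning, maintains an attraction value for each action that blends reinforcement of played actions with belief-style updating on forgone payoffs, and maps attractions to choice probabilities via a logit response. The sketch below illustrates the standard Camerer–Ho formulation only; the function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
import math

def ewa_update(attractions, n_prev, chosen, payoffs,
               phi=0.9, delta=0.5, rho=0.9):
    """One round of experience-weighted attraction (EWA) learning.

    attractions: current attraction A_j for each action j
    n_prev: experience weight N(t-1)
    chosen: index of the action actually played this round
    payoffs: payoff each action j would have earned this round
    phi: decay on past attractions; delta: weight on forgone payoffs;
    rho: decay on the experience weight. Values here are illustrative.
    """
    n_new = rho * n_prev + 1.0
    updated = []
    for j, (a, pi) in enumerate(zip(attractions, payoffs)):
        # Played actions are reinforced fully; forgone actions by delta.
        reinforce = (delta + (1.0 - delta) * (j == chosen)) * pi
        updated.append((phi * n_prev * a + reinforce) / n_new)
    return updated, n_new

def logit_choice_probs(attractions, lam=1.0):
    """Logit (softmax) response: P_j proportional to exp(lam * A_j)."""
    weights = [math.exp(lam * a) for a in attractions]
    total = sum(weights)
    return [w / total for w in weights]
```

For example, after one round in which action 0 was played and earned more than action 1 would have, the updated attractions favor action 0, so the logit response places more probability on it in the next round.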
Related Subject Headings
- Statistics & Probability
- 4905 Statistics
- 1403 Econometrics
- 0104 Statistics