Capacity bounded differential privacy
Differential privacy has emerged as the gold standard for measuring the risk that an algorithm's output poses to the privacy of a single individual in a dataset. It is defined as the worst-case distance between the output distributions of an algorithm run on two inputs that differ in a single person's data. In this work, we present a novel relaxation of differential privacy, capacity bounded differential privacy, in which the adversary distinguishing the output distributions is assumed to be capacity bounded: limited not in computational power, but in the function class from which its attack algorithm is drawn. We model adversaries of this form using restricted f-divergences between probability distributions, and we study the properties of the resulting definitions and of algorithms that satisfy them. Our results show that these definitions retain a number of the desirable properties enjoyed by differential privacy and some of its existing relaxations; moreover, common mechanisms such as the Laplace and Gaussian mechanisms achieve better privacy guarantees for the same amount of added noise under these definitions.
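For context, the standard guarantee can be written down directly, and the capacity-bounded variant sketched below is a reconstruction consistent with the abstract's description: it uses the well-known variational form of an f-divergence with the test functions restricted to a class \(\mathcal{H}\). The exact formulation in the paper may differ in details.

```latex
% Standard \varepsilon-differential privacy: for all neighboring datasets
% D, D' and all measurable output sets S,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S].
\]
% Restricted f-divergence over a function class H (variational form);
% hedged sketch: the adversary's test functions h range over H rather than
% over all measurable functions, and f^* denotes the convex conjugate of f.
\[
  D_{f}^{\mathcal{H}}(P \,\|\, Q)
  \;=\; \sup_{h \in \mathcal{H}}\;
  \mathbb{E}_{x \sim P}\big[h(x)\big]
  \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}(h(x))\big].
\]
% Capacity bounded differential privacy then asks that this restricted
% divergence between M(D) and M(D') remain small for all neighbors D, D'.
```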
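Since the abstract highlights the Laplace mechanism, here is a minimal sketch of the standard mechanism itself (not the paper's improved capacity-bounded analysis); the function name and parameters are illustrative:

```python
import numpy as np

def laplace_mechanism(query_value, sensitivity, epsilon, rng=None):
    """Release query_value with Laplace noise calibrated to sensitivity/epsilon.

    Under standard differential privacy this is an epsilon-DP release; the
    paper's claim is that, against a capacity-bounded adversary, the same
    noise level yields a stronger privacy guarantee.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon  # Lap(b) has density exp(-|x|/b) / (2b)
    return query_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count (sensitivity 1) with epsilon = 0.5.
noisy_count = laplace_mechanism(query_value=42.0, sensitivity=1.0, epsilon=0.5)
```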