Learning to identify while failing to discriminate
Privacy and fairness are critical in computer vision applications, in particular when dealing with human identification. Achieving a universally secure, private, and fair system is practically impossible, as the exploitation of additional data can reveal private information in the original data. Faced with this challenge, we propose a new line of research, where privacy is learned and used in a closed environment. The goal is to ensure that a given entity, trusted to infer certain information from our data, is blocked from inferring protected information from it. We design a system that learns to succeed on the positive task while simultaneously failing at the negative one, and illustrate this with challenging cases where the positive task (face verification) is harder than the negative one (gender classification). The framework opens the door to privacy and fairness in important closed scenarios, ranging from private data-accumulation companies to law enforcement and hospitals.
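The core idea of "succeeding on the positive task while failing at the negative one" can be sketched as adversarial representation learning: a shared encoder descends the positive-task loss while ascending the adversary's loss on the protected attribute. The sketch below is a hypothetical, minimal linear stand-in (synthetic data, logistic heads), not the paper's actual deep face-verification architecture; all variable names and the toy tasks are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: 4-d inputs; the "positive" label depends on
# dims 0-1, the protected "negative" label on dims 2-3, so the encoder
# can in principle keep the former and discard the latter.
rng = np.random.default_rng(0)
n, d, k = 200, 4, 2                      # samples, input dim, embedding dim
X = rng.normal(size=(n, d))
y_pos = (X[:, 0] + X[:, 1] > 0).astype(float)   # task to keep
y_neg = (X[:, 2] + X[:, 3] > 0).astype(float)   # protected attribute

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    eps = 1e-9                            # numerical safety
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

W = rng.normal(scale=0.1, size=(d, k))    # shared encoder
w_p = rng.normal(scale=0.1, size=k)       # positive-task head
w_n = rng.normal(scale=0.1, size=k)       # adversary (negative-task) head
lam, lr = 0.5, 0.2                        # adversarial weight, learning rate

loss_pos_start = bce(sigmoid((X @ W) @ w_p), y_pos)

for _ in range(1000):
    Z = X @ W                             # shared embedding
    p_p = sigmoid(Z @ w_p)
    p_n = sigmoid(Z @ w_n)
    # Encoder gradient: descend the positive loss, ASCEND the adversary
    # loss (the sign flip on the lam term), so the embedding hides the
    # protected attribute.
    dZ = (np.outer(p_p - y_pos, w_p) - lam * np.outer(p_n - y_neg, w_n)) / n
    # Each head descends its own loss (standard logistic-regression grads).
    w_p -= lr * Z.T @ (p_p - y_pos) / n
    w_n -= lr * Z.T @ (p_n - y_neg) / n
    W -= lr * (X.T @ dZ)

loss_pos_end = bce(sigmoid((X @ W) @ w_p), y_pos)
loss_neg_end = bce(sigmoid((X @ W) @ w_n), y_neg)
```

After training, the positive-task loss drops well below its starting value, while the adversary, trained on the same embedding, is held back by the encoder's reversed gradient; this alternating minimax update is the same mechanism used at scale with deep encoders.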
Sokolić, J; Qiu, Q; Rodrigues, MRD; Sapiro, G
Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW 2017)