Learning to identify while failing to discriminate

Published

Conference Paper

© 2017 IEEE. Privacy and fairness are critical in computer vision applications, in particular when dealing with human identification. Achieving a universally secure, private, and fair system is practically impossible, since the exploitation of additional data can reveal private information in the original data. Faced with this challenge, we propose a new line of research, where privacy is learned and used in a closed environment. The goal is to ensure that a given entity, trusted to infer certain information from our data, is blocked from inferring protected information from it. We design a system that learns to succeed on the positive task while simultaneously failing at the negative one, and illustrate this with challenging cases where the positive task (face verification) is harder than the negative one (gender classification). The framework opens the door to privacy and fairness in important closed scenarios, ranging from private data-accumulation companies to law enforcement and hospitals.
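The core idea of the abstract — optimizing shared features so the positive task succeeds while the negative (protected) task fails — can be sketched as a combined objective that descends the positive-task loss and ascends the negative-task loss. The sketch below is a toy illustration of that objective, not the paper's actual architecture; the scalar feature `w`, its two quadratic losses, and the weight `lam` are all hypothetical choices for demonstration.

```python
def adversarial_objective(pos_loss, neg_loss, lam=1.0):
    """Combined objective in the spirit of the paper: minimize the
    positive-task loss (e.g. face verification) while maximizing the
    negative-task loss (e.g. gender classification), so the learned
    representation carries little protected information."""
    return pos_loss - lam * neg_loss

# Toy, hypothetical setup: a shared scalar feature weight w, where the
# positive task is easiest at w = 2 (loss (w - 2)^2) and the protected
# task is easiest to predict at w = 0 (loss w^2, which we want LARGE).
def grads(w):
    d_pos = 2.0 * (w - 2.0)  # d/dw of (w - 2)^2
    d_neg = 2.0 * w          # d/dw of w^2
    return d_pos, d_neg

w, lam, lr = 0.0, 0.5, 0.1
for _ in range(200):
    d_pos, d_neg = grads(w)
    # Descend the positive loss, ascend the negative loss
    # (gradient-reversal-style update on the shared feature).
    w -= lr * (d_pos - lam * d_neg)

# w settles at 4.0: the features overshoot past the positive-task
# optimum (w = 2), trading some positive-task performance to push the
# protected task further from its best operating point (w = 0).
```

The fixed point solves `2(w - 2) - lam * 2w = 0`, i.e. `w = 4` for `lam = 0.5`, illustrating the trade-off the abstract describes: the system deliberately sacrifices a little accuracy on the positive task to degrade the negative one.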

Cited Authors

  • Sokolić, J; Qiu, Q; Rodrigues, MRD; Sapiro, G

Published Date

  • January 19, 2018

Published In

  • Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017

Volume / Issue

  • 2018-January

Start / End Page

  • 2537 - 2544

International Standard Book Number 13 (ISBN-13)

  • 9781538610343

Digital Object Identifier (DOI)

  • 10.1109/ICCVW.2017.298

Citation Source

  • Scopus