SalGaze: Personalizing gaze estimation using visual saliency

Published

Conference Paper

© 2019 IEEE. Traditional gaze estimation methods typically require explicit user calibration to achieve high accuracy. This process is cumbersome, and recalibration is often required when factors such as illumination or pose change. To address this challenge, we introduce SalGaze, a framework that exploits saliency information in the visual content to transparently adapt a gaze estimation algorithm to the user without explicit calibration. We design an algorithm that transforms a saliency map into a differentiable loss map suitable for optimizing CNN-based models. SalGaze can also greatly augment standard point calibration data with implicit video saliency calibration data within a unified framework. We show accuracy improvements of over 24% when applying our technique to existing methods.
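The abstract does not specify how the saliency-to-loss transform is implemented; the sketch below is only a plausible illustration of the general idea, not the paper's method. It assumes the loss is the negative log of the saliency value at the predicted gaze location, read off with bilinear interpolation so that the value varies smoothly with a sub-pixel gaze estimate (in a deep learning framework this would make the loss differentiable with respect to the prediction). The function name `saliency_to_loss` and all details are hypothetical.

```python
import numpy as np

def saliency_to_loss(saliency, gaze_xy, eps=1e-6):
    """Hypothetical sketch: penalize gaze predictions that land on
    low-saliency regions. Bilinear interpolation makes the read-out
    smooth in the (x, y) prediction; eps avoids log(0)."""
    h, w = saliency.shape
    x, y = gaze_xy
    # Clamp the prediction to the valid interpolation range.
    x = min(max(x, 0.0), w - 1.0)
    y = min(max(y, 0.0), h - 1.0)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    # Bilinearly interpolated saliency value at the predicted point.
    s = (saliency[y0, x0] * (1 - dx) * (1 - dy)
         + saliency[y0, x1] * dx * (1 - dy)
         + saliency[y1, x0] * (1 - dx) * dy
         + saliency[y1, x1] * dx * dy)
    return -np.log(s + eps)
```

Under this formulation, a prediction falling on a highly salient pixel incurs a near-zero loss, while one falling on a non-salient pixel incurs a large loss, which is the qualitative behavior a saliency-derived loss map would need.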

Cited Authors

  • Chang, Z; Di Martino, JM; Qiu, Q; Espinosa, S; Sapiro, G

Published Date

  • October 1, 2019

Published In

  • Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019

Start / End Page

  • 1169 - 1178

Digital Object Identifier (DOI)

  • 10.1109/ICCVW.2019.00148

Citation Source

  • Scopus