GazeGraph: Graph-based few-shot cognitive context sensing from human visual behavior

Conference Paper

In this work, we present GazeGraph, a system that leverages human gaze as the sensing modality for cognitive context sensing. GazeGraph is a generalized framework that is compatible with different eye trackers and supports various gaze-based sensing applications. It ensures high sensing performance despite the heterogeneity of human visual behavior, and enables quick system adaptation to unseen sensing scenarios with only a few training instances. To achieve these capabilities, we introduce spatial-temporal gaze graphs and a deep learning-based representation learning method to extract powerful, generalized features from eye movements for context sensing. Furthermore, we develop a few-shot gaze graph learning module that adapts the 'learning to learn' concept from meta-learning to enable quick, data-efficient system adaptation. Our evaluation demonstrates that GazeGraph outperforms existing solutions in recognition accuracy by 45% on average across three datasets. Moreover, in few-shot learning scenarios, GazeGraph outperforms a transfer learning-based approach by 19% to 30%, while reducing the system adaptation time by 80%.
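The abstract does not spell out how the spatial-temporal gaze graph is constructed. As a minimal illustrative sketch, assuming gaze data arrives as (x, y, t) fixation triples, such a graph might use fixations as nodes and connect pairs that are close on screen or adjacent in time. All function names, thresholds, and edge rules below are hypothetical, not the paper's actual method.

```python
import numpy as np

def build_gaze_graph(fixations, spatial_thresh=0.1, temporal_window=3):
    """Build a toy spatial-temporal gaze graph from fixations.

    fixations: array of shape (N, 3) with rows (x, y, t) --
    normalized gaze coordinates plus a timestamp. Nodes are
    fixations; an edge links two fixations that are close in
    space (within `spatial_thresh`) or adjacent in time (within
    `temporal_window` steps). These thresholds and edge rules
    are illustrative assumptions, not GazeGraph's published ones.
    """
    n = len(fixations)
    adjacency = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(i + 1, n):
            dx = fixations[i, 0] - fixations[j, 0]
            dy = fixations[i, 1] - fixations[j, 1]
            spatial = np.hypot(dx, dy) <= spatial_thresh
            temporal = (j - i) <= temporal_window
            if spatial or temporal:
                adjacency[i, j] = adjacency[j, i] = 1.0
    node_features = fixations  # e.g., (x, y, t) per node
    return node_features, adjacency

# Example: 50 random fixations over a normalized screen.
rng = np.random.default_rng(0)
fixations = np.column_stack([
    rng.random(50), rng.random(50), np.arange(50, dtype=float)
])
features, adj = build_gaze_graph(fixations)
print(features.shape, adj.shape)  # (50, 3) (50, 50)
```

The node-feature/adjacency pair produced here is the standard input format for a graph neural network, which is one plausible way the deep learning-based representation learning step could consume such a graph.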

Cited Authors

  • Lan, G; Heit, B; Scargill, T; Gorlatova, M

Published Date

  • November 16, 2020

Published In

  • SenSys '20: Proceedings of the 18th ACM Conference on Embedded Networked Sensor Systems

Start / End Page

  • 422 - 435

International Standard Book Number 13 (ISBN-13)

  • 9781450375900

Digital Object Identifier (DOI)

  • 10.1145/3384419.3430774

Citation Source

  • Scopus