Fixation Based Object Recognition in Autism Clinic Setting
With the increasing popularity of portable eye tracking devices, one can conveniently use them to find fixation points, i.e., the locations and regions a person is looking at. However, a region of interest alone is not enough to fully support further behavioral and psychological analysis, since it ignores the rich visual information one perceives. Rather than the raw coordinates, we are interested in knowing the visual content one is looking at. In this work, we first collect a video dataset using a wearable eye tracker in an autism screening room setting with 14 commonly used assessment tools. We then propose an improved fixation identification algorithm to select stable and reliable fixation points. In combination with object proposal generation methods, the fixation points are used to localize and select object proposals. Moreover, we propose a cropping generation algorithm to determine the optimal bounding boxes of viewed objects based on the input proposals and fixation points. The resulting cropped images form a dataset for the subsequent object recognition task. We adopt an AlexNet-based convolutional neural network framework for object recognition. Our evaluation metrics include classification accuracy and intersection-over-union (IoU), and the proposed framework achieves $$92.5\%$$ and $$88.3\%$$ recognition accuracy on two different testing sessions, respectively.
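To make the pipeline concrete, below is a minimal Python sketch of two of the building blocks the abstract names: fixation identification and the IoU evaluation metric. This is a hedged illustration, not the paper's improved algorithm; it implements the classic dispersion-threshold (I-DT) fixation identifier, and the gaze-sample format, pixel dispersion threshold, and minimum fixation duration are all assumptions for the example.

```python
# A minimal sketch, assuming (t_seconds, x_px, y_px) gaze samples.
# Implements classic I-DT fixation identification (not the paper's
# improved variant) plus a plain IoU helper for bounding boxes.

from typing import List, Tuple

def idt_fixations(
    gaze: List[Tuple[float, float, float]],  # (t_seconds, x_px, y_px)
    max_dispersion: float = 25.0,            # assumed pixel threshold
    min_duration: float = 0.1,               # assumed 100 ms minimum
) -> List[Tuple[float, float]]:
    """Return fixation centroids (x, y) found by I-DT."""
    fixations = []
    i = 0
    while i < len(gaze):
        # Grow an initial window spanning at least min_duration.
        j = i
        while j < len(gaze) and gaze[j][0] - gaze[i][0] < min_duration:
            j += 1
        if j >= len(gaze):
            break
        xs = [p[1] for p in gaze[i:j + 1]]
        ys = [p[2] for p in gaze[i:j + 1]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # Extend the window while dispersion stays under threshold.
            while j + 1 < len(gaze):
                xs.append(gaze[j + 1][1])
                ys.append(gaze[j + 1][2])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    xs.pop()
                    ys.pop()
                    break
                j += 1
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1  # saccade sample; slide the window forward
    return fixations

def iou(box_a: Tuple[float, float, float, float],
        box_b: Tuple[float, float, float, float]) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

The 25 px / 100 ms defaults are common conventions in the eye tracking literature, not values taken from this paper; in practice they would be tuned to the tracker's sampling rate and the screening-room viewing distance.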