Can Pupillometry Be Used to Detect Driver Hazard Awareness?
Modern Advanced Driver-Assistance Systems (ADAS) increasingly rely on interactions between the vehicle and the human driver. To inform these interactions, it is helpful for a vehicle system to have a good understanding of the driver's situational awareness. In this work we explore a relatively under-exploited, passively measurable signal that might provide insight into a driver's awareness: the constriction and dilation of the pupils over time, measured via pupillometry. We ask whether pupillometry might be practically useful for detecting if and when a driver becomes aware of a road hazard. Using a dataset of driver responses to both hazardous and routine scenarios during simulated semi-automated driving, we compare models trained on pupillometric data to a model trained on facial responses, and show how their performance differs in terms of accuracy and latency. While a driver's facial expressions are, as expected, a useful cue for determining awareness (0.82 AUC on held-out test stimuli), we find that pupillometric data alone provide an even stronger signal (0.93 AUC). In addition, the pupillometric model's performance degrades more gracefully than the face model's when tested on unseen subjects, while fusing the two models yields further accuracy and latency improvements given sufficient training data. We characterize the shape of the performance vs. latency curve for all models and make our code available for reproducibility.
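As a minimal illustration of the kind of evaluation reported above, the sketch below shows how per-model AUC scores might be computed and how a simple late fusion (averaging predicted probabilities) could be compared against the individual models. The function and variable names, the toy data, and the mean-score fusion scheme are all assumptions for illustration, not the paper's actual implementation.

import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, scores_by_model):
    # Report AUC for each model, then for a simple late fusion
    # formed by averaging the models' predicted probabilities.
    for name, scores in scores_by_model.items():
        print(f"{name}: AUC = {roc_auc_score(y_true, scores):.2f}")
    fused = np.mean(np.stack(list(scores_by_model.values())), axis=0)
    print(f"fused: AUC = {roc_auc_score(y_true, fused):.2f}")

# Toy example: random scores stand in for real model outputs.
# Labels are 1 for hazardous stimuli, 0 for routine ones; each score is a
# hypothetical per-trial probability that the driver became hazard-aware.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
scores = {
    "pupil": rng.random(200),
    "face": rng.random(200),
}
evaluate(y_true, scores)

With real model outputs in place of the random scores, the same routine would reproduce the style of comparison described in the abstract (pupil-only vs. face-only vs. fused AUC); averaging probabilities is only one of several plausible fusion strategies.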