Efficiently Learning a Robust Self-Driving Model with Neuron Coverage Aware Adaptive Filter Reuse
Human drivers learn driving skills from both regular (non-accidental) and accidental driving experiences, while most current self-driving research focuses on regular driving only. We argue that learning from accidental driving data is necessary for robustly modeling driving behavior. A main challenge, however, is how accident data can be used effectively together with regular data to learn vehicle motion, since manually labeling accident data is very difficult without expert knowledge. In this paper, we propose a solution for robust vehicle motion learning that integrates layer-level discriminability and neuron coverage (neuron-level robustness) regularizers into an unsupervised generative network for video prediction. The layer-level discriminability regularizer increases the divergence of the feature distributions of regular and accident data at each network layer. The neuron coverage regularizer enlarges the span of neuron activation intervals covered by training samples, reducing the probability that a sample falls into an untested activation region. To accelerate the training process, we propose adaptive filter reuse based on neuron coverage. Our filter reuse strategies reduce the number of structural network parameters, encourage memory reuse, and preserve the effectiveness of robust vehicle motion learning. Experimental results show that our model improves inference accuracy by 1.1% compared to FCMLSTM, and reduces training time by 10.2% compared with the traditional method, with negligible accuracy loss.
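The abstract does not give the exact formulation of the neuron coverage regularizer; the sketch below is only one possible reading of the idea of enlarging the activation intervals covered by training samples. The class name `CoverageRegularizer`, the `weight` hyperparameter, and the per-batch span penalty are illustrative assumptions, not the paper's actual loss.

```python
import torch
import torch.nn as nn

class CoverageRegularizer(nn.Module):
    """Hypothetical sketch of a neuron-coverage-style penalty: encourage each
    neuron's activations within a batch to span a wider interval, so fewer
    activation regions remain untested by the training data."""

    def __init__(self, weight: float = 1e-3):
        super().__init__()
        self.weight = weight  # assumed regularization strength

    def forward(self, activations: torch.Tensor) -> torch.Tensor:
        # activations: (batch, ...) hidden-layer outputs; flatten spatial dims
        # so each column corresponds to one neuron.
        a = activations.flatten(start_dim=1)
        # Per-neuron activation span observed over the current batch.
        span = a.max(dim=0).values - a.min(dim=0).values
        # Minimizing the negative mean span pushes the network to enlarge
        # the activation intervals exercised by training samples.
        return -self.weight * span.mean()

# Usage sketch: add the penalty to the video-prediction loss on hidden activations.
reg = CoverageRegularizer(weight=1e-3)
hidden = torch.randn(32, 64, 8, 8)  # placeholder hidden-layer activations
loss = reg(hidden)  # would be summed with the main prediction loss
```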