Improving deep neural network performance with kernelized min-max objective
In this paper, we present a novel training strategy that uses a kernelized Min-Max objective to improve object recognition performance in deep neural networks (DNNs), e.g., convolutional neural networks (CNNs). Without changing other parts of the original model, the kernelized Min-Max objective combines the kernel trick with the Min-Max objective and is embedded into a high layer of the network during training. The proposed kernelized objective explicitly enforces the learned feature maps to maintain, in a kernel space, the smallest within-class scatter for each category manifold and the largest margin between different category manifolds. With negligible additional computational cost, the proposed strategy can be applied to a wide range of DNN models. Extensive experiments with a shallow convolutional neural network, a deep convolutional neural network, and a deep residual neural network on two benchmark datasets show that the proposed approach outperforms competitive baseline models.
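The abstract describes a penalty that, in a kernel-induced space, shrinks each category manifold while pushing different manifolds apart. The sketch below is a hypothetical illustration of that idea (not the paper's exact formulation), assuming an RBF kernel and using the kernel-space squared distance k(x,x) - 2k(x,y) + k(y,y); `minmax_penalty` and its arguments are names chosen here for illustration.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """RBF (Gaussian) kernel matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def minmax_penalty(features, labels, gamma=1.0):
    """Hypothetical Min-Max-style penalty on a batch of feature vectors.

    Returns (largest within-class distance) - (smallest between-class
    distance), both measured in the kernel-induced feature space.
    Minimizing it compacts each class and widens inter-class margins.
    """
    K = rbf_kernel(features, gamma)
    # Squared distance in the kernel space: k(x,x) - 2 k(x,y) + k(y,y)
    diag = K.diagonal()
    D = diag[:, None] - 2.0 * K + diag[None, :]
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)          # ignore self-distances
    diff = labels[:, None] != labels[None, :]
    within = D[same].max() if same.any() else 0.0   # worst intra-class spread
    between = D[diff].min() if diff.any() else 0.0  # tightest inter-class gap
    return within - between
```

In a training loop this scalar would be added, with a small weight, to the usual classification loss; well-separated, compact class clusters drive the penalty negative, while overlapping clusters drive it positive.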
Related Subject Headings
- Artificial Intelligence & Image Processing
- 46 Information and computing sciences