
Improving deep neural network performance by integrating kernelized Min-Max objective

Publication: Journal Article
Wang, QF; Yao, K; Zhang, R; Hussain, A; Huang, K
Published in: Neurocomputing
September 30, 2020

Deep neural networks (DNNs), such as convolutional neural networks (CNNs), have been widely used for object recognition. However, they are usually unable to ensure the required intra-class compactness and inter-class separability in the kernel space, both of which are known to be important in pattern recognition for achieving robustness and accuracy. In this paper, we propose to integrate a kernelized Min-Max objective into DNN training in order to explicitly enforce both kernelized within-class compactness and between-class margin. The kernel space is implicitly mapped from the feature space associated with an upper layer of the DNN via the kernel trick, while the Min-Max objective in this space is interpolated with the original DNN loss function and optimized during training. With very little additional computational cost, the proposed strategy can be easily integrated into different DNN models without changing any other part of the original model. The recognition accuracy of the proposed method is evaluated with multiple DNN models (including shallow CNN, deep CNN and deep residual network models) on two benchmark datasets: CIFAR-10 and CIFAR-100. Extensive experimental results demonstrate that integrating the kernelized Min-Max objective into DNN training achieves better results than state-of-the-art models, without incurring additional model complexity.
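
The following is a minimal illustrative sketch of the idea described in the abstract, assuming a PyTorch setting: a cross-entropy loss is interpolated with a kernelized Min-Max-style penalty computed on upper-layer features through an RBF kernel. The kernel choice (RBF), the max/min surrogate, and the parameters lam and gamma are assumptions made for illustration, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def rbf_kernel(feats, gamma=1.0):
    # Pairwise RBF kernel: K(i, j) = exp(-gamma * ||f_i - f_j||^2)
    sq_dists = torch.cdist(feats, feats).pow(2)
    return torch.exp(-gamma * sq_dists)

def kernelized_min_max_penalty(feats, labels, gamma=1.0):
    # Squared distance in the implicit kernel space:
    # d(i, j) = K(i, i) + K(j, j) - 2 * K(i, j)
    K = rbf_kernel(feats, gamma)
    diag = torch.diag(K)
    dists = diag.unsqueeze(0) + diag.unsqueeze(1) - 2.0 * K
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1))
    eye = torch.eye(len(labels), dtype=torch.bool, device=feats.device)
    within = dists[same & ~eye]    # intra-class pairs
    between = dists[~same]         # inter-class pairs
    if within.numel() == 0 or between.numel() == 0:
        return feats.new_zeros(())
    # Min-Max-style surrogate: shrink the largest within-class distance
    # and enlarge the smallest between-class distance in the kernel space.
    return within.max() - between.min()

def total_loss(logits, feats, labels, lam=0.01, gamma=1.0):
    # Interpolate the original classification loss with the kernelized penalty,
    # leaving the rest of the network unchanged.
    return F.cross_entropy(logits, labels) + lam * kernelized_min_max_penalty(feats, labels, gamma)

In a training loop, feats would be taken from an upper (e.g. penultimate) layer of the network and logits from its output layer, so the penalty only adds a batch-wise kernel computation on top of the usual forward pass.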


Published In

Neurocomputing

DOI

10.1016/j.neucom.2019.08.101

EISSN

1872-8286

ISSN

0925-2312

Publication Date

September 30, 2020

Volume

408

Start / End Page

82 / 90

Related Subject Headings

  • Artificial Intelligence & Image Processing
  • 52 Psychology
  • 46 Information and computing sciences
  • 40 Engineering
  • 17 Psychology and Cognitive Sciences
  • 09 Engineering
  • 08 Information and Computing Sciences
 

Citation

APA: Wang, Q. F., Yao, K., Zhang, R., Hussain, A., & Huang, K. (2020). Improving deep neural network performance by integrating kernelized Min-Max objective. Neurocomputing, 408, 82–90. https://doi.org/10.1016/j.neucom.2019.08.101
Chicago: Wang, Q. F., K. Yao, R. Zhang, A. Hussain, and K. Huang. “Improving deep neural network performance by integrating kernelized Min-Max objective.” Neurocomputing 408 (September 30, 2020): 82–90. https://doi.org/10.1016/j.neucom.2019.08.101.
ICMJE: Wang QF, Yao K, Zhang R, Hussain A, Huang K. Improving deep neural network performance by integrating kernelized Min-Max objective. Neurocomputing. 2020 Sep 30;408:82–90.
MLA: Wang, Q. F., et al. “Improving deep neural network performance by integrating kernelized Min-Max objective.” Neurocomputing, vol. 408, Sept. 2020, pp. 82–90. Scopus, doi:10.1016/j.neucom.2019.08.101.
NLM: Wang QF, Yao K, Zhang R, Hussain A, Huang K. Improving deep neural network performance by integrating kernelized Min-Max objective. Neurocomputing. 2020 Sep 30;408:82–90.