
ResNet-Like CNN Architecture and Saliency Map for Human Activity Recognition

Publication, Conference
Yan, Z; Younes, R; Forsyth, J
Published in: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
January 1, 2022

Human activity recognition (HAR) has increasingly adopted deep learning in place of well-established analysis pipelines that rely on hand-crafted feature extraction and classification techniques. However, the convolutional neural network (CNN) architectures used in HAR tasks are still mostly VGG-like, even as novel architectures keep emerging. In this work, we present a novel approach to HAR that incorporates residual learning in a ResNet-like CNN model, improving on existing approaches by reducing the computational complexity of the recognition task without sacrificing accuracy. Specifically, our ResNet-like CNN built on residual learning achieves nearly 1% higher accuracy than the state of the art, with more than a 10-fold reduction in parameters. In addition, we adopt the Saliency Map method to visualize the importance of every input channel, which enables further work such as dimensionality reduction to improve computational efficiency or finding the optimal sensor node position(s).
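The paper's exact model is not reproduced here. As a hedged illustration only, the sketch below shows a hypothetical 1-D residual block for multi-channel wearable-sensor windows, a small ResNet-like classifier, and a vanilla gradient-based saliency computation over input channels, assuming a PyTorch setup. All names (ResidualBlock1D, TinyResNetHAR, channel_saliency) and layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch only: a minimal ResNet-like 1-D CNN for sensor windows plus a
# per-channel saliency map; layer sizes and names are assumptions, not the paper's.
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """Two 1-D convolutions with an identity skip connection (residual learning)."""
    def __init__(self, channels: int, kernel_size: int = 5):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # shortcut keeps gradients flowing, few extra parameters

class TinyResNetHAR(nn.Module):
    """Small ResNet-like classifier over (batch, sensor_channels, time) windows."""
    def __init__(self, in_channels: int, num_classes: int, width: int = 32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(in_channels, width, kernel_size=7, padding=3),
            nn.BatchNorm1d(width), nn.ReLU())
        self.blocks = nn.Sequential(ResidualBlock1D(width), ResidualBlock1D(width))
        self.head = nn.Linear(width, num_classes)

    def forward(self, x):
        h = self.blocks(self.stem(x))
        return self.head(h.mean(dim=-1))  # global average pooling over time

def channel_saliency(model: nn.Module, window: torch.Tensor, target: int) -> torch.Tensor:
    """Vanilla saliency map: |d score_target / d input|, averaged over time,
    yielding one importance value per input sensor channel."""
    model.eval()
    x = window.clone().requires_grad_(True)      # shape (1, channels, time)
    score = model(x)[0, target]
    score.backward()
    return x.grad.abs().mean(dim=-1).squeeze(0)  # shape (channels,)

if __name__ == "__main__":
    model = TinyResNetHAR(in_channels=9, num_classes=6)  # e.g. 9 IMU channels, 6 activities
    window = torch.randn(1, 9, 128)                      # one 128-sample window
    print(channel_saliency(model, window, target=0))     # per-channel importance scores
```

Under these assumptions, channels with consistently low saliency could be candidates for dimensionality reduction or for guiding sensor placement, in the spirit of the abstract's proposed follow-on work.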


Published In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
DOI: 10.1007/978-3-030-99203-3_9
EISSN: 1867-822X
ISSN: 1867-8211
Publication Date: January 1, 2022
Volume: 434 LNICST
Start / End Page: 129 / 143

Citation

APA: Yan, Z., Younes, R., & Forsyth, J. (2022). ResNet-Like CNN Architecture and Saliency Map for Human Activity Recognition. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST (Vol. 434 LNICST, pp. 129–143). https://doi.org/10.1007/978-3-030-99203-3_9
Chicago: Yan, Z., R. Younes, and J. Forsyth. “ResNet-Like CNN Architecture and Saliency Map for Human Activity Recognition.” In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, 434 LNICST:129–43, 2022. https://doi.org/10.1007/978-3-030-99203-3_9.
ICMJE: Yan Z, Younes R, Forsyth J. ResNet-Like CNN Architecture and Saliency Map for Human Activity Recognition. In: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST. 2022. p. 129–43.
MLA: Yan, Z., et al. “ResNet-Like CNN Architecture and Saliency Map for Human Activity Recognition.” Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST, vol. 434 LNICST, 2022, pp. 129–43. Scopus, doi:10.1007/978-3-030-99203-3_9.
NLM: Yan Z, Younes R, Forsyth J. ResNet-Like CNN Architecture and Saliency Map for Human Activity Recognition. Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST. 2022. p. 129–143.
