
Cross-modality interactive attention network for multispectral pedestrian detection

Publication, Journal Article
Zhang, L; Liu, Z; Zhang, S; Yang, X; Qiao, H; Huang, K; Hussain, A
Published in: Information Fusion
October 1, 2019

Multispectral pedestrian detection is an emerging solution with great promise in many around-the-clock applications, such as automotive driving and security surveillance. To exploit the complementary nature of the modalities and remedy their contradictory appearance, in this paper we propose a novel cross-modality interactive attention network that takes full advantage of the interactive properties of multispectral input sources. Specifically, we first use the color (RGB) and thermal streams to build two detached feature hierarchies, one per modality; then, taking the global features, correlations between the two modalities are encoded in the attention module. Next, the channel responses of the halfway feature maps are recalibrated adaptively for the subsequent fusion operation. Our architecture is constructed in a multi-scale format to better deal with pedestrians at different scales, and the whole network is trained end to end. The proposed method is extensively evaluated on the challenging KAIST multispectral pedestrian dataset and achieves state-of-the-art performance with high efficiency.
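The channel-recalibration step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact design: the single linear projection (`w`, `b`), the sigmoid gating, and the simple additive fusion are all assumptions made for clarity, and the real network learns these weights within a multi-scale, end-to-end detector.

```python
import numpy as np

def global_avg_pool(feat):
    """Collapse a (C, H, W) feature map to a (C,) global descriptor."""
    return feat.mean(axis=(1, 2))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_modality_attention(rgb_feat, thermal_feat, w, b):
    """Recalibrate the channel responses of both streams using gates
    computed from the concatenated global features of BOTH modalities,
    then fuse the recalibrated maps.

    rgb_feat, thermal_feat: (C, H, W) halfway feature maps.
    w: (2C, 2C) projection, b: (2C,) bias -- hypothetical learned
    parameters standing in for the paper's attention module.
    Returns a fused (C, H, W) feature map.
    """
    # Encode cross-modality correlations from the joint global descriptor.
    g = np.concatenate([global_avg_pool(rgb_feat),
                        global_avg_pool(thermal_feat)])
    gates = sigmoid(w @ g + b)           # per-channel gates for both streams
    c = rgb_feat.shape[0]
    rgb_gates, th_gates = gates[:c], gates[c:]
    # Adaptively reweight each stream's channels before fusion.
    rgb_recal = rgb_feat * rgb_gates[:, None, None]
    th_recal = thermal_feat * th_gates[:, None, None]
    return rgb_recal + th_recal          # illustrative additive fusion
```

Because the gates for each stream depend on the other stream's global features, a channel that is uninformative in one modality (e.g. RGB at night) can be suppressed in favor of its thermal counterpart before fusion.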


Published In

Information Fusion

DOI

10.1016/j.inffus.2018.09.015

ISSN

1566-2535

Publication Date

October 1, 2019

Volume

50

Start / End Page

20 / 29

Related Subject Headings

  • Artificial Intelligence & Image Processing
  • 4605 Data management and data science
  • 4603 Computer vision and multimedia computation
  • 4602 Artificial intelligence
  • 0801 Artificial Intelligence and Image Processing
 

Citation

APA
Zhang, L., Liu, Z., Zhang, S., Yang, X., Qiao, H., Huang, K., & Hussain, A. (2019). Cross-modality interactive attention network for multispectral pedestrian detection. Information Fusion, 50, 20–29. https://doi.org/10.1016/j.inffus.2018.09.015

Chicago
Zhang, L., Z. Liu, S. Zhang, X. Yang, H. Qiao, K. Huang, and A. Hussain. “Cross-modality interactive attention network for multispectral pedestrian detection.” Information Fusion 50 (October 1, 2019): 20–29. https://doi.org/10.1016/j.inffus.2018.09.015.

ICMJE
Zhang L, Liu Z, Zhang S, Yang X, Qiao H, Huang K, et al. Cross-modality interactive attention network for multispectral pedestrian detection. Information Fusion. 2019 Oct 1;50:20–9.

MLA
Zhang, L., et al. “Cross-modality interactive attention network for multispectral pedestrian detection.” Information Fusion, vol. 50, Oct. 2019, pp. 20–29. Scopus, doi:10.1016/j.inffus.2018.09.015.

NLM
Zhang L, Liu Z, Zhang S, Yang X, Qiao H, Huang K, Hussain A. Cross-modality interactive attention network for multispectral pedestrian detection. Information Fusion. 2019 Oct 1;50:20–29.