Cross-modality interactive attention network for multispectral pedestrian detection
Multispectral pedestrian detection is an emerging solution with great promise for many around-the-clock applications, such as autonomous driving and security surveillance. To exploit the complementary nature of the two modalities and to remedy their contradictory appearances, in this paper we propose a novel cross-modality interactive attention network that takes full advantage of the interactive properties of multispectral input sources. Specifically, we first use the color (RGB) and thermal streams to build two detached feature hierarchies, one for each modality; correlations between the two modalities are then encoded in an attention module from their global features. Next, the channel responses of the halfway feature maps are recalibrated adaptively for the subsequent fusion operation. The architecture is built in a multi-scale format to better handle pedestrians at different scales, and the whole network is trained end to end. The proposed method is extensively evaluated on the challenging KAIST multispectral pedestrian dataset and achieves state-of-the-art performance with high efficiency.
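The abstract describes channel recalibration driven by the global features of both streams. Below is a minimal PyTorch sketch of that idea, assuming a squeeze-and-excitation-style bottleneck over the concatenated global descriptors; all layer sizes, names, and the summation fusion are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class CrossModalityAttention(nn.Module):
    """Sketch of cross-modality interactive channel attention (assumed design)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Bottleneck over the concatenated global descriptors of both streams,
        # so each modality's channel gates depend on the other modality too.
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, 2 * channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(2 * channels // reduction, 2 * channels),
            nn.Sigmoid(),  # per-channel gates in (0, 1)
        )

    def forward(self, rgb: torch.Tensor, thermal: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = rgb.shape
        # Global descriptors summarize each modality's overall response.
        g = torch.cat([self.pool(rgb).view(b, c),
                       self.pool(thermal).view(b, c)], dim=1)
        w = self.fc(g)                       # (b, 2c) interactive channel weights
        w_rgb, w_th = w[:, :c], w[:, c:]
        # Recalibrate the halfway feature maps, then fuse; summation is an
        # assumption here (concatenation would also fit the abstract).
        rgb = rgb * w_rgb.view(b, c, 1, 1)
        thermal = thermal * w_th.view(b, c, 1, 1)
        return rgb + thermal

# Usage: fused = CrossModalityAttention(256)(rgb_feats, thermal_feats)
```

In a multi-scale setup like the one described, a module of this kind would be applied independently at each level of the two feature hierarchies before the fused maps feed the detection head.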
Related Subject Headings
- Artificial Intelligence & Image Processing
- 4605 Data management and data science
- 4603 Computer vision and multimedia computation
- 4602 Artificial intelligence
- 0801 Artificial Intelligence and Image Processing