CROSS-CHANNEL ATTENTION-BASED TARGET SPEAKER VOICE ACTIVITY DETECTION: EXPERIMENTAL RESULTS FOR THE M2MET CHALLENGE

Publication: Conference
Wang, W; Qin, X; Li, M
Published in: ICASSP IEEE International Conference on Acoustics Speech and Signal Processing Proceedings
January 1, 2022

In this paper, we present the speaker diarization system of team DKU-DukeECE for the Multi-channel Multi-party Meeting Transcription Challenge (M2MeT). Since highly overlapped speech exists in the dataset, we employ an x-vector-based target-speaker voice activity detection (TS-VAD) to find the overlap between speakers. First, we separately train a single-channel model for each of the 8 channels and fuse the results. In addition, we employ cross-channel self-attention to further improve performance, learning and fusing the non-linear spatial correlations between different channels. Experimental results on the evaluation set show that the single-channel TS-VAD reduces the DER by over 75%, from 12.68% to 3.14%. The multi-channel TS-VAD further reduces the DER by 28% and achieves a DER of 2.26%. Our final submitted system achieves a DER of 2.98% on the AliMeeting test set, which ranks 1st in the M2MeT challenge. In this challenge, our team is denoted as A41.
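The abstract's cross-channel self-attention can be illustrated with a minimal single-head sketch: per-channel frame features attend to each other along the channel axis at every time frame, and the attended outputs are averaged into one fused representation. This is an assumption-laden simplification, not the paper's actual architecture — the shapes, projection matrices (`wq`, `wk`, `wv`), and mean-fusion step are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_channel_attention(feats, wq, wk, wv):
    """Single-head self-attention across the channel axis (a sketch).

    feats : (C, T, D) array of per-channel frame features
            (C = 8 microphone channels, T frames, D-dim embeddings).
    wq, wk, wv : (D, D) projection matrices (hypothetical parameters).
    Returns fused features of shape (T, D).
    """
    q, k, v = feats @ wq, feats @ wk, feats @ wv          # each (C, T, D)
    # Move time to the batch position so attention runs over channels: (T, C, D)
    q, k, v = (np.transpose(a, (1, 0, 2)) for a in (q, k, v))
    d = q.shape[-1]
    scores = q @ np.transpose(k, (0, 2, 1)) / np.sqrt(d)  # (T, C, C)
    attn = softmax(scores, axis=-1)                       # rows sum to 1
    out = attn @ v                                        # (T, C, D)
    return out.mean(axis=1)                               # fuse channels -> (T, D)
```

In a real system the fused features would feed the TS-VAD back-end; here the mean over channels merely stands in for whatever fusion the trained model learns.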


Published In

ICASSP IEEE International Conference on Acoustics Speech and Signal Processing Proceedings

DOI

10.1109/ICASSP43922.2022.9747019
ISSN

1520-6149

Publication Date

January 1, 2022

Volume

2022-May

Start / End Page

9171 / 9175

Citation

APA
Wang, W., Qin, X., & Li, M. (2022). CROSS-CHANNEL ATTENTION-BASED TARGET SPEAKER VOICE ACTIVITY DETECTION: EXPERIMENTAL RESULTS FOR THE M2MET CHALLENGE. In ICASSP IEEE International Conference on Acoustics Speech and Signal Processing Proceedings (Vol. 2022-May, pp. 9171–9175). https://doi.org/10.1109/ICASSP43922.2022.9747019

Chicago
Wang, W., X. Qin, and M. Li. “CROSS-CHANNEL ATTENTION-BASED TARGET SPEAKER VOICE ACTIVITY DETECTION: EXPERIMENTAL RESULTS FOR THE M2MET CHALLENGE.” In ICASSP IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, 2022-May:9171–75, 2022. https://doi.org/10.1109/ICASSP43922.2022.9747019.

ICMJE
Wang W, Qin X, Li M. CROSS-CHANNEL ATTENTION-BASED TARGET SPEAKER VOICE ACTIVITY DETECTION: EXPERIMENTAL RESULTS FOR THE M2MET CHALLENGE. In: ICASSP IEEE International Conference on Acoustics Speech and Signal Processing Proceedings. 2022. p. 9171–5.

MLA
Wang, W., et al. “CROSS-CHANNEL ATTENTION-BASED TARGET SPEAKER VOICE ACTIVITY DETECTION: EXPERIMENTAL RESULTS FOR THE M2MET CHALLENGE.” ICASSP IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, vol. 2022-May, 2022, pp. 9171–75. Scopus, doi:10.1109/ICASSP43922.2022.9747019.

NLM
Wang W, Qin X, Li M. CROSS-CHANNEL ATTENTION-BASED TARGET SPEAKER VOICE ACTIVITY DETECTION: EXPERIMENTAL RESULTS FOR THE M2MET CHALLENGE. ICASSP IEEE International Conference on Acoustics Speech and Signal Processing Proceedings. 2022. p. 9171–9175.