Enhancement and analysis of conversational speech: JSALT 2017

Conference Paper

Automatic speech recognition is increasingly widely and effectively used. Nevertheless, in some automatic speech analysis tasks the state of the art is surprisingly poor. One of these is 'diarization', the task of determining who spoke when. Diarization is key to processing meeting audio and clinical interviews, extended recordings such as police body-cam or child language acquisition data, and any other speech data involving multiple speakers whose voices are not cleanly separated into individual channels. Overlapping speech, environmental noise and suboptimal recording techniques make the problem harder. During the JSALT Summer Workshop at CMU in 2017, an international team of researchers worked on several aspects of this problem, including calibration of the state of the art, detection of overlaps, enhancement of noisy recordings, and classification of shorter speech segments. This paper sketches the workshop's results, and announces plans for a 'Diarization Challenge' to encourage further progress.
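Diarization systems are conventionally scored by diarization error rate (DER), the fraction of time attributed to the wrong speaker under the best one-to-one mapping between reference and hypothesis speaker labels. The sketch below is a toy frame-level illustration of that idea, not the scoring protocol used in the paper or the challenge (real DER scoring, e.g. NIST's md-eval, also handles overlapped speech and a forgiveness collar around segment boundaries); all names here are hypothetical.

```python
from itertools import permutations

def der(ref, hyp):
    """Toy frame-level diarization error rate.

    ref, hyp: equal-length sequences of speaker labels per frame,
    with None marking non-speech frames.  Returns the fraction of
    frames whose hypothesis label disagrees with the reference
    under the best one-to-one speaker mapping (found here by brute
    force, so only suitable for small label sets).
    """
    assert len(ref) == len(hyp)
    ref_spk = sorted({s for s in ref if s is not None})
    hyp_spk = sorted({str(s) for s in hyp if s is not None})
    # Pad so every reference speaker can map to some hypothesis label.
    while len(hyp_spk) < len(ref_spk):
        hyp_spk.append("<unmatched-%d>" % len(hyp_spk))
    best_err = len(ref)
    for perm in permutations(hyp_spk, len(ref_spk)):
        mapping = dict(zip(ref_spk, perm))
        # Count speaker confusions, missed speech, and false alarms.
        err = sum(
            (mapping.get(r) if r is not None else None)
            != (str(h) if h is not None else None)
            for r, h in zip(ref, hyp)
        )
        best_err = min(best_err, err)
    return best_err / len(ref)
```

For example, a hypothesis that relabels the speakers consistently scores 0.0, while one wrong frame out of five scores 0.2; production scoring would use an assignment algorithm (e.g. Hungarian matching) instead of brute-force permutations.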

Cited Authors

  • Ryant, N; Bergelson, E; Church, K; Cristia, A; Du, J; Ganapathy, S; Khudanpur, S; Kowalski, D; Krishnamoorthy, M; Kulshreshta, R; Liberman, M; Lu, YD; Maciejewski, M; Metze, F; Profant, J; Sun, L; Tsao, Y; Yu, Z

Published Date

  • September 10, 2018

Published In

  • 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Volume / Issue

  • 2018-April

Start / End Page

  • 5154 - 5158

International Standard Serial Number (ISSN)

  • 1520-6149

International Standard Book Number 13 (ISBN-13)

  • 9781538646588

Digital Object Identifier (DOI)

  • 10.1109/ICASSP.2018.8462468

Citation Source

  • Scopus