Enhancement and analysis of conversational speech: JSALT 2017
Automatic speech recognition is increasingly widely and effectively used. Nevertheless, in some automatic speech analysis tasks the state of the art remains surprisingly poor. One of these is 'diarization', the task of determining who spoke when. Diarization is key to processing meeting audio, clinical interviews, extended recordings such as police body-cam or child language acquisition data, and any other speech data involving multiple speakers whose voices are not cleanly separated into individual channels. Overlapping speech, environmental noise, and suboptimal recording techniques make the problem harder still. During the JSALT Summer Workshop at CMU in 2017, an international team of researchers worked on several aspects of this problem, including calibration of the state of the art, detection of overlaps, enhancement of noisy recordings, and classification of shorter speech segments. This paper sketches the workshop's results and announces plans for a 'Diarization Challenge' to encourage further progress.
Ryant, N; Bergelson, E; Church, K; Cristia, A; Du, J; Ganapathy, S; Khudanpur, S; Kowalski, D; Krishnamoorthy, M; Kulshreshta, R; Liberman, M; Lu, YD; Maciejewski, M; Metze, F; Profant, J; Sun, L; Tsao, Y; Yu, Z