Leveraging Pre-Trained Representations to Improve Access to Untranscribed Speech from Endangered Languages

Nay San, Martijn Bartelds, Mitchell Browne, Lilly Clifford, Fiona Gibson, John Mansfield, David Nash, Jane Simpson, Myfany Turpin, Maria Carina Vollmer, Sasha Wilmoth, Dan Jurafsky

    Research output: Contribution to conference › Paper

    Abstract

    Pre-trained speech representations like wav2vec 2.0 are a powerful tool for automatic speech recognition (ASR). Yet many endangered languages lack sufficient data for pre-training such models, or are predominantly oral vernaculars without a standardised writing system, precluding fine-tuning. Query-by-example spoken term detection (QbE-STD) offers an alternative for iteratively indexing untranscribed speech corpora by locating spoken query terms. Using data from 7 Australian Aboriginal languages and a regional variety of Dutch, all of which are endangered or vulnerable, we show that QbE-STD can be improved by leveraging representations developed for ASR (wav2vec 2.0: the English monolingual model and XLSR53 multilingual model). Surprisingly, the English model outperformed the multilingual model on 4 Australian language datasets, raising questions around how to optimally leverage self-supervised speech representations for QbE-STD. Nevertheless, we find that wav2vec 2.0 representations (either English or XLSR53) offer large improvements (56-86% relative) over state-of-the-art approaches on our endangered language datasets.
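    As a hedged illustration of the approach the abstract describes (not the authors' exact pipeline), the sketch below shows one common way to realise QbE-STD with wav2vec 2.0 features: extract frame-level representations for a spoken query and an untranscribed utterance, then score the match with subsequence dynamic time warping. The checkpoint name, hidden-layer index, and detection threshold are illustrative assumptions, not values from the paper.

    # Illustrative QbE-STD sketch using wav2vec 2.0 features and subsequence DTW.
    # Assumptions (not from the paper): the HuggingFace checkpoint
    # "facebook/wav2vec2-large-xlsr-53", hidden layer 12, and cosine-cost DTW.
    import numpy as np
    import torch
    import librosa
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

    MODEL_ID = "facebook/wav2vec2-large-xlsr-53"   # multilingual XLSR53 checkpoint
    extractor = Wav2Vec2FeatureExtractor.from_pretrained(MODEL_ID)
    model = Wav2Vec2Model.from_pretrained(MODEL_ID).eval()

    def w2v2_features(wav_path, layer=12):
        """Return frame-level hidden states (frames x dims) from one transformer layer."""
        speech, _ = librosa.load(wav_path, sr=16_000)
        inputs = extractor(speech, sampling_rate=16_000, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        return out.hidden_states[layer].squeeze(0).numpy()

    def qbe_std_score(query_wav, utterance_wav):
        """Lower score = stronger evidence that the query term occurs in the utterance."""
        q = w2v2_features(query_wav)       # (Tq, D)
        u = w2v2_features(utterance_wav)   # (Tu, D)
        # Subsequence DTW: align the query against any contiguous region of the utterance.
        acc_cost, _ = librosa.sequence.dtw(q.T, u.T, metric="cosine", subseq=True)
        return float(acc_cost[-1, :].min() / q.shape[0])   # length-normalised best cost

    # Usage with placeholder file names:
    # score = qbe_std_score("query_term.wav", "long_recording.wav")
    # detected = score < 0.4   # threshold is illustrative; tune per corpus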
    Original language: English
    Pages: 1094-1101
    DOIs
    Publication status: Published - 2021
    Event: 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) - Cartagena, Colombia
    Duration: 1 Jan 2021 → …

    Conference

    Conference: 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
    Period: 1/01/21 → …
