Textless Low-Resource Speech-to-Speech Translation With Unit Language Models

Anuj Diwan, Anirudh Srinivasan, David Harwath, Eunsol Choi
¹University of Texas at Austin  ²New York University
[Figure: Overview of speech-to-speech translation systems]
[Figure: Training a unit-based encoder-decoder model for S2ST]

We present Pretrain-Finetune-Backtranslate (PFB), a framework for training textless S2ST models that require just dozens of hours of parallel speech data.

Abstract

Existing speech-to-speech translation (S2ST) models fall into two camps: they either leverage text as an intermediate step or require hundreds of hours of parallel speech data. Both approaches are incompatible with textless languages or language pairs with limited parallel data. We present PFB, a framework for training textless S2ST models that require just dozens of hours of parallel speech data. We first pretrain a model on large-scale monolingual speech data, finetune it with a small amount of parallel speech data (20-60 hours), and finally train with an unsupervised backtranslation objective. We train and evaluate our models for English-to-German, German-to-English, and Marathi-to-English translation on three different domains (European Parliament, Common Voice, and All India Radio) with single-speaker synthesized speech. Evaluated using the ASR-BLEU metric, our models achieve reasonable performance on all three domains, with some scoring within 1-2 points of our higher-resource topline.
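To make the three-stage recipe concrete, here is a minimal, runnable Python sketch of the PFB training flow described above. Every name in it (UnitLM, pretrain, finetune, backtranslate) is a hypothetical placeholder for exposition, not the authors' actual code or API; the real system operates on discrete speech units produced by a speech encoder and trains a unit encoder-decoder model.

```python
# Hypothetical sketch of the Pretrain-Finetune-Backtranslate (PFB) recipe.
# All names below are illustrative placeholders, not the paper's implementation.
from dataclasses import dataclass, field
import random


@dataclass
class UnitLM:
    """Stand-in for a unit-based encoder-decoder over discrete speech units."""
    params: dict = field(default_factory=dict)

    def translate(self, units, direction):
        # Placeholder inference: a real model would decode target-language units.
        return list(units)


def pretrain(model, mono_src, mono_tgt):
    # Stage 1: self-supervised pretraining on large monolingual unit corpora.
    model.params["pretrained_on"] = len(mono_src) + len(mono_tgt)


def finetune(model, parallel_pairs):
    # Stage 2: supervised finetuning on a small parallel set (20-60 hours).
    model.params["finetuned_on"] = len(parallel_pairs)


def backtranslate(model, mono_tgt, rounds=1):
    # Stage 3: unsupervised backtranslation. Translate monolingual target
    # units back to the source side to synthesize pseudo-parallel pairs,
    # then train on (synthetic source, real target).
    for _ in range(rounds):
        pseudo_pairs = [(model.translate(t, "tgt->src"), t) for t in mono_tgt]
        finetune(model, pseudo_pairs)


if __name__ == "__main__":
    rng = random.Random(0)
    units = lambda n: [rng.randrange(100) for _ in range(n)]  # fake unit sequences
    mono_src = [units(20) for _ in range(1000)]
    mono_tgt = [units(20) for _ in range(1000)]
    parallel = list(zip(mono_src[:50], mono_tgt[:50]))  # small parallel set

    model = UnitLM()
    pretrain(model, mono_src, mono_tgt)
    finetune(model, parallel)
    backtranslate(model, mono_tgt)
    print(model.params)
```

The key design point the sketch illustrates is the ordering: large-scale monolingual pretraining supplies most of the model's capacity, the small parallel set anchors the translation direction, and backtranslation then exploits abundant monolingual target speech to improve the model without further parallel supervision.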

BibTeX

@misc{diwan2024textlesslowresourcespeechtospeechtranslation,
    title={Textless Low-Resource Speech-to-Speech Translation With Unit Language Models},
    author={Anuj Diwan and Anirudh Srinivasan and David Harwath and Eunsol Choi},
    year={2024},
    eprint={2305.15405},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2305.15405},
}