Dynamic Latency for CTC-Based Streaming Automatic Speech Recognition With Emformer

Bibliographic Details
Title: Dynamic Latency for CTC-Based Streaming Automatic Speech Recognition With Emformer
Authors: Sun, Jingyu, Zhong, Guiping, Zhou, Dinghao, Li, Baoxiang
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Sound, Computer Science - Computation and Language, Electrical Engineering and Systems Science - Audio and Speech Processing
Description: Streaming automatic speech recognition models frequently underperform non-streaming models due to the absence of future context. To improve the performance of the streaming model and reduce computational complexity, this paper employs a frame-level model that uses an efficient augmented-memory transformer block and a dynamic latency training method for streaming automatic speech recognition. Long-range history context is stored in the augmented memory bank as a complement to the limited history context used in the encoder. Keys and values are cached by a caching mechanism and reused for the next chunk to reduce computation. A dynamic latency training method is then proposed to obtain better performance and to support both low- and high-latency inference simultaneously. Experiments are conducted on the benchmark 960h LibriSpeech dataset. With an average latency of 640 ms, the model achieves a relative WER reduction of 6.0% on test-clean and 3.0% on test-other versus the truncated chunk-wise Transformer.
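The abstract's key/value caching idea can be illustrated with a minimal sketch (not the authors' code): in chunk-wise streaming attention, keys and values projected for earlier chunks are cached and reused, so each new chunk only projects its own frames. All names, shapes, and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # hypothetical model dimension
W_q = rng.standard_normal((d_model, d_model))
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))

# Caches grow as chunks arrive; a real streaming system would bound them
# (e.g. to a fixed number of left-context chunks).
k_cache, v_cache = [], []

def attend_chunk(chunk):
    """Attend from the current chunk over cached + current keys/values."""
    q = chunk @ W_q
    k_cache.append(chunk @ W_k)  # project keys/values once, reuse later
    v_cache.append(chunk @ W_v)
    k = np.concatenate(k_cache)
    v = np.concatenate(v_cache)
    scores = q @ k.T / np.sqrt(d_model)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

for _ in range(3):  # three 4-frame chunks arriving in order
    out = attend_chunk(rng.standard_normal((4, d_model)))
print(out.shape)    # (4, 8): outputs for the latest chunk
print(len(k_cache)) # 3: one cached projection per processed chunk
```

Each call projects only the incoming chunk, so the per-chunk cost of the key/value projections stays constant as the history grows.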
Comment: 5 pages, 2 figures, submitted to Interspeech 2022
Document Type: Working Paper
Open Access: http://arxiv.org/abs/2203.15613
Accession Number: edsarx.2203.15613
Database: arXiv