Academic Journal

Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction.

Bibliographic Details
Title: Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction.
Authors: Brima, Yusuf1,2 (AUTHOR) gheidema@uni-osnabrueck.de, Krumnack, Ulf1 (AUTHOR), Pika, Simone2 (AUTHOR), Heidemann, Gunther1 (AUTHOR)
Source: Information (2078-2489). Feb 2024, Vol. 15 Issue 2, p114. 13p.
Subject Terms: *SPEECH, *TRANSFER of training, *CLINICAL supervision, *REDUNDANCY in engineering
Abstract: Self-supervised learning (SSL) has emerged as a promising paradigm for learning flexible speech representations from unlabeled data. By designing pretext tasks that exploit statistical regularities, SSL models can capture useful representations that are transferable to downstream tasks. Barlow Twins (BTs) is an SSL technique inspired by theories of redundancy reduction in human perception. BTs representations accelerate downstream learning and transfer across applications. This study applies BTs to speech data and evaluates the obtained representations on several downstream tasks, showing the applicability of the approach. However, limitations remain in disentangling key explanatory factors: redundancy reduction and invariance alone are insufficient to factorize the learned latents into modular, compact, and informative codes. Our ablation study isolated gains from invariance constraints, but the gains were context-dependent. Overall, this work substantiates the potential of Barlow Twins for sample-efficient speech encoding. However, challenges remain in achieving fully hierarchical representations. The analysis methodology and insights presented in this paper pave the way for extensions incorporating additional inductive priors and perceptual principles to further enhance the BTs self-supervision framework. [ABSTRACT FROM AUTHOR]
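The record itself does not reproduce the objective behind the two terms the abstract names, but for context, the standard Barlow Twins loss of Zbontar et al. (2021), on which this work builds, makes "invariance" and "redundancy reduction" concrete. A minimal sketch of that objective (the notation is assumed from the original Barlow Twins paper, not taken from this record):

```latex
% Barlow Twins objective: invariance term (diagonal) plus
% redundancy reduction term (off-diagonal), weighted by lambda.
\mathcal{L}_{\mathrm{BT}}
  = \underbrace{\sum_{i} \bigl(1 - \mathcal{C}_{ii}\bigr)^{2}}_{\text{invariance}}
  + \lambda \underbrace{\sum_{i} \sum_{j \neq i} \mathcal{C}_{ij}^{2}}_{\text{redundancy reduction}},
\qquad
\mathcal{C}_{ij}
  = \frac{\sum_{b} z^{A}_{b,i}\, z^{B}_{b,j}}
         {\sqrt{\sum_{b} \bigl(z^{A}_{b,i}\bigr)^{2}}\,
          \sqrt{\sum_{b} \bigl(z^{B}_{b,j}\bigr)^{2}}}
```

Here z^A and z^B are the batch-normalized embeddings of two augmented views of the same inputs, b indexes the batch, i and j index embedding dimensions, and λ weights the off-diagonal penalty. Driving the diagonal of the cross-correlation matrix C toward 1 makes the embedding invariant to the augmentations, while driving the off-diagonal toward 0 decorrelates the embedding components, which is the redundancy reduction the abstract refers to.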
Database: Academic Search Index
Description
ISSN: 2078-2489
DOI: 10.3390/info15020114