PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior

Bibliographic Details
Title: PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior
Authors: Lee, Sang-gil, Kim, Heeseung, Shin, Chaehun, Tan, Xu, Liu, Chang, Meng, Qi, Qin, Tao, Chen, Wei, Yoon, Sungroh, Liu, Tie-Yan
Publication Information: arXiv, 2021.
Publication Year: 2021
Subject Terms: FOS: Computer and information sciences, Computer Science - Machine Learning, Sound (cs.SD), Statistics - Machine Learning, Computer Science::Sound, Audio and Speech Processing (eess.AS), FOS: Electrical engineering, electronic engineering, information engineering, Machine Learning (stat.ML), Computer Science - Sound, Electrical Engineering and Systems Science - Audio and Speech Processing, Machine Learning (cs.LG)
Description: Denoising diffusion probabilistic models have recently been proposed to generate high-quality samples by estimating the gradient of the data density. The framework defines the prior noise as a standard Gaussian distribution, whereas the corresponding data distribution may be far more complex; this discrepancy between the data and the prior makes denoising the prior noise into a data sample inefficient. In this paper, we propose PriorGrad to improve the efficiency of conditional diffusion models for speech synthesis (for example, a vocoder using a mel-spectrogram as the condition) by applying an adaptive prior derived from data statistics of the conditional information. We formulate the training and sampling procedures of PriorGrad and demonstrate the advantages of the adaptive prior through theoretical analysis. Focusing on the speech synthesis domain, we apply PriorGrad to recently proposed diffusion-based speech generative models in both the spectral and time domains, and show that it achieves faster convergence and inference with superior performance, yielding improved perceptual quality and greater robustness to reduced network capacity, thereby demonstrating the efficiency of a data-dependent adaptive prior.
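The core idea in the description can be sketched in a few lines: instead of diffusing data toward a standard Gaussian N(0, I), the forward process targets an adaptive prior N(0, Σ) with a diagonal Σ computed from the conditioning information (e.g., mel-spectrogram energy). The helper names, the energy-based variance heuristic, and the variance floor below are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def adaptive_prior_std(mel, floor=0.1):
    """Hypothetical per-frame prior std derived from a (log-)mel spectrogram.

    mel: array of shape (n_mels, n_frames). Frames with more energy get a
    prior variance closer to 1; quiet frames get a smaller variance,
    clipped below by `floor` (an assumed stabilizer, not from the paper).
    """
    energy = np.exp(mel).mean(axis=0)          # (n_frames,) frame energy
    std = np.sqrt(energy / energy.max())       # normalize to (0, 1]
    return np.maximum(std, floor)

def diffuse(x0, std, alpha_bar_t, rng):
    """One forward-diffusion draw toward N(0, diag(std**2)) instead of N(0, I).

    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    with eps ~ N(0, diag(std**2)) rather than the standard Gaussian.
    """
    eps = rng.standard_normal(x0.shape) * std
    xt = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps
    return xt, eps

# Usage: frame-level prior from a dummy mel-spectrogram.
rng = np.random.default_rng(0)
mel = rng.standard_normal((80, 10))            # (n_mels, n_frames)
std = adaptive_prior_std(mel)
x0 = rng.standard_normal(10)                   # dummy frame-level target
xt, eps = diffuse(x0, std, alpha_bar_t=0.5, rng=rng)
```

Because the prior variance already tracks the conditioner, the denoising network has less of a gap to close between the prior and the data, which is the intuition behind the faster convergence and inference reported in the paper.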
Comment: ICLR 2022. 19 pages, 7 figures, 8 tables. Audio samples: https://speechresearch.github.io/priorgrad/
DOI: 10.48550/arxiv.2106.06406
Open Access: https://explore.openaire.eu/search/publication?articleId=doi_dedup___::12c8e469a4d171c428329716256f642e
Rights: OPEN
Accession Number: edsair.doi.dedup.....12c8e469a4d171c428329716256f642e
Database: OpenAIRE