Academic Journal

Integrated visual transformer and flash attention for lip-to-speech generation GAN.

Bibliographic Details
Title: Integrated visual transformer and flash attention for lip-to-speech generation GAN.
Authors: Yang, Qiong (AUTHOR); Bai, Yuxuan (AUTHOR) 18691711979@163.com; Liu, Feng (AUTHOR); Zhang, Wei (AUTHOR)
Source: Scientific Reports. 2/24/2024, Vol. 14 Issue 1, p1-12. 12p.
Subject Terms: *SPEECH perception, *LIPS, *SPEECH, *TECHNOLOGICAL innovations, *IMAGE representation, *COMMUNICATIVE competence, *ERROR rates
Abstract: Lip-to-Speech (LTS) generation is an emerging technology that is highly visible, widely supported, and rapidly evolving. LTS has a wide range of promising applications, including assisting people with speech impairments and improving speech interaction in virtual assistants and robots. However, the technique faces the following challenges: (1) recognition accuracy for Chinese lip-to-speech generation remains poor; (2) the wide variation in speaking styles aligns poorly with lip movements. Addressing these challenges will advance LTS technology, enhance communication abilities, and improve the quality of life of individuals with disabilities. Current lip-to-speech generation techniques usually employ a GAN architecture but suffer from one primary problem: insufficient joint modeling of local and global lip movements, resulting in visual ambiguities and inadequate image representations. To solve these problems, we design Flash Attention GAN (FA-GAN) with the following features: (1) vision and audio are encoded separately, and lip motion is modeled jointly, to improve speech recognition accuracy; (2) a multilevel Swin-transformer is introduced to improve image representation; (3) a hierarchical iterative generator is introduced to improve speech generation; (4) a flash attention mechanism is introduced to improve computational efficiency. Extensive experiments indicate that FA-GAN recognizes Chinese and English datasets better than existing architectures; in particular, its Chinese recognition error rate is only 43.19%, the lowest among comparable methods. [ABSTRACT FROM AUTHOR]
Database: Academic Search Index
Description
ISSN: 2045-2322
DOI: 10.1038/s41598-024-55248-6
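The abstract above pairs separately encoded visual and audio streams with a flash attention mechanism. A minimal sketch of what such a cross-modal attention step could look like is given below, assuming PyTorch 2.0+; the class name, dimensions, and tensor shapes are hypothetical illustrations, not the authors' code, and torch.nn.functional.scaled_dot_product_attention is used only as a stand-in that dispatches to FlashAttention-style fused kernels on supported GPUs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlashCrossAttention(nn.Module):
    """Hypothetical cross-attention fusing lip-motion and audio features.

    F.scaled_dot_product_attention dispatches to a FlashAttention-style
    fused kernel on supported GPUs (PyTorch >= 2.0), the kind of
    efficiency gain the abstract attributes to its flash attention step.
    """

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.q = nn.Linear(dim, dim)        # queries from visual tokens
        self.kv = nn.Linear(dim, 2 * dim)   # keys/values from audio tokens
        self.out = nn.Linear(dim, dim)

    def forward(self, visual: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # visual: (B, Tv, dim) lip-motion tokens; audio: (B, Ta, dim) mel tokens
        B, Tv, D = visual.shape
        h, hd = self.heads, D // self.heads
        q = self.q(visual).view(B, Tv, h, hd).transpose(1, 2)
        k, v = self.kv(audio).chunk(2, dim=-1)
        k = k.view(B, -1, h, hd).transpose(1, 2)
        v = v.view(B, -1, h, hd).transpose(1, 2)
        fused = F.scaled_dot_product_attention(q, k, v)  # memory-efficient attention
        return self.out(fused.transpose(1, 2).reshape(B, Tv, D))

# Toy usage: 75 video frames attending over 300 mel-spectrogram frames.
layer = FlashCrossAttention()
out = layer(torch.randn(2, 75, 256), torch.randn(2, 300, 256))
print(out.shape)  # torch.Size([2, 75, 256])
```

Routing queries from the visual stream and keys/values from the audio stream is one plausible reading of the abstract's "vision and audio are encoded separately, and lip motion is modeled jointly"; the paper itself should be consulted for the actual FA-GAN design.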