Academic Journal

Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer

Bibliographic Details
Title: Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer
Authors: Rizwan Ullah, Muhammad Asif, Wahab Ali Shah, Fakhar Anjam, Ibrar Ullah, Tahir Khurshaid, Lunchakorn Wuttisittikulkij, Shashi Shah, Syed Mansoor Ali, Mohammad Alibakhshikenari
Source: Sensors, Vol 23, Iss 13, p 6212 (2023)
Publisher: MDPI AG
Publication Year: 2023
Collection: Directory of Open Access Journals: DOAJ Articles
Subject Terms: speech emotion recognition, convolutional neural networks, convolutional Transformer encoder, multi-head attention, spatial features, temporal features, Chemical technology, TP1-1185
Description: Speech emotion recognition (SER) is a challenging task in human–computer interaction (HCI) systems. One of the key challenges in speech emotion recognition is to extract the emotional features effectively from a speech utterance. Despite the promising results of recent studies, they generally do not leverage advanced fusion algorithms for the generation of effective representations of emotional features in speech utterances. To address this problem, we describe the fusion of spatial and temporal feature representations of speech emotion by parallelizing convolutional neural networks (CNNs) and a Transformer encoder for SER. We stack two parallel CNNs for spatial feature representation in parallel to a Transformer encoder for temporal feature representation, thereby simultaneously expanding the filter depth and reducing the feature map with an expressive hierarchical feature representation at a lower computational cost. We use the RAVDESS dataset to recognize eight different speech emotions. We augment and intensify the variations in the dataset to minimize model overfitting. Additive White Gaussian Noise (AWGN) is used to augment the RAVDESS dataset. With the spatial and sequential feature representations of CNNs and the Transformer, the SER model achieves 82.31% accuracy for eight emotions on a hold-out dataset. In addition, the SER system is evaluated with the IEMOCAP dataset and achieves 79.42% recognition accuracy for five emotions. Experimental results on the RAVDESS and IEMOCAP datasets show the success of the presented SER system and demonstrate an absolute performance improvement over the state-of-the-art (SOTA) models.
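The abstract describes the fusion design at a high level: two parallel CNN branches for spatial features alongside a multi-head-attention Transformer encoder for temporal features, with AWGN-based augmentation. The record carries no implementation details, so the PyTorch sketch below is only an illustration of that idea; the class name ParallelCnnTransformerSER, the add_awgn helper, and every layer size and hyperparameter are assumptions for demonstration, not the authors' published configuration.

```python
# Minimal sketch of the fusion described in the abstract: two parallel CNN
# branches extract spatial features from a spectrogram while a Transformer
# encoder models temporal structure; the branch outputs are concatenated
# and classified. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ParallelCnnTransformerSER(nn.Module):
    def __init__(self, n_mels=128, d_model=128, num_classes=8):
        super().__init__()
        # Spatial branch 1: expanding filter depth while pooling shrinks
        # the feature map, as the abstract describes.
        self.cnn1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Spatial branch 2: a parallel CNN with a wider receptive field.
        self.cnn2 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Temporal branch: multi-head self-attention over time frames.
        self.proj = nn.Linear(n_mels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Fusion head: concatenate spatial and temporal representations.
        self.classifier = nn.Linear(32 + 32 + d_model, num_classes)

    def forward(self, spec):                  # spec: (batch, n_mels, n_frames)
        x = spec.unsqueeze(1)                 # (batch, 1, n_mels, n_frames)
        s1 = self.cnn1(x).flatten(1)          # (batch, 32)
        s2 = self.cnn2(x).flatten(1)          # (batch, 32)
        t = self.proj(spec.transpose(1, 2))   # (batch, n_frames, d_model)
        t = self.transformer(t).mean(dim=1)   # (batch, d_model)
        return self.classifier(torch.cat([s1, s2, t], dim=1))

def add_awgn(waveform, snr_db=15.0):
    """Additive white Gaussian noise at a target SNR, for augmentation."""
    signal_power = waveform.pow(2).mean()
    noise_power = signal_power / (10 ** (snr_db / 10))
    return waveform + torch.randn_like(waveform) * noise_power.sqrt()

# Example: one forward pass on a random log-mel spectrogram batch.
model = ParallelCnnTransformerSER()
logits = model(torch.randn(4, 128, 256))      # -> (4, 8) emotion logits
```

Pooling each CNN branch down to a single vector keeps the fused classifier small, which matches the abstract's claim of hierarchical feature representation at lower computational cost; the sketch makes no attempt to reproduce the reported 82.31% (RAVDESS) or 79.42% (IEMOCAP) accuracies.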
Document Type: article in journal/newspaper
Language: English
ISSN: 1424-8220
Relation: https://www.mdpi.com/1424-8220/23/13/6212; https://doaj.org/toc/1424-8220; https://doaj.org/article/13b86cbc2e2b43c9a60951f461cf3c5c
DOI: 10.3390/s23136212
Availability: https://doi.org/10.3390/s23136212
https://doaj.org/article/13b86cbc2e2b43c9a60951f461cf3c5c
Accession Number: edsbas.811F8FB1
Database: BASE