Showing 1 - 10 of 115 results for the search '"vocal parameters"', query time: 1.93s
  1.
    Academic Journal

    Source: Frontiers in Veterinary Science, Vol 11 (2024)

    Description: There is a critical need to develop and validate non-invasive, animal-based indicators of affective states in livestock species, in order to integrate them into on-farm assessment protocols, potentially via precision livestock farming (PLF) tools. One promising approach is the use of vocal indicators. The acoustic structure of vocalizations and their functions have been studied extensively in important livestock species such as pigs, horses, poultry, and goats, yet cattle remain understudied in this context. Cows have been shown to produce two types of vocalizations: low-frequency calls (LF), produced with the mouth closed or partially closed for close-distance contact, and high-frequency calls (HF), emitted with the mouth open for long-distance communication; the latter are considered to be largely associated with negative affective states. Moreover, cattle vocalizations have been shown to carry information on individuality across a wide range of contexts, both negative and positive. Dairy cows face a series of challenges and stressors over a typical production cycle, making vocalizations during negative affective states of special interest for research. One contribution of this study is the largest pre-processed (noise-cleaned) dataset to date of lactating, adult, multiparous dairy cows during negative affective states induced by visual isolation challenges. Here, we present two computational frameworks, one based on deep learning and one on explainable machine learning, to classify high- and low-frequency cattle calls and to recognize individual cow voices. The models in these two frameworks reached 87.2% and 89.4% accuracy for LF/HF call classification, and 68.9% and 72.5% accuracy for individual cow identification, respectively.

    File description: electronic resource
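The LF/HF call classification step described in this record can be illustrated with a minimal sketch: MFCC summary features fed to a conventional classifier. This is only a stand-in for the paper's deep-learning and explainable-ML frameworks; the labels.csv layout, file names, and all settings below are assumptions.

```python
# Minimal sketch (not the authors' pipeline): classify cattle calls as
# low-frequency (LF) vs high-frequency (HF) from MFCC summary features.
# Assumes a hypothetical labels.csv with columns: wav_path,call_type
import numpy as np
import pandas as pd
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def call_features(wav_path, sr=16000, n_mfcc=20):
    """Summarize one vocalization as the mean and std of its MFCCs."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

labels = pd.read_csv("labels.csv")                       # hypothetical metadata file
X = np.vstack([call_features(p) for p in labels["wav_path"]])
y = labels["call_type"]                                  # "LF" or "HF"

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```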

  2.
    Thesis

    Authors: Rolinat, Amélie

    Thesis advisor: Rigoulot, Simon

    Description: Master's thesis submitted in fulfillment of the requirements for the master's degree in psychology (M.Sc.).
    Context: The prosody of speech, that is, the variations in the tone of voice when speaking, plays a key role in social interactions by providing, among other things, important information related to identity, emotional state, and geographical origin. Prosody is modified by a person's accent, particularly when speaking a foreign language. These accents have a major impact on how speech is recognized, with significant consequences for how the speaker is perceived socially, such as reduced empathy or lower trust. However, it is less clear whether this generally negative impact persists in the context of regional accents, which constitute more subtle variations of the vocal signal. Objective and hypothesis: The aim of this thesis is to understand how French-speaking individuals from different regions (France, Quebec) express and recognize emotional sentences spoken by people from the same region or from another region. Several studies suggest an in-group advantage, the idea that even if emotions can be recognized universally, we recognize the emotional productions of people from our own cultural group better than those of people outside it. Does this advantage persist in the case of regional accents, where two populations share the same language? This question has received very little study and has never been examined for French. We aim to 1) create and validate a bank of emotional sentences spoken in French with accents from France and Quebec, and 2) characterize the acoustic profiles of these emotional productions. Based on the literature (e.g., Mauchand and Pell, 2020), we expect Quebecers (Qc) to show a more expressive emotional prosody than the French (Fr). Method: We created short emotional sentences in 5 emotions (Joy, Sadness, Anger, Pride, Shame), spoken by Quebecois and French actors. This bank of stimuli was validated in an online study with French and Quebecois participants. Using a general mixed model, we analyzed the following vocal parameters: mean and standard deviation of the fundamental frequency, mean and standard deviation of the intensity, mean shimmer, jitter, HNR, Hammarberg index, spectral slope, and sentence duration. Results: Mean fundamental frequency (F0M), intensity, duration, spectral slope, and Hammarberg index differed significantly across emotions and between origins, and we also noted an interaction between speakers' sex and origin. Overall, across the five emotions considered, the Fr spoke with a higher F0M, except for sadness, while the Qc spoke with greater intensity and longer durations for all emotions. On the whole, the Qc expressed the emotions more strongly than the Fr, except for anger. Regarding sex-related differences, Qc men showed a stronger emotional prosody than Fr men, whereas differences between Qc and Fr women were observed only for shame and pride, which are not among the six primary emotions and are more social, cultural emotions.
    Conclusion: We were able to characterize the emotional vocal expression of the Fr and the Qc, who, despite their common language, express themselves in very distinct ways to convey their emotions. These results open up interesting perspectives on intercultural interactions within the same language but across different regions, and confirm speech prosody, particularly emotional prosody, as a genuine marker of regional identity.

    File description: application/pdf
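A minimal sketch of the kind of mixed-model analysis described above, assuming a hypothetical vocal_parameters.csv with one row per utterance; the column names (speaker, origin, sex, emotion, f0_mean) are illustrative, not the thesis data.

```python
# Sketch of a mixed model for one vocal parameter (mean F0), with the
# speaker as a random factor; file and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vocal_parameters.csv")   # hypothetical: speaker, origin, sex, emotion, f0_mean, ...

model = smf.mixedlm("f0_mean ~ origin * sex + emotion",
                    data=df,
                    groups=df["speaker"])  # random intercept per speaker
result = model.fit()
print(result.summary())                    # fixed effects of origin, sex, and emotion
```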

  3.
    Academic Journal

    Source: European Psychiatry, Vol 64 (2021)

    Description: Background: Certain neuropsychiatric symptoms (NPS), namely apathy, depression, and anxiety, have demonstrated great value in predicting dementia progression, potentially representing a window of opportunity for timely diagnosis and treatment. However, sensitive and objective markers of these symptoms are still missing. Therefore, the present study aims to investigate the association between automatically extracted speech features and NPS in patients with mild neurocognitive disorders. Methods: Speech of 141 patients aged 65 or older with a neurocognitive disorder was recorded while they performed two short narrative speech tasks. NPS were assessed with the Neuropsychiatric Inventory. Paralinguistic markers relating to prosodic, formant, source, and temporal qualities of speech were automatically extracted and correlated with NPS. Machine learning experiments were carried out to validate the diagnostic power of the extracted markers. Results: Different speech variables are associated with specific NPS; apathy correlates with temporal aspects, and anxiety with voice quality, and this was mostly consistent between males and females after correction for cognitive impairment. Machine learning regressors are able to extract information from speech features and perform above baseline in predicting anxiety, apathy, and depression scores. Conclusions: Different NPS seem to be characterized by distinct speech features, which are easily extracted automatically from short vocal tasks. These findings support the use of speech analysis for detecting subtypes of NPS in patients with cognitive impairment. This could have great implications for the design of future clinical trials, as this cost-effective method could allow more continuous and even remote monitoring of symptoms.

    File description: electronic resource
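A minimal sketch, under assumed file and column names, of the two analysis steps the abstract mentions: correlating extracted speech markers with NPS scores, and checking that a regressor predicts a score above a naive baseline.

```python
# Sketch only: correlate speech markers with NPS scores and compare a
# regressor to a mean-predicting baseline. All names are assumptions.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.dummy import DummyRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("speech_markers.csv")          # hypothetical per-patient table
features = ["speech_rate", "pause_ratio", "f0_sd", "jitter", "hnr"]

rho, p = spearmanr(df["pause_ratio"], df["apathy_score"])
print(f"pause_ratio vs apathy: rho={rho:.2f}, p={p:.3f}")

X, y = df[features], df["anxiety_score"]
model_mae = -cross_val_score(GradientBoostingRegressor(), X, y,
                             cv=5, scoring="neg_mean_absolute_error").mean()
base_mae = -cross_val_score(DummyRegressor(strategy="mean"), X, y,
                            cv=5, scoring="neg_mean_absolute_error").mean()
print(f"model MAE {model_mae:.2f} vs baseline MAE {base_mae:.2f}")
```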

  4.

    Source: Journal of Veterinary Behavior. 47:93-98

    Description: Vocalization and other behavioral signals are used as tools to assess animal welfare in beef calves. This paper aimed to compare vocal parameters and behavioral signals expressed by beef calves submitted either to an ear-tagging procedure for identification (Effective Tagging, ET) or to a human touch on the ears (Simulated Tagging, ST), considering calf sex and age. A total of 52 taurine beef calves (30 males and 22 females; 91.3 ± 28.1 days old) participated in the study, in Santa Catarina, southern Brazil. Calves were randomly divided into the two treatments (ET and ST) and recorded for 1 minute after the beginning of the stimulus in both situations, as this was the total time the animals were restrained. We analysed the immediate vocal and behavioral responses to the procedure, following the regular handling procedures used on the farm. Vocal data were further analysed using the Audacity software, and behavioral signs were counted. More animals in the ET than in the ST treatment vocalized during the trial (14 vs. 5 calves), with a higher average number of vocal calls (1.7 vs. 0.3 calls/animal), head movements (7.8 vs. 4.0), tail flaps (56.1 vs. 29.8), and leg movements (28.4 vs. 16.4). Male vocalizations were longer than female ones (2.07 vs. 1.61 s) and had a higher fundamental frequency (249.6 vs. 178.6 Hz). Additionally, older calves vocalized with a higher fundamental frequency (241.0 vs. 212.8 Hz) and showed more head movements (6.5 vs. 5.3 occurrences) than younger ones. The results suggest that vocalization characteristics, together with other behavioral signals, may be used as tools to assess pain in beef calves during invasive procedures such as identification handling.
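The sex and age comparisons reported above reduce to two-sample tests on the measured vocal parameters; a hedged sketch with invented values follows.

```python
# Sketch of the kind of two-sample comparison behind the reported sex
# differences in call fundamental frequency; all numbers are invented.
import numpy as np
from scipy import stats

f0_male = np.array([236.0, 251.4, 260.2, 248.9, 255.1])     # Hz, hypothetical
f0_female = np.array([172.3, 181.0, 176.5, 185.2, 178.0])   # Hz, hypothetical

t, p = stats.ttest_ind(f0_male, f0_female, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4f}")

# With small or skewed samples, a non-parametric alternative:
u, p_u = stats.mannwhitneyu(f0_male, f0_female, alternative="two-sided")
print(f"U = {u:.1f}, p = {p_u:.4f}")
```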

  5.
    Academic Journal

    Authors: Valentin Ghisa, Nicoleta Ghisa

    Source: Journal of Defense Resources Management, Vol 7, Iss 1(12), Pp 159-168 (2016)

    Subject terms: correlation analysis, vocal parameters, SPSS, Military Science

    Description: To analyze communication, we need to study the main parameters that describe vocal sounds from the point of view of the efficiency of information transfer. In this paper we analyze the physical quality of "on air" information transfer, based on the audio streaming parameters and on the particular phonetic nature of the human factor. Through this statistical analysis we aim to identify and record the level of correlation between the acoustic parameters and the vocal ones, and the impact that the presence of this cross-correlation can have on the improvement of communication structures.

    File description: electronic resource
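The paper's correlation analysis is done in SPSS; a rough Python equivalent of a cross-correlation table between acoustic (streaming) and vocal parameters could look like the sketch below. The CSV and its column names are assumptions.

```python
# Sketch of a parameter cross-correlation analysis (a rough stand-in for
# the SPSS workflow); the file and column names are assumptions.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("stream_measurements.csv")   # hypothetical per-recording table
acoustic = ["bitrate_kbps", "snr_db", "bandwidth_khz"]
vocal = ["f0_mean", "intensity_db", "speech_rate"]

# Full Pearson correlation matrix across both parameter sets
print(df[acoustic + vocal].corr(method="pearson"))

# A single pair with its p-value
r, p = pearsonr(df["snr_db"], df["intensity_db"])
print(f"SNR vs intensity: r = {r:.2f}, p = {p:.3f}")
```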

  6.

    Source: Logopedics Phoniatrics Vocology. 47:202-208

    Description: Background: As the duration of diabetes increases, various disease-related complications may occur in patients. The main goal of this paper is to compare acoustic and aerodynamic measures of patients with type 2 diabetes mellitus (T2DM) with those of a control group of healthy subjects. Methods: A total of 91 subjects, 51 individuals with type 2 diabetes mellitus (DM group) and 40 healthy volunteers (HV group), participated in the study. Maximum phonation time (MPT) was measured to assess phonatory mechanics. Acoustic voice parameters, including mean fundamental frequency (mean fo), local jitter (Jlocal), absolute jitter (Jabs), local shimmer (Slocal), shimmer in decibels (SdB), and harmonics-to-noise ratio (HNR), were obtained using the Praat software. Results: A statistically significant difference between the groups was found only for Jabs. There were no statistically significant differences in any voice parameter between the HV group and patients with a diabetes duration of 10 years or more and an HbA1c level of 7% or more. However, statistically significant differences in MPT and Slocal were found between patients with neuropathy and the HV group. In addition, a comparison between patients with voice complaints and the HV group showed significant differences for Slocal and SdB. Conclusions: The findings of the present study do not provide strong evidence of a possible effect of DM on the human voice. However, diabetic neuropathy is considered to be a factor affecting voice parameters in the target population. Physicians should pay attention to the acoustic and aerodynamic voice parameters of patients with diabetes, particularly those with neuropathy or voice complaints.
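The acoustic measures named above (mean fo, local jitter, local shimmer, HNR) are standard Praat outputs; the sketch below shows one way to obtain them through the parselmouth wrapper. The file name and analysis settings are typical defaults, not necessarily the study's protocol.

```python
# Sketch: extract mean F0, local jitter, local shimmer, and HNR from a
# sustained-vowel recording via Praat (parselmouth). Settings are common
# defaults and only illustrative.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("sustained_a.wav")          # hypothetical recording

pitch = snd.to_pitch()
f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")

point_process = call(snd, "To PointProcess (periodic, cc)", 75, 500)
jitter_local = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer_local = call([snd, point_process], "Get shimmer (local)",
                     0, 0, 0.0001, 0.02, 1.3, 1.6)

harmonicity = call(snd, "To Harmonicity (cc)", 0.01, 75, 0.1, 1.0)
hnr = call(harmonicity, "Get mean", 0, 0)

print(f"F0 {f0_mean:.1f} Hz, jitter {jitter_local:.4f}, "
      f"shimmer {shimmer_local:.4f}, HNR {hnr:.1f} dB")
```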

  7.

    Source: Evolution and Human Behavior. 42:91-103

    Description: Men's voices may provide cues to overall condition; however, little research has assessed whether health status is reliably associated with perceivable voice parameters. In Study 1, we investigated whether listeners could classify voices belonging to men with either relatively lower or higher self-reported health. Participants rated voices for speaker health, disease likelihood, illness frequency, and symptom severity, as well as attractiveness (women only) and dominance (men only). Listeners were mostly unable to judge the health of male speakers from their voices; however, men rated the voices of men with better self-reported health as sounding more dominant. In Study 2, we tested whether men's vocal parameters (fundamental frequency mean and variation, apparent vocal tract length, and harmonics-to-noise ratio) and aspects of their self-reported health predicted listeners' health and disease-resistance ratings of those voices. Speakers' fundamental frequency (fo) negatively predicted ratings of health. However, speakers' self-reported health did not predict the health ratings made by listeners. In Study 3, we investigated whether separately manipulating two sexually dimorphic vocal parameters, fo and apparent vocal tract length (VTL), affected listeners' health ratings. Listeners rated men's voices with lower fo (but not VTL) as healthier, supporting the findings from Study 2. Women rated voices with lower fo and VTL as more attractive, and men rated them as more dominant. Thus, while both VTL and fo affect dominance and attractiveness judgments, only fo appears to affect health judgments. The results of these studies suggest that, although listeners assign higher health ratings to speakers with more masculine fo, these ratings may not accurately track speakers' self-rated health.
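Study 3's fo manipulation can be approximated with Praat's overlap-add resynthesis, sketched below via parselmouth. Lowering fo by 10% and the file names are illustrative assumptions, and the study's VTL manipulation is not shown.

```python
# Sketch: lower a speaker's fundamental frequency by 10% using Praat's
# overlap-add resynthesis (parselmouth). Not the study's exact procedure;
# the file name and the 0.9 scaling factor are illustrative assumptions.
import parselmouth
from parselmouth.praat import call

sound = parselmouth.Sound("male_voice.wav")                    # hypothetical file
manipulation = call(sound, "To Manipulation", 0.01, 75, 600)   # time step, F0 range

pitch_tier = call(manipulation, "Extract pitch tier")
call(pitch_tier, "Multiply frequencies", sound.xmin, sound.xmax, 0.9)  # -10% fo
call([pitch_tier, manipulation], "Replace pitch tier")

lowered = call(manipulation, "Get resynthesis (overlap-add)")
lowered.save("male_voice_lower_f0.wav", "WAV")
```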

  8.

    Source: Scientific Reports, Vol 9, Iss 1, Pp 1-9 (2019)

    Description: Cattle mother-offspring contact calls encode individual-identity information; however, it is unknown whether cattle are able to maintain vocal individuality when vocalising to familiar conspecifics across other positively and negatively valenced farming contexts. Accordingly, we recorded 333 high-frequency vocalisations from 13 Holstein-Friesian heifers during oestrus and anticipation of feed (putatively positive), as well as during denied feed access and during both physical and physical-and-visual isolation from conspecifics (putatively negative). We measured 21 source-related and nonlinear vocal parameters and performed stepwise discriminant function analyses (DFA). Calls were divided into positive (n = 170) and negative valence (n = 163), with each valence acting as a ‘training set’ to classify calls in the oppositely valenced ‘test set’. Furthermore, MANOVAs were conducted to determine which vocal parameters were implicated in individual distinctiveness. With the putatively positive ‘training set’, the cross-validated DFA correctly assigned 68.2% of the putatively positive calls and 52.1% of the putatively negative calls to the correct individual. With the putatively negative ‘training set’, the cross-validated DFA correctly assigned 60.1% of putatively negative calls and 49.4% of putatively positive calls to the correct individual. All DFAs exceeded chance expectations, indicating that vocal individuality of high-frequency calls is maintained across putatively positive and negative valence, with all vocal parameters except subharmonics responsible for this individual distinctiveness. This study shows that cattle vocal individuality of high-frequency calls is stable across different emotionally loaded farming contexts. Individual distinctiveness is likely to attract social support from conspecifics, and knowledge of these individuality cues could assist farmers in detecting individual cattle for welfare or production purposes.
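A minimal sketch of the cross-valence individual classification described above, with scikit-learn's linear discriminant analysis standing in for the stepwise DFA; the hf_calls.csv table and its columns are assumptions.

```python
# Sketch: train a discriminant classifier on calls from one valence and
# test whether it still identifies the individual cow on the other valence.
# LDA stands in for the paper's stepwise DFA; the data layout is assumed.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

df = pd.read_csv("hf_calls.csv")     # hypothetical: one row per call
features = [c for c in df.columns if c not in ("cow_id", "valence")]

train = df[df["valence"] == "positive"]
test = df[df["valence"] == "negative"]

lda = LinearDiscriminantAnalysis()
within = cross_val_score(lda, train[features], train["cow_id"], cv=5).mean()

lda.fit(train[features], train["cow_id"])
across = lda.score(test[features], test["cow_id"])
print(f"within-valence CV accuracy {within:.2f}, cross-valence accuracy {across:.2f}")
```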

  9.

    Contributors: Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria), Spatio-Temporal Activity Recognition Systems (STARS), Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria), Cognition Behaviour Technology (CobTek), Université Nice Sophia Antipolis (1965 - 2019) (UNS), COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-COMUE Université Côte d'Azur (2015-2019) (COMUE UCA)-Centre Hospitalier Universitaire de Nice (CHU Nice)-Institut Claude Pompidou [Nice] (ICP - Nice)-Université Côte d'Azur (UCA), ki:elements [Saarbrücken], ANR-15-IDEX-0001,UCA JEDI,Idex UCA JEDI(2015)

    Source: European Psychiatry, Cambridge University Press, 2021, 64(1), pp. e64. ⟨10.1192/j.eurpsy.2021.2236⟩

    Description: Background: Certain neuropsychiatric symptoms (NPS), namely apathy, depression, and anxiety, have demonstrated great value in predicting dementia progression, potentially representing a window of opportunity for timely diagnosis and treatment. However, sensitive and objective markers of these symptoms are still missing. Therefore, the present study aims to investigate the association between automatically extracted speech features and NPS in patients with mild neurocognitive disorders. Methods: Speech of 141 patients aged 65 or older with a neurocognitive disorder was recorded while they performed two short narrative speech tasks. NPS were assessed with the Neuropsychiatric Inventory. Paralinguistic markers relating to prosodic, formant, source, and temporal qualities of speech were automatically extracted and correlated with NPS. Machine learning experiments were carried out to validate the diagnostic power of the extracted markers. Results: Different speech variables are associated with specific NPS; apathy correlates with temporal aspects, and anxiety with voice quality, and this was mostly consistent between males and females after correction for cognitive impairment. Machine learning regressors are able to extract information from speech features and perform above baseline in predicting anxiety, apathy, and depression scores. Conclusions: Different NPS seem to be characterized by distinct speech features, which are easily extracted automatically from short vocal tasks. These findings support the use of speech analysis for detecting subtypes of NPS in patients with cognitive impairment. This could have great implications for the design of future clinical trials, as this cost-effective method could allow more continuous and even remote monitoring of symptoms.

  10.

    Source: CoDAS, Vol 33, Iss 4, Article number e20200035, Published: 02 AUG 2021. Sociedade Brasileira de Fonoaudiologia (SBFA)

    Description: Purpose: To identify signs and symptoms of muscle tension dysphonia (MTD) and to analyze vocal parameters, the physical clinical examination of muscle palpation, and the self-perception of vocal symptoms, pain, and vocal fatigue in women with MTD, compared with women with healthy voices. Methods: A cross-sectional study with 45 women (23 with MTD and 22 controls), with a similar median age between groups. Speech-language and otorhinolaryngological evaluations determined the diagnosis of MTD. All participants responded to the Voice Symptoms Scale (VoiSS), Vocal Fatigue Index (VFI), and Nordic Musculoskeletal Questionnaire (NMQ) protocols. They were also assessed with a palpatory evaluation of the perilaryngeal musculature, an auditory-perceptual evaluation, and an acoustic analysis of the voice. The speech sample included the sustained vowels “a”, “i” and “e” and connected speech, recorded in a silent environment and submitted to auditory-perceptual evaluation by three judges. In the acoustic analysis, the fundamental frequency and maximum phonation times were extracted. Results: The MTD group had worse results on the VoiSS, VFI, and NMQ, in addition to greater resistance to palpation and a high vertical laryngeal position. The vocal parameters also showed greater deviation in the MTD group, except for the fundamental frequency. There was no relationship between vocal symptoms, fatigue, or pain and the general degree of dysphonia in the MTD group, indicating important symptoms even in mild or moderate vocal deviations. Conclusion: Women with MTD presented vocal symptoms, vocal fatigue, muscle pain, resistance to palpation, and deviated vocal parameters when compared to vocally healthy women.

    File description: text/html