
Fanny Meunier

Research Director (DR) - CNRS

Latest HAL publications

for the idHal "fanny-meunier":

Title: Whistled Phoneme Categorization: the Effect of Vowel Space Range
Authors: Anaïs Tran Ngoc, Julien Meyer, Fanny Meunier
Reference: 2023
Publication year: 2023
Abstract: We explore whistled vowel categorization by untrained listeners, focusing specifically on the impact of the different vocalic frequency ranges of two whistlers (for the vowels /i/, /e/, /a/, /o/) and the effect of training on performance. The experiment included stimuli showing inter-individual and intra-individual variations in production. In the analyses, we looked at the effect of whistler identity and at the learning effect across the experiment for the studied vowels. The results showed an effect of the whistler, in which the larger vocalic range led to improved categorization, and highlighted the robustness of the vowel recognition hierarchy. There was no general learning effect, except for one vowel and for the whistler with the narrower vocalic range. This study provides insight into listeners' representation of the vowel space in non-tonal languages.
Document type: Preprint, working paper
Full text and BibTeX: https://hal.science/hal-04293388/file/exling-2023-proceedings-TranNgoc-Meyer-Meunier-PreprintFIn2.pdf

Title: Prosodic cues to word boundaries in a segmentation task assessed using reverse correlation
Authors: Alejandro Osses, Elsa Spinelli, Fanny Meunier, Etienne Gaudrain, Léo Varnet
Reference: JASA Express Letters, 2023, 3 (9), pp.095205-1. ⟨10.1121/10.0021022⟩
Publication year: 2023
Abstract: When listening to speech sounds, listeners are able to exploit acoustic features that mark the boundaries between successive words, the so-called segmentation cues. These cues are typically investigated by directly manipulating features that are hypothetically related to segmentation. The current study uses a different approach based on reverse correlation, in which the stimulus manipulations rest on minimal experimental assumptions. The method was evaluated using pairs of phonemically identical sentences in French, whose prosody was changed on each trial by introducing random f0 trajectories and segment durations. Our results support a prominent perceptual role of the f0 rise and vowel duration at the beginning of content words.
Document type: Journal article
Full text and BibTeX: https://hal.science/hal-04121858/file/Osses%2C%20Spinelli%2C%20Meunier%2C%20Gaudrain%2C%20Varnet.pdf

Title: The Effect of Whistled Vowels on Whistled Word Categorization for Naive Listeners
Authors: Anaïs Tran Ngoc, Fanny Meunier, Julien Meyer
Reference: Interspeech 2023 - 24th Annual Conference of the International Speech Communication Association, ISCA, Aug 2023, Dublin, Ireland. pp.3063-3067, ⟨10.21437/interspeech.2023-1967⟩
Publication year: 2023
Abstract: In this paper, we explore whistled word perception by naive French speakers. In whistled words of non-tonal languages, vowels are transposed to relatively stable pitches, which contrast with consonant movements or interruptions. Previous studies of whistled speech with naive listeners have tested vowels and consonants separately. Other studies of spoken word recognition have found that vowels and consonants contribute differently to intelligibility, with the role of vowels being highly mediated by context. Here, naive participants recognized disyllabic whistled words above chance, and vowels were shown to contribute differently from consonants. Focusing on the role of vowels, we found different scales of performance across the vowels tested, mediated by their position in the word. We also highlight the importance of the vowels' relative frequency difference (called the 'interval') within the word.
Document type: Conference paper
Full text and BibTeX: https://hal.science/hal-04373273/file/tranngoc23_interspeech.pdf

Title: Raw and post-processed data for the study of prosodic cues to word boundaries in a segmentation task using reverse correlation
Authors: Alejandro Osses, Elsa Spinelli, Fanny Meunier, Etienne Gaudrain, Léo Varnet
Reference: 2023, ⟨10.5281/zenodo.7865424⟩
Publication year: 2023
Abstract: The current dataset provides all the stimuli (folder ../01-Stimuli/), raw data (folder ../02-Raw-data/) and post-processed data (folder ../03-Post-proc-data/) used in the prosody reverse-correlation study entitled "Prosodic cues to word boundaries in a segmentation task using reverse correlation" by the same authors. The listening experiment used one-interval trials with target words of the structure l'aX (option 1) and la'X (option 2). The experiment was designed and implemented with the fastACI toolbox under the name 'segmentation'. A between-subject design was used with a total of 47 participants, who each evaluated one of five conditions: LAMI (N=16), LAPEL (N=18), LACROCH (N=5), LALARM (N=5), or LAMI_SHIFTED (N=3). More details are given in the related publication (to be submitted to JASA-EL in May 2023).
Document type: Other scientific publication
BibTeX access

Title: Neural correlates of acoustic and semantic cues during speech segmentation in French
Authors: Maria del Mar Cordero, Ambre Denis-Noël, Elsa Spinelli, Fanny Meunier
Reference: Interspeech 2022, Sep 2022, Incheon, South Korea. pp.4058-4062, ⟨10.21437/Interspeech.2022-10986⟩
Publication year: 2022
Abstract: Natural speech is highly complex and variable. In particular, spoken language, in contrast to written language, has no clear word boundaries. Adult listeners can exploit different types of information, such as acoustic and semantic cues, to segment the continuous stream. However, the relative weight of these cues when they co-occur remains to be determined. Behavioural tasks are not conclusive on this point, as they focus participants' attention on certain sources of information and thus bias the results. Here, we looked at the processing of homophonous utterances such as l'amie vs la mie (both /lami/), which include fine acoustic differences and whose meaning changes depending on segmentation. To examine the perceptual resolution of such ambiguities when semantic information is available, we measured the online processing of sentences containing such sequences in an ERP experiment involving no active task. In the congruent condition, semantic information matched the acoustic signal of the word amie, while in the incongruent condition the semantic information carried by the sentence and the acoustic signal led to different lexical candidates. No clear neural markers for the use of acoustic cues were found. Our results suggest a preponderant weight of semantic information over acoustic information during natural spoken sentence processing.
Document type: Conference paper
Full text and BibTeX: https://hal.science/hal-03916475/file/cordero22_interspeech.pdf

Title: Sentence repetition span in primary progressive aphasia and Alzheimer's disease: insights from preliminary results
Authors: Seçkin Arslan, Alexandra Plonka, Aurélie Mouton, Justine Lemaire, Magali Cogordan Payne, Guillaume Sacco, Valeria Manera, Auriane Gros, Fanny Meunier
Reference: Frontiers in Communication, 2022
Publication year: 2022
Document type: Journal article
Full text and BibTeX: https://shs.hal.science/halshs-03851824/file/fcomm-07-934487.pdf

Title: Theta activity and phase resetting during perception of French homophonous utterances
Authors: Noelia Do Carmo-Blanco, Michel Hoen, Elsa Spinelli, Fanny Meunier
Reference: Language, Cognition and Neuroscience, 2022, 37 (2), pp.154-164. ⟨10.1080/23273798.2021.1950786⟩
Publication year: 2022
Abstract: Speech perception involves segmenting a continuous stream of speech into its word components. This can be challenging in the case of homophonous utterances that differ only in non-contrastive subphonemic features. Yet the speech perception system seems able to discriminate subphonemic deviation in homophonous utterances, since such deviation has been shown to elicit a mismatch response (MMN). Here, we focused on the oscillatory correlates, namely phase resetting and power, of the processing of non-contrastive subphonemic deviation in language. An oddball task that took natural intra-speaker variability into account was used. Subphonemic deviance elicited intertrial phase coherence (ITC) differences in the theta band at Fz during the time window of the MMN. No differences in power were found. This suggests that the processing of subphonemic deviation in speech signals, reflected by the MMN, might rely on mechanisms of phase resetting. ITC might facilitate the synchronous firing of the functional networks involved in the processing of subphonemic deviance.
Document type: Journal article
Full text and BibTeX: https://hal.univ-grenoble-alpes.fr/hal-03348382/file/tf_MMN_HAL.pdf

Title: Acoustic and semantic processes during speech segmentation in French
Authors: Mar Cordero-Rull, Stéphane Pota, Elsa Spinelli, Fanny Meunier
Reference: 12th International Conference of Experimental Linguistics, Oct 2021, Athens, Greece. ⟨10.36505/ExLing-2021/12/0014/000487⟩
Publication year: 2021
Abstract: We designed two experiments testing listeners' perceptual capacities during online segmentation of homophonic word boundaries while processing sentential information. In French, listeners often use variations in fine acoustic indices to detect word beginnings. We measured event-related potentials (ERPs) evoked by phonemically identical sequences, such as l'affiche ("the poster") and la fiche ("the sheet"), both [lafiʃ], which were contained in either congruent or incongruent sentences. Results showed that although listeners can detect acoustic variations in homophonic sequences, these may not be salient enough when contextual information is also present. Shifting attention from sentence meaning (Task 1) to lexical information (Task 2) enhanced the listeners' perception of fine-grained acoustic details. Thus, top-down processes are likely to modulate speech perception and segmentation.
Document type: Conference paper
Full text and BibTeX: https://hal.science/hal-03508038/file/12_0014_000487.pdf

Title: The Dawn of the Human-Machine Era: A forecast of new and emerging language technologies
Authors: Dave Sayers, Rui Sousa-Silva, Sviatlana Höhn, Lule Ahmedi, Kais Allkivi-Metsoja, Dimitra Anastasiou, Štefan Beňuš, Lynne Bowker, Eliot Bytyçi, Alejandro Catala, Anila Çepani, Rubén Chacón-Beltrán, Sami Dadi, Fisnik Dalipi, Vladimir Despotovic, Agnieszka Doczekalska, Sebastian Drude, Karën Fort, Robert Fuchs, Christian Galinski, Federico Gobbo, Tunga Gungor, Siwen Guo, Klaus Höckner, Petralea Láncos, Tomer Libal, Tommi Jantunen, Dewi Jones, Blanka Klimova, Eminerkan Korkmaz, Sepesy Maučec Mirjam, Miguel Melo, Fanny Meunier, Bettina Migge, Barbu Mititelu Verginica, Aurélie Névéol, Arianna Rossi, Antonio Pareja-Lora, Christina Sanchez-Stockhammer, Aysel Şahin, Angela Soltan, Claudia Soria, Sarang Shaikh, Marco Turchi, Sule Yildirim Yayilgan
Reference: 2021
Publication year: 2021
Abstract: The 'human-machine era' is coming soon: a time when technology is integrated with our senses, not confined to mobile devices. The hardware will move from our hands into our eyes and ears. Intelligent eyewear and earwear will be able to translate another person's words, and make it look and sound as if they were talking to you in your language. Technology will mediate what we see, hear and say, in real time. In addition, we will be having increasingly complex conversations with smart devices. This is not science fiction or marketing hype. These devices are currently in prototype, set for widespread consumer adoption in the coming years. All this will disrupt and transform our use and understanding of language. Are we ready? A new EU 'COST Action' (https://cost.eu) research network, 'Language in the Human-Machine Era' (LITHME), with members from 52 countries, explores how such technological advances are likely to change our everyday communication, and ultimately language itself. As a first major collaborative effort, LITHME has published an open-access report, 'The Dawn of the Human-Machine Era: A Forecast of New and Emerging Language Technologies': https://doi.org/10.17011/jyx/reports/20210518/1. Accessible to a wide audience, the report brings together insights from specialists in the fields of language technology and linguistic research. The forecast report was authored by 52 researchers and edited by LITHME's Chair Dave Sayers (University of Jyväskylä, Finland), Vice-Chair Sviatlana Höhn (University of Luxembourg), and the Chair of LITHME's Computational Linguistics working group, Rui Sousa Silva (University of Porto, Portugal). It describes the current state and probable futures of various language technologies for written, spoken, haptic and signed modalities of language. The publication is intended to be both authoritative and accessible, aimed at language and technology professionals but also at policymakers and the wider public. It describes how a range of new technologies will soon transform the way we use language, while discussing the software powering these advances behind the scenes, as well as consumer devices like Augmented Reality eyepieces and immersive Virtual Reality spaces. The report also shines a light on critical issues such as inequality of access to technologies, privacy and security, and new forms of deception and crime. It is the result of a unique collaboration, as LITHME brings together people from different areas of language research who would not otherwise meet or collaborate. LITHME has eight thematic working groups, and members from each working group have contributed to the report.
Document type: Other scientific publication
Full text and BibTeX: https://hal.science/hal-03230287/file/The%20Forecast%20report%202021%20%28May%2019%2008.28%29.pdf

Title: Influence of homophone processing during auditory language comprehension on executive control processes: A dual-task paradigm
Authors: Samuel El Bouzaïdi Tiali, Elsa Spinelli, Fanny Meunier, Richard Palluel-Germain, Marcela Perrone-Bertolotti
Reference: PLoS ONE, 2021, 16 (7), pp.e0254237. ⟨10.1371/journal.pone.0254237⟩
Publication year: 2021
Abstract: In the present preregistered study, we evaluated the possibility of a shared cognitive mechanism during verbal and non-verbal tasks, and therefore the involvement of domain-general cognitive control during language comprehension. We hypothesized that a behavioral cost would be observed during a dual task combining difficult verbal and non-verbal processing. Specifically, to test this claim, we designed a dual-task paradigm involving an auditory language comprehension task (sentence comprehension) and a non-verbal Flanker task (including congruent and incongruent trials). We manipulated sentence ambiguity and evaluated whether the ambiguity effect modified behavioral performance in the non-verbal Flanker task. Under the assumption that ambiguous sentences are more difficult to process than unambiguous ones, we expected non-verbal Flanker task performance to be impaired only when difficult language processing was performed simultaneously. This would be reflected by a performance cost on incongruent Flanker items only during ambiguous sentence presentation. Conversely, we observed a facilitatory effect for incongruent Flanker items during ambiguous sentences, suggesting better non-verbal inhibitory performance when an ambiguous sentence was simultaneously processed. Exploratory data analysis suggests that this effect is related not only to more difficult language processing but also to the previous (n-1) Flanker item. Indeed, results showed that incongruent n-1 Flanker items led to a facilitation of the incongruent synchronized Flanker items only when ambiguous sentences were conjointly presented. This result, even if it needs to be corroborated in future studies, suggests that the recruitment of executive control mechanisms facilitates subsequent executive control engagement during difficult language processing. The present study suggests a common executive control mechanism during difficult verbal and non-verbal tasks.
Document type: Journal article
Full text and BibTeX: https://hal.univ-grenoble-alpes.fr/hal-03344842/file/ElBuzaidiThialietal2021_Plos.pdf

  • More results in the HAL collection of the BCL laboratory
  • See all results on the HAL platform