Research Article: Speech-in-speech perception and executive function involvement

Date Published: July 14, 2017

Publisher: Public Library of Science

Author(s): Marcela Perrone-Bertolotti, Maxime Tassin, Fanny Meunier, Jyrki Ahveninen.


The present study investigated the link between speech-in-speech perception capacities and four executive function components: response suppression, inhibitory control, switching and working memory. We constructed a cross-modal semantic priming paradigm using a written target word and a spoken prime word embedded in one of two concurrent auditory sentences (a cocktail party situation). The prime and target were semantically related or unrelated. Participants had to perform a lexical decision task on visual target words while listening to only one of the two spoken sentences. The attention of the participant was manipulated: the prime appeared either in the attended sentence or in the ignored one. In addition, we evaluated the executive function abilities of participants (switching cost, inhibitory-control cost and response-suppression cost) and their working memory span. Correlation analyses were performed between the executive and priming measurements. Our results showed a significant interaction between attention and semantic priming. We observed a significant priming effect in the attended but not in the ignored condition. Only priming effects obtained in the ignored condition were significantly correlated with some of the executive measurements, and no correlation between priming effects and working memory capacity was found. Overall, these results confirm, first, the role of attention in the semantic priming effect and, second, the involvement of executive functions in speech-in-noise understanding capacities.

Partial Text

Speech perception rarely occurs in an optimal acoustic environment. Ecological speech perception arises in noisy contexts such as traffic or simultaneous speakers' voices in the background, typically referred to as speech-in-speech (SiS) situations or the "Cocktail Party" [1]. These SiS situations degrade the information conveyed by speech, hampering the understanding of linguistic messages. The aim of the present study was to investigate the ability to access words' semantic information in SiS situations.

This study was approved by the Ethics Committee Sud-Est II and was conducted according to the principles expressed in the Declaration of Helsinki. No minors were included in the study.

The aim of this study was to investigate the role of EFs in semantic processing in speech-in-speech situations. To do so, we constructed a cross-modal semantic priming paradigm in which we asked participants to pay attention to only one of two simultaneously pronounced sentences and, at the same time, to perform a lexical decision task on a visual target item. The auditory prime (embedded in one of the two sentences) and the visual target words were semantically related or unrelated. Thus, semantic activation was evaluated through a cross-modal semantic priming effect. Participants' attention was manipulated by the gender of the speaking voice: participants were instructed to pay attention to one sentence, pronounced by either a male or a female voice, which either did or did not contain the prime, and to ignore the other sentence. In addition, we measured each participant's executive function capacities (response-suppression cost, switching cost, inhibitory-control cost, and WM capacity) using a modified version of the anti-saccade task proposed by Bialystok et al. (2006) [42] and the digit-span task from a French version of the WAIS-IV. To establish the link between EFs and semantic priming effects, we performed correlation analyses between the semantic priming effects and EF performances.
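The analysis logic described above can be sketched in a few lines. This is an illustrative sketch only, not the authors' actual analysis code: the data are simulated, the sample size and RT values are hypothetical, and the priming effect is computed in the conventional way (mean RT for unrelated pairs minus mean RT for related pairs, so a positive value indicates facilitation) before being correlated with a hypothetical executive-function cost measure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants

# Simulated per-participant mean lexical-decision RTs (ms)
rt_related = rng.normal(620, 50, n)
rt_unrelated = rt_related + rng.normal(25, 15, n)  # related targets answered faster

# Semantic priming effect: RT(unrelated) - RT(related); positive = facilitation
priming_effect = rt_unrelated - rt_related

# Hypothetical executive-function measure (e.g., a switching cost in ms)
switching_cost = rng.normal(80, 30, n)

# Pearson correlation between priming effect and the EF cost
r = np.corrcoef(priming_effect, switching_cost)[0, 1]
print(f"Pearson r = {r:.2f}")
```

In the study itself, one such correlation was computed per EF measure (switching, inhibitory-control and response-suppression costs, plus WM span) against the priming effects from the attended and ignored conditions separately.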



