Browsing by Author "Seedat, Ammaara"
Item: Audio-visual speech perception amongst bilingual speakers (University of the Witwatersrand, Johannesburg, 2024-03)
Seedat, Ammaara; Ramona, Kunene Nicolas

Why does a face articulating the syllable [ga], presented alongside an auditory /ba/ syllable, result in a perceived /da/ syllable? Language is more than words, and the human face carries enormous communicative significance as a mode of nonverbal communication. Audio-visual speech perception relies on multisensory integration, in which auditory and visual information are combined; this integration can be viewed as an involuntary process that occurs automatically. The audio-visual benefit effect occurs when auditory and visual information are synchronized, that is, when the visual cue is congruent with its auditory counterpart. The literature on audio-visual speech perception states that the magnitude of visual influence on audio-visual speech perception varies not only across languages but also across developmental stages; the reasons underlying these cross-linguistic and developmental differences, however, remain unclear.

With bilingualism becoming the norm rather than the exception around the world (Grosjean & Byers-Heinlein, 2018), strong research foundations for spoken-word comprehension in bilinguals have been established. These foundations are grounded in classical frameworks from monolingual research and formalised in models such as the Bilingual Model of Lexical Access (BIMOLA) (Léwy, 2008) and the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS) (Shook & Marian, 2013). Bilinguals may show increased audio-visual integration when using their less dominant language, because lower familiarity with a language creates greater reliance on the visual channel to make sense of auditory input. This study therefore examines the extent to which young adult bilinguals benefit from audio-visual speech, and how different listening conditions affect how L2 bilinguals perceive audio-visual speech.

Participants were L1 English speakers learning L2 isiZulu, aged between 17 and 29 years. Each participant was presented with four conditions: an audio-only condition, a visual-only condition, an audio-visual condition, and an incongruent condition. In the audio-only condition the stimuli were purely auditory; in the visual-only condition the stimuli were presented without an auditory stimulus; the audio-visual stimulus combined an auditory and a visual stimulus; and the incongruent stimulus was created by dubbing the audio of one word over the video of another word. The results of the study highlight the importance of audio-visual speech in late L2 bilingual acquisition, and suggest that differences in the phonetics and phonology of the two language systems may play an important role in how late L2 bilinguals perceive speech under different conditions.