The intelligibility and comprehensibility of French-accented English in an academic context
Abstract: This study investigates the intelligibility and comprehensibility of French-accented speech in an academic context. L1 French and L1 English listeners heard speech samples in three different accent conditions: a marked French accent, an unmarked French accent and a Southern British English (SBE) accent. They were asked to perform two word recognition tasks and a speech comprehension task, and to provide subjective ratings of certainty, comprehensibility, cognitive load and accentedness. Results showed that for English listeners sharing the same first language (L1) had a facilitating effect, whereas varying the levels of French-accentedness had a detrimental effect. French listeners, however, did not find French-accented speech significantly more intelligible and comprehensible than SBE-accented speech. These findings deepen our knowledge of the relationship between intelligibility, comprehensibility, accentedness and cognitive load.
- Research Article
- 10.1121/1.4933752
- Sep 1, 2015
- The Journal of the Acoustical Society of America
Speech recognition in noise is affected by the accents of both speakers and listeners, but it is not clear how overall accuracy is linked to the underlying perceptual and lexical processes. The present study investigated speech recognition for two native-accent groups (Southern British English and Glaswegian) and one non-native group (Spanish learners of English). Listeners were tested behaviorally on speech recognition in noise, and using EEG measures of vowel perception (cortical evoked potentials to vowel spectral change) and lexical processing (N400). As expected, Southern British English listeners were most accurate for Southern British English speech, Glaswegians were accurate for both Glaswegian and Southern British English speech, and Spanish speakers had particular difficulty with Glaswegian. The EEG results demonstrated differences between groups in terms of both vowel and lexical processing. In particular, Glaswegian listeners differed in their lexical processing for the two native accents despite having similar speech-in-noise accuracy, and Spanish speakers appeared to use contextual information less than the other two groups. The results begin to demonstrate how differences at a perceptual level can be compensated for during lexical processing, in ways that are not apparent purely from recognition accuracy scores.
- Research Article
6
- 10.3390/brainsci11121631
- Dec 10, 2021
- Brain Sciences
Recent studies have shown that people make more utilitarian decisions when dealing with a moral dilemma in a foreign language than in their native language. Emotion, cognitive load, and psychological distance have been put forward as explanations for this foreign language effect. The question that arises is whether a similar effect would be observed when processing a dilemma in one’s own language but spoken by a foreign-accented speaker. Indeed, foreign-accented speech has been shown to modulate emotion processing, to disrupt processing fluency and to increase psychological distance due to social categorisation. We tested this hypothesis by presenting 435 participants with two moral dilemmas, the trolley dilemma and the footbridge dilemma, online, either in a native accent or a foreign accent. In Experiment 1, 184 native Spanish speakers listened to the dilemmas in Spanish recorded by a native Spanish speaker, a British English native speaker, or a Cameroonian native speaker. In Experiment 2, 251 Dutch native speakers listened to the dilemmas in Dutch in their native accent or in a British English, Turkish, or French accent. Results showed an increase in utilitarian decisions for the Cameroonian- and French-accented speech compared to the Spanish or Dutch native accent, respectively. When collapsing all the speakers from the two experiments, a similar increase in the foreign accent condition compared with the native accent condition was observed. This study is the first demonstration of a foreign accent effect on moral judgements, and despite the variability in the effect across accents, the findings suggest that a foreign accent, like a foreign language, is a linguistic context that modulates (neuro)cognitive mechanisms, and consequently, impacts our behaviour.
More research is needed to follow up on this exploratory study and to understand the influence of factors such as emotion reduction, cognitive load, psychological distance, and speaker’s idiosyncratic features on moral judgments.
- Research Article
16
- 10.1371/journal.pone.0181709
- Jul 24, 2017
- PLoS ONE
This study investigates whether listeners’ experience with a second language learned later in life affects their use of fundamental frequency (F0) as a cue to word boundaries in the segmentation of an artificial language (AL), particularly when the cues to word boundaries conflict between the first language (L1) and second language (L2). F0 signals phrase-final (and thus word-final) boundaries in French but word-initial boundaries in English. Participants were functionally monolingual French listeners, functionally monolingual English listeners, bilingual L1-English L2-French listeners, and bilingual L1-French L2-English listeners. They completed the AL-segmentation task with F0 signaling word-final boundaries or without prosodic cues to word boundaries (monolingual groups only). After listening to the AL, participants completed a forced-choice word-identification task in which the foils were either non-words or part-words. The results show that the monolingual French listeners, but not the monolingual English listeners, performed better in the presence of F0 cues than in the absence of such cues. Moreover, bilingual status modulated listeners’ use of F0 cues to word-final boundaries, with bilingual French listeners performing less accurately than monolingual French listeners on both word types but with bilingual English listeners performing more accurately than monolingual English listeners on non-words. These findings not only confirm that speech segmentation is modulated by the L1, but also newly demonstrate that listeners’ experience with the L2 (French or English) affects their use of F0 cues in speech segmentation. This suggests that listeners’ use of prosodic cues to word boundaries is adaptive and non-selective, and can change as a function of language experience.
- Dataset
- 10.1037/e512682013-391
- Jan 1, 2007
In situations where the pressure to perform well is high or where the desire to excel is maximal, people may perform at a suboptimal level, a now well-established phenomenon called “choking under pressure” (Baumeister, 1984; Beilock & Carr, 2001; Beilock, Kulp, Holt, & Carr, 2004). In academic contexts, talents are often selected through high-stakes examinations that maximize the possibility for choking. Truly competent individuals may therefore be overlooked. Consistent with this, Beilock and Carr (2005) predicted, and found (on the most difficult part of an arithmetical task), that the individuals most likely to choke under pressure are those who, in the absence of pressure, have the highest potential for success (as indexed by their higher Working Memory Capacity, or WMC). Beilock and Carr then reasoned that, relative to low-WMC subjects (LWMs), pressure-induced anxiety hinders high-WMC subjects (HWMs) by consuming the WMC they use in low-pressure circumstances to devise more complex (i.e., resource-demanding) strategies and produce superior performance (hereafter referred to as the “cognitive load” hypothesis). Generalizing Beilock and Carr’s (2005) findings from arithmetic problems to analytical reasoning or fluid intelligence (Gf, assessed with the standard Raven Matrices), Gimmig, Huguet, Caverni, and Cury (2006) found that state anxiety increased among HWMs alone. Although increased anxiety mediated HWMs’ performance decrement under pressure (on the most difficult part of the Raven Matrices), the lack of pressure-induced anxiety among LWMs compromised Beilock and Carr’s (2005) “cognitive load” hypothesis. Gimmig et al. therefore concluded that choking occurs only in individuals high in WMC, due to their anxiety-ridden perceptions of high-stakes situations at the onset, and hypothesized a causal role of perceived difficulty. So far, however, we simply do not know whether, among HWMs, pressure effectively impedes the same working memory processes that a cognitive load does.
To examine this point, Gimmig, Huguet, Caverni, Barrouillet, & Lepine (under review) had participants perform a computer-paced WMC task (the Reading Letter Span, or RLS; Barrouillet, Bernardin, & Camos, 2004) while pressure and cognitive load (the pace of the RLS) were manipulated orthogonally. If evaluative pressure impedes executive control (i.e., operates as a cognitive load), then heightened pressure should increase the same types of errors (i.e., omissions and, to a lesser extent, transpositions; see Unsworth & Engle, 2006) as a faster pace would (i.e., a heightened cognitive load; see Barrouillet et al., 2004). Errors at recall were analysed following Unsworth and Engle (2006). As predicted, a higher cognitive load increased the number of omissions and transpositions, an index of reduced executive control (Unsworth & Engle, 2006). However, these errors were unaffected by our anxiety-provoking manipulation (as indexed by reported state anxiety). This suggests that anxiety had no impact on the executive control of attention. On the other hand, pressure interacted with cognitive load to influence a third category of errors (i.e., intrusions), an index of a deficit in the decision/monitoring processes (Unsworth & Engle, 2006). This effect was statistically mediated by a change in the perceived difficulty of the task. Based on these new findings, Gimmig et al. (under review) derived several conclusions: 1) in line with recent electrophysiological data (Hajcak, McDonald, & Simons, 2003), an anxiety-provoking situation (here, performance pressure) does not necessarily lead to a reduction in WMC; 2) in line with Gimmig et al. (2006), but at odds with Beilock and Carr’s (2005) “cognitive load” hypothesis, performance pressure and cognitive load do interact to modify one’s perception of the situation’s requirements; and 3) under these circumstances (high pressure/high cognitive load), HWMs will modify their criteria for emitting unsure responses, perhaps as a means of adjusting to the perceived social requirements.
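The three-way error taxonomy above (omissions, transpositions, intrusions; Unsworth & Engle, 2006) can be sketched as a simple recall scorer. This is an illustrative reconstruction, not the authors' analysis code; the use of `None` to mark a blank recall position and the example items are assumptions.

```python
def classify_recall_errors(presented, recalled):
    """Tally serial-recall errors into the three categories described in
    Unsworth & Engle (2006): omissions (no response at a position),
    intrusions (a response that was never presented), and transpositions
    (a presented item recalled in the wrong serial position).
    Illustrative sketch only; assumes equal-length lists with None for blanks.
    """
    errors = {"omission": 0, "transposition": 0, "intrusion": 0}
    for position, item in enumerate(recalled):
        if item is None:
            errors["omission"] += 1          # nothing recalled here
        elif item not in presented:
            errors["intrusion"] += 1         # item was never on the list
        elif presented[position] != item:
            errors["transposition"] += 1     # right item, wrong position
    return errors
```

For example, if the letters F-K-P-R were presented and the participant recalled F, a blank, R, and then Z, the scorer counts one omission, one transposition, and one intrusion.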
References
Barrouillet, P., Bernardin, S., & Camos, V. (2004). Time constraints and resource sharing in adults' working memory spans. Journal of Experimental Psychology: General, 133(1), 83.
Baumeister, R. F. (1984). Choking under pressure: Self-consciousness and paradoxical effects of incentives on skillful performance. Journal of Personality & Social Psychology, 46(3), 610-620.
Beilock, S. L., & Carr, T. H. (2001). On the fragility of skilled performance: What governs choking under pressure? Journal of Experimental Psychology: General, 130(4), 701-725.
Beilock, S. L., & Carr, T. H. (2005). When high-powered people fail: Working memory and choking under pressure in math. Psychological Science, 16(2), 101-105.
Beilock, S. L., Kulp, C. A., Holt, L. E., & Carr, T. H. (2004). More on the fragility of performance: Choking under pressure in mathematical problem solving. Journal of Experimental Psychology: General, 133(4), 584-600.
Gimmig, D., Huguet, P., Caverni, J.-P., & Cury, F. (2006). Choking under pressure and working-memory capacity: When performance pressure reduces fluid intelligence (Gf). Psychonomic Bulletin & Review, 13(6), 1005-1010.
Hajcak, G., McDonald, N., & Simons, R. F. (2003). Anxiety and error-related brain activity. Biological Psychology, 64(1), 77.
Unsworth, N., & Engle, R. W. (2006). A temporal-contextual retrieval account of complex span: An analysis of errors. Journal of Memory and Language, 54(3), 346-362.
- Research Article
41
- 10.1016/j.aap.2013.04.038
- May 25, 2013
- Accident Analysis & Prevention
Concurrent processing of vehicle lane keeping and speech comprehension tasks
- Research Article
1
- 10.1177/02676583231181472
- Aug 24, 2023
- Second Language Research
During spoken word processing, native (L1) listeners use allophonic variation to predictively rule out word competitors and speed up word recognition. There is some evidence that second language (L2) learners develop an awareness of allophonic distributions in their L2, but whether they use their knowledge to facilitate word recognition online, like native listeners do, is largely unknown. In an offline gating experiment and an online eye-tracking experiment in the visual world paradigm, we compare advanced French learners of English and a control group of L1 English listeners on their processing of English vowel nasalization during spoken word recognition. In the gating task, the French listeners’ performance did not differ from that of the English ones. The eye-tracking results show that French listeners used the allophonic distribution in the same way as English listeners, although they were not as fast. Together, these results reveal that L2 learners can develop novel processing strategies using sounds in allophonic distribution to facilitate spoken word recognition.
- Research Article
1
- 10.1121/1.4777966
- May 1, 2002
- The Journal of the Acoustical Society of America
Recent findings show that discrimination of the English /d–ð/ contrast does not differ for English and French infants (6–8-month-olds and 10–12-month-olds), although English adults clearly outperform French adults on this contrast, which is not phonemic in French. With respect to age effects, English listeners’ perception of /d–ð/ improves between infancy and adulthood, whereas French listeners’ perception remains unchanged [Polka et al., J. Acoust. Soc. Am. 109, 2190–2200 (2001)]. In the present study, we tested monolingual English, monolingual French, and early English–French bilingual 4-year-olds on the same contrast using the same stimuli and procedures to clarify when facilitative effects of language experience emerge and whether they are affected by bilingualism. Four findings are reported. First, a language effect (English>French) is evident by 4 years of age. Second, among native (English) listeners facilitative effects are evident by 4 years of age (infants<4-year-olds<adults). Third, among non-native (French) listeners discrimination performance is comparable across the age groups tested (infants=4-year-olds=adults). Fourth, bilingual 4-year-olds’ performance is virtually identical to that of their French-speaking peers, revealing a strong effect of bilingualism on the perception of this contrast. Several factors contributing to these findings will be discussed.
- Research Article
- 10.1097/01.hj.0000922292.15379.9d
- Feb 23, 2023
- The Hearing Journal
Do Visual Cues Aid Comprehension of a Dialogue?
- Research Article
2
- 10.1080/09658416.2012.670243
- May 1, 2013
- Language Awareness
The present research applies the concepts of attention, awareness, and noticing to a previously unresolved strand of inquiry: accent marks in L2 (second language) French. Previous research found that learners who typed accented words had better recall of the accent marks than those who wrote the same words by hand. Sturm suggested that it may have been the increased attention to accented letters in the typing group that led to better performance. The typing groups in Gascoigne and Sturm referred to instructions in order to type accented letters, while the handwriting groups copied the items with pen and paper. The present study was designed to make the handwriting group more aware of the accented letters. Participants, grouped by class section, practised the target items either by writing by hand, with accented letters in a different colour ink, or typing the accented letters using alt+numeric codes. They completed recognition and dictation post-tests, immediately following and one week after treatment. Two-tailed t-tests show no significant difference between the groups, suggesting that the increased attention to accented letters led to greater accuracy on the post-test tasks in both conditions.
- Research Article
2
- 10.1515/applirev-2018-0093
- Oct 22, 2019
- Applied Linguistics Review
This study investigated L2 English listeners’ processing of formulas, in terms of the impact of two different factors inherent in these formulas. One was the formulas’ level of coherence and the other was the formulas’ level of frequency. High-coherence formulas are considered to have specialized meanings, while high-frequency formulas are considered to be less specialized in meaning, commonly being composed of relatively simple words that often co-occur in speech. In previous research, in an academic context, Ellis, Simpson-Vlach and Maynard (2008. Formulaic language in native and second-language speakers: Psycholinguistics, corpus linguistics, and TESOL. TESOL Quarterly 41. 375–396. doi:10.1002/j.1545-7249.2008.tb00137.x) had found that a high level of coherence was the main factor facilitating L1 users’ receptive processing of formulas, while a high level of frequency was the main factor facilitating advanced L2 users’ receptive processing of formulas. Ellis, Simpson-Vlach and Maynard (2008), from a usage-based perspective, attributed these differences mainly to the greater length of time the L1 users had spent in learning formulas. Consequently, the current study investigated whether these processing differences between the two user groups in an academic context (seen as a possible developmental trend) would be apparent between proficient and less-proficient L2 listeners in a relatively less-challenging, general English environment. The study was considered important for possibly signaling the types of aural receptive formulas to foreground by L2 general English instructors and materials designers. The research examined two groups of L2 learners, one advanced and the other intermediate level, while they listened to four texts.
A paused transcription technique elicited the listeners’ identification of targeted segments from the texts, many of which were classified through corpus analysis as containing more/less-coherent formulas or more/less-frequent formulas. Examination of how these formula types were processed by both proficiency groups, however, did not find major differences between the groups in their processing of the different formula types, and thus little evidence of a possible formula developmental trend.
- Research Article
2
- 10.1121/1.4877267
- Apr 1, 2014
- The Journal of the Acoustical Society of America
While directional asymmetries are ubiquitous in cross-language studies of vowel perception, their underlying mechanisms have not been established. One hypothesis is that listeners display a universal perceptual bias favoring vowels with greater formant frequency convergence, or focalization (Polka and Bohn, 2011). A second, but not mutually exclusive, hypothesis is that listeners are biased toward prototypical vowel exemplars in their native language (Kuhl, 1993). In a test of these hypotheses, English listeners discriminated synthesized English /u/ and French /u/ vowels presented in pairs. While the French /u/ tokens exhibit greater formant convergence (between F1 and F2), English listeners have previously been shown to rate the English /u/ tokens as “better” instances of the category (Molnar, 2010). Preliminary results demonstrate that the degree of focalization affects vowel discrimination. When discriminating vowel changes presented in the direction going from the more focal (French) to less focal (English) /u/ vowels, English listeners' reaction times were slower, relative to the same changes presented in the reverse direction. These results suggest that listeners treat the more focal vowels as perceptual reference points. Additional data collection with French listeners is ongoing. The implications of these findings for theories of vowel perception will be discussed.
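Focalization in the sense used above (greater convergence between adjacent formants) is often operationalized as the F1-F2 distance on an auditory scale: the smaller the distance, the more focal the vowel. The sketch below is an illustration of that idea, not the study's method; it uses Traunmüller's Hz-to-Bark conversion, and the formant values in the test case are rough, hypothetical figures for French versus English /u/, not measurements from the study.

```python
def bark(f_hz):
    """Convert a frequency in Hz to the Bark auditory scale
    (Traunmüller, 1990): z = 26.81 * f / (1960 + f) - 0.53."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def f1_f2_distance(f1_hz, f2_hz):
    """F1-F2 distance in Bark. A smaller value means the two formants
    converge more, i.e., the vowel is more focal in Polka and Bohn's sense."""
    return bark(f2_hz) - bark(f1_hz)
```

On hypothetical values such as F1 = 250 Hz, F2 = 750 Hz for a fronted-F2 French-like /u/ and F1 = 300 Hz, F2 = 1500 Hz for an English-like /u/, the French-like token yields the smaller F1-F2 Bark distance, consistent with its description here as the more focal vowel.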
- Research Article
53
- 10.1016/j.jml.2016.12.002
- Dec 24, 2016
- Journal of Memory and Language
Cognitive load makes speech sound fast, but does not modulate acoustic context effects
- Research Article
23
- 10.1016/j.wocn.2009.09.001
- Jan 1, 2010
- Journal of Phonetics
Perception of initial obstruent voicing is influenced by gestural organization
- Research Article
3
- 10.1016/j.acalib.2023.102705
- Mar 21, 2023
- The Journal of Academic Librarianship
Exploring influencing mechanism of herd behavior in academic information use: The perspective of cognitive load
- Research Article
38
- 10.1109/embc.2015.7318693
- Aug 1, 2015
- Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
Despite technological innovations, noisy environments still constitute a challenging and stressful situation for word recognition by hearing-impaired subjects. Evaluating the mental workload imposed by noisy environments on word recognition in prelingually deaf children is therefore of paramount importance, since it could affect the speed of the learning process during the school period. The aim of the present study was to investigate different electroencephalographic (EEG) power spectral density (PSD) components (in the theta, 4-8 Hz, and alpha, 8-12 Hz, frequency bands) to estimate a mental workload index under different noise conditions during a word recognition task in prelingually deaf children, a population not yet investigated with respect to workload during auditory tasks. In a pilot study, a small group of prelingually deaf children underwent EEG recordings during an auditory task consisting of listening to, and subsequently recognizing, words under different noise conditions. Results showed that, in the pre-word-listening phase, frontal EEG PSD in the theta band and the ratio of frontal EEG PSD in the theta band to parietal EEG PSD in the alpha band (the workload index, IWL) reached their highest values in the most demanding noise condition. In addition, in the phase preceding the forced-choice word task, the highest parietal EEG PSD in the alpha band and the highest IWL values were found in the presumably simplest condition (noise emitted on the side of the subject's deaf ear). These results suggest the prominence of theta-band EEG PSD activity in the pre-word-listening phase. In addition, a more challenging noise situation in the pre-choice phase may be so "over-demanding" that it fails to enhance either alpha power or the IWL relative to the already demanding "simple" condition.
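The workload index described above is a ratio of band powers: frontal theta-band (4-8 Hz) PSD over parietal alpha-band (8-12 Hz) PSD. The sketch below illustrates that computation on single-channel signals using a plain periodogram; the channel selection, sampling rate, and PSD estimator are assumptions (the study's own pipeline is not specified in the abstract, and published work typically uses more robust estimators such as Welch's method).

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power spectral density within [lo, hi) Hz, estimated with a
    simple single-window periodogram. Illustrative only."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def workload_index(frontal_eeg, parietal_eeg, fs=256.0):
    """IWL as described above: frontal theta-band (4-8 Hz) power divided
    by parietal alpha-band (8-12 Hz) power. Higher values are taken to
    index higher mental workload."""
    theta_frontal = band_power(frontal_eeg, fs, 4.0, 8.0)
    alpha_parietal = band_power(parietal_eeg, fs, 8.0, 12.0)
    return theta_frontal / alpha_parietal
```

On this reading, a rise in frontal theta power, a drop in parietal alpha power, or both, pushes the IWL up, which is why the index peaked in the most demanding noise condition of the pre-word-listening phase.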