SRCLD Presentation Details
Title
Speech perception as a window into language processing: Real-time spoken word recognition, specific language impairment, and CIs
Author(s)
Bob McMurray - University of Iowa

SRCLD Info
SRCLD Year: 2011
Presentation Type: Invited Speaker
Presentation Time: (na)
Abstract
In traditional models of language, perceptual processes code speech into phonological categories, which are then used to access meaningful units like words. This assumption has had important implications for research on individual differences in populations such as people with specific language impairment (SLI) or cochlear implant (CI) users. It has led to explanations that focus on particular levels of the system (e.g., phonological representations) or on performance differences at specific levels. However, there has been theoretical debate over the relationship between phonology and word recognition, casting doubt on the validity of this approach to individual differences. Moreover, within work on clinical populations, accounts based on a single level of analysis or outcome measure have been difficult to link to other levels of processing.
This talk offers a new approach. I start by presenting evidence from eye-tracking and event-related potential (ERP) measures with normal listeners that challenges the traditional framework. These studies show that fine-grained detail that should have been discarded by phonological categorization survives to affect spoken word recognition, and that speech perception and word recognition are tightly integrated. Given these findings, assessing individual differences at the level of spoken word recognition may offer unique insight by bypassing debates over the nature and/or presence of phonological representations and focusing on the output of speech perception as it interfaces with meaning. Crucially, word recognition also offers excellent measures of real-time processing (e.g., eye-tracking) that are closely tied to mechanistic models of processing.
I next present a series of studies on real-time word recognition in listeners with SLI. These studies used eye-tracking to assess competition between candidate words in real time, as children with SLI and typically developing (TD) children performed a simple referent selection task. While accuracy was high in both groups, the eye movements of the SLI listeners revealed a marked increase in activation for lexical competitors. Computational modeling with the TRACE model further demonstrated that this pattern does not derive from deficits in perceptual or phonological processes. Rather, differences in lexical activation dynamics may offer a better account of these deficits, and this is confirmed by a second study manipulating perceptual cues in the same paradigm. I will then discuss two ways in which such results could help account for the wide range of deficits shown by listeners with SLI.
Finally, I present preliminary results applying this paradigm to CI users. Here we see a strikingly different pattern of online processing: CI users show some differences in the degree of competition, but unlike listeners with SLI, they also show a marked delay in the timecourse of processing. Critically, these results illustrate how understanding individual differences in online processing may yield more insight into the nature of such differences.