Speech perception at the interface of neurobiology and linguistics. Abstract Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate, as output, discrete representations that make contact with the lexical representations stored in long-term memory.
Because the perceptual objects recognized by the speech perception system enter into subsequent linguistic computation, the format used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue for, and provide neurobiological and psychophysical evidence in support of, the following research programme. Figure 1 schematizes the operative representations in speech perception in the context of the present proposal. Memory Disorders Following Focal Neocortical Damage. Comparative Aspects of Studies of Amnesia.
In recent years, important advances have been made in reconciling some of the conflicting evidence regarding the contribution of the medial temporal lobe and hippocampal structures to long-term memory in man compared with laboratory animals.
Despite the severe amnesic state seen clinically in patients, it has emerged that, in both animals and man, damage to these structures leaves learning and retention of certain types of long-term memory task intact. The evidence from man suggests that the amnesic syndrome preserves those forms of long-term memory that do not depend on the operation of a 'mediational' memory system.
In particular, items stored in semantic memory can be facilitated by repetition, and simple associations can be formed if no mediating links are required, but impairments are seen in tasks in which memory depends upon the stored benefits of matching, reordering and comparing. Morphology, language and the brain: the decompositional substrate for language comprehension. Abstract This paper outlines a neurocognitive approach to human language, focusing on inflectional morphology and grammatical function in English.
Taking as a starting point the selective deficits for regular inflectional morphology of a group of non-fluent patients with left hemisphere damage, we argue for a core decompositional network linking left inferior frontal cortex with superior and middle temporal cortex, connected via the arcuate fasciculus. This network handles the processing of regularly inflected words (such as joined or treats), which are argued not to be stored as whole forms and which require morpho-phonological parsing in order to segment complex forms into stems and inflectional affixes. This parsing process operates early and automatically upon all potential inflected forms and is triggered by their surface phonological properties.
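The core idea of segmenting a surface form into a stem plus an inflectional affix can be illustrated with a toy sketch. This is only an illustration of the decompositional logic, not the authors' neurocognitive model; the affix inventory and miniature lexicon below are hypothetical examples.

```python
# Toy sketch of decompositional parsing: try to segment a potentially
# inflected English word form into stem + inflectional affix.
# The affix list and stem lexicon are illustrative assumptions only.

INFLECTIONAL_AFFIXES = ["ed", "ing", "s"]    # simplified inventory
KNOWN_STEMS = {"join", "treat", "walk"}      # stand-in for the stored lexicon

def parse(form):
    """Return (stem, affix) if the form decomposes, else (form, None)."""
    for affix in INFLECTIONAL_AFFIXES:
        if form.endswith(affix):
            candidate = form[: -len(affix)]
            if candidate in KNOWN_STEMS:     # stem must be lexically valid
                return candidate, affix
    return form, None                        # treated as a whole form

print(parse("joined"))   # ('join', 'ed')
print(parse("treats"))   # ('treat', 's')
print(parse("sing"))     # ('sing', None): 's' is not a stored stem
```

Note how the parse is triggered purely by surface phonological (here, orthographic) properties: "sing" superficially ends in "-ing" and is tried as a candidate, but fails because no valid stem results, mirroring the claim that parsing applies automatically to all potential inflected forms.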
Human language, in contrast, stands in a more ambiguous and less direct relationship to its neurobiological precursors. The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. Abstract This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding.
Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no ‘magic moment’ when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone.
The 'division of labour' model of eye evolution. Abstract The ‘division of labour’ model of eye evolution is elaborated here.
We propose that the evolution of complex, multicellular animal eyes started from a single, multi-functional cell type that existed in metazoan ancestors. This ancient cell type had at least three functions: light detection via a photoreceptive organelle, light shading by means of pigment granules and steering through locomotor cilia. Located around the circumference of swimming ciliated zooplankton larvae, these ancient cells were able to mediate phototaxis in the absence of a nervous system. This precursor then diversified, by cell-type functional segregation, into sister cell types that specialized in different subfunctions, evolving into separate photoreceptor cells, shading pigment cells (SPCs) or ciliated locomotor cells. Lexical-semantic priming effects during infancy. Abstract When and how do infants develop a semantic system of words that are related to each other?
We investigated word–word associations in early lexical development using an adaptation of the inter-modal preferential looking task where word pairs (as opposed to single target words) were used to direct infants’ attention towards a target picture. Two words (prime and target) were presented in quick succession after which infants were presented with a picture pair (target and distracter). Prime–target word pairs were either semantically and associatively related or unrelated; the targets were either named or unnamed.
Experiment 1 demonstrated a lexical–semantic priming effect for 21-month-olds but not for 18-month-olds: unrelated prime words interfered with linguistic target identification for 21-month-olds. Models of lexical representation assume the existence of an interconnected network. Biological Constraints on Orthographic Representation. Language processing in the natural world. Abstract The authors argue that a more complete understanding of how people produce and comprehend language will require investigating real-time spoken-language processing in natural tasks, including those that require goal-oriented unscripted conversation.
One promising methodology for such studies is monitoring eye movements as speakers and listeners perform natural tasks. Three lines of research that adopt this approach are reviewed: (i) spoken word recognition in continuous speech, (ii) reference resolution in real-world contexts, and (iii) real-time language processing in interactive conversation. In each domain, results emerge that provide insights which would otherwise be difficult to obtain. These results extend and, in some cases, challenge standard assumptions about language processing. The motivation for these measures comes from two observations.
We now review three streams of research. Apraxia and the Neurophysiology of Motor Control. Basic auditory processes involved in the analysis of speech sounds. The Psycholinguistic Analysis of Acquired Dyslexias: Some Illustrations. Three approaches to the neuropsychology of cognitive function are distinguished: the neuroanatomical (where the primary concern is to correlate particular disorders of cognitive function with particular lesion sites), the 'general-cognitive' (in which associations are sought between impairments of performance on specific cognitive tasks and general disorders of broadly defined cognitive processes) and the model-building (in which one attempts to interpret the pattern of impairments and preservations of some cognitive function produced by brain damage in terms of an explicit model of the normal operation of this function).
I claim that the model-building approach to the neuropsychology of cognitive function must take precedence over the other two. One reason for this is that any disorder of cognitive function can only be defined with reference to some model of that function. Introduction. The perception of speech: from sound to meaning. The paper by Young (2008) describes the representation of speech sounds in the auditory nerve and at higher levels in the central nervous system, focusing especially on vowel sounds.
The experimental data are derived mainly from animal models (especially the cat), so some caution is needed in interpreting the results in terms of the human auditory system. However, it seems probable that at least the early stages of auditory processing, as measured in the auditory nerve, are similar across all mammals. Neural representation of spectral and temporal information in speech.
Abstract Speech is the most interesting and one of the most complex sounds dealt with by the auditory system. Introduction. Sensory learning: from neural mechanisms to rehabilitation. The last decade has seen a spectacular resurgence of scientific interest and advances in our understanding of both the basic neural mechanisms and the applications of sensory learning. Given the diverse nature of this problem and the proliferation of data relating to it, we have now reached a critical point where drawing together the various strands of investigation would be extremely beneficial.
Different levels of investigation have the potential to inform each other and create situations where step changes in understanding can be made. A complementary systems account of word learning: neural and behavioural evidence. Abstract In this paper we present a novel theory of the cognitive and neural processes by which adults learn new spoken words. This proposal builds on neurocomputational accounts of lexical processing and spoken word recognition and complementary learning systems (CLS) models of memory. We review evidence from behavioural studies of word learning that, consistent with the CLS account, show two stages of lexical acquisition: rapid initial familiarization followed by slow lexical consolidation. These stages map broadly onto two systems involved in different aspects of word learning: (i) rapid, initial acquisition supported by medial temporal and hippocampal learning, and (ii) slower neocortical learning achieved by offline consolidation of previously acquired information.
Neural mechanisms of recovery following early visual deprivation. Abstract Natural patterned early visual input is essential for the normal development of the central visual pathways and the visual capacities they sustain. Without visual input, the functional development of the visual system stalls not far from the state at birth, and if input is distorted or biased the visual system develops in an abnormal fashion resulting in specific visual deficits.
Monocular deprivation, an extreme form of biased exposure, results in large anatomical and physiological changes in the territory innervated by the two eyes in primary visual cortex (V1), and leads to a loss of vision in the deprived eye reminiscent of that in human deprivation amblyopia. We review work that points to a special role for binocular visual input in the development of V1 and vision. Our unique approach has been to provide animals with mixed visual input each day, consisting of episodes of normal and biased (monocular) exposure. Perception and apperception in autism: rejecting the inverse assumption. Abstract In addition to those with savant skills, many individuals with autism spectrum conditions (ASCs) show superior perceptual and attentional skills relative to the general population. These superior skills and savant abilities raise important theoretical questions, including whether they develop as compensations for other underdeveloped cognitive mechanisms, and whether one skill is inversely related to another weakness via a common underlying neurocognitive mechanism.
The processing of audio-visual speech: empirical and neural bases. Abstract In this selective review, I outline a number of ways in which seeing the talker affects auditory perception of speech, including, but not confined to, the McGurk effect. To date, studies suggest that all linguistic levels are susceptible to visual influence, and that two main modes of processing can be described: a complementary mode, whereby vision provides information more efficiently than hearing for some under-specified parts of the speech stream, and a correlated mode, whereby vision partially duplicates information about dynamic articulatory patterning. The evolution of eyes and visually guided behaviour. Use of auditory learning to manage listening problems in children. Abstract This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life.
Neural specializations for speech and pitch: moving beyond the dichotomies. Abstract The idea that speech processing relies on unique, encapsulated, domain-specific mechanisms has been around for some time. Another well-known idea, often espoused as being in opposition to the first proposal, is that processing of speech sounds entails general-purpose neural mechanisms sensitive to the acoustic features that are present in speech.
Here, we suggest that these dichotomous views need not be mutually exclusive. Specifically, there is now extensive evidence that spectral and temporal acoustical properties predict the relative specialization of the right and left auditory cortices, and that this is a parsimonious way to account not only for the processing of speech sounds, but also for non-speech sounds such as musical tones. The power of speech is such that it is often considered nearly synonymous with being human.