McGurk effect

The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. It is a compelling illusion in which humans perceive mismatched audiovisual speech as a completely different syllable[1]. Visual information provided by lip reading changes the way the sound is heard[2]. Some people are not susceptible to the effect; these may include people who are used to watching dubbed movies and have therefore learned to ignore visual cues to some extent[3]. A person is more susceptible to the McGurk effect when the auditory information is poor and the visual information is good[4]. People who are more susceptible to the McGurk effect are also better at integrating auditory and visual speech cues[2]. Susceptibility also varies with many other factors, including brain damage and certain disorders.

Background

Discovery

The McGurk effect is sometimes called the McGurk-MacDonald effect. It was first described in a 1976 paper by Harry McGurk and John MacDonald, who discovered it by accident. While conducting a study on how infants perceive language at different developmental stages, McGurk and his research assistant, MacDonald, asked a technician to dub a video with a phoneme other than the one spoken. When the video was played back, both researchers heard a third phoneme rather than the one spoken or the one mouthed in the video.[5]

Experiment

This effect may be experienced when a video of one phoneme's production is dubbed with a sound recording of a different phoneme being spoken. Often, the perceived phoneme is a third, intermediate phoneme. For example, when the syllable /ba-ba/ is spoken over the lip movements of /ga-ga/, the perception is of /da-da/. McGurk and MacDonald originally believed that this resulted from the common phonetic and visual properties of /b/ and /g/[6]. Two types of illusion in response to incongruent audiovisual stimuli have been observed: fusions (auditory 'ba' and visual 'ga' produce 'da') and combinations (auditory 'ga' and visual 'ba' produce 'bga')[7]. The brain tries to provide consciousness with its best guess about what the senses are telling it[8]. In this case there is a contradiction between what the eyes and the ears report[8], and the eyes have it[8].
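
The dubbing manipulation and its two illusion types can be summarized as a mapping from (auditory, visual) syllable pairs to typically reported percepts. The following Python sketch encodes only the two example pairings described above; the table and function names are illustrative, not a general model of audiovisual integration.

```python
# Illustrative sketch of the classic McGurk dubbing pairings. The
# response table encodes only the two example outcomes described in
# the text (fusion and combination); it is not a general model.

FUSION = "fusion"            # a third, intermediate percept emerges
COMBINATION = "combination"  # both consonants are heard in sequence

# (auditory, visual) -> (typically reported percept, illusion type)
MCGURK_PAIRINGS = {
    ("ba", "ga"): ("da", FUSION),        # McGurk & MacDonald (1976)
    ("ga", "ba"): ("bga", COMBINATION),
}

def predicted_percept(auditory, visual):
    """Return the typical percept for a dubbed (auditory, visual) pair.

    Congruent pairs are heard as spoken; incongruent pairs not in the
    table are left unresolved rather than guessed at.
    """
    if auditory == visual:
        return auditory, "congruent"
    return MCGURK_PAIRINGS.get((auditory, visual), (auditory, "unmodeled"))

print(predicted_percept("ba", "ga"))  # ('da', 'fusion')
print(predicted_percept("ga", "ba"))  # ('bga', 'combination')
```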

History

Humans are primarily vision-dominated animals[2], but speech perception is multimodal: it involves information from more than one sensory modality, in particular audition and vision. The McGurk effect arises from early audiovisual integration at the level of phonetic processing in speech perception[6]. The effect is very robust; that is, knowledge about it seems to have little effect on one's perception of it. This is different from certain optical illusions, which break down once one 'sees through' them. Some people, including researchers who have studied the phenomenon for more than twenty years, experience the effect even when they are aware that it is taking place[7][9]. With the exception of people who can identify most of what is being said from speech-reading alone, most people are quite limited in their ability to identify speech from visual-only signals[2]. A more pervasive phenomenon is the ability of visual speech to enhance the intelligibility of auditory speech in noise[2]. Visible speech can even alter the perception of perfectly audible speech sounds when the visual speech stimuli are mismatched with the auditory speech, as the McGurk effect demonstrates[2]. Speech perception is normally regarded as a purely auditory process[2]; however, our use of visual information is immediate, automatic, and to a large degree unconscious[9]. Despite our intuitions, speech is therefore not something we only hear[9]: speech is perceived by seeing, touching, and listening to a face move[9]. The brain is often unaware of the separate sensory contributions to what it perceives[9], so when recognizing speech it does not differentiate between what is seen and what is heard[9].

Importance

The McGurk effect is being used to develop more accurate speech recognition programs that combine a video camera with lip-reading software. It has also been examined in relation to witness testimony: Wareham and Wright's 2005 study showed that inconsistent visual information can change the perception of spoken utterances, suggesting that the McGurk effect may have many influences on everyday perception. Not limited to syllables, the effect can occur in whole words[6][10] and can affect daily interactions without people being aware of it. Research in this area can answer theoretical questions and also has therapeutic and diagnostic relevance for disorders involving the audiovisual integration of speech cues[11].
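
As a rough illustration of how a recognizer might exploit both channels, the sketch below combines per-phoneme scores from an acoustic model and a lip-reading model with a weighted multiplicative rule, in the spirit of the fuzzy logical model of perception cited above[4]. The probability values and function names are assumptions made for illustration; real audiovisual recognizers differ considerably.

```python
# Minimal late-fusion sketch for audiovisual speech recognition.
# The per-phoneme probabilities are made-up values; real recognizers
# derive them from acoustic and lip-reading models.

def fuse(audio_probs, visual_probs, audio_weight=0.5):
    """Combine per-phoneme scores with a weighted multiplicative rule."""
    fused = {}
    for phoneme in audio_probs:
        a = audio_probs[phoneme] ** audio_weight
        v = visual_probs[phoneme] ** (1.0 - audio_weight)
        fused[phoneme] = a * v
    total = sum(fused.values())
    return {p: s / total for p, s in fused.items()}

# Noisy audio favors /b/, clear video favors /g/; the fused scores
# favor the intermediate /d/, mirroring the McGurk fusion percept.
audio  = {"b": 0.5, "g": 0.1, "d": 0.4}   # assumed values
visual = {"b": 0.1, "g": 0.5, "d": 0.4}   # assumed values
scores = fuse(audio, visual)
print(max(scores, key=scores.get))  # 'd' under these assumptions
```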

Brain influences

Damage

Brain hemispheres

Both hemispheres of the brain contribute to the McGurk effect[12]. They work together to integrate speech information received through the auditory and visual senses. A McGurk response is more likely to occur in right-handed individuals when the face has privileged access to the right hemisphere and words to the left hemisphere[12]. In people who have undergone callosotomies, the McGurk effect is still present but significantly slower[12].

Left hemisphere lesions

In people with lesions to the left hemisphere of the brain, visual features often play a critical role in speech and language therapy[11]. People with left hemisphere lesions show a greater McGurk effect than normal controls[11]; visual information strongly influences their speech perception[11]. However, susceptibility to the McGurk illusion is lost when left hemisphere damage results in a deficit in visual segmental speech perception[13].

Right hemisphere damage

People with right hemisphere damage show impairment on both visual-only and audiovisual integration tasks, although they are still able to integrate the information enough to produce a McGurk effect[13]. Integration appears only if the auditory information is audible, with visual stimuli used to improve performance when the auditory signal is impoverished[13]. A McGurk effect is therefore exhibited in people with right hemisphere damage, but it is not as strong as in a normal group.

Disorders

Dyslexia

Dyslexic individuals exhibit a smaller McGurk effect than normal readers of the same chronological age, but the same effect as reading-level-matched younger readers[14]. Dyslexics differ particularly in combination responses, not fusion responses[14]. The smaller McGurk effect may be due to the difficulties dyslexics have in perceiving and producing consonant clusters[14].

Specific language impairment

Children with specific language impairment show a significantly weaker McGurk effect than typically developing children[15]. They use less visual information in speech perception, or pay reduced attention to articulatory gestures, but have no trouble perceiving auditory-only cues[15].

Autism spectrum disorders

Children with autism spectrum disorders (ASD) show a significantly reduced McGurk effect compared with children without ASD[16]. However, if the stimuli are nonhuman (for example, a tennis ball bouncing to the sound of a bouncing beach ball), they score similarly to children without ASD[16]. Younger children with ASD show a greatly reduced McGurk effect, but the difference diminishes with age; as individuals grow up, the effect they show approaches that of individuals without ASD[17].

Language-learning disabilities

Adults with language-learning disabilities exhibit a much smaller McGurk effect than other adults[18], as they are less influenced by visual input[18]. People with poor language skills therefore produce a smaller McGurk effect. One proposed reason for the smaller effect in this population is uncoupled activity between anterior and posterior regions of the brain, or between the left and right hemispheres[18].

Alzheimer’s disease

Patients with Alzheimer’s disease (AD) exhibit a smaller McGurk effect than healthy controls[19]. A reduced size of the corpus callosum often produces a hemispheric disconnection process[19]. Visual stimuli have less influence on patients with AD, which is one reason for the reduced McGurk effect[19].

Schizophrenia

The McGurk effect is less pronounced in schizophrenic individuals than in healthy individuals, although the difference is not statistically significant[20]. Schizophrenia slows the development of audiovisual integration and prevents it from reaching its developmental peak, but no degradation is observed[20]. Schizophrenics are more likely to rely on auditory cues than on visual cues in speech perception[20].

Aphasia

People with aphasia show impaired perception of speech in all conditions (visual-only, auditory-only, and audiovisual), and therefore exhibit a small McGurk effect[21]. The greatest difficulty for aphasics is in the visual-only condition, showing that they rely more on auditory stimuli in speech perception[21].

Factors

Cross-dubbing

Discrepancy in vowel category significantly reduces the magnitude of the McGurk effect for fusion responses[22]. Auditory /a/ tokens dubbed onto visual /i/ articulations are more compatible than the reverse[22]. This could be because /a/ has a wide range of articulatory configurations whereas /i/ is more limited[22], which makes it much easier for subjects to detect discrepancies in the stimuli[22]. /i/ vowel contexts produce the strongest effect, /a/ a moderate effect, and /u/ almost no effect[23].

Mouth visibility

The McGurk effect is stronger when the right side of the mouth is visible[24]. People tend to obtain more visual information from the right side of a speaker's mouth than from the left side or even the whole mouth[24]. This relates to the hemispheric attention factors discussed in the brain hemispheres section above.

Visual distractors

The McGurk effect is weaker when a visual distractor is present and the listener is attending to it[25]; visual attention modulates audiovisual speech perception[25]. Another form of distraction is movement of the speaker: a stronger McGurk effect is elicited when the speaker's face and head are motionless rather than moving[26].

Syllable structure

A strong McGurk effect can be seen for click-vowel syllables, compared with weak effects for isolated clicks[27]. This shows that the McGurk effect can occur in a non-speech environment[27]. Phonological significance is not a necessary condition for the McGurk effect, although it does increase the strength of the effect[27].

Gender

Females show a stronger McGurk effect than males: women show significantly greater visual influence on auditory speech than men for brief visual stimuli, although no difference is apparent for full stimuli[26]. Another aspect of gender is the use of male versus female faces and voices as stimuli; there is no difference in the strength of the McGurk effect between the two[28]. If a male face is dubbed with a female voice, or vice versa, there is still no difference in the strength of the McGurk effect[28]. Knowing that the voice heard belongs to a speaker other than the face seen, even one of a different gender, does not eliminate the McGurk effect[9].

Familiarity

Subjects who are familiar with the faces of the speakers are less susceptible to the McGurk effect than those who are unfamiliar with them[23][2]. In contrast, voice familiarity makes no difference[23].

Expectation

Semantic congruency has a significant impact on the McGurk illusion[29]. The effect is experienced more often and rated as clearer in the semantically congruent condition than in the incongruent condition[29]. When a person expects a certain visual or auditory event on the basis of the preceding semantic information, the McGurk effect is greatly increased[29].

Self influence

The McGurk effect can be observed when the listener is also the speaker or articulator[30]. While looking at oneself in a mirror and articulating visual stimuli while listening to a different auditory stimulus, a strong McGurk effect can be observed[30]. In the other condition, where the listener softly speaks the auditory stimuli while watching another person articulate the conflicting visual gestures, a weaker McGurk effect can still be seen[30].

Temporal synchrony

Strict temporal synchrony is not necessary for the McGurk effect to occur[31]. Subjects are still strongly influenced by auditory stimuli even when the audio lags the visual stimuli by 180 milliseconds, the point at which the effect begins to weaken[31]. There is less tolerance for asynchrony when the auditory stimuli precede the visual stimuli[31]. To produce a significant weakening of the McGurk effect, the auditory stimuli must precede the visual stimuli by 60 milliseconds, or lag by 240 milliseconds[2].
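
The asymmetric tolerance described above amounts to a simple window test on the audio-visual onset offset. The sketch below encodes the boundary values quoted in the text (60 ms audio lead, 240 ms audio lag); the function name and the sharp true/false cutoff are illustrative simplifications of what the studies report as a gradual weakening.

```python
# Asymmetric temporal window for audiovisual integration, using the
# boundary values quoted in the text. Offsets are audio onset minus
# video onset in milliseconds: negative = audio leads, positive = lags.

AUDIO_LEAD_LIMIT_MS = -60   # audio preceding video by 60 ms weakens the effect
AUDIO_LAG_LIMIT_MS = 240    # audio lagging video by 240 ms weakens the effect

def mcgurk_effect_robust(audio_offset_ms):
    """True if the offset falls inside the tolerated integration window."""
    return AUDIO_LEAD_LIMIT_MS < audio_offset_ms < AUDIO_LAG_LIMIT_MS

for offset in (-100, -30, 0, 180, 300):
    print(offset, mcgurk_effect_robust(offset))
# -100 False, -30 True, 0 True, 180 True, 300 False
```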

Physical task diversion

The McGurk effect is greatly reduced when attention is diverted to a tactile task (touching something)[32]. Touch is a sensory perception like vision and audition, so increasing attention to touch decreases the attention paid to the auditory and visual senses.

Gaze

The eyes do not need to fixate on the mouth in order to integrate audio and visual information in speech perception[33]: the McGurk effect is unchanged wherever on the speaker's face the listener focuses[33]. The effect does not appear if the listener focuses beyond the speaker's face[2]. For the McGurk effect to become insignificant, the listener's gaze must deviate from the speaker's mouth by at least 60 degrees[33].

Other languages

People of all languages rely to some extent on visual information in speech perception, but the strength of the McGurk effect varies between languages. Dutch[34], English, Spanish, German, and Italian listeners experience a robust McGurk effect, while it is weaker for Japanese and Chinese listeners[35]. Most cross-language research on the McGurk effect has compared English and Japanese. The effect is smaller in Japanese listeners than in English listeners[34][36][37][38][39][40]. The cultural practice of face avoidance in Japan may influence the McGurk effect, as may the tonal and syllabic structures of the language[34]. This could also be why Chinese listeners are less susceptible to visual cues and, like Japanese listeners, show a smaller effect than English listeners[34]. Studies have also shown that Japanese listeners do not show the developmental increase in visual influence after the age of six that English children do[36][37]. Japanese listeners are better than English listeners at identifying an incompatibility between the visual and auditory stimuli[34][37], possibly because consonant clusters do not exist in Japanese[34][38]. In noisy environments where speech is unintelligible, however, people of all languages resort to visual stimuli and are then equally subject to the McGurk effect[34][38]. The McGurk effect works with speech perceivers of every language for which it has been tested[9].

Hearing impairment

Experiments have been conducted with hearing-impaired individuals as well as individuals who have received cochlear implants. These individuals tend to weigh visual information from speech more heavily than auditory information[41]. In this they do not differ from normal-hearing individuals unless the stimulus is longer than one syllable, such as a word[41]. In the McGurk experiment, cochlear-implant users give the same responses as normal-hearing individuals when an auditory bilabial stimulus is dubbed onto a visual velar stimulus[41]; however, when an auditory dental stimulus is dubbed onto a visual bilabial stimulus, the responses are quite different. The McGurk effect is thus still present in individuals with impaired hearing or cochlear implants, although it differs in some respects.

Infants

By measuring an infant's attention to certain audiovisual stimuli, a response consistent with the McGurk effect can be recorded.[9][42][43][44][2] From just minutes to a couple of days old, infants can imitate adult facial movements, and within weeks of birth they can recognize lip movements and speech sounds.[45] At this point audio and visual information can be integrated, but not yet proficiently.[45] The first evidence of the McGurk effect appears at four months of age,[42][43] with stronger evidence at five months.[9][44][2][46] By habituating an infant to a certain stimulus and then changing the stimulus (or part of it, such as ba-voiced/va-visual to da-voiced/va-visual), a response that simulates the McGurk effect becomes apparent.[9][44] The strength of the McGurk effect shows a developmental pattern, increasing throughout childhood and into adulthood.[43][44]
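
The habituation procedure described above reduces to a looking-time criterion: present a stimulus until attention falls below some fraction of its initial level, then switch the stimulus and test whether attention recovers. The sketch below is a schematic of that logic; the 50% criterion and the looking times are assumed values, not any specific study's protocol.

```python
# Schematic of an infant habituation/dishabituation analysis.
# The 50% criterion and the looking times are illustrative
# assumptions, not values taken from any specific study.

def habituated(looking_times_s, criterion=0.5):
    """True once looking time drops below `criterion` x the initial level."""
    baseline = looking_times_s[0]
    return looking_times_s[-1] < criterion * baseline

def dishabituation(last_habituation_s, test_trial_s):
    """Recovery of attention after the stimulus change (e.g.,
    ba-voiced/va-visual to da-voiced/va-visual) suggests the infant
    perceived the audiovisual change."""
    return test_trial_s > last_habituation_s

habituation_trials = [12.0, 9.5, 7.0, 5.2]          # seconds, assumed
print(habituated(habituation_trials))               # True (5.2 < 6.0)
print(dishabituation(habituation_trials[-1], 9.1))  # True: attention recovered
```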

Bibliography

  • McGurk, H & MacDonald, J (1976); "Hearing lips and seeing voices," Nature, Vol 264(5588), pp. 746–748
  • Wright, Daniel and Wareham, Gary (2005); "Mixing sound and vision: The interaction of auditory and visual information for earwitnesses of a crime scene," Legal and Criminological Psychology, Vol 10(1), pp. 103–108.

References

  1. ^ Nath, A.R. & Beauchamp, M.S. (2011). A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion. NeuroImage, 59(1), 781-787
  2. ^ a b c d e f g h i j k l Calvert, G., Spence, C. & Stein, B. (2004). Handbook of multi sensory processes. Ipswich, MA:MIT Press
  3. ^ Boersma, P. (2006). A constraint based explanation of the McGurk effect
  4. ^ Massaro, D. & Cohen, M. (2000). Tests of auditory-visual integration efficiency within the framework of the fuzzy logical model of perception. Journal of the Acoustical Society of America, 108(2), 784-789
  5. ^ "Haskins Laboratories">"The McGurk Effect: Hearing lips and seeing voices". http://www.haskins.yale.edu/featured/heads/mcgurk.html. Retrieved 2 October 2011. 
  6. ^ a b c Barutchu, A., Crewther, Kiely & Murphy (2008). When /b/ill with /g/ill becomes /d/ill: Evidence for a lexical effect in audiovisual speech perception. European Journal of Cognitive Psychology, 20(1), 1-11. doi:10.1080/09541440601125623
  7. ^ a b Colin, C., Radeau, M. & Deltenre, P. (2011). Top-down and bottom-up modulation of audiovisual integration in speech. European Journal of Cognitive Psychology, 17(4), 541-560
  8. ^ a b c O’Shea, M. (2005). The Brain: A Very Short Introduction. Oxford University Press
  9. ^ a b c d e f g h i j k Rosenblum, L.D. (2010). See what I’m saying: The extraordinary powers of our five senses. New York, NY: W. W. Norton & Company Inc.
  10. ^ Gentilucci, M. & Cattaneo, L. (2005). Automatic audiovisual integration in speech perception. Experimental Brain Research, 167(1), 66-75
  11. ^ a b c d Schmid, G., Thielmann, A. & Ziegler, W. (2009). The influence of visual and auditory information on the perception of speech and non-speech oral movements in patients with left hemisphere lesions. Clinical Linguistics and Phonetics, 23(3), 208-221
  12. ^ a b c Baynes, K., Funnell, M. & Fowler, C. (1994). Hemispheric contributions to the integration of visual and auditory information in speech perception. Perception and Psychophysics, 55(6), 633-641
  13. ^ a b c Nicholson, K., Baum, S., Cuddy, L. & Munhall, K. (2002). A case of impaired auditory and visual speech prosody perception after right hemisphere damage. Neurocase, 8, 314-322
  14. ^ a b c Bastien-Toniazzo, M., Stroumza, A. & Cavé, C. (2009). Audio-visual perception and integration in developmental dyslexia: An exploratory study using the McGurk effect. Current Psychology Letters, 25(3), 2-14
  15. ^ a b Norrix, L., Plante, E., Vance, R. & Boliek, C. (2007). Auditory-visual integration for speech by children with and without specific language impairment. Journal of Speech, Language, and Hearing Research, 50, 1639-1651
  16. ^ a b Mongillo, E., Irwin, J., Whalen, D. & Klaiman, C. (2008). Audiovisual processing in children with and without autism spectrum disorders. Journal of Autism and Developmental Disorders, 38, 1349-1358
  17. ^ Taylor, N., Isaac, C. & Milne, E. (2010). A comparison of the development of audiovisual integration in children with autism spectrum disorders and typically developing children. Journal of Autism and Developmental Disorders, 40, 1403-1411
  18. ^ a b c Norrix, L., Plante, E. & Vance, R. (2006). Auditory-visual speech integration by adults with and without language-learning disabilities. Journal of Communication Disorders, 39, 22-36
  19. ^ a b c Delbeuck, X., Collette, F. & Van der Linden, M. (2007). Is Alzheimer’s disease a disconnection syndrome? Evidence from a crossmodal audio-visual illusory experiment. Neuropsychologia, 45, 3315-3323
  20. ^ a b c Pearl, D., Yodashkin-Porat, D., Nachum, K., Valevski, A., Aizenberg, D., Sigler, M., Weizman, A. & Kikinzon, L. (2009). Differences in audiovisual integration, as measured by McGurk phenomenon, among adult and adolescent patients with schizophrenia and age-matched healthy control groups. Comprehensive Psychiatry, 50, 186-192
  21. ^ a b Youse, K., Cienkowski, K. & Coelho, C. (2004). Auditory-visual speech perception in an adult with aphasia. Brain Injury, 18(8), 825-834
  22. ^ a b c d Green, K.P., & Gerdeman, A. (1995). Cross-modal discrepancies in coarticulation and the integration of speech information: The McGurk effect with mismatched vowels. Journal of Experimental Psychology: Human Perception and Performance, 21(6), 1409-1426
  23. ^ a b c Walker, S., Bruce, V. & O'Malley, C. (1995). Facial identity and facial speech processing: Familiar faces and voices in the McGurk effect. Perception & Psychophysics, 57(8), 1124-1133
  24. ^ a b Nicholls, M., Searle, D., & Bradshaw, J. (2004). Read my lips: Asymmetries in the visual expression and perception of speech revealed through the McGurk effect. Psychological Science, 15(2), 138-141
  25. ^ a b Tiippana, K., Andersen, T.S. & Sams, M. (2004). Visual attention modulates audiovisual speech perception. European Journal of Cognitive Psychology, 16(3), 457-472
  26. ^ a b Irwin, J.R, Whalen, D.H. & Fowler, C.A. (2006). A sex difference in visual influence on heard speech. Perception and Psychophysics, 68(4), 582-592
  27. ^ a b c Brancazio, L., Best, C.T. & Fowler, C.A. (2006). Visual influences on perception of speech and nonspeech vocal-tract events. Language and Speech, 49(1), 21-53
  28. ^ a b Green, K., Kuhl, P., Meltzoff, A. & Stevens, E. (1991). Integrating speech information across talkers, gender, and sensory modality: Female faces and male voices in the McGurk effect. Perception and Psychophysics, 50(6), 524-536
  29. ^ a b c Windmann, S. (2004). Effects of sentence context and expectation on the McGurk illusion. Journal of Memory and Language, 50(1), 212-230
  30. ^ a b c Sams, M., Mottonen, R. & Sihvonen, T. (2005). Seeing and hearing others and oneself talk. Cognitive Brain Research, 23(1), 429-435
  31. ^ a b c Munhall, K., Gribble, P., Sacco, L. & Ward, M. (1996). Temporal constraints on the McGurk effect. Perception and Psychophysics, 58(3), 351-362
  32. ^ Alsius, A., Navarra, J. & Soto-Faraco, S. (2007). Attention to touch weakens audiovisual speech integration. Experimental Brain Research, 183(1), 399-404. doi: 10.1007/s00221-007-1110-1
  33. ^ a b c Paré, M., Richler, C., Hove, M. & Munhall, K. (2003). Gaze behavior in audiovisual speech perception: The influence of ocular fixations on the McGurk effect. Perception and Psychophysics, 65(4), 533-567
  34. ^ a b c d e f g Sekiyama, K. (1997). Cultural and linguistic factors in audiovisual speech processing: The McGurk effect in Chinese subjects. Perception and Psychophysics 59(1), 73-80
  35. ^ Bavo, R., Ciorba, A., Prosser, S. & Martini, A. (2009). The McGurk phenomenon in Italian listeners. Acta Otorhinolaryngologica Italica, 29(4), 203-208
  36. ^ a b Hisanaga, S., Sekiyama, K., Igasaki, T. & Murayama, N. (2009). Audiovisual speech perception in Japanese and English: Inter-language differences examined by event-related potentials. Retrieved from http://www.isca-speech.org/archive_open/avsp09/papers/av09_038.pdf
  37. ^ a b c Sekiyama, K. & Burnham, D. (2008). Impact of language on development of auditory-visual speech perception. Developmental Science 11(2), 306-320
  38. ^ a b c Sekiyama, K. & Tohkura, Y. (1991). McGurk effect in non-English listeners: Few visual effects for Japanese subjects hearing Japanese syllables of high auditory intelligibility. Journal of Acoustical Society of America, 90(4, Pt 1), 1797-1805
  39. ^ Wu, J. (2009). Speech perception and the McGurk effect: A cross cultural study using event-related potentials. Dissertation
  40. ^ Gelder, B., Bertelson, P., Vroomen, J. & Chin Chen, H. (1995). Inter-language differences in the McGurk effect for Dutch and Cantonese listeners. Retrieved from http://www.isca-speech.org/archive/eurospeech_1995/e95_1699.html
  41. ^ a b c Rouger, J., Fraysse, B., Deguine, O. & Barone, P. (2008). McGurk effects in cochlear-implanted deaf subjects. Brain Research, 1188, 87-99
  42. ^ a b Bristow, D., Dehaene-Lambertz, G., Mattout, J., Soares, C., Gliga, T., Baillet, S. & Mangin, J.F. (2009). Hearing faces: How the infant brain matches the face it sees with the speech it hears. Journal of Cognitive Neuroscience, 21(5), 905-921
  43. ^ a b c Burnham, D. & Dodd, B. (2004). Auditory-Visual Speech Integration by Prelinguistic Infants: Perception of an Emergent Consonant in the McGurk Effect. Developmental Psychobiology, 45(4), 204-220
  44. ^ a b c d Rosenblum, L.D., Schmuckler, M.A. & Johnson, J.A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59(3), 347-357
  45. ^ a b Woodhouse, L., Hickson, L. & Dodd, B. (2009). Review of visual speech perception by hearing and hearing-impaired people: Clinical implications. International Journal of Language and Communication Disorders, 44(3), 253-270
  46. ^ Kushnerenko, E., Teinonen, T., Volein, A. & Csibra, G. (2008). Electrophysiological evidence of illusory audiovisual speech percept in human infants. Proceedings of the National Academy of Sciences of the United States of America, 105(32), 11442-11445
