Cognitive, Auditory, & Neural Bases of Language & Speech Seminar: "Sound symbolism and cross-modal correspondences in spoken language"
My research seeks to develop an understanding of human communication. I am interested in how listeners interpret a talker's intentions, thoughts, and feelings, using both linguistic and non-linguistic aspects of spoken language. Speech is a highly complex signal. Speakers convey information intentionally through the syllables, words, and sentences that they utter. In addition, however, an enormous amount of information is conveyed through a speaker's individual vocal characteristics and style. Spoken language requires that the listener integrate what is being said with how it is being said. Understanding the interplay between the perception of the words and sentences of spoken language and the processing of non-linguistic properties of speech is essential for a complete account of spoken communication.
A fundamental assumption regarding spoken language is that the relationship between the sound structure of spoken words and semantic or conceptual meaning is arbitrary. Although exceptions to this arbitrariness assumption have been reported (e.g., onomatopoeia), such instances are thought to be special cases, with little relevance to spoken language and reference more generally. In this talk, I will review a series of findings suggesting not only that non-arbitrary mappings between sound and meaning exist in spoken language, but also that listeners are sensitive to these correspondences cross-linguistically and that non-arbitrary mappings have functional significance for language processing and word learning. These findings suggest that a general sensitivity to cross-modal perceptual similarities may underlie the ability to match word to meaning in spoken language.