Psycholinguistics: Language Processing, Speech Perception & Word Learning
1. Introduction to Psycholinguistics
Linguistics: structure of language
Psycholinguistics: how people use language
Competence vs. Performance: ideal knowledge vs. actual use
Descriptive vs. Prescriptive: what people do vs. what is prescribed
Metalinguistics: reflection on one's own language
Levels of Language:
Phonology (sounds), Morphology (word parts), Lexicon (words), Syntax (structure)
Structure-dependent rules: meaning depends on syntax, not only word order
Example: “The butcher’s brother cut himself” → “himself” refers to “brother”.
2. Language and Communication
Formalism: language as a computational system
Functionalism: language as a communication system
Hockett’s 13 Design Features: only human language has them all
FLB (Broad): features shared with other species
FLN (Narrow): features unique to humans → recursion (embedding: “I know that you know…”)
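Recursion is easy to make concrete with a toy generator. The minimal Python sketch below is illustrative only (the base clause and depth values are invented); it shows how one finite embedding rule yields unboundedly deep “X know that …” structures:

```python
# Toy illustration (not from the notes): recursion as unbounded clausal embedding.

def embed(depth):
    """Return a clause with `depth` levels of embedding under "know that"."""
    if depth == 0:
        return "it is raining"              # base, non-embedded clause
    subject = "I" if depth % 2 else "you"   # alternate speakers at each level
    return f"{subject} know that {embed(depth - 1)}"

for d in range(4):
    print(embed(d))
# depth 0: it is raining
# depth 3: I know that you know that I know that it is raining
```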
3. Language and Thought
Sapir-Whorf Hypothesis:
Determinism: language determines thought
Relativism: language influences thought
Whorf's Evidence:
“Empty” gasoline drums: the label “empty” invited careless behavior around explosive vapors
Claims about Apache and Hopi time were criticized as anecdotal
Pirahã numbers: speakers can perceive quantities but may lack number words → language aids but does not always create concepts
Color Terms:
More color words → better memory for colors
Right visual field (left hemisphere) advantage for categorical color perception
Counterfactuals:
Bloom: claimed Taiwanese lack counterfactual syntax and therefore cannot reason counterfactually
Au: argued this was poor translation, not necessarily a syntax absence
4. Speech Production
Phonetics: speech sounds
Articulatory: how sounds are produced
Acoustic: physical sound waves
Aural: how sounds are perceived
Phonology: how sounds pattern and combine
Phones vs. Phonemes:
Phones: any speech sound
Phonemes: contrastive sounds (e.g., /b/ vs. /p/)
Minimal pairs: differ by one phoneme (bat/pat)
Allophones: same phoneme, different contexts (e.g., aspirated [pʰ] in “pit” vs. unaspirated [p] in “spit”)
Consonant features:
Voicing: do the vocal folds vibrate?
Place of Articulation (POA): where is the sound produced?
Manner of Articulation (MOA): how is airflow modified?
Examples:
/s/ vs /f/ → place of articulation differs
/s/ vs /z/ → voicing differs
/z/ vs /n/ → manner differs
Orthography ≠ Phonology: spelling does not equal sound
Coarticulation: phonemes overlap; articulation is context-dependent
Speech is not strictly discrete: phonemes, words, and sentences blend dynamically
5. Speech Perception
Segmentation problem: speech is continuous but perceived as discrete units
Lack of invariance: the same phoneme can have different acoustics
Perceptual constancy: stable perception despite variation
Perception Models
Motor Theory (Liberman): we perceive articulatory gestures
Auditory Theory: perception is based on acoustic signal alone
Fuzzy Logic Model:
Bottom-up: match incoming sounds
Top-down: context helps select the best interpretation
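One concrete way the fuzzy logic model can combine evidence is multiplicative (Massaro-style) integration of bottom-up and top-down support. A minimal sketch, with invented support values (0.5, 0.9), purely for illustration:

```python
# Sketch of fuzzy-logical integration: each source gives support in [0, 1]
# for one alternative against its competitor; values below are invented.

def flmp(bottom_up, top_down):
    """Combine acoustic (bottom-up) and contextual (top-down) support."""
    match = bottom_up * top_down
    mismatch = (1 - bottom_up) * (1 - top_down)
    return match / (match + mismatch)

print(flmp(0.5, 0.9))   # ambiguous acoustics + strong context -> 0.90
print(flmp(0.9, 0.5))   # clear acoustics + neutral context    -> 0.90
```

Either a clear signal or a strong context can push the percept to the same decisive value, which is the intuition behind combining the two streams.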
Perception Phenomena
Categorical Perception (CP): sharp phonemic boundaries (identification-curve sketch after this list)
Occurs primarily for native-language phonemes
Observed in chinchillas and infants as well (not motor-specific)
“Use it or lose it” by around age 1
Coarticulation/assimilation: context changes the perceived sound
Duplex Perception: combined processing streams can occur
McGurk Effect: visual information influences auditory perception
Lexical context (Ganong Effect): word knowledge biases sound identification
Semantic context (Phoneme Restoration): top-down processes fill in missing sounds
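As noted for categorical perception above, identification along a continuum is sigmoid rather than linear. The sketch below is illustrative only; the voice-onset-time boundary (25 ms) and slope are invented, not measured values:

```python
# Toy categorical-perception curve: labeling a VOT continuum as /b/ vs /p/.
import math

def p_identify_as_p(vot_ms, boundary=25.0, steepness=0.5):
    """Probability of labeling a stimulus as /p/ given its VOT in ms."""
    return 1.0 / (1.0 + math.exp(-steepness * (vot_ms - boundary)))

for vot in range(0, 61, 10):
    print(f"VOT {vot:2d} ms -> P(/p/) = {p_identify_as_p(vot):.2f}")
# Equal 10 ms steps within a category barely change labeling;
# the step that crosses the boundary (20 -> 30 ms) flips it.
```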
6. Words and Meaning
Sense vs. Reference: meaning vs. what it refers to
Intension: defining properties
Extension: the set of examples or instances
Ostensive definition: pointing or illustration
Grounding problem: how words connect to the world
Fuzzy vs. clear boundaries: degrees of category membership
Concept Theories
Classical view: necessary and sufficient features
Prototype (Rosch): best example; family resemblance
Exemplar model: compare to stored examples (prototype vs. exemplar sketch after this list)
Semantic networks: meaning as linked nodes
Embodied semantics: meaning tied to sensorimotor systems (mirror neurons, affordances)
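To see how the prototype and exemplar views differ in practice, here is a toy comparison; the two-feature “bird” stimuli and the exponential similarity function are invented assumptions for illustration:

```python
# Toy prototype vs. exemplar categorization over made-up two-feature stimuli.
import math

birds = [(0.9, 0.8), (0.8, 0.9), (0.85, 0.7)]   # stored category members
new_item = (0.3, 0.2)                           # atypical candidate

# Prototype view: compare the new item to the category average.
prototype = tuple(sum(f) / len(birds) for f in zip(*birds))
proto_sim = math.exp(-math.dist(new_item, prototype))

# Exemplar view: sum similarity to every stored example individually.
exemplar_sim = sum(math.exp(-math.dist(new_item, ex)) for ex in birds)

print(f"prototype similarity:       {proto_sim:.3f}")
print(f"summed exemplar similarity: {exemplar_sim:.3f}")
```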
7. Word Learning
Problem: map sound to meaning
Component theory: learn features one by one (e.g., big = BIG; little = BIG + NOT)
Evidence: adjective acquisition, overextensions and underextensions
Heuristics:
Whole object: a new word refers to the whole object
Basic object level: prefer middle-level categories (e.g., “dog” rather than “animal” or “Dalmatian”)
Contrast principle: a new word maps to a new meaning
Syntactic bootstrapping: syntax helps infer meaning (“I saw a wug” indicates a noun)
Ontological constraints:
Objects: shape matters
Substances: material matters
Fast mapping: learn meaning from few exposures
Poverty of the stimulus / Gavagai problem: input underdetermines meaning
Cross-situational learning: not a full solution on its own
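A minimal sketch of cross-situational learning as co-occurrence counting, using invented scenes and nonsense words; it shows how a consistent pairing emerges across situations even though each single scene is ambiguous:

```python
# Toy cross-situational word learning: tally word-referent co-occurrences.
from collections import defaultdict

scenes = [
    ({"dog", "ball"}, ["wug", "dax"]),
    ({"dog", "cup"},  ["wug", "fep"]),
    ({"ball", "cup"}, ["dax", "fep"]),
]

cooc = defaultdict(lambda: defaultdict(int))
for referents, words in scenes:
    for w in words:
        for r in referents:          # every word is ambiguous within a scene
            cooc[w][r] += 1

for w, counts in cooc.items():
    best = max(counts, key=counts.get)
    print(w, "->", best, dict(counts))
# Only the consistent pairing ("wug" -> dog, etc.) wins across scenes.
```

Counting across scenes resolves this toy ambiguity, but not Gavagai-style ambiguities within a referent (object vs. part vs. substance), which is why it is not a full solution on its own.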
8. Lexical Access
Lexical access: retrieving a word’s meaning, sound, or spelling
Production vs. comprehension: different processing routes
Access routes: semantic, phonological, orthographic
Methods
Rapid naming and fluency tasks
Tip-of-the-tongue: know the meaning but not the sound → evidence for multi-stage retrieval
Speech errors: reveal processing structure
Sound-based: spoonerisms (“you hissed my mystery lecture”), anticipations, perseverations, deletions (onset-exchange sketch after this list)
Meaning-based: semantic substitutions, antonyms
Picture naming and interference tasks: timing of retrieval
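The structure of sound-based errors can be illustrated with a toy onset exchange (a spoonerism generator); the vowel-based onset heuristic below is a crude assumption for illustration only:

```python
# Toy spoonerism: swap the onsets of two words, keep the rimes.
VOWELS = set("aeiou")

def split_onset(word):
    """Split a word into (onset, rest) at its first vowel."""
    for i, ch in enumerate(word):
        if ch.lower() in VOWELS:
            return word[:i], word[i:]
    return word, ""

def spoonerize(w1, w2):
    o1, r1 = split_onset(w1)
    o2, r2 = split_onset(w2)
    return o2 + r1, o1 + r2      # exchanged onsets, original rimes

print(spoonerize("missed", "history"))   # ('hissed', 'mistory')
```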
9. Morphology and Lexical Access
Morpheme: smallest unit of meaning
Root vs. affix: e.g., teach-er, walk-ed
Morphological composition: build words from morphemes
Decomposition: break words into morphemes (e.g., teacher → teach + er; affix-stripping sketch after this list)
Lexical entries: often stored as roots rather than every inflected form
Different “-er” morphemes:
greater (“more”) is not the same morpheme as teacher (“agent”)
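Decomposition can be sketched as affix stripping against a small lexicon; the roots, suffixes, and glosses below are invented toy entries, not a real morphological parser:

```python
# Toy morphological decomposition: strip a known suffix if a known root remains.
ROOTS = {"teach", "walk", "great"}
SUFFIXES = {
    "er": ["agent (teacher)", "comparative (greater)"],  # two distinct -er morphemes
    "ed": ["past tense"],
}

def decompose(word):
    """Return (root, suffix, possible meanings) if the word splits cleanly."""
    for suffix, meanings in SUFFIXES.items():
        if word.endswith(suffix) and word[: -len(suffix)] in ROOTS:
            return word[: -len(suffix)], suffix, meanings
    return word, None, []    # treated as a bare root / whole-form entry

for w in ["teacher", "walked", "greater", "dog"]:
    print(w, "->", decompose(w))
```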
