When Hearing Lips and Seeing Voices Becomes Perceiving Speech: Auditory-Visual Integration in Lexical Access.
Rachel Ostrand, Sheila E. Blumstein, James L. Morgan. Published in: CogSci (2011)
Keyphrases
- visual speech
- hidden Markov models
- visual information
- speaker identification
- lip reading
- noisy environments
- hearing impaired
- speech signal
- lexical features
- evoked potentials
- speech recognition
- audio signals
- signal processing
- visual stimuli
- video signals
- sign language
- cross modal
- audio visual
- text to speech
- face recognition
- low level
- image sequences
- acoustic features
- natural language processing
- broadcast news
- high level
- context sensitive
- domain specific
- visual features