Exploring the spatial frequency requirements of audio-visual speech using superimposed facial motion.
Douglas M. Shiller, Christian Kroos, Eric Vatikiotis-Bateson, Kevin G. Munhall. Published in: AVSP (2003)
Keyphrases
- spatial frequency
- visual speech
- facial motion
- hidden Markov models
- texture segmentation
- low frequency
- speaker identification
- noisy environments
- audio signal
- facial expressions
- multimedia
- speech signal
- video sequences
- high frequency
- speech recognition
- pattern recognition
- acoustic features
- Gabor filters
- high quality
- broadcast news
- graph cuts
- feature extraction
- image segmentation