Semi-blind speech extraction for robot using visual information and noise statistics.
Hiroshi Saruwatari, Nobuhisa Hirata, Toshiyuki Hatta, Ryo Wakisaka, Kiyohiro Shikano, Tomoya Takatani
Published in: ISSPIT (2011)
Keyphrases
- visual information
- audio visual
- visual features
- low level
- mobile robot
- visual input
- visual content
- visual data
- noisy environments
- visual cues
- eye movements
- watermark extraction
- textual information
- speech recognition
- content based image retrieval systems
- information extraction
- semantic information
- visual information retrieval
- human visual system
- speech signal
- image processing
- low level features
- content based image
- relational databases
- artificial intelligence