Considering multi-modal speech visualization for deaf and hard of hearing people.

Yusuke Toba, Hiroyasu Horiuchi, Shinsuke Matsumoto, Sachio Saiki, Masahide Nakamura, Tomohito Uchino, Tomohiro Yokoyama, Yasuhiro Takebayashi
Published in: APSITT (2015)
Keyphrases
  • multi-modal
  • hearing impaired
  • sign language
  • audio visual
  • multi modality
  • cross modal
  • high dimensional
  • fusing multiple
  • machine learning
  • image segmentation
  • semantic concepts
  • video search