A Multi-modal Machine Learning Approach and Toolkit to Automate Recognition of Early Stages of Dementia among British Sign Language Users.
Xing Liang, Anastassia Angelopoulou, Epaminondas Kapetanios, Bencie Woll, Reda Al-batat, Tyron Woolfe
Published in: CoRR (2020)
Keyphrases
- multi modal
- sign language
- early stage
- machine learning
- sign language recognition
- hand gestures
- gesture recognition
- sign recognition
- American Sign Language
- hand tracking
- multi modality
- cross modal
- user interface
- audio visual
- human computer interaction
- image annotation
- object recognition
- feature extraction
- high dimensional
- computer vision
- artificial intelligence
- word recognition
- activity recognition
- hand postures
- information theoretic
- multi view
- uni modal