A Multi-modal Machine Learning Approach and Toolkit to Automate Recognition of Early Stages of Dementia Among British Sign Language Users.
Xing Liang, Anastassia Angelopoulou, Epaminondas Kapetanios, Bencie Woll, Reda Al-batat, Tyron Woolfe
Published in: ECCV Workshops (2) (2020)
Keyphrases
- multi modal
- sign language
- early stage
- sign language recognition
- machine learning
- hand gestures
- sign recognition
- gesture recognition
- american sign language
- hand tracking
- multi modality
- audio visual
- cross modal
- human computer interaction
- user interface
- feature selection
- video search
- computer vision
- image annotation
- high dimensional
- uni modal
- hand gesture recognition
- traffic signs
- mutual information
- image analysis
- feature extraction
- artificial intelligence