Expected exponential loss for gaze-based video and volume ground truth annotation.
Laurent Lejeune, Mario Christoudias, Raphael Sznitman
Published in: CoRR (2017)
Keyphrases
- ground truth
- video annotation
- video sequences
- eye tracking data
- video content
- video material
- eye tracking
- video streams
- video data
- space time
- multimedia
- weakly labeled
- human observers
- digital photos
- video frames
- active learning
- multimedia data
- ground truth data
- content description
- video database
- real time
- motion estimation
- semantic concepts
- image annotation
- visual concepts
- eye gaze
- eye movements
- eye contact
- video analysis
- eye typing
- worst case bounds
- manually labeled
- gaze estimation
- digital video
- visual search
- semantic annotation
- multi modal
- metadata
- computer vision