Semi-Automatic Annotation with Predicted Visual Saliency Maps for Object Recognition in Wearable Video
Jenny Benois-Pineau, Mireya S. García-Vázquez, L. A. Oropesa Moralez, Alejandro Alvaro Ramírez-Acosta
Published in: WearMMe@ICMR (2017)
Keyphrases
- semi automatic
- visual saliency
- saliency map
- object recognition
- natural images
- semantic annotation
- manual annotation
- video frames
- eye tracking data
- saliency model
- semi automatically
- visual features
- visual attention
- fully automatic
- low level features
- labor intensive
- eye fixations
- visual search
- visual saliency detection
- human visual system
- natural scenes
- image annotation
- computer vision
- eye tracking
- visual concepts
- input image
- visual information
- saliency detection
- automatic annotation
- human detection
- salient regions
- video content
- lifelog
- low level
- video data
- key frames
- semantic concepts
- object categories
- visual analysis
- video summarization
- metadata
- salient objects
- human computer interaction
- receptive fields
- higher level
- eye movements
- image retrieval
- object detection
- image classification
- image representation
- eye tracker
- active learning
- medical images
- high level
- region of interest
- image segmentation
- multimedia
- information retrieval