Supervised saliency maps for first-person videos based on sparse coding.
Yujie Li, Atsunori Kanemura, Hideki Asoh, Taiki Miyanishi, Motoaki Kawanabe. Published in: APSIPA (2018)
Keyphrases
- image quality
- sparse coding
- saliency map
- human visual system
- natural images
- video frames
- unsupervised learning
- dictionary learning
- visual quality
- natural scenes
- sparse representation
- linear combination
- image data
- generative model
- image classification
- input image
- visual attention
- higher order
- image patches
- semi-supervised
- multiscale
- video sequences
- supervised learning
- video data
- machine learning
- visual features
- feature selection
- denoising
- object recognition
- high-dimensional
- active learning
- visual words
- video surveillance
- maximum likelihood
- superpixels
- high resolution
- image segmentation
- image processing
- learning algorithm
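The keyphrases above combine sparse coding, dictionary learning, and saliency maps. As an illustration of how these ideas connect (not the paper's actual method, which is not described here), the sketch below scores image patches by their reconstruction residual under a fixed dictionary using a simple greedy orthogonal matching pursuit; patches that the dictionary reconstructs poorly receive higher "saliency" scores. The dictionary, patch data, and the `n_nonzero` sparsity level are all hypothetical placeholders.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Greedy orthogonal matching pursuit: sparse-code x over dictionary D.

    Returns the sparse coefficient vector and the final residual.
    """
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected atoms, then update the residual.
        sub = D[:, support]
        c, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ c
    coef[support] = c
    return coef, residual

def patch_saliency(D, patches, n_nonzero=3):
    """Saliency score per patch = norm of its sparse-coding residual."""
    return np.array([np.linalg.norm(omp(D, p, n_nonzero)[1]) for p in patches])

# Toy data: a random unit-norm dictionary and 10 flattened 8x8 patches.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
patches = rng.standard_normal((10, 64))
scores = patch_saliency(D, patches)
```

In a supervised variant, the dictionary would instead be learned from labeled data so that the residual (or the codes themselves) aligns with annotated salient regions; the greedy pursuit step stays the same.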