Graph-based visual semantic perception for humanoid robots.
Markus Grotz, Peter Kaiser, Eren Erdal Aksoy, Fabian Paus, Tamim Asfour. Published in: Humanoids (2017)
Keyphrases
- humanoid robot
- visual perception
- motion planning
- motor control
- high level
- semantic content
- biologically inspired
- multi-modal
- spreading activation
- visual processing
- visual information
- visual motion
- semantic knowledge
- human-robot interaction
- low level
- semantic annotation
- motion capture
- semantically relevant
- semantic web
- semantic information
- motion patterns
- semantic concepts
- semantic similarity
- natural language
- visual concepts
- motor skills
- feature selection
- virtual environment
- graph model
- semi-supervised
- human motion
- spatio-temporal
- path planning
- visual attention
- body movements
- semantic network
- visual features
- mobile robot