Bootstrapping humanoid robot skills by extracting semantic representations of human-like activities from virtual reality.
Karinne Ramirez-Amaro, Tetsunari Inamura, Emmanuel C. Dean-Leon, Michael Beetz, Gordon Cheng
Published in: Humanoids (2014)
Keyphrases
- virtual reality
- humanoid robot
- semantic representations
- human robot interaction
- semantic lexicon
- natural language understanding
- virtual environment
- motion planning
- multi modal
- computer graphics
- three dimensional
- interactive virtual
- virtual humans
- semantic similarity
- semantic matching
- virtual world
- virtual reality technology
- human motion
- information extraction
- semantic features
- motion capture
- visual similarity
- automatically extracted
- real time
- knowledge representation
- feature extraction
- virtual reality environments