Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation
Yuliang Cai, Jesse Thomason, Mohammad Rostami. Published in: EMNLP (Findings), 2023