Autoregressive Knowledge Distillation through Imitation Learning.
Alexander Lin
Jeremy Wohlwend
Howard Chen
Tao Lei
Published in: EMNLP (1) (2020)
Keyphrases
autoregressive
imitation learning
moving average
non-stationary
knowledge base
prior knowledge
random fields
Gaussian Markov random field
knowledge representation
support vector
background knowledge
representing knowledge