On the importance of pre-training data volume for compact language models.

Vincent Micheli, Martin d'Hoffschmidt, François Fleuret
Published in: EMNLP (1) (2020)