Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation?

En-Shiun Annie Lee, Sarubi Thillainathan, Shravan Nayak, Surangika Ranathunga, David Ifeoluwa Adelani, Ruisi Su, Arya McCarthy
Published in: ACL (Findings) (2022)
Keyphrases
  • probabilistic model
  • datasets
  • pre-trained
  • machine learning
  • machine translation system
  • computer vision
  • training data
  • cross-language