Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation?

En-Shiun Annie Lee, Sarubi Thillainathan, Shravan Nayak, Surangika Ranathunga, David Ifeoluwa Adelani, Ruisi Su, Arya D. McCarthy
Published in: CoRR (2022)
Keyphrases
  • probabilistic model
  • pre-trained
  • data sets
  • computer vision
  • image sequences
  • high dimensional
  • cross language information retrieval
  • parallel corpus