Facebook AI's WMT20 News Translation Task Submission
This paper describes Facebook AI's submission to the WMT20 shared news translation task. We focus on the low-resource setting and participate in two language pairs, Tamil↔English and Inuktitut↔English, where out-of-domain bitext and monolingual data are limited. We approach the low-resource problem with two main strategies: leveraging all available data, and adapting the system to the target news domain. We explore techniques that leverage bitext and monolingual data from all languages, such as self-supervised model pretraining, multilingual models, data augmentation, and reranking. To better adapt the translation system to the test domain, we explore dataset tagging and fine-tuning on in-domain data. We observe that different techniques provide varied improvements depending on the data available for each language pair. Based on these findings, we integrate the techniques into one training pipeline. For En→Ta, we also explore an unconstrained setup with additional Tamil bitext and monolingual data and show that further improvement can be obtained. On the test set, our best submitted systems achieve 21.5 and 13.7 BLEU for Ta→En and En→Ta respectively, and 27.9 and 13.0 for Iu→En and En→Iu respectively.
Main Authors: | Chen, Peng-Jen; Lee, Ann; Wang, Changhan; Goyal, Naman; Fan, Angela; Williamson, Mary; Gu, Jiatao |
---|---|
Format: | Article in Journal/Newspaper |
Language: | English |
Published: | arXiv, 2020 |
Subjects: | Computation and Language (cs.CL) |
License: | arXiv.org perpetual, non-exclusive license (http://arxiv.org/licenses/nonexclusive-distrib/1.0/) |
Online Access: | https://dx.doi.org/10.48550/arxiv.2011.08298 https://arxiv.org/abs/2011.08298 |
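
The abstract mentions dataset tagging as one of the domain-adaptation techniques: each training corpus is marked with a tag token so the model learns to associate data provenance with output style, and the in-domain (news) tag is supplied at decoding time. The sketch below illustrates the general idea only; the corpus names and tag format are assumptions, not the authors' actual preprocessing.

```python
# Minimal sketch of dataset tagging for domain adaptation (illustrative only;
# corpus names and tag strings are assumptions, not the paper's configuration).

def tag_source(line: str, corpus: str) -> str:
    """Prepend a corpus tag token to the source side of a sentence pair."""
    return f"<{corpus}> {line.strip()}"

# Training data drawn from a clean news corpus and noisier web-crawled bitext.
train_sources = [
    ("news", "The election results were announced today."),
    ("crawl", "click here to read the full story"),
]

for corpus, src in train_sources:
    print(tag_source(src, corpus))
# <news> The election results were announced today.
# <crawl> click here to read the full story

# At test time, every input is tagged <news>, steering the model toward the
# target news domain regardless of which corpora dominated training.
```

The BLEU scores quoted in the abstract are, per WMT convention, computed with sacrebleu. A minimal scoring call (with made-up sentences, for illustration) looks like:

```python
import sacrebleu  # pip install sacrebleu

hypotheses = ["The cat sat on the mat."]
# A list of reference streams; each stream holds one reference per hypothesis.
references = [["The cat is sitting on the mat."]]

print(sacrebleu.corpus_bleu(hypotheses, references).score)
```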