Transfer learning and subword sampling for asymmetric-resource one-to-many neural translation
| Published in: | Machine Translation |
|---|---|
| Main Authors: | |
| Other Authors: | |
| Format: | Article in Journal/Newspaper |
| Language: | English |
| Published: | Springer Netherlands, 2020 |
| Subjects: | |
| Online Access: | https://aaltodoc.aalto.fi/handle/123456789/102739 https://doi.org/10.1007/s10590-020-09253-x |
Summary: | openaire: EC/H2020/780069/EU//MeMAD. There are several approaches to improving neural machine translation for low-resource languages: monolingual data can be exploited via pretraining or data augmentation; parallel corpora for related language pairs can be used via parameter sharing or transfer learning in multilingual models; and subword segmentation and regularization techniques can be applied to ensure high vocabulary coverage. We review these approaches in the context of an asymmetric-resource one-to-many translation task, in which the two target languages are related, one very low-resource and the other higher-resource. We test various methods on three artificially restricted translation tasks (English to Estonian (low-resource) and Finnish (high-resource); English to Slovak and Czech; English to Danish and Swedish) and one real-world task, Norwegian to North Sámi and Finnish. The experiments show positive effects especially for scheduled multi-task learning, the denoising autoencoder, and subword sampling. Peer reviewed. |
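The subword sampling the abstract mentions can be illustrated with a toy BPE-dropout-style segmenter (a simplified sketch in plain Python, not the authors' implementation): during segmentation, each applicable merge is skipped with some probability, so repeated encodings of the same word yield different subword sequences, which acts as a regularizer in low-resource training.

```python
import random

def bpe_dropout_segment(word, merges, dropout=0.1, rng=None):
    """Segment `word` with BPE merges, skipping each eligible merge
    occurrence with probability `dropout` (BPE-dropout-style sampling).
    `merges` is a priority-ordered list of symbol pairs, e.g. from a
    learned BPE model. With dropout=0.0 this is deterministic BPE."""
    rng = rng or random.Random()
    symbols = list(word)
    ranks = {pair: i for i, pair in enumerate(merges)}
    while len(symbols) > 1:
        # Collect adjacent pairs that are known merges and survive dropout.
        candidates = [
            (ranks[(a, b)], i)
            for i, (a, b) in enumerate(zip(symbols, symbols[1:]))
            if (a, b) in ranks and rng.random() >= dropout
        ]
        if not candidates:
            break  # no merge survived this round; stop (simplification)
        # Apply the surviving merge with the highest priority (lowest rank).
        _, i = min(candidates)
        symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]
    return symbols

# Hypothetical tiny merge table for illustration only.
merges = [("l", "o"), ("lo", "w")]
print(bpe_dropout_segment("low", merges, dropout=0.0))  # deterministic: ['low']
```

With `dropout=1.0` every merge is skipped and the word falls back to characters; intermediate values produce a distribution over segmentations, which is the effect subword regularization exploits.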