Evaluating Transferability of BERT Models on Uralic Languages

Bibliographic Details
Main Authors: Ács, Judit, Lévai, Dániel, Kornai, András
Format: Article in Journal/Newspaper
Language: English
Published: arXiv 2021
Subjects:
NER
Online Access:https://dx.doi.org/10.48550/arxiv.2109.06327
https://arxiv.org/abs/2109.06327
Description
Summary: Transformer-based language models such as BERT have outperformed previous models on a large number of English benchmarks, but their evaluation is often limited to English or a small number of well-resourced languages. In this work, we evaluate monolingual, multilingual, and randomly initialized language models from the BERT family on a variety of Uralic languages including Estonian, Finnish, Hungarian, Erzya, Moksha, Karelian, Livvi, Komi Permyak, Komi Zyrian, Northern Sámi, and Skolt Sámi. When monolingual models are available (currently only et, fi, hu), these perform better on their native language, but in general they transfer worse than multilingual models or models of genetically unrelated languages that share the same character set. Remarkably, straightforward transfer of high-resource models, even without special efforts toward hyperparameter optimization, yields what appear to be state-of-the-art POS and NER tools for the minority Uralic languages where there is sufficient data for finetuning.
Published in: Seventh International Workshop for Computational Linguistics of Uralic Languages (IWCLUL 2021)
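
The transfer recipe the summary describes, fine-tuning an off-the-shelf multilingual BERT on a small POS- or NER-annotated corpus for a minority Uralic language, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' published setup: it assumes the Hugging Face transformers and datasets libraries, the bert-base-multilingual-cased checkpoint, and a Universal Dependencies configuration name for Northern Sámi that may differ in practice.

```python
# Minimal sketch: fine-tune a multilingual BERT for POS tagging on a small
# Uralic treebank. The toolkit (Hugging Face transformers/datasets), the
# treebank config name, and all hyperparameters are illustrative assumptions.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

MODEL = "bert-base-multilingual-cased"  # high-resource multilingual model
# Hypothetical UD config name for Northern Sámi; substitute the language of interest.
dataset = load_dataset("universal_dependencies", "sme_giella")

tokenizer = AutoTokenizer.from_pretrained(MODEL)
labels = dataset["train"].features["upos"].feature.names
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=len(labels))

def encode(batch):
    # Tokenize pre-split words and keep one POS label per first sub-token;
    # special tokens and word continuations get -100 so the loss ignores them.
    enc = tokenizer(batch["tokens"], is_split_into_words=True,
                    truncation=True, padding="max_length", max_length=128)
    aligned = []
    for i, pos_tags in enumerate(batch["upos"]):
        word_ids = enc.word_ids(batch_index=i)
        prev, row = None, []
        for wid in word_ids:
            row.append(-100 if wid is None or wid == prev else pos_tags[wid])
            prev = wid
        aligned.append(row)
    enc["labels"] = aligned
    return enc

encoded = dataset.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mbert-pos",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
```

The same token-classification head works for NER by swapping the label column for NER tags; the key point from the summary is that no language-specific pretraining or extensive hyperparameter search is assumed.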