Neural Polysynthetic Language Modelling

Research in natural language processing commonly assumes that approaches that work well for English and other widely-used languages are "language agnostic". In high-resource languages, especially analytic ones, a common approach is to treat morphologically distinct variants of a common root as completely independent word types. This assumes that there are limited morphological inflections per root, and that the majority will appear in a large enough corpus, so that the model can adequately learn statistics about each form. Approaches like stemming, lemmatization, or subword segmentation are often used when either of those assumptions does not hold, particularly for synthetic languages like Spanish or Russian that have more inflection than English. In the literature, languages like Finnish or Turkish are held up as extreme examples of complexity that challenge common modelling assumptions. Yet, when considering all of the world's languages, Finnish and Turkish are closer to the average case.

When we consider polysynthetic languages (those at the extreme of morphological complexity), approaches like stemming, lemmatization, or subword modelling may not suffice. These languages have very high numbers of hapax legomena, underscoring the need for appropriate morphological handling of words, without which a model cannot capture adequate word statistics. We examine the current state of the art in language modelling, machine translation, and text prediction for four polysynthetic languages: Guaraní, St. Lawrence Island Yupik, Central Alaskan Yupik, and Inuktitut. We then propose a novel framework for language modelling that combines knowledge representations from finite-state morphological analyzers with Tensor Product Representations, in order to enable neural language models capable of handling the full range of typologically variant languages.
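The framework sketched in the abstract binds the output of a morphological analyzer into a Tensor Product Representation, where each morphological slot (role) is bound to a morpheme (filler) via an outer product and the bindings are summed. A minimal illustration of that binding/unbinding mechanism follows; the vector dimensions, role names, and morpheme strings are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Toy inventory of morphological "roles" (slots) and "fillers" (morphemes).
# All names and dimensions here are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
dim = 8

# Orthonormal role vectors (rows of the identity) make exact unbinding possible.
roles = {name: vec for name, vec in zip(["root", "suffix1", "suffix2"], np.eye(dim))}
fillers = {m: rng.standard_normal(dim) for m in ["angya", "-ghllag", "-yug"]}

def bind(structure):
    """Encode a role -> filler mapping as a sum of outer products (the TPR)."""
    return sum(np.outer(roles[r], fillers[f]) for r, f in structure.items())

def unbind(tpr, role):
    """Recover the filler bound to `role`; exact because roles are orthonormal."""
    return roles[role] @ tpr

word = bind({"root": "angya", "suffix1": "-ghllag"})
print(np.allclose(unbind(word, "root"), fillers["angya"]))  # True
```

Because the roles are orthonormal, querying the summed tensor with a role vector cancels every other binding, recovering that slot's filler exactly; with merely low-correlation roles the recovery would be approximate.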


Bibliographic Details
Main Authors: Schwartz, Lane, Tyers, Francis, Levin, Lori, Kirov, Christo, Littell, Patrick, Lo, Chi-kiu, Prud'hommeaux, Emily, Park, Hyunji Hayley, Steimel, Kenneth, Knowles, Rebecca, Micher, Jeffrey, Strunk, Lonny, Liu, Han, Haley, Coleman, Zhang, Katherine J., Jimmerson, Robbie, Andriyanets, Vasilisa, Muis, Aldrian Obaja, Otani, Naoki, Park, Jong Hyuk, Zhang, Zhisong
Format: Article in Journal/Newspaper
Language: unknown
Published: arXiv 2020
Subjects: Computation and Language (cs.CL); Computer and information sciences (FOS)
Online Access: https://dx.doi.org/10.48550/arxiv.2005.05477
https://arxiv.org/abs/2005.05477
Rights: arXiv.org perpetual, non-exclusive license (http://arxiv.org/licenses/nonexclusive-distrib/1.0/)
Source: Open Polar / DataCite Metadata Store (German National Library of Science and Technology)