ORCA: A Challenging Benchmark for Arabic Language Understanding

Bibliographic Details
Main Authors: Abdul-Mageed, Muhammad; Elmadany, AbdelRahim; Nagoudi, El Moatez Billah
Conference: Association for Computational Linguistics 2023
Format: Article in Journal/Newspaper
Language: unknown
Published: Underline Science Inc. 2022
Online Access: https://dx.doi.org/10.48448/jw7t-we25
https://underline.io/lecture/77938-orca-a-challenging-benchmark-for-arabic-language-understanding
Description
Summary: Due to the crucial role pretrained language models play in modern NLP, several benchmarks have been proposed to evaluate their performance. Despite these efforts, no diverse public benchmark currently exists for evaluating Arabic NLU, which makes it challenging to measure progress for both Arabic and multilingual language models. The challenge is compounded by the fact that any benchmark targeting Arabic must account for Arabic being not a single language but rather a collection of languages and language varieties. In this work, we introduce a publicly available benchmark for Arabic language understanding evaluation, dubbed ORCA. It is carefully constructed to cover diverse Arabic varieties and a wide range of challenging Arabic understanding tasks, exploiting 60 different datasets across seven NLU task clusters. To measure current progress in Arabic NLU, we use ORCA to offer a comprehensive comparison between 18 multilingual and Arabic language models. We also provide a ...