Reducing Confusion in Active Learning for Part-Of-Speech Tagging

Full description: Read the paper at the following link: https://transacl.org/ojs/index.php/tacl/article/view/2155

Abstract: Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost. This is now an essential tool for building low-resource syntactic analyzers such as part-of-speech (POS) taggers. Existing AL heuristics are generally designed on the principle of selecting uncertain yet representative training instances, where annotating these instances may reduce a large number of errors. However, in an empirical study across six typologically diverse languages (German, Swedish, Galician, North Sami, Persian, and Ukrainian), we found the surprising result that even in an oracle scenario where we know the true uncertainty of predictions, these current heuristics are far from optimal. Based on this analysis, we pose the problem of AL as selecting instances which maximally reduce the confusion between particular pairs of output tags. Extensive experimentation on the aforementioned languages shows that our proposed AL strategy outperforms other AL strategies by a significant margin. We also present auxiliary results demonstrating the importance of proper calibration of models, which we ensure through cross-view training, and analysis demonstrating how our proposed strategy selects examples that more closely follow the oracle data distribution.
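For readers who want a concrete picture of the two selection ideas mentioned in the abstract, the sketch below is a toy, self-contained illustration, not the authors' implementation: uncertainty_score is the classic entropy-based heuristic, and confusion_score is a simplified stand-in for the idea of targeting sentences whose tokens fall on confusable tag pairs. The tag set, probabilities, confusion weights, and all function names are hypothetical.

# Illustrative sketch only (Python): a toy active-learning selection step for POS
# tagging. Not the authors' method; the numbers and weights are invented stand-ins
# for the ideas summarized in the abstract.
import math
from typing import Dict, List, Tuple

def token_entropy(dist: Dict[str, float]) -> float:
    # Predictive entropy of one token's tag distribution (higher = more uncertain).
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def uncertainty_score(sentence: List[Dict[str, float]]) -> float:
    # Classic AL heuristic: average token entropy over the sentence.
    return sum(token_entropy(d) for d in sentence) / len(sentence)

def confusion_score(sentence: List[Dict[str, float]],
                    confusable: Dict[Tuple[str, str], float]) -> float:
    # Toy variant of the confusion-reduction idea: reward sentences whose tokens
    # split probability mass across tag pairs the current model tends to confuse.
    score = 0.0
    for dist in sentence:
        top2 = sorted(dist, key=dist.get, reverse=True)[:2]
        pair = tuple(sorted(top2))
        # Weight the token's ambiguity by how problematic this tag pair is.
        score += confusable.get(pair, 0.0) * min(dist[t] for t in top2)
    return score / len(sentence)

def select_batch(pool, scorer, k=2):
    # Pick the k highest-scoring unlabeled sentences for annotation.
    ranked = sorted(range(len(pool)), key=lambda i: scorer(pool[i]), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    # Each "sentence" is a list of per-token tag distributions from a hypothetical tagger.
    pool = [
        [{"NOUN": 0.90, "VERB": 0.05, "ADJ": 0.03, "ADP": 0.02}],
        [{"NOUN": 0.40, "VERB": 0.40, "ADJ": 0.10, "ADP": 0.10},
         {"NOUN": 0.30, "VERB": 0.30, "ADJ": 0.20, "ADP": 0.20}],
        [{"ADJ": 0.50, "NOUN": 0.45, "VERB": 0.03, "ADP": 0.02}],
    ]
    # Hypothetical confusion weights, e.g. normalized off-diagonal counts from a
    # confusion matrix of the current tagger on a small held-out set.
    confusable = {("NOUN", "VERB"): 0.7, ("ADJ", "NOUN"): 0.9}
    print("uncertainty pick:", select_batch(pool, uncertainty_score))
    print("confusion-aware pick:",
          select_batch(pool, lambda s: confusion_score(s, confusable)))

Per the abstract, the actual method selects instances that maximally reduce confusion between particular tag pairs and relies on probabilities calibrated through cross-view training; the sketch above only mimics the final ranking step with made-up numbers.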

Bibliographic Details
Main Authors: Anastasopoulos, Antonios; Chaudhary, Aditi; Neubig, Graham; Sheikh, Zaid (NAACL 2021)
Format: Article in Journal/Newspaper (conference talk, audiovisual)
Language: unknown
Published: Underline Science Inc. 2021
Subjects: Artificial Intelligence, Computer Science and Engineering, Intelligent System, Natural Language Processing
Online Access:https://dx.doi.org/10.48448/yd5y-fv44
https://underline.io/lecture/20053-reducing-confusion-in-active-learning-for-part-of-speech-tagging
Institution: Open Polar
Collection: DataCite Metadata Store (German National Library of Science and Technology)