Human Annotation of ASR Error Regions: is "gravity" a Sharable Concept for Human Annotators?
This paper is concerned with human assessments of the severity of errors in ASR outputs. We did not design any guidelines, so that each annotator involved in the study could assess the "seriousness" of an ASR error using their own scientific background. Eight human annotators were involved in an annotation task on three distinct corpora, one of which was annotated twice, without the annotators being told of the duplication. None of the computed results (inter-annotator agreement, edit distance, majority annotation) show any strong correlation between the considered criteria and the level of seriousness, which underlines how difficult it is for a human to determine whether an ASR error is serious or not.
Main Authors: | Luzzati, Daniel; Grouin, Cyril; Vasilescu, Ioana; Adda-Decker, Martine; Bilinski, Eric; Camelin, Nathalie; Kahn, Juliette; Lailler, Carole; Lamel, Lori; Rosset, Sophie |
---|---|
Format: | Conference Object |
Language: | English |
Published: | HAL CCSD, 2014 |
Subjects: | Annotation; ASR; Seriousness; Errors; Speech Recognition |
Online Access: | https://hal.archives-ouvertes.fr/hal-01134802 |
topic |
Annotation; ASR; Seriousness; Errors; Speech Recognition; [SHS.LANGUE] Humanities and Social Sciences/Linguistics; [INFO.INFO-CL] Computer Science [cs]/Computation and Language [cs.CL] |
description |
This paper is concerned with human assessments of the severity of errors in ASR outputs. We did not design any guidelines, so that each annotator involved in the study could assess the "seriousness" of an ASR error using their own scientific background. Eight human annotators were involved in an annotation task on three distinct corpora, one of which was annotated twice, without the annotators being told of the duplication. None of the computed results (inter-annotator agreement, edit distance, majority annotation) show any strong correlation between the considered criteria and the level of seriousness, which underlines how difficult it is for a human to determine whether an ASR error is serious or not. |
author2 |
Laboratoire d'Informatique de l'Université du Mans (LIUM), Le Mans Université (UM); Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), Université Paris-Sud - Paris 11 (UP11)-Sorbonne Université - UFR d'Ingénierie (UFR 919), Sorbonne Université (SU)-Université Paris-Saclay-Centre National de la Recherche Scientifique (CNRS)-Université Paris Saclay (COmUE); LPP - Laboratoire de Phonétique et Phonologie - UMR 7018 (LPP), Université Sorbonne Nouvelle - Paris 3-Centre National de la Recherche Scientifique (CNRS); LNE; European Language Resources Association (ELRA); funding: ANR-12-BS02-0006, VERA, "Analyse d'erreurs avancée pour la reconnaissance de la parole" (2012) |
format |
Conference Object |
author |
Luzzati, Daniel; Grouin, Cyril; Vasilescu, Ioana; Adda-Decker, Martine; Bilinski, Eric; Camelin, Nathalie; Kahn, Juliette; Lailler, Carole; Lamel, Lori; Rosset, Sophie |
title |
Human Annotation of ASR Error Regions: is "gravity" a Sharable Concept for Human Annotators? |
op_source |
Ninth International Conference on Language Resources and Evaluation (LREC'14), May 2014, Reykjavik, Iceland. Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pp. 3050-3056, 2014. http://lrec2014.lrec-conf.org/en/ |