Representing Multimodal Linguistics Annotated Data

Bibliographic Details
Main Authors: Bigi, Brigitte; Watanabe, Tatsuya; Prévot, Laurent
Other Authors: Laboratoire Parole et Langage (LPL), Aix Marseille Université (AMU)-Centre National de la Recherche Scientifique (CNRS); ANR-11-EQPX-0032, ORTOLANG, Outils et Ressources pour un Traitement Optimisé des LANGues (2011); ANR-16-CONV-0002, ILCB: Institute of Language Communication and the Brain (2016)
Format: Conference Object
Language: English
Published: HAL CCSD 2014
Subjects:
XML
Online Access: https://hal.science/hal-01500719
Description
Summary: The question of interoperability for annotated linguistic resources involves several aspects. First, it requires a representation framework that makes it possible to compare, and potentially merge, different annotation schemas. In this paper, a general description level for representing multimodal linguistic annotations is proposed, focusing on the representation of time and of data content: the paper reconsiders and enhances the current, generalized representation of annotations. An XML schema for such annotations is proposed, along with a Python API. This framework is implemented in multi-platform software distributed under the terms of the GNU General Public License.
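
As a rough illustration of the kind of representation the summary describes, the sketch below models time-anchored annotations, with a radius value standing in for uncertainty on interval boundaries, and serializes a tier of them to XML. All names here (Location, Annotation, to_xml, and the tier/annotation element layout) are hypothetical stand-ins, not the XML schema or Python API actually proposed in the paper.

    # Hypothetical sketch of a time-anchored annotation model; the paper's
    # actual schema and API differ -- names below are illustrative only.
    from dataclasses import dataclass
    import xml.etree.ElementTree as ET


    @dataclass
    class Location:
        """A time interval on the media timeline, in seconds, with a
        radius expressing uncertainty on each boundary."""
        begin: float
        end: float
        radius: float = 0.0


    @dataclass
    class Annotation:
        """A label attached to a location, e.g. a token on a speech tier."""
        location: Location
        label: str


    def to_xml(tier_name: str, annotations: list[Annotation]) -> ET.Element:
        """Serialize a tier of annotations into a minimal XML tree."""
        tier = ET.Element("tier", {"name": tier_name})
        for ann in annotations:
            node = ET.SubElement(tier, "annotation")
            ET.SubElement(node, "location", {
                "begin": str(ann.location.begin),
                "end": str(ann.location.end),
                "radius": str(ann.location.radius),
            })
            ET.SubElement(node, "label").text = ann.label
        return tier


    if __name__ == "__main__":
        tokens = [
            Annotation(Location(0.00, 0.35, radius=0.01), "hello"),
            Annotation(Location(0.35, 0.80, radius=0.01), "world"),
        ]
        print(ET.tostring(to_xml("Tokens", tokens), encoding="unicode"))

Keeping the time anchor (location) separate from the content (label) is what lets annotations from different tools be compared or merged on a common timeline, which is the interoperability concern the abstract raises.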