Supporting Multimodality in Service-oriented Model-Based Development Environments
Main Authors:
Other Authors:
Format: Conference Object
Language: English
Published: HAL CCSD, 2010
Subjects:
Online Access:
- https://hal.inria.fr/hal-01055209
- https://hal.inria.fr/hal-01055209/document
- https://hal.inria.fr/hal-01055209/file/p13_15.pdf
- https://doi.org/10.1007/978-3-642-16488-0_11
Summary: While multimodal interfaces are increasingly used and supported, their development remains difficult, and authoring tools for this purpose are lacking. The goal of this work is to discuss how multimodality can be specified in model-based languages and to apply such a solution to the composition of graphical and vocal interactions. In particular, we show how to provide structured support that aims to identify the most suitable solutions for modelling multimodality at various levels of detail. This is obtained using, among other techniques, the well-known CARE properties in the context of a model-based language able to support service-based applications and modern Web 2.0 interactions. The method is supported by an authoring environment, which provides specific solutions that designers can modify to better suit their needs, and which can generate implementations of multimodal interfaces in Web environments. An example of modelling a multimodal application, together with the corresponding automatically generated user interfaces, is reported as well.