Animating a conversational agent with user expressivity
Main Authors:
Other Authors:
Format: Conference Object
Language: English
Published: HAL CCSD, 2011
Online Access:
- https://hal.science/hal-01301881
- https://hal.science/hal-01301881/document
- https://hal.science/hal-01301881/file/IVA2011%20Rajagopal.pdf
- https://doi.org/10.1007/978-3-642-23974-8_64
Summary: Our objective is to animate an embodied conversational agent (ECA) with communicative gestures rendered with the expressivity of the real human user it represents. We describe an approach to estimating a subset of the expressivity parameters defined in the literature (namely spatial and temporal extent) from captured motion trajectories. We first validate this estimation against synthesized motion and then show results with real human motion. The estimated expressivity is then sent to the animation engine of an ECA, which becomes a personalized autonomous representative of that user.
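The summary describes estimating spatial and temporal extent from captured motion trajectories. As a rough illustration only (the paper's actual estimators are not given here), one could proxy spatial extent by the bounding volume swept by a wrist trajectory and temporal extent by the average speed of the motion; the function name, inputs, and both proxies below are assumptions for the sketch, not the authors' method.

```python
import math

def estimate_expressivity(trajectory, frame_rate=25.0):
    """Illustrative proxies for two expressivity parameters.

    trajectory: list of (x, y, z) wrist positions, one per frame.
    Returns (spatial_extent, temporal_extent):
    - spatial extent: diagonal of the trajectory's bounding box
    - temporal extent: mean per-frame speed in units per second
    Both are hypothetical stand-ins, not the paper's estimators.
    """
    xs, ys, zs = zip(*trajectory)
    # Spatial extent: size of the volume the gesture sweeps through.
    spatial = math.dist((min(xs), min(ys), min(zs)),
                        (max(xs), max(ys), max(zs)))
    # Temporal extent: average speed between consecutive frames.
    speeds = [math.dist(a, b) * frame_rate
              for a, b in zip(trajectory, trajectory[1:])]
    temporal = sum(speeds) / len(speeds)
    return spatial, temporal
```

Such per-gesture estimates could then be passed to an ECA animation engine as expressivity settings, which is the pipeline the abstract outlines.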