Watch and Listen to My Face: The Potential for Facial Communication in Human-Agent Interaction

Publication Date: 16/11/2018


Author(s): Dr. Abdulmalik Yusuf Ofemile, Innocent Ejimofor Agu, Evangelista Chimebere Agu.

Volume/Issue: Volume 1, Issue 1 (2018)



Abstract:

Most research on listenership in Human-Agent Interaction has focused on assessing listener feedback using participant utterances during interaction, narratives after interaction, or posed facial actions. However, little attention has been paid to the spontaneous facial actions displayed when interacting with software agents in instruction-giving contexts. This paper reports a study aimed at developing a better understanding of the nature and communicative potential of spontaneous facial actions displayed during these interactions. Forty-eight participants were tasked with assembling two Lego models using verbal instructions from a computer interface. The interface used three voices, two of which were synthesised and one provided by a voice actor. A 24-hour multimodal corpus was built from these interactions, and marked instances were analysed. The results suggest that humans can show their perceptions of agent identity through their facial actions as positive, negative or indifferent during interaction. Furthermore, there is potential for formulating a theoretical basis for researching interaction in similar contexts. The findings suggest that agents with enhanced emotive functionality may improve Human-Agent Interaction in emerging contexts, but this requires further research.







This article is published under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)