Research output

Chapter (in Edited Book)

Towards IMACA: Intelligent multimodal affective conversational agent

Hussain A, Cambria E, Mazzocco T, Grassi M, Wang Q & Durrani T (2012) Towards IMACA: Intelligent multimodal affective conversational agent. In: Huang T, Zeng Z, Li C, Leung CS (eds.). Neural Information Processing: 19th International Conference, ICONIP 2012, Doha, Qatar, November 12-15, 2012, Proceedings, Part I. Lecture Notes in Computer Science, 7663. Berlin, Heidelberg: Springer, pp. 656-663.

A key aspect of achieving natural interaction in machines is multimodality. Besides verbal communication, humans also interact through many other channels, e.g., facial expressions, gestures, eye contact, posture, and tone of voice. Such channels convey not only semantics but also emotional cues that are essential for interpreting the transmitted message. The importance of affective information, and the ability to properly manage it, is increasingly understood as fundamental to the development of a new generation of emotion-aware applications in scenarios such as e-learning, e-health, and human-computer interaction. To this end, this work investigates the adoption of different paradigms in the fields of text, vocal, and video analysis, in order to lay the basis for the development of an intelligent multimodal affective conversational agent.

Keywords: AI; HCI; Multimodal Sentiment Analysis

Editors: Huang T, Zeng Z, Li C, Leung CS
Authors: Hussain Amir, Cambria Erik, Mazzocco Thomas, Grassi Marco, Wang Qiu-Feng, Durrani Tariq
Title of series: Lecture Notes in Computer Science
Number in series: 7663
Publication date: 2012
Place of publication: Berlin, Heidelberg
ISSN of series: 0302-9743
ISBN: 978-3-642-34474-9

