Article

Fusing audio, visual and textual clues for sentiment analysis from multimodal content

Details

Citation

Poria S, Cambria E, Howard N, Huang G & Hussain A (2016) Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing, 174 (A), pp. 50-59. https://doi.org/10.1016/j.neucom.2015.01.095

Abstract
A huge number of videos are posted every day on social media platforms such as Facebook and YouTube, making the Internet a virtually unlimited source of information. In the coming decades, coping with such information and mining useful knowledge from it will be an increasingly difficult task. In this paper, we propose a novel methodology for multimodal sentiment analysis, which harvests sentiments from Web videos through a model that uses the audio, visual and textual modalities as sources of information. We use both feature- and decision-level fusion methods to merge the affective information extracted from the multiple modalities. A thorough comparison with existing work in this area is carried out throughout the paper, demonstrating the novelty of our approach. Preliminary comparative experiments with the YouTube dataset show that the proposed multimodal system achieves an accuracy of nearly 80%, outperforming all state-of-the-art systems by more than 20%.
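
As a rough illustration of the two fusion strategies mentioned in the abstract, the sketch below contrasts feature-level fusion (concatenating per-modality feature vectors before a single classifier) with decision-level fusion (combining per-modality classifier outputs). It assumes pre-extracted audio, visual and textual feature vectors (random placeholders here) and uses a scikit-learn SVM with averaged posteriors; the classifier choice and the averaging rule are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Hypothetical, randomly generated per-modality feature vectors and binary labels.
audio = rng.normal(size=(n, 20))
visual = rng.normal(size=(n, 30))
text = rng.normal(size=(n, 50))
y = rng.integers(0, 2, size=n)

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Feature-level fusion: concatenate all modality features into one joint vector
# and train a single classifier on that representation.
fused = np.hstack([audio, visual, text])
clf_feat = SVC(probability=True).fit(fused[idx_train], y[idx_train])
pred_feature_level = clf_feat.predict(fused[idx_test])

# Decision-level fusion: train one classifier per modality and combine their
# posterior probabilities at test time (simple averaging here).
modalities = [audio, visual, text]
clfs = [SVC(probability=True).fit(X[idx_train], y[idx_train]) for X in modalities]
avg_proba = np.mean(
    [clf.predict_proba(X[idx_test]) for clf, X in zip(clfs, modalities)], axis=0
)
pred_decision_level = avg_proba.argmax(axis=1)

print("feature-level predictions: ", pred_feature_level[:10])
print("decision-level predictions:", pred_decision_level[:10])
```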

Keywords
Multimodal fusion; Big social data analysis; Opinion mining; Multimodal sentiment analysis; Sentic computing

Journal
Neurocomputing: Volume 174, Issue A

Status: Published
Funders: The Royal Society of Edinburgh; Scottish Funding Council
Publication date: 22/01/2016
Publication date online: 17/08/2015
Date accepted by journal: 02/01/2015
URL: http://hdl.handle.net/1893/23767
Publisher: Elsevier
ISSN: 0925-2312