Article

A Novel Context-Aware Multimodal Framework for Persian Sentiment Analysis


Citation

Dashtipour K, Gogate M, Cambria E & Hussain A (2021) A Novel Context-Aware Multimodal Framework for Persian Sentiment Analysis. Neurocomputing, 457, pp. 377-388. https://doi.org/10.1016/j.neucom.2021.02.020

Abstract
Most recent works on sentiment analysis have exploited the text modality. However, the millions of hours of video recordings posted on social media platforms every day hold vital unstructured information that can be exploited to gauge public perception more effectively. Multimodal sentiment analysis offers an innovative solution for computationally understanding and harvesting sentiments from videos by contextually exploiting audio, visual and textual cues. In this paper, we first present a first-of-its-kind Persian multimodal dataset comprising more than 800 utterances, as a benchmark resource for researchers to evaluate multimodal sentiment analysis approaches in the Persian language. Second, we present a novel context-aware multimodal sentiment analysis framework that simultaneously exploits acoustic, visual and textual cues to determine the expressed sentiment more accurately. We employ both decision-level (late) and feature-level (early) fusion methods to integrate affective cross-modal information. Experimental results demonstrate that the contextual integration of textual, acoustic and visual features delivers better performance (91.39%) than unimodal features (89.24%).
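The two fusion strategies named in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: feature-level (early) fusion joins per-modality feature vectors into one representation before a single classifier sees them, while decision-level (late) fusion combines each modality's independently predicted sentiment score. All function names, vectors and weights below are hypothetical.

```python
# Hypothetical sketch of early vs. late multimodal fusion; the actual
# framework in the paper uses learned models, not these toy functions.

def early_fusion(text_feats, audio_feats, visual_feats):
    """Feature-level fusion: concatenate modality features into one joint vector."""
    return text_feats + audio_feats + visual_feats

def late_fusion(text_score, audio_score, visual_score,
                weights=(1 / 3, 1 / 3, 1 / 3)):
    """Decision-level fusion: weighted average of per-modality sentiment scores."""
    scores = (text_score, audio_score, visual_score)
    return sum(w * s for w, s in zip(weights, scores))

# Toy utterance: each modality contributes a small feature vector / score.
joint = early_fusion([0.2, 0.7], [0.1], [0.9, 0.4])  # one concatenated vector
fused_score = late_fusion(0.8, 0.6, 0.7)             # roughly 0.7 (equal weights)
```

In practice the choice matters: early fusion lets a classifier model cross-modal interactions directly, while late fusion keeps modality-specific models independent and is robust when one modality is missing or noisy.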

Keywords
Multimodal Sentiment Analysis; Persian Sentiment Analysis

Journal
Neurocomputing: Volume 457

Status: Published
Publication date: 07/10/2021
Publication date online: 02/03/2021
Date accepted by journal: 09/02/2021
URL: http://hdl.handle.net/1893/32381
Publisher: Elsevier BV
ISSN: 0925-2312