Multimodal Emotion Recognition in Response to Videos


This paper presents a user-independent emotion recognition method with the goal of recovering affective tags for videos using electroencephalogram (EEG), pupillary response, and gaze distance. We first selected 20 video clips with extrinsic emotional content from movies and online resources. EEG responses and eye-gaze data were then recorded from 24 participants while they watched the emotional video clips. Ground truth was defined based on the median arousal and valence scores given to the clips in a preliminary study using an online questionnaire. Based on the participants' responses, three classes were defined for each dimension: the arousal classes were calm, medium aroused, and activated, and the valence classes were unpleasant, neutral, and pleasant. One of the three affective labels of either valence or arousal was determined by classification of the bodily responses. A leave-one-participant-out cross-validation was used to evaluate classification performance in a user-independent manner. The best classification accuracies of 68.5 percent for the three labels of valence and 76.4 percent for the three labels of arousal were obtained using a modality fusion strategy and a support vector machine. The results over a population of 24 participants demonstrate that user-independent emotion recognition can outperform individual self-reports for arousal assessments and does not underperform for valence assessments.
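The evaluation protocol described above (feature-level fusion of the two modalities, a support vector machine, and leave-one-participant-out cross-validation) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the feature dimensions, the RBF kernel, and all variable names are assumptions.

```python
# Hypothetical sketch of the abstract's protocol: fuse EEG and eye-gaze
# features, train an SVM, and evaluate with leave-one-participant-out
# cross-validation. All data here is synthetic; feature sizes and the
# kernel choice are assumptions, not taken from the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_participants, clips_per_participant = 24, 20
eeg_dim, gaze_dim = 32, 6  # assumed per-modality feature sizes

# One feature vector per (participant, clip) pair; labels are the three
# valence classes: 0 = unpleasant, 1 = neutral, 2 = pleasant.
n = n_participants * clips_per_participant
eeg = rng.normal(size=(n, eeg_dim))
gaze = rng.normal(size=(n, gaze_dim))
labels = rng.integers(0, 3, size=n)
participant = np.repeat(np.arange(n_participants), clips_per_participant)

# Modality fusion at the feature level: concatenate the two vectors.
fused = np.hstack([eeg, gaze])

# Leave-one-participant-out: train on 23 participants, test on the held-out
# one, so the classifier is never tuned to the test participant.
accuracies = []
for p in range(n_participants):
    train, test = participant != p, participant == p
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(fused[train], labels[train])
    accuracies.append(clf.score(fused[test], labels[test]))

print(f"mean accuracy over {n_participants} folds: {np.mean(accuracies):.3f}")
```

With random synthetic labels the mean accuracy hovers near chance (about 1/3); the point of the sketch is the fold structure, in which each participant's data appears in exactly one test fold.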

