PROJECT TITLE: Multimodal Emotion Recognition in Response to Videos

ABSTRACT: This paper presents a user-independent emotion recognition method with the goal of recovering affective tags for videos using electroencephalogram (EEG), pupillary response, and gaze distance. We first selected 20 video clips with extrinsic emotional content from movies and online resources. EEG responses and eye gaze data were then recorded from 24 participants while they watched the emotional video clips. Ground truth was defined based on the median arousal and valence scores given to the clips in a preliminary study using an online questionnaire. Based on the participants' responses, three classes were defined for each dimension: the arousal classes were calm, medium aroused, and activated, and the valence classes were unpleasant, neutral, and pleasant. One of the three affective labels of either valence or arousal was determined by classification of the bodily responses. A leave-one-participant-out cross-validation was used to evaluate the classification performance in a user-independent approach. The best classification accuracies of 68.5 percent for the three labels of valence and 76.4 percent for the three labels of arousal were obtained using a modality fusion strategy and a support vector machine. The results over a population of 24 participants demonstrate that user-independent emotion recognition can outperform individual self-reports for arousal assessments and does not underperform for valence assessments.
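The evaluation protocol described above (feature-level fusion of the modalities, a support vector machine, and leave-one-participant-out cross-validation) can be illustrated with a minimal sketch. The sketch below uses scikit-learn and synthetic stand-in features; the feature dimensions, kernel, and regularization value are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's): 24 participants x 20 clips,
# with hypothetical EEG and eye-gaze feature vectors per trial.
n_participants, n_clips = 24, 20
n_eeg_features, n_gaze_features = 32, 8

eeg = rng.normal(size=(n_participants * n_clips, n_eeg_features))
gaze = rng.normal(size=(n_participants * n_clips, n_gaze_features))

# Feature-level fusion: concatenate each trial's EEG and gaze features.
X = np.hstack([eeg, gaze])

# Three arousal labels per trial: 0 = calm, 1 = medium aroused, 2 = activated.
y = rng.integers(0, 3, size=n_participants * n_clips)

# Group index identifying which participant produced each trial.
groups = np.repeat(np.arange(n_participants), n_clips)

# Leave-one-participant-out: each fold trains on 23 participants and
# tests on the held-out one, giving a user-independent estimate.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
logo = LeaveOneGroupOut()

accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"Mean leave-one-participant-out accuracy: {np.mean(accuracies):.3f}")
```

Because each fold holds out every trial from one participant, the averaged accuracy reflects generalization to unseen users, which is what "user-independent" means in this context.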