DECAF: MEG-Based Multimodal Database for Decoding Affective Physiological Responses


In this work, we present DECAF — a multimodal dataset for decoding user physiological responses to affective multimedia content. Different from datasets such as DEAP [15] and MAHNOB-HCI [31], DECAF contains (1) brain signals acquired using the Magnetoencephalogram (MEG) sensor, which requires little physical contact with the user's scalp and consequently facilitates naturalistic affective response, and (2) explicit and implicit emotional responses of 30 participants to 40 one-minute music video segments used in [15] and 36 movie clips, thereby enabling comparisons between the EEG versus MEG modalities as well as movie versus music stimuli for affect recognition. In addition to MEG data, DECAF contains synchronously recorded near-infra-red (NIR) facial videos, horizontal Electrooculogram (hEOG), Electrocardiogram (ECG), and trapezius-Electromyogram (tEMG) peripheral physiological responses. To demonstrate DECAF's utility, we present (i) a detailed analysis of the correlations between participants' self-assessments and their physiological responses and (ii) single-trial classification results for valence, arousal, and dominance, with performance evaluation against existing datasets. DECAF also contains time-continuous emotion annotations for movie clips from seven users, which we use to demonstrate dynamic emotion prediction.
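The single-trial classification task mentioned above can be illustrated with a minimal sketch. The feature vectors, class labels, and nearest-centroid rule below are hypothetical placeholders for illustration only; the paper's actual features and classifiers are described in the full text.

```python
# Hypothetical sketch of single-trial binary valence classification.
# Features, labels, and the nearest-centroid rule are illustrative
# placeholders, not the paper's actual pipeline.
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_predict(trial, centroids):
    """Assign the trial to the class whose centroid is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(trial, centroids[label]))

# Toy per-trial features (e.g. band-power summaries) for two valence classes.
train = {
    "high_valence": [[0.9, 0.8], [1.0, 0.7], [0.8, 0.9]],
    "low_valence":  [[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]],
}
centroids = {label: centroid(trials) for label, trials in train.items()}
print(nearest_centroid_predict([0.85, 0.75], centroids))  # high_valence
```

In practice, one feature vector would be extracted per stimulus presentation (trial) from the MEG or peripheral signals, and held-out trials classified against models fit on the rest.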
