Unconstrained Facial Expression Recognition Using Reliable Crowdsourcing and Deep Locality-Preserving Learning

PROJECT TITLE : Reliable Crowdsourcing and Deep Locality-Preserving Learning for Unconstrained Facial Expression Recognition

ABSTRACT : Although facial expression is fundamental to human experience, most previous databases and studies have focused on posed facial activity recorded under controlled conditions. We present the Real-world Affective Faces Database (RAF-DB), a new facial expression database containing around 30,000 facial images with uncontrolled poses and illumination, collected from thousands of people of all ages and races. During the crowdsourced annotation process, each image was labelled independently by approximately 40 annotators. An expectation-maximization (EM) algorithm is developed to reliably estimate the emotion labels, and the results reveal that real-world faces frequently display compound or even mixed emotions. A comparison between RAF-DB and the CK+ database shows that the action units of real-world expressions are far more diverse than, and even deviate from, those of laboratory-controlled expressions. To recognize multi-modal expressions in the wild, we propose a new deep locality-preserving convolutional neural network (DLP-CNN), which strengthens the discriminative power of deep features by preserving locality closeness while maximizing inter-class scatter. Benchmark experiments on the 7-class basic expressions and the 11-class compound expressions, together with additional experiments on the CK+, MMI, and SFEW 2.0 databases, show that the proposed DLP-CNN outperforms state-of-the-art handcrafted features and deep-learning-based methods for expression recognition in the wild. To encourage further research, we have made the RAF database, benchmarks, and descriptor encodings publicly available to the research community.

Did you like this research project? To get this research project Guidelines, Training and Code...
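The EM-based label aggregation described above can be illustrated with a minimal sketch. This is not the paper's exact formulation; it is a simplified Dawid-Skene-style scheme with a single reliability weight per annotator and a symmetric noise model, and all parameter values (iteration count, initial reliability) are illustrative assumptions.

```python
import numpy as np

def em_label_aggregation(votes, n_classes, n_iter=20):
    """Estimate ground-truth labels from crowdsourced votes with a simple
    EM scheme (one reliability weight per annotator, symmetric noise model).

    votes: (n_items, n_annotators) int array of class votes; -1 = no vote.
    Assumes every item received at least one vote.
    Returns (per-item class posterior, per-annotator reliability).
    """
    n_items, n_annot = votes.shape
    # Initialise soft labels from per-item vote frequencies (majority vote).
    post = np.zeros((n_items, n_classes))
    for i in range(n_items):
        for a in range(n_annot):
            if votes[i, a] >= 0:
                post[i, votes[i, a]] += 1.0
    post /= post.sum(axis=1, keepdims=True)

    reliab = np.full(n_annot, 0.8)  # illustrative initial accuracy guess
    for _ in range(n_iter):
        # M-step: reliability = expected fraction of votes matching the label.
        for a in range(n_annot):
            mask = votes[:, a] >= 0
            if mask.any():
                agree = post[mask, votes[mask, a]]
                reliab[a] = np.clip(agree.mean(), 1e-3, 1 - 1e-3)
        # E-step: re-score each item under the symmetric noise model.
        post = np.zeros((n_items, n_classes))
        for i in range(n_items):
            logp = np.zeros(n_classes)
            for a in range(n_annot):
                v = votes[i, a]
                if v < 0:
                    continue
                wrong = (1.0 - reliab[a]) / (n_classes - 1)
                for c in range(n_classes):
                    logp[c] += np.log(reliab[a] if c == v else wrong)
            logp -= logp.max()
            p = np.exp(logp)
            post[i] = p / p.sum()
    return post, reliab
```

Because reliability is re-estimated each round, an annotator who disagrees with the emerging consensus is automatically down-weighted in later E-steps, which is what makes EM aggregation more robust than plain majority voting.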
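The locality-preserving idea behind DLP-CNN can likewise be sketched. The following is a hedged NumPy illustration, not the paper's implementation: it penalizes each deep feature's squared distance to the mean of its k nearest same-class neighbours, pulling intra-class features into locally compact clusters; the neighbour count `k` is a hypothetical default, and in training this term would be added to a standard softmax loss.

```python
import numpy as np

def locality_preserving_loss(features, labels, k=2):
    """Sketch of a locality-preserving (LP) penalty: for each feature,
    accumulate half the squared distance to the centroid of its k nearest
    neighbours within the same class, then average over the batch.

    features: (n, d) array of deep features; labels: (n,) int class labels.
    """
    n = features.shape[0]
    total = 0.0
    for i in range(n):
        same = np.where(labels == labels[i])[0]
        same = same[same != i]          # exclude the sample itself
        if same.size == 0:
            continue                    # singleton class: no LP term
        d = np.linalg.norm(features[same] - features[i], axis=1)
        nn = same[np.argsort(d)[: min(k, same.size)]]
        center = features[nn].mean(axis=0)
        total += 0.5 * np.sum((features[i] - center) ** 2)
    return total / n
```

Unlike a center loss, which pulls every sample toward one global class center, this local formulation tolerates multi-modal classes (e.g. compound expressions), since each sample is only attracted to its own local cluster.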
Click Here