Driver Activity Monitoring with Deep CNN, Body Pose, and Body-Object Interaction Features

PROJECT TITLE: Deep CNN, Body Pose and Body-Object Interaction Features for Driver Activity Monitoring

ABSTRACT: The automatic recognition and prediction of human activities inside the vehicle will strongly influence the next generation of driver-assistance systems and intelligent autonomous vehicles. In this work, we present a novel single-image driver action recognition algorithm inspired by human perception: like a human observer, it selectively attends to image regions that carry task-specific information. In contrast to previous approaches, we argue that human activity is a combination of pose and semantic contextual cues. We model this by considering the configuration of the body joints, and we capture structural information by modeling the interactions between the joints and surrounding objects as pairwise relations. Even when coupled with a basic linear SVM classifier, this body-pose and body-object interaction representation is semantically rich and highly discriminative. In addition, we propose a Multi-stream Deep Fusion Network (MDFN) that combines these high-level semantics with CNN features. Our experiments demonstrate that the proposed method significantly improves driver action recognition accuracy on two challenging datasets.
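The abstract does not spell out how the pairwise relations are constructed, so the following is only a minimal illustrative sketch of one plausible encoding: joint-joint offsets capture the structural pose information, and joint-object offsets capture the body-object interaction cues. The function name, the use of 2D coordinates, and the choice of an object centroid are all assumptions, not the paper's actual feature definition.

```python
import numpy as np

def pose_object_features(joints, obj_centroid):
    """Hypothetical pairwise encoding of body pose and body-object interaction.

    joints       : (J, 2) array of 2D body-joint coordinates
    obj_centroid : (2,) array, e.g. the centre of a detected object
                   (steering wheel, phone, bottle)

    Returns a 1-D feature vector containing:
      - joint-joint offsets  (structural pose cues, upper triangle only)
      - joint-object offsets (body-object interaction cues)
    """
    joints = np.asarray(joints, dtype=float)
    obj_centroid = np.asarray(obj_centroid, dtype=float)
    J = len(joints)
    # pairwise joint-joint relations (a < b avoids duplicate pairs)
    jj = [joints[a] - joints[b] for a in range(J) for b in range(a + 1, J)]
    # pairwise joint-object relations
    jo = [joints[a] - obj_centroid for a in range(J)]
    return np.concatenate(jj + jo)

# toy example: 4 joints and one object centroid
feats = pose_object_features(np.random.rand(4, 2), np.array([0.5, 0.5]))
# 4 joints -> 6 joint-joint pairs + 4 joint-object pairs, each a 2-D offset
print(feats.shape)  # (20,)
```

A vector like this could then be fed to an off-the-shelf linear SVM, consistent with the abstract's claim that the representation is discriminative even with a basic linear classifier.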