PROJECT TITLE:
Fusion of depth, skeleton, and inertial data for human action recognition - 2016
This paper presents a human action recognition approach based on the simultaneous deployment of a second-generation Kinect depth sensor and a wearable inertial sensor. Three data modalities, consisting of depth images, skeleton joint positions, and inertial signals, are fused using three collaborative representation classifiers. A database of ten actions performed by six subjects was assembled to carry out two types of testing of the developed fusion approach: subject-generic and subject-specific. The overall recognition rates obtained from both types of testing show improvements when all the data modalities are fused compared to when each data modality is used individually.
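The fusion idea above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): each modality gets a collaborative representation classifier (CRC), which codes a test sample over the whole training dictionary with a ridge-regularized least-squares solve and computes per-class reconstruction residuals; the residuals are then summed across modalities and the class with the smallest total residual wins. Feature dimensions, the regularization value, and the sum-of-residuals fusion rule are assumptions for illustration.

```python
# Hypothetical sketch of decision-level fusion with collaborative
# representation classifiers (CRC). All names and parameters here are
# illustrative assumptions, not the paper's actual code.
import numpy as np

def crc_residuals(X_train, y_train, x_test, lam=0.01):
    """Per-class reconstruction residuals for one test sample.

    X_train: (n_samples, n_features) training features for one modality
    y_train: (n_samples,) integer class labels
    x_test:  (n_features,) test feature vector
    """
    A = X_train.T  # dictionary, shape (n_features, n_samples)
    # Collaborative representation: ridge-regularized coding over the
    # whole training dictionary, alpha = (A^T A + lam I)^-1 A^T x.
    alpha = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                            A.T @ x_test)
    classes = np.unique(y_train)
    res = np.empty(len(classes))
    for i, c in enumerate(classes):
        mask = y_train == c
        # Residual when reconstructing only with class-c atoms.
        res[i] = np.linalg.norm(x_test - A[:, mask] @ alpha[mask])
    return classes, res

def fuse_and_classify(modalities, x_tests, lam=0.01):
    """Sum per-class residuals over modalities; smallest total wins.

    modalities: list of (X_train, y_train) pairs, one per modality
    x_tests:    list of test vectors, aligned with `modalities`
    """
    total = None
    for (X, y), x in zip(modalities, x_tests):
        classes, res = crc_residuals(X, y, x, lam)
        total = res if total is None else total + res
    return classes[np.argmin(total)]
```

In this sketch the fusion happens at the decision level: each modality's classifier contributes its residual vector, so a modality that is ambiguous for a given test sample can be outvoted by the others, which is the intuition behind the reported accuracy gains over single-modality classification.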