PROJECT TITLE:
Exploring Sparseness and Self-Similarity for Action Recognition
We propose that the dynamics of an action in video data form a sparse self-similar manifold in the space-time volume, which can be fully characterized by a linear rank decomposition. Inspired by recurrence plot theory, we introduce the concept of the Joint Self-Similarity Volume (Joint-SSV) to model this sparse action manifold, and propose a new optimized rank-1 tensor approximation of the Joint-SSV to obtain compact low-dimensional descriptors that very accurately characterize an action in a video sequence. We show that these descriptor vectors make it possible to recognize actions without explicitly aligning the videos in time to compensate for differences in execution speed or video frame rates. Moreover, we show that the proposed method is generic, in the sense that it can be applied using different low-level features, such as silhouettes, tracked points, histograms of oriented gradients, and so forth. Hence, our method does not necessarily require explicit tracking of features in the space-time volume. Our experimental results on five public data sets demonstrate that our method produces promising results and outperforms several baseline methods.
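The two core ingredients described above can be illustrated with a minimal sketch: building a self-similarity matrix from per-frame feature vectors, and compressing a stacked self-similarity volume with a rank-1 tensor approximation via alternating power iterations. This is not the authors' implementation; the function names, the choice of Euclidean distance, and the random toy data are our own assumptions.

```python
import numpy as np

def self_similarity_matrix(features):
    """features: (T, d) array, one feature vector per frame.
    Returns the (T, T) matrix of pairwise Euclidean distances:
    symmetric, with zeros on the diagonal."""
    diff = features[:, None, :] - features[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def rank1_tensor_approx(V, n_iter=50):
    """Rank-1 approximation of a 3-way tensor V by alternating
    (higher-order) power iterations. Returns unit vectors (a, b, c)
    and a scale s such that V is approximated by s * outer(a, b, c)."""
    I, J, K = V.shape
    a = np.ones(I) / np.sqrt(I)
    b = np.ones(J) / np.sqrt(J)
    c = np.ones(K) / np.sqrt(K)
    for _ in range(n_iter):
        # Contract V against two factors to update the third, then normalize.
        a = np.einsum('ijk,j,k->i', V, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', V, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', V, a, b); c /= np.linalg.norm(c)
    s = np.einsum('ijk,i,j,k->', V, a, b, c)
    return a, b, c, s

# Toy usage: 20 frames, 10-dim features, 3 feature channels stacked
# into a (20, 20, 3) joint self-similarity volume.
rng = np.random.default_rng(0)
volume = np.stack([self_similarity_matrix(rng.standard_normal((20, 10)))
                   for _ in range(3)], axis=-1)
a, b, c, s = rank1_tensor_approx(volume)
```

In this sketch the concatenation of the factor vectors (a, b, c) would play the role of the compact action descriptor: its length depends only on the volume's dimensions, not on absolute playback speed, which is what allows comparison without temporal alignment.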