PROJECT TITLE:
Semi-Supervised Image-To-Video Adaptation For Video Action Recognition - 2017
Human action recognition has been widely explored in computer vision applications. Many successful action recognition methods have shown that action knowledge can be effectively learned from motion videos or still images. For the same action, the knowledge learned from different types of media, e.g., videos or images, can be related. However, little effort has been made to improve the performance of action recognition in videos by adapting the action knowledge conveyed by images to videos. Most existing video action recognition methods suffer from a lack of sufficient labeled training videos; in such cases, over-fitting becomes a potential problem and the performance of action recognition is constrained. In this paper, we propose an adaptation method to enhance action recognition in videos by adapting knowledge from images. The adapted knowledge is used to learn the correlated action semantics by exploring the common components of both labeled videos and images. Meanwhile, we extend the adaptation method to a semi-supervised framework that can leverage both labeled and unlabeled videos. Thus, over-fitting is alleviated and the performance of action recognition is improved. Experiments on public benchmark datasets and real-world datasets show that our method outperforms several other state-of-the-art action recognition methods.
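To make the idea concrete, the sketch below shows one toy way such a scheme could look: a shared classifier over a common feature space is trained on plentiful labeled images together with a scarce labeled-video set, and is then extended semi-supervised style by adding confident pseudo-labels on unlabeled videos. This is only an illustrative sketch under our own assumptions (synthetic features, a simple softmax classifier, a pseudo-labeling rule); it is not the paper's actual algorithm.

```python
import numpy as np

# Illustrative sketch only: augment a scarce labeled-video set with labeled
# images (the adapted source domain) and with confident pseudo-labels on
# unlabeled videos. All names, sizes, and the pseudo-labeling threshold are
# assumptions for demonstration, not the method from the paper.
rng = np.random.default_rng(0)
d, n_cls = 8, 2                       # feature dimension, number of actions
W_true = rng.normal(size=(d, n_cls))  # hidden ground-truth scorer (toy data)

def sample(n):
    X = rng.normal(size=(n, d))
    return X, X.dot(W_true).argmax(axis=1)

X_img, y_img = sample(40)  # plentiful labeled still images
X_vid, y_vid = sample(10)  # scarce labeled videos
X_unl, y_unl = sample(30)  # unlabeled videos (labels held out for evaluation)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((d, n_cls))   # shared classifier over the common feature space
for step in range(200):
    X = np.vstack([X_img, X_vid])
    y = np.concatenate([y_img, y_vid])
    if step > 50:          # after a warm-up, add confident pseudo-labels
        p = softmax(X_unl.dot(W))
        keep = p.max(axis=1) > 0.9
        X = np.vstack([X, X_unl[keep]])
        y = np.concatenate([y, p[keep].argmax(axis=1)])
    P = softmax(X.dot(W))
    W -= 0.1 * X.T.dot(P - np.eye(n_cls)[y]) / len(X)  # cross-entropy step

# Accuracy on the held-out labels of the unlabeled-video pool
acc = float((softmax(X_unl.dot(W)).argmax(axis=1) == y_unl).mean())
```

The point of the sketch is the data mix, not the classifier: the image set compensates for the small labeled-video set, and the pseudo-labeled videos stand in for the semi-supervised term that helps curb over-fitting.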