PROJECT TITLE:
Information fusion from multiple cameras for gait-based re-identification and recognition
In this study, the authors present a fully automatic frontal (i.e. using front and back views only) gait recognition approach based on the depth information captured by multiple Kinect RGB-D cameras. The limited depth-sensing range restricts each of these Kinects to recording only a part of a complete gait cycle of a walking subject. Hence, data from more than one Kinect are fused so that the features of a full gait cycle can be extracted from the sequences captured independently by these cameras. To achieve this, it is imperative that the same subject be re-identified as he moves from the field of view of one camera to another. The authors use a set of soft-biometric features computed from the skeleton stream provided by the Kinect software development kit (SDK) for automatic re-identification. To enable such information fusion, and also to handle missing segments even after re-identification, features are extracted at the granularity of small fractions of a gait cycle. Experiments carried out on a data set with gait videos captured by Kinects from the back and front views, respectively, show promising results.
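To give a flavour of the skeleton-based re-identification step, the sketch below matches a subject entering one camera's view against subjects already enrolled by another camera, using static distances between skeleton joints as soft-biometric features. The joint names, the particular distances chosen, and the nearest-neighbour matching rule are illustrative assumptions for this sketch, not the authors' exact feature set or pipeline:

```python
import math

def soft_biometrics(joints):
    """Compute a soft-biometric descriptor from a skeleton frame.

    joints: mapping of joint name -> (x, y, z) position in metres,
    as one might read from a Kinect SDK skeleton stream. The joint
    pairs below are assumed for illustration.
    """
    pairs = [("head", "spine_base"),     # torso length
             ("shoulder_l", "elbow_l"),  # upper-arm length
             ("hip_l", "knee_l"),        # thigh length
             ("knee_l", "ankle_l")]      # shin length
    return [math.dist(joints[a], joints[b]) for a, b in pairs]

def reidentify(query_joints, gallery):
    """Re-identify a subject across cameras by nearest neighbour
    in soft-biometric feature space.

    gallery: mapping of subject id -> skeleton joints enrolled
    while the subject was visible to another camera.
    """
    q = soft_biometrics(query_joints)

    def distance(sid):
        g = soft_biometrics(gallery[sid])
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(q, g)))

    return min(gallery, key=distance)
```

Because the descriptor is built from view-independent limb lengths rather than appearance, it remains comparable across the front- and back-view cameras, which is what makes fusing the partial gait cycles from each Kinect possible.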