PROJECT TITLE:
Multimodal Person Reidentification Using RGB-D Cameras
Person reidentification consists of recognizing individuals across images acquired by different, possibly non-overlapping cameras of a surveillance network. While clothing appearance cues are widely used, other modalities can be exploited as additional information sources, such as anthropometric measures and gait. In this paper, we investigate whether the reidentification accuracy of clothing appearance descriptors can be improved by fusing them with anthropometric measures extracted from depth data, acquired by RGB-D sensors in unconstrained settings. We also propose a dissimilarity-based framework for building and fusing the multimodal descriptors of pedestrian images for reidentification tasks, as an alternative to the widely used score-level fusion. The experimental evaluation is carried out on two data sets including RGB-D data, one of which is a novel, publicly available data set that we acquired using Kinect sensors. Fusion with anthropometric measures increases the first-rank recognition rate of clothing appearance descriptors by up to 20%, while our fusion approach reduces the processing cost of the matching phase.
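To make the two fusion strategies mentioned above concrete, here is a minimal toy sketch in Python. All descriptors, dimensions, prototype choices, and weights below are illustrative assumptions, not taken from the paper: a dissimilarity-based representation maps each modality's descriptor to its vector of distances to a prototype set, concatenates the per-modality vectors, and matches once in the fused space, whereas score-level fusion combines per-modality distances with a weighted sum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy descriptors (dimensions are illustrative, not from the paper):
# appearance, e.g. a color histogram; anthropometric, e.g. a few body measures.
gallery_app = rng.random((5, 32))   # 5 gallery persons, 32-D appearance descriptor
gallery_anth = rng.random((5, 4))   # 4 anthropometric measures per person
probe_app = gallery_app[2] + 0.01 * rng.standard_normal(32)   # noisy view of person 2
probe_anth = gallery_anth[2] + 0.01 * rng.standard_normal(4)

# A small prototype set per modality (here simply the gallery itself, for brevity).
proto_app, proto_anth = gallery_app, gallery_anth

def dissim_vector(x, prototypes):
    """Map a descriptor to its vector of Euclidean dissimilarities to the prototypes."""
    return np.linalg.norm(prototypes - x, axis=1)

# Dissimilarity-based fusion: concatenate per-modality dissimilarity vectors,
# then perform a single matching step in the fused dissimilarity space.
def fused_repr(app, anth):
    return np.concatenate([dissim_vector(app, proto_app),
                           dissim_vector(anth, proto_anth)])

gallery_fused = np.stack([fused_repr(a, b) for a, b in zip(gallery_app, gallery_anth)])
probe_fused = fused_repr(probe_app, probe_anth)
ranking = np.argsort(np.linalg.norm(gallery_fused - probe_fused, axis=1))
print("dissimilarity-space rank-1 match:", ranking[0])   # person index 2

# Score-level fusion baseline: weighted sum of normalized per-modality distances
# (the 0.7/0.3 weights are arbitrary assumptions for this sketch).
d_app = np.linalg.norm(gallery_app - probe_app, axis=1)
d_anth = np.linalg.norm(gallery_anth - probe_anth, axis=1)
scores = 0.7 * d_app / d_app.max() + 0.3 * d_anth / d_anth.max()
print("score-level rank-1 match:", np.argsort(scores)[0])   # person index 2
```

Note that in the dissimilarity-based variant both modalities are folded into a single descriptor before matching, which is why only one distance computation per gallery entry is needed at match time.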