PROJECT TITLE :
Continuously Adaptive Data Fusion and Model Relearning for Particle Filter Tracking with Multiple Features - 2016
This paper presents a novel method for object tracking in a camera sensor using particle filters. The method enables multiple target and background models, spanning arbitrarily many features or imaging modalities, to be adaptively fused to provide optimal discriminating ability against changing backgrounds, which may present varying degrees of clutter and camouflage for different types of features at different times. Furthermore, we show how to continuously and robustly relearn all models, for all feature modalities, online during tracking, even for targets whose appearance may be continually changing. Both the data fusion weightings and the model relearning parameters are robustly adapted at every frame by extracting contextual information to inform saliency assessments of every part of every model. Additionally, we propose a two-step estimation method that improves robustness by preventing excessive drifting of particles while tracking past difficult, cluttered background scenes. We demonstrate the method by implementing a version of the tracker that combines both shape and color models, and by testing it on a publicly available benchmark data set. Results suggest that the proposed method outperforms a variety of well-known state-of-the-art trackers from the literature.
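The abstract describes three key ideas: per-feature likelihood models in a particle filter, fusion weights adapted per frame from each feature's discriminating ability, and resampling after estimation. The toy sketch below illustrates that pattern in one dimension with two stand-in cues (shape and color). It is not the paper's algorithm; the log-linear fusion rule, the peakedness-based saliency proxy, and all noise parameters are illustrative assumptions.

```python
import math
import random


def gaussian(x, mu, sigma):
    """Unnormalised Gaussian likelihood."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)


def adaptive_fusion_pf(observations, n_particles=300, seed=0):
    """Toy 1-D particle filter fusing two feature likelihoods
    (stand-ins for shape and color cues) with an adaptive weight.
    `observations` is a list of (z_shape, z_color) measurement pairs."""
    rng = random.Random(seed)
    # initialise particles around the first measurement
    particles = [rng.gauss(observations[0][0], 1.0) for _ in range(n_particles)]
    estimates = []
    for z_shape, z_color in observations:
        # propagate with random-walk dynamics (assumed motion model)
        particles = [p + rng.gauss(0.0, 0.5) for p in particles]
        # per-feature likelihoods
        l1 = [gaussian(p, z_shape, 1.0) for p in particles]
        l2 = [gaussian(p, z_color, 1.0) for p in particles]
        # adapt the fusion weight from each likelihood's peak-to-mean
        # ratio -- a crude proxy for the saliency assessment in the paper
        s1 = max(l1) / (sum(l1) / len(l1) + 1e-12)
        s2 = max(l2) / (sum(l2) / len(l2) + 1e-12)
        alpha = s1 / (s1 + s2)
        # log-linear fusion of the two cues, then normalisation
        w = [(a ** alpha) * (b ** (1.0 - alpha)) for a, b in zip(l1, l2)]
        total = sum(w) + 1e-12
        w = [wi / total for wi in w]
        # weighted-mean state estimate, then multinomial resampling
        estimates.append(sum(p * wi for p, wi in zip(particles, w)))
        particles = rng.choices(particles, weights=w, k=n_particles)
    return estimates
```

For example, feeding in measurements of a target drifting slowly to the right (with the color cue carrying a small constant bias) yields estimates that follow the true trajectory while the fusion weight rebalances the two cues each frame.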
Did you like this research project?
To get the guidelines, training, and code for this research project... Click Here