PROJECT TITLE :
Vision-Based Coordinated Localization for Mobile Sensor Networks
In this paper, we propose a coordinated localization algorithm for mobile sensor networks with camera sensors to operate in Global Positioning System (GPS)-denied areas or indoor environments. Mobile robots are partitioned into two groups. One group moves within the field of view of the remaining stationary robots. The moving robots are tracked by the stationary robots, and their trajectories are used as spatiotemporal features. From these spatiotemporal features, the relative poses of the robots are computed using multiview geometry, and the group of robots is localized with respect to the reference coordinate frame using the proposed multirobot localization method. Once the poses of all robots are recovered, a group of robots moves from one location to another while maintaining the robot formation for coordinated localization under the proposed multirobot navigation strategy. By taking advantage of a multiagent system, we can reliably localize robots over time as they perform a group task. In experiments, we demonstrate that the proposed method consistently achieves a localization error rate of 0.37% or less for trajectories of length between and using an inexpensive off-the-shelf robotic platform.
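The abstract states that relative poses between cameras are computed from corresponding trajectory points using multiview geometry. The paper does not publish its implementation here, but the standard way to recover relative pose from point correspondences in normalized image coordinates is the eight-point estimate of the essential matrix. The sketch below is an illustrative assumption, not the authors' code: `estimate_essential` is a hypothetical helper built on NumPy only, and the synthetic cameras and points exist purely to show the epipolar constraint being satisfied.

```python
import numpy as np

def estimate_essential(x1, x2):
    """Eight-point estimate of the essential matrix E from N >= 8
    corresponding points in normalized image coordinates (N x 2 arrays),
    so that hom(x2)^T @ E @ hom(x1) ~ 0 for each correspondence."""
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        A[i] = [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    # Least-squares null vector of A gives E up to scale.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential manifold: two equal singular values, one zero.
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt

# Synthetic ground truth: points in front of both cameras,
# second camera rotated about y and translated along x.
rng = np.random.default_rng(0)
X1 = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])
th = 0.1
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([1.0, 0.0, 0.0])
X2 = (R @ X1.T).T + t

# Perspective projection to normalized image coordinates.
x1 = X1[:, :2] / X1[:, 2:3]
x2 = X2[:, :2] / X2[:, 2:3]

E = estimate_essential(x1, x2)

# Check the epipolar constraint on every correspondence.
h1 = np.hstack([x1, np.ones((20, 1))])
h2 = np.hstack([x2, np.ones((20, 1))])
residuals = np.abs(np.sum((h2 @ E) * h1, axis=1))
print("max epipolar residual:", residuals.max())
```

In practice the recovered E would be decomposed into a rotation and a translation direction (e.g. via SVD, with a cheirality check to pick the valid solution), which yields the relative pose up to scale; the paper's coordinated scheme then fixes scale and the reference frame through the multirobot localization step.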