Visual Cue-Guided Rat Cyborg for Automatic Navigation [Research Frontier]


A rat robot is a kind of animal robot in which an animal is connected to a machine system through a brain-computer interface. The machine system generates electrical stimuli and delivers them to the animal's brain to control its behavior. The sensory capabilities and versatile motion of rat robots give them potential advantages over mechanical robots. However, most existing rat robots require a human operator to observe the environmental layout and guide navigation, which limits their applications. This work incorporates object detection algorithms into a rat robot system so that it can search for 'human-interesting' objects and use these cues to guide its behavior for automatic navigation. A miniature camera mounted on the rat's back captures the scene in front of the rat. The video is transmitted through a wireless module to a laptop, where object detection and identification algorithms locate objects of interest. The rat robot is then made to perform a specific motion automatically in response to a detected object, such as turning left. A single stimulus is usually not enough for the rat to complete a motion. Inspired by the fact that a human operator typically delivers a series of stimuli to a rat robot, we develop a closed-loop model that automatically issues a stimulus sequence according to the state of the rat and the objects in front of it, until the rat completes the motion successfully. As a result, the rat robot, which we refer to as a rat cyborg, can move according to the detected objects without the need for manual operation. The object detection methods and the closed-loop stimulation model are evaluated in experiments, which demonstrate that our rat cyborg can accomplish human-specified navigation automatically.
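The closed-loop stimulation model described above can be sketched as a simple control loop: map the detected visual cue to a desired motion, deliver one stimulus at a time, and re-check the rat's state after each stimulus until the motion is completed. The following is a minimal, hypothetical sketch of that loop; the names (`CUE_TO_MOTION`, `closed_loop_stimulate`, the cue labels, and the callbacks) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the closed-loop stimulation model.
# Cue labels and the callback interfaces are assumed for illustration.

CUE_TO_MOTION = {          # detected object label -> desired motion command
    "left_arrow": "turn_left",
    "right_arrow": "turn_right",
    "stop_sign": "halt",
}

def closed_loop_stimulate(cue, motion_done, stimulate, max_stimuli=10):
    """Issue stimuli until the rat completes the motion mapped to `cue`.

    cue         -- label produced by the object detector
    motion_done -- callable(motion) -> bool; True once the rat's state
                   indicates the motion has been performed
    stimulate   -- callable(motion); delivers one electrical stimulus
    Returns the number of stimuli issued, or -1 if the cue is unknown
    or the motion was not completed within `max_stimuli` stimuli.
    """
    motion = CUE_TO_MOTION.get(cue)
    if motion is None:
        return -1
    for n in range(1, max_stimuli + 1):
        stimulate(motion)        # deliver one pulse toward the desired motion
        if motion_done(motion):  # re-check the rat's state after each pulse
            return n             # motion completed: stop stimulating
    return -1                    # gave up after max_stimuli attempts
```

The key design point, as in the paper, is that the number of stimuli is not fixed in advance: the loop keeps observing the rat's state and stops as soon as the motion is completed, mimicking how a human operator repeats stimuli until the rat responds.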


