Visual Cue-Guided Rat Cyborg for Automatic Navigation [Research Frontier]


A rat robot is a kind of animal robot in which an animal is connected to a machine system through a brain-computer interface. The machine system generates electrical stimuli and delivers them to the animal's brain to control its behavior. The sensory capabilities and agile locomotion of rat robots give them potential advantages over mechanical robots. However, most existing rat robots require a human operator to observe the environmental layout and guide navigation, which limits their applications. This work integrates object detection algorithms into a rat robot system so that it can find objects of interest and use these visual cues to guide its behavior, enabling automatic navigation. A miniature camera mounted on the rat's back captures the scene in front of the animal, and the video is transmitted through a wireless module to a laptop, where object detection and identification algorithms locate the objects of interest. The rat robot is then made to perform a specific motion, such as turning left, automatically in response to a detected object. A single stimulus is usually not enough for the rat to complete a motion successfully. Inspired by the way human operators typically deliver a series of stimuli to a rat robot, we develop a closed-loop model that automatically issues a stimulus sequence according to the state of the rat and the objects in front of it, until the rat completes the motion. As a result, the rat robot, which we refer to as a rat cyborg, can move according to the detected objects without the need for manual operation. The object detection methods and the closed-loop stimulation model are evaluated in experiments, which demonstrate that our rat cyborg can accomplish human-specified navigation automatically.
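The closed-loop model described above can be pictured as a simple control loop: detect an object, map it to a motion command, then repeatedly stimulate and check the rat's state until the motion is completed. The following is a minimal Python sketch of that loop; the object labels, the label-to-command mapping, and the `detect_object`, `rat_state`, and `stimulate` callbacks are all hypothetical placeholders, not the interfaces used in the actual system.

```python
import time

# Hypothetical mapping from detected object labels to motion commands.
# The labels and commands are illustrative, not taken from the paper.
OBJECT_TO_COMMAND = {
    "left_arrow": "turn_left",
    "right_arrow": "turn_right",
    "target": "forward",
}

def closed_loop_navigate(detect_object, rat_state, stimulate,
                         max_stimuli=10, interval_s=0.5):
    """Issue a stimulus sequence until the rat completes the motion.

    detect_object() -> object label from the camera feed, or None
    rat_state()     -> the rat's current motion state, e.g. "turn_left"
    stimulate(cmd)  -> deliver one electrical stimulus for command cmd
    """
    label = detect_object()
    if label not in OBJECT_TO_COMMAND:
        return False  # no recognized object in front of the rat
    command = OBJECT_TO_COMMAND[label]

    for _ in range(max_stimuli):
        if rat_state() == command:   # motion completed: stop stimulating
            return True
        stimulate(command)           # one stimulus in the sequence
        time.sleep(interval_s)       # wait before re-checking the state
    return rat_state() == command
```

The key point the sketch illustrates is that stimulation is state-dependent: instead of a fixed number of stimuli, the loop keeps issuing them only while the rat has not yet performed the commanded motion, mirroring how a human operator repeats stimuli until the behavior is observed.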


