Synaptic Weight States in a Locally Competitive Algorithm for Neuromorphic Memristive Hardware


Memristors are promising for high-density neuromorphic nanoscale architectures that leverage in situ learning algorithms. While traditional learning algorithms commonly assume analog values for synaptic weights, physical memristors may have only a finite set of achievable states during online learning. In this paper, we simulate a learning algorithm with limitations on both the resolution of its weights and the means of switching between them, to explore how these properties affect classification performance. For our experiments, we use the locally competitive algorithm (LCA) by Rozell et al. in conjunction with the MNIST dataset and a set of natural images. We investigate the effects of both linear and non-linear distributions of weight states. Our results show that as long as the weight states are distributed roughly linearly, the algorithm remains effective for classifying digits, while image reconstruction benefits from non-linearity. Further, the resolution required of a device depends on its transition function between states; for transitions such as round-to-nearest, synaptic weights need around sixteen possible states (4-bit resolution) to obtain optimal results. We find that lowering the threshold required to switch states, or adding stochasticity to the system, can reduce that requirement to four states (2-bit resolution). The outcomes of our analysis are relevant for building effective neuromorphic hardware with state-of-the-art memristive devices.
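The abstract contrasts two ways a device might move between its finite weight states: a deterministic round-to-nearest transition and a stochastic one. The sketch below is an illustrative NumPy model of those two quantization rules applied to a linearly spaced state grid; it is not the paper's actual simulation code, and the function names and grid are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_nearest(w, states):
    """Round-to-nearest transition: snap each weight to the closest
    allowed device state (illustrative model, not the paper's code)."""
    idx = np.abs(states[None, :] - w[:, None]).argmin(axis=1)
    return states[idx]

def quantize_stochastic(w, states):
    """Stochastic transition: round each weight up or down to an adjacent
    state with probability proportional to its distance, so the expected
    quantized value equals the analog weight."""
    step = states[1] - states[0]               # assumes a linear state grid
    pos = np.clip((w - states[0]) / step, 0, len(states) - 1)
    lo = np.floor(pos).astype(int)             # index of the state below w
    frac = pos - lo                            # fractional distance to next state
    up = rng.random(len(w)) < frac             # round up with probability frac
    return states[np.minimum(lo + up, len(states) - 1)]

# Four states (2-bit resolution), linearly distributed in [0, 1]
states = np.linspace(0.0, 1.0, 4)
w = rng.random(8)
print(quantize_nearest(w, states))
print(quantize_stochastic(w, states))
```

With only four states, round-to-nearest loses up to half a step of precision on every update, whereas stochastic rounding preserves the weight in expectation, which is consistent with the abstract's finding that stochasticity lowers the resolution a device needs.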
