PROJECT TITLE:
A Sparse Coding Neural Network ASIC With On-Chip Learning for Feature Extraction and Encoding
Hardware-based computer vision accelerators will be an essential part of future mobile devices to meet low-power and real-time processing requirements. To achieve high energy efficiency and high throughput, the accelerator design can be massively parallelized and tailored to vision processing, an advantage over software solutions running on general-purpose hardware. In this work, we present an ASIC designed to learn and extract features from images and videos. The ASIC contains 256 leaky integrate-and-fire neurons connected in a scalable two-layer network of 8×8 grids linked in a 4-stage ring. Sparse neuron activation and the relatively small grid size keep the spike collision probability low, saving access arbitration. The weight memory is divided into a core memory and an auxiliary memory, such that the auxiliary memory is powered on only during learning, saving inference power. High-throughput inference is accomplished by operating the neurons in parallel. Efficient learning is implemented by passing parameter-update messages, which is further simplified by an approximation technique. A 3.06 mm² 65 nm CMOS ASIC test chip achieves a maximum inference throughput of 1.24 Gpixel/s at 1.0 V and 310 MHz, and on-chip learning can be completed in seconds. To improve power consumption and energy efficiency, the core memory supply voltage can be reduced to 440 mV to take advantage of the error resilience of the algorithm, reducing the inference power to 6.67 mW for a 140 Mpixel/s throughput at 35 MHz.
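As a rough illustration of the leaky integrate-and-fire dynamics behind the neurons described above, here is a minimal software sketch. This is not the chip's implementation; the leak factor, threshold, and input values are assumptions chosen for illustration only. Note how sparse activation falls out naturally: a neuron stays silent until its leaky accumulated input crosses the threshold.

```python
import numpy as np

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """One update step for a vector of leaky integrate-and-fire neurons.

    v: membrane potentials (one per neuron)
    input_current: weighted input to each neuron this step
    Returns (new potentials, boolean spike vector).
    """
    v = leak * v + input_current       # leaky integration of input
    spikes = v >= threshold            # a neuron fires when it crosses threshold
    v = np.where(spikes, 0.0, v)       # reset the neurons that fired
    return v, spikes

# 256 neurons, matching the grid size described in the abstract.
v = np.zeros(256)
v, s = lif_step(v, np.full(256, 0.5))  # sub-threshold: no spikes yet
v, s = lif_step(v, np.full(256, 0.6))  # accumulated potential crosses threshold
print(int(s.sum()))                    # prints 256: all neurons fired this step
```

In the ASIC, each neuron performs this accumulate-compare-reset loop in parallel in hardware, and the sparse spike vector is what gets routed through the 8×8 grids.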