PROJECT TITLE:
Low-power Implementation of Mitchell's Approximate Logarithmic Multiplication for Convolutional Neural Networks - 2018
This paper proposes a low-power implementation of the approximate logarithmic multiplier to reduce the power consumption of convolutional neural networks for image classification, taking advantage of their intrinsic tolerance to error. The approximate logarithmic multiplier converts multiplications into additions by taking approximate logarithms, achieving significant improvements in power and area while keeping the worst-case error low, which makes it suitable for neural network computation. The proposed design shows a significant improvement in power and area over previous work that applied logarithmic multiplication to neural networks, reducing power by up to 76.6% compared to exact fixed-point multiplication, while maintaining comparable prediction accuracy in convolutional neural networks on the MNIST and CIFAR-10 datasets.
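The core idea behind Mitchell's algorithm can be illustrated in software. The sketch below is an illustrative model only, not the paper's low-power hardware design: it approximates log2(n) of n = 2^k (1 + x) as k + x, adds the two approximate logarithms, and takes the approximate antilog. In hardware the fractional parts are handled with fixed-point mantissa adders; floating point is used here purely for clarity.

```python
def mitchell_multiply(a: int, b: int) -> int:
    """Approximate unsigned integer multiplication via Mitchell's algorithm.

    For n = 2^k * (1 + x) with 0 <= x < 1, log2(n) is approximated as k + x.
    The product is the approximate antilog of the sum of the two logs.
    Mitchell's approximation always underestimates; the worst-case relative
    error is about 11.1%.
    """
    if a == 0 or b == 0:
        return 0
    ka = a.bit_length() - 1           # characteristic: position of leading 1
    kb = b.bit_length() - 1
    xa = (a - (1 << ka)) / (1 << ka)  # mantissa fraction in [0, 1)
    xb = (b - (1 << kb)) / (1 << kb)
    log_sum = ka + kb + xa + xb       # approximate log2 of the product
    k = int(log_sum)                  # characteristic of the result
    x = log_sum - k                   # fractional part of the result
    return int((1 << k) * (1 + x))   # approximate antilog: 2^k * (1 + x)


# Powers of two are multiplied exactly; other operands incur a small error.
print(mitchell_multiply(8, 8))   # exact: 64
print(mitchell_multiply(5, 5))   # exact product is 25; Mitchell gives 24
```

Because the only arithmetic on the critical path is addition of the characteristics and mantissa fractions, a hardware realization replaces the multiplier array with adders and shifters, which is the source of the power and area savings reported in the paper.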
Did you like this research project?
To get the guidelines, training, and code for this research project... Click Here