PROJECT TITLE : Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations

ABSTRACT: This paper investigates the problem of training a deep convolutional neural network with both low-bitwidth weights and activations. Because the quantizer is non-differentiable, optimizing a low-precision network is very difficult and can lead to a significant loss of accuracy. To address this problem, we propose three practical solutions: (i) progressive quantization; (ii) stochastic precision; and (iii) joint knowledge distillation, all of which aim to improve network training. First, for progressive quantization, we suggest two strategies for progressively arriving at good local minima. Specifically, we propose first optimizing a network with quantized weights and only then quantizing its activations, in contrast to conventional methods that optimize both simultaneously. We also propose a second progressive quantization scheme that gradually lowers the bitwidth from high precision to low precision during training. Second, to alleviate the heavy training burden caused by the multi-round training stages, we further propose a one-stage stochastic precision strategy that randomly samples and quantizes sub-networks while keeping the other parts in full precision. Last but not least, we use a novel learning strategy that trains a full-precision model alongside the low-precision model; the full-precision model provides hints that guide the training of the low-precision model, significantly improving its performance.
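To make the ideas above concrete, here is a minimal, hedged sketch of two of the ingredients: a uniform quantizer of a given bitwidth, a progressive bitwidth schedule, and a stochastic precision mask that randomly selects which layers to quantize in a given iteration. This is an illustration only, not the paper's actual implementation; the function names, the stage boundaries in `stages`, and the sampling probability `p` are all hypothetical choices made for this example.

```python
import random

def uniform_quantize(x, bits):
    """Uniformly quantize a value x in [0, 1] onto 2**bits evenly
    spaced levels (a common building block in low-bitwidth training)."""
    levels = (1 << bits) - 1          # number of quantization intervals
    return round(x * levels) / levels

def bitwidth_schedule(epoch, stages=((0, 8), (10, 4), (20, 2))):
    """Progressive quantization, second scheme: lower the bitwidth
    from high precision to low precision as training advances.
    `stages` pairs (start_epoch, bits) are illustrative values."""
    bits = stages[0][1]
    for start, b in stages:
        if epoch >= start:
            bits = b
    return bits

def sample_precision_mask(num_layers, p=0.5, rng=None):
    """Stochastic precision: randomly choose which layers form the
    quantized sub-network this iteration; the rest stay full-precision."""
    rng = rng or random.Random(0)
    return [rng.random() < p for _ in range(num_layers)]
```

In a training loop one would, per iteration, look up the current bitwidth with `bitwidth_schedule(epoch)`, draw a mask with `sample_precision_mask(len(layers))`, and apply `uniform_quantize` only to the weights and activations of the masked layers (typically with a straight-through estimator for the backward pass, which this sketch omits).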
Extensive experiments on a variety of datasets (e.g., CIFAR-100 and ImageNet) demonstrate that the proposed methods are effective.