Ternary Compression for Federated Learning with Efficient Communication

PROJECT TITLE : Ternary Compression for Communication-Efficient Federated Learning

ABSTRACT: In many real-world applications, knowledge must be acquired from vast amounts of data stored in a variety of locations. However, growing privacy and security concerns, driven by the proliferation of smart mobile devices and Internet of Things (IoT) devices, pose many obstacles to data sharing. Federated learning offers a potential solution for privacy-preserving, secure machine learning: a global model is trained jointly without uploading the data distributed across multiple devices to a central server, so that learning takes place in a secure environment. However, the majority of published research on federated learning uses machine learning models with full-precision weights. Almost all of these models contain a large number of redundant parameters that do not need to be sent to the server, and transmitting them incurs excessive communication costs. To address this problem, we propose a federated trained ternary quantization (FTTQ) algorithm, which optimizes the quantized networks on the clients through a self-learning quantization factor. We provide theoretical results demonstrating the convergence of the quantization factors, the unbiasedness of FTTQ, and a reduced weight divergence. Based on FTTQ, we propose a ternary federated averaging protocol (T-FedAvg) that reduces both upstream and downstream communication in federated learning systems. Empirical experiments training widely used deep learning models on publicly available data sets show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data than the canonical federated learning algorithms.
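The core idea of FTTQ is to ternarize each client's weights to the set {-alpha, 0, +alpha}, where the quantization factor alpha is learned on the client rather than fixed in advance. The sketch below illustrates this in NumPy; the threshold heuristic, the `threshold_ratio` value, and the initialization of `alpha` from the surviving weights are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def ternarize(weights, threshold_ratio=0.05):
    """Quantize a weight tensor to {-alpha, 0, +alpha}.

    `threshold_ratio` and the alpha initialization below are illustrative
    choices, not the exact scheme from the FTTQ paper, where alpha is a
    self-learning factor optimized jointly with the network.
    """
    # Weights with magnitude below the threshold are pruned to zero.
    delta = threshold_ratio * np.max(np.abs(weights))
    mask = np.abs(weights) > delta
    # Initialize the quantization factor from the surviving weights.
    alpha = np.mean(np.abs(weights[mask])) if mask.any() else 0.0
    return alpha * np.sign(weights) * mask, alpha
```

Since each weight then takes one of only three values, it can be encoded in about log2(3) ≈ 1.6 bits plus a single scalar factor per layer, instead of 32 bits per full-precision weight, which is the source of the communication savings.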
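Building on the quantizer, one communication round of T-FedAvg can be pictured as follows: clients train locally, ternarize their models, and upload only the ternary tensors and scalar factors. Below is a hedged sketch under simplifying assumptions, reusing the `ternarize` helper above; `local_train` is a hypothetical stand-in for each client's local SGD, and the uniform average is a simplification of the data-size-weighted average used in FedAvg-style protocols.

```python
import numpy as np

def t_fedavg_round(global_weights, clients, local_train):
    # Broadcast the global model, let each client train locally, then
    # collect only the ternarized updates (upstream compression).
    client_models = []
    for client in clients:
        local_w = local_train(client, global_weights.copy())
        q, _alpha = ternarize(local_w)  # ternary tensor + one scalar go upstream
        client_models.append(q)
    # Server-side aggregation: uniform average of the quantized client models.
    # (Weighting clients by local data size, as in FedAvg, is omitted here.)
    return np.mean(client_models, axis=0)

# Toy usage with random "clients" as a smoke test of the round logic.
rng = np.random.default_rng(0)
w0 = rng.normal(size=1000)
step = lambda client, w: w - 0.01 * rng.normal(size=w.shape)  # stand-in for local SGD
w1 = t_fedavg_round(w0, range(5), step)
```

The downstream direction (server to clients) can be compressed analogously by quantizing the aggregated global model before broadcasting it, which is how the protocol addresses both upstream and downstream communication.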