PROJECT TITLE: Large Scale Tensor Factorization via Parallel Sketches

ABSTRACT: Tensor factorization methods have grown in popularity in recent years. Tensors are appealing for several reasons, one being their ability to directly model multi-relational data. For handling large tensors, we propose ParaSketch, a parallel tensor factorization algorithm that enables massive parallelism. The idea is to break the large tensor into a number of smaller tensors, decompose each small tensor in parallel, and combine the results to reconstruct the latent factors of interest. Prior art in this direction requires potentially very high complexity in both the (Gaussian) compression stage and the final combining stage. The proposed method instead uses sketching matrices for compression, which greatly reduces the complexity of the compression step and also leads to a much lighter combining step. Theoretical analysis shows that the compressed tensors inherit latent identifiability under mild conditions, establishing the correctness of the overall approach. Numerical experiments corroborate the theory and demonstrate the viability of the proposed algorithm.
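The abstract describes the pipeline only at a high level (compress, decompose in parallel, combine). Below is a minimal illustrative sketch of that pipeline, not the authors' code: it assumes a three-way tensor, CountSketch-style sparse sketching matrices, and TensorLy's parafac as the per-replica CPD solver. The helper name count_sketch_matrix and all dimensions are hypothetical, and the combining step is deliberately simplified; a faithful implementation must also resolve the per-replica permutation and scaling ambiguities (e.g., via shared anchor rows in the sketches) before the joint least-squares step.

```python
# Illustrative sketch of a ParaSketch-style pipeline (assumed, not the authors' implementation).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from tensorly.tenalg import multi_mode_dot

rng = np.random.default_rng(0)

def count_sketch_matrix(rows, cols, rng):
    """Sparse sketching matrix: one random +/-1 entry per column.
    Applying it is much cheaper than multiplying by a dense Gaussian map."""
    S = np.zeros((rows, cols))
    S[rng.integers(0, rows, size=cols), np.arange(cols)] = rng.choice([-1.0, 1.0], size=cols)
    return S

# Step 0: synthetic rank-R tensor X with latent factors A, B, C (ground truth for the demo).
I, J, K, R = 60, 50, 40, 5
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
X = tl.cp_to_tensor((np.ones(R), [A, B, C]))

# Step 1: compress X into P small replicas using sketching matrices on each mode.
P, L, M, N = 4, 20, 20, 20  # number of replicas and compressed dimensions (all >= R)
sketches = [(count_sketch_matrix(L, I, rng),
             count_sketch_matrix(M, J, rng),
             count_sketch_matrix(N, K, rng)) for _ in range(P)]
replicas = [multi_mode_dot(X, [U, V, W], modes=[0, 1, 2]) for U, V, W in sketches]

# Step 2: decompose each small replica independently (these CPD runs can execute in parallel).
local_cpds = [parafac(tl.tensor(Y), rank=R, n_iter_max=200) for Y in replicas]

# Step 3: combining (simplified).  Each replica p yields estimates of (U_p A, V_p B, W_p C)
# up to a per-replica column permutation and scaling.  Here we only show the least-squares
# recovery of the mode-0 factor, pretending the columns are already aligned across replicas.
U_stack = np.vstack([U for U, _, _ in sketches])                     # (P*L) x I
A_sketched = np.vstack([factors[0] for _, factors in local_cpds])    # (P*L) x R
A_hat, *_ = np.linalg.lstsq(U_stack, A_sketched, rcond=None)
print("recovered mode-0 factor shape:", A_hat.shape)                 # (I, R)
```

Under these assumptions, the compression step touches each column of a sketching matrix only once (a single signed entry per column), which is the source of the claimed savings over dense Gaussian compression, and the combining step reduces to modest least-squares problems once the replicas are aligned.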