PROJECT TITLE: Deep Pairwise Hashing for Cold-start Recommendation

ABSTRACT: Data sparsity and recommendation efficiency are two key challenges that must be overcome to improve the performance of online recommendation. Most earlier work in this area has concentrated on improving recommendation accuracy rather than efficiency. In this paper, we propose Deep Pairwise Hashing (DPH), which maps users and items to binary vectors in Hamming space so that a user's preference for an item can be computed efficiently from the Hamming distance between their codes, substantially speeding up online recommendation. To alleviate the data sparsity and item cold-start problems, DPH unifies user-item interaction data with item content information to learn effective representations of users and items. Specifically, we first pre-train a robust item representation from item content data with a Denoising Auto-encoder rather than a deterministic deep learning framework; we then fine-tune the whole framework with a pairwise ranking loss under discrete constraints, so that the training objective is consistent with the ultimate ranking goal of recommendation, and use the learned codes to generate recommendations for users. Finally, to optimize the proposed model under these discrete constraints, we adopt an alternating optimization strategy. Extensive experiments on three datasets show that DPH significantly outperforms state-of-the-art frameworks in data-sparse and item cold-start recommendation settings.

RELATED PROJECTS:
Temporal Patterns for Event Sequence Discovery Using the Policy Mixture Model to Cluster
Deep Cross-Output Knowledge Transfer Using Support Vector Machines with Stacked-Structure Least Squares
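The sketch below is a minimal, illustrative example (not the authors' implementation) of the two ideas the abstract relies on: scoring items by Hamming distance between binary user/item codes, and a relaxed pairwise (BPR-style) ranking loss of the kind used during fine-tuning. All names, code lengths, and toy sizes here are assumptions for illustration only.

```python
# Illustrative sketch: Hamming-distance scoring with binary codes and a relaxed
# pairwise ranking loss. Hypothetical toy setup, not the DPH reference code.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, code_len = 4, 100, 64   # assumed toy sizes

# Relaxed real-valued embeddings; sign() yields the discrete codes in {-1, +1}.
U = rng.standard_normal((n_users, code_len))
V = rng.standard_normal((n_items, code_len))
B_u, B_v = np.sign(U), np.sign(V)

def hamming_distance(b_u, B_items):
    """Hamming distance between one user code and all item codes in {-1,+1}^d."""
    # For +/-1 codes: dist = (d - <b_u, b_v>) / 2, so ranking by Hamming distance
    # is equivalent to ranking by inner product -- a fast online operation.
    return (B_items.shape[1] - B_items @ b_u) / 2

def top_k(user_idx, k=5):
    """Recommend the k items closest to the user in Hamming space."""
    dists = hamming_distance(B_u[user_idx], B_v)
    return np.argsort(dists)[:k]

def pairwise_ranking_loss(u, i_pos, i_neg):
    """Relaxed pairwise loss: an observed item should score above an unobserved one."""
    x = U[u] @ V[i_pos] - U[u] @ V[i_neg]
    return -np.log(1.0 / (1.0 + np.exp(-x)))

print("Top-5 items for user 0:", top_k(0))
print("Example pairwise loss:", pairwise_ranking_loss(0, 3, 7))
```

In a production system the same ranking can be computed with bitwise XOR and popcount over packed code words, which is what makes Hamming-space retrieval attractive for online recommendation.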