
Interactive Multimodal Learning for Venue Recommendation


PROJECT TITLE:

Interactive Multimodal Learning for Venue Recommendation

ABSTRACT:

In this paper, we propose City Melange, an interactive and multimodal content-based venue explorer. Our framework matches the interacting user to users of social media platforms who exhibit a similar style. The data collection integrates location-based social networks such as Foursquare with general multimedia sharing platforms such as Flickr or Picasa. In City Melange, the user interacts with a collection of images and thus implicitly with the underlying semantics. The semantic information is captured through convolutional deep network features in the visual domain and latent topics extracted with Latent Dirichlet Allocation in the text domain. These are then clustered to produce representative user and venue topics. A linear SVM model learns the interacting user's preferences and identifies similar users. The experiments show that our content-based approach outperforms the user-activity-based and popular-vote baselines even in the early phases of interaction, while also being able to recommend mainstream venues to mainstream users and off-the-beaten-track venues to aficionados. City Melange is thus shown to be a well-performing venue exploration approach.
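The following is a minimal sketch of the pipeline described in the abstract, implemented with scikit-learn: latent topics are extracted from venue text with Latent Dirichlet Allocation, clustered into representative venue topics, and a linear SVM is trained on the interacting user's feedback to rank venues. All venue descriptions and feedback labels below are hypothetical placeholders; the actual system also uses convolutional network image features and Foursquare/Flickr data, which are not reproduced here.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

# Hypothetical venue descriptions (stand-in for Foursquare/Flickr text).
venue_texts = [
    "cozy espresso bar with latte art and pastries",
    "underground club with techno and live DJs",
    "quiet museum of modern art and sculpture",
    "street food market with spicy noodles and dumplings",
    "rooftop cocktail lounge with skyline views",
    "indie bookstore cafe with poetry readings",
]

# 1. Latent topics in the text domain via Latent Dirichlet Allocation.
counts = CountVectorizer(stop_words="english").fit_transform(venue_texts)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
venue_topics = lda.fit_transform(counts)  # per-venue topic distributions

# 2. Cluster the topic representations into representative venue topics.
venue_clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(venue_topics)

# 3. Learn the interacting user's preferences with a linear SVM.
#    Labels come from which venues the user selected (1) or skipped (0).
user_feedback = np.array([1, 0, 1, 0, 1, 1])  # hypothetical interaction history
svm = LinearSVC(C=1.0).fit(venue_topics, user_feedback)

# 4. Score venues with the learned preference model and rank them.
scores = svm.decision_function(venue_topics)
ranking = np.argsort(-scores)
print("venue ranking:", ranking)
print("venue clusters:", venue_clusters)

In the full system the same preference model would also be used to find social-media users with similar taste, whose check-ins then supply candidate venues for recommendation.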




