PROJECT TITLE:
Interactive Multimodal Learning for Venue Recommendation
In this paper, we propose City Melange, an interactive, multimodal, content-based venue explorer. Our framework matches the interacting user to users of social media platforms who exhibit a similar style. The data collection integrates location-based social networks such as Foursquare with general multimedia sharing platforms such as Flickr or Picasa. In City Melange, the user interacts with a collection of images and thus implicitly with the underlying semantics. The semantic information is captured through convolutional deep network features in the visual domain and latent topics extracted with Latent Dirichlet Allocation in the text domain. These are then clustered to produce representative user and venue topics. A linear SVM model learns the interacting user's preferences and identifies similar users. Our experiments show that this content-based approach outperforms the user-activity-based and popular-vote baselines even in the early stages of interaction, while also being able to recommend mainstream venues to mainstream users and off-the-beaten-track venues to aficionados. City Melange is thus shown to be a well-performing venue exploration approach.
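The text-domain half of the pipeline described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the venue descriptions, the feedback labels, and all parameter choices (number of topics, clusters, SVM regularization) are hypothetical stand-ins, and it uses scikit-learn's LDA, k-means, and linear SVM in place of whatever tooling the paper actually used. It only shows the shape of the approach: extract latent topics from venue text, cluster them into representative topics, then fit a linear SVM on the interacting user's implicit feedback to rank venues.

```python
# Hedged sketch of the City Melange text pipeline (assumed, simplified):
# LDA topics from venue text -> clustering -> linear SVM on user feedback.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

# Toy venue descriptions standing in for Foursquare/Flickr text metadata.
venue_texts = [
    "espresso coffee pastry cozy cafe",
    "latte coffee brunch cafe quiet",
    "craft beer live music bar loud",
    "cocktails bar dancing night music",
    "museum art gallery exhibition paintings",
    "sculpture art museum modern gallery",
]

# 1) Latent topics in the text domain via Latent Dirichlet Allocation.
counts = CountVectorizer().fit_transform(venue_texts)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
venue_topics = lda.fit_transform(counts)  # shape: (n_venues, n_topics)

# 2) Cluster venues in topic space into representative venue topics.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(venue_topics)

# 3) A linear SVM learns the interacting user's preferences from implicit
#    feedback on shown images (1 = liked, 0 = skipped; hypothetical log).
feedback = np.array([1, 1, 0, 0, 1, 1])
svm = LinearSVC(C=1.0).fit(venue_topics, feedback)

# Rank venues by the SVM decision function: higher score = better match.
scores = svm.decision_function(venue_topics)
ranking = np.argsort(-scores)
print(ranking)
```

In the full system the same topic representation would be built for the visual domain from convolutional network features, and the learned preference model would also be used to find similar users, which the sketch omits.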