Semantic Interpretation of Top-N Recommendations


Model-based approaches have proven effective for computing recommendation lists across a variety of settings and domains. By learning latent factors, they produce highly accurate recommendations. However, once we move to the latent space, even when the model embeds content-based information, the actual semantics of the recommended items are lost, which makes the recommendations difficult to interpret. In this paper, we show how to initialize the latent factors of factorization machines with semantic features derived from a knowledge graph, training a model that is interpretable while remaining highly accurate. The proposed method injects semantic features into the learning process so as to preserve the original informativeness of the items in the dataset. We also propose two metrics, grounded in the information encoded in the original knowledge graph, to evaluate the semantic accuracy and robustness of knowledge-aware interpretability. An extensive experimental evaluation on six datasets shows that the interpretable model is effective in terms of recommendation accuracy and diversity, as well as interpretability robustness.
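To make the core idea concrete, here is a minimal sketch of a second-order factorization machine whose latent factors are seeded from knowledge-graph features rather than random noise. The feature matrix, item set, and KG facts below are hypothetical illustrations, not the paper's actual datasets or implementation; the scoring formula is the standard FM pairwise-interaction identity.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine score:
    y = w0 + w.x + 0.5 * sum_f ( (V^T x)_f^2 - ((V^2)^T x^2)_f ).
    x: feature vector (n,), w: linear weights (n,), V: latent factors (n, k)."""
    linear = w0 + w @ x
    Vx = V.T @ x                       # (k,) sum of active factor vectors
    V2x2 = (V ** 2).T @ (x ** 2)       # (k,) removes self-interaction terms
    pairwise = 0.5 * np.sum(Vx ** 2 - V2x2)
    return linear + pairwise

# Hypothetical KG feature matrix: one row per item, one column per KG fact
# (e.g., "genre=SciFi", "director=Nolan", "actor=X"). Seeding V with these
# rows ties each latent dimension to an explicit semantic feature, which is
# what makes the learned interactions interpretable.
kg_features = np.array([
    [1.0, 0.0, 1.0],   # item 0: genre=SciFi, director=Nolan
    [1.0, 1.0, 0.0],   # item 1: genre=SciFi, actor=X
])

n, k = kg_features.shape
V = kg_features.copy()   # semantic initialization instead of random init
w = np.zeros(n)
w0 = 0.0

# Score a context where both items' features are active; their pairwise
# interaction <v0, v1> = 1 comes entirely from the shared "genre=SciFi" fact.
x = np.array([1.0, 1.0])
score = fm_predict(x, w0, w, V)   # -> 1.0
```

In practice the seeded factors would then be refined by gradient descent on observed feedback, so each dimension keeps its semantic anchor while adapting to the data.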
