PROJECT TITLE:
Data-driven Answer Selection in Community QA Systems - 2017
Finding similar questions in historical archives has been applied to question answering, with sound theoretical underpinnings and notable practical success. Nevertheless, each question in the returned candidate pool is often associated with multiple answers, so users must painstakingly browse many of them before finding the right one. To alleviate this problem, we present a novel scheme to rank answer candidates via pairwise comparisons. In particular, it consists of one offline learning component and one online search component. In the offline learning component, we first automatically identify the positive, negative, and neutral training samples in terms of preference pairs, guided by our data-driven observations. We then present a novel model to jointly incorporate these three types of training samples, and derive its closed-form solution. In the online search component, we first collect a pool of answer candidates for the given question by finding its similar questions. We then sort the answer candidates by leveraging the offline-trained model to judge their preference orders. Extensive experiments on real-world vertical and general community-based question answering datasets have comparatively demonstrated its robustness and promising performance. In addition, we have released the code and data to facilitate other researchers.
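To illustrate the online search component, here is a minimal sketch of ranking a candidate pool by pairwise preference comparisons. It assumes a weight vector has already been learned offline from the preference pairs; the feature names, values, and the simple linear scoring used here are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: rank answer candidates by pairwise comparisons
# using an offline-trained linear preference model. All names, features,
# and weights below are illustrative assumptions.

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def rank_candidates(candidates, w):
    """Order answer candidates by counting pairwise preference wins.

    candidates: list of (answer_id, feature_vector) pairs.
    w: offline-trained weight vector; w . (x_i - x_j) > 0 is read as
       "answer i is preferred over answer j".
    Returns answer ids sorted from most to least preferred.
    """
    wins = {aid: 0 for aid, _ in candidates}
    for i, (aid_i, x_i) in enumerate(candidates):
        for aid_j, x_j in candidates[i + 1:]:
            # Equivalent to checking the sign of w . (x_i - x_j).
            if dot(w, x_i) > dot(w, x_j):
                wins[aid_i] += 1
            else:
                wins[aid_j] += 1
    return sorted(wins, key=wins.get, reverse=True)

# Toy candidate pool: three answers with two illustrative features
# (e.g. answerer reputation, lexical overlap with the question).
pool = [("a1", [0.2, 0.9]), ("a2", [0.8, 0.4]), ("a3", [0.5, 0.5])]
w = [1.0, 0.5]  # illustrative offline-learned weights
print(rank_candidates(pool, w))  # prints ['a2', 'a3', 'a1']
```

In practice the candidate pool would come from retrieving similar archived questions and gathering their answers, and the preference model would be the jointly trained one described above.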