Active Learning for Ranking through Expected Loss Optimization - 2015
Learning to rank arises in many information retrieval applications, ranging from web search and online advertising to recommender systems. In learning to rank, the performance of a ranking model is strongly affected by the number of labeled examples in the training set; on the other hand, obtaining labeled examples for training data is very expensive and time-consuming. This presents a great need for active learning approaches that select the most informative examples for ranking; however, in the literature there is still very limited work addressing active learning for ranking. In this paper, we propose a general active learning framework, expected loss optimization (ELO), for ranking. The ELO framework is applicable to a wide range of ranking functions. Under this framework, we derive a novel algorithm, expected discounted cumulative gain (DCG) loss optimization (ELO-DCG), to select the most informative examples. We then investigate both query-level and document-level active learning for ranking and propose a two-stage ELO-DCG algorithm that incorporates both query and document selection into active learning. Furthermore, we show that the algorithm is flexible enough to handle the skewed grade distribution problem through modification of the loss function. Extensive experiments on real-world web search data sets demonstrate the effectiveness of the proposed framework and algorithms.
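To make the idea of expected DCG loss concrete, the following is a minimal sketch (not the paper's actual implementation) of how one might score a query's informativeness: given a hypothetical per-document distribution over relevance grades, Monte Carlo sampling estimates the gap between the DCG of the model's ranking and the DCG of the ideal ranking for each sampled grade assignment. Queries with higher expected loss would be prioritized for labeling. The gain function `2^g - 1`, the sample count, and the data layout are all illustrative assumptions.

```python
import math
import random

def dcg(grades):
    # DCG with gain 2^g - 1 and a log2(rank + 1) position discount.
    return sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(grades))

def _sample_grade(dist, rng):
    # Draw one grade from a {grade: probability} distribution.
    r = rng.random()
    acc = 0.0
    for grade, p in dist.items():
        acc += p
        if r <= acc:
            return grade
    return grade  # guard against floating-point rounding

def expected_dcg_loss(model_scores, grade_probs, n_samples=1000, seed=0):
    """Monte Carlo estimate of the expected DCG loss for one query.

    model_scores[i] -- the ranker's score for document i
    grade_probs[i]  -- assumed distribution {grade: prob} for document i
    """
    rng = random.Random(seed)
    # Ranking induced by the current model (highest score first).
    order = sorted(range(len(model_scores)), key=lambda i: -model_scores[i])
    total = 0.0
    for _ in range(n_samples):
        g = [_sample_grade(d, rng) for d in grade_probs]
        predicted = dcg([g[i] for i in order])   # DCG of the model's ranking
        best = dcg(sorted(g, reverse=True))      # DCG of the ideal ranking
        total += best - predicted
    return total / n_samples
```

When the grade distributions are deterministic and already agree with the model's ordering, the expected loss is zero; uncertainty about grades that could reorder documents drives the loss up, flagging the query as informative.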