Query Specific Rank Fusion for Image Retrieval - 2015
Recently, two lines of image retrieval algorithms have demonstrated excellent scalability: 1) local features indexed by a vocabulary tree, and 2) holistic features indexed by compact hashing codes. Although both can retrieve visually similar images effectively, their retrieval precision may vary dramatically among queries. Therefore, combining these two types of methods is expected to further improve retrieval precision. However, the feature characteristics and algorithmic procedures of these methods are dramatically different, which makes feature-level fusion very difficult. This motivates us to investigate how to fuse the ordered retrieval sets, i.e., the ranks of images, returned by multiple retrieval methods, so as to boost retrieval precision without sacrificing scalability. In this paper, we model retrieval ranks as graphs of candidate images and propose a graph-based query specific fusion approach, in which multiple graphs are merged and reranked by conducting a link analysis on the fused graph. The retrieval quality of an individual method is measured on the fly by assessing the consistency of the top candidates' nearest neighborhoods. Hence, the approach can adaptively integrate the strengths of retrieval methods using local or holistic features for different query images. The proposed method requires no supervision, has few parameters, and is easy to implement. Extensive experiments have been conducted on four public datasets, i.e., UKbench, Corel-5K, Holidays, and the large-scale San Francisco Landmarks dataset. Our method achieves very competitive performance, including state-of-the-art results on several datasets, e.g., an N-S score of 3.83 on UKbench.
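The core idea of rank fusion described above can be sketched in a few lines. The snippet below is a minimal, illustrative toy version, not the paper's exact formulation: it assumes each retrieval method returns a ranked list of image IDs, builds a rank-weighted graph over the top candidates of each list, merges the graphs by summing edge weights, and reranks candidates by a simple link-analysis score (weighted degree in the fused graph). The function names, the rank-decay weighting, and the parameter `k` are all assumptions chosen for clarity.

```python
# Toy sketch of graph-based rank fusion for two retrieval methods.
# Assumptions: each method returns a ranked list of image IDs; edge
# weights decay with rank position; reranking uses weighted degree
# as a stand-in for the paper's link analysis.

def build_graph(ranked, k=5):
    """Connect the top-k candidates pairwise; edge weights decay with rank."""
    graph = {}
    top = ranked[:k]
    for i, a in enumerate(top):
        for j, b in enumerate(top):
            if a != b:
                w = 1.0 / (1 + i) + 1.0 / (1 + j)  # rank-decayed weight
                graph[(a, b)] = graph.get((a, b), 0.0) + w
    return graph

def fuse_graphs(g1, g2):
    """Merge two candidate graphs by summing the weights of shared edges."""
    fused = dict(g1)
    for edge, w in g2.items():
        fused[edge] = fused.get(edge, 0.0) + w
    return fused

def rerank(fused):
    """Score each candidate by its weighted degree in the fused graph."""
    score = {}
    for (a, _b), w in fused.items():
        score[a] = score.get(a, 0.0) + w
    return sorted(score, key=score.get, reverse=True)

# Hypothetical ranked lists from a local-feature and a holistic-feature method.
local_ranks = ["img3", "img7", "img1", "img9"]
holistic_ranks = ["img7", "img3", "img5", "img2"]
fused_ranking = rerank(fuse_graphs(build_graph(local_ranks),
                                   build_graph(holistic_ranks)))
```

Candidates returned by both methods accumulate weight from both graphs, so they rise to the top of the fused ranking; candidates seen by only one method keep only that method's support.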