PROJECT TITLE:
Semantic-Aware Co-Indexing for Image Retrieval
In content-based image retrieval, inverted indexes allow fast access to database images and summarize all knowledge about the database. Indexing multiple cues of image content allows retrieval algorithms to search for relevant images from different perspectives, which is appealing for delivering a satisfactory user experience. However, incorporating diverse image features during online retrieval makes it challenging to ensure retrieval efficiency and scalability. In this paper, for large-scale image retrieval, we propose a semantic-aware co-indexing algorithm that jointly embeds two strong cues into the inverted indexes: 1) local invariant features, which are robust for delineating low-level image content, and 2) semantic attributes from large-scale object recognition, which can reveal image semantics. Specifically, starting from an initial set of inverted indexes of local features, we utilize semantic attributes to filter out semantically isolated images and to insert semantically similar images into the indexes. Jointly encoding these two distinct and complementary cues effectively enhances the discriminative capability of the inverted indexes. These co-indexing operations are entirely off-line and introduce little computational overhead to online retrieval, because only local features, but no semantic attributes, are extracted for the query. Hence, this co-indexing differs from existing image retrieval methods that fuse multiple features or retrieval results. Extensive experiments and comparisons with recent retrieval methods demonstrate the competitive performance of our method.
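To make the off-line co-indexing step concrete, the following is a minimal Python sketch, not the authors' implementation: it assumes each database image has a precomputed semantic-attribute vector, and for each visual word's posting list it drops images whose average attribute similarity to the rest of the list falls below a deletion threshold, then inserts outside images whose attribute similarity to a kept image exceeds an insertion threshold. The function names and thresholds are illustrative assumptions.

```python
def cosine(a, b):
    """Cosine similarity between two attribute vectors (plain lists)."""
    num = sum(x * y for x, y in zip(a, b))
    da = sum(x * x for x in a) ** 0.5
    db = sum(y * y for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def co_index(postings, attributes, delete_thresh=0.2, insert_thresh=0.8):
    """Illustrative off-line co-indexing pass (thresholds are assumptions).

    postings:   dict mapping visual word -> list of image ids (initial index)
    attributes: dict mapping image id -> semantic-attribute vector
    Returns a refined posting list per visual word.
    """
    refined = {}
    for word, imgs in postings.items():
        # 1) Delete semantically isolated images from the posting list.
        kept = []
        for i in imgs:
            others = [j for j in imgs if j != i]
            if not others:
                kept.append(i)
                continue
            avg = sum(cosine(attributes[i], attributes[j]) for j in others) / len(others)
            if avg >= delete_thresh:
                kept.append(i)
        # 2) Insert semantically similar images not yet indexed under this word.
        extra = set()
        for i in kept:
            for j, att in attributes.items():
                if j not in imgs and cosine(attributes[i], att) >= insert_thresh:
                    extra.add(j)
        refined[word] = sorted(set(kept) | extra)
    return refined
```

Because this pass runs entirely off-line, the online query path still probes the refined index with local features only, which is where the "small overhead at query time" property comes from.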