Uniting Keypoints: Local Visual Information Fusion for Large-Scale Image Search

PROJECT TITLE: Uniting Keypoints: Local Visual Information Fusion for Large-Scale Image Search

ABSTRACT: In this paper, we propose a novel approach to deal with the problem of the huge quantity of local features in a large-scale database. First, in each image the local features are organized into dozens of groups by performing the k-means clustering algorithm on their spatial positions. Second, a compact descriptor is generated to describe the visual information of each group of local features. Since, in each image, thousands of local features are reorganized into only dozens of groups and each group is described by a single descriptor, the total number of descriptors in a large-scale database can be greatly reduced. Thus, the complexity of the search procedure is reduced considerably. Further, the generated group descriptors are encoded into a binary format to achieve storage and computation efficiency. Experiments on two benchmark datasets, i.e., UKBench and Holidays, together with the Flickr1M distractor database, demonstrate the effectiveness of the proposed approach.
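As a rough illustration of the grouping step described in the abstract, the sketch below clusters keypoint coordinates with k-means, pools the local descriptors of each spatial group into a single compact vector, and binarizes the result. The mean pooling, the sign-style thresholding, and all function names are illustrative assumptions for this sketch, not the paper's exact fusion or encoding scheme.

```python
# Minimal sketch: cluster keypoints by spatial position, then fuse each
# group's local descriptors into one compact binary code.
# Mean pooling and threshold-based binarization are illustrative choices,
# not the paper's exact fusion/encoding method.
import numpy as np
from sklearn.cluster import KMeans


def group_descriptors(positions, descriptors, n_groups=32):
    """positions: (N, 2) keypoint coordinates; descriptors: (N, D) local features.
    Returns an (n_groups, D) array of pooled group descriptors."""
    n_groups = min(n_groups, len(positions))
    labels = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(positions)
    # One descriptor per spatial group: average the local features in the group.
    return np.stack([descriptors[labels == g].mean(axis=0) for g in range(n_groups)])


def binarize(pooled):
    """Encode each group descriptor as a compact binary code by thresholding
    each component at the descriptor's mean and packing the bits into bytes."""
    bits = (pooled > pooled.mean(axis=1, keepdims=True)).astype(np.uint8)
    return np.packbits(bits, axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 640, size=(2000, 2))    # e.g. keypoint locations in a 640-px image
    desc = rng.normal(size=(2000, 128))          # e.g. 128-D SIFT descriptors
    codes = binarize(group_descriptors(pos, desc))
    print(codes.shape)  # (32, 16): 32 group codes of 128 bits each
```

In this toy run, 2,000 local descriptors per image are reduced to 32 binary group codes, which is the kind of reduction in descriptor count and storage the abstract refers to.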