PROJECT TITLE :
SLED: Semantic Label Embedding Dictionary Representation for Multi-label Image Annotation - 2015
Most existing approaches to weakly supervised image annotation depend on a jointly learned, unsupervised feature representation whose parts are not directly correlated with specific labels. In practice, however, there is often a large gap between the training and the testing data; for example, the label combinations seen at test time are not always consistent with those seen during training. To bridge this gap, this paper presents a semantic label embedding dictionary (SLED) representation that not only achieves a discriminative feature representation for each label in an image, but also mines the semantic relevance between co-occurring labels for contextual information. More specifically, to enhance the discriminative representation of labels, the training data is first divided into a set of overlapping groups by graph shift based on an exclusive-label graph. Then, given a group of exclusive labels, multiple label-specific dictionaries are learned to explicitly decorrelate the feature representation of each label, and a joint optimization based on the Fisher discrimination criterion is proposed to solve this problem. Next, to discover the contextual information hidden in co-occurring labels, the semantic relationship between the visual words in the dictionaries and the labels is explored in a multitask learning manner with respect to the reconstruction coefficients of the training data. In the annotation stage, given the discriminative dictionaries, the exclusive label groups, and a group-sparsity constraint, the reconstruction coefficients of a test image are easily obtained. Finally, a label propagation scheme computes the score of each label for the test image based on its reconstruction coefficients. Experimental results on three challenging data sets demonstrate that the proposed method yields significant performance gains over existing methods.
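The annotation stage described above can be illustrated with a minimal sketch: reconstruct a test feature using per-label dictionaries and score each label by how well its dictionary explains the image. This is not the authors' code; the dictionary contents, sizes, and the ridge/softmax scoring rule below are illustrative assumptions, whereas the paper uses Fisher-discriminative dictionaries, a group-sparsity constraint, and label propagation rather than plain least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

def least_squares_code(D, x, lam=0.1):
    """Ridge-regularized coefficients a minimizing ||x - D a||^2 + lam ||a||^2."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ x)

def label_scores(dicts, x):
    """Score each label by the reconstruction quality of its dictionary."""
    errs = []
    for D in dicts:
        a = least_squares_code(D, x)
        errs.append(np.linalg.norm(x - D @ a))
    errs = np.array(errs)
    # Softmax over negative errors: lower reconstruction error -> higher score.
    s = np.exp(-errs)
    return s / s.sum()

# Three hypothetical label-specific dictionaries (feature dim 20, 5 atoms each).
dicts = [rng.standard_normal((20, 5)) for _ in range(3)]
# A test feature lying in the span of label 0's dictionary.
x = dicts[0] @ rng.standard_normal(5)
scores = label_scores(dicts, x)
```

Here the dictionary that (nearly) spans the test feature yields the smallest residual, so its label receives the highest score; in the full method this scoring is refined by group sparsity over the exclusive label groups and by label propagation.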