PROJECT TITLE:
Semantic Annotation of High-Resolution Satellite Images via Weakly Supervised Learning
In this paper, we focus on the problem of automatic semantic annotation of high-resolution (HR) optical satellite images, which aims to assign one or several predefined semantic concepts to an image according to its content. The main challenges arise from the difficulty of characterizing the complex and ambiguous content of satellite images, and from the high human labor cost of preparing a large number of training examples with high-quality pixel-level labels, as required by fully supervised annotation methods. To address these challenges, we propose a unified annotation framework that combines discriminative high-level feature learning with weakly supervised feature transfer. Specifically, an efficient stacked discriminative sparse autoencoder (SDSAE) is first proposed to learn high-level features on an auxiliary satellite image data set for the land-use classification task. Motivated by the observation that the encoder of the pre-learned SDSAE can be regarded as a generic high-level feature extractor for HR optical satellite images, we then transfer the learned high-level features to the semantic annotation task. To compensate for the difference between the auxiliary data set and the annotation data set, the transferred high-level features are further fine-tuned in a weakly supervised scheme using tile-level annotated training data. Finally, the fine-tuning process is formulated as an optimization problem that can be solved efficiently with our proposed alternating iterative optimization method. Comprehensive experiments on a publicly available land-use classification data set and an annotation data set demonstrate the superiority of our SDSAE-based high-level feature learning method and the effectiveness of our weakly supervised semantic annotation framework compared with state-of-the-art fully supervised annotation methods.
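To make the feature-learning idea concrete, the following is a minimal single-layer sparse autoencoder sketch in NumPy. It is a simplified toy illustration, not the paper's SDSAE: the actual SDSAE stacks several such layers and adds a discriminative term, and all sizes, rates, and the synthetic data here are illustrative assumptions. The key elements shown are the reconstruction loss, the KL-divergence sparsity penalty on the average hidden activation, and the reuse of the trained encoder as a feature extractor.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_autoencoder(X, n_hidden=16, rho=0.05, beta=3.0,
                             lr=0.2, epochs=300):
    """Train a toy sparse autoencoder on X (n_samples, n_features) in [0, 1].

    rho  : target average activation of each hidden unit (sparsity target)
    beta : weight of the KL sparsity penalty
    Returns the encoder parameters and the per-epoch loss history.
    """
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, d)); b2 = np.zeros(d)
    losses = []
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)              # hidden activations (encoder)
        Xr = sigmoid(H @ W2 + b2)             # reconstruction (decoder)
        rho_hat = H.mean(axis=0)              # average activation per unit
        # reconstruction loss + KL(rho || rho_hat) sparsity penalty
        kl = np.sum(rho * np.log(rho / rho_hat)
                    + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
        loss = 0.5 * np.mean(np.sum((Xr - X) ** 2, axis=1)) + beta * kl
        losses.append(loss)
        # backpropagation through decoder, sparsity penalty, and encoder
        d_out = (Xr - X) * Xr * (1 - Xr) / n
        dW2 = H.T @ d_out
        db2 = d_out.sum(axis=0)
        d_sparse = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat)) / n
        d_hid = (d_out @ W2.T + d_sparse) * H * (1 - H)
        dW1 = X.T @ d_hid
        db1 = d_hid.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return (W1, b1), losses

# Synthetic stand-in for image tile descriptors (values in [0, 1]).
X = rng.random((64, 8))
(W1, b1), losses = train_sparse_autoencoder(X)

# After training, the encoder alone serves as a generic feature extractor,
# mirroring how the pre-learned SDSAE encoder is transferred to annotation.
features = sigmoid(X @ W1 + b1)
```

The transfer step in the paper corresponds to keeping only `(W1, b1)` and fine-tuning them on the weakly (tile-level) labeled annotation data rather than training from scratch.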