PROJECT TITLE:
A Multiple-Instance Densely-Connected ConvNet for Aerial Scene Classification
In contrast to natural scenes, aerial views generally contain many objects crowded on the ground surface seen from a bird's-eye perspective, so describing them demands more discriminative features and richer local semantics. Most existing convolutional neural networks (ConvNets) focus on the holistic meaning of an image, and the loss of low- and mid-level features becomes unavoidable as the network grows deeper. To address these issues, this paper proposes MIDC-Net, a multiple-instance densely-connected ConvNet for aerial scene classification. Aerial scene classification is cast as a multiple-instance learning (MIL) problem, which allows local semantics to be investigated more thoroughly. The classification model consists of an instance-level classifier, a multiple-instance pooling layer, and a bag-level classification layer. In the instance-level classifier, a simplified dense connection structure effectively preserves features from different levels; after the convolutional features are extracted, they are converted into instance feature vectors. A multiple-instance pooling step based on trainable attention then highlights the local semantics relevant to the scene label and produces the bag-level probability. Finally, the bag-level classification layer places the whole multiple-instance learning framework directly under the supervision of bag labels. With only a small number of parameters, the model outperforms many existing methods on three commonly used aerial scene benchmarks.
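The key pooling step described above can be illustrated with a minimal NumPy sketch of trainable-attention multiple-instance pooling: each instance feature vector receives an attention weight, and the weighted sum forms the bag-level representation. The function names, shapes, and random parameters below are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of attention scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(H, V, w):
    """Attention-based multiple-instance pooling (illustrative sketch).

    H: (n_instances, d) instance feature vectors from the instance-level classifier
    V: (d, k) and w: (k,) play the role of trainable attention parameters
    Returns the bag-level feature (d,) and the per-instance attention weights (n,).
    """
    a = softmax(np.tanh(H @ V) @ w)  # one attention weight per instance
    z = a @ H                        # weighted sum -> bag-level representation
    return z, a

# Toy usage: a "bag" of 6 instances with 8-dimensional features.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))
V = rng.normal(size=(8, 4))
w = rng.normal(size=4)
z, a = attention_mil_pool(H, V, w)
print(z.shape, a.shape)  # bag feature and attention weights; weights sum to 1
```

In a full model, a bag-level classification layer would map `z` to class probabilities, so the attention parameters are learned end-to-end under the supervision of bag labels, as the abstract describes.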