Unsupervised Domain Adaptation Using a Deep Ladder-Suppression Network

PROJECT TITLE: Deep Ladder-Suppression Network for Unsupervised Domain Adaptation

ABSTRACT: Unsupervised domain adaptation (UDA) aims to learn a classifier for an unlabeled target domain by transferring knowledge from a labeled source domain whose distribution is similar but distinct. Most existing methods learn domain-invariant features by adapting the entire information content of the images; however, forcing the features to also adapt domain-specific variations degrades their effectiveness. To address this problem, we propose a novel yet elegant module, the deep ladder-suppression network (DLSN), designed to better learn cross-domain shared content by suppressing domain-specific variations. The DLSN is realized as an autoencoder with lateral connections from the encoder to the decoder. In this design, the domain-specific details, which are needed only for reconstructing the unlabeled target data, are fed directly to the decoder to complete the reconstruction task, relieving the later layers of the shared encoder of the burden of learning domain-specific variations. As a result, DLSN allows the shared encoder to concentrate on learning cross-domain shared content while ignoring domain-specific variations. Notably, the proposed DLSN can serve as a standard module and be incorporated into a variety of existing UDA frameworks to further improve performance.
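The ladder-style design described above can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch toy model (the layer sizes and names are our own illustrative choices, not the paper's actual architecture): an autoencoder whose lateral connection adds an early encoder activation directly into the decoder path, so low-level, domain-specific detail can reach the reconstruction without passing through the top-level code that a classifier would consume.

```python
import torch
import torch.nn as nn

class LadderSuppressionSketch(nn.Module):
    """Illustrative ladder-suppression autoencoder (not the official DLSN).

    The lateral connection feeds the first-stage activation h1 straight
    to the decoder, so the top-level code z is free to carry only
    cross-domain shared content.
    """
    def __init__(self, dim: int = 64, hidden: int = 32, code: int = 16):
        super().__init__()
        self.enc1 = nn.Linear(dim, hidden)
        self.enc2 = nn.Linear(hidden, code)   # shared-content code for the classifier
        self.dec2 = nn.Linear(code, hidden)
        self.dec1 = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor):
        h1 = torch.relu(self.enc1(x))         # early features (domain-specific detail)
        z = torch.relu(self.enc2(h1))         # high-level shared-content code
        # Lateral connection: add h1 so reconstruction detail need not flow through z.
        d1 = torch.relu(self.dec2(z) + h1)
        recon = self.dec1(d1)                 # reconstruction of the input
        return z, recon

x = torch.randn(8, 64)                        # a batch of 8 feature vectors
z, recon = LadderSuppressionSketch()(x)
print(z.shape, recon.shape)
```

In a full UDA pipeline, `z` would additionally feed the classification head and any domain-alignment loss, while `recon` is compared against the input with a reconstruction loss on the unlabeled target data.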
Extensive experimental results on four standard domain adaptation benchmarks, namely 1) Digits, 2) Office-31, 3) Office-Home, and 4) VisDA-C, demonstrate that, without bells and whistles, the proposed DLSN consistently and significantly improves the performance of various popular UDA frameworks.