PROJECT TITLE:
A Multi-Domain and Multi-Modal Representation Disentangler for Cross-Domain Image Manipulation and Classification
Learning interpretable data representations has been a central goal in deep learning and computer vision. Representation disentanglement addresses this challenge, but existing methods cannot handle settings where data manipulation and recognition are desired across multiple domains. The Multi-Domain and Multi-Modal Representation Disentangler (M²RD) is a unified network architecture that learns domain-invariant content representations together with the observed domain-specific representations. By exploiting adversarial learning and disentanglement techniques, it enables continuous image manipulation across data domains with different modalities. More importantly, the resulting domain-invariant feature representation can be applied to unsupervised domain adaptation. Quantitative and qualitative results on the above tasks confirm the effectiveness and robustness of the proposed model.
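To make the disentanglement idea concrete, the following is a minimal toy sketch (not the paper's actual architecture, which uses deep convolutional networks): an encoder maps an image to a domain-invariant content code, a decoder reconstructs the image from that code concatenated with a domain-specific one-hot code, and a domain discriminator is trained adversarially so that the content code carries no domain information. All dimensions, names, and the linear stand-in networks here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed for illustration only).
X_DIM, CONTENT_DIM, N_DOMAINS = 8, 4, 3

# Linear stand-ins for the encoder E, decoder G, and domain discriminator D.
E = rng.normal(size=(CONTENT_DIM, X_DIM)) * 0.1               # x -> content z
G = rng.normal(size=(X_DIM, CONTENT_DIM + N_DOMAINS)) * 0.1   # (z, d) -> x'
D = rng.normal(size=(N_DOMAINS, CONTENT_DIM)) * 0.1           # z -> domain logits

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def forward(x, domain_id):
    z = E @ x                                    # domain-invariant content code
    d = np.eye(N_DOMAINS)[domain_id]             # domain-specific one-hot code
    x_rec = G @ np.concatenate([z, d])           # reconstruction from (z, d)
    return z, x_rec

def losses(x, domain_id):
    z, x_rec = forward(x, domain_id)
    rec_loss = np.mean((x - x_rec) ** 2)         # reconstruction objective
    p = softmax(D @ z)                           # discriminator's domain guess
    # Adversarial term for the encoder: log(N) - H(p) >= 0, which is zero
    # exactly when p is uniform, i.e. z reveals nothing about the domain.
    adv_loss = np.log(N_DOMAINS) + np.sum(p * np.log(p + 1e-12))
    return rec_loss, adv_loss

x = rng.normal(size=X_DIM)
rec, adv = losses(x, domain_id=1)

# Cross-domain manipulation: keep the content code, swap the domain code.
z, _ = forward(x, domain_id=1)
x_translated = G @ np.concatenate([z, np.eye(N_DOMAINS)[2]])
```

In a full implementation the discriminator would be trained to *predict* the domain while the encoder is trained against it (e.g. via a gradient-reversal layer or alternating updates); swapping the domain code at inference time is what allows image translation between domains, and the content code alone supports unsupervised domain adaptation.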