Deep learning architectures have brought a significant improvement in categorization accuracy over previous state-of-the-art solutions. However, the cost of acquiring labeled data for these models remains extremely high. To overcome the annotation burden, alternative solutions have been proposed in the literature that, on one hand, exploit unlabeled data from the target domain and, on the other, reuse data or models borrowed from similar domains. This process, referred to as domain adaptation (DA), has recently received a lot of attention in computer vision. In this paper we propose a comprehensive experimental study of discrepancy-based two-stream discriminative adaptation networks for unsupervised DA. We design and compare shallow and deep architectures, varying the parameter-sharing strategies and the discrepancy models used in the confusion loss. The models are tested on the two standard Office object datasets using different representations, as well as on a new DA dataset, called LandMarkDA, that we propose in this paper. The LandMarkDA dataset was built to study adaptation between landmark place-recognition models trained on different image styles, such as photos, paintings, and drawings. For this dataset in particular, we design a new DA method that combines style transfer between image modalities with deep discriminative adaptation networks.
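To illustrate the kind of discrepancy model that can serve as a confusion loss in such two-stream networks, a common choice is the (squared) maximum mean discrepancy (MMD) between batches of source and target features. The following is a minimal NumPy sketch, not the paper's implementation; the function names and the fixed Gaussian-kernel bandwidth are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel matrix between two batches of feature vectors
    # x: (n, d), y: (m, d) -> (n, m)
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy
    # between the source and target feature distributions.
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 16))   # stand-in for source-stream features
tgt_near = rng.normal(0.0, 1.0, size=(64, 16))  # same distribution
tgt_far = rng.normal(3.0, 1.0, size=(64, 16))   # shifted distribution

# Matched distributions yield a small MMD; shifted ones a larger value.
print(mmd2(src, tgt_near), mmd2(src, tgt_far))
```

In an adaptation network this quantity would be computed on the activations of corresponding layers of the two streams and minimized jointly with the source classification loss, pulling the target feature distribution toward the source one.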