Preprint · 2018

Co-regularized Alignment for Unsupervised Domain Adaptation

Kumar, Abhishek; Sattigeri, Prasanna; Wadhawan, Kahini; Karlinsky, Leonid; Feris, Rogerio; Freeman, William T.; Wornell, Gregory
Open Access · English
Published: 13 Nov 2018
Abstract
Deep neural networks, trained with large amounts of labeled data, can fail to generalize well when tested on examples from a target domain whose distribution differs from that of the training data, referred to as the source domain. It can be expensive or even infeasible to obtain the required amount of labeled data in every possible domain. Unsupervised domain adaptation sets out to address this problem, aiming to learn a good predictive model for the target domain using labeled examples from the source domain but only unlabeled examples from the target domain. Domain alignment approaches this problem by matching the source and target feature distributions...
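As a rough illustration of the recipe the abstract describes, and not the authors' exact objective, here is a minimal PyTorch sketch: two small feature extractors are each aligned across domains with a DANN-style domain discriminator (using alternating discriminator updates in place of gradient reversal), while a co-regularization term encourages their classifiers to agree on unlabeled target examples. All network sizes, loss weights, and names (feat_a, clf_a, disc_a, step) are illustrative assumptions, and the L2 agreement penalty is a simple stand-in for whatever agreement measure the paper actually uses.

```python
# Sketch only: toy 2-layer networks, adversarial feature alignment,
# and an L2 agreement term standing in for co-regularization.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_a = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # feature space A
feat_b = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # feature space B (different random init)
clf_a = nn.Linear(32, 10)                             # task classifier on A
clf_b = nn.Linear(32, 10)                             # task classifier on B
disc_a = nn.Linear(32, 1)                             # domain discriminator for A
disc_b = nn.Linear(32, 1)                             # domain discriminator for B

params = (list(feat_a.parameters()) + list(feat_b.parameters()) +
          list(clf_a.parameters()) + list(clf_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
opt_d = torch.optim.Adam(list(disc_a.parameters()) + list(disc_b.parameters()), lr=1e-3)

def align_loss(disc, ft):
    # Non-saturating adversarial alignment: the feature extractor tries to
    # make target features look like source ("1") to the discriminator.
    return F.binary_cross_entropy_with_logits(disc(ft), torch.ones(ft.size(0), 1))

def step(xs, ys, xt, lam_align=0.1, lam_agree=1.0):
    fs_a, ft_a = feat_a(xs), feat_a(xt)
    fs_b, ft_b = feat_b(xs), feat_b(xt)
    # (1) supervised loss on labeled source data, once per feature space
    task = F.cross_entropy(clf_a(fs_a), ys) + F.cross_entropy(clf_b(fs_b), ys)
    # (2) source/target distribution alignment within each feature space
    align = align_loss(disc_a, ft_a) + align_loss(disc_b, ft_b)
    # (3) co-regularization: the two hypotheses should agree on unlabeled target data
    agree = F.mse_loss(F.softmax(clf_a(ft_a), dim=1), F.softmax(clf_b(ft_b), dim=1))
    loss = task + lam_align * align + lam_agree * agree
    opt.zero_grad(); loss.backward(); opt.step()
    # Discriminator update: label source features 1, target features 0.
    d_loss = sum(
        F.binary_cross_entropy_with_logits(d(fs.detach()), torch.ones(xs.size(0), 1)) +
        F.binary_cross_entropy_with_logits(d(ft.detach()), torch.zeros(xt.size(0), 1))
        for d, fs, ft in [(disc_a, fs_a, ft_a), (disc_b, fs_b, ft_b)])
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    return loss.item()

# Toy usage: xs, xt are source/target minibatches, ys are source labels.
xs, ys, xt = torch.randn(8, 64), torch.randint(0, 10, (8,)), torch.randn(8, 64)
print(step(xs, ys, xt))
```

The design choice worth noting is that the agreement term is what ties the two otherwise independent alignments together: each feature space can satisfy the alignment loss with a different (possibly wrong) correspondence between source and target classes, and requiring the classifiers to agree on target data prunes alignments on which the diverse feature spaces disagree.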
Subjects
Keywords: Computer Science - Machine Learning; Statistics - Machine Learning