
Abstract
Unsupervised domain adaptation aims to use labeled instances from a source domain to train a learning model that can classify unlabeled instances from a target domain as accurately as possible. The biggest challenge is that the source and target datasets follow different distributions, so a classification model trained on the source domain cannot perform well on target domain data. Classic methods address this problem mainly by reducing the distributional distance between the source and target domains. These methods, however, are not optimal, since the nonlinear feature space may not match the kernel-based learning machine. In this paper, we design a new method called bi-adapt kernel learning (BAKL) to learn a domain-invariant kernel by adapting the source and target domains to each other simultaneously. Specifically, we derive new source and target domain kernel matrices according to Mercer's theorem. The domain-invariant kernel machines are then constructed by minimizing the approximation error between the newly generated kernel matrices and the ground-truth source domain kernel matrices. Experiments on benchmark text and object recognition tasks demonstrate that BAKL significantly improves classification accuracy compared to state-of-the-art methods.
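As a rough illustration of the kernel-alignment idea summarized above, the Python sketch below builds RBF kernel matrices for synthetic source and target samples and fits a transform B by gradient descent so that B Kt B^T approximates the ground-truth source kernel Ks in Frobenius norm. The synthetic data, RBF kernel choice, transform parameterization, and optimization settings are all illustrative assumptions; this is not the BAKL algorithm itself.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

# Illustrative sketch only: NOT the paper's BAKL method. It demonstrates
# minimizing the Frobenius-norm approximation error between a transformed
# target kernel matrix and a ground-truth source kernel matrix.

rng = np.random.default_rng(0)
Xs = rng.normal(size=(50, 10))            # labeled source instances (assumed)
Xt = rng.normal(size=(50, 10)) + 0.5      # mean-shifted target instances (assumed)

Ks = rbf_kernel(Xs)                       # ground-truth source kernel matrix
Kt = rbf_kernel(Xt)                       # raw target kernel matrix

# Learn a transform B so that B @ Kt @ B.T approximates Ks, i.e. minimize
# f(B) = ||B Kt B^T - Ks||_F^2 by plain gradient descent.
B = np.eye(50)
lr = 1e-3
for _ in range(500):
    R = B @ Kt @ B.T - Ks                 # current approximation error
    grad = 2 * (R + R.T) @ B @ Kt         # gradient of f w.r.t. B (Kt symmetric)
    B -= lr * grad

err = np.linalg.norm(B @ Kt @ B.T - Ks, "fro")
print(f"Frobenius approximation error after alignment: {err:.4f}")
```

In this toy setup the approximation error shrinks as B adapts the target kernel toward the source kernel; in the paper's setting the adaptation runs in both directions simultaneously and the learned domain-invariant kernel is then plugged into a kernel machine for classification.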
