Many domain adaptation methods learn a projection or transformation of the source and target domains into a common domain and train a classifier there, yet the performance of such algorithms has not been studied theoretically. Previous generalization bounds for domain adaptation relate the target loss to the discrepancy between the source and target distributions, but they do not account for the effect of learning a transformation between the two domains. In this work, we present generalization bounds on the target performance of domain adaptation methods that learn a transformation of the source and target domains jointly with a hypothesis. We show that, under regularity conditions on the loss, if the learned transformations reduce the distribution distance at a sufficiently high rate, then the expected target loss can be bounded with a probability that improves exponentially in the number of labeled samples.
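To make the setting concrete, here is a minimal illustrative sketch of the class of methods the abstract describes: a transformation is learned to align the source domain with the target domain, and a classifier is then trained on the transformed source. This is not the paper's algorithm; as an assumption, we use a simple CORAL-style second-order alignment on synthetic data, with a nearest-centroid classifier standing in for the hypothesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic source domain: two Gaussian classes with known labels.
Xs = rng.normal(size=(200, 2)) + np.repeat([[0.0, 0.0], [3.0, 3.0]], 100, axis=0)
ys = np.repeat([0, 1], 100)

# Target domain: the same classes under an unknown affine shift (labels withheld).
A = np.array([[1.2, 0.4], [-0.3, 1.1]])
Xt = (rng.normal(size=(200, 2))
      + np.repeat([[0.0, 0.0], [3.0, 3.0]], 100, axis=0)) @ A.T + 1.0
yt = np.repeat([0, 1], 100)  # used only for evaluation

def coral_transform(Xs, Xt, eps=1e-6):
    """Map source features so their mean/covariance match the target's."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    # Whiten the source covariance, then re-color with the target covariance.
    Ws = np.linalg.cholesky(np.linalg.inv(Cs))
    Wt = np.linalg.cholesky(Ct)
    return (Xs - Xs.mean(axis=0)) @ Ws @ Wt.T + Xt.mean(axis=0)

Xs_aligned = coral_transform(Xs, Xt)

# Train a hypothesis (nearest-centroid) on the transformed, labeled source.
centroids = np.stack([Xs_aligned[ys == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Xt[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
accuracy = (pred == yt).mean()
```

The alignment step plays the role of the "transformation reducing the distribution distance" in the bounds: after it, the source covariance matches the target covariance, so a classifier fit on the transformed source is evaluated against a much closer distribution.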
