Given that the performance you achieve depends on how far the target domain is from the source domain, how can you judge the performance of an algorithm?
1 Answer
You can measure the divergence between the source and target domains using the KL divergence (there are several ways to estimate it, e.g., estimators based on the k-NN algorithm). Then you can check whether there is a correlation between that divergence and the accuracy of the models over a few source-target pairs of datasets. You can also compare several Transfer Learning / Domain Adaptation algorithms on the same source-target datasets.
Christos Karatsalos
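As a concrete illustration of the answer above, here is a minimal sketch of a k-NN based KL divergence estimate in the spirit of the Perez-Cruz (2008) estimator cited in the comments below; the sample shapes, the choice of k=5, and the synthetic Gaussian data are assumptions made for the example, not part of the original answer.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(x, y, k=1):
    """k-NN estimate of KL(P || Q) from samples x ~ P and y ~ Q.

    x: (n, d) array of source samples, y: (m, d) array of target samples.
    Works with n != m; follows the nearest-neighbour construction discussed
    in Perez-Cruz (2008). Duplicate points (zero distances) will break the log.
    """
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    n, d = x.shape
    m = y.shape[0]
    # distance from each x_i to its k-th nearest neighbour inside x (skip x_i itself)
    rho = cKDTree(x).query(x, k=k + 1)[0][:, k]
    # distance from each x_i to its k-th nearest neighbour inside y
    nu = cKDTree(y).query(x, k=k)[0]
    if k > 1:
        nu = nu[:, k - 1]
    return d / n * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.normal(0.0, 1.0, size=(2000, 2))   # toy "source" sample
    tgt = rng.normal(0.5, 1.0, size=(3000, 2))   # toy shifted "target" sample
    est = knn_kl_divergence(src, tgt, k=5)
    # closed-form KL between N(0, I) and N(mu, I) is ||mu||^2 / 2 = 0.25 here
    print(f"estimated KL = {est:.3f} (closed form: 0.250)")
```

With a handful of source-target pairs, you could then pass the resulting divergences and the corresponding target accuracies to scipy.stats.spearmanr to check whether higher divergence goes with lower transfer accuracy, as the answer suggests.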
- Good idea. But KL divergence requires density estimation. If all I have are samples from both domains, and the sample sizes are not equal, what can I do about it? – Alex Apr 11 '19 at 06:58
- Why not use a stochastic estimate of the KL divergence? But in general, a low KL divergence does not guarantee good adaptation. It is fairly hard to measure the goodness of domain adaptation, and it depends on each case. – Andreas Look Apr 11 '19 at 07:21
- @Alex check the following paper: "Kullback-Leibler Divergence Estimation of Continuous Distributions", Fernando Perez-Cruz, 2008. – Christos Karatsalos Apr 11 '19 at 12:11
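One possible reading of the "stochastic KL divergence" suggestion in the comments is a plain Monte Carlo estimate over fitted density models; the sketch below uses Gaussian KDEs, which is an assumption made for illustration and only reasonable in low dimensions (the k-NN estimator above scales better with dimension).

```python
import numpy as np
from scipy.stats import gaussian_kde

def mc_kl_divergence(x, y):
    """Monte Carlo estimate of KL(P || Q) via kernel density fits.

    x: (d, n) array of samples from P, y: (d, m) array of samples from Q
    (gaussian_kde expects dimensions along the first axis).
    """
    p_hat = gaussian_kde(x)
    q_hat = gaussian_kde(y)
    # average the log density ratio over the source samples
    return float(np.mean(p_hat.logpdf(x) - q_hat.logpdf(x)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    src = rng.normal(0.0, 1.0, size=(1, 4000))   # 1-D source sample
    tgt = rng.normal(1.0, 1.0, size=(1, 4000))   # 1-D target sample
    # closed-form KL between N(0, 1) and N(1, 1) is 0.5
    print(f"estimated KL = {mc_kl_divergence(src, tgt):.3f} (closed form: 0.500)")
```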