Bridging domains using world wide knowledge for transfer learning

From WikiPapers

Bridging domains using world wide knowledge for transfer learning is a 2010 journal article by Evan Wei Xiang, Bin Cao, Derek Hao Hu, and Qiang Yang, published in IEEE Transactions on Knowledge and Data Engineering.

Abstract

A major problem of classification learning is the lack of ground-truth labeled data. It is usually expensive to label new data instances for training a model. To solve this problem, domain adaptation in transfer learning has been proposed to classify target domain data by using some other source domain data, even when the data may have different distributions. However, domain adaptation may not work well when the differences between the source and target domains are large. In this paper, we design a novel transfer learning approach, called BIG (Bridging Information Gap), to effectively extract useful knowledge in a worldwide knowledge base, which is then used to link the source and target domains for improving the classification performance. BIG works when the source and target domains share the same feature space but different underlying data distributions. Using the auxiliary source data, we can extract a bridge that allows cross-domain text classification problems to be solved using standard semi-supervised learning algorithms. A major contribution of our work is that with BIG, a large amount of worldwide knowledge can be easily adapted and used for learning in the target domain. We conduct experiments on several real-world cross-domain text classification tasks and demonstrate that our proposed approach can outperform several existing domain adaptation approaches significantly.
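The abstract's core idea, using auxiliary "bridge" documents from an outside knowledge base so that a standard semi-supervised learner can connect source and target vocabularies, can be illustrated with a minimal sketch. This is not the paper's actual BIG algorithm; it is a toy self-training centroid classifier over bag-of-words vectors, with all function names, thresholds, and example documents invented here for illustration. The point it shows is that unlabeled bridge text can import words (e.g. `smartphone`) that never appear in the labeled source domain but do appear in the target domain.

```python
from collections import Counter
import math

def bow(text):
    # Bag-of-words term counts for one document.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(docs):
    # Sum of term counts acts as a simple class centroid.
    c = Counter()
    for d in docs:
        c.update(d)
    return c

def self_train(source_labeled, bridge_unlabeled, rounds=2, threshold=0.2):
    # source_labeled: {label: [raw source-domain texts]}
    # bridge_unlabeled: raw auxiliary texts (the "worldwide knowledge" stand-in).
    # Confidently-labeled bridge docs are absorbed into the class centroids,
    # widening the vocabulary toward the target domain.
    labeled = {lab: [bow(t) for t in docs] for lab, docs in source_labeled.items()}
    pool = [bow(t) for t in bridge_unlabeled]
    for _ in range(rounds):
        cents = {lab: centroid(ds) for lab, ds in labeled.items()}
        keep = []
        for d in pool:
            scores = {lab: cosine(d, c) for lab, c in cents.items()}
            best = max(scores, key=scores.get)
            if scores[best] >= threshold:
                labeled[best].append(d)   # adopt high-confidence bridge doc
            else:
                keep.append(d)            # leave low-confidence docs in the pool
        pool = keep
    return {lab: centroid(ds) for lab, ds in labeled.items()}

def classify(text, centroids):
    d = bow(text)
    return max(centroids, key=lambda lab: cosine(d, centroids[lab]))
```

In a small run, a target document containing only words like `smartphone` (absent from the labeled source texts but present in a bridge document alongside source words) is still classified correctly, which is the gap-bridging effect the paper exploits with a far richer knowledge base and stronger learners.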
