1) The document presents a deep unsupervised domain adaptation method that combines graph matching with pseudo-label guided training.
2) It introduces a second-order matching term that captures structural correspondence between domains, in addition to a first-order term.
3) Training proceeds in two stages: the first stage reduces domain discrepancy via graph matching, and the second stage exploits unlabeled target data with pseudo-labels to further refine the decision boundaries.
Slide 1
Graph Matching and Pseudo-Label Guided Deep
Unsupervised Domain Adaptation
Debasmit Das C.S. George Lee
Assistive Robotics Technology Laboratory
School of Electrical and Computer Engineering
Purdue University, West Lafayette, IN, USA
Funding Source : National Science Foundation (IIS-1813935)
Slide 4
Introduction
Example: classifying dogs vs. cats.
When the training and testing distributions differ, domain adaptation is required.
Slide 5
Introduction
Domain Adaptation Methods
Non-Deep Methods
• Instance Re-weighting [Dai et al. ICML’07]
• Parameter Adaptation [Bruzzone et al. TPAMI’10]
• Feature Transformation [Fernando et al. ICCV’13; Sun et al. AAAI’16]
Deep Methods
• Discrepancy-Based [Long et al. ICML’15; Sun et al. ECCV’16]
• Adversarial-Based [Ganin et al. JMLR’16; Tzeng et al. CVPR’17]
Slide 6
Introduction
• Discrepancy-Based Methods
Mostly global metrics that minimize summary statistics of the data, such as covariance [Sun et al. ECCV’16] or maximum mean discrepancy [Long et al. ICML’15].
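These two global statistics can be sketched in a few lines of NumPy; the function names below are ours, not from the cited papers, and this is a minimal illustration rather than the full training losses:

```python
import numpy as np

def linear_mmd(xs, xt):
    """Squared distance between feature means (a first-moment statistic,
    the linear-kernel form of maximum mean discrepancy)."""
    return float(np.sum((xs.mean(axis=0) - xt.mean(axis=0)) ** 2))

def coral_distance(xs, xt):
    """Squared Frobenius distance between feature covariances
    (a second-moment statistic, as in correlation alignment)."""
    cs = np.cov(xs, rowvar=False)
    ct = np.cov(xt, rowvar=False)
    return float(np.sum((cs - ct) ** 2))
```

Both reduce each domain to a single statistic before comparing, which is what makes them "global": they say nothing about how individual samples correspond across domains.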
• Local Method
Optimal transport [Courty et al. TPAMI’17]: essentially point-to-point matching.
Relying only on first-order information can be misleading. How?
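A toy illustration of the distinction: a first-order cost compares individual points across domains, while a second-order term compares the intra-domain distance structure that a matching preserves. The function names and the soft-matching form below are our sketch under that interpretation, not the paper's exact objective:

```python
import numpy as np

def first_order_cost(xs, xt):
    # point-to-point squared Euclidean distances across domains
    # (the kind of cost matrix optimal transport operates on)
    return ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(axis=-1)

def second_order_mismatch(xs, xt, P):
    # compare intra-domain pairwise distances under a matching matrix P:
    # near zero only when matched points preserve neighborhood geometry
    ds = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=-1)
    dt = np.linalg.norm(xt[:, None, :] - xt[None, :, :], axis=-1)
    return float(np.sum((ds - P @ dt @ P.T) ** 2))
```

For a target domain that is simply a shifted copy of the source, every first-order cost is large, yet the identity matching has zero second-order mismatch: the structure, not the raw coordinates, is what corresponds.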
Slide 13
Our Method
Second-Stage Training Procedure
• Domain discrepancy has been reduced in the first stage
• Need to exploit the unlabeled target data
• Choose confident unlabeled samples based on a threshold
• ‘Sharpen’ the probabilities of these confident samples
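The selection and sharpening steps above can be sketched as follows; the threshold and temperature values are illustrative, and the exact sharpening function used in the paper may differ:

```python
import numpy as np

def select_and_sharpen(probs, threshold=0.9, T=0.5):
    """Keep target samples whose top class probability exceeds `threshold`,
    then sharpen the kept distributions with temperature T < 1."""
    mask = probs.max(axis=1) > threshold   # confident samples only
    p = probs[mask] ** (1.0 / T)           # raising to 1/T makes peaks peakier
    p = p / p.sum(axis=1, keepdims=True)   # renormalize to valid distributions
    return mask, p
```

Sharpening pushes each confident prediction toward a one-hot pseudo-label, so self-training on these samples moves the decision boundary away from dense regions of the target data.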
Slide 22
Conclusion
• The second-order matching term is important for matching structure in the data
• Refining decision boundaries by self-training with unlabeled data is beneficial
• Performance improvement on image recognition justifies the two-stage training
Future Work
Multiple source domain generalization