Don't Generate Me:
Training Differentially Private Generative Models with Sinkhorn Divergence
Tianshi Cao, Alex Bie, Arash Vahdat, Sanja Fidler, Karsten Kreis
PR - 358; Online Paper Reviews
Introduction
Data privacy refers to techniques that protect personal information so that no specific individual can be identified from the data.
1. Data anonymization
1) Removal of identifiers
2. Limitations of data anonymization
SITE A
Name: Cheolsu / ID: cjftn12 / Phone: ?
SITE B
Name: Cheolsu / ID: cjftn12 / Phone: 010-1234-5678
Q. Password recovery: each site shows the registered phone number with different digits masked.
Site A reveals: 010-1234-****
Site B reveals: 010-****-5678
Combining the two masked views recovers the full number: 010-1234-5678.
Even with the identifiers removed,
Name: * / ID: * / Phone: *
the remaining attributes still describe the person:
Gender: M
Birth year: 1993
Final education: ○○ University, ○○ department
and combining them can single someone out:
born in 1993 and ○○ University, ○○ department and TOEIC: 900 and ......
Quasi-identifiers: attributes that cannot identify an individual on their own, but may identify a specific person when combined with other quasi-identifiers.
Latanya Sweeney, "Simple Demographics Often Identify People Uniquely" (2000): gender, birth date, and postal code.
Showed that these three attributes alone are enough to uniquely identify 87% of the US population.
* identifiers (fields removed by anonymization)
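The linkage attack above can be sketched in a few lines of Python. All records, names, and attribute values below are hypothetical, invented purely for illustration:

```python
# Hypothetical data: an "anonymized" release with identifiers removed,
# and a second public source that still carries names.
anonymized_survey = [
    {"gender": "M", "birth_year": 1993, "major": "Economics", "toeic": 900},
]
public_roster = [
    {"name": "Cheolsu", "gender": "M", "birth_year": 1993, "major": "Economics"},
    {"name": "Younghee", "gender": "F", "birth_year": 1995, "major": "Law"},
]

QUASI = ("gender", "birth_year", "major")  # quasi-identifier columns

def link(record, roster):
    """Return roster names whose quasi-identifiers all match `record`."""
    return [p["name"] for p in roster
            if all(p[k] == record[k] for k in QUASI)]

for record in anonymized_survey:
    matches = link(record, public_roster)
    if len(matches) == 1:  # unique combination -> re-identified
        print(f"{matches[0]} re-identified; TOEIC = {record['toeic']}")
```

A uniquely matching quasi-identifier combination re-identifies the "anonymized" record, exactly the effect Sweeney's demographics result quantifies.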
Introduction
2) Statistical Noise - Differential Privacy
竜-differential privacy
A technique that models privacy mathematically, providing a measurable criterion for the level of privacy protection: the privacy of the data is preserved by adding calibrated noise to the data-analysis model.
[Figure: trade-off between privacy protection and data utility, marking the ideal and acceptable regions]
A randomized algorithm A satisfies ε-differential privacy if, for all datasets D1 and D2 that differ in a single record and for every subset S of possible outputs,

Pr[A(D1) ∈ S] ≤ e^ε · Pr[A(D2) ∈ S],

where A is the algorithm, D1 and D2 are datasets differing in only one record, and S is a subset of output values.
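A standard mechanism that satisfies ε-DP for numeric queries (not specific to this paper) is the Laplace mechanism: add noise of scale sensitivity/ε to the query answer. A minimal sketch for a counting query of sensitivity 1; function and parameter names are my own:

```python
import math
import random

def laplace_mechanism(true_value, epsilon, sensitivity=1.0):
    """Release true_value + Laplace(0, sensitivity/epsilon) noise.

    For a query whose answer changes by at most `sensitivity` when one
    record changes, this satisfies epsilon-DP: the output density shifts
    by at most a factor of e^epsilon between neighboring datasets.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5                   # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))  # inverse-CDF sample
    return true_value + noise
```

Smaller ε forces larger noise, which is exactly the privacy-utility trade-off in the slide: stronger protection, lower data utility.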
Related Works
DP-GAN (Differentially Private GANs) applies DP to GANs: the discriminator model is trained with added noise, and the generator trained against it is then a private generative model.
DP-GAN
The model's weights are updated following the Wasserstein distance, so noise is injected into the critic's gradients, combined with a gradient regularization term:
Noised gradients + regularization
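The "noised gradients" element is typically implemented as DP-SGD-style gradient sanitization (Abadi et al., 2016): clip each per-example gradient to a fixed L2 norm, then add Gaussian noise calibrated to that clipping bound. A minimal NumPy sketch; names and parameters are illustrative, not the paper's:

```python
import numpy as np

def privatize_gradients(per_example_grads, clip_norm, noise_multiplier, rng):
    """DP-SGD style gradient sanitization (sketch).

    1. Clip each example's gradient to L2 norm <= clip_norm.
    2. Sum, then add Gaussian noise with std = noise_multiplier * clip_norm.
    3. Average over the batch.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Clipping bounds each example's influence (the sensitivity), so the added Gaussian noise yields a DP guarantee that is then tracked with a moments/RDP accountant.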
DP-Sinkhorn
Because the Wasserstein distance used by DP-GAN-style methods is difficult to compute exactly, DP-Sinkhorn proposes using the entropy-regularized Wasserstein distance (Entropy-regularized Wasserstein Distance) instead.
DP-Sinkhorn
Entropy-regularized Wasserstein Distance Sinkhorn Divergence
DP-Sinkhorn
Sinkhorn Loss
Debiased Sinkhorn Loss
Semi-debiased Sinkhorn Loss
Each of these variants is shown to satisfy DP (in particular RDP, Rényi differential privacy) (Theorem 4.1).
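The entropy-regularized transport cost behind these losses can be computed with Sinkhorn iterations, and debiasing subtracts the two self-transport terms so the divergence vanishes when the two distributions coincide. A minimal NumPy sketch; function names are mine, and this variant reports the transport cost of the regularized plan (one of several conventions):

```python
import numpy as np

def sinkhorn_cost(cost, a, b, lam, n_iters=200):
    """Entropy-regularized OT cost via Sinkhorn iterations (sketch).

    cost: (n, m) pairwise cost matrix; a, b: marginal weights summing to 1;
    lam: entropy regularization strength (kernel K = exp(-cost / lam)).
    """
    K = np.exp(-cost / lam)
    u = np.ones_like(a)
    for _ in range(n_iters):          # alternate marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # regularized transport plan
    return float(np.sum(P * cost))

def sinkhorn_divergence(cost_ab, cost_aa, cost_bb, a, b, lam):
    """Debiased Sinkhorn divergence: subtract the self-transport terms
    so the divergence is ~0 when the two distributions coincide."""
    return (sinkhorn_cost(cost_ab, a, b, lam)
            - 0.5 * sinkhorn_cost(cost_aa, a, a, lam)
            - 0.5 * sinkhorn_cost(cost_bb, b, b, lam))
```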
Experiments
Dataset: MNIST, Fashion-MNIST, and CelebA downsampled to 32x32 pixels
Metrics: image quality - FID // utility of generated data - test accuracy with logistic regression, MLP, and CNN classifiers
Architectures & Hyperparameters:
- Architecture: DCGAN (MNIST, Fashion-MNIST) // BigGAN (CelebA)
- Parameters: λ = 0.05 (MNIST, Fashion-MNIST), 5 (CelebA)
Privacy Implementation: (10, 10^-5)-DP // (10, 10^-6)-DP
Cost Function: ERWD (entropy-regularized Wasserstein distance)
Experiments
Experimental Results [figure]
Experiments
Privacy-Utility Trade-off [figure]
Ablating loss functions on CelebA [figure]
Conclusion
Experimentally demonstrates superior performance compared to the previous state of the art, both in terms of image quality and on standard image classification benchmarks using data generated under DP.
Limited image quality is the main challenge in DP generative modeling; future work includes designing more expressive generator networks that can further improve synthesis quality while satisfying differential privacy.