A Beginner's guide to understanding Autoencoder
2017.03.06
伎轟
https://blog.keras.io/building-autoencoders-in-keras.html
2
How would we express an autoencoder in a single formula?
Input = Decoder(Encoder(Input))
In other words: compress the input, then reconstruct it.
Because no labels are needed,
it is classified as unsupervised learning with neural nets.
3
1. The input is turned into a code, and the code is then used
2. The dimension of the input (X) is reduced
3. X -> X (the original dimension is restored)
Let's understand this with a picture:
the simplest case is a single hidden layer (the code is the hidden layer),
a bottleneck code.
https://en.wikipedia.org/wiki/Autoencoder
4
Expressing the picture as formulas:
φ : X → F (encoder)
ψ : F → X (decoder)
φ, ψ = argmin ||X − (ψ∘φ)(X)||² (the encoder and decoder that minimize the distance between X and X')
z = σ(Wx + b) (code)
σ: the activation function (a non-linearity such as ReLU is commonly used)
x' = σ'(W'z + b') (output)
Training minimizes the loss L(x, x') = ||x − x'||², i.e. the distance between x and x'.
Because the dimension p of z is smaller than the dimension d of x, the encoder can be viewed as compressing x.
The constraint p < d itself acts as a kind of regularization: without it the network could get away with learning the identity function f(x) = x, so some such regularization is essential.
Both the code and the output are produced by deterministic mappings.
https://en.wikipedia.org/wiki/Autoencoder
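A minimal sketch of these formulas in Keras (the library used in the blog post linked on the title slide); the sizes d=784, p=32 and the activation choices are illustrative assumptions, not taken from the slides.

```python
# Single-hidden-layer autoencoder sketch: encoder z = s(Wx + b),
# decoder x' = s'(W'z + b'), trained to minimize ||x - x'||^2.
from tensorflow import keras
from tensorflow.keras import layers

d, p = 784, 32                                       # input dim d, code dim p (p < d)

x_in = keras.Input(shape=(d,))
z = layers.Dense(p, activation="relu")(x_in)         # encoder (code)
x_out = layers.Dense(d, activation="sigmoid")(z)     # decoder (output)

autoencoder = keras.Model(x_in, x_out)
autoencoder.compile(optimizer="adam", loss="mse")    # L(x, x') = ||x - x'||^2

# The input is its own target, which is why no labels are needed:
# autoencoder.fit(X, X, epochs=10, batch_size=256)
```

Fitting with the input as its own target is exactly the label-free setup described on slide 2.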
5
So if we train an autoencoder, what is it actually good for...?
Foreshadowing: what meaning does the code (z) carry?
6
The paper that first analyzed the autoencoder (Baldi, P. and Hornik, K., 1989)
showed that an autoencoder trained with backpropagation effectively performs PCA.
Baldi, P. and Hornik, K. (1989) Neural networks and principal components analysis: Learning from examples without local minima. Neural Networks
But wait... what is PCA (Principal Component Analysis)?
7
PCA (Principal Component Analysis)
(If terms like orthogonal and projection are unfamiliar, it is worth brushing up on basic linear algebra first!)
An orthogonal transformation that turns N correlated variables into M uncorrelated variables, usually reducing the dimension from N to M (with M smaller than N)! These M uncorrelated variables are called the principal components.
(This assumes the data lies close to a linear manifold in the high-dimensional space.)
In other words: find the M orthogonal directions in which the data varies the most, and project the N-dimensional data onto them. Information along the remaining directions is thrown away in the process, but because the kept directions carry the largest variation, the loss is kept to a minimum.
For the directions that are not kept, reconstruction uses the mean value over the whole data set.
(As in Figure 2, each point loses exactly its perpendicular distance to the green line.)
(Figure 1) PCA: 3D -> 2D    (Figure 2) PCA: 2D -> 1D
http://www.nlpca.org/pca_principal_component_analysis.html
https://en.wikipedia.org/wiki/Principal_component_analysis
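A small NumPy sketch of the procedure just described: center the data, take the top-M orthogonal directions via SVD, project onto them, and reconstruct. The function name and the commented usage are illustrative, not from the slides.

```python
import numpy as np

def pca_reconstruct(X, M):
    """Project N-dimensional data onto its M directions of largest variation
    and reconstruct; returns the M-dimensional codes and the reconstruction."""
    mean = X.mean(axis=0)
    Xc = X - mean                                   # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:M].T                                    # orthogonal principal directions
    codes = Xc @ W                                  # project: N dims -> M dims
    X_hat = codes @ W.T + mean                      # reconstruct from the M directions
    return codes, X_hat

# X = np.random.randn(1000, 10)
# codes, X_hat = pca_reconstruct(X, M=2)
# reconstruction_error = np.mean((X - X_hat) ** 2)
```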
8
Going beyond PCA
(rather than just reading this, actually training one on some data will probably make it click...)
An autoencoder that implements PCA:
build a network that reconstructs the output x' to match the input x, and train it with gradient descent to minimize the reconstruction error.
The code z, made of M hidden units, is a compressed representation of the N-dimensional input.
A linear autoencoder reaches the same reconstruction error as PCA, but its hidden units need not coincide with the principal components (they span the same space but may be rotated and skewed relative to the PCA axes!).
If we then add non-linear layers before and after the code, the network can handle data that lies on a curved (non-linear) manifold; this is a powerful generalization of PCA.
The encoder converts coordinates in the input space into coordinates on the manifold.
The decoder converts manifold coordinates back into output-space coordinates.
However, if the initial weights are not near a good solution, training can get stuck in local minima.
Lecture 15.1  From PCA to autoencoders [Neural Networks for Machine Learning] by Geoffrey Hinton
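To check the claim that a linear autoencoder reaches the same reconstruction error as PCA, here is a sketch assuming scikit-learn and Keras; the synthetic data, M=5, and the training budget are arbitrary illustrative choices.

```python
# Linear autoencoder vs. PCA: with linear units and squared error, the M hidden
# units span the same subspace as the top-M principal components, so the two
# reconstruction errors should end up (almost) identical.
import numpy as np
from sklearn.decomposition import PCA
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.RandomState(0)
X = rng.randn(2000, 20) @ rng.randn(20, 50)        # synthetic low-rank-ish data
M = 5

pca = PCA(n_components=M).fit(X)
err_pca = np.mean((X - pca.inverse_transform(pca.transform(X))) ** 2)

inp = keras.Input(shape=(X.shape[1],))
code = layers.Dense(M, activation="linear")(inp)           # M linear hidden units
out = layers.Dense(X.shape[1], activation="linear")(code)
lin_ae = keras.Model(inp, out)
lin_ae.compile(optimizer="adam", loss="mse")
lin_ae.fit(X, X, epochs=200, batch_size=64, verbose=0)

err_ae = lin_ae.evaluate(X, X, verbose=0)
print(err_pca, err_ae)    # the autoencoder error approaches the PCA error
```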
9
So, again... what is an autoencoder actually good for...?
It shows that non-linear dimension reduction is possible,
and that in the linear case its performance matches PCA.
Still not obvious what it is for?!?! (well, this was right after the autoencoder first appeared...)
Keep in mind that at the time PCA really was used everywhere!
(for example, people used it heavily for visualization and for classification)
10
Baldi, P. and Hornik, K.
Neural networks and principal components analysis: Learning from
examples without local minima. Neural Networks
Hinton & Salakhutdinov,
Reducing the Dimensionality of Data with Neural Networks, Science
Let's look at pretraining and the Deep Autoencoder!
Reducing the Dimensionality of Data with Neural Networks, Hinton & Salakhutdinov, Science, 2006
2006
1989
11
Reducing the Dimensionality of Data with Neural Networks, Hinton & Salakhutdinov, Science, 2006
[Lecture 15.2] Deep autoencoders by Geoffrey Hinton
The first successful Deep Autoencoder, built by Hinton and Salakhutdinov
A deep autoencoder looked like a very attractive way to do non-linear dimension reduction:
- it provides flexible mappings in both directions,
- learning time scales linearly (or better) with the number of training cases,
- and the final encoding model is compact and fast.
The catch! Optimizing the weights with backpropagation proved very difficult, and until 2006 there was no good way around it:
- with large initial weights, training falls into poor local minima;
- with small initial weights, backpropagation suffers from vanishing gradients.
By applying new methods to the autoencoder, they built the first successful deep autoencoder:
- pre-train the network layer by layer,
- or initialize the weights carefully, as in Echo-State Nets.
Reconstruction (well, strictly speaking dimension reduction...) began to draw real attention.
12
Before we can understand the structure of the deep autoencoder, there are a few things to cover first...
Let's understand these and then move on!
RBMs and DBNs, and greedy layer-wise training
13
A variant* of the Boltzmann machine: a generative stochastic artificial neural network that can learn a probability distribution over its inputs. It consists of visible units and hidden units, and every unit is binary.
Unlike a general Boltzmann machine, it has a single layer of hidden units and no connections between hidden units. That is, given the visible units, the hidden-unit activations are mutually independent.
Because the graph is bipartite, the reverse also holds (given the hidden units, the visible-unit activations are mutually independent).
As a result, once the visible units are given, thermal equilibrium is reached in a single step.
Consequently, inference is far simpler than in a general Boltzmann machine, which is the key point.
Depending on how it is set up, it can be trained in a supervised or unsupervised way and used for dimensionality reduction, classification, collaborative filtering, feature learning, topic modeling, and more.
RBM, Restricted Boltzmann machine
*Boltzmann machine: an energy-based model (EBM), essentially a Markov network with hidden units added. It defines a probability distribution through an energy function and learns via the log-likelihood gradient; since the gradient has to be estimated stochastically with MCMC sampling, the computation is heavy. For a proper understanding, reading reference 1 (Korean) and reference 2 (English) carefully is recommended.
RBMs can also be used for supervised problems, treating y just like x, as in the unsupervised case.
14
RBM training
http://www.cs.toronto.edu/~rsalakhu/deeplearning/yoshua_icml2009.pdf
http://dsba.korea.ac.kr/wp/wp-content/seminar/Deep%20learning/RBM-2%20by%20H.S.Kim.pdf
Training requires minimizing the Negative Log-Likelihood (NLL). Writing out the stochastic gradient, it splits into a positive phase and a negative phase.
The positive phase can be computed from the conditional distribution p(h|v) given the input vector. The negative phase, however, involves p(v,h) and the energy E(v,h) over all possible configurations, which is intractable to compute exactly, so it is approximated by MCMC estimation with a Gibbs sampler.
Unlike an autoencoder, an RBM is therefore not trained with backpropagation (BP); the model is instead updated via Gibbs sampling, and in practice the MLE problem is solved approximately through CD-k (Contrastive Divergence) (Hinton '02).
(this term is intractable to compute exactly)
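To make the positive/negative-phase split concrete, here is a rough NumPy sketch of one CD-1 update for a binary RBM; the function signature and the single Gibbs step are illustrative assumptions, not code from the slides or from Hinton '02.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.01, rng=np.random):
    """One Contrastive Divergence (CD-1) step for a binary RBM.
    v0: (batch, n_visible) data; W: (n_visible, n_hidden); b, c: biases."""
    # Positive phase: p(h|v0) is tractable because the hidden units are
    # mutually independent given the visible units.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random_sample(ph0.shape) < ph0).astype(v0.dtype)

    # Negative phase: the exact model expectation is intractable, so a single
    # Gibbs step (v0 -> h0 -> v1 -> h1) stands in for it.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random_sample(pv1.shape) < pv1).astype(v0.dtype)
    ph1 = sigmoid(v1 @ W + c)

    # CD-1 gradient estimate: <v h>_data - <v h>_model.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```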
15
DBN, Deep Belief Network
A probabilistic generative model built by stacking many RBMs.
The top two hidden layers form an undirected associative memory; the layers below form a directed graph.
For this reason it is also sometimes described as a stacked RBM.
Its biggest strength is layer-by-layer learning: each layer learns higher-level features on top of the layer below. This is called greedy layer-wise training.
Each layer is trained as an unsupervised RBM (Gibbs sampling plus minimizing a KL divergence), and the transpose of the learned weights is used as the inference weights.
Since one layer's output becomes the next layer's input data, each layer being optimal on its own does not guarantee that the whole stack is optimal; this is a known drawback.
The weights from layer-wise pretraining are used to initialize the full network, which is then fine-tuned with backpropagation. Fine-tuning can be done in a supervised way.
A fast learning algorithm for deep belief nets, Hinton et al. Neural Computation 2006
http://www.cs.toronto.edu/~rsalakhu/deeplearning/yoshua_icml2009.pdf
(Figure: the visible-unit and hidden-unit dimensions of each RBM in the stack)
16
Reducing the Dimensionality of Data with Neural Networks, Hinton & Salakhutdinov, Science, 2006
A Deep Autoencoder built on the DBN structure with greedy layer-wise pre-training
Pretraining: a stack of four RBM-based encoder/decoder blocks is trained (which is why it is also called a Stacked AE).
Each RBM in the stack acts as one layer of feature detectors; higher layers detect progressively more abstract features.
Rather than minimizing a single loss for the whole network, the loss is minimized layer by layer (how the loss is computed comes later...).
Unrolling: the transpose of each encoder weight matrix is used as the weight of the matching decoder layer.
Encoder: initialized not with random weights but with the pretraining result.
Decoder: uses the transposed encoder weight matrices as its weights.
Fine-tuning: starting from the weights above, backprop fine-tunes the whole network toward the optimum.
At last we arrive at the Deep Autoencoder... (a code sketch of the recipe follows below)
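To make the pretrain / unroll / fine-tune recipe concrete, here is a compact Keras sketch. One substitution to note: the paper pretrains each layer as an RBM, whereas this sketch pretrains each layer as a shallow autoencoder (the "Stacked AE" reading mentioned above); the 784-1000-500-250-30 layer sizes follow the MNIST setup of Hinton & Salakhutdinov (2006), and everything else is an illustrative assumption.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

sizes = [784, 1000, 500, 250, 30]

def pretrain_layers(X, sizes, epochs=5):
    """Greedily train one shallow autoencoder per layer; return encoder weights."""
    weights, data = [], X
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        inp = keras.Input(shape=(n_in,))
        code = layers.Dense(n_out, activation="sigmoid")(inp)
        recon = layers.Dense(n_in, activation="sigmoid")(code)
        ae = keras.Model(inp, recon)
        ae.compile(optimizer="adam", loss="mse")
        ae.fit(data, data, epochs=epochs, batch_size=256, verbose=0)  # per-layer loss
        W, b = ae.layers[1].get_weights()                 # this layer's encoder weights
        weights.append((W, b))
        data = keras.Model(inp, code).predict(data, verbose=0)  # feed codes upward
    return weights

def unroll(weights):
    """Unrolling: encoder from the pretrained weights, decoder from their transposes."""
    inp = keras.Input(shape=(sizes[0],))
    h = inp
    for W, b in weights:                                  # encoder
        lyr = layers.Dense(W.shape[1], activation="sigmoid")
        h = lyr(h)
        lyr.set_weights([W, b])
    for W, b in reversed(weights):                        # decoder (transposed weights)
        lyr = layers.Dense(W.shape[0], activation="sigmoid")
        h = lyr(h)
        lyr.set_weights([W.T, np.zeros(W.shape[0])])
    return keras.Model(inp, h)

# X = ...  # e.g. flattened MNIST images scaled to [0, 1]
# deep_ae = unroll(pretrain_layers(X, sizes))
# deep_ae.compile(optimizer="adam", loss="mse")           # fine-tuning: whole network
# deep_ae.fit(X, X, epochs=10, batch_size=256)
```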
17
Note: the code vector here has only 30 dimensions, far smaller than the original input, yet the reconstructions hold up; PCA with 30 linear units visibly misses detail that the deep autoencoder captures.
Reconstruction comparison: original vs. Deep Autoencoder vs. PCA
Original data
Deep Autoencoder
PCA
This was a turning point: having previously struggled even to match PCA at dimension reduction, the deep autoencoder now began to draw attention for its reconstruction quality!
18
Whereas the classic autoencoder is a discriminative model that merely represents its input, the RBM learns the statistical distribution of the data, so it has the character of a generative model. A deep autoencoder built on RBMs therefore inherits some of that generative character as well.
Note, however, that it keeps a deterministic side: it computes h = s(Wx+b) directly rather than going through the probability p(h=1) = s(Wx+b).
Deep Autoencoder: somewhere between Generative and Discriminative
http://sanghyukchun.github.io/61/
Plain autoencoder
RBM
Deep Autoencoder
(Stacked Autoencoder)
(it can be viewed as having a bit of both characters...)
19
Denoising autoencoder (2008)
Extracting and Composing Robust Features with Denoising Autoencoders (P. Vincent, H.
Larochelle, Y. Bengio and P.A. Manzagol, ICML'08, pages 1096-1103, ACM, 2008)
Sparse autoencoder (2008)
Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition (K.
Kavukcuoglu, M. Ranzato, and Y. LeCun, CBLL-TR-2008-12-01, NYU, 2008)
Sparse deep belief net model for visual area V2 (H. Lee, C. Ekanadham, and A.Y. Ng.,
NIPS 20, 2008)
Stacked Denoising autoencoder (2010)
Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network
with a Local Denoising Criterion (P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio and P.A.
Manzagol, J. Mach. Learn. Res. 11 3371-3408, 2010)
Variational autoencoder (2013, 2014)
Auto-encoding variational Bayes (D. P. Kingma and M. Welling. arXiv preprint
arXiv:1312.6114, 2013)
Stochastic backpropagation and approximate inference in deep generative models
(Rezende, Danilo Jimenez, Mohamed, Shakir, and Wierstra, Daan. arXiv preprint
arXiv:1401.4082, 2014)
2006
1989
2008
Autoencoder variants
2010
2013
Besides these, many more variants exist, such as the Contractive Autoencoder and the Multimodal Autoencoder.
(a generative model in the true sense)
(reconstructs the input even when the input is corrupted)
(the encoding is sparse: only a small part of the code is active for a given input)
(For lack of preparation time, only the denoising autoencoder gets a brief look below...)
20
Train on a partially destroyed input while keeping the original, clean input as the target, so that instead of learning the identity function the model becomes robust to noise.
The approach: add stochastic corruption to the input.
With probability v, randomly set input elements to 0 (v ~ 0.5).
This is just one choice; other kinds of corruption work too.
The goal: reconstruct X from the corrupted input X̃ (tilde X).
The idea is that this forces the model to capture the input distribution better.
The loss function compares the output against the original, noise-free input x.
As unsupervised pretraining it performs as well as or better than RBMs.
Denoising autoencoder
/zukun/icml2012-tutorial-representationlearning
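A minimal Keras sketch of that setup: zero out input elements with probability v, but keep the clean input as the training target. The network sizes and the loss choice are illustrative assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

d, p, v = 784, 128, 0.5               # v: corruption probability (slide: v ~ 0.5)

def corrupt(X, v, rng=np.random):
    """Stochastic corruption: set each input element to 0 with probability v."""
    return X * (rng.random_sample(X.shape) > v)

inp = keras.Input(shape=(d,))
code = layers.Dense(p, activation="relu")(inp)
recon = layers.Dense(d, activation="sigmoid")(code)
dae = keras.Model(inp, recon)
dae.compile(optimizer="adam", loss="binary_crossentropy")

# X = ...  # clean data scaled to [0, 1]
# dae.fit(corrupt(X, v), X, epochs=10, batch_size=256)   # corrupted in, clean target
```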
21
So how are autoencoders mainly used these days?
Dimension reduction? Reconstruction? Pretraining?
(the three overlap quite a bit, admittedly)
They draw the most attention as pretraining for supervised problems.
22
The Effect of Unsupervised Pretraining (1)
Bengio et al. extended RBMs and DBNs to handle continuous-valued inputs and ran the experiments that laid the groundwork for applying DBNs to supervised learning tasks.
The results confirmed that greedy unsupervised layer-wise training helps deep networks both with optimization and with generalization; the comparisons covered not only DBNs but also plain deep nets and shallow nets.
The studies that followed use autoencoders in this same role and examine how they affect performance on supervised learning tasks.
Greedy Layer-Wise Training for Deep Networks, Bengio et al. NIPS 2006
*Some papers refer to the autoencoder as an AutoAssociator (AA).
*
23
/zukun/icml2012-tutorial-representationlearning
Why does unsupervised pre-training help deep learning?
(Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. JMLR, 11:625-660, February 2010)
The takeaway: pretraining effectively places the initial weights near a good optimum!
Here the experiments pretrain with an SDAE (Stacked Denoising Autoencoder) before supervised training.
The Effect of Unsupervised Pretraining (2)
24
25
https://ko.wikipedia.org/wiki/유클리드_기하학
http://astronaut94.tistory.com/6
Euclidean Space and Manifolds
Euclidean space: the Euclidean plane is 2-dimensional; extending the same ideas to 3 (and more) dimensions gives Euclidean geometry. A Euclidean space rests on the following five postulates:
- Any two distinct points can be joined by a straight line.
- Any straight line segment can be extended indefinitely in a straight line.
- Given any center and any radius, a circle can be drawn.
- All right angles are equal to one another.
- Parallel postulate: if a straight line crossing two straight lines makes the interior angles on one side sum to less than two right angles (180°), then the two lines, extended indefinitely, meet on the side where the angles sum to less than two right angles.
Manifold: a topological space that is locally Euclidean, i.e. a space that need not be Euclidean as a whole but looks Euclidean in a small enough neighborhood.
In a non-Euclidean space some postulates, such as the fifth, fail globally (think of the surface of the Earth), but they do hold if we look only at a sufficiently small patch.
A space in which, locally, all five postulates hold in this way is called a manifold.
26
(Two diagrams, each mapping Input -> Code -> Target, with Target = input)
Bottleneck code: low-dimensional, typically dense, distributed representation
Overcomplete code: high-dimensional, always sparse, distributed representation
Bottleneck code vs. Overcomplete code
/danieljohnlewis/piotr-mirowski-review-autoencoders-deep-learning-ciuuk14
The code can be sized in the two ways shown above. An ordinary autoencoder uses a low-dimensional code, so it is said to have a bottleneck code; sparse coding and the sparse autoencoder, by contrast, use an overcomplete code (a sketch of the overcomplete, sparse case follows below).
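For contrast with the bottleneck autoencoders above, a minimal sketch of an overcomplete, sparse code in Keras: the code is wider than the input, and an L1 activity penalty keeps it sparse. The dimensions and penalty weight are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

d, p = 784, 1024                      # code dim p > input dim d (overcomplete)

inp = keras.Input(shape=(d,))
code = layers.Dense(p, activation="relu",
                    activity_regularizer=regularizers.l1(1e-5))(inp)  # sparsity penalty
recon = layers.Dense(d, activation="sigmoid")(code)

sparse_ae = keras.Model(inp, recon)
sparse_ae.compile(optimizer="adam", loss="binary_crossentropy")
# sparse_ae.fit(X, X, epochs=10, batch_size=256)
```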
27
http://www.cs.toronto.edu/~rsalakhu/deeplearning/yoshua_icml2009.pdf
28
The loss function used in pretraining (an example)
http://stats.stackexchange.com/questions/119959/what-does-pre-training-mean-in-deep-autoencoder
29
Why does unsupervised pretraining help?
Why does unsupervised pre-training help deep learning? Dumitru Erhan et al, JMLR 2010
Regularization hypothesis
- pretraining pulls the model toward modeling P(x) well;
- representations that capture P(X) well also make P(y|X) easier to learn.
Optimization hypothesis
- unsupervised pretraining starts the network from an initialization close to a good optimum,
- reaching lower local minima than random initialization can,
- thanks to training the layers one at a time.