4
Vanilla Autoencoder
•What is it?
Reconstruct high-dimensional data using a neural network model with a narrow
bottleneck layer.
The bottleneck layer captures the compressed latent coding, so the nice by-product
is dimension reduction.
The low-dimensional representation can be used as the representation of the data
in various applications, e.g., image retrieval, data compression …
!"#"
ℒ
Latent code: the compressed low
dimensional representation of the
input data
5
Vanilla Autoencoder
•How it works?
!"#"
ℒ
decoder/generator
Z àX
encoder
X àZ
InputReconstructed Input
Ideally the input and reconstruction are identical
The encoder network is for dimension reduction, just like PCA
6
Vanilla Autoencoder
•Training
(Figure: a fully connected autoencoder with a 6-unit input layer, a 4-unit hidden layer, and a 6-unit output layer.)
Given $n$ data samples: $\mathcal{L} = \frac{1}{n}\sum_{i=1}^{n}\mathcal{L}_i$
•The hidden units are usually fewer than the number of inputs
•Dimension reduction --- representation learning
The distance between the input and its reconstruction can be measured by the Mean Squared Error (MSE):
$\mathcal{L}_i = \frac{1}{m}\sum_{j=1}^{m}\big(x_{ij} - \hat{x}_{ij}\big)^2$
where $m$ is the number of variables
•It is trying to learn an approximation to the identity function, so that the input is compressed into the latent features, discovering interesting structure in the data.
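A minimal sketch of this training setup in PyTorch; the layer sizes, optimizer settings, and the random stand-in batch are illustrative assumptions, not taken from the slides:

```python
# Minimal vanilla autoencoder sketch trained with the MSE loss above.
# Layer sizes, optimizer settings, and the stand-in batch are assumptions.
import torch
import torch.nn as nn

class VanillaAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        # Encoder: x -> z (dimension reduction through a narrow bottleneck)
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        # Decoder: z -> x_hat (reconstruction)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)          # latent code
        return self.decoder(z), z

model = VanillaAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)              # stand-in batch, e.g., flattened 28x28 images
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)   # (1/m) sum_j (x_j - x_hat_j)^2, averaged over the batch
loss.backward()
optimizer.step()
```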
8
Vanilla Autoencoder
•Example:
•Compress MNIST (28x28x1) to the latent code with only 2 variables
The compression is lossy.
9
Vanilla Autoencoder
•Power of Latent Representation
•t-SNE visualization on MNIST: PCA vs. Autoencoder
(Figure: PCA vs. Autoencoder -- the autoencoder is the winner.)
2006 Science paper by Hinton and Salakhutdinov
10
Vanilla Autoencoder
•Discussion
•The hidden layer is overcomplete if it is larger than the input layer
11
Vanilla Autoencoder
•Discussion
•The hidden layer is overcomplete if it is larger than the input layer
•No compression
•No guarantee that the hidden units extract meaningful features
14
Denoising Autoencoder
•Architecture
(Figure: the same fully connected autoencoder, but the input units are randomly corrupted (dropout) before reaching the hidden layer; the output layer reconstructs the clean input.)
Applying dropout between the input and the first hidden layer
•Improve the robustness
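A sketch of the corruption step; the layer sizes and the dropout rate are illustrative assumptions:

```python
# Denoising autoencoder sketch: corrupt the input with dropout, reconstruct the CLEAN input.
# Layer sizes and the dropout rate are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
corrupt = nn.Dropout(p=0.3)       # randomly zero 30% of the input units (dropout on the input layer)

x = torch.rand(64, 784)           # clean input batch
x_noisy = corrupt(x)              # corrupted input fed to the encoder
x_hat = decoder(encoder(x_noisy))
loss = nn.functional.mse_loss(x_hat, x)   # the reconstruction target is the clean input
```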
15
Denoising Autoencoder
•Feature Visualization
Visualizing the learned features
(Figure: the network diagram with a single hidden neuron highlighted.)
One neuron == one feature extractor: reshape the neuron's weights into an image to visualize the learned feature.
16
Denoising Autoencoder
•Denoising Autoencoder & Dropout
The denoising autoencoder was proposed in 2008, 4 years before the dropout paper (Hinton et al., 2012).
A denoising autoencoder can be seen as applying dropout between the input and the first layer.
A denoising autoencoder can be seen as one type of data augmentation on the input.
26
Contractive Autoencoder
•Why?
•Denoising Autoencoder and Sparse Autoencoder overcome the overcomplete problem via the input and the hidden layer, respectively.
•Could we add an explicit term in the loss to avoid uninteresting features?
We want features that ONLY reflect variations observed in the training set
https://www.youtube.com/watch?v=79sYlJ8Cvlc
27
Contractive Autoencoder
•How
•Penalize the representation for being too sensitive to the input
•Improve the robustness to small perturbations
•Measure the sensitivity by the Frobenius norm of the Jacobian matrix of the encoder activations
30
Contractive Autoencoder
•New Loss
$\mathcal{L} = \underbrace{\|x - \hat{x}\|^2}_{\text{reconstruction}} + \lambda \underbrace{\|J_f(x)\|_F^2}_{\text{new regularization}}$, where $J_f(x) = \partial f(x)/\partial x$ is the Jacobian of the encoder.
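A sketch of this loss for a single sigmoid encoder layer, where the Frobenius norm of the Jacobian has a closed form; the layer sizes and λ are illustrative assumptions:

```python
# Contractive autoencoder loss sketch: reconstruction + lambda * ||J_f(x)||_F^2,
# using the closed-form Jacobian norm of a sigmoid encoder layer.
# Layer sizes and lambda are illustrative assumptions.
import torch
import torch.nn as nn

enc = nn.Linear(784, 64)
dec = nn.Linear(64, 784)
lam = 1e-4                                    # weight of the contractive penalty

x = torch.rand(32, 784)
h = torch.sigmoid(enc(x))                     # encoder activations f(x), shape (batch, 64)
x_hat = dec(h)

# For h_j = sigmoid(w_j . x + b_j):  dh_j/dx_i = h_j (1 - h_j) W_ji, so
# ||J_f(x)||_F^2 = sum_j (h_j (1 - h_j))^2 * sum_i W_ji^2
dh_sq = (h * (1 - h)) ** 2                    # (batch, 64)
w_sq = (enc.weight ** 2).sum(dim=1)           # (64,)
contractive = (dh_sq * w_sq).sum(dim=1).mean()

loss = nn.functional.mse_loss(x_hat, x) + lam * contractive
```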
31
Contractive Autoencoder
•vs. Denoising Autoencoder
•Advantages
•CAE can better model the distribution of raw data
•Disadvantages
•DAE is easier to implement
•CAE needs second-order optimization (conjugate gradient, L-BFGS)
33
Stacked Autoencoder
•Start from Autoencoder: Learn FeatureFrom Input
(Figure: autoencoder 1 -- input → hidden 1 → reconstructed input. Red lines/color: trainable weights; black lines: fixed, non-trainable weights. Unsupervised: Encoder + Decoder.)
The feature extractor for the input data
34
Stacked Autoencoder
•2nd Stage: Learn the 2nd-Level Feature From the 1st-Level Feature
(Figure: a second encoder maps the hidden-1 features to hidden 2, and a decoder reconstructs from them; only the new (red) weights are trained, earlier (black) weights stay fixed. Unsupervised: Encoder + Encoder + Decoder.)
The feature extractor for the first feature extractor
35
Stacked Autoencoder
•3rd Stage: Learn the 3rd-Level Feature From the 2nd-Level Feature
(Figure: a third encoder maps the hidden-2 features to hidden 3, and a decoder reconstructs from them; only the new (red) weights are trained, earlier (black) weights stay fixed. Unsupervised: Encoder + Encoder + Encoder + Decoder.)
The feature extractor for the second feature extractor
36
Stacked Autoencoder
•4th Stage: Learn the 4th-Level Feature From the 3rd-Level Feature
(Figure: a fourth encoder maps the hidden-3 features to hidden 4, and a decoder reconstructs from them; only the new (red) weights are trained, earlier (black) weights stay fixed. Unsupervised: Encoder + Encoder + Encoder + Encoder + Decoder.)
The feature extractor for the third feature extractor
37
Stacked Autoencoder
•Use the Learned Feature Extractor for Downstream Tasks
(Figure: an output unit for classification is added on top of hidden 4; the new (red) weights are trained while the pretrained (black) encoder weights stay fixed. Supervised.)
Learn to classify the input data by using the labels and high-level features
38
Stacked Autoencoder
•Fine-tuning
(Figure: the same network -- input, hidden 1-4, classification output -- now with all weights trainable (red). Supervised.)
Fine-tune the entire model for classification
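A compact sketch of this greedy layer-wise procedure followed by supervised fine-tuning; the layer sizes, epochs, and stand-in data are illustrative assumptions, and here each stage reconstructs its own input features (one common variant):

```python
# Stacked autoencoder sketch: pretrain each encoder layer on the features produced by
# the previous (frozen) layers, then stack them with a classifier head and fine-tune.
# Layer sizes, epochs, and the stand-in data are illustrative assumptions.
import torch
import torch.nn as nn

sizes = [784, 256, 64, 16]                           # input -> hidden 1 -> hidden 2 -> hidden 3
encoders = [nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)]

def pretrain_layer(encoder, data, epochs=5):
    """Unsupervised stage: train one encoder (plus a throwaway decoder) to reconstruct its input."""
    decoder = nn.Linear(encoder.out_features, encoder.in_features)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        recon = decoder(torch.relu(encoder(data)))
        loss = nn.functional.mse_loss(recon, data)
        opt.zero_grad()
        loss.backward()
        opt.step()

x = torch.rand(256, 784)                             # unlabeled stand-in data
features = x
for enc in encoders:                                 # stage k: learn level-k features from level-(k-1) features
    pretrain_layer(enc, features)
    features = torch.relu(enc(features)).detach()    # freeze: fixed input for the next stage

# Supervised stage: stack the pretrained encoders, add a classifier head, fine-tune end to end.
layers = []
for enc in encoders:
    layers += [enc, nn.ReLU()]
classifier = nn.Sequential(*layers, nn.Linear(sizes[-1], 10))
```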
41
Before we start
•Question?
•Are the previous autoencoders generative models?
•Recap: We want to learn a probability distribution $P(x)$ over $x$
o Generation (sampling): $x_{new} \sim P(x)$
(NO. The compressed latent codes of autoencoders do not follow a known prior distribution, so an autoencoder cannot learn to represent the data distribution.)
o Density estimation: $P(x)$ is high if $x$ looks like real data
(NO)
o Unsupervised representation learning: discovering the underlying structure of the data distribution (e.g., ears, nose, eyes …)
(YES. Autoencoders learn the feature representation.)
45
Variational Autoencoder
•The neural net perspective
•A variational autoencoder consists of an encoder, a decoder, and a loss function
Auto-Encoding Variational Bayes. Diederik P. Kingma, Max Welling. ICLR 2014
47
Variational Autoencoder
•Loss function
$\mathcal{L}_{VAE} = \underbrace{\|x - \hat{x}\|^2}_{\text{reconstruction (can be represented by MSE)}} + \underbrace{KL\big(q(z|x)\,\|\,p(z)\big)}_{\text{regularization}}$
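A sketch of this two-term loss with a Gaussian posterior, using the closed-form KL against N(0, I); the shapes and the `reduction` choice are illustrative assumptions:

```python
# VAE loss sketch: reconstruction (represented here by MSE) + KL( N(mu, sigma^2) || N(0, I) ).
# Shapes and the reduction choice are illustrative assumptions.
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")                 # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # regularization term (closed form)
    return recon + kl
```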
48
Variational Autoencoder
•Why KL(Q||P) and not KL(P||Q)?
•Which direction of the KL divergence to use?
•Some applications require an approximation that usually places high probability anywhere that the true distribution places high probability: the left one, minimizing KL(P||Q)
•VAE requires an approximation that rarely places high probability anywhere that the true distribution places low probability: the right one, minimizing KL(Q||P)
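A small numerical illustration of the two directions (not from the slides; the bimodal P and unimodal Q are made-up distributions):

```python
# Compare KL(Q||P) and KL(P||Q) when P is bimodal and Q is a unimodal Gaussian on one mode.
# The distributions and grid are made-up for illustration.
import numpy as np

xs = np.linspace(-6.0, 6.0, 1001)
dx = xs[1] - xs[0]

def normal(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p = 0.5 * normal(xs, -2.0, 0.5) + 0.5 * normal(xs, 2.0, 0.5)   # bimodal "true" distribution P
q = normal(xs, 2.0, 0.5)                                       # unimodal approximation Q on one mode

kl_qp = np.sum(q * np.log(q / p)) * dx   # KL(Q||P): small -- Q avoids regions where P is low
kl_pq = np.sum(p * np.log(p / q)) * dx   # KL(P||Q): large -- Q misses one of P's modes
print(f"KL(Q||P) = {kl_qp:.2f}, KL(P||Q) = {kl_pq:.2f}")
```

KL(Q||P) stays small because Q only needs to avoid placing mass where P is low, while KL(P||Q) blows up because Q assigns near-zero probability to P's second mode.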
49
Variational Autoencoder
•Reparameterization Trick
(Figure: the encoder maps the input x1…x6 to hidden units h1…h6, which predict means μ1…μ4 and standard deviations σ1…σ4; each latent variable is resampled as zi ~ N(μi, σi) and the decoder reconstructs the input.)
1. Encode the input
2. Predict the means
3. Predict the standard deviations
4. Use the predicted means and standard deviations to sample each latent variable individually
5. Reconstruct the input
The latent variables are independent
50
Variational Autoencoder
•Reparameterization Trick
•Sampling z ~ N(μ, σ) is not differentiable
•To make sampling z differentiable:
•z = μ + σ * ϵ, where ϵ ~ N(0, 1)
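A sketch of the trick in isolation; the shapes are illustrative assumptions, and log-variance is predicted instead of σ for numerical stability:

```python
# Reparameterization trick sketch: z = mu + sigma * eps with eps ~ N(0, 1),
# so the sample stays differentiable w.r.t. mu and sigma. Shapes are assumptions.
import torch

mu = torch.zeros(64, 4, requires_grad=True)       # predicted means for 4 latent variables
logvar = torch.zeros(64, 4, requires_grad=True)   # predicted log-variances (log sigma^2)

eps = torch.randn_like(mu)               # eps ~ N(0, 1), drawn outside the computation graph
z = mu + torch.exp(0.5 * logvar) * eps   # z ~ N(mu, sigma^2), differentiable w.r.t. mu and logvar
z.sum().backward()                       # gradients now flow into mu and logvar
```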
55
Variational Autoencoder
•Problem Definition
Goal: Given $X = \{x_1, x_2, x_3, \dots, x_n\}$, find $P(X)$ to represent $X$
How: It is difficult to model $P(X)$ directly, so alternatively we can write
$P(X) = \int_z P(X|z)\,P(z)\,dz$
where $P(z) = \mathcal{N}(0, 1)$ is a prior/known distribution,
i.e., sample $X$ from $z$
56
Variational Autoencoder
•The probability model perspective
•P(X) is hard to model
•Alternatively, we learn the joint distribution of X and Z
$P(X) = \int_z P(X|z)\,P(z)\,dz$
$P(X) = \int_z P(X, z)\,dz$
$P(X, z) = P(z)\,P(X|z)$
59
Variational Autoencoder
•Monte Carlo?
•n might need to be extremely large before we have an accurate estimate of P(X)
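A sketch of the naive estimator this refers to; the decoder, dimensions, and Gaussian observation model are illustrative assumptions. Most sampled z contribute almost nothing to the sum, which is why n must be huge:

```python
# Naive Monte Carlo estimate of P(x) = E_{z ~ N(0,I)}[ P(x|z) ].
# The decoder, dimensions, and Gaussian observation model are illustrative assumptions.
import torch

decoder = torch.nn.Linear(8, 784)            # stand-in decoder: 8-dim z -> 784-dim x

def log_p_x_given_z(x, z, sigma=1.0):
    """Gaussian observation model around the decoder output (normalizing constant omitted)."""
    x_hat = decoder(z)
    return -0.5 * ((x - x_hat) ** 2).sum(dim=-1) / sigma**2

x = torch.rand(784)
n = 100_000
z = torch.randn(n, 8)                        # z_i ~ N(0, I), sampled from the prior
log_p = log_p_x_given_z(x, z)                # log P(x | z_i) for every sample
log_p_x = torch.logsumexp(log_p, dim=0) - torch.log(torch.tensor(float(n)))   # log of (1/n) sum_i P(x|z_i)
```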
60
Variational Autoencoder
•Monte Carlo?
•Pixel difference is different from perceptual difference
61
Variational Autoencoder
•Monte Carlo?
•VAE alters the sampling procedure
63
Variational Autoencoder
•Variational Inference
•VI turns inference into optimization
64
Variational Autoencoder
•Setting up the objective
•Maximize P(X)
•Set Q(z) to be an arbitrary distribution
$P(z|X) = \dfrac{P(X|z)\,P(z)}{P(X)}$
Goal: maximize $\log P(X)$
65
Variational Autoencoder
•Setting up the objective
$\log P(X) - \underbrace{KL\big(Q(z|X)\,\|\,P(z|X)\big)}_{\text{encoder vs. ideal; difficult to compute}} = \underbrace{\mathbb{E}_{z\sim Q}\big[\log P(X|z)\big]}_{\text{reconstruction/decoder}} - \underbrace{KL\big(Q(z|X)\,\|\,P(z)\big)}_{\text{KLD}}$
Goal: maximize $\log P(X)$; the goal becomes: optimize the right-hand side.
$\mathcal{L}_{total} = \mathcal{L}_{MSE} + \mathcal{L}_{KL}$
$Q(z|x)$: inference (encoder); $P(x|z)$: generation (decoder)
66
Variational Autoencoder
•Setting up the objective: ELBO
$P(z|X) = \dfrac{P(X, z)}{P(X)}$  (the ideal posterior, approximated by the encoder $Q(z|X)$)
(The training loss is the negative ELBO.)
67
Variational Autoencoder
•Setting up the objective: ELBO
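Since the derivation on this slide did not survive extraction, here is the standard identity it builds toward, written in the slides' notation (a reconstruction, not a verbatim copy of the slide):

```latex
\begin{aligned}
\mathrm{KL}\big(Q(z|X)\,\|\,P(z|X)\big)
  &= \mathbb{E}_{z\sim Q}\big[\log Q(z|X) - \log P(z|X)\big] \\
  &= \mathbb{E}_{z\sim Q}\big[\log Q(z|X) - \log P(X|z) - \log P(z)\big] + \log P(X)
     \quad\text{(Bayes' rule)} \\[4pt]
\Rightarrow\;
\log P(X) - \mathrm{KL}\big(Q(z|X)\,\|\,P(z|X)\big)
  &= \underbrace{\mathbb{E}_{z\sim Q}\big[\log P(X|z)\big]
     - \mathrm{KL}\big(Q(z|X)\,\|\,P(z)\big)}_{\text{ELBO}}
\end{aligned}
```

Because the KL term on the left is non-negative, maximizing the ELBO maximizes a lower bound on $\log P(X)$.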
72
Variational Autoencoder
•VAE is a Generative Model
$Q(z|X)$ is not $\mathcal{N}(0, 1)$
Can we input $\mathcal{N}(0, 1)$ to the decoder for sampling?
YES: the goal of the KL term is to push $Q(z|X)$ toward $\mathcal{N}(0, 1)$
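A sampling sketch under that assumption; the decoder architecture and latent size are illustrative stand-ins for a trained model:

```python
# Generate new samples from a (trained) VAE by feeding prior samples to the decoder.
# The decoder architecture and latent size are illustrative stand-ins for a trained model.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())

z = torch.randn(16, 2)       # z ~ N(0, 1): no encoder or input image needed
samples = decoder(z)         # 16 generated (flattened 28x28) images, assuming a trained decoder
```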
73
Variational Autoencoder
•VAE vs. Autoencoder
•VAE: distribution representation, p(z|x) is a distribution
•AE: feature representation, h = E(x) is deterministic
75
Summary: Take Home Message
•Autoencoders learn data representations in an unsupervised/self-supervised way.
•Autoencoders learn data representations but cannot model the data distribution $P(X)$.
•Different from the vanilla autoencoder, in a sparse autoencoder the number of hidden units
can be greater than the number of input variables.
•VAE
•…
•…
•…
•…
•…
•…