Architecture Design for Deep Neural Networks I


About This Presentation

ICME2019 Tutorial: Architecture Design for Deep Neural Networks I


Slide Content

Gao Huang
Assistant Professor
Department of Automation, Tsinghua University
Neural Architectures for Efficient Inference

OUTLINE
1. Macro-architecture innovations in ConvNets
2. Micro-architecture innovations in ConvNets
3. From static networks to dynamic networks

PART 1
MACRO-ARCH INNOVATIONS IN CNN

CONVOLUTIONAL NETWORKS
LeNet → AlexNet → VGG → Inception → ResNet → DenseNet

Main Ideas:
✓Convolution, local receptive fields, shared weights
✓Spatial subsampling
LENET [LECUN ET AL. 1998]

Main Ideas:
✓ReLU (Rectified Linear Unit)
✓Dropout
✓Local Response Normalization, Overlapping Pooling
✓Data Augmentation, Multiple GPUs
ALEXNET [KRIZHEVSKY ET AL. 2012]

Main Idea:
✓Skip connection: promotes gradient propagation
RESNET [HE ET AL. 2016]
Identity mappings promote gradient propagation.
(⊕ : element-wise addition)
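In code, the skip connection is just an element-wise addition. A minimal PyTorch sketch of a basic residual block (channel handling and BN/ReLU placement are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x (element-wise addition)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut promotes gradient flow
```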

Main Idea:
✓Dense connectivity: creates short paths in the network and encourages feature reuse.
(Figure: connectivity patterns of ResNet, GoogleNet, and FractalNet compared with dense connectivity.)
DENSENET [HUANG ET AL. 2017]

REDUNDANCY IN DEEP MODELS
(Figure: Input → low-level features → mid-level features → high-level features → classifier → prediction.)

REDUCING REDUNDANCY

DENSE CONNECTIVITY
(Figure: a dense block; C denotes channel-wise concatenation.)

DENSE AND SLIM
Each layer contributes k channels (k: growth rate).
(Figure: a slim dense block; C denotes channel-wise concatenation.)

Model      #Layers   #Parameters   Validation Error
ResNet     18        11.7M         30.43%
DenseNet   121       ??            ??
DENSE AND SLIM ARCHITECTURE

Model      #Layers   #Parameters   Validation Error
ResNet     18        11.7M         30.43%
DenseNet   121       8.0M          25.03%
DENSE AND SLIM ARCHITECTURE
More connections, less computation!
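The parameter counts above can be checked against torchvision's reference implementations (a quick sanity check, assuming torchvision is installed):

```python
import torchvision.models as models

def count_params(model):
    return sum(p.numel() for p in model.parameters())

print(f"ResNet-18:    {count_params(models.resnet18()) / 1e6:.1f}M")    # ~11.7M
print(f"DenseNet-121: {count_params(models.densenet121()) / 1e6:.1f}M") # ~8.0M
```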

(Figure: two plots of top-1 error (%) versus GFLOPs and versus #parameters (M) on ImageNet, comparing ResNet-34/50/101/152 with DenseNet-121/169/201/264 and DenseNet-232 (k=48); the DenseNet curve lies below the ResNet curve, i.e. lower error at equal compute and parameter budgets.)
RESULTS ON IMAGENET

THE LOSS SURFACE
https://www.cs.umd.edu/~tomg/projects/landscapes/
(Figure: loss-surface visualizations of VGG-56, VGG-110, ResNet-56, and DenseNet-121.)
Visualizing the loss landscape of neural nets. Li, Hao, et al. NIPS 2018.

Main Idea:
✓Multi-scale feature fusion: merge signals with different frequencies.
Interlinked CNN (Zhou et al, ISNN'15)
Neural Fabric (Saxena & Verbeek, NIPS'16)
MSDNet (Huang et al, ICLR'18)
HRNet (Sun et al, CVPR'19)
MULTI-SCALE NETWORKS

Main Idea:
✓Automatic architecture search using reinforcement learning, genetic/evolutionary algorithms, or differentiable approaches.
AutoML is a very active research field; see www.automl.org
Neural Architecture Search [Zoph and Le, 2017, and many others]

PART 2
MICRO-ARCH INNOVATIONS IN CNN

Main Idea:
✓Split convolution into multiple groups
Standard Convolution vs. Group Convolution:
a standard convolution connects all C_in input channels to all C_out output channels (C_in × C_out channel connections); a group convolution with G groups connects channels only within their group, giving (C_in × C_out) / G connections (see the code sketch after the list below).
Networks using Group Convolution:
✓AlexNet (Krizhevsky et al, NIPS'12)
✓ResNeXt (Xie et al, CVPR'17)
✓CondenseNet (Huang et al, CVPR'18)
✓ShuffleNet (Zhang et al, CVPR’18)
✓…
Group Convolution
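In PyTorch, group convolution is just the groups argument of nn.Conv2d. The sketch below (channel sizes are illustrative) confirms the G-fold parameter reduction:

```python
import torch.nn as nn

c_in, c_out, k, g = 128, 128, 3, 4

standard = nn.Conv2d(c_in, c_out, k, padding=1, bias=False)
grouped  = nn.Conv2d(c_in, c_out, k, padding=1, groups=g, bias=False)

params = lambda m: sum(p.numel() for p in m.parameters())
print(params(standard))  # c_in * c_out * k^2       = 147456
print(params(grouped))   # (c_in * c_out * k^2) / g =  36864
```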

Main Idea:
✓Split convolution into multiple groups, where each group contains a single channel (see the sketch below)
Networks using DSC:
✓Xception (Chollet, CVPR'17)
✓MobileNet (Howard et al, 2017)
✓MobileNet V2 (Sandler et al, CVPR'18)
✓ShuffleNet V2 (Ma et al, ECCV'18)
✓NasNet (Zoph et al, CVPR'18)
✓…
Depth-wise Separable Convolution (DSC)
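A minimal sketch of a depthwise separable convolution in PyTorch (module names are mine): a per-channel 3x3 convolution followed by a 1x1 pointwise convolution.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Depthwise: groups == in_channels, i.e. one channel per group.
        self.depthwise = nn.Conv2d(in_channels, in_channels, 3, padding=1,
                                   groups=in_channels, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```

The depthwise step is exactly the one-channel-per-group extreme of the group convolution described above.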

Main Idea:
✓Channel-wise attention: second-order operations
Squeeze and excitation network (Hu et al, CVPR’18)
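A minimal sketch of a squeeze-and-excitation block (the reduction ratio of 16 follows the paper; the module layout here is illustrative):

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel-wise attention: squeeze (global pooling), excite (two FC
    layers), then rescale each channel of the input."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # second-order: features reweighted by their own statistics
```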

Main Idea:
✓Increase receptive field via filter dilation
Dilated convolution (Yu & Koltun, ICLR'16)
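In PyTorch, dilation is a single argument; with dilation=2, a 3x3 kernel covers a 5x5 receptive field at no extra parameter cost (an illustrative snippet):

```python
import torch.nn as nn

# padding = dilation keeps the spatial size unchanged for a 3x3 kernel
conv = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)
```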

Main Idea:
✓Learn the offset field for convolutional filters
Deformable convolution (Dai et al, ICCV'17)
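torchvision ships an implementation (torchvision.ops.DeformConv2d). A sketch in which a regular convolution predicts the offset field that steers the deformable convolution, as in the paper; the module layout and names are illustrative:

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels, k=3):
        super().__init__()
        # The offset field has 2 values (x, y) per kernel position.
        self.offset_conv = nn.Conv2d(in_channels, 2 * k * k, k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_channels, out_channels, k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_conv(x)        # learned offsets, one pair per tap
        return self.deform_conv(x, offsets)  # sampling locations follow the offsets
```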

PART 3
DYNAMIC NETWORKS

DEVELOPMENT OF DEEP LEARNING
(Figure: ImageNet top-1 accuracy by model and year: AlexNet 57.0 (2012), GoogleNet 68.7 (2014), VGG 70.5 (2014), ResNet-152 77.8 (2015), DenseNet-264 79.6 (2017).)

ACCURACY-TIME TRADEOFF

*Photo Courtesy of Pixel Addict (CC BY-ND 2.0)
BIGGER IS BETTER
Bigger models are needed for those noncanonical images.

*Photo Courtesy of Willian Doyle (CC BY-ND 2.0)
BIGGER IS BETTER

Why do we use the same expensive model for all images?

Can we use small & cheap models for easy images, and big & expensive models for hard ones?

A NAIVE IDEA OF ADAPTIVE EVALUATION
(Figure: a cascade of models, AlexNet → Inception → ResNet; an "easy" horse image is recognized by the cheapest model, while a "hard" horse image is passed on to the most expensive one.)

CHALLENGE: LACK OF COARSE-LEVEL FEATURES
Classifiers only work well on coarse-scale feature maps.
Nearly all computation has been done before getting a coarse feature.
(Figure: Input → down-sampling → fine-level features → down-sampling → mid-level features → down-sampling → coarse-level features, with a linear-output classifier attached at each stage.)

SOLUTION: MULTI-SCALE ARCHITECTURE
(Figure: the architecture maintains fine-level, mid-level, and coarse-level features in parallel throughout the network.)

MULTI-SCALE FEATURES
Classifiers only operate on high-level features!
(Figure: a grid of fine-, mid-, and coarse-level features; Classifiers 1-4 are attached along the coarse scale, and a test input flows through the grid.)

MULTI-SCALE DENSENET
(Figure: the multi-scale DenseNet with Classifiers 1-4 attached to the coarsest scale at increasing depth; a test input flows through the network.)

MULTI-SCALE DENSENET
Early exit: classifiers are evaluated in sequence, and inference stops at the first one whose confidence exceeds a threshold.
(Figure: Classifier 1: cat 0.2, 0.2 ≱ threshold → continue; Classifier 2: cat 0.4, 0.4 ≱ threshold → continue; Classifier 3: cat 0.6, 0.6 > threshold → exit.)
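The exit rule above is a short loop. A hedged sketch, assuming blocks and classifiers are lists of network stages and their attached classifier heads (all names are illustrative; this is not the released MSDNet code):

```python
import torch

def adaptive_inference(blocks, classifiers, x, threshold=0.5):
    """Evaluate classifiers in sequence on a single input (batch size 1);
    exit at the first one whose top confidence exceeds the threshold."""
    prediction = None
    for block, classifier in zip(blocks, classifiers):
        x = block(x)                        # compute the next stage only when needed
        probs = torch.softmax(classifier(x), dim=1)
        confidence, prediction = probs.max(dim=1)
        if confidence.item() > threshold:   # e.g. 0.6 > 0.5: confident, stop here
            break                           # "easy" input exits early
    return prediction                       # "hard" input used the full network
```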

MULTI-SCALE DENSENET
Results: 2x-5x speedup over DenseNet


VISUALIZATION
(Figure: example test images. "Easy" images of the class "red wine" exit at the first classifier; "hard" images of the class "volcano" exit at the last classifier.)

MORE RESULTS AND DISCUSSIONS
Please refer to:
Multi-Scale Dense Networks for Resource Efficient Image Classification. ICLR 2018, Oral (acceptance rate 2.2%, rank 4/935).

ADAPTIVE INFERENCE IS A CHALLENGING PROBLEM
How to design proper network architectures?
How to effectively train dynamic networks?
How to efficiently perform dynamic evaluation?
Adaptive inference for object detection and segmentation?
Spatially adaptive and temporally adaptive inference?

THANK YOU!