Main Ideas:
✓Convolution, local receptive fields, shared weights
✓Spatial subsampling
LENET [LECUN ET AL. 1998]
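The LeNet ideas above (one shared filter sliding over local receptive fields, followed by spatial subsampling) can be sketched in a few lines of plain Python; this is a minimal illustration of the operations, not LeNet itself, and the function names are our own.

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (no padding): one shared kernel slides over
    the image, so every output position reuses the same weights
    (local receptive fields + weight sharing)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Dot product of the kernel with one local receptive field.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def subsample2x(fmap):
    """2x2 average pooling: the spatial subsampling step, halving each
    spatial dimension of the feature map."""
    return [[(fmap[i][j] + fmap[i][j+1] + fmap[i+1][j] + fmap[i+1][j+1]) / 4
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]
```

Because the same kernel is reused at every position, the parameter count is independent of image size, which is the key efficiency gain over fully connected layers.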
Main Ideas:
✓ReLU (Rectified Linear Unit)
✓Dropout
✓Local Response Normalization, Overlapping Pooling
✓Data Augmentation, Multiple GPUs
ALEXNET [KRIZHEVSKY ET AL. 2012]
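Two of the AlexNet ideas above are simple enough to sketch directly; a minimal pure-Python illustration (the inverted-dropout scaling shown here is the now-standard variant, used so no rescaling is needed at test time):

```python
import random

def relu(x):
    # ReLU: identity for positive inputs, zero otherwise; its non-saturating
    # gradient (1 for x > 0) speeds up training compared with tanh/sigmoid.
    return x if x > 0 else 0.0

def dropout(activations, p, training=True, rng=random):
    """Inverted dropout: during training each unit is zeroed with
    probability p and survivors are scaled by 1/(1-p); at test time
    the layer is the identity."""
    if not training:
        return list(activations)
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]
```

Dropout forces units not to co-adapt, acting as a regularizer; ReLU made training the much deeper AlexNet practical.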
Main Idea:
✓Skip connection: promotes gradient propagation
RESNET [HE ET AL. 2016]
Identity mappings promote gradient propagation.
: Element-wise addition
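The element-wise addition above is the whole trick; a minimal sketch (with `f` standing in for the residual branch F, here an arbitrary callable):

```python
def residual_block(x, f):
    """A residual block computes y = F(x) + x. The identity shortcut means
    the gradient of y w.r.t. x always contains an identity term, which is
    why skip connections promote gradient propagation through deep stacks."""
    fx = f(x)
    # Element-wise addition of the residual branch output and the input.
    return [a + b for a, b in zip(fx, x)]
```

Even if F learns nothing (outputs zeros), the block defaults to the identity mapping, so adding layers cannot make the network worse in principle.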
Main Idea:
✓Dense connectivity: creates short paths in the network and encourages feature reuse.
ResNet
GoogleNet
FractalNet
DENSENET [HUANG ET AL. 2017]
REDUNDANCY IN DEEP MODELS
[Figure: a deep network as input → low-level features → mid-level features → high-level features → classifier → prediction]
REDUCING REDUNDANCY
DENSE CONNECTIVITY
[Figure: dense connectivity block; C denotes channel-wise concatenation]
DENSE AND SLIM
[Figure: every layer outputs k channels, combined by channel-wise concatenation]
k: Growth Rate
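With the growth rate k, the input width of each layer is fully determined; a small sketch of the channel bookkeeping (our own helper name, under the standard DenseNet convention that every layer contributes k channels):

```python
def densenet_layer_input_channels(k0, k, num_layers):
    """With dense connectivity, layer l receives the concatenation of the
    block input (k0 channels) and the k-channel outputs of all l-1
    earlier layers: k0 + (l - 1) * k input channels."""
    return [k0 + (l - 1) * k for l in range(1, num_layers + 1)]
```

Because each layer only adds a slim k channels, the per-layer width stays small even though connectivity is dense, which is how DenseNet stays parameter-efficient.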
Model     #Layers  #Parameters  Validation Error
ResNet    18       11.7M        30.43%
DenseNet  121      8.0M         25.03%
DENSE AND SLIM ARCHITECTURE
More connections, less computation!
THE LOSS SURFACE
https://www.cs.umd.edu/~tomg/projects/landscapes/
[Figure: loss surfaces of VGG-56, VGG-110, ResNet-56, and DenseNet-121]
Visualizing the loss landscape of neural nets. Li, Hao, et al. NIPS 2018.
Main Idea:
✓Multi-scale feature fusion: merge signals with different frequencies.
Interlinked CNN (Zhou et al, ISNN'15)
Neural Fabric (Saxena & Verbeek, NIPS'16)
MSDNet (Huang et al, ICLR'18)
HRNet (Sun et al, CVPR'19)
MULTI-SCALE NETWORKS
Main Idea:
✓Automatic architecture search using reinforcement learning, genetic/evolutionary
algorithms, or differentiable approaches.
AutoML is a very active research field; see www.automl.org
Neural Architecture Search [Zoph and Le, 2017 and many others]
PART 2
MICRO-ARCH INNOVATIONS IN CNN
Main Idea:
✓Split convolution into multiple groups
Standard Convolution vs. Group Convolution
#Params (standard):  C_in × C_out × k × k
#Params (G groups):  G × (C_in/G) × (C_out/G) × k × k = (C_in × C_out × k × k) / G
Networks using Group Convolution:
✓AlexNet (Krizhevsky et al, NIPS'12)
✓ResNeXt (Xie et al, CVPR'17)
✓CondenseNet (Huang et al, CVPR'18)
✓ShuffleNet (Zhang et al, CVPR'18)
✓…
Group Convolution
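The parameter-count comparison above can be checked with a one-line helper (our own function, following the standard formula for grouped k × k convolutions):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Parameter count of a k x k convolution. Each of the `groups` groups
    maps c_in/groups input channels to c_out/groups output channels, so
    grouping divides the standard count c_in * c_out * k * k by `groups`."""
    assert c_in % groups == 0 and c_out % groups == 0
    return groups * (c_in // groups) * (c_out // groups) * k * k
```

For example, a 3 × 3 convolution from 64 to 64 channels drops from 36,864 parameters to 9,216 with 4 groups, at the cost of no cross-group channel mixing (which ShuffleNet's channel shuffle addresses).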
Main Idea:
✓Split convolution into multiple groups, with one channel per group
Networks using DSC:
✓Xception (Chollet, CVPR'17)
✓MobileNet (Howard et al, 2017)
✓MobileNet V2 (Sandler et al, 2018)
✓ShuffleNet V2 (Ma et al, ECCV'18)
✓NASNet (Zoph et al, CVPR'18)
✓…
Depth-wise Separable Convolution (DSC)
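DSC is the extreme case of grouping (one channel per group) followed by a 1 × 1 convolution to mix channels; a small parameter-count sketch (our own helper name):

```python
def dsc_params(c_in, c_out, k):
    """Depth-wise separable convolution = a depth-wise k x k convolution
    (one filter per input channel: c_in * k * k parameters) followed by a
    point-wise 1x1 convolution (c_in * c_out parameters)."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise
```

For a 3 × 3 convolution from 64 to 128 channels this is 8,768 parameters versus 73,728 for the standard version, roughly a k²-fold saving when c_out is large.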
Main Idea:
✓Channel-wise attention: second-order operations
Squeeze and excitation network (Hu et al, CVPR’18)
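A stripped-down sketch of the squeeze-and-excitation idea; for brevity the learned FC layers of the excitation step are omitted here (only the squeeze, gating, and channel rescaling are shown), so this illustrates the data flow, not the trained module:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(feature_maps):
    """Channel-wise attention sketch: squeeze each channel (a 2D map) to a
    scalar by global average pooling, gate it through a sigmoid (the two
    learned FC layers are omitted), and rescale the whole channel by that
    weight -- a second-order, channel-wise interaction."""
    out = []
    for fmap in feature_maps:                  # one 2D map per channel
        vals = [v for row in fmap for v in row]
        s = sum(vals) / len(vals)              # squeeze: global average pool
        w = sigmoid(s)                         # excitation (FC layers omitted)
        out.append([[w * v for v in row] for row in fmap])
    return out
```

The key point is that each channel's scaling depends on a global summary of the channel itself, letting the network emphasize informative channels and suppress weak ones.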
Main Idea:
✓Increase the receptive field via filter dilation
Dilated convolution (Yu & Koltun, ICLR'16)
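The receptive-field gain from dilation follows a simple formula; a one-line sketch (our own helper name):

```python
def effective_kernel_size(k, d):
    """A k x k filter with dilation d inserts d-1 gaps between taps, so it
    covers k + (k - 1)(d - 1) positions per dimension: a larger receptive
    field with no extra parameters."""
    return k + (k - 1) * (d - 1)
```

Stacking layers with exponentially growing dilation (1, 2, 4, ...) therefore grows the receptive field exponentially with depth while keeping resolution, which is why dilated convolutions are popular for dense prediction.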
Main Idea:
✓Learn the offset field for convolutional filters
Deformable convolution (Dai et al, ICCV'17)
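Because the learned offsets are fractional, deformable convolution reads its inputs with bilinear interpolation instead of integer indexing; a minimal sketch of that sampling step (the offset prediction network itself is omitted):

```python
def bilinear_sample(fmap, y, x):
    """Sample a 2D feature map at a fractional location (y, x) by bilinear
    interpolation. Deformable convolution uses this to gather inputs at
    learned offset positions rather than the fixed grid of a regular
    filter. Assumes 0 <= y, x within the map bounds."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(fmap) - 1)
    x1 = min(x0 + 1, len(fmap[0]) - 1)
    wy, wx = y - y0, x - x0
    # Interpolate along x on the two neighboring rows, then along y.
    top = fmap[y0][x0] * (1 - wx) + fmap[y0][x1] * wx
    bot = fmap[y1][x0] * (1 - wx) + fmap[y1][x1] * wx
    return top * (1 - wy) + bot * wy
```

Bilinear interpolation is differentiable in (y, x), which is what makes the offset field learnable end-to-end.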
PART 3
DYNAMIC NETWORKS
[Figure: ImageNet top-1 accuracy of milestone architectures]
Model         Year  Top-1 Accuracy (%)
AlexNet       2012  57.
GoogleNet     2014  68.7
VGG           2014  70.5
ResNet-152    2015  77.8
DenseNet-264  2017  79.6
DEVELOPMENT OF DEEP LEARNING
ACCURACY-TIME TRADEOFF
*Photo Courtesy of Pixel Addict (CC BY-ND 2.0)
BIGGER IS BETTER
Bigger models are needed for those non-canonical images.
*Photo Courtesy of Willian Doyle (CC BY-ND 2.0)
BIGGER IS BETTER
Why do we use the same expensive model for all images?
Can we use small & cheap models for easy images,
and big & expensive models for hard ones?
A NAIVE IDEA OF ADAPTIVE EVALUATION
AlexNet
Inception
ResNet
"easy" horse
"hard" hoarse
CHALLENGE: LACK OF COARSE-LEVEL FEATURES
[Figure: Input → Fine-level features → (down-sampling) → Mid-level features → (down-sampling) → Coarse-level features → Linear → Output, with a classifier attached at each stage]
Classifiers only work well on coarse-scale feature maps.
Nearly all computation has been done before getting a coarse feature.
SOLUTION: MULTI-SCALE ARCHITECTURE
[Figure: the multi-scale architecture maintains fine-, mid-, and coarse-level features in parallel at every depth; classifiers 1–4 are attached along the coarse scale as the test input propagates]
MULTI-SCALE FEATURES
Classifiers only operate on high-level features!
[Figure: fine-, mid-, and coarse-level feature maps]
Class: red wine ("easy", exits at the first classifier)
Class: volcano ("hard", exits at the last classifier)
VISUALIZATION
MORE RESULTS AND DISCUSSIONS
Please refer to
Multi-scale dense network for efficient image classification, ICLR Oral,
2018 (Acceptance rate 2.2%, Rank 4/935)
ADAPTIVE INFERENCE IS A CHALLENGING PROBLEM
How to design proper network architectures?
How to effectively train dynamic networks?
How to efficiently perform dynamic evaluation?
Adaptive inference for object detection and segmentation?
Spatially adaptive and temporally adaptive inference?