AlexNet.pptx

ssuser2624f71 196 views 12 slides Sep 20, 2023

About This Presentation

AlexNet


Slide Content

AlexNet Min-Seo Kim Network Science Lab Dept. of Artificial Intelligence The Catholic University of Korea E-mail: [email protected]

Ongoing studies: AlexNet, VGGNet

1. AlexNet Current approaches to object recognition include collecting larger datasets, learning more powerful models, and using better techniques to prevent overfitting.

1. AlexNet - Dataset Images were down-sampled to a fixed resolution of 256 × 256: each image was rescaled so that its shorter side had length 256, and the central 256 × 256 patch was then cropped from the result.
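The two-step preprocessing above (rescale the shorter side to 256, then center-crop) can be sketched as follows; `resize_dims` and `center_crop` are illustrative helper names, not functions from the paper's code:

```python
import numpy as np

def resize_dims(w, h, target=256):
    # Scale so the shorter side becomes `target`, preserving aspect ratio.
    if w < h:
        return target, round(h * target / w)
    return round(w * target / h), target

def center_crop(img, size=256):
    # img: H x W x C array; crop the central size x size patch.
    h, w = img.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

# Example: a 512 x 384 (w x h) image -> shorter side 384 scaled to 256.
print(resize_dims(512, 384))  # (341, 256)
```

The actual pixel resampling would be done by an image library; the sketch only shows the geometry of the transform.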

1. AlexNet - Architecture Activation function: ReLU. In terms of training time with gradient descent, saturating nonlinearities (sigmoid, tanh) are much slower than the non-saturating nonlinearity (ReLU). A four-layer convolutional neural network with ReLUs (solid line) reaches a 25% training error rate on CIFAR-10 six times faster than an equivalent network with tanh neurons (dashed line).
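A minimal sketch of why ReLU is called non-saturating: its gradient stays at 1 for any positive input, while the tanh gradient shrinks toward zero as inputs grow, which slows gradient-descent training:

```python
import numpy as np

def relu(x):
    # Non-saturating: output grows linearly, gradient is 1 for x > 0.
    return np.maximum(0.0, x)

def tanh_grad(x):
    # Saturating: d/dx tanh(x) = 1 - tanh(x)^2 vanishes for large |x|.
    return 1.0 - np.tanh(x) ** 2

x = np.array([-5.0, 0.0, 5.0])
print(relu(x))       # [0. 0. 5.]
print(tanh_grad(x))  # gradient is nearly 0 at |x| = 5
```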

1. AlexNet - Architecture Training on Multiple GPUs A single GTX 580 GPU has only 3GB of memory, which limits the maximum size of the networks that can be trained on it.

1. AlexNet - Architecture Local Response Normalization
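Local response normalization divides each activation by a term summing the squared activations of nearby channels at the same spatial position. A sketch using the paper's formula and its reported constants (k = 2, n = 5, α = 10⁻⁴, β = 0.75); the function name and the channel-first layout are illustrative choices:

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    # a: (C, H, W) activations. Per the AlexNet formula,
    #   b_i = a_i / (k + alpha * sum over n adjacent channels of a_j^2) ** beta
    C = a.shape[0]
    b = np.empty_like(a)
    for i in range(C):
        lo, hi = max(0, i - n // 2), min(C, i + n // 2 + 1)
        denom = (k + alpha * np.sum(a[lo:hi] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b

a = np.ones((8, 4, 4))
print(local_response_norm(a).shape)  # (8, 4, 4)
```

The effect is a form of lateral inhibition: a channel that fires strongly suppresses its neighbors' normalized responses.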

1. AlexNet - Architecture Overlapping Pooling
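Pooling is "overlapping" when the stride is smaller than the pooling window; AlexNet uses 3 × 3 windows with stride 2. A naive max-pooling sketch over a single feature map:

```python
import numpy as np

def max_pool(x, size=3, stride=2):
    # x: H x W feature map. Overlapping pooling when stride < size
    # (AlexNet: size 3, stride 2), so adjacent windows share pixels.
    h, w = x.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i*stride:i*stride+size,
                          j*stride:j*stride+size].max()
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
print(max_pool(x))  # 2x2 output; adjacent windows overlap by one row/column
```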

1. AlexNet - Architecture Overall Architecture. The input is a 224×224×3 image, filtered by the first convolutional layer with 96 kernels of size 11×11×3 at a stride of 4 pixels. The second convolutional layer has 256 kernels of size 5×5×48; the third, 384 kernels of size 3×3×256; the fourth, 384 kernels of size 3×3×192; and the fifth, 256 kernels of size 3×3×192. The fully-connected layers have 4096 neurons each. Response-normalization layers follow the first and second convolutional layers.
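Tallying the weights implied by the kernel shapes above gives the network's well-known size of roughly 60 million parameters. Biases are omitted, and the fully-connected input size 6×6×256 is an assumption about conv5's pooled output, consistent with the paper's layer dimensions:

```python
# Weight counts per layer, from the kernel shapes listed above.
convs = {
    "conv1": 96  * 11 * 11 * 3,
    "conv2": 256 * 5 * 5 * 48,
    "conv3": 384 * 3 * 3 * 256,
    "conv4": 384 * 3 * 3 * 192,
    "conv5": 256 * 3 * 3 * 192,
}
fcs = {
    "fc6": 4096 * 6 * 6 * 256,   # assumes a 6x6x256 pooled conv5 output
    "fc7": 4096 * 4096,
    "fc8": 1000 * 4096,          # 1000 ImageNet classes
}
total = sum(convs.values()) + sum(fcs.values())
print(f"conv weights: {sum(convs.values()):,}")
print(f"fc   weights: {sum(fcs.values()):,}")
print(f"total       : {total:,}")  # roughly 60 million
```

Note how the fully-connected layers hold the vast majority of the parameters, which is one reason dropout is applied there.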

1. AlexNet - Reducing Overfitting Data Augmentation. The easiest and most common method to reduce overfitting on image data is to artificially enlarge the dataset using label-preserving transformations: horizontal reflections and image translations, and altering the intensities of the RGB channels.
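The translation-and-reflection part can be sketched as random 224 × 224 crops from the 256 × 256 training images plus a random horizontal flip (the RGB-intensity alteration, a PCA-based color shift, is omitted here). The function name `augment` is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, crop=224):
    # Random 224x224 crop of a 256x256 image (image translation)
    # plus a random horizontal reflection: label-preserving transforms.
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    patch = img[top:top + crop, left:left + crop]
    if rng.random() < 0.5:
        patch = patch[:, ::-1]  # horizontal reflection
    return patch

img = rng.random((256, 256, 3))
print(augment(img).shape)  # (224, 224, 3)
```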

1. AlexNet - Reducing Overfitting Dropout
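In AlexNet, dropout zeroes each hidden unit in the first two fully-connected layers with probability 0.5 during training; at test time all units are used and their outputs are scaled to match the training-time expectation. A minimal sketch of that scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, train=True):
    # Training: each unit is zeroed independently with probability p,
    # so it cannot co-adapt with specific other units.
    # Test: keep every unit but scale by (1 - p), matching the
    # training-time expected activation.
    if train:
        mask = rng.random(x.shape) >= p
        return x * mask
    return x * (1 - p)

x = np.ones(10)
print(dropout(x, train=True))   # roughly half the units zeroed
print(dropout(x, train=False))  # every unit scaled to 0.5
```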

End. QnA. My blog article: https://velog.io/@kms39273/CNNAlexNet-%EB%85%BC%EB%AC%B8-%EB%A6%AC%EB%B7%B0