Deep Learning - RNN and CNN

Pradnya Saval · 24 slides · Jun 07, 2020


Slide Content

Deep Learning – Recurrent Neural Network and Convolutional Neural Network Ms. Pradnya Saval

CONTENTS

Feedforward networks: Feedforward networks are also called deep feedforward networks or multilayer perceptrons (MLPs). These models are called feedforward because information flows through the function being evaluated, from the input through the intermediate computations and finally to the output. There are no feedback connections in which outputs of the model are fed back into it, so the outputs are independent of each other.
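A minimal sketch of such a network in Keras (the 784-dimensional input, the layer widths and the 10-class output are illustrative assumptions, not taken from the slides):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Feedforward network (MLP): information flows strictly input -> hidden -> output,
# with no feedback connections between layers.
model = keras.Sequential([
    layers.Input(shape=(784,)),             # fixed-size input vector
    layers.Dense(128, activation="relu"),   # hidden layer 1
    layers.Dense(64, activation="relu"),    # hidden layer 2
    layers.Dense(10, activation="softmax")  # output: 10 class probabilities
])
model.summary()
```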

Problems with Feedforward networks: e.g. reading a book. We cannot predict the next word in a sentence with a feedforward network, because the model keeps no memory of previous inputs; each output is computed independently of the ones before it.

Recurrent Neural Networks: When feedforward neural networks are extended to include feedback connections, they are called Recurrent Neural Networks (RNNs).
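A minimal next-word-prediction RNN in Keras (vocabulary size, sequence length and layer sizes are illustrative assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 5000   # assumed vocabulary size
seq_len = 20        # assumed number of preceding words fed to the model

model = keras.Sequential([
    layers.Input(shape=(seq_len,), dtype="int32"),
    layers.Embedding(vocab_size, 64),               # map word ids to dense vectors
    layers.SimpleRNN(128),                          # hidden state carries context across time steps
    layers.Dense(vocab_size, activation="softmax")  # probability distribution over the next word
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```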

Example

Mathematical Representation of RNN
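The slide body itself is not included here; the standard RNN recurrence it refers to can be written, in the usual notation (the symbol names are an assumption, not taken from the slides), as:

h_t = tanh(W_xh * x_t + W_hh * h_{t-1} + b_h)
y_t = softmax(W_hy * h_t + b_y)

where x_t is the input at time step t, h_t is the hidden state, and the same weight matrices W_xh, W_hh, W_hy are reused at every step.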

Problems using RNN

Convolutional Neural Network (CNN): A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image, and differentiate one from the other.

Architecture of CNN

Working of CNN. Layers: Convolution, ReLU layer (activation function), Pooling, Fully Connected.

Convolution Layer
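A minimal numpy sketch of the operation a convolution layer performs: a small kernel slides over the image and produces a feature map (single channel, stride 1, no padding; the 3x3 kernel values are illustrative assumptions):

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image; each output value is the sum of the
    # element-wise product between the kernel and the patch it covers.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.random.rand(5, 5)
edge_kernel = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])   # a simple vertical-edge detector
print(conv2d(image, edge_kernel).shape)  # (3, 3) feature map
```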

Activation Functions (name, formula, range):
Sigmoid (logistic function): sigmoid(a) = 1 / (1 + e^(-a)), range (0, 1)
Tanh (hyperbolic tangent): tanh(a) = (e^a - e^(-a)) / (e^a + e^(-a)), range (-1, 1)
ReLU (rectified linear unit): relu(a) = max(0, a), range [0, ∞)
Softmax: softmax(a)_i = e^(a_i) / Σ_j e^(a_j), graph differs for each input, range (0, 1)
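Minimal numpy versions of the activation functions in the table above:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))        # range (0, 1)

def tanh(a):
    return np.tanh(a)                      # range (-1, 1)

def relu(a):
    return np.maximum(0, a)                # range [0, inf)

def softmax(a):
    e = np.exp(a - np.max(a))              # subtract max for numerical stability
    return e / e.sum()                     # outputs sum to 1, each in (0, 1)

x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x), tanh(x), relu(x), softmax(x))
```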

ReLU Layer (activation function): The activation function of a neuron defines the output of that neuron given a set of inputs. ReLU layers work far better in practice because the network can train a lot faster (due to ReLU's computational efficiency) without a significant loss of accuracy. Example: Climate

ReLU (rectified linear unit): the more positive the input to a neuron, the more activated it is; inputs below zero produce an output of zero.

Pooling: Its function is to progressively reduce the spatial size of the representation, which reduces the number of parameters and the amount of computation in the network. Types: Average Pooling and Max Pooling. The pooling layer operates on each feature map independently; the most common approach is max pooling.

Max Pooling
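A minimal numpy sketch of 2x2 max pooling with stride 2 on a single feature map (the example values are illustrative assumptions):

```python
import numpy as np

def max_pool2x2(feature_map):
    # Keep only the largest value in each non-overlapping 2x2 window,
    # halving the spatial size of the feature map.
    h, w = feature_map.shape
    out = np.zeros((h // 2, w // 2))
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            out[i // 2, j // 2] = feature_map[i:i+2, j:j+2].max()
    return out

fm = np.array([[1, 3, 2, 1],
               [4, 6, 5, 2],
               [7, 2, 9, 1],
               [3, 1, 4, 8]], dtype=float)
print(max_pool2x2(fm))
# [[6. 5.]
#  [7. 9.]]
```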

Fully Connected Layer: Fully connected layers form the last few layers in the network. The input to the fully connected layer is the output from the final pooling or convolutional layer, which is flattened and then fed into the fully connected layer.
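Putting the layers together, a minimal Keras sketch of the pipeline described above (convolution, ReLU, pooling, flatten, fully connected); the 64x64 RGB input, layer sizes and 10-class output are illustrative assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # convolution + ReLU
    layers.MaxPooling2D((2, 2)),                    # pooling
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                               # flatten the final feature maps
    layers.Dense(128, activation="relu"),           # fully connected layer
    layers.Dense(10, activation="softmax")          # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```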

Projects of RNN and CNN

Demonstration of CNN: Flower classification using CNN. Dataset (Kaggle): https://www.kaggle.com/alxmamaev/flowers-recognition
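A hedged sketch of how such a flower classifier could be set up with TensorFlow 2.x Keras. The directory path, image size, epoch count and model layout are assumptions, not taken from the slides; the Kaggle dataset ships as one folder per flower class, which image_dataset_from_directory can read directly:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed path to the extracted Kaggle "flowers-recognition" dataset
# (one sub-folder per class: daisy, dandelion, rose, sunflower, tulip).
train_ds = keras.utils.image_dataset_from_directory(
    "flowers",
    image_size=(64, 64),
    batch_size=32)

num_classes = len(train_ds.class_names)

model = keras.Sequential([
    layers.Rescaling(1.0 / 255),                    # scale pixel values to [0, 1]
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax")
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)   # each full pass over the data is one epoch
```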

Epochs of CNN

Epochs of CNN

Resources:
https://towardsdatascience.com/convolutional-neural-network-17fb77e76c05
https://www.kaggle.com/search?q=deep+learning
https://pythonprogramming.net/introduction-deep-learning-python-tensorflow-keras/