Classification_by_back_propagation.pptx

SadiaSaleem301 60 views 11 slides Sep 11, 2024

About This Presentation

Data Mining notes for back propagation


Slide Content

Classification by Back Propagation
Presented by: Faizan
Enrollment No: 21048112022
Semester: 6th

So, What is Back Propagation? Backpropagation is an algorithm that propagates errors backward from the output nodes to the input nodes; hence it is simply referred to as the backward propagation of errors. It is used in many neural-network applications in data mining, such as character recognition and signature verification. The algorithm computes the gradient of the loss function with respect to each weight via the chain rule, working layer by layer and iterating backward from the last layer to avoid redundant computation of intermediate terms.
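The chain-rule computation described above can be illustrated on a one-weight network. This is a minimal sketch (the values of w, x, and t are illustrative, not from the slides): the loss L = (sigmoid(w·x) − t)² is differentiated by multiplying the derivatives of each stage, and a finite-difference check confirms the result.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative values for a single-weight "network"
w, x, t = 0.5, 2.0, 1.0

# Forward pass: z = w*x, y = sigmoid(z), L = (y - t)^2
z = w * x
y = sigmoid(z)

# Chain rule: dL/dw = dL/dy * dy/dz * dz/dw
dL_dy = 2 * (y - t)          # derivative of the squared-error loss
dy_dz = y * (1 - y)          # derivative of the sigmoid
dz_dw = x                    # derivative of the weighted sum
grad = dL_dy * dy_dz * dz_dw

# Numerical sanity check: a finite difference should match the gradient
eps = 1e-6
L = lambda w_: (sigmoid(w_ * x) - t) ** 2
num_grad = (L(w + eps) - L(w - eps)) / (2 * eps)
```

In a multi-layer network the same multiplication of local derivatives is repeated layer by layer, which is exactly what makes backpropagation efficient.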

First, What is a Neural Network? Neural networks are an information-processing paradigm inspired by the human nervous system. Just as the nervous system is built from biological neurons, neural networks are built from artificial neurons: mathematical functions modeled on their biological counterparts. The human brain is estimated to have about 10 billion neurons, each connected to an average of 10,000 other neurons.
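An artificial neuron of the kind described above can be sketched in a few lines of Python (the weights, bias, and inputs here are made-up illustrative values): it computes a weighted sum of its inputs plus a bias and passes it through an activation function, here the sigmoid.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Example with two inputs and illustrative weights
out = neuron([0.5, -1.0], [0.4, 0.6], bias=0.1)
```

A whole network is just many such units wired together, the outputs of one layer serving as the inputs of the next.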

Features of Back Propagation. Backpropagation is a powerful algorithm with several key features that make it essential for training neural networks:

Supervised Learning Algorithm (Training with Labeled Data): Backpropagation is used in supervised learning, where the neural network is trained on labeled data. The correct output (label) is known, and the network adjusts its weights to minimize the difference between its predictions and the actual labels.

Flexibility and Generalization (Applicable to Various Architectures): Backpropagation can be used to train various types of neural networks, including feedforward networks, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).

Non-Linear Learning (Works with Non-Linear Activation Functions): Backpropagation is effective in networks with non-linear activation functions (such as ReLU, sigmoid, or tanh), allowing the network to model complex, non-linear relationships in the data.

Backward Propagation of Error (Error Correction): The defining feature of backpropagation is its ability to propagate the error from the output layer back through the network. This error is used to correct the weights, ensuring that the network learns from its mistakes and improves its predictions.

How Back Propagation Works. Backpropagation consists of two main phases: a forward pass and a backward pass.

Forward Pass:
Input Data: The input data is passed through the network layer by layer.
Calculations: Each neuron computes a weighted sum of its inputs, applies an activation function, and passes the result to the next layer.
Output: The final output is generated at the output layer; this is the model's prediction.
Loss Calculation: The predicted output is compared with the actual target value to calculate the loss (error). Common loss functions include Mean Squared Error (MSE) for regression and Cross-Entropy Loss for classification.
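The forward pass and loss calculation above can be sketched for a tiny 2-input, 2-hidden-neuron, 1-output network (all weights and inputs here are illustrative, not from the slides), using MSE as the loss:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, w_out):
    # Hidden layer: each neuron takes a weighted sum of x and applies sigmoid
    h = [sigmoid(sum(xi * wi for xi, wi in zip(x, row))) for row in w_hidden]
    # Output layer: weighted sum of hidden activations, sigmoid again
    y = sigmoid(sum(hi * wi for hi, wi in zip(h, w_out)))
    return h, y

def mse(pred, target):
    # Mean Squared Error for a single prediction
    return (pred - target) ** 2

x = [1.0, 0.0]
w_hidden = [[0.2, -0.4], [0.7, 0.1]]   # one weight row per hidden neuron
w_out = [0.5, -0.3]
h, y = forward(x, w_hidden, w_out)
loss = mse(y, 1.0)                     # compare prediction against target 1.0
```

The backward pass, described next, uses this loss to decide how each weight should change.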

Backward Pass:
Gradient Calculation: Backpropagation calculates the gradient of the loss function with respect to each weight by applying the chain rule of calculus. This involves propagating the error backward through the network, from the output layer to the input layer.
Weight Updates: The weights are updated in the direction that reduces the loss, by subtracting the gradient (scaled by a learning rate) from each weight:

w := w − η · ∂L/∂w

Where:
w is the weight.
η is the learning rate.
∂L/∂w is the gradient of the loss function with respect to the weight w.
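The weight-update rule is a one-liner in code. A minimal sketch (the weight, gradient, and learning-rate values are illustrative):

```python
def update_weight(w, grad, lr=0.1):
    """Gradient-descent step: w := w - lr * dL/dw."""
    return w - lr * grad

# A positive gradient means increasing w would increase the loss,
# so the update moves w downward: 0.8 - 0.1 * 0.5 = 0.75
w_new = update_weight(0.8, 0.5, lr=0.1)
```

The learning rate η controls the step size: too small and training is slow, too large and the updates can overshoot the minimum.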

Backpropagation Algorithm:
Step 1: Inputs X arrive through the preconnected path.
Step 2: The input is modeled using weights W, which are usually chosen randomly.
Step 3: Calculate the output of each neuron, from the input layer through the hidden layer to the output layer.
Step 4: Calculate the error in the outputs: Error = Actual Output − Desired Output.
Step 5: From the output layer, go back to the hidden layer and adjust the weights to reduce the error.
Step 6: Repeat the process until the desired output is achieved.

Parameters:
x = input training vector, x = (x1, x2, …, xn).
t = target vector, t = (t1, t2, …, tn).
δk = error at output unit k.
δj = error at hidden unit j.
α = learning rate.
v0j = bias of hidden unit j.
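The six steps above can be sketched end to end for a 2-2-1 network. This is a minimal illustration, not the exact network from the slides: it assumes sigmoid activations, squared-error loss, and the logical OR function as training data, and it uses the slide's symbols (V and v0j for input-to-hidden weights and biases, W for hidden-to-output weights, α for the learning rate, δk and δj for the errors).

```python
import math, random

random.seed(0)  # reproducible random initialization (Step 2)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

n_in, n_hid = 2, 2
V = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b_hid = [random.uniform(-1, 1) for _ in range(n_hid)]  # v0j biases
W = [random.uniform(-1, 1) for _ in range(n_hid)]
b_out = random.uniform(-1, 1)

# Step 1: training inputs with targets (logical OR)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
alpha = 0.5  # learning rate

for epoch in range(5000):  # Step 6: repeat until the output is good enough
    for x, t in data:
        # Step 3: forward pass, input -> hidden -> output
        h = [sigmoid(sum(xi * vi for xi, vi in zip(x, V[j])) + b_hid[j])
             for j in range(n_hid)]
        y = sigmoid(sum(hj * wj for hj, wj in zip(h, W)) + b_out)
        # Step 4: error at the output (delta_k folds in the sigmoid derivative)
        delta_k = (y - t) * y * (1 - y)
        # Step 5: propagate the error back to the hidden layer (delta_j),
        # using the current W before updating it
        delta_j = [delta_k * W[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        # Weight updates: w := w - alpha * gradient
        for j in range(n_hid):
            W[j] -= alpha * delta_k * h[j]
            for i in range(n_in):
                V[j][i] -= alpha * delta_j[j] * x[i]
            b_hid[j] -= alpha * delta_j[j]
        b_out -= alpha * delta_k

# After training, the network's predictions should match the OR targets
preds = []
for x, t in data:
    h = [sigmoid(sum(xi * vi for xi, vi in zip(x, V[j])) + b_hid[j])
         for j in range(n_hid)]
    preds.append(sigmoid(sum(hj * wj for hj, wj in zip(h, W)) + b_out))
```

OR is linearly separable, so even this tiny network learns it quickly; harder targets such as XOR need the hidden layer to do real work and may require more epochs or different initializations.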

Types of Back Propagation: There are two types of backpropagation networks.
Static backpropagation: A network designed to map static inputs to static outputs. These networks can solve static classification problems such as OCR (Optical Character Recognition).
Recurrent backpropagation: A network used for fixed-point learning. Activations are fed forward until they settle at a fixed value. Static backpropagation provides an instant mapping, while recurrent backpropagation does not.

Advantages and Disadvantages:
Advantages:
It is simple, fast, and easy to program.
Apart from the number of inputs, it has no extra parameters to tune.
It is flexible and efficient.
Users do not need to learn any special functions.
Disadvantages:
It is sensitive to noisy data and irregularities; noisy data can lead to inaccurate results.
Performance is highly dependent on the input data.
Training can be very time-consuming.
It favors a matrix-based approach over a mini-batch approach.

Conclusion:
Learning from Errors: Backpropagation adjusts the network’s weights based on the errors it makes. It calculates how much each weight contributed to the error and updates them to reduce this error.
Efficient Training: It uses the chain rule to efficiently compute these adjustments across all layers of the network, making it possible to train even complex models.
Versatile: It works with different types of neural networks, making it a versatile tool in machine learning.
In short, backpropagation is essential for teaching neural networks how to improve their predictions, making it a fundamental part of modern AI and machine learning.

Thank You