Learning Rules - Introduction

Learning Rule: A method or algorithm by which a neural network updates its weights and biases during training.

Purpose:
✅ Minimize the error between predicted and desired outputs.
✅ Enable the network to learn from data.

Key Learning Rules Covered:
- Hebbian Learning Rule
- Perceptron Learning Rule
- Delta Rule (Widrow-Hoff Learning Rule)
Hebbian Learning Rule

This rule is based on the biological observation that "neurons that fire together, wire together." It is an unsupervised learning algorithm used in neural networks to adjust the weights between nodes, based on the principle that the connection strength between two neurons should change depending on their activity patterns.

The rule can be summarized as follows:
- When two neighboring neurons operate in the same phase at the same time, the weight between them increases.
- If the neurons operate in opposite phases, the weight between them decreases.
- When there is no signal correlation between the neurons, the weight remains unchanged.

The sign of the weight between two nodes is determined by the signs of their input signals:
- If both nodes receive inputs that are either positive or negative, the resulting weight is strongly positive.
- If one node's input is positive while the other's is negative, the resulting weight is strongly negative.
Mathematical Formulation:

Δwᵢ = α · xᵢ · y

where:
- Δwᵢ is the change in weight i,
- α is the learning rate,
- xᵢ is the i-th component of the input vector,
- y is the output.

This rule forms the foundation of many learning processes in artificial neural networks.
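As a minimal sketch of one Hebbian update step, assuming a single linear output neuron (the function name `hebbian_update` and the example values are illustrative, not part of the original formulation):

```python
import numpy as np

def hebbian_update(w, x, alpha):
    """One Hebbian step: Δw_i = α · x_i · y for a single linear neuron."""
    y = np.dot(w, x)          # output of the linear neuron
    return w + alpha * x * y  # strengthen weights when x_i and y agree in sign

# Illustrative values: two inputs firing in the same phase
w = np.array([0.1, -0.2])
x = np.array([1.0, 1.0])
w = hebbian_update(w, x, alpha=0.1)
print(w)
```

Because the update is proportional to xᵢ · y, a weight grows when input and output share the same sign and shrinks when they differ, matching the phase rules above.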
Perceptron Learning Rule

Principle: This rule adjusts weights to minimize classification errors.

The Perceptron Learning Rule is an error-correcting algorithm designed for single-layer feedforward networks. It is a supervised learning approach that adjusts weights based on the error calculated between the desired and actual outputs. Weight adjustments are made only when an error is present.

Definitions:
- (x₁, x₂, x₃, …, xₙ): input vector
- (w₁, w₂, w₃, …, wₙ): set of weights
- y: actual output
- w: initial weight
- w_new: new weight
- Δw: change in weight
- α: learning rate
Computation:

The net input is the weighted sum of the inputs:
  net = Σᵢ wᵢ · xᵢ

The learning signal, representing the error, is calculated as:
  e = t − y
where t is the desired output and y is the actual output.

The change in weight is determined by:
  Δwᵢ = α · xᵢ · e

The new weight is updated as:
  w_new = w + Δw

Output Calculation: The final output is based on the net input and the activation function applied to it:
  y = 1 if net ≥ θ, and y = 0 if net < θ
This rule provides the foundation for learning in perceptrons, enabling them to make adjustments and improve performance iteratively.
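The update equations above can be sketched as follows; this is a minimal illustration in which `perceptron_update`, the threshold θ = 0, and the sample values are assumptions, not part of the original formulation:

```python
import numpy as np

def step(net, theta=0.0):
    """Threshold activation: 1 if net input >= θ, else 0."""
    return 1 if net >= theta else 0

def perceptron_update(w, x, t, alpha):
    """One error-correcting step: w_new = w + α · x · (t − y)."""
    y = step(np.dot(w, x))    # actual output
    e = t - y                 # learning signal (desired − actual)
    return w + alpha * x * e  # weights change only when e ≠ 0

# Illustrative step: the sample is misclassified, so the weights move
w = np.array([0.5, -0.5])
w = perceptron_update(w, x=np.array([1.0, 1.0]), t=0, alpha=0.1)
print(w)
```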
Delta Learning Rule

Principle: Minimizes the error between the actual output and the desired output using gradient descent.

It is a supervised learning algorithm that uses a continuous activation function. Also known as the Least Mean Square (LMS) method, it aims to minimize the error across all training patterns. This rule is based on the gradient descent approach, which iteratively reduces the error by updating the weights of the network. It is computed as follows:
  Δwᵢ = α · (t − y) · f′(net) · xᵢ
where f′(net) is the derivative of the activation function evaluated at the net input, t is the desired output, and y is the actual output.
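A minimal sketch of one delta-rule step, assuming a sigmoid as the continuous activation function (the specific activation, `delta_update`, and the example values are illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def delta_update(w, x, t, alpha):
    """Gradient-descent step on squared error: Δw_i = α · (t − y) · f'(net) · x_i."""
    net = np.dot(w, x)
    y = sigmoid(net)
    f_prime = y * (1.0 - y)  # derivative of the sigmoid at the net input
    return w + alpha * (t - y) * f_prime * x

# Illustrative step toward a target of 1.0
w = np.array([0.2, 0.4])
w = delta_update(w, x=np.array([1.0, 0.5]), t=1.0, alpha=0.5)
print(w)
```

Unlike the perceptron rule, the correction here is scaled by f′(net), so it shrinks smoothly as the output approaches the target rather than switching off abruptly.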
Perceptron

The simplest form of a neural network: it consists of a single neuron with adjustable synaptic weights and a bias, and performs pattern classification with only two classes.

Perceptron convergence theorem:
- Patterns (vectors) are drawn from two linearly separable classes.
- During training, the perceptron algorithm converges and positions the decision surface, in the form of a hyperplane, between the two classes by adjusting the synaptic weights.
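To illustrate the convergence theorem, the sketch below trains a perceptron on a small linearly separable dataset (logical AND, an assumed example); because the classes are separable, training reaches an epoch with zero errors and the hyperplane stops moving:

```python
import numpy as np

# Linearly separable patterns: logical AND (an illustrative choice)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
alpha = 0.1

for epoch in range(20):      # enough passes for convergence on this data
    errors = 0
    for x, t in zip(X, T):
        y = 1 if np.dot(w, x) + b >= 0 else 0
        e = t - y
        if e != 0:           # adjust only on misclassification
            w += alpha * e * x
            b += alpha * e
            errors += 1
    if errors == 0:          # decision hyperplane now separates the classes
        break

print("weights:", w, "bias:", b)
```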
Single Layer Perceptron

A Single Layer Perceptron (SLP) is one of the simplest types of artificial neural networks, used mainly for binary classification (sometimes multi-class with modifications). It is inspired by biological neurons and their ability to process information.

An artificial neuron is a simplified computational model that represents the behavior of a biological neuron. It takes inputs, processes them, and produces an output. It works as follows:
1. Receive a signal from outside.
2. Process the signal and decide whether information needs to be sent onward.
3. Communicate the signal to the target cell, which can be another neuron or a gland.
[Figure: Structure of a biological neuron and its counterpart in a machine-learning neural network]
Classification Model

The SLP model consists of:
- Inputs (features): X₁, X₂, …, Xₙ
- Weights: W₁, W₂, …, Wₙ (one for each input)
- Bias: b (allows shifting the decision boundary)
- Activation Function: typically a step function or sign function

The perceptron computes the weighted sum of the inputs plus the bias,
  z = Σᵢ Wᵢ · Xᵢ + b,
and applies the activation function to z to produce the output.
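A minimal sketch of this computation, with a step activation (the weights, bias, and input values are illustrative):

```python
import numpy as np

def slp_forward(X, W, b):
    """Single-layer perceptron: z = Σ_i W_i · X_i + b, then step activation."""
    z = np.dot(W, X) + b
    return 1 if z >= 0 else 0

# Illustrative parameters for a 3-feature input
W = np.array([0.4, -0.3, 0.2])
b = -0.1
X = np.array([1.0, 0.5, 1.0])
print(slp_forward(X, W, b))  # 1, since z = 0.35 >= 0
```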
Features

Features are the inputs to the model; each input feature contributes a weighted influence on the decision.

Examples:
- For image classification: pixel intensities
- For medical data: blood pressure, age, cholesterol
- For finance: income, debt, credit score

The feature vector: X = [X₁, X₂, …, Xₙ]
Decision Process

The perceptron classifies by:
1. Calculating the weighted sum of the input features plus the bias: z = Σᵢ Wᵢ · Xᵢ + b
2. Passing z through a threshold function:
   - If z ≥ 0, assign to Class 1
   - Else, assign to Class 0

The result depends entirely on:
- the feature values,
- the weights (learned during training using the perceptron learning rule),
- the bias.

This creates a linear decision boundary (a hyperplane in higher dimensions). All samples on one side of the boundary are classified as one class, and the rest as the other.
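As an illustration of the linear boundary, the sketch below classifies two points against a hypothetical learned hyperplane x₁ + x₂ = 1 (the weights and bias are assumed for demonstration, not learned here):

```python
import numpy as np

def classify(x, w, b):
    """Assign Class 1 if z = w·x + b >= 0, else Class 0."""
    z = np.dot(w, x) + b
    return 1 if z >= 0 else 0

# Hypothetical learned parameters: boundary x1 + x2 = 1 in 2-D
w = np.array([1.0, 1.0])
b = -1.0

print(classify(np.array([0.8, 0.7]), w, b))  # 1: lies above the line x1 + x2 = 1
print(classify(np.array([0.2, 0.3]), w, b))  # 0: lies below the line
```

Every point with x₁ + x₂ ≥ 1 falls on the Class 1 side of this hyperplane, which is exactly the "one side per class" behavior described above.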