Classification by Back Propagation, Multi-layered Feed Forward Neural Networks - Bihira Aggrey
About This Presentation
Classification by Back Propagation, Multi-layered Feed Forward Neural Networks - provides a basic introduction to classification in data mining with neural networks.
Slide Content
Classification by Back Propagation, Multi-layered Feed Forward Neural Networks
By
Bihira Aggrey
Neural networks are computing systems inspired by the biological neurons that constitute animal brains. These systems learn to do tasks by considering examples, generally without task-specific programming.
Neuron vs. Node
ARCHITECTURE OF A NEURON MODEL
Consider an elementary neuron with R inputs, shown below. Each input is weighted with an appropriate weight w. The sum of the weighted inputs and the bias forms the input to the activation function f.
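A minimal sketch of this elementary neuron in Python/NumPy is shown below. The slides do not name a specific activation function f, so a sigmoid is assumed here for illustration; the input values, weights, and bias are likewise illustrative.

```python
import numpy as np

def neuron_output(x, w, b):
    """Sum of the weighted inputs and the bias, passed through f (sigmoid assumed)."""
    net = np.dot(w, x) + b              # weighted sum of the R inputs plus the bias
    return 1.0 / (1.0 + np.exp(-net))   # activation function f

# Example: a neuron with R = 3 inputs.
x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.4, 0.6, -0.2])   # one weight w per input
b = 0.1                          # bias
print(neuron_output(x, w, b))
```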
A multilayer feed-forward neural network consists of an input layer, one or more hidden layers, and an output layer.
The inputs to the network correspond to the attributes measured for each training tuple.
The inputs are fed simultaneously into the units making up the input layer.
These inputs pass through the input layer and are then weighted and fed simultaneously to a second layer known as a hidden layer.
The outputs of the hidden layer units can be input to another hidden layer, and so on.
This structure is called multilayer because it has a layer of processing units (i.e., the hidden units) in addition to the output units. These networks are called feed-forward because the output from one layer of neurons feeds forward into the next layer of neurons.
A multilayer feed-forward neural network
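To make the forward flow concrete, here is a minimal sketch of one forward pass through such a network (input layer, one hidden layer, output layer). The layer sizes, random weights, and sigmoid activation are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 5, 3          # e.g. 4 attributes, 3 output units

W1 = rng.normal(size=(n_hidden, n_in))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))  # hidden -> output weights
b2 = np.zeros(n_out)

x = rng.normal(size=n_in)        # attribute values of one training tuple
h = sigmoid(W1 @ x + b1)         # hidden layer outputs (fed forward)
o = sigmoid(W2 @ h + b2)         # output layer prediction
print(o)
```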
For the error of a hidden layer unit j, the weighted sum of the errors of the units connected to unit j in the next layer is considered.
The error for hidden layer unit j is then calculated (for sigmoid units) as

Err_j = O_j (1 - O_j) * SUM_k ( Err_k * w_jk )

where w_jk is the weight of the connection from unit j to a unit k in the next higher layer, Err_k is the error of unit k, and O_j is the output of hidden unit j.
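The sketch below computes this hidden-unit error, again assuming sigmoid units so the derivative term is O_j * (1 - O_j). Here W[j, k] holds the weight w_jk from hidden unit j to next-layer unit k; all values are illustrative.

```python
import numpy as np

O = np.array([0.7, 0.3])                 # outputs O_j of the hidden units
Err_next = np.array([0.1, -0.05, 0.02])  # errors Err_k of the next layer's units
W = np.array([[0.5, -0.3, 0.8],
              [0.2,  0.4, -0.6]])        # w_jk, shape (hidden, next layer)

# Err_j = O_j (1 - O_j) * sum_k ( Err_k * w_jk )
Err_hidden = O * (1 - O) * (W @ Err_next)
print(Err_hidden)
```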
The network was trained with a run consisting of 60,000 iterations. Since the training data has 150 cases, each case was presented to the network 400 times.
The output values of neurons in the output layer are used to compute the error. This error is used to adjust the weights of all the connections in the network using backward propagation.
The backprop algorithm cycles through two distinct passes: a forward pass followed by a backward pass through the layers of the network. The algorithm alternates between these passes several times as it scans the training data. Typically, the training data has to be scanned several times before the network "learns" to make good classifications.
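The sketch below ties these pieces together as a training loop: each iteration runs a forward pass, then a backward pass that propagates the output error through the layers and adjusts every weight. It mirrors the run described earlier (150 cases, each presented 400 times, i.e. 60,000 iterations), but the data, layer sizes, and the 0.5 learning rate are illustrative assumptions; units are sigmoid.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.random((150, 4))                             # 150 training cases, 4 attributes
T = rng.integers(0, 2, size=(150, 3)).astype(float)  # target outputs (placeholder data)

W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)        # input -> hidden
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)        # hidden -> output
lr = 0.5                                             # learning rate (assumed)

for epoch in range(400):                 # each case presented 400 times
    for x, t in zip(X, T):
        # Forward pass through the layers.
        h = sigmoid(W1 @ x + b1)
        o = sigmoid(W2 @ h + b2)
        # Backward pass: output-layer error, then hidden-layer error.
        err_o = o * (1 - o) * (t - o)
        err_h = h * (1 - h) * (W2.T @ err_o)
        # Adjust the weights of all connections in the network.
        W2 += lr * np.outer(err_o, h); b2 += lr * err_o
        W1 += lr * np.outer(err_h, x); b1 += lr * err_h
```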