Machine Learning: Perceptron and Backpropagation (Module 3)


About This Presentation

Machine Learning: Perceptron and Backpropagation


Slide Content

MACHINE LEARNING (INTEGRATED)
(21ISE62)
Dr. Shivashankar
Professor
Department of Information Science & Engineering
GLOBAL ACADEMY OF TECHNOLOGY, Bengaluru
Ideal Homes Township, Rajarajeshwari Nagar, Bengaluru - 560 098

Course Outcomes
After completion of the course, the student will be able to:
• Illustrate regression techniques and the decision tree learning algorithm.
• Apply SVM, ANN, and KNN algorithms to solve appropriate problems.
• Apply Bayesian techniques and derive effective learning rules.
• Illustrate the performance of AI and ML algorithms using evaluation techniques.
• Understand reinforcement learning and its application to real-world problems.
Textbooks:
1. Tom M. Mitchell, Machine Learning, McGraw Hill Education, India Edition, 2013.
2. Ethem Alpaydin, Introduction to Machine Learning, MIT Press, Second Edition.
3. Pang-Ning Tan, Michael Steinbach, Vipin Kumar, Introduction to Data Mining, Pearson, First Impression, 2014.

Module 3: Artificial Neural Networks
• The motivation behind neural networks is the human brain. The human brain is often called the best processor, even though it works more slowly than computers.
• Human brain cells, called neurons, form a complex, highly interconnected network and send electrical signals to each other to help humans process information.
• Similarly, an artificial neural network is made of artificial neurons that work together to solve real-world problems.
• Artificial neurons are software modules, called nodes, and artificial neural networks are software programs or algorithms that, at their core, use computing systems to solve mathematical calculations.
Fig 3.3: Artificial Neural Networks

Conti..
Input Layer
• This is the first layer in a typical neural network.
• Input layer neurons receive input information from the outside world, process it through a mathematical function (the activation function), and transmit output to the next layer's neurons based on a comparison with a preset threshold value.
• We pre-process text, image, audio, video, and other types of data to derive their numeric representation.
Hidden Layer
• Hidden layers take their input from the input layer or from other hidden layers, and a network may have a large number of them. Each hidden unit contains the summation and activation functions.
• Each hidden layer analyzes the output from the previous layer, processes it further, and passes it on to the next layer. Here also, the data is multiplied by edge weights as it is transmitted to the next layer.
Output Layer
• The output layer gives the final result of all the data processing by the artificial neural network. It can have single or multiple nodes.
• For instance, if we have a binary (yes/no) classification problem, the output layer will have one output node, which will give the result as 1 or 0.
• However, if we have a multi-class classification problem, the output layer might consist of more than one output node.

Conti..
• An ANN is a computational network inspired by the biological neural networks that constitute the structure of the human brain.
• Just as a human brain has neurons interconnected with each other, artificial neural networks also have neurons that are linked to each other in the various layers of the network.
• These neurons are known as nodes.
• Artificial neural networks (ANNs) provide a general, practical method for learning real-valued, discrete-valued, and vector-valued functions from examples.
• ANN learning is robust to errors in the training data and has been successfully applied to problems such as interpreting visual scenes, speech recognition, and learning robot control strategies.
• The fastest neuron switching times are known to be on the order of $10^{-3}$ seconds, quite slow compared to computer switching speeds of $10^{-10}$ seconds.

Biological Motivation
• The term "Artificial Neural Network (ANN)" refers to a biologically inspired sub-field of artificial intelligence modeled after the brain.
• ANNs have been inspired by the biological learning system, which is made up of a complex web of interconnected neurons.
• Artificial interconnected neurons, like the biological neurons, make up an ANN.
• Each biological neuron is capable of taking a number of inputs and producing an output.
• One motivation for ANNs is that a particular task can be carried out through many parallel processes.
Consider the human brain:
• Number of neurons: ~$10^{11}$
• Connections per neuron: ~$10^{4-5}$
• Neuron switching time: ~$10^{-3}$ seconds (0.001)
• Computer switching time: ~$10^{-10}$ seconds
• Scene recognition time: ~$10^{-1}$ seconds (0.1)

NEURAL NETWORK REPRESENTATIONS
• In an artificial neural network, a neuron is a logistic unit:
 - it is fed input via input wires,
 - the logistic unit does the computation,
 - it sends output down the output wires.
• That logistic computation is just like our previous logistic regression hypothesis calculation.
Example: ALVINN, an acyclic (feed-forward) network for autonomous steering.
• Input: a 30x32 grid from a camera.
• Output: the direction in which the vehicle is steered.
• Training: observing the steering commands of a human driving the vehicle.
• 960 inputs feed 30 output units; the steering command recommended most strongly is the one used.

PERCEPTRONS
• One type of ANN system is based on a unit called a perceptron.
• A perceptron takes a vector of real-valued inputs, calculates a linear combination of these inputs, then outputs 1 if the result is greater than some threshold and -1 otherwise.
• More precisely, given inputs $x_1$ through $x_n$, the output $o(x_1, \ldots, x_n)$ computed by the perceptron is

$$o(x_1, \ldots, x_n) = \begin{cases} 1 & \text{if } w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_n x_n > 0 \\ -1 & \text{otherwise} \end{cases}$$

• where each $w_i$ is a real-valued constant, or weight, that determines the contribution of input $x_i$ to the perceptron output.
• We will sometimes write the perceptron function as

$$o(\vec{x}) = \mathrm{sgn}(\vec{w} \cdot \vec{x}), \quad \text{where } \mathrm{sgn}(y) = \begin{cases} 1 & \text{if } y > 0 \\ -1 & \text{otherwise} \end{cases}$$

• Learning a perceptron involves choosing values for the weights $w_0, \ldots, w_n$. Therefore, the space $H$ of candidate hypotheses considered in perceptron learning is the set of all possible real-valued weight vectors:

$$H = \{\vec{w} \mid \vec{w} \in \mathbb{R}^{n+1}\}$$
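To make the thresholded-output definition concrete, here is a minimal Python sketch (the helper name perceptron_output and the sample AND weights are illustrative choices, not from the slides):

```python
import numpy as np

def perceptron_output(w, x):
    """Threshold unit o(x) = sgn(w . x): returns 1 if w . x > 0, else -1.
    w includes the threshold weight w0; a constant x0 = 1 is prepended."""
    x = np.insert(np.asarray(x, dtype=float), 0, 1.0)
    return 1 if np.dot(w, x) > 0 else -1

# Illustrative weights implementing two-input AND with outputs in {-1, +1}:
w_and = np.array([-0.8, 0.5, 0.5])  # w0 = -0.8, w1 = w2 = 0.5
print([perceptron_output(w_and, x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [-1, -1, -1, 1]
```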

Representational Power of Perceptrons
• We can view the perceptron as representing a hyperplane decision surface in the n-dimensional space of instances (i.e., points).
• The perceptron outputs a 1 for instances lying on one side of the hyperplane and outputs a -1 for instances lying on the other side.
• The equation for this decision hyperplane is $\vec{w} \cdot \vec{x} = 0$.
• Of course, some sets of positive and negative examples cannot be separated by any hyperplane.
• Those that can be separated are called linearly separable sets of examples.
• A single perceptron can be used to represent many boolean functions.

Cont…
• AND and OR can be viewed as special cases of m-of-n functions: that is, functions where at least m of the n inputs to the perceptron must be true.
• The OR function corresponds to m = 1 and the AND function to m = n.
• Any m-of-n function is easily represented using a perceptron by setting all input weights to the same value (e.g., 0.5) and then setting the threshold weight accordingly, as in the sketch below.
• Perceptrons can represent all of the primitive boolean functions AND, OR, NAND (¬AND), and NOR (¬OR).
• The ability of perceptrons to represent AND, OR, NAND, and NOR is important because every boolean function can be represented by some network of interconnected units based on these primitives.
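A sketch of this m-of-n construction (the helper m_of_n_output and the midpoint choice of the threshold weight are illustrative assumptions, not from the slides):

```python
def m_of_n_output(x, m, w=0.5):
    """m-of-n function as a perceptron: all input weights equal w, and the
    threshold weight w0 is set so the unit fires iff at least m inputs are 1.
    Firing needs w0 + w * (number of true inputs) > 0, so any w0 in
    (-m*w, -(m-1)*w) works; the midpoint is taken here."""
    w0 = -w * (m - 0.5)
    return 1 if w0 + w * sum(x) > 0 else -1

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([m_of_n_output(x, m=1) for x in inputs])  # OR  (1-of-2) -> [-1, 1, 1, 1]
print([m_of_n_output(x, m=2) for x in inputs])  # AND (2-of-2) -> [-1, -1, -1, 1]
```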

The Perceptron Training Rule
• The learning problem is to determine a weight vector that causes the perceptron to produce the correct output for each of the given training examples.
• One way to learn an acceptable weight vector is to begin with random weights, then iteratively apply the perceptron to each training example, modifying the perceptron weights whenever it misclassifies an example.
• This process is repeated, iterating through the training examples as many times as needed, until the perceptron classifies all training examples correctly.
• At every step of feeding a training example, when the perceptron fails to produce the correct +1/-1, we revise each weight $w_i$ associated with input $x_i$ according to the following rule:

$$w_i \leftarrow w_i + \Delta w_i, \quad \text{where } \Delta w_i = \eta (t - o) x_i$$

Here, $t$ is the target output for the current training example, $o$ is the output generated by the perceptron, and $\eta$ is a positive constant called the learning rate (or step size). The role of the learning rate is to moderate the degree to which the weights are changed at each step.

The Perceptron Training Rule
• In order to train the perceptron $f(\vec{x}; \vec{w})$:

$$w_i \leftarrow w_i + \Delta w_i, \quad \text{where } \Delta w_i = \eta (t - o) x_i$$

Initialize the weights, $\vec{w}$, randomly.
For as many epochs as necessary:
    For each training example $\vec{x} \in D$:
        Compute $f(\vec{x}, \vec{w})$
        If $\vec{x}$ is misclassified:
            Modify the weight $w_i$ associated with every $x_i$ in $\vec{x}$.
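The pseudocode above maps directly onto a small Python routine. This is a minimal sketch, assuming targets in {-1, +1} and a bias weight learned via a constant x0 = 1 (both implementation choices, not from the slides):

```python
import numpy as np

def train_perceptron(examples, eta=0.5, epochs=100):
    """Perceptron training rule: w_i <- w_i + eta * (t - o) * x_i.
    `examples` is a list of (x, t) pairs with t in {-1, +1}."""
    n = len(examples[0][0])
    w = np.zeros(n + 1)                    # weights, including w0
    for _ in range(epochs):
        errors = 0
        for x, t in examples:
            x = np.insert(np.asarray(x, dtype=float), 0, 1.0)
            o = 1 if np.dot(w, x) > 0 else -1
            if o != t:                     # misclassified: revise every weight
                w += eta * (t - o) * x
                errors += 1
        if errors == 0:                    # all examples classified correctly
            break
    return w

# OR gate with targets in {-1, +1}
or_data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_perceptron(or_data))
```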

Problem
Problem 1: Assume $w_1 = 0.6$ and $w_2 = 0.6$, threshold = 1, and learning rate $\eta = 0.5$. Compute the OR gate using the perceptron training rule.

A B Y=A+B (Target)
0 0 0
0 1 1
1 0 1
1 1 1

Solution:
1. A = 0, B = 0 and target = 0
$\sum_i w_i x_i = w_1 x_1 + w_2 x_2 = 0.6 \cdot 0 + 0.6 \cdot 0 = 0$
This is not greater than the threshold value of 1, so the output = 0. Correct.
2. A = 0, B = 1 and target = 1
$\sum_i w_i x_i = 0.6 \cdot 0 + 0.6 \cdot 1 = 0.6$
This is not greater than the threshold value of 1, so the output = 0: a misclassification. Update the weights:
$w_i \leftarrow w_i + \eta (t - o) x_i$
$w_1 = 0.6 + 0.5(1 - 0) \cdot 0 = 0.6$
$w_2 = 0.6 + 0.5(1 - 0) \cdot 1 = 1.1$
Now $w_1 = 0.6$, $w_2 = 1.1$, threshold = 1, and learning rate $\eta = 0.5$.

Problem
• Now $w_1 = 0.6$, $w_2 = 1.1$, threshold = 1, and learning rate $\eta = 0.5$.
1. A = 0, B = 0 and target = 0
$\sum_i w_i x_i = w_1 x_1 + w_2 x_2 = 0.6 \cdot 0 + 1.1 \cdot 0 = 0$
This is not greater than the threshold value of 1, so the output = 0. Correct.
2. A = 0, B = 1 and target = 1
$\sum_i w_i x_i = 0.6 \cdot 0 + 1.1 \cdot 1 = 1.1$
This is greater than the threshold value of 1, so the output = 1. Correct.
3. A = 1, B = 0 and target = 1
$\sum_i w_i x_i = 0.6 \cdot 1 + 1.1 \cdot 0 = 0.6$
This is not greater than the threshold value of 1, so the output = 0: a misclassification. Update the weights:
$w_i \leftarrow w_i + \eta (t - o) x_i$
$w_1 = 0.6 + 0.5(1 - 0) \cdot 1 = 1.1$
$w_2 = 1.1 + 0.5(1 - 0) \cdot 0 = 1.1$

Problem
• Now $w_1 = 1.1$, $w_2 = 1.1$, threshold = 1, and learning rate $\eta = 0.5$.
1. A = 0, B = 0 and target = 0
$\sum_i w_i x_i = 1.1 \cdot 0 + 1.1 \cdot 0 = 0$: not greater than the threshold of 1, so the output = 0. Correct.
2. A = 0, B = 1 and target = 1
$\sum_i w_i x_i = 1.1 \cdot 0 + 1.1 \cdot 1 = 1.1$: greater than the threshold of 1, so the output = 1. Correct.
3. A = 1, B = 0 and target = 1
$\sum_i w_i x_i = 1.1 \cdot 1 + 1.1 \cdot 0 = 1.1$: greater than the threshold of 1, so the output = 1. Correct.
4. A = 1, B = 1 and target = 1
$\sum_i w_i x_i = 1.1 \cdot 1 + 1.1 \cdot 1 = 2.2$: greater than the threshold of 1, so the output = 1. Correct.
All four examples are now classified correctly, so training stops with $w_1 = w_2 = 1.1$.

[Figure: the trained OR perceptron, with inputs A and B, weights 1.1 and 1.1, threshold t = 1, and the output unit.]
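A quick check of the worked result, with the unit written directly in Python (variable names are ours):

```python
# Check of Problem 1's result: w1 = w2 = 1.1 with threshold 1 implements OR.
w1, w2, threshold = 1.1, 1.1, 1.0
for A in (0, 1):
    for B in (0, 1):
        output = 1 if w1 * A + w2 * B > threshold else 0
        print(A, B, "->", output)   # matches the Y = A + B column
```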

Problem
Problem 2: Assume $w_1 = 1.2$ and $w_2 = 0.6$, threshold = 1, and learning rate $\eta = 0.5$. Compute the AND gate using the perceptron training rule.

A B Y=A.B (Target)
0 0 0
0 1 0
1 0 0
1 1 1

Solution:
1. A = 0, B = 0 and target = 0
$\sum_i w_i x_i = w_1 x_1 + w_2 x_2 = 1.2 \cdot 0 + 0.6 \cdot 0 = 0$: not greater than the threshold of 1, so the output = 0. Correct.
2. A = 0, B = 1 and target = 0
$\sum_i w_i x_i = 1.2 \cdot 0 + 0.6 \cdot 1 = 0.6$: not greater than the threshold of 1, so the output = 0. Correct.
3. A = 1, B = 0 and target = 0
$\sum_i w_i x_i = 1.2 \cdot 1 + 0.6 \cdot 0 = 1.2$: greater than the threshold of 1, so the output = 1: a misclassification. Update the weights:
$w_i \leftarrow w_i + \eta (t - o) x_i$
$w_1 = 1.2 + 0.5(0 - 1) \cdot 1 = 0.7$
$w_2 = 0.6 + 0.5(0 - 1) \cdot 0 = 0.6$
Now $w_1 = 0.7$, $w_2 = 0.6$, threshold = 1, and learning rate $\eta = 0.5$.

Problems
For $w_1 = 0.7$, $w_2 = 0.6$, threshold = 1, and learning rate $\eta = 0.5$:
1. A = 0, B = 0 and target = 0
$\sum_i w_i x_i = 0.7 \cdot 0 + 0.6 \cdot 0 = 0$: not greater than the threshold of 1, so the output = 0. Correct.
2. A = 0, B = 1 and target = 0
$\sum_i w_i x_i = 0.7 \cdot 0 + 0.6 \cdot 1 = 0.6$: not greater than the threshold of 1, so the output = 0. Correct.
3. A = 1, B = 0 and target = 0
$\sum_i w_i x_i = 0.7 \cdot 1 + 0.6 \cdot 0 = 0.7$: not greater than the threshold of 1, so the output = 0. Correct.
4. A = 1, B = 1 and target = 1
$\sum_i w_i x_i = 0.7 \cdot 1 + 0.6 \cdot 1 = 1.3$: greater than the threshold of 1, so the output = 1. Correct.
All four examples are classified correctly, so training stops with $w_1 = 0.7$, $w_2 = 0.6$.

[Figure: the trained AND perceptron, with inputs A and B, weights 0.7 and 0.6, threshold t = 1, a weighted-sum unit, and the output.]

Problem
• Problem 3: Consider the XOR gate; apply the perceptron training rule with threshold = 1 and learning rate = 1.5.
• Solution: $y = x_1 \bar{x}_2 + \bar{x}_1 x_2$

x1 x2 y
0 0 0
0 1 1
1 0 1
1 1 0

• XOR is not linearly separable, so it is decomposed into two hidden units and an output unit: $Y = z_1 + z_2$, where
• $z_1 = x_1 \bar{x}_2$ (Function 1),
• $z_2 = \bar{x}_1 x_2$ (Function 2),
• $Y = z_1$ OR $z_2$ (Function 3).

[Figure: a two-layer network with inputs x1 and x2 feeding hidden units z1 and z2 through weights w11, w12, w21, w22, and hidden units feeding the output Y through weights v1 and v2.]

• First function: $z_1 = x_1 \bar{x}_2$

x1 x2 z1
0 0 0
0 1 0
1 0 1
1 1 0

• Assume the initial weights are $w_{11} = w_{21} = 1$.
• Threshold = 1 and learning rate = 1.5.

Problem
(In this problem a unit outputs 1 when its weighted sum reaches the threshold of 1, and 0 otherwise.)
(0,0): $z_{1,in} = \sum_i w_{i1} x_i = 1 \cdot 0 + 1 \cdot 0 = 0$ (output = 0). Correct.
(0,1): $z_{1,in} = 1 \cdot 0 + 1 \cdot 1 = 1$ (output = 1), but the target is 0: a misclassification. Update the weights:
$w_{ij} \leftarrow w_{ij} + \eta (t - o) x_i$
$w_{11} = 1 + 1.5(0 - 1) \cdot 0 = 1$
$w_{21} = 1 + 1.5(0 - 1) \cdot 1 = -0.5$
Now $w_{11} = 1$, $w_{21} = -0.5$, threshold = 1 and learning rate = 1.5:
(0,0): $z_{1,in} = 1 \cdot 0 + (-0.5) \cdot 0 = 0$ (output = 0)
(0,1): $z_{1,in} = 1 \cdot 0 + (-0.5) \cdot 1 = -0.5$ (output = 0)
(1,0): $z_{1,in} = 1 \cdot 1 + (-0.5) \cdot 0 = 1$ (output = 1)
(1,1): $z_{1,in} = 1 \cdot 1 + (-0.5) \cdot 1 = 0.5$ (output = 0)
All four rows now match $z_1$.
……………………………………………………………………………………………………………………………………
Second function: $z_2 = \bar{x}_1 x_2$

x1 x2 z2
0 0 0
0 1 1
1 0 0
1 1 0

• Assume the initial weights are $w_{12} = w_{22} = 1$.
• Threshold = 1 and learning rate = 1.5.
• (0,0): $z_{2,in} = \sum_i w_{i2} x_i = 1 \cdot 0 + 1 \cdot 0 = 0$ (output = 0). Correct.
• (0,1): $z_{2,in} = 1 \cdot 0 + 1 \cdot 1 = 1$ (output = 1). Correct.
• (1,0): $z_{2,in} = 1 \cdot 1 + 1 \cdot 0 = 1$ (output = 1), but the target is 0: a misclassification.

Problem
Update the weights:
$w_{ij} \leftarrow w_{ij} + \eta (t - o) x_i$
$w_{12} = 1 + 1.5(0 - 1) \cdot 1 = -0.5$
$w_{22} = 1 + 1.5(0 - 1) \cdot 0 = 1$
Now $w_{12} = -0.5$, $w_{22} = 1$, threshold = 1, learning rate = 1.5:
• (0,0): $z_{2,in} = -0.5 \cdot 0 + 1 \cdot 0 = 0$ (output = 0)
• (0,1): $z_{2,in} = -0.5 \cdot 0 + 1 \cdot 1 = 1$ (output = 1)
• (1,0): $z_{2,in} = -0.5 \cdot 1 + 1 \cdot 0 = -0.5$ (output = 0)
• (1,1): $z_{2,in} = -0.5 \cdot 1 + 1 \cdot 1 = 0.5$ (output = 0)
• Output unit: $Y = z_1$ OR $z_2$, with $y_{in} = z_1 v_1 + z_2 v_2$.
• Assume the initial weights are $v_1 = v_2 = 1$, threshold = 1, learning rate = 1.5:
• $(z_1, z_2) = (0,0)$: $y_{in} = 1 \cdot 0 + 1 \cdot 0 = 0$ (output = 0)
• (0,1): $y_{in} = 1 \cdot 0 + 1 \cdot 1 = 1$ (output = 1)
• (1,0): $y_{in} = 1 \cdot 1 + 1 \cdot 0 = 1$ (output = 1)
• (0,0): $y_{in} = 1 \cdot 0 + 1 \cdot 0 = 0$ (output = 0)
• ∴ The final weights are $w_{11} = 1$, $w_{12} = -0.5$, $w_{21} = -0.5$, $w_{22} = 1$, and $v_1 = v_2 = 1$.

XOR table:
x1 x2 z1 z2 y_in
0 0 0 0 0
0 1 0 1 1
1 0 1 0 1
1 1 0 0 0
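A quick check of the assembled two-layer network; a unit here fires when its weighted sum reaches the threshold of 1, matching the arithmetic above (variable names are ours):

```python
# Check of Problem 3's result: the learned two-layer network implements XOR.
w11, w21 = 1.0, -0.5   # weights into z1 = x1 AND (NOT x2)
w12, w22 = -0.5, 1.0   # weights into z2 = (NOT x1) AND x2
v1, v2 = 1.0, 1.0      # weights into y = z1 OR z2
step = lambda net: 1 if net >= 1.0 else 0   # fires at the threshold of 1
for x1 in (0, 1):
    for x2 in (0, 1):
        z1 = step(w11 * x1 + w21 * x2)
        z2 = step(w12 * x1 + w22 * x2)
        y = step(v1 * z1 + v2 * z2)
        print(x1, x2, "->", y)   # matches the XOR truth table
```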

Problem
• Problem 4: Consider the NAND gate; apply the perceptron training rule with $w_1 = 1.2$, $w_2 = 0.6$, threshold = -1, and learning rate = 1.5.
• Solution:

A B Y=NOT(A.B)
0 0 1
0 1 1
1 0 1
1 1 0

Problem
• Problem 5: Consider the NOR gate; apply the perceptron training rule with $w_1 = 0.6$, $w_2 = 1$, threshold = -0.5, and learning rate = 1.5.
• Solution:

A B Y=NOT(A+B)
0 0 1
0 1 0
1 0 0
1 1 0

Problem
• Problem 6: Compute the AND gate using a single perceptron (with bias) and the training rule.
• Solution: the unit outputs

$$Y = \begin{cases} 1 & \text{if } \vec{w} \cdot \vec{x} + b > 0 \\ 0 & \text{if } \vec{w} \cdot \vec{x} + b \le 0 \end{cases}$$

A B Y=A.B
0 0 0
0 1 0
1 0 0
1 1 1

• Assume $w_1 = 1$, $w_2 = 1$ and bias $b = -1$.
• Weighted sum: $y = w_1 x_1 + w_2 x_2 + b$.
• If $x_1 = 0, x_2 = 0$: $0 + 0 - 1 = -1 \le 0$, so $Y = 0$.
• If $x_1 = 0, x_2 = 1$: $0 + 1 - 1 = 0 \le 0$, so $Y = 0$.
• If $x_1 = 1, x_2 = 0$: $1 + 0 - 1 = 0 \le 0$, so $Y = 0$.
• If $x_1 = 1, x_2 = 1$: $1 + 1 - 1 = 1 > 0$, so $Y = 1$.
All four outputs match the AND targets, so no weight change is needed.

[Figure: single perceptron with inputs x1 and x2, bias b, and output Y = A.B (AND).]

Problems
• Problem 7: Compute the OR gate using a single perceptron (with bias) and the training rule.
• Solution:

$$Y = \begin{cases} 1 & \text{if } \vec{w} \cdot \vec{x} + b > 0 \\ 0 & \text{if } \vec{w} \cdot \vec{x} + b \le 0 \end{cases}$$

A B Y=A+B
0 0 0
0 1 1
1 0 1
1 1 1

• Assume $w_1 = 1$, $w_2 = 1$ and bias $b = -1$.
• Weighted sum: $y = w_1 x_1 + w_2 x_2 + b$.
• If $x_1 = 0, x_2 = 0$: $0 + 0 - 1 = -1$, so $Y = 0$. Correct.
• If $x_1 = 0, x_2 = 1$: $0 + 1 - 1 = 0$, so $Y = 0$. But the target = 1: a misclassification, so let us change the weights to $w_1 = 1$, $w_2 = 2$.
Then, with $y = w_1 x_1 + w_2 x_2 + b$ and $w_1 = 1$, $w_2 = 2$, $b = -1$:
(0,0): $y = 0 + 0 - 1 = -1$, so $Y = 0$. Correct.
(0,1): $y = 1 \cdot 0 + 2 \cdot 1 - 1 = 1$, so $Y = 1$. Correct.
(1,0): $y = 1 \cdot 1 + 2 \cdot 0 - 1 = 0$, so $Y = 0$. But the target = 1: a misclassification, so let us change the weights to $w_1 = 2$ and $w_2 = 2$:
(0,0): $y = 0 + 0 - 1 = -1$, so $Y = 0$
(0,1): $y = 2 \cdot 0 + 2 \cdot 1 - 1 = 1$, so $Y = 1$
(1,0): $y = 2 \cdot 1 + 2 \cdot 0 - 1 = 1$, so $Y = 1$
(1,1): $y = 2 \cdot 1 + 2 \cdot 1 - 1 = 3$, so $Y = 1$
All outputs now match the OR targets.

[Figure: single perceptron with inputs x1 and x2 (weights 2 and 2), bias b = -1, and output Y = A+B (OR).]
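A quick check of Problems 6 and 7 in the bias form of the unit (the helper name gate is illustrative):

```python
# Y = 1 iff w1*x1 + w2*x2 + b > 0, with the weights found above.
def gate(w1, w2, b):
    return [(x1, x2, int(w1 * x1 + w2 * x2 + b > 0))
            for x1 in (0, 1) for x2 in (0, 1)]

print(gate(1, 1, -1))   # AND: fires only on (1, 1)
print(gate(2, 2, -1))   # OR: fires on everything except (0, 0)
```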

Gradient Descent and the Delta Rule
• Gradient descent is important because it can serve as the basis for learning algorithms that must search through large hypothesis spaces.
• The delta training rule is best understood by considering the task of training an unthresholded perceptron; that is, a linear unit for which the output o is given by

$$o = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_n x_n, \quad \text{i.e., } o(\vec{x}) = \vec{w} \cdot \vec{x}$$

Thus, a linear unit corresponds to the first stage of a perceptron, without the threshold.
Although there are many ways to define this error, one common measure that will turn out to be especially convenient is

$$E(\vec{w}) = \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2$$

where $D$ is the set of training examples, $t_d$ is the target output for training example $d$, and $o_d$ is the output of the linear unit for training example $d$.
Under gradient descent with the delta rule, each weight is changed by

$$\Delta w_{ji} = \eta \, \delta_j \, x_{ji}$$
$$\delta_j = o_j (1 - o_j)(t_j - o_j) \quad \text{if } j \text{ is an output unit}$$
$$\delta_j = o_j (1 - o_j) \sum_k \delta_k w_{kj} \quad \text{if } j \text{ is a hidden unit}$$

where $\eta$ is a constant called the learning rate, $t_j$ is the correct teacher output for unit $j$, and $\delta_j$ is the error measure for unit $j$.
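A minimal sketch of batch gradient descent for the linear unit under the squared error $E(\vec{w})$ above, using the summed (batch) form of the delta rule; the helper name, data, and learning-rate choice are illustrative assumptions:

```python
import numpy as np

def train_linear_unit(X, t, eta=0.01, steps=2000):
    """Batch gradient descent for a linear unit o = w . x under
    E(w) = (1/2) * sum_d (t_d - o_d)^2. Since dE/dw_i = -sum_d (t_d - o_d) x_id,
    the update is Delta w_i = eta * sum_d (t_d - o_d) * x_id; eta must be
    small enough for the summed updates to converge."""
    X = np.column_stack([np.ones(len(X)), X])  # x0 = 1 so w0 is learned too
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        o = X @ w                  # linear-unit outputs for all examples
        w += eta * X.T @ (t - o)   # delta rule, summed over the training set D
    return w

# Toy data with targets t = 1 + 2*x1 - x2; gradient descent should
# recover w = (w0, w1, w2) close to (1, 2, -1)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
t = 1 + 2 * X[:, 0] - X[:, 1]
print(train_linear_unit(X, t).round(3))   # -> [ 1.  2. -1.]
```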

The Backpropagation Algorithm
• Backpropagation is an effective algorithm used to train artificial neural networks, especially feed-forward neural networks.
• It is an iterative algorithm that helps to minimize the cost function by determining which weights and biases should be adjusted, moving down the gradient of the error.
Let us consider networks with multiple output units rather than single units as before. We begin by redefining E to sum the errors over all of the network output units:

$$E(\vec{w}) = \frac{1}{2} \sum_{d \in D} \sum_{k \in outputs} (t_{kd} - o_{kd})^2$$

where outputs is the set of output units in the network, and $t_{kd}$ and $o_{kd}$ are the target and output values associated with the $k$-th output unit and training example $d$.

Case 1: Compute and derive the increment (Δ) for an output unit weight in the backpropagation algorithm ($w_{ji}$)
Derivation: for an output unit j with weighted input $net_j$ and output $o_j = \sigma(net_j)$,

$$\frac{\partial E}{\partial net_j} = \frac{\partial E}{\partial o_j} \cdot \frac{\partial o_j}{\partial net_j}$$

$$\frac{\partial E}{\partial o_j} = \frac{\partial}{\partial o_j} \, \frac{1}{2} \sum_{k \in outputs} (t_k - o_k)^2 = \frac{\partial}{\partial o_j} \, \frac{1}{2} (t_j - o_j)^2 = \frac{1}{2} \cdot 2 (t_j - o_j) \cdot \frac{\partial (t_j - o_j)}{\partial o_j} = -(t_j - o_j)$$

(the sum collapses to the $j$ term because $o_j$ appears in no other term). Since $o_j = \sigma(net_j)$, its derivative is $\frac{\partial o_j}{\partial net_j} = o_j (1 - o_j)$, so

$$\frac{\partial E}{\partial net_j} = -(t_j - o_j) \, o_j (1 - o_j) = -o_j (1 - o_j)(t_j - o_j)$$

And

$$\delta_j = -\frac{\partial E}{\partial net_j} = o_j (1 - o_j)(t_j - o_j)$$

$$\Delta w_{ji} = \eta \, \delta_j \, x_{ji} = \eta \, o_j (1 - o_j)(t_j - o_j) \, x_{ji}$$

Case 2: Compute and derive the increment (Δ) for a hidden unit weight in the backpropagation algorithm ($w_{hi}$)
Derivation: proceeding as in Case 1, but noting that a hidden unit h influences E only through the output units downstream of it, the chain rule sums over those units and gives

$$\delta_h = o_h (1 - o_h) \sum_{k \in outputs} w_{kh} \, \delta_k, \quad \Delta w_{hi} = \eta \, \delta_h \, x_{hi}$$

(this is the rule applied in step 3 of the algorithm on the next slide).

The Backpropagation Algorithm
BACKPROPAGATION(training_examples, η, $n_{in}$, $n_{out}$, $n_{hidden}$)
Each training example is a pair of the form $(\vec{x}, \vec{t})$, where $\vec{x}$ is the vector of network input values and $\vec{t}$ is the vector of target network output values.
η is the learning rate (e.g., 0.05); $n_{in}$ is the number of network inputs, $n_{hidden}$ the number of units in the hidden layer, and $n_{out}$ the number of output units.
The input from unit i into unit j is denoted $x_{ji}$, and the weight from unit i to unit j is denoted $w_{ji}$.
• Create a feed-forward network with $n_{in}$ inputs, $n_{hidden}$ hidden units, and $n_{out}$ output units, and initialize the weights.
• Until the termination condition is met, do
  • For each $(\vec{x}, \vec{t})$ in training_examples, do
    Propagate the input forward through the network:
    1. Input the instance $\vec{x}$ to the network and compute the output $o_u$ of every unit u in the network:
       $$a_j = \sum_i w_{ji} \, x_{ji}, \quad o_j = \sigma(a_j) = \frac{1}{1 + e^{-a_j}}$$
    Propagate the errors backward through the network:
    2. For each network output unit k, calculate its error term $\delta_k$:
       $$\delta_k \leftarrow o_k (1 - o_k)(t_k - o_k)$$
    3. For each hidden unit h, calculate its error term $\delta_h$:
       $$\delta_h \leftarrow o_h (1 - o_h) \sum_{k \in outputs} w_{kh} \, \delta_k$$
    4. Update each network weight $w_{ji}$:
       $$w_{ji} \leftarrow w_{ji} + \Delta w_{ji}, \quad \text{where } \Delta w_{ji} = \eta \, \delta_j \, x_{ji}$$
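A minimal NumPy sketch of this algorithm for one hidden layer of sigmoid units, with stochastic updates as in the pseudocode; folding the biases in as weights from a constant +1 input is an implementation choice, and convergence on the XOR example depends on the random initialization:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def backpropagation(examples, n_in, n_hidden, n_out, eta=0.05, epochs=1000, seed=0):
    """Stochastic-gradient backpropagation for a one-hidden-layer
    sigmoid network, following the pseudocode above."""
    rng = np.random.default_rng(seed)
    W_h = rng.uniform(-0.5, 0.5, (n_hidden, n_in + 1))   # hidden weights (+bias)
    W_o = rng.uniform(-0.5, 0.5, (n_out, n_hidden + 1))  # output weights (+bias)
    for _ in range(epochs):
        for x, t in examples:
            x = np.append(np.asarray(x, float), 1.0)     # constant bias input
            t = np.asarray(t, float)
            # 1. Propagate the input forward
            o_h = np.append(sigmoid(W_h @ x), 1.0)
            o_k = sigmoid(W_o @ o_h)
            # 2. Output error terms: delta_k = o_k (1 - o_k)(t_k - o_k)
            d_k = o_k * (1 - o_k) * (t - o_k)
            # 3. Hidden error terms: delta_h = o_h (1 - o_h) sum_k w_kh delta_k
            d_h = (o_h * (1 - o_h) * (W_o.T @ d_k))[:-1]  # drop the bias slot
            # 4. Update each weight: w_ji += eta * delta_j * x_ji
            W_o += eta * np.outer(d_k, o_h)
            W_h += eta * np.outer(d_h, x)
    return W_h, W_o

# Example: XOR, which a single perceptron cannot represent; the outputs
# typically move toward the targets, though backprop can stall in local minima.
xor = [((0, 0), (0,)), ((0, 1), (1,)), ((1, 0), (1,)), ((1, 1), (0,))]
W_h, W_o = backpropagation(xor, 2, 2, 1, eta=0.5, epochs=10000)
for x, t in xor:
    o = sigmoid(W_o @ np.append(sigmoid(W_h @ np.append(x, 1.0)), 1.0))
    print(x, "->", o.round(2))
```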

Problems
Problem 1: Assume that the neurons have a sigmoid activation function. Perform a forward pass and a backward pass on the network below, assuming the desired (target) output y is 0.5 and the learning rate is 1. Then perform another forward pass.

[Figure: a 2-2-1 network with inputs x1 = 0.35 and x2 = 0.9; hidden units y3 and y4; output unit y5; and weights w13 = 0.1, w14 = 0.4, w23 = 0.8, w24 = 0.6, w35 = 0.3, w45 = 0.9.]

Solution: Forward pass: compute the outputs of $y_3$, $y_4$ and $y_5$ using

$$a_j = \sum_i w_{ij} \, x_i, \quad y_j = \sigma(a_j) = \frac{1}{1 + e^{-a_j}}$$

$a_1 = w_{13} x_1 + w_{23} x_2 = 0.1 \cdot 0.35 + 0.8 \cdot 0.9 = 0.755$
$y_3 = \sigma(a_1) = \frac{1}{1 + e^{-0.755}} = 0.68$
$a_2 = w_{14} x_1 + w_{24} x_2 = 0.4 \cdot 0.35 + 0.6 \cdot 0.9 = 0.68$
$y_4 = \sigma(a_2) = \frac{1}{1 + e^{-0.68}} = 0.6637$
$a_3 = w_{35} y_3 + w_{45} y_4 = 0.3 \cdot 0.68 + 0.9 \cdot 0.6637 = 0.801$

Conti..
$y_5 = \sigma(a_3) = \frac{1}{1 + e^{-0.801}} = 0.69$ (network output)
∴ Error $= y_{target} - y_5 = 0.5 - 0.69 = -0.19$
…………………………………………………………………………………………………………………………………………
Each weight is changed by

$$\Delta w_{ji} = \eta \, \delta_j \, x_{ji}$$
$$\delta_j = o_j (1 - o_j)(t_j - o_j) \quad \text{if } j \text{ is an output unit}$$
$$\delta_j = o_j (1 - o_j) \sum_k \delta_k w_{kj} \quad \text{if } j \text{ is a hidden unit}$$

where $\eta$ is a constant called the learning rate, $t_j$ is the correct teacher output for unit $j$, and $\delta_j$ is the error measure for unit $j$.
Backward pass: compute $\delta_3$, $\delta_4$ and $\delta_5$.
For the output unit:
$\delta_5 = y_5 (1 - y_5)(y_{target} - y_5) = 0.69 \cdot (1 - 0.69) \cdot (0.5 - 0.69) = -0.0406$
For the hidden units:
$\delta_3 = y_3 (1 - y_3)(w_{35} \, \delta_5) = 0.68 \cdot (1 - 0.68) \cdot (0.3 \cdot (-0.0406)) = -0.00265$
$\delta_4 = y_4 (1 - y_4)(w_{45} \, \delta_5) = 0.6637 \cdot (1 - 0.6637) \cdot (0.9 \cdot (-0.0406)) = -0.0082$

Conti..
Compute the new weights:
$\Delta w_{ji} = \eta \, \delta_j \, x_{ji}$
$\Delta w_{45} = \eta \, \delta_5 \, y_4 = 1 \cdot (-0.0406) \cdot 0.6637 = -0.0269$
$w_{45}(\text{new}) = \Delta w_{45} + w_{45}(\text{old}) = -0.0269 + 0.9 = 0.8731$
$\Delta w_{14} = \eta \, \delta_4 \, x_1 = 1 \cdot (-0.0082) \cdot 0.35 = -0.00287$
$w_{14}(\text{new}) = \Delta w_{14} + w_{14}(\text{old}) = -0.00287 + 0.4 = 0.3971$
Similarly, update all the other weights ($x_i$ is the input carried by the weight: an input value for input-to-hidden weights, a hidden output for hidden-to-output weights):

i j w_ij  δ_j      x_i     η  Updated w_ij
1 3 0.1   -0.00265 0.35    1  0.0991
2 3 0.8   -0.00265 0.9     1  0.7976
1 4 0.4   -0.0082  0.35    1  0.3971
2 4 0.6   -0.0082  0.9     1  0.5926
3 5 0.3   -0.0406  0.68    1  0.2724
4 5 0.9   -0.0406  0.6637  1  0.8731

Conti..
Updated network, 2nd-time forward pass: compute the outputs of $y_3$, $y_4$ and $y_5$ using

$$a_j = \sum_i w_{ij} \, x_i, \quad y_j = \sigma(a_j) = \frac{1}{1 + e^{-a_j}}$$

$a_1 = w_{13} x_1 + w_{23} x_2 = 0.0991 \cdot 0.35 + 0.7976 \cdot 0.9 = 0.7525$
$y_3 = \sigma(a_1) = \frac{1}{1 + e^{-0.7525}} = 0.6797$
$a_2 = w_{14} x_1 + w_{24} x_2 = 0.3971 \cdot 0.35 + 0.5926 \cdot 0.9 = 0.6723$
$y_4 = \sigma(a_2) = \frac{1}{1 + e^{-0.6723}} = 0.6620$
$a_3 = w_{35} y_3 + w_{45} y_4 = 0.2724 \cdot 0.6797 + 0.8731 \cdot 0.6620 = 0.7631$
$y_5 = \sigma(a_3) = \frac{1}{1 + e^{-0.7631}} = 0.6820$ (network output)
Error $= y_{target} - y_5 = 0.5 - 0.6820 = -0.182$
The error has decreased in magnitude from -0.19 to -0.182, so the update moved the output toward the target.
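A short script reproducing the numbers of Problem 1, useful for checking the hand computation (variable names are ours):

```python
import numpy as np

# 2-2-1 sigmoid network of Problem 1, eta = 1, target = 0.5
sig = lambda a: 1 / (1 + np.exp(-a))
x1, x2, target, eta = 0.35, 0.9, 0.5, 1.0
w13, w14, w23, w24, w35, w45 = 0.1, 0.4, 0.8, 0.6, 0.3, 0.9

for step in (1, 2):                       # two forward passes
    y3 = sig(w13 * x1 + w23 * x2)
    y4 = sig(w14 * x1 + w24 * x2)
    y5 = sig(w35 * y3 + w45 * y4)
    print(f"pass {step}: y5 = {y5:.4f}, error = {target - y5:.4f}")
    d5 = y5 * (1 - y5) * (target - y5)    # output error term
    d3 = y3 * (1 - y3) * (w35 * d5)       # hidden error terms
    d4 = y4 * (1 - y4) * (w45 * d5)
    # weight updates: w += eta * delta_downstream * input_along_the_weight
    w35 += eta * d5 * y3; w45 += eta * d5 * y4
    w13 += eta * d3 * x1; w23 += eta * d3 * x2
    w14 += eta * d4 * x1; w24 += eta * d4 * x2
# expected: pass 1 -> y5 ~ 0.69, error ~ -0.19; pass 2 -> y5 ~ 0.682
```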

Conti..
Problem 2: Assume that the neurons have a sigmoid activation function. Perform a forward pass and a backward pass on the network below, assuming the desired (target) output y is 1 and the learning rate is 0.9. Then perform another forward pass.

[Figure: a 3-2-1 network with inputs x1 = 1, x2 = 0, x3 = 1; hidden units 4 and 5 with biases θ4 = -0.4 and θ5 = 0.2; output unit 6 with bias θ6 = 0.1; and weights w14 = 0.2, w15 = -0.3, w24 = 0.4, w25 = 0.1, w34 = -0.5, w35 = 0.2, w46 = -0.3, w56 = -0.2.]

Solution:
Forward pass: compute the outputs of units 4, 5 and 6.

Conti..
$$a_j = \sum_i w_{ij} \, x_i + \theta_j, \quad y_j = \sigma(a_j) = \frac{1}{1 + e^{-a_j}}$$

$a_4 = w_{14} x_1 + w_{24} x_2 + w_{34} x_3 + \theta_4 = (0.2 \cdot 1) + (0.4 \cdot 0) + (-0.5 \cdot 1) + (-0.4) = -0.7$
$y_4 = \sigma(a_4) = \frac{1}{1 + e^{0.7}} = 0.332$
$a_5 = w_{15} x_1 + w_{25} x_2 + w_{35} x_3 + \theta_5 = (-0.3 \cdot 1) + (0.1 \cdot 0) + (0.2 \cdot 1) + 0.2 = 0.1$
$y_5 = \sigma(a_5) = \frac{1}{1 + e^{-0.1}} = 0.525$
$a_6 = w_{46} y_4 + w_{56} y_5 + \theta_6 = (-0.3 \cdot 0.332) + (-0.2 \cdot 0.525) + 0.1 = -0.105$
$y_6 = \sigma(a_6) = \frac{1}{1 + e^{0.105}} = 0.474$
Error $= y_{target} - y_6 = 1 - 0.474 = 0.526$
.................................................................................................................................................
Backward pass:
For the output unit:
$\delta_6 = y_6 (1 - y_6)(y_{target} - y_6) = 0.474 \cdot (1 - 0.474) \cdot (1 - 0.474) = 0.1311$
For the hidden units:
$\delta_5 = y_5 (1 - y_5)(w_{56} \, \delta_6) = 0.525 \cdot (1 - 0.525) \cdot (-0.2 \cdot 0.1311) = -0.0065$
$\delta_4 = y_4 (1 - y_4)(w_{46} \, \delta_6) = 0.332 \cdot (1 - 0.332) \cdot (-0.3 \cdot 0.1311) = -0.0087$

Conti..
Compute the new weights:
$\Delta w_{ij} = \eta \, \delta_j \, o_i$
$\Delta w_{46} = \eta \, \delta_6 \, y_4 = 0.9 \cdot 0.1311 \cdot 0.332 = 0.03917$
$w_{46}(\text{new}) = \Delta w_{46} + w_{46}(\text{old}) = 0.03917 + (-0.3) = -0.261$
$\Delta w_{14} = \eta \, \delta_4 \, x_1 = 0.9 \cdot (-0.0087) \cdot 1 = -0.0078$
$w_{14}(\text{new}) = \Delta w_{14} + w_{14}(\text{old}) = -0.0078 + 0.2 = 0.192$
Similarly, update all the other weights:

i j w_ij  δ_j      x_i (or o_i)  η    Updated w_ij
4 6 -0.3  0.1311   0.332         0.9  -0.261
5 6 -0.2  0.1311   0.525         0.9  -0.138
1 4  0.2  -0.0087  1             0.9   0.192
1 5 -0.3  -0.0065  1             0.9  -0.306
2 4  0.4  -0.0087  0             0.9   0.4
2 5  0.1  -0.0065  0             0.9   0.1
3 4 -0.5  -0.0087  1             0.9  -0.508
3 5  0.2  -0.0065  1             0.9   0.194

The biases are updated in the same way, using a constant input of 1: $\theta_4 = -0.4 + 0.9 \cdot (-0.0087) = -0.408$, $\theta_5 = 0.2 + 0.9 \cdot (-0.0065) = 0.194$, $\theta_6 = 0.1 + 0.9 \cdot 0.1311 = 0.218$. These updated biases are used in the second forward pass below.

Conti..
Updated network, 2nd-time forward pass: compute the outputs of units 4, 5 and 6:

$a_4 = w_{14} x_1 + w_{24} x_2 + w_{34} x_3 + \theta_4 = (0.192 \cdot 1) + (0.4 \cdot 0) + (-0.508 \cdot 1) + (-0.408) = -0.724$
$y_4 = \sigma(a_4) = \frac{1}{1 + e^{0.724}} = 0.327$
$a_5 = w_{15} x_1 + w_{25} x_2 + w_{35} x_3 + \theta_5 = (-0.306 \cdot 1) + (0.1 \cdot 0) + (0.194 \cdot 1) + 0.194 = 0.082$
$y_5 = \sigma(a_5) = \frac{1}{1 + e^{-0.082}} = 0.520$
$a_6 = w_{46} y_4 + w_{56} y_5 + \theta_6 = (-0.261 \cdot 0.327) + (-0.138 \cdot 0.520) + 0.218 = 0.061$
$y_6 = \sigma(a_6) = \frac{1}{1 + e^{-0.061}} = 0.515$ (network output)
Error $= y_{target} - y_6 = 1 - 0.515 = 0.485$
The error has decreased from 0.526 to 0.485, so the network output has moved toward the target.
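A short script reproducing Problem 2, including the bias updates $\theta_j \leftarrow \theta_j + \eta \, \delta_j$ that the weight table leaves implicit (variable names are ours):

```python
import numpy as np

# 3-2-1 sigmoid network of Problem 2 with biases, eta = 0.9, target = 1
sig = lambda a: 1 / (1 + np.exp(-a))
x = np.array([1.0, 0.0, 1.0])
w4 = np.array([0.2, 0.4, -0.5]); t4 = -0.4    # weights and bias into unit 4
w5 = np.array([-0.3, 0.1, 0.2]); t5 = 0.2     # weights and bias into unit 5
w6 = np.array([-0.3, -0.2]); t6 = 0.1         # weights and bias into unit 6
eta, target = 0.9, 1.0

for step in (1, 2):
    y4, y5 = sig(w4 @ x + t4), sig(w5 @ x + t5)
    y6 = sig(w6 @ np.array([y4, y5]) + t6)
    print(f"pass {step}: y6 = {y6:.3f}, error = {target - y6:.3f}")
    d6 = y6 * (1 - y6) * (target - y6)        # output error term
    d4 = y4 * (1 - y4) * (w6[0] * d6)         # hidden error terms
    d5 = y5 * (1 - y5) * (w6[1] * d6)
    w6 += eta * d6 * np.array([y4, y5]); t6 += eta * d6
    w4 += eta * d4 * x; t4 += eta * d4
    w5 += eta * d5 * x; t5 += eta * d5
# expected: pass 1 -> y6 ~ 0.474; pass 2 -> y6 ~ 0.515
```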