How neural networks work. Demonstration by Brandon Rohrer


About This Presentation

Machine Learning, Neural Networks


Slide Content

How neural networks work. Brandon Rohrer, e2eml.school

A four-pixel camera

Categorize images: solid, vertical, diagonal, horizontal

Simple rules can’t do it: no simple pixel rule separates solid, vertical, diagonal, and horizontal images.

Input neurons

Pixel brightness is represented on a scale from -1.0 to +1.0.

Input vector: [.75, -.75, 0.0, .50]
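
As a minimal sketch (the pixel ordering here is an assumption; the slides show it only as a picture):

```python
# The four pixel brightnesses, each on the slide's -1.0 to +1.0 scale,
# collected into the network's input vector.
input_vector = [0.75, -0.75, 0.0, 0.50]
```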

Receptive fields

A neuron. [Diagram: inputs converging on a single summation node.]

Sum all the inputs: .75 + (-.75) + 0.00 + .50 = .50

Weights: start with every weight at 1.0, so the weighted sum is .75 × 1.0 + (-.75) × 1.0 + 0.0 × 1.0 + .50 × 1.0 = .50

Weights: replace the 1.0s with the weights -.5, .8, 0.0, and -.2; the weighted sum becomes .75 × (-.5) + (-.75) × .8 + 0.0 × 0.0 + .50 × (-.2) = -1.075
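
In code, the weighted sum is a multiply-and-add over the input and weight vectors. A minimal sketch, pairing the slide's weights with the inputs so the total matches the -1.075 shown:

```python
inputs = [0.75, -0.75, 0.0, 0.50]
weights = [-0.5, 0.8, 0.0, -0.2]  # pairing inferred from the slide's total

# Multiply each input by its weight, then add everything up.
weighted_sum = sum(x * w for x, w in zip(inputs, weights))
print(weighted_sum)  # approximately -1.075
```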

Squash the result: pass the weighted sum, -1.075, through a squashing function.

Hyperbolic tangent (tanh) squashing function. Your number goes in here (the horizontal axis) and the squashed version comes out here (the vertical axis). No matter what you start with, the answer stays between -1 and 1. [Plot: the tanh curve.]

Squash the result: squashing the weighted sum -1.075 gives an output of -.746.

Weighted sum-and-squash neuron: the inputs [.75, -.75, 0.0, .50] produce an output of -.746.
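
Putting both steps together gives the whole neuron. A minimal sketch, reusing the (assumed) weight pairing from earlier:

```python
import math

def neuron(inputs, weights):
    """Weighted sum-and-squash: weight each input, add, pass through tanh."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return math.tanh(total)

# The slide's input vector with the weights assumed above.
print(neuron([0.75, -0.75, 0.0, 0.50], [-0.5, 0.8, 0.0, -0.2]))
```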

Make lots of neurons, identical except for their weights. To keep the picture clear, weights will be either 1.0 (white), -1.0 (black), or 0.0 (missing).

Receptive fields get more complex

Repeat for additional layers

Receptive fields get still more complex
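
A hedged sketch of the stacking: each layer is just a set of neurons, identical except for weights, and the outputs of one layer become the inputs of the next. The specific weight patterns below are illustrative, not read from the slides:

```python
import math

def layer(inputs, weight_rows):
    """One layer: many neurons reading the same inputs,
    each neuron with its own weight vector."""
    return [math.tanh(sum(x * w for x, w in zip(inputs, row)))
            for row in weight_rows]

# Weights limited to 1.0, -1.0, or 0.0, as on the slides.
first = layer([0.75, -0.75, 0.0, 0.50],
              [[1.0, -1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, -1.0]])
second = layer(first, [[1.0, 1.0],
                       [1.0, -1.0]])
print(second)
```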

Repeat with a variation

Rectified linear units (ReLUs): if your number is positive, keep it. Otherwise you get a zero. [Plot: the ReLU function.]
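
The same rule as a one-line sketch in code:

```python
def relu(x):
    """Rectified linear unit: positive numbers pass through, the rest become zero."""
    return x if x > 0.0 else 0.0

print(relu(0.5), relu(-1.3))  # 0.5 0.0
```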


Add an output layer: one output neuron each for solid, vertical, diagonal, and horizontal.

Errors: compare the network's answers with the truth for each category.

              solid  vertical  diagonal  horizontal
truth          1.      0.        0.        0.
answer        -.75    -.25       .75       .5
error         1.75     .25       .75       .5

total error: 3.25
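
The arithmetic on these slides, sketched in code (the per-category error as an absolute difference is inferred from the values shown):

```python
truth = [1.0, 0.0, 0.0, 0.0]        # solid, vertical, diagonal, horizontal
answer = [-0.75, -0.25, 0.75, 0.5]  # the network's output

# Per-category error as the absolute difference, then the total.
errors = [abs(t - a) for t, a in zip(truth, answer)]
print(errors)       # [1.75, 0.25, 0.75, 0.5]
print(sum(errors))  # 3.25
```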

Learn all the weights: gradient descent. Plot the error as a function of one weight. Starting from the original weight, try a lower weight and a higher weight, and move in the direction that reduces the error. [Plot: error versus weight, with the original, lower, and higher weights marked.]

Numerically calculating the gradient is very expensive
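
To see why it is expensive: estimating the gradient numerically means nudging each weight and re-running the whole network once per weight. A sketch, where error_of stands in for a full forward pass plus error calculation (a hypothetical function, not from the slides):

```python
def numerical_gradient(error_of, weights, eps=1e-4):
    """Estimate the slope of the error with respect to every weight by
    nudging each weight slightly and re-evaluating. One extra full
    network evaluation per weight is what makes this expensive."""
    base = error_of(weights)
    grads = []
    for i in range(len(weights)):
        nudged = list(weights)
        nudged[i] += eps
        grads.append((error_of(nudged) - base) / eps)
    return grads

# Example with a toy error function: gradient of sum(w^2) is 2w.
print(numerical_gradient(lambda ws: sum(w ** 2 for w in ws), [-1.0, 0.5]))
```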

Calculate the gradient (slope) directly. Starting at the original weight, change the weight by +1 and move along the error curve; the error changes by -2.

slope = change in error / change in weight = ∆error / ∆weight = d(error)/d(weight) = ∂e/∂w = -2 / +1 = -2

You have to know your error function. For example, if error = weight^2, then ∂e/∂w = 2 * weight, so at weight = -1 the slope is 2 * (-1) = -2.
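The same worked example in code, plus one gradient descent step (the learning rate is an assumed value; the slides do not give one):

```python
def error(w):
    return w ** 2      # the slide's example error function

def slope(w):
    return 2 * w       # its derivative, d(error)/d(weight)

w = -1.0
print(slope(w))        # -2, matching the slide

# Gradient descent: step the weight opposite the slope.
learning_rate = 0.1    # assumed value
w -= learning_rate * slope(w)
print(w, error(w))     # -0.8 0.64: the error shrinks
```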

Chaining. An input x passes through two weights in series to produce the output e, with an intermediate value y in between: weight w1 connects x to y, and weight w2 connects y to e.

y = x * w1, so ∂y/∂w1 = x
e = y * w2, so ∂e/∂y = w2
e = x * w1 * w2, so ∂e/∂w1 = x * w2

Chaining the two derivatives together gives the same answer: ∂e/∂w1 = (∂y/∂w1) * (∂e/∂y) = x * w2.
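A numeric check of the chaining identity, with illustrative values for x, w1, and w2:

```python
x, w1, w2 = 0.5, -1.0, 2.0   # illustrative values, not from the slides

# Direct route: e = x * w1 * w2, so de/dw1 = x * w2.
direct = x * w2

# Chained route: de/dw1 = (dy/dw1) * (de/dy), with y = x * w1.
dy_dw1 = x        # derivative of y = x * w1 with respect to w1
de_dy = w2        # derivative of e = y * w2 with respect to y
chained = dy_dw1 * de_dy

print(direct, chained)  # 1.0 1.0: both routes agree
```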

Backpropagation. In a deep network a weight sits at the start of a long chain of values (a, b, c, ..., x, y, z) leading to the error, and the chain rule extends all the way through:

∂err/∂weight = (∂a/∂weight) * (∂b/∂a) * (∂c/∂b) * (∂d/∂c) * ... * (∂y/∂x) * (∂z/∂y) * (∂err/∂z)

Each factor is the slope of one step in the chain; multiplying them all together propagates the error's gradient backward to the weight.
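
Pulling everything together: a compact sketch of backpropagation training a single sum-and-squash neuron. The training data, learning rate, and squared-error choice are illustrative assumptions, not details from the slides:

```python
import math

def train(examples, weights, learning_rate=0.1, steps=1000):
    """Train one tanh neuron by backpropagation.
    Chain rule: d(err)/d(w_i) = x_i * (1 - out**2) * 2 * (out - target)."""
    for _ in range(steps):
        for inputs, target in examples:
            total = sum(x * w for x, w in zip(inputs, weights))
            out = math.tanh(total)
            d_out = 2 * (out - target)        # slope of the squared error
            d_total = d_out * (1 - out ** 2)  # slope through tanh
            for i, x in enumerate(inputs):
                weights[i] -= learning_rate * x * d_total
    return weights

# A toy target: +1 when the left pair of pixels is bright, -1 otherwise
# (pixel ordering assumed).
examples = [([1.0, 1.0, -1.0, -1.0], 1.0),
            ([-1.0, -1.0, 1.0, 1.0], -1.0),
            ([1.0, -1.0, 1.0, -1.0], -1.0)]
print(train(examples, [0.0, 0.0, 0.0, 0.0]))
```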