Course: Setting up your Machine Learning 2M2.pdf

KadimAbdelkarim · 31 slides · Oct 10, 2025

About This Presentation

Course on setting up your machine learning, version 2


Slide Content

Optimization Algorithms: Mini-batch gradient descent (deeplearning.ai, Andrew Ng)

Batch vs. mini-batch gradient descent
Vectorization allows you to efficiently compute on m examples.

Mini-batch gradient descent
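The per-iteration details of this slide were lost in extraction, so here is a minimal Python/NumPy sketch of one epoch of mini-batch gradient descent under the usual conventions of this course (X of shape (n_x, m), Y of shape (1, m)); compute_grads and the params dictionary are hypothetical placeholders, and the batch size of 64 is only an example.

    import numpy as np

    def minibatch_epoch(X, Y, params, compute_grads, alpha=0.01, batch_size=64, seed=0):
        """One epoch of mini-batch gradient descent.
        X: (n_x, m) inputs, Y: (1, m) labels, params: dict of weight arrays.
        compute_grads(X_t, Y_t, params) -> dict of gradients (hypothetical placeholder)."""
        m = X.shape[1]
        rng = np.random.default_rng(seed)
        perm = rng.permutation(m)                 # shuffle the m examples each epoch
        X_shuf, Y_shuf = X[:, perm], Y[:, perm]
        for t in range(0, m, batch_size):         # step through mini-batches X^{t}, Y^{t}
            X_t = X_shuf[:, t:t + batch_size]
            Y_t = Y_shuf[:, t:t + batch_size]
            grads = compute_grads(X_t, Y_t, params)
            for key in params:                    # one gradient step per mini-batch
                params[key] -= alpha * grads[key]
        return params

Each epoch shuffles the examples and then takes one parameter update per mini-batch, instead of one update per full pass over all m examples.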

Optimization Algorithms: Understanding mini-batch gradient descent (deeplearning.ai)
Training with mini-batch gradient descent
[Figure: cost vs. # iterations for batch gradient descent; cost vs. mini-batch # (t) for mini-batch gradient descent]

Choosing your mini-batch size

Optimization Algorithms: Understanding exponentially weighted averages (deeplearning.ai)

Exponentially weighted averages
[Figure: daily temperature data (x-axis: days, y-axis: temperature)]
v_t = \beta v_{t-1} + (1 - \beta)\theta_t

Exponentially weighted averages
v_{100} = 0.9 v_{99} + 0.1\theta_{100}
v_{99} = 0.9 v_{98} + 0.1\theta_{99}
v_{98} = 0.9 v_{97} + 0.1\theta_{98}

v_t = \beta v_{t-1} + (1 - \beta)\theta_t
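Unrolling this recurrence with \beta = 0.9 (a worked expansion, not part of the original slide text) shows that v_{100} is a weighted sum of past temperatures with exponentially decaying weights:

    v_{100} = 0.1\,\theta_{100} + 0.9\,v_{99}
            = 0.1\,\theta_{100} + 0.1(0.9)\,\theta_{99} + 0.1(0.9)^2\,\theta_{98} + 0.1(0.9)^3\,\theta_{97} + \dots

Since (0.9)^{10} \approx 1/e, v_t behaves roughly like an average over the last 1/(1-\beta) values, about the last 10 days for \beta = 0.9.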

Implementing exponentially weighted averages
v_0 = 0
v_1 = \beta v_0 + (1 - \beta)\theta_1
v_2 = \beta v_1 + (1 - \beta)\theta_2
v_3 = \beta v_2 + (1 - \beta)\theta_3
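The same updates, written as the single running variable the slide's sequence implies; a minimal sketch where thetas is a hypothetical sequence of readings (e.g. daily temperatures):

    def ewma(thetas, beta=0.9):
        """Running exponentially weighted average: v = beta*v + (1-beta)*theta."""
        v = 0.0                                   # v_0 = 0
        averages = []
        for theta in thetas:
            v = beta * v + (1 - beta) * theta     # overwrite v in place, as in the slide's recursion
            averages.append(v)
        return averages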

Optimization Algorithms: Bias correction in exponentially weighted averages (deeplearning.ai)

Bias correction
[Figure: daily temperature data (x-axis: days, y-axis: temperature)]
v_t = \beta v_{t-1} + (1 - \beta)\theta_t

Optimization Algorithms: Gradient descent with momentum (deeplearning.ai)
Gradient descent example

Implementation details
On iteration t:
    Compute dW, db on the current mini-batch
    v_{dW} = \beta v_{dW} + (1 - \beta) dW
    v_{db} = \beta v_{db} + (1 - \beta) db
    W := W - \alpha v_{dW},  b := b - \alpha v_{db}
Hyperparameters: \alpha, \beta;  \beta = 0.9
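A direct transcription of these updates into Python; dW and db stand for gradients from backprop on the current mini-batch, and v_dW, v_db should be initialized to zeros of the same shapes as W and b (a sketch, not the course's reference code):

    def momentum_step(W, b, dW, db, v_dW, v_db, alpha=0.01, beta=0.9):
        """One step of gradient descent with momentum."""
        v_dW = beta * v_dW + (1 - beta) * dW      # EWMA of dW
        v_db = beta * v_db + (1 - beta) * db      # EWMA of db
        W = W - alpha * v_dW                      # move in the smoothed gradient direction
        b = b - alpha * v_db
        return W, b, v_dW, v_db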

Optimization Algorithms: RMSprop (deeplearning.ai)

RMSprop
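The body of this slide did not survive extraction. As a hedged reminder of how RMSprop is usually presented in this course, the update keeps an exponentially weighted average of the squared gradients and divides each step by its square root, with a small epsilon for numerical stability; the constants below are common defaults, not slide content:

    import numpy as np

    def rmsprop_step(W, b, dW, db, s_dW, s_db, alpha=0.001, beta2=0.999, eps=1e-8):
        """One RMSprop step: s_* is an EWMA of element-wise squared gradients."""
        s_dW = beta2 * s_dW + (1 - beta2) * dW ** 2
        s_db = beta2 * s_db + (1 - beta2) * db ** 2
        W = W - alpha * dW / (np.sqrt(s_dW) + eps)   # damp updates along high-variance directions
        b = b - alpha * db / (np.sqrt(s_db) + eps)
        return W, b, s_dW, s_db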

Optimization Algorithms: Adam optimization algorithm (deeplearning.ai)
Adam optimization algorithm
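The algorithm's equations are missing from the extracted text; the sketch below follows the usual presentation of Adam as momentum plus RMSprop with bias correction, using the commonly recommended defaults beta1 = 0.9, beta2 = 0.999, eps = 1e-8 (alpha still needs tuning). Treat the exact form and constants as assumptions rather than slide content:

    import numpy as np

    def adam_step(W, dW, v, s, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam step on a parameter array W; t is the 1-based iteration count."""
        v = beta1 * v + (1 - beta1) * dW               # first moment (momentum term)
        s = beta2 * s + (1 - beta2) * dW ** 2          # second moment (RMSprop term)
        v_hat = v / (1 - beta1 ** t)                   # bias correction
        s_hat = s / (1 - beta2 ** t)
        W = W - alpha * v_hat / (np.sqrt(s_hat) + eps)
        return W, v, s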

Hyperparameter choices:
Adam: adaptive moment estimation (the name does not come from Adam Coates)

Optimization Algorithms: Learning rate decay (deeplearning.ai)
Learning rate decay

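The decay formulas themselves are missing from the extracted text. One rule commonly shown in this course decays the learning rate per epoch as alpha = alpha0 / (1 + decay_rate * epoch_num); exponential decay is one of the "other methods" named on the next slide. A small sketch, with illustrative constants:

    def decayed_lr(alpha0, epoch_num, decay_rate=1.0):
        """1/(1 + k*epoch) schedule: alpha = alpha0 / (1 + decay_rate * epoch_num)."""
        return alpha0 / (1 + decay_rate * epoch_num)

    def exp_decayed_lr(alpha0, epoch_num, base=0.95):
        """Exponential-decay alternative: alpha = base**epoch_num * alpha0."""
        return base ** epoch_num * alpha0

    # Example: alpha0 = 0.2, decay_rate = 1 gives 0.2, 0.1, 0.067, 0.05 over epochs 0..3.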

Other learning rate decay methods

Optimization Algorithms: The problem of local optima (deeplearning.ai)
Local optima in neural networks

Problem of plateaus
• Unlikely to get stuck in a bad local optimum
• Plateaus can make learning slow