Real-Time Systems lecture slides (Complete)


About This Presentation

This comprehensive lecture slide set covers the core concepts, design principles, and practical applications of Real-Time Systems—systems that must operate correctly within the constraints of time. Real-Time Systems are critical in domains such as embedded systems, robotics, aerospace, automotive ...


Slide Content

Real-Time Scheduling
Formal Model
[Some parts of this lecture are based on a real-time systems course
of Colin Perkins
http://csperkins.org/teaching/rtes/index.html]

Real-Time Scheduling – Formal Model
- Introduce an abstract model of real-time systems
  - abstracts away unessential details
  - sets up consistent terminology
- Three components of the model
  - A workload model that describes the applications supported by the system
    (i.e. jobs, tasks, ...)
  - A resource model that describes the system resources available to applications
    (i.e. processors, passive resources, ...)
  - Algorithms that define how the application uses the resources at all times
    (i.e. scheduling and resource access protocols)

Basic Notions
- A job is a unit of work that is scheduled and executed by a system
  (compute a control law, transform sensor data, etc.)
- A task is a set of related jobs which jointly provide some system function
  (check temperature periodically, keep a steady flow of water)
- A job executes on a processor
  (CPU, transmission link in a network, database server, etc.)
- A job may use some (shared) passive resources
  (file, database lock, shared variable, etc.)

Life Cycle of a Job
(State diagram: a released job enters READY; scheduling moves it to RUN and preemption moves it back to READY; waiting for a busy resource moves it from RUN to WAITING, and the signal that the resource is free returns it to READY; on completion it moves from RUN to COMPLETED.)

Jobs – Parameters
We consider a finite, or countably infinite, number of jobs J_1, J_2, ...
Each job has several parameters. There are four types of job parameters:
- temporal
  - release time, execution time, deadlines
- functional
  - laxity type: hard and soft real-time
  - preemptability, (criticality)
- interconnection
  - precedence constraints
- resource
  - usage of processors and passive resources

Job Parameters – Execution Time
Execution time e_i of a job J_i – the amount of time required to complete the execution of J_i when it executes alone and has all necessary resources
- The value of e_i depends upon the complexity of the job and the speed of the processor on which it executes; it may change for various reasons:
  - conditional branches
  - caches, pipelines, etc.
  - ...
- Execution times fall into an interval [e_i^-, e_i^+]; we assume that we know this interval (WCET analysis) but not necessarily e_i
We usually validate the system using only e_i^+ for each job,
i.e. assume e_i = e_i^+

Job Parameters – Release and Response Time
Release time r_i – the instant in time when a job J_i becomes available for execution
- The release time may jitter; only an interval [r_i^-, r_i^+] is known
- A job can be executed at any time at, or after, its release time, provided its processor and resource demands are met
Completion time C_i – the instant in time when a job completes its execution
Response time – the difference C_i − r_i between the completion time and the release time
(Timeline: J_i is released somewhere in [r_i^-, r_i^+]; the response time spans from the release time r_i to the completion time C_i.)

Job Parameters – Deadlines
Absolute deadline d_i – the instant in time by which a job must be completed
Relative deadline D_i – the maximum allowable response time, i.e. D_i = d_i − r_i
The feasible interval is the interval (r_i, d_i]
(Timeline: the relative deadline D_i spans from the release time r_i to the absolute deadline d_i.)
A timing constraint of a job is specified using the release time together with the relative and absolute deadlines.

Laxity Type – Hard Real-Time
A hard real-time constraint specifies that a job should never miss its deadline.
Examples: flight control, railway signaling, anti-lock brakes, etc.
Several more precise definitions occur in the literature:
- A timing constraint is hard if the failure to meet it is considered a fatal error
  (e.g. a bomb is dropped too late and hits civilians)
- A timing constraint is hard if the usefulness of the results falls off abruptly (and may even become negative) at the deadline
  (Here the nature of the abruptness allows the constraint to be softened)
Definition 1
A timing constraint is hard if the user requires formal validation that the job meets its timing constraint.

Laxity Type – Soft Real-Time
A soft real-time constraint specifies that a job could occasionally miss its deadline
Examples: stock trading, multimedia, etc.
Several more precise definitions occur in the literature:
- A timing constraint is soft if the failure to meet it is undesirable but acceptable if the probability is low
- A timing constraint is soft if the usefulness of the results decreases at a slower rate with the tardiness of the job
Definition 2
A timing constraint is soft if either validation is not required, or only a demonstration that a statistical constraint is met suffices
(e.g. that the probability that a response time exceeds 50 ms is less than 0.2).

Jobs – Preemptability
Jobs may be interrupted by higher-priority jobs
- A job is preemptable if its execution can be interrupted
- A job is non-preemptable if it must run to completion once started
  (Some preemptable jobs have periods during which they cannot be preempted)
- The context switch time is the time needed to switch between jobs
  (Most of the time we assume that this time is negligible)
Reasons for preemptability:
- Jobs may have different levels of criticality
  (e.g. brakes vs. radio tuning)
- Priorities may be part of the scheduling algorithm
  (e.g. resource access control algorithms)

Jobs – Precedence Constraints
Jobs may be constrained to execute in a particular order
- This is known as a precedence constraint
- A job J_i is a predecessor of another job J_k, and J_k a successor of J_i (denoted by J_i < J_k), if J_k cannot begin execution until the execution of J_i completes
- J_i is an immediate predecessor of J_k if J_i < J_k and there is no other job J_j such that J_i < J_j < J_k
- J_i and J_k are independent when neither J_i < J_k nor J_k < J_i
A job with a precedence constraint becomes ready for execution when its release time has passed and all its predecessors have completed.
Example: authentication before retrieving information; a signal processing task in a radar surveillance system precedes a tracker task

Tasks – Modeling Reactive Systems
Reactive systems run for an unlimited amount of time
A system parameter: the number of tasks
- may be known in advance (flight control)
- may change during computation (air traffic control)
We consider three types of tasks
- Periodic – jobs executed at regular intervals, hard deadlines
- Aperiodic – jobs executed at random intervals, soft deadlines
- Sporadic – jobs executed at random intervals, hard deadlines
... precise definitions later.

Processors
A processor, P, is an active component on which jobs are scheduled
The general case considered in the literature:
m processors P_1, ..., P_m, each P_i with its own type and speed.
We mostly concentrate on single processor scheduling
- Efficient scheduling algorithms
- In a sense subsumes multiprocessor scheduling where tasks are assigned statically to individual processors,
  i.e. all jobs of every task are assigned to a single processor
Multiprocessor scheduling is a rich area of current research; we touch it only lightly (later).

Resources
A resource, R, is a passive entity upon which jobs may depend
In general, we consider n resources R_1, ..., R_n of distinct types
Each R_i is used in a mutually exclusive manner
- A job that acquires a free resource locks the resource
- Jobs that need a busy resource have to wait until the resource is released
- Once released, the resource may be used by another job
  (i.e. it is not consumed)
(More generally, each resource may be used by k jobs concurrently, i.e., there are k units of the resource)
Resource requirements of a job specify
- which resources are used by the job
- the time interval(s) during which each resource is required
(precise definitions later)

Scheduling
A schedule assigns, in every time instant, processors and resources to jobs.
More formally, a schedule is a function
  σ : {J_1, ...} × R⁺₀ → P({P_1, ..., P_m, R_1, ..., R_n})
such that for every t ∈ R⁺₀ there are rational 0 ≤ t_1 ≤ t ≤ t_2 with σ(J_i, ·) constant on [t_1, t_2).
(We also assume that there is a least time quantum in which the scheduler does not change its decisions, i.e. each of the intervals [t_1, t_2) is larger than a fixed ε > 0.)

Valid and Feasible Schedule
A schedule is valid if it satisfies the following conditions:
- Every processor is assigned to at most one job at any time
- Every job is assigned to at most one processor at any time
- No job is scheduled before its release time
- The total amount of processor time assigned to a given job is equal to its actual execution time
- All the precedence and resource usage constraints are satisfied
A schedule is feasible if all jobs with hard real-time constraints complete before their deadlines
A set of jobs is schedulable if there is a feasible schedule for the set.

Scheduling – Algorithms
A scheduling algorithm computes a schedule for a set of jobs
A set of jobs is schedulable according to a scheduling algorithm if the algorithm produces a feasible schedule
Sometimes the efficiency of scheduling algorithms is measured using a cost function:
- the maximum/average response time
- the maximum/average lateness – the difference between the completion time and the absolute deadline, i.e. C_i − d_i
- miss rate – the percentage of jobs that are executed but completed too late
- ...
Definition 3
A scheduling algorithm is optimal if it always produces a feasible schedule whenever such a schedule exists, and, if a cost function is given, minimizes the cost.

Real-Time Scheduling
Individual Jobs

Scheduling of Individual Jobs
We start with the scheduling of finite sets of jobs {J_1, ..., J_m} for execution on single processor systems
Each J_i has a release time r_i, an execution time e_i, and a relative deadline D_i.
We assume hard real-time constraints
The question: Is there an optimal scheduling algorithm?
We proceed in the direction of growing generality:
1. No resources, independent, synchronized (i.e. r_i = 0 for all i)
2. No resources, independent but not synchronized
3. No resources but possibly dependent
4. The general case

No resources, Independent, Synchronized

       J1  J2  J3  J4  J5
  e_i   1   1   1   3   2
  d_i   3  10   7   8   5

Is there a feasible schedule? Minimize the maximal lateness.
Note: preemption does not help in the synchronized case
Theorem 4
If there are no resource contentions, then executing independent jobs in the order of non-decreasing deadline (EDD) produces a feasible schedule (if one exists) and minimizes the maximal lateness (always).
Proof.
Any feasible schedule σ can be transformed in finitely many steps into an EDD schedule whose lateness is ≤ the lateness of σ (whiteboard).
Is there any simple schedulability test? {J_1, ..., J_n} where d_1 ≤ ... ≤ d_n is schedulable iff
  ∀i ∈ {1, ..., n}: Σ_{k=1}^{i} e_k ≤ d_i
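
The EDD test above is mechanical; here is a minimal sketch in Python (not part of the original slides) that sorts jobs by deadline and checks the cumulative condition Σ_{k≤i} e_k ≤ d_i. The function name and the (e, d) pair encoding are illustrative assumptions.

```python
def edd_feasible(jobs):
    """jobs: iterable of (execution_time, deadline) pairs for synchronized,
    independent jobs. True iff the EDD order meets every deadline."""
    finished = 0.0
    for e, d in sorted(jobs, key=lambda job: job[1]):  # non-decreasing deadline
        finished += e          # completion time of this job under EDD
        if finished > d:       # cumulative demand exceeds this deadline
            return False
    return True

# The example above: e = (1, 1, 1, 3, 2), d = (3, 10, 7, 8, 5)
print(edd_feasible([(1, 3), (1, 10), (1, 7), (3, 8), (2, 5)]))  # True
```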

No resources, Independent (No Synchro)

       J1  J2  J3
  r_i   0   0   2
  e_i   1   2   2
  d_i   2   5   4

- Find a (feasible) schedule (with and without preemption)
- Determine the response time of each job in your schedule
- Determine the lateness of each job in your schedule (is the maximal lateness minimized?)
  Recall that the lateness of J_i is equal to C_i − d_i
Preemption makes a difference

No resources, Independent (No Synchro)
Earliest Deadline First (EDF) scheduling:
At any time instant, a job with the earliest absolute deadline is executed
Here EDF works in the preemptive case but not in the non-preemptive one.

       J1  J2
  r_i   0   1
  e_i   4   2
  d_i   7   5
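
To see how preemptive EDF behaves on such examples, the following small simulator (an illustrative sketch, assuming integer release times, execution times, and deadlines) advances time in unit steps and always runs a released, unfinished job with the earliest absolute deadline.

```python
def edf_preemptive(jobs):
    """jobs: list of (r, e, d) with integer parameters.
    Returns completion times, or None if some job misses its deadline."""
    remaining = [e for _, e, _ in jobs]
    completion = [None] * len(jobs)
    t = 0
    while any(x > 0 for x in remaining):
        ready = [i for i, (r, _, _) in enumerate(jobs)
                 if r <= t and remaining[i] > 0]
        if ready:
            i = min(ready, key=lambda i: jobs[i][2])  # earliest absolute deadline
            remaining[i] -= 1
            if remaining[i] == 0:
                completion[i] = t + 1
                if completion[i] > jobs[i][2]:
                    return None                       # deadline miss
        t += 1
    return completion

# The example above: J1 = (0, 4, 7), J2 = (1, 2, 5)
print(edf_preemptive([(0, 4, 7), (1, 2, 5)]))  # [6, 3] -- both deadlines met
```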

No Resources, Independent (No Synchro)
Theorem 5
If there are no resource contentions, jobs are independent, and preemption is allowed, the EDF algorithm finds a feasible schedule (if one exists) and minimizes the maximal lateness.
Proof.
We show that any feasible schedule σ can be transformed in finitely many steps into an EDF schedule which is feasible and whose lateness is ≤ the lateness of σ.
Let σ be a feasible schedule that is not EDF. Assume, w.l.o.g., that for every k ∈ N at most one job is executed in the interval [k, k+1) and that all release times and deadlines are in N.
(Otherwise rescale by the least common multiple.)

No Resources, Independent (No Synchro)
Proof cont.
We say that σ violates EDF at k if there are two jobs J_a and J_b that satisfy:
- J_b is executed in [k, k+1)
- J_a is executed in [ℓ, ℓ+1) for some ℓ > k
- d_a < d_b
We say that J_a and J_b witness the violation.
Let k ∈ N be the least time instant such that σ violates EDF at k, as witnessed by jobs J_a and J_b.
Assume, w.l.o.g., that J_a has the minimum deadline among all jobs ready for execution at k.
Let us define a new schedule σ′ which is the same as σ except that:
- it executes J_a in [k, k+1)
- it executes J_b in [ℓ, ℓ+1)
Then σ′ does not violate EDF at any k′ ≤ k.
So finitely many such transformations turn any feasible schedule into EDF.

No resources, Independent (No Synchro)
The non-preemptive case is NP-hard.
Heuristics are needed, such as the Spring algorithm, which usually work in a much more general setting (with resources etc.)
Use the notion of a partial schedule, where only a subset of the tasks has been scheduled.
Exhaustive search through partial schedules:
- start with an empty schedule
- in every step either
  - add a job which maximizes a heuristic function H among the jobs that have not yet been tried in this partial schedule,
  - or backtrack if there is no such job
- after a failure, backtrack to the previous partial schedule
The heuristic function identifies plausible jobs to be scheduled (earliest release, earliest deadline, etc.)

No resources, Dependent (No Synchro)
Theorem 6
Assume that there are no resource contentions and jobs are preemptable. There is a polynomial time algorithm which decides whether a feasible schedule exists and, if yes, computes one.
Idea: Reduce to independent jobs by changing release times and deadlines. Then use EDF.
Observe that if J_i < J_k, then replacing
- r_k with max{r_k, r_i + e_i}
  (J_k cannot be scheduled for execution before r_i + e_i because J_i cannot be finished before r_i + e_i)
- d_i with min{d_i, d_k − e_k}
  (J_i must be finished before d_k − e_k so that J_k can be finished before d_k)
does not change feasibility.
Replace systematically according to the precedence relation.

No Resources, Dependent (No Synchro)
Define r*_k and d*_k systematically as follows:
- Pick a J_k all of whose predecessors have been processed and compute
    r*_k := max{r_k, max_{J_i < J_k} (r*_i + e_i)}.
  Repeat for all jobs.
- Pick a J_k all of whose successors have been processed and compute
    d*_k := min{d_k, min_{J_k < J_i} (d*_i − e_i)}.
  Repeat for all jobs.
This gives a new set of jobs J*_1, ..., J*_m, where each J*_k has the release time r*_k and the absolute deadline d*_k.
We impose no precedence constraints on J*_1, ..., J*_m.
Lemma 7
{J_1, ..., J_m} is feasible iff {J*_1, ..., J*_m} is feasible. If an EDF schedule is feasible on {J*_1, ..., J*_m}, then the same schedule is feasible on {J_1, ..., J_m}.
(The same schedule means that whenever J*_i is scheduled at time t, then J_i is scheduled at time t.)
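
The systematic replacement above is easy to implement. Below is an illustrative sketch (the names and the job encoding are assumptions, not from the slides) that computes the effective parameters r*_k and d*_k with a forward and a backward pass over a topological order of the precedence relation; EDF can then be run on the transformed, independent jobs.

```python
from collections import deque

def topo_order(n, prec):
    """Kahn's algorithm; assumes the precedence relation (i, k) is acyclic."""
    indeg = [0] * n
    succ = [[] for _ in range(n)]
    for i, k in prec:
        succ[i].append(k)
        indeg[k] += 1
    queue = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return order

def effective_params(jobs, prec):
    """jobs: list of dicts {"r", "e", "d"}; prec: pairs (i, k) meaning J_i < J_k."""
    order = topo_order(len(jobs), prec)
    r = [j["r"] for j in jobs]
    d = [j["d"] for j in jobs]
    for k in order:                # forward: r*_k = max(r_k, max r*_i + e_i)
        for i, j in prec:
            if j == k:
                r[k] = max(r[k], r[i] + jobs[i]["e"])
    for k in reversed(order):      # backward: d*_k = min(d_k, min d*_j - e_j)
        for i, j in prec:
            if i == k:
                d[k] = min(d[k], d[j] - jobs[j]["e"])
    return r, d
```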

No Resources, Dependent (No Synchro)
Recall: r*_k := max{r_k, max_{J_i < J_k} (r*_i + e_i)} and d*_k := min{d_k, min_{J_k < J_i} (d*_i − e_i)}
Proof of Lemma 7.
⇒: It is easy to show that in no feasible schedule on {J_1, ..., J_m} can any job J_k be executed before r*_k or completed after d*_k (otherwise, precedence constraints would be violated).
⇐: Assume that an EDF schedule σ is feasible on {J*_1, ..., J*_m}. Let us use σ on {J_1, ..., J_m},
i.e. J_i is executed iff J*_i is executed.
The timing constraints of {J_1, ..., J_m} are satisfied since r_k ≤ r*_k and d_k ≥ d*_k for all k.
Precedence constraints: Assume that J_s < J_t. Then J*_s executes completely before J*_t, since r*_s < r*_s + e_s ≤ r*_t and d*_s ≤ d*_t − e_t < d*_t, and σ is EDF on {J*_1, ..., J*_m}.

No Resources, Dependent (No Synchro)

       J1  J2  J3  J4  J5  J6
  e_i   1   1   1   1   1   1
  d_i   2   5   4   3   5   6

Dependencies: (precedence graph over J1, ..., J6; the edges are not recoverable from this transcript)
Find a feasible schedule.
Recall: r*_k := max{r_k, max_{J_i < J_k} (r*_i + e_i)} and d*_k := min{d_k, min_{J_k < J_i} (d*_i − e_i)}

Resources, Dependent, Not Synchronized
Even the preemptive case is NP-hard
- Reduce the non-preemptive case without resources to the preemptive one with resources:
  - use a common resource R,
  - whenever a job starts its execution, it locks the resource R,
  - whenever a job finishes its execution, it releases the resource R.
Could be solved using heuristics, e.g. the Spring algorithm.

Real-Time Scheduling
Scheduling of Reactive Systems
[Some parts of this lecture are based on a real-time systems course
of Colin Perkins
http://csperkins.org/teaching/rtes/index.html]

Reminder of Basic Notions
- Jobs are executed on processors and need resources
- Parameters of jobs
  - temporal:
    - release time – r_i
    - execution time – e_i
    - absolute deadline – d_i
    - derived params: relative deadline (D_i), completion time, response time, ...
  - functional:
    - laxity type: hard vs soft
    - preemptability
  - interconnection:
    - precedence constraints (independence)
  - resource:
    - which resources are used by the job, and when
- Tasks = sets of jobs

Reminder of Basic Notions
- A schedule assigns, in every time instant, processors and resources to jobs
- A valid schedule = correct (common sense)
- A feasible schedule = valid and all hard real-time tasks meet their deadlines
- A set of jobs is schedulable if there is a feasible schedule for it
- A scheduling algorithm computes a schedule for a set of jobs
- A scheduling algorithm is optimal if it always produces a feasible schedule whenever such a schedule exists, and, if a cost function is given, minimizes the cost

Scheduling Reactive Systems
We have considered the scheduling of individual jobs
From this point on we concentrate on reactive systems,
i.e. systems that run for an unlimited amount of time
Recall that a task is a set of related jobs that jointly provide some system function.
- We consider various types of tasks
  - Periodic
  - Aperiodic
  - Sporadic
- They differ in the execution time patterns of the jobs in the tasks
- They must be modeled differently
- They call for differing scheduling algorithms
- They have differing impact on system performance
- They impose differing constraints on scheduling

Periodic Tasks
- A set of jobs that are executed repeatedly at regular time intervals can be modeled as a periodic task
(Timeline: jobs J_{i,1}, J_{i,2}, J_{i,3}, J_{i,4} of T_i released at r_{i,1}, r_{i,2}, r_{i,3}, r_{i,4}; the phase is φ_i.)
- Each periodic task T_i is a sequence of jobs J_{i,1}, J_{i,2}, ..., J_{i,n}, ...
- The phase φ_i of a task T_i is the release time r_{i,1} of the first job J_{i,1} in the task T_i;
  tasks are in phase if their phases are equal
- The period p_i of a task T_i is the minimum length of all time intervals between the release times of consecutive jobs in T_i
- The execution time e_i of a task T_i is the maximum execution time of all jobs in T_i
- The relative deadline D_i is the relative deadline of all jobs in T_i
(The period and execution time of every periodic task in the system are known with reasonable accuracy at all times)

Periodic Tasks – Notation
The 4-tuple T_i = (φ_i, p_i, e_i, D_i) refers to a periodic task T_i with phase φ_i, period p_i, execution time e_i, and relative deadline D_i
For example: jobs of T_1 = (1, 10, 3, 6) are
- released at times 1, 11, 21, ...,
- execute for 3 time units,
- have to be finished within 6 time units (the first by 7, the second by 17, ...)
The default phase of T_i is φ_i = 0 and the default relative deadline is D_i = p_i
T_2 = (10, 3, 6) satisfies φ_i = 0, p_i = 10, e_i = 3, D_i = 6, i.e. jobs of T_2 are
- released at times 0, 10, 20, ...,
- execute for 3 time units,
- have to be finished within 6 time units (the first by 6, the second by 16, ...)
T_3 = (10, 3) satisfies φ_i = 0, p_i = 10, e_i = 3, D_i = 10, i.e. jobs of T_3 are
- released at times 0, 10, 20, ...,
- execute for 3 time units,
- have to be finished within 10 time units (the first by 10, the second by 20, ...)

Periodic Tasks – Hyperperiod
The hyper-period H of a set of periodic tasks is the least common multiple of their periods
If the tasks are in phase, then H is the time instant after which the pattern of job release/execution times starts to repeat
(Timeline over [0, 30] with H marked omitted.)
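
Since the hyper-period is just the least common multiple of the periods, it can be computed in one line; a small sketch (the period values are borrowed from a later example):

```python
from math import lcm  # Python 3.9+

periods = [4, 5, 20]   # e.g. the periods of T1, T2, T3 in a later example
print(lcm(*periods))   # 20
```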

Aperiodic and Sporadic Tasks
- Many real-time systems are required to respond to external events
- The tasks resulting from such events are sporadic and aperiodic tasks
  - Sporadic, e.g. autopilot on/off in an aircraft
  - Aperiodic, e.g. sensitivity adjustment of a radar surveillance system
- Inter-arrival times between consecutive jobs are identically and independently distributed according to a probability distribution A(x)
- Execution times of jobs are identically and independently distributed according to a probability distribution B(x)
- In the case of sporadic tasks, the usual goal is to decide whether a newly released job can be feasibly scheduled with the remaining jobs in the system
- In the case of aperiodic tasks, the usual goal is to minimize the average response time

Scheduling – Classification of Algorithms
- Off-line vs online
  - Off-line – the scheduling algorithm is executed on the whole task set before activation
  - Online – the schedule is updated at runtime every time a new task enters the system
- Optimal vs heuristic
  - Optimal – the algorithm computes a feasible schedule and minimizes the cost of soft real-time jobs
  - Heuristic – the algorithm is guided by a heuristic function; it tends towards an optimal schedule but may not produce one
The main division is into
- Clock-Driven
- Priority-Driven

Scheduling – Clock-Driven
- Decisions about which jobs execute when are made at specific time instants
  - these instants are chosen before the system begins execution
  - usually regularly spaced, implemented using a periodic timer interrupt
  - the scheduler awakes after each interrupt, schedules jobs to execute for the next period, then blocks itself until the next interrupt
  (E.g. the helicopter example with an interrupt every 1/180th of a second)
- Typically in clock-driven systems:
  - all parameters of the real-time jobs are fixed and known
  - a schedule of the jobs is computed off-line and stored for use at runtime; thus the scheduling overhead at run-time can be minimized
  - simple and straightforward, but not flexible

Scheduling – Priority-Driven
- Assign priorities to jobs, based on some algorithm
- Make scheduling decisions based on the priorities, when events such as releases and job completions occur
  - Priority scheduling algorithms are event-driven
  - Jobs are placed in one or more queues; at each event, the ready job with the highest priority is executed
  (The assignment of jobs to priority queues, along with rules such as whether preemption is allowed, completely defines a priority-driven algorithm)
- Priority-driven algorithms make locally optimal scheduling decisions
  - Locally optimal scheduling is often not globally optimal
  - Priority-driven algorithms never intentionally leave processors idle
- Typically in priority-driven systems:
  - some parameters do not have to be fixed or known
  - a schedule is computed online; this usually results in larger scheduling overhead compared to clock-driven scheduling
  - flexible – it is easy to add/remove tasks or modify parameters

Clock-Driven & Priority-Driven Example

       T1  T2  T3
  p_i   3   5  10
  e_i   1   2   1

Clock-Driven:
(Gantt chart over [0, 30] omitted.)
Priority-driven (rows T1, T2, T3):
(Gantt chart over [0, 30] omitted.)

Real-Time Scheduling
Scheduling of Reactive Systems
Clock-Driven Scheduling

Current Assumptions
- A fixed number, n, of periodic tasks T_1, ..., T_n
- The parameters of the periodic tasks are known a priori
- The execution time e_{i,k} of each job J_{i,k} in a task T_i is fixed
- For a job J_{i,k} in a task T_i we have
  - r_{i,1} = φ_i = 0 (i.e., synchronized)
  - r_{i,k} = r_{i,k−1} + p_i
- We allow aperiodic jobs
  - assume that the system maintains a single queue for aperiodic jobs
  - whenever the processor is available for aperiodic jobs, the job at the head of this queue is executed
- We treat sporadic jobs later

Static, Clock-Driven Scheduler
- Construct a static schedule offline
  - The schedule specifies exactly when each job executes
  - The amount of time allocated to every job is equal to its execution time
  - The schedule repeats each hyperperiod,
    i.e. it suffices to compute the schedule up to the hyperperiod
- Can use complex algorithms offline
  - The runtime of the scheduling algorithm is not relevant
  - Can compute a schedule that optimizes some characteristics of the system
    (e.g. a schedule where the idle periods are nearly periodic, useful to accommodate aperiodic jobs)

Example
T_1 = (4, 1), T_2 = (5, 1.8), T_3 = (20, 1), T_4 = (20, 2)
Hyperperiod H = 20
(Gantt chart over [0, 24] with rows for the aperiodic queue and T_1, ..., T_4 omitted.)

Implementation of a Static Scheduler
- Store the pre-computed schedule as a table
  - Each entry (t_k, T(t_k)) gives
    - a decision time t_k
    - a scheduling decision T(t_k), which is either a task to be executed, or idle (denoted by I)
- The system creates all tasks that are to be executed:
  - allocates memory for the code and data
  - brings the code into memory
- The scheduler sets the hardware timer to interrupt at the first decision time t_0 = 0
- On receipt of an interrupt at t_k:
  - the scheduler sets the timer interrupt to t_{k+1}
  - if the previous task is overrunning, the failure is handled
  - if T(t_k) = I and an aperiodic job is waiting, it starts executing it
  - otherwise, it starts executing the next job in T(t_k)

Example
T_1 = (4, 1), T_2 = (5, 1.8), T_3 = (20, 1), T_4 = (20, 2)
Hyperperiod H = 20
(Gantt chart omitted.)

  t_k      0.0  1.0  2.0  3.8  4.0  5.0  6.0
  T(t_k)   T1   T3   T2   I    T1   I    T4

Frame Based Scheduling
- Arbitrary table-driven cyclic schedules are flexible, but inefficient
  - they rely on accurate timer interrupts, based on the execution times of tasks
  - high scheduling overhead
- Easier to implement if a structure is imposed
  - Make scheduling decisions at periodic intervals (frames) of length f
  - Execute a fixed list of jobs within each frame;
    no preemption within frames
- This gives two benefits:
  - The scheduler can easily check for overruns and missed deadlines at the end of each frame.
  - A periodic clock interrupt can be used, rather than a programmable timer.

Frame Based Scheduling – Cyclic Executive
- Modify the previous table-driven scheduler to be frame based
  - The table that drives the scheduler has F entries, where F = H/f
  - The k-th entry L(k) lists the names of the jobs that are to be scheduled in frame k (L(k) is called a scheduling block)
  - Each job is implemented by a procedure
- The cyclic executive is executed by the clock interrupt that signals the start of a frame:
  - If an aperiodic job is executing, it is preempted; if a periodic job overruns, the overrun is handled
  - It determines the appropriate scheduling block for this frame
  - It executes the jobs in the scheduling block
  - It executes jobs from the head of the aperiodic job queue for the remainder of the frame
- Less overhead than a pure table-driven cyclic scheduler, since it is only interrupted on frame boundaries, rather than on each job
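
The loop below is a minimal, illustrative sketch of the cyclic executive just described; the frame table of callables, the convention that each job procedure returns the frame time it consumed, and the list-based aperiodic queue are modeling assumptions, and a real executive would be driven by a periodic clock interrupt rather than a for loop.

```python
def cyclic_executive(schedule_table, frame_size, aperiodic_queue, frames_to_run):
    """schedule_table[k] is the scheduling block L(k): a list of job procedures.
    Each procedure returns the amount of frame time it consumed."""
    F = len(schedule_table)
    for frame in range(frames_to_run):
        k = frame % F                    # current entry of the cyclic schedule
        budget = frame_size
        for job in schedule_table[k]:    # run the periodic scheduling block
            budget -= job()
        # (an overrun check -- budget < 0 -- would be handled here)
        while budget > 0 and aperiodic_queue:
            budget -= aperiodic_queue.pop(0)()  # leftover time for aperiodic jobs
```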

Frame Based Scheduling – Frame Size
How do we choose the frame length?
(Assume that the periods are in N and choose frame sizes in N.)
1. A necessary condition for avoiding preemption of jobs is f ≥ max_i e_i
   (i.e. we want each job to have a chance to finish within a frame)
2. To minimize the number of entries in the cyclic schedule, the hyper-period should be an integer multiple of the frame size, i.e.
   ∃i: p_i mod f = 0
3. To allow the scheduler to check that jobs complete by their deadline, at least one frame should lie between the release time of a job and its deadline, which is equivalent to
   ∀i: 2f − gcd(p_i, f) ≤ D_i
All three constraints should be satisfied.
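
The three constraints can be checked by brute force over candidate frame sizes. A sketch, assuming integer periods and frame sizes (the task encoding is an assumption):

```python
from math import gcd

def valid_frame_sizes(tasks):
    """tasks: list of (p, e, D). Returns all f in 1..max(p) satisfying 1.-3."""
    emax = max(e for _, e, _ in tasks)
    return [f for f in range(1, max(p for p, _, _ in tasks) + 1)
            if f >= emax                                           # 1.
            and any(p % f == 0 for p, _, _ in tasks)               # 2.
            and all(2 * f - gcd(p, f) <= D for p, _, D in tasks)]  # 3.

# Example 8 below: T1=(4,1), T2=(5,1.8), T3=(20,1), T4=(20,2), with D_i = p_i
print(valid_frame_sizes([(4, 1, 4), (5, 1.8, 5), (20, 1, 20), (20, 2, 20)]))  # [2]
```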

Frame Based Scheduling – Frame Size – Example
1. f ≥ max_i e_i
2. ∃i: p_i mod f = 0
3. ∀i: 2f − gcd(p_i, f) ≤ D_i
Example 8
T_1 = (4, 1.0), T_2 = (5, 1.8), T_3 = (20, 1.0), T_4 = (20, 2.0)
Then f ∈ N satisfies 1.–3. iff f = 2.
With f = 2 the set is schedulable:
(Gantt chart omitted.)

Frame Based Scheduling – Job Slices
- Sometimes a system cannot meet all three frame size constraints simultaneously (and even if it meets the constraints, no non-preemptive schedule may be feasible)
- This can be solved by partitioning a job with a large execution time into slices with shorter execution times
  (This, in effect, allows preemption of the large job)
- Consider T_1 = (4, 1), T_2 = (5, 2, 7), T_3 = (20, 5)
  - The constraints cannot be satisfied: 1. gives f ≥ 5 but 3. gives f ≤ 4
  - Solve by splitting T_3 into T_{3,1} = (20, 1), T_{3,2} = (20, 3), and T_{3,3} = (20, 1)
    (Other splits exist)
  - The result can be scheduled with f = 4

Building a Structured Cyclic Schedule
To construct a schedule, we have to make three kinds of design decisions (which cannot be taken independently):
- Choose a frame size based on the constraints
- Partition jobs into slices
- Place slices into frames
There are efficient algorithms for solving these problems, based e.g. on a reduction to the network flow problem.

Scheduling Aperiodic Jobs
So far, aperiodic jobs have been scheduled in the background, after all jobs with hard deadlines
This may unnecessarily delay aperiodic jobs
Note: there is no advantage in completing periodic jobs early;
ideally, periodic jobs finish exactly by their respective deadlines.
Slack Stealing:
- Slack time in a frame = the time left in the frame after all (remaining) slices execute
- Schedule aperiodic jobs ahead of periodic jobs in the slack time of the periodic jobs
  - The cyclic executive keeps track of the slack time left in each frame as the aperiodic jobs execute, and preempts them with periodic jobs when there is no more slack
  - As long as there is slack remaining in a frame and the aperiodic job queue is non-empty, the executive executes aperiodic jobs; otherwise it executes periodic ones
- Reduces the response time of aperiodic jobs, but requires accurate timers

Example
Assume that the aperiodic queue is never empty.
Aperiodic jobs at the ends of frames:
(Gantt chart omitted.)
Slack stealing:
(Gantt chart omitted.)

Slack Stealing – cont.
(Comparison chart of slack stealing vs. the standard approach, showing the releases of aperiodic and periodic jobs, omitted.)

Frame Based Scheduling – Sporadic Jobs
Let us allow sporadic jobs,
i.e. hard real-time jobs whose release and execution times are not known a priori
The scheduler determines whether to accept a sporadic job when it arrives (and its parameters become known)
- Perform an acceptance test to check whether the new sporadic job can be feasibly scheduled with all the jobs (periodic and sporadic) in the system at that time
  (The acceptance check is done at the beginning of the next frame; the scheduler has to keep track of the execution times of the parts of sporadic jobs that have already executed)
- If there is sufficient slack time in the frames before the new job's deadline, the new sporadic job is accepted; otherwise, it is rejected
- Among themselves, sporadic jobs are scheduled according to EDF
  (This is optimal for sporadic jobs)
Note: rejection is often better than missing a deadline,
e.g. a robotic arm taking defective parts off a conveyor belt: if the arm cannot meet its deadline, the belt may be slowed down or stopped

- S_1(17, 4.5) released at 3 with absolute deadline 17 and execution time 4.5; acceptance test at 4; it must be scheduled in frames 2, 3, 4; the total slack in these frames is 4, i.e. rejected
- S_2(29, 4) released at 5 with absolute deadline 29 and execution time 4; acceptance test at 8; the total slack in frames 3–7 is 5.5, i.e. accepted
- S_3(22, 1.5) released at 11 with absolute deadline 22 and execution time 1.5; acceptance test at 12; 2 units of slack in frames 4, 5; as S_3 will be executed ahead of the remaining parts of S_2 by EDF, check whether there will be enough slack for the remaining parts of S_2; accepted
- S_4(44, 5.0) is rejected (only 4.5 units of slack left)

Handling Overruns
Overruns may happen due to failures,
e.g. unexpectedly large data over which the system operates, hardware failures, etc.
Ways to handle overruns:
- Abort the overrun job at the beginning of the next frame; log the failure; recover later
  (e.g. the control-law computation of a robust digital controller)
- Preempt the overrun job and finish it as an aperiodic job
  (use this when aborting the job would cause "costly" inconsistencies)
- Let the overrun job finish – the start of the next frame and the execution of the jobs scheduled for this frame are delayed
  (This may cause other jobs to be delayed)
Which way to choose depends on the application

Clock-Driven Scheduling: Conclusions
Advantages:
- Conceptual simplicity
- Complex dependencies, communication delays, and resource contention among jobs can be considered when constructing the static schedule
- The entire schedule is in a static table
- No concurrency control or synchronization needed
- Easy to validate, test and certify
Disadvantages:
- Inflexible
- If any parameter changes, the schedule must usually be recomputed
  (Best suited for systems which are rarely modified, e.g. controllers)
- The parameters of the jobs must be fixed
  (As opposed to most priority-driven schedulers)

Real-Time Scheduling
Scheduling of Reactive Systems
Priority-Driven Scheduling

Current Assumptions
- Single processor
- A fixed number, n, of independent periodic tasks,
  i.e. there is no dependency relation among jobs
- Jobs can be preempted at any time and never suspend themselves
- No aperiodic or sporadic jobs
- No resource contentions
Moreover, unless otherwise stated, we assume that
- Scheduling decisions take place precisely at
  - the release of a job
  - the completion of a job
  (and nowhere else)
- The context switch overhead is negligibly small,
  i.e. assumed to be zero
- There is an unlimited number of priority levels

Fixed-Priority vs Dynamic-Priority Algorithms
A priority-driven scheduler is on-line,
i.e. it does not precompute a schedule of the tasks
- It assigns priorities to jobs after they are released and places the jobs in a ready job queue in priority order,
  with the highest priority jobs at the head of the queue
- At each scheduling decision time, the scheduler updates the ready job queue and then schedules and executes the job at the head of the queue,
  i.e. one of the jobs with the highest priority
Fixed-priority = all jobs in a task are assigned the same priority
Dynamic-priority = jobs in a task may be assigned different priorities
Note:
There are also job-level dynamic priority algorithms that vary the priorities of individual jobs – we won't consider such algorithms.

Fixed-priority Algorithms – Rate Monotonic
The best known fixed-priority algorithm is rate monotonic (RM) scheduling, which assigns priorities to tasks based on their periods
- The shorter the period, the higher the priority
- The rate is the inverse of the period, so jobs with a higher rate have a higher priority
RM is very widely studied and used
Example 9
T_1 = (4, 1), T_2 = (5, 2), T_3 = (20, 5)
with rates 1/4, 1/5, 1/20, respectively
The priorities: T_1 ≻ T_2 ≻ T_3
(Gantt chart over [0, 20] omitted.)

Fixed-priority Algorithms – Deadline Monotonic
The deadline monotonic (DM) algorithm assigns priorities to tasks based on their relative deadlines
- the shorter the deadline, the higher the priority
Observation: when the relative deadline of every task matches its period, RM and DM give the same results
Proposition 1
When the relative deadlines are arbitrary, DM can sometimes produce a feasible schedule in cases where RM cannot.

Rate Monotonic vs Deadline Monotonic
T_1 = (50, 50, 25, 100), T_2 = (0, 62.5, 10, 20), T_3 = (0, 125, 25, 50)
DM is optimal here (with priorities T_2 ≻ T_3 ≻ T_1):
(Gantt chart omitted.)
RM is not (with priorities T_1 ≻ T_2 ≻ T_3):
(Gantt chart omitted.)

Dynamic-priority Algorithms
The best known is earliest deadline first (EDF), which assigns priorities based on the current (absolute) deadlines
- At the time of a scheduling decision, the job queue is ordered by earliest deadline
Another one is least slack time (LST)
- The job queue is ordered by least slack time
- Recall that the slack time of a job J_i at time t is equal to d_i − t − x, where x is the remaining computation time of J_i at time t
Comments:
- There is also a strict LST which reassigns priorities to jobs whenever their slacks change relative to each other – we won't consider it
- Standard "non real-time" algorithms such as FIFO and LIFO are also dynamic-priority algorithms
We focus on EDF here and leave some LST for homework

EDF – Example
T_1 = (2, 1) and T_2 = (5, 2.5)
(Gantt chart over [0, 10] omitted.)
Note that the processor is 100% "utilized" – not surprising :-)

Summary of Priority-Driven Algorithms
We consider:
Dynamic-priority:
- EDF = at the time of a scheduling decision, the job queue is ordered by the earliest deadline
Fixed-priority:
- RM = assigns priorities to tasks based on their periods
- DM = assigns priorities to tasks based on their relative deadlines
(In all cases, ties are broken arbitrarily.)
We consider the following questions:
- Are the algorithms optimal?
- How can we efficiently (or even online) test for schedulability?
To measure the abilities of scheduling algorithms, and to get fast online tests of schedulability, we use the notion of utilization

Utilization
- The utilization u_i of a periodic task T_i with period p_i and execution time e_i is defined by u_i := e_i / p_i
  (u_i is the fraction of time a periodic task with period p_i and execution time e_i keeps a processor busy)
- The total utilization U_T of a set of tasks T = {T_1, ..., T_n} is defined as the sum of the utilizations of all tasks of T, i.e.
    U_T := Σ_{i=1}^{n} u_i
- U is a schedulable utilization of an algorithm ALG if all sets of tasks T satisfying U_T ≤ U are schedulable by ALG.
The maximum schedulable utilization U_ALG of an algorithm ALG is the supremum of the schedulable utilizations of ALG.
- If U_T < U_ALG, then T is schedulable by ALG.
- If U > U_ALG, then there is a T with U_T ≤ U that is not schedulable by ALG.
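
The definitions translate directly into code; a small sketch using exact fractions (the (p, e) task encoding is an assumption), applied to the example on the next slide:

```python
from fractions import Fraction

def total_utilization(tasks):
    """tasks: list of (p, e) with integer parameters."""
    return sum(Fraction(e, p) for p, e in tasks)

# The example on the next slide: T = {(2,1), (6,1), (8,3)}
print(total_utilization([(2, 1), (6, 1), (8, 3)]))  # 25/24
```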

Utilization – Example
- T_1 = (2, 1): u_1 = 1/2
- T_1 = (11, 5, 2, 4): u_1 = 2/5
  (i.e., the phase and deadline do not play any role)
- T = {T_1, T_2, T_3} where T_1 = (2, 1), T_2 = (6, 1), T_3 = (8, 3):
    U_T = 1/2 + 1/6 + 3/8 = 25/24

Real-Time Scheduling
Priority-Driven Scheduling
Dynamic-Priority

Optimality of EDF
Theorem 10
Let T = {T_1, ..., T_n} be a set of independent, preemptable periodic tasks with D_i ≥ p_i for i = 1, ..., n. The following statements are equivalent:
1. T can be feasibly scheduled on one processor
2. U_T ≤ 1
3. T is schedulable using EDF
(i.e., in particular, U_EDF = 1)
Proof.
1. ⇒ 2.: We prove that U_T > 1 implies that T is not schedulable
2. ⇒ 3.: Next slides and whiteboard ...
3. ⇒ 1.: Trivial

Proof of 1. ⇒ 2.
Assume that U_T = Σ_{i=1}^{n} e_i/p_i > 1.
Consider a time instant t > max_i φ_i
(i.e. a time when all tasks are already "running").
Observe that the number of jobs of T_i that are released in the time interval [0, t] is ⌈(t − φ_i)/p_i⌉. Thus a single processor needs
  Σ_{i=1}^{n} ⌈(t − φ_i)/p_i⌉ e_i
time units to finish all jobs released before or at t.
However,
  Σ_{i=1}^{n} ⌈(t − φ_i)/p_i⌉ e_i ≥ Σ_{i=1}^{n} (t − φ_i) e_i/p_i = Σ_{i=1}^{n} (t − φ_i) u_i = t·U_T − Σ_{i=1}^{n} φ_i u_i
Here Σ_{i=1}^{n} φ_i u_i does not depend on t.
Note that lim_{t→∞} (t·U_T − Σ_{i=1}^{n} φ_i u_i)/t = U_T > 1. So there exists a t such that
  t·U_T − Σ_{i=1}^{n} φ_i u_i > t + max_i D_i.
So in order to complete all jobs released before time t, we need more time than t + max_i D_i. However, the latest deadline of a job released before t is at most t + max_i D_i. So at least one job misses its deadline.

Proof of 2. ⇒ 3. – Simplified
Let us start with a proof of a special case (see the assumptions A1 and A2 below). Then a complete proof will be presented.
We prove ¬3. ⇒ ¬2., assuming that D_i = p_i for i = 1, ..., n.
(Note that the general case follows immediately.)
Assume that T is not schedulable according to EDF.
(Our goal is to show that U_T > 1.)
This means that there must be at least one job that misses its deadline when EDF is used.
Simplifying assumptions:
A1: Suppose that all tasks are in phase, i.e. the phase φ_ℓ = 0 for every task T_ℓ.
A2: Suppose that the first job J_{i,1} of a task T_i misses its deadline.
By A1, J_{i,1} is released at 0 and misses its deadline at p_i. Assume w.l.o.g. that this is the first time when a job misses its deadline.
(To simplify even further, you may (privately) assume that no other job has its deadline at p_i.)

Proof of 2. ⇒ 3. – Simplified
Let G be the set of all jobs that are released in [0, p_i] and have their deadlines in [0, p_i].
Crucial observations:
- G contains J_{i,1} and all jobs that preempt J_{i,1}.
  (If there are more jobs with deadline p_i, then these jobs do not have to preempt J_{i,1}. Assume the worst case: all these jobs preempt J_{i,1}.)
- The processor is never idle during [0, p_i] and executes only jobs of G.
Denote by E_G the total execution time of G, that is, the sum of the execution times of all jobs in G.
Corollary of the crucial observations: E_G > p_i, because otherwise J_{i,1} (and all jobs that preempt it) would complete by p_i.
Let us compute E_G.

Proof of 2. ⇒ 3. – Simplified
Since we assume φ_ℓ = 0 for every T_ℓ, the first job of T_ℓ is released at 0, and thus ⌊p_i/p_ℓ⌋ jobs of T_ℓ belong to G.
(E.g., if p_ℓ = 2 and p_i = 5, then three jobs of T_ℓ are released in [0, 5] (at times 0, 2, 4) but only 2 = ⌊5/2⌋ = ⌊p_i/p_ℓ⌋ of them have their deadlines in [0, p_i].)
Thus the total execution time E_G of all jobs in G is
  E_G = Σ_{ℓ=1}^{n} ⌊p_i/p_ℓ⌋ e_ℓ
But then
  p_i < E_G = Σ_{ℓ=1}^{n} ⌊p_i/p_ℓ⌋ e_ℓ ≤ Σ_{ℓ=1}^{n} (p_i/p_ℓ) e_ℓ = p_i Σ_{ℓ=1}^{n} u_ℓ = p_i U_T
which implies that U_T > 1.

Proof of 2. ⇒ 3. – Complete
Now let us drop the simplifying assumptions A1 and A2!
Notation: Given a set of tasks L, we denote by ⋃L the set of all jobs of the tasks in L.
We prove ¬3. ⇒ ¬2., assuming that D_i = p_i for i = 1, ..., n (note that the general case follows immediately).
Assume that T is not schedulable by EDF. We show that U_T > 1.
Suppose that a job J_{i,k} of T_i misses its deadline at time t = r_{i,k} + p_i. Assume that this is the earliest deadline miss.
Let T′ be the set of all tasks whose jobs have deadlines (and thus also release times) in [r_{i,k}, t]
(i.e., a task belongs to T′ iff at least one job of the task is released in [r_{i,k}, t]).
Let t̂ be the end of the latest interval before t in which either jobs of ⋃(T \ T′) are executed, or the processor is idle.
Then t̂ ≤ r_{i,k}, since all jobs of ⋃(T \ T′) waiting for execution during [r_{i,k}, t] have deadlines later than t (and thus lower priorities than J_{i,k}).

Proof of 2. ⇒ 3. – Complete (cont.)
It follows that
- no job of ⋃(T \ T′) is executed in [t̂, t]
  (by the definition of t̂)
- the processor is fully utilized in [t̂, t]
  (by the definition of t̂)
- all jobs (which all must belong to ⋃T′) executed in [t̂, t] are released in [t̂, t] and have their deadlines in [t̂, t], since
  - no job of ⋃T′ executes just before t̂,
  - all jobs of ⋃T′ released in [t̂, r_{i,k}] have deadlines before t,
  - jobs of ⋃T′ released in [r_{i,k}, t] with deadlines after t are not executed in [r_{i,k}, t], as they have lower priorities than J_{i,k}.
Let G be the set of all jobs that are released in [t̂, t] and have their deadlines in [t̂, t].
Note that J_{i,k} ∈ G since r_{i,k} ≥ t̂.
Denote by E_G the sum of all execution times of all jobs in G (the total execution time of G).

Proof of 2. ⇒ 3. – Complete (cont.)
Now E_G > t − t̂, because otherwise J_{i,k} would complete in [t̂, t].
How do we compute E_G?
For T_ℓ ∈ T′, denote by R_ℓ the earliest release time of a job in T_ℓ during the interval [t̂, t].
For every T_ℓ ∈ T′, exactly ⌊(t − R_ℓ)/p_ℓ⌋ jobs of T_ℓ belong to G. (For every T_ℓ ∈ T \ T′, exactly 0 jobs belong to G.)
Thus
  E_G = Σ_{T_ℓ ∈ T′} ⌊(t − R_ℓ)/p_ℓ⌋ e_ℓ
As argued above:
  t − t̂ < E_G = Σ_{T_ℓ ∈ T′} ⌊(t − R_ℓ)/p_ℓ⌋ e_ℓ ≤ Σ_{T_ℓ ∈ T′} ((t − t̂)/p_ℓ) e_ℓ = (t − t̂) Σ_{T_ℓ ∈ T′} u_ℓ ≤ (t − t̂) U_T
which implies that U_T > 1.

Density and EDF
What about tasks with D_i < p_i?
The density of a task T_i with period p_i, execution time e_i, and relative deadline D_i is defined by
  e_i / min(D_i, p_i)
The total density Δ_T of a set of tasks T is the sum of the densities of the tasks in T
Note that if D_i < p_i for some i, then Δ_T > U_T
Theorem 11
A set T of independent, preemptable, periodic tasks can be feasibly scheduled on one processor if Δ_T ≤ 1.
Note that this is NOT a necessary condition! (Example on whiteboard.)

Schedulability Test for EDF
The problem: Given a set of independent, preemptable, periodic tasks T = {T_1, ..., T_n}, where each T_i has a period p_i, execution time e_i, and relative deadline D_i, decide whether T is schedulable by EDF.
Solution using utilization and density:
If p_i ≤ D_i for each i, then it suffices to decide whether U_T ≤ 1.
Otherwise, decide whether Δ_T ≤ 1:
- If yes, then T is schedulable with EDF
- If not, then T does not have to be schedulable
Note that
- The phases of the tasks do not have to be specified
- Parameters may vary: increasing periods or deadlines, or decreasing execution times, does not prevent schedulability
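
The test can be phrased in a few lines. A sketch (the task encoding is an assumption); note that a negative answer is conclusive only when D_i ≥ p_i for all i, exactly as stated above:

```python
def edf_schedulable(tasks):
    """tasks: list of (p, e, D). True means schedulable by EDF; when some
    D < p, a False answer is inconclusive (the density test is only sufficient)."""
    if all(p <= D for p, _, D in tasks):
        return sum(e / p for p, e, _ in tasks) <= 1       # exact test: U_T <= 1
    return sum(e / min(D, p) for p, e, D in tasks) <= 1   # sufficient: density

print(edf_schedulable([(2, 1, 2), (5, 2.5, 5)]))  # True, U_T = 1
```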

Schedulability Test for EDF – Example
Consider a digital robot controller
- A control-law computation
  - takes no more than 8 ms
  - the sampling rate: 100 Hz, i.e. it computes every 10 ms
  Feasible? Trivially yes ...
- Add Built-In Self-Test (BIST)
  - maximum execution time 50 ms
  - we want a minimal period that is feasible (at most one second)
  With a period of 250 ms it is still feasible ...
- Add a telemetry task
  - maximum execution time 15 ms
  - we want to minimize the deadline on telemetry
  - the period may be large
  Reducing BIST to once a second, the deadline on telemetry may be set to 100 ms ...

Real-Time Scheduling
Priority-Driven Scheduling
Fixed-Priority

Fixed-Priority Algorithms
Recall that we consider a set of n tasks T = {T_1, ..., T_n}
Any fixed-priority algorithm schedules the tasks of T according to fixed (distinct) priorities assigned to the tasks.
We write T_i ≻ T_j whenever T_i has a higher priority than T_j.
To simplify our reasoning, assume that all tasks are in phase, i.e. φ_k = 0 for all T_k.
We will remove this assumption at the end.

Fixed-Priority Algorithms – Reminder
Recall that fixed-priority algorithms do not have to be optimal.
Consider T = {T_1, T_2} where T_1 = (2, 1) and T_2 = (5, 2.5)
U_T = 1 and thus T is schedulable by EDF
- If T_1 ≻ T_2, then J_{2,1} misses its deadline
- If T_2 ≻ T_1, then J_{1,1} misses its deadline
We consider the following algorithms:
- RM = assigns priorities to tasks based on their periods
  (the priority is inversely proportional to the period p_i)
- DM = assigns priorities to tasks based on their relative deadlines
  (the priority is inversely proportional to the relative deadline D_i)
(In all cases, ties are broken arbitrarily.)
We consider the following questions:
- Are the algorithms optimal?
- How can we efficiently (or even online) test for schedulability?

Maximum Response Time
Which job of a task T_i has the maximum response time?
As all tasks are in phase, the first job of T_i is released together with the (first) jobs of all tasks that have a higher priority than T_i.
This means that J_{i,1} is the most preempted of the jobs in T_i.
It follows that J_{i,1} has the maximum response time.
(Note that this relies heavily on the assumption that tasks are in phase!)
Thus, in order to decide whether T is schedulable, it suffices to test the schedulability of the first jobs of all tasks.

Optimality of RM for Simply Periodic Tasks
Definition 12
A set {T_1, ..., T_n} is simply periodic if for every pair T_i, T_ℓ satisfying p_i > p_ℓ we have that p_i is an integer multiple of p_ℓ
Example 13
The helicopter control system from the first lecture.
Theorem 14
A set T of n simply periodic, independent, preemptable tasks with D_i = p_i is schedulable on one processor according to RM iff U_T ≤ 1.
(i.e. on simply periodic tasks RM is as good as EDF)
Note: Theorem 14 is true in general; no "in phase" assumption is needed.

Proof of Theorem 14
By Theorem 10, every schedulable set T satisfies U_T ≤ 1.
We prove that if T is not schedulable according to RM, then U_T > 1.
Assume that a job J_{i,1} of T_i misses its deadline at D_i = p_i. W.l.o.g., we assume that T_1 ≻ ... ≻ T_n according to RM.
Let us compute the total execution time of J_{i,1} and all jobs that preempt it:
  E = e_i + Σ_{ℓ=1}^{i−1} ⌈p_i/p_ℓ⌉ e_ℓ = Σ_{ℓ=1}^{i} (p_i/p_ℓ) e_ℓ = p_i Σ_{ℓ=1}^{i} u_ℓ ≤ p_i Σ_{ℓ=1}^{n} u_ℓ = p_i U_T
(here ⌈p_i/p_ℓ⌉ = p_i/p_ℓ because the set is simply periodic, so each higher-priority period p_ℓ divides p_i).
Now E > p_i, because otherwise J_{i,1} would meet its deadline. Thus p_i < E ≤ p_i U_T, and we obtain U_T > 1.

Optimality of DM (RM) among Fixed-Priority Algorithms
Theorem 15
A set of independent, preemptable periodic tasks with D_i ≤ p_i that are in phase (i.e., φ_i = 0 for all i = 1, ..., n) can be feasibly scheduled on one processor according to DM if it can be feasibly scheduled by some fixed-priority algorithm.
Proof.
Assume a fixed-priority feasible schedule with T_1 ≻ ... ≻ T_n.
Consider the least i such that the relative deadline D_i of T_i is larger than the relative deadline D_{i+1} of T_{i+1}. Swap the priorities of T_i and T_{i+1}. The resulting schedule is still feasible. DM is obtained by finitely many such swaps.
Note: If the assumptions of the above theorem hold and all relative deadlines are equal to periods, then RM is optimal among all fixed-priority algorithms.
Note: no "in phase" assumption is needed here.

Fixed-Priority Algorithms: Schedulability
We consider two schedulability tests:
- the schedulable utilization U_RM of the RM algorithm,
- time-demand analysis based on response times.

Schedulable Utilization for RM
Theorem 16
Let us fix n ∈ N and consider only independent, preemptable periodic tasks with D_i = p_i.
- If T is a set of n tasks satisfying U_T ≤ n(2^{1/n} − 1), then T is schedulable according to the RM algorithm.
- For every U > n(2^{1/n} − 1) there is a set T of n tasks satisfying U_T ≤ U that is not schedulable by RM.
Note: Theorem 16 holds in general; no "in phase" assumption is needed.
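
A sketch of the resulting sufficient test (the task encoding is an assumption); the printed values illustrate how the bound n(2^{1/n} − 1) decreases with n:

```python
def rm_utilization_bound(n):
    return n * (2 ** (1 / n) - 1)

def rm_sufficient_test(tasks):
    """tasks: list of (p, e) with D = p; True implies RM-schedulable,
    False is inconclusive (see the example later in the lecture)."""
    return sum(e / p for p, e in tasks) <= rm_utilization_bound(len(tasks))

for n in (1, 2, 3, 10):
    print(n, round(rm_utilization_bound(n), 3))  # 1.0, 0.828, 0.78, 0.718
```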

Schedulable Utilization for RM
It follows that the maximum schedulable utilization U_RM over independent, preemptable periodic tasks satisfies
  U_RM = inf_n n(2^{1/n} − 1) = lim_{n→∞} n(2^{1/n} − 1) = ln 2 ≈ 0.693
Note that U_T ≤ n(2^{1/n} − 1) is a sufficient, but not necessary, condition for schedulability of T using the RM algorithm (an example will be given later)
We say that a set of tasks T is RM-schedulable if it is schedulable according to RM.
We say that T is RM-infeasible if it is not RM-schedulable.

Proof – Special Case
To simplify, we restrict to two tasks and always assume p_2 ≤ 2p_1.
(The latter condition is w.l.o.g.; proof omitted.)
Outline: Given p_1, p_2, e_1, denote by max_e2 the maximum execution time so that T = {(p_1, e_1), (p_2, max_e2)} is RM-schedulable.
We define U^{p1,p2}_{e1} to be U_T where T = {(p_1, e_1), (p_2, max_e2)}.
We say that T fully utilizes the processor: any increase in an execution time causes RM-infeasibility.
Now we find the (global) minimum minU of U^{p1,p2}_{e1}.
Note that this suffices to obtain the desired result:
- Given a set of tasks T = {(p_1, e_1), (p_2, e_2)} satisfying U_T ≤ minU, we get U_T ≤ minU ≤ U^{p1,p2}_{e1}, and thus the execution time e_2 cannot be larger than max_e2. Thus, T is RM-schedulable.
- Given U > minU, there must be p_1, p_2, e_1 satisfying minU ≤ U^{p1,p2}_{e1} < U, where U^{p1,p2}_{e1} = U_T for the set of tasks T = {(p_1, e_1), (p_2, max_e2)}.
  However, now increasing e_1 by a sufficiently small ε > 0 makes the set RM-infeasible without making the utilization larger than U.

Proof – Special Case (Cont.)
Consider two cases depending on e_1:
1. e_1 < p_2 − p_1:
   The maximum RM-feasible max_e2 (with p_1, p_2, e_1 fixed) is p_2 − 2e_1, which gives the utilization
     U^{p1,p2}_{e1} = e_1/p_1 + max_e2/p_2 = e_1/p_1 + (p_2 − 2e_1)/p_2 = 1 + (e_1/p_2)(p_2/p_1 − 2)
   As p_2/p_1 − 2 ≤ 0, the utilization U^{p1,p2}_{e1} is minimized by maximizing e_1.
2. e_1 ≥ p_2 − p_1:
   The maximum RM-feasible max_e2 (with p_1, p_2, e_1 fixed) is p_1 − e_1, which gives the utilization
     U^{p1,p2}_{e1} = e_1/p_1 + max_e2/p_2 = e_1/p_1 + (p_1 − e_1)/p_2 = p_1/p_2 + (e_1/p_2)(p_2/p_1 − 1)
   As p_2/p_1 − 1 ≥ 0, the utilization U^{p1,p2}_{e1} is minimized by minimizing e_1.
The minimum of U^{p1,p2}_{e1} is attained at e_1 = p_2 − p_1.
(Both expressions defining U^{p1,p2}_{e1} give the same value for e_1 = p_2 − p_1.)

Proof – Special Case (Cont.)
Substitute e_1 = p_2 − p_1 into the expression for U^{p1,p2}_{e1}:
  U^{p1,p2}_{p2−p1} = p_1/p_2 + ((p_2 − p_1)/p_2)(p_2/p_1 − 1)
                    = p_1/p_2 + (1 − p_1/p_2)(p_2/p_1 − 1)
                    = p_1/p_2 + (p_1/p_2)(p_2/p_1 − 1)(p_2/p_1 − 1)
                    = (p_1/p_2)(1 + (p_2/p_1 − 1)²)
Denoting G = p_2/p_1 − 1, we obtain
  U^{p1,p2}_{p2−p1} = (p_1/p_2)(1 + G²) = (1 + G²)/(p_2/p_1) = (1 + G²)/(1 + G)
Differentiating with respect to G we get
  (G² + 2G − 1)/(1 + G)²
which vanishes at G = −1 ± √2. Here only G = −1 + √2 > 0 is acceptable, since the other root is negative.

Proof – Special Case (Cont.)
Thus the minimum of U^{p1,p2}_{e1} is
  (1 + (√2 − 1)²)/(1 + (√2 − 1)) = (4 − 2√2)/√2 = 2(√2 − 1)
It is attained at periods satisfying G = p_2/p_1 − 1 = √2 − 1, i.e. satisfying p_2 = √2 · p_1.
The execution time e_1 which, at full utilization of the processor (due to max_e2), gives the minimum utilization is
  e_1 = p_2 − p_1 = (√2 − 1) p_1
and the corresponding max_e2 = p_1 − e_1 = p_1 − (p_2 − p_1) = 2p_1 − p_2.
Scaling to p_1 = 1, we obtain a completely determined example
  p_1 = 1,  p_2 = √2 ≈ 1.41,  e_1 = √2 − 1 ≈ 0.41,  max_e2 = 2 − √2 ≈ 0.59
that fully utilizes the processor (no execution time can be increased) but has the minimum utilization 2(√2 − 1).

Proof Idea of Theorem 16
Fix periods p_1 < ... < p_n so that (w.l.o.g.) p_n ≤ 2p_1. Then the following set of tasks has the smallest utilization among all task sets that fully utilize the processor (i.e., any increase in any execution time makes the set unschedulable):
  e_k = p_{k+1} − p_k   for k = 1, ..., n−1
  e_n = p_n − 2 Σ_{k=1}^{n−1} e_k = 2p_1 − p_n
(Gantt chart illustrating the execution of T_1, ..., T_n over [0, p_n] omitted.)

Time-Demand Analysis
Consider a set of n tasks T = {T_1, ..., T_n}.
Recall that we consider only independent, preemptable, in-phase (i.e. φ_i = 0 for all i) tasks without resource contentions.
Assume that D_i ≤ p_i for every i, and consider an arbitrary fixed-priority algorithm. W.l.o.g. assume T_1 ≻ ... ≻ T_n.
Idea: For every task T_i and every time instant t ≥ 0, compute the total execution time w_i(t) (the time demand) of the first job J_{i,1} and of all higher-priority jobs released up to time t.
If w_i(t) ≤ t for some time t ≤ D_i, then J_{i,1} is schedulable, and hence all jobs of T_i are schedulable.

Time-Demand Analysis
- Consider one task T_i at a time, starting with the highest priority and working down to the lowest priority.
- Focus on the first job J_{i,1} of T_i.
  If J_{i,1} makes it, all jobs of T_i will make it, due to φ_i = 0.
- At time t ≥ 0, the processor time demand w_i(t) for this job and all higher-priority jobs released in [0, t] is bounded by
    w_i(t) = e_i + Σ_{ℓ=1}^{i−1} ⌈t/p_ℓ⌉ e_ℓ    for 0 < t ≤ p_i
  (Note that the smallest t for which w_i(t) ≤ t is the response time of J_{i,1}, and hence the maximum response time of jobs in T_i.)
- If w_i(t) ≤ t for some t ≤ D_i, the job J_{i,1} meets its deadline D_i.
- If w_i(t) > t for all 0 < t ≤ D_i, then the first job of the task cannot complete by its deadline.
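
The following sketch implements this test (an illustration, assuming tasks are listed from highest to lowest priority as (p, e, D) triples); per the observation on a later slide, it suffices to evaluate w_i(t) at D_i and at the release times of higher-priority jobs:

```python
from math import ceil, floor

def time_demand_schedulable(tasks):
    """tasks: list of (p, e, D), sorted from highest to lowest priority."""
    for i, (p_i, e_i, D_i) in enumerate(tasks):
        higher = tasks[:i]
        # candidate instants: D_i and multiples of higher-priority periods <= D_i
        points = {D_i} | {j * p for p, _, _ in higher
                          for j in range(1, floor(D_i / p) + 1)}
        if not any(e_i + sum(ceil(t / p) * e for p, e, _ in higher) <= t
                   for t in points):
            return False        # the first job of T_i misses its deadline
    return True

# The example on the next slide, with D_i = p_i:
tasks = [(3, 1, 3), (5, 1.5, 5), (7, 1.25, 7), (9, 0.5, 9)]
print(time_demand_schedulable(tasks))  # True, although U_T > U_RM(4)
```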

Time-Demand Analysis – Example
Example: T_1 = (3, 1), T_2 = (5, 1.5), T_3 = (7, 1.25), T_4 = (9, 0.5)
This set is schedulable by RM even though
  U_{T_1,...,T_4} = 0.85 > 0.757 = U_RM(4)

Time-Demand Analysis
- The time-demand function w_i(t) is a staircase function
  - Steps in the time demand for a task occur at multiples of the periods of higher-priority tasks
  - The value of w_i(t) − t linearly decreases from a step until the next step
- If our interest is the schedulability of a task, it suffices to check whether w_i(t) ≤ t at the time instants when a higher-priority job is released, and at D_i
- Our schedulability test becomes:
  - Compute w_i(t)
  - Check whether w_i(t) ≤ t for some t equal either to D_i, or to j·p_k, where k = 1, 2, ..., i and j = 1, 2, ..., ⌊D_i/p_k⌋

Time-Demand Analysis – Comments
- The time-demand analysis schedulability test is more complex than the schedulable utilization test, but more general:
  - It works for any fixed-priority scheduling algorithm, provided the tasks have short response times (D_i ≤ p_i)
    (It can be extended to tasks with arbitrary deadlines)
  - It is still more efficient than exhaustive simulation.
- Assuming that the tasks are in phase, the time-demand analysis is complete.
We have considered the time-demand analysis for tasks in phase. In particular, we used the fact that the first job has the maximum response time.
This is not true if the jobs are not in phase; then we need to identify the so-called critical instant, the time instant at which the system is most loaded and has its worst response time.

Critical Instant – Formally
Definition 17
A critical instant t_crit of a task T_i is a time instant in which a job J_{i,k} of T_i is released so that J_{i,k} either does not meet its deadline, or has the maximum response time of all jobs in T_i.
Theorem 18
In a fixed-priority system where every job completes before the next job in the same task is released, a critical instant of a task T_i occurs when one of its jobs J_{i,k} is released at the same time as a job from every higher-priority task.
Note that the situation described in the theorem does not have to occur if the tasks are not in phase!

Critical Instant and Schedulability Tests
We use critical instants to get upper bounds on schedulability as
follows:
▶ Set the phases of all tasks to zero, which gives a new set of tasks
  T' = {T'_1, …, T'_n}
  By Theorem 18, the response time of the first job J'_{i,1} of T'_i in T' is at
  least as large as the response time of every job of T_i in T.
▶ Decide schedulability of T', e.g. using the time-demand
  analysis.
▶ If T' is schedulable, then T is also schedulable.
▶ If T' is not schedulable, then T does not have to be
  unschedulable.
  It may still be schedulable, which makes the time-demand analysis
  incomplete in general for tasks not in phase.
107

Dynamic vs Fixed Priority
▶ EDF
  ▶ pros:
    ▶ optimal
    ▶ very simple and complete test for schedulability
  ▶ cons:
    ▶ difficult to predict which job misses its deadline
    ▶ strictly following EDF in case of overloads assigns higher
      priority to jobs that have missed their deadlines
    ▶ larger scheduling overhead
▶ DM (RM)
  ▶ pros:
    ▶ easier to predict which job misses its deadline (in particular,
      tasks are not blocked by lower-priority tasks)
    ▶ easy implementation with little scheduling overhead
    ▶ (optimal in some cases often occurring in practice)
  ▶ cons:
    ▶ not optimal
    ▶ incomplete and more involved tests for schedulability
108

Real-Time Scheduling
Priority-Driven Scheduling
Aperiodic Tasks
109

Current Assumptions
▶ Single processor
▶ Fixed number, n, of independent periodic tasks
▶ Jobs can be preempted at any time and never suspend
  themselves
▶ No resource contentions
▶ Aperiodic jobs exist
  ▶ They are independent of each other, and of the periodic
    tasks
  ▶ They can be preempted at any time
▶ There are no sporadic jobs (for now)
▶ Jobs are scheduled using a priority-driven algorithm
110

Scheduling Aperiodic Jobs
Consider:
▶ A set T = {T_1, …, T_n} of periodic tasks
▶ An aperiodic task A
Recall that:
▶ A schedule is feasible if all jobs with hard real-time
  constraints complete before their deadlines
  ⇒ this includes all periodic jobs
▶ A scheduling algorithm is optimal if it always produces a
  feasible schedule whenever such a schedule exists and, if
  a cost function is given, minimizes the cost
We assume that the periodic tasks are scheduled using a
priority-driven algorithm
111

Background Scheduling of Aperiodic Jobs
▶ Aperiodic jobs are scheduled and executed only at times
  when there are no periodic jobs ready for execution
▶ Advantages
  ▶ Clearly produces feasible schedules
  ▶ Extremely simple to implement
▶ Disadvantages
  ▶ Not optimal, since the execution of aperiodic jobs may be
    unnecessarily delayed
Example: T_1 = (3, 1), T_2 = (10, 4)
112

Polled Execution of Aperiodic Jobs
▶ We may use a polling server
  ▶ A periodic job (p_s, e_s) scheduled according to the periodic
    algorithm, generally as the highest-priority job
  ▶ When executed, it examines the aperiodic job queue
    ▶ If an aperiodic job is in the queue, it is executed for up to e_s
      time units
    ▶ If the aperiodic queue is empty, the polling server
      self-suspends, giving up its execution slot
  ▶ The server does not wake up once it has self-suspended;
    aperiodic jobs which become active during a period are not
    considered for execution until the next period begins
▶ Simple to prove correctness, but the performance is less than ideal –
  it executes aperiodic jobs only in particular timeslots
113

Polled Execution of Aperiodic Jobs
Example: T_1 = (3, 1), T_2 = (10, 4), poller = (2.5, 0.5)
Can we do better?
Yes, the polling server is a special case of a periodic server for
aperiodic jobs.
114

Periodic Servers – Terminology
periodic server = a task that behaves much like a periodic task,
but is created for the purpose of executing aperiodic jobs
▶ A periodic server, T_S = (p_S, e_S)
  ▶ p_S is the period of the server
  ▶ e_S is the (maximal) budget of the server
  ▶ The budget can be consumed and replenished; the budget
    is exhausted when it reaches 0
    (Periodic servers differ in how they consume and replenish the budget)
▶ A periodic server is
  ▶ backlogged whenever the aperiodic job queue is non-empty
  ▶ idle if the queue is empty
  ▶ eligible if it is backlogged and the budget is not exhausted
▶ When a periodic server is eligible, it is scheduled as any
  other periodic task with parameters (p_S, e_S)
115

Periodic Servers
Each periodic server is thus specified by
▶ consumption rules: how the budget is consumed
▶ replenishment rules: when and how the budget is
  replenished
Polling server
▶ consumption rules:
  ▶ Whenever the server executes, the budget is consumed at
    the rate of one per unit time.
  ▶ Whenever the server becomes idle, the budget gets
    immediately exhausted
▶ replenishment rule: at each time instant k·p_S, replenish
  the budget to e_S
116

Periodic Servers
Deferrable server
▶ Consumption rules:
  ▶ The budget is consumed at the rate of one per unit time
    whenever the server executes
  ▶ Unused budget is retained throughout the period, to be
    used whenever there are aperiodic jobs to execute
    (i.e. instead of discarding the budget if there is no aperiodic job to
    execute at the start of a period, keep it in the hope that a job arrives)
▶ Replenishment rule:
  ▶ The budget is set to e_S at multiples of the period,
  ▶ i.e. at time instants k·p_S for k = 0, 1, 2, …
(Note that the server is not able to accumulate the budget over
periods)
We consider both
▶ Fixed-priority scheduling
▶ Dynamic-priority scheduling (EDF)
117

Deferrable Server – RM
Here the tasks are scheduled using RM.
Is it possible to increase the budget of the server to 1.5?
118

Deferrable Server – RM
Consider T_1 = (3.5, 1.5), T_2 = (6.5, 0.5) and T_DS = (3, 1)
A critical instant for T_1 = (3.5, 1.5) looks as follows:
i.e. increasing the budget above 1 may cause T_1 to miss its
deadline
119

Deferrable Server – Critical Instant
Lemma 19
Assume a fixed-priority scheduling algorithm. Assume that
D_i ≤ p_i and that the deferrable server (p_S, e_S) has the highest
priority among all tasks.
Then a critical instant of every periodic task T_i occurs at a time t_0
when all of the following are true:
▶ One of its jobs J_{i,c} is released at t_0
▶ A job of every higher-priority periodic task is released at t_0
▶ The budget of the server is e_S at t_0, one or more aperiodic
  jobs are released at t_0, and they keep the server
  backlogged hereafter
▶ The next replenishment time of the server is t_0 + e_S
120

Deferrable Server – Critical Instant
Assume T_DS ≻ T_1 ≻ T_2 ≻ ··· ≻ T_n
(i.e., among the periodic tasks, T_1 has the highest priority and T_n the lowest)
121

Deferrable Server – Time-Demand Analysis
Assume that the deferrable server has the highest priority
▶ The definition of critical instant is identical to that for the
  periodic tasks without the deferrable server +
  the worst-case requirements for the server
▶ Thus the expression for the time-demand function
  becomes

    w_i(t) = e_i + Σ_{k=1}^{i−1} ⌈t/p_k⌉·e_k + e_S + ⌈(t − e_S)/p_S⌉·e_S   for 0 < t ≤ p_i

▶ To determine whether the task T_i is schedulable, we simply
  check whether w_i(t) ≤ t for some t ≤ D_i
  Note that this is a sufficient condition, not a necessary one.
▶ Check whether w_i(t) ≤ t for some t equal either
  ▶ to D_i, or
  ▶ to j·p_k where k = 1, 2, …, i and j = 1, 2, …, ⌊D_i/p_k⌋, or
  ▶ to e_S, e_S + p_S, e_S + 2p_S, …, e_S + ⌊(D_i − e_S)/p_S⌋·p_S
122
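
A minimal companion sketch of this extended demand function (C as before, reusing the task_t type from the earlier time-demand sketch; valid at the test points listed above, where t ≥ e_S; the code and names are ours):

#include <math.h>

typedef struct { double p, e, d; } task_t;  /* as in the earlier sketch */

/* time demand of task i with a highest-priority deferrable server (pS, eS) */
static double w_ds(const task_t *T, int i, double t, double pS, double eS) {
    double demand = T[i].e + eS + ceil((t - eS) / pS) * eS;  /* server term */
    for (int k = 0; k < i; k++)
        demand += ceil(t / T[k].p) * T[k].e;  /* higher-priority tasks */
    return demand;
}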

Deferrable Server – Time-Demand Analysis
T_DS = (3, 1.0), T_1 = (3.5, 1.5), T_2 = (6.5, 0.5)
123

Deferrable Server – Schedulable Utilization
▶ No maximum schedulable utilization is known in general
▶ A special case:
  ▶ A set T of n independent, preemptable periodic tasks
    whose periods satisfy p_S < p_1 < ··· < p_n < 2p_S and
    p_n > p_S + e_S, and whose relative deadlines are equal to
    their respective periods, can be scheduled according to RM
    with a deferrable server provided that

      U_T ≤ U_{RM/DS}(n) := (n − 1)·[ ( (u_S + 2)/(u_S + 1) )^{1/(n−1)} − 1 ]

    where u_S = e_S/p_S
124

Deferrable Server – EDF
Here the tasks are scheduled using EDF.
T_DS = (3, 1), T_1 = (2, 3.5, 1.5), T_2 = (6.5, 0.5)
125

Deferrable Server – EDF – Schedulability
Theorem 20
A set of n independent, preemptable, periodic tasks satisfying
p_i ≤ D_i for all 1 ≤ i ≤ n is schedulable with a deferrable server
with period p_S, execution budget e_S and utilization u_S = e_S/p_S
according to the EDF algorithm if:

  Σ_{k=1}^{n} u_k + u_S·( 1 + (p_S − e_S)/min_i D_i ) ≤ 1
126

Sporadic Server – Motivation
▶ Problem with the polling server: T_PS = (p_S, e_S) executes
  aperiodic tasks only at multiples of p_S
▶ Problem with the deferrable server: T_DS = (p_S, e_S) may delay
  lower-priority jobs longer than a periodic task (p_S, e_S) would
  Therefore special versions of time-demand analysis and utilization
  bounds were needed.
▶ Sporadic server T_SS = (p_S, e_S)
  ▶ may execute jobs "in the middle" of its period
  ▶ never delays periodic tasks for longer than the periodic
    task (p_S, e_S) would
  Thus it can be tested for schedulability as an ordinary periodic task.
Originally proposed by Sprunt, Sha & Lehoczky in 1989;
the original version contains a bug which allows a longer delay of lower-priority jobs.
Part of the POSIX standard;
also incorrect, as observed and (probably) corrected by Stanovich in 2010.
127

Very Simple Sporadic Server
For simplicity, we consider only fixed-priority scheduling, i.e. assume
T_1 ≻ T_2 ≻ ··· ≻ T_n and consider a sporadic server T_SS = (p_S, e_S) with
the highest priority
Notation:
▶ t_r = the latest replenishment time
▶ t_f = the first instant after t_r at which the server begins to execute
▶ n_r = a variable representing the next replenishment
▶ Consumption rule: the budget is consumed (at the rate of one
  per unit time) whenever the current time t satisfies t ≥ t_f
▶ Replenishment rules: at the beginning, t_r = n_r = 0
  ▶ Whenever the current time is equal to n_r, the budget is set
    to e_S and t_r is set to the current time
  ▶ At the first instant t_f after t_r at which the server starts
    executing, n_r is set to t_f + p_S
(Note that such a server resembles a periodic task with the highest priority
whose jobs are released at times t_f and whose execution times are at most e_S)
128
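
A sketch of this bookkeeping (illustrative C; we assume a kernel that calls ss_tick at every scheduling point, passing the elapsed time dt since the last call and a flag telling whether the server is executing; all names and the calling convention are our assumptions):

typedef struct {
    double pS, eS;     /* server period and maximal budget */
    double budget;     /* remaining budget */
    double tr, tf, nr; /* latest replenishment, first execution after tr,
                          next replenishment */
    int started;       /* has the server started executing since tr? */
} sporadic_server_t;

void ss_init(sporadic_server_t *s, double pS, double eS) {
    s->pS = pS; s->eS = eS; s->budget = eS;
    s->tr = s->nr = s->tf = 0; s->started = 0;
}

void ss_tick(sporadic_server_t *s, double t, double dt, int server_executes) {
    if (t >= s->nr) {              /* replenishment rule */
        s->budget = s->eS;
        s->tr = t;
        s->started = 0;
    }
    if (server_executes && !s->started) {
        s->tf = t;                 /* first execution after tr */
        s->nr = s->tf + s->pS;     /* next replenishment at tf + pS */
        s->started = 1;
    }
    if (s->started && t >= s->tf) {
        s->budget -= dt;           /* consumed at rate one once t >= tf */
        if (s->budget < 0) s->budget = 0;
    }
}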

Very Simple Sporadic/Background Server
New notation:
▶ t_r = the latest replenishment time
▶ t_f = the first instant after t_r at which the server begins to execute and at
  least one task of T is not idle
▶ n_r = a variable representing the next replenishment
▶ Consumption rule: the budget is consumed (at the rate of one
  per unit time) whenever the current time t satisfies t ≥ t_f and at
  least one task of T is not idle
▶ Replenishment rules: at the beginning, t_r = n_r = 0
  ▶ Whenever the current time is equal to n_r, the budget is set
    to e_S and t_r is set to the current time
  ▶ At the beginning of an idle interval of T, the budget is set to
    e_S and n_r is set to the end of this interval
  ▶ At the first instant t_f after t_r at which the server starts
    executing and T is not idle, n_r is set to t_f + p_S
This combines the very simple sporadic server with background scheduling.
129

Very Simple Sporadic Server
Correctness (informally):
Assuming that T never idles, the sporadic server resembles a
periodic task with the highest priority whose jobs are released
at times t_f and whose execution times are at most e_S
Whenever T idles, the sporadic server executes in the
background, i.e. does not block any periodic task, and hence does
not consume the budget
Whenever an idle interval of T ends, we may treat this situation
as a restart of the system with possibly different phases of
tasks (so that it is safe to have the budget equal to e_S)
Note that in both versions of the sporadic server, e_S units of
execution time are available for aperiodic jobs every p_S units of time
This means that if the server is always backlogged, then it executes for e_S
time units every p_S units of time
130

Real-Time Scheduling
Priority-Driven Scheduling
Sporadic Tasks
131

Current Assumptions
▶ Single processor
▶ Fixed number, n, of independent periodic tasks, T_1, …, T_n
  where T_i = (φ_i, p_i, e_i, D_i)
▶ Jobs can be preempted at any time and never suspend
  themselves
▶ No resource contentions
▶ Sporadic tasks
  ▶ Independent of the periodic tasks
  ▶ Jobs can be preempted at any time
▶ Aperiodic tasks
  For simplicity scheduled in the background – i.e. we may ignore them
▶ Jobs are scheduled using a priority-driven algorithm
A sporadic job = a job of a sporadic task
132

Our situation
▶ Based on the execution time and deadline of each newly arrived
  sporadic job, decide whether to accept or reject the job
▶ Accepting the job implies that the job will complete within its
  deadline, without causing any periodic job or previously
  accepted sporadic job to miss its deadline
▶ Do not accept a sporadic job if it cannot be guaranteed to meet its
  deadline
133

Scheduling Sporadic Jobs – Correctness and Optimality
▶ A correct schedule is one where all periodic tasks, and all
  sporadic jobs that have been accepted, meet their deadlines
▶ A scheduling algorithm supporting sporadic jobs is a correct
  algorithm if it only produces correct schedules for the system
▶ A sporadic job scheduling algorithm is optimal if it accepts a new
  sporadic job, and schedules that job to complete by its deadline,
  iff the new job can be correctly scheduled to complete in time
134

Model for Scheduling Sporadic Jobs with EDF
▶ Assume that all jobs in the system are scheduled by EDF
  ▶ if more sporadic jobs are released at the same time, their
    acceptance test is done in the EDF order
▶ Definitions:
  ▶ Sporadic jobs are denoted by S(r, d, e) where r is the
    release time, d the (absolute) deadline, and e the
    maximum execution time
  ▶ The density of S(r, d, e) is defined by e/(d − r)
  ▶ The total density of a set of sporadic jobs is the sum of the
    densities of these jobs
  ▶ The sporadic job S(r, d, e) is active at time t iff t ∈ (r, d]
Note that each job of a periodic task (φ, p, e, D) can be seen as a
sporadic job; to simplify, we assume that always D ≤ p.
This in turn means that there is always at most one job of a given task active
at a given time instant.
For every job of this task released at r with absolute deadline d, we obtain
the density e/(d − r) = e/D
135

Schedulability of Sporadic Jobs with EDF
Theorem 21
A set of independent, preemptable sporadic jobs is schedulable
according to EDF if at every time instant t the total density of all
jobs active at time t is at most one.
Proof.
By contradiction. Suppose that a job misses its deadline at t, and that no
deadlines are missed before t.
Let t_0 be the supremum of the time instants before t when either the
system idles or a job with a deadline after t executes.
Suppose that jobs J_1, …, J_k execute in [t_0, t] and that they are
ordered w.r.t. increasing deadlines (J_k misses its deadline at t).
Let L be the number of releases and completions in [t_0, t); denote by
t_i the time instant when the i-th such event occurs (then t_1 = t_0; we
put t_{L+1} = t).
Denote by X_i the set of all jobs that are active during the interval
(t_i, t_{i+1}] and let Δ_i be their total density.
The rest on whiteboard …

136

Sporadic Jobs with EDF – Example
Note that the above theorem covers both the periodic as well
as the sporadic jobs
This test is sufficient but not necessary
Example 22
Three sporadic jobs: S_1(0, 2, 1), S_2(0.5, 2.5, 1), S_3(1, 3, 1)
The total density at time 1.5 is 1.5
Yet, the jobs are schedulable by EDF
137

Admission Control for Sporadic Jobs with EDF
Let Δ be the total density of the periodic tasks.
Assume that a new sporadic job S(t, d, e) is released at time t.
▶ At time t there are n active sporadic jobs in the system
▶ The EDF scheduler maintains a list of the jobs, in
  non-decreasing order of their deadlines
▶ The deadlines partition the time from t to ∞ into n + 1
  discrete intervals I_1, I_2, …, I_{n+1}
  ▶ I_1 begins at t and ends at the earliest sporadic job deadline
  ▶ For each 1 ≤ k ≤ n, each I_{k+1} begins when the interval I_k
    ends, and ends at the next deadline in the list (or ∞ for I_{n+1})
▶ The scheduler maintains the total density Δ_{S,k} of the sporadic
  jobs active in each interval I_k
▶ Let I_ℓ be the interval containing the deadline d of the new
  sporadic job S(t, d, e)
▶ The scheduler accepts the job if
  e/(d − t) + Δ + Δ_{S,k} ≤ 1 for all k = 1, 2, …, ℓ
▶ i.e. accept if the new sporadic job can be added without
  increasing the density of any interval past 1
138
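
A compact sketch of this acceptance test (illustrative C; delta is the total density Δ of the periodic tasks, deadline[0..n-1] are the sorted deadlines of the n active sporadic jobs, and dens[0..n] holds Δ_{S,k} for the intervals I_1, …, I_{n+1}; a full implementation would also split I_ℓ at d when the job is admitted, which is omitted here):

/* returns 1 and updates interval densities iff S(t, d, e) is accepted */
int accept(double t, double d, double e, double delta,
           const double deadline[], double dens[], int n) {
    double nd = e / (d - t);           /* density of the new job */
    int l = 0;
    while (l < n && deadline[l] < d)   /* find the interval containing d */
        l++;
    for (int k = 0; k <= l; k++)       /* check I_1 .. I_l (0-based) */
        if (nd + delta + dens[k] > 1.0)
            return 0;                  /* some interval would exceed 1 */
    for (int k = 0; k <= l; k++)
        dens[k] += nd;                 /* admitted: update densities */
    return 1;
}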

139

Admission Control for Sporadic Jobs with EDF
This acceptance test is not optimal: a sporadic job may be
rejected even though it could be scheduled.
▶ The test is based on the density and hence is sufficient but
  not necessary.
▶ It is possible to derive a – much more complex –
  expression for schedulability which takes slack time into
  account and is optimal. It is unclear whether the complexity is
  worthwhile.
140

Sporadic Jobs with EDF
▶ One way to schedule sporadic jobs in a fixed-priority system is
  to use a sporadic server to execute them
▶ Because the server (p_S, e_S) has e_S units of processor time
  every p_S units of time, the scheduler can compute the least
  amount of time available to every sporadic job in the system
  ▶ Assume that sporadic jobs are ordered among themselves
    according to EDF
  ▶ When the first sporadic job S_1(t, d_{S,1}, e_{S,1}) arrives, there are at
    least ⌊(d_{S,1} − t)/p_S⌋·e_S
    units of processor time available to the server before the
    deadline of the job
  ▶ Therefore it accepts S_1 if the slack of the job
    σ_{S,1}(t) = ⌊(d_{S,1} − t)/p_S⌋·e_S − e_{S,1} ≥ 0
141

Sporadic Jobs with EDF
▶ To decide whether a new job S_i(t, d_{S,i}, e_{S,i}) is acceptable when
  there are n sporadic jobs in the system, the scheduler first
  computes the slack σ_{S,i}(t) of S_i:

    σ_{S,i}(t) = ⌊(d_{S,i} − t)/p_S⌋·e_S − e_{S,i} − Σ_{d_{S,k} < d_{S,i}} (e_{S,k} − ξ_{S,k})

  where ξ_{S,k} is the execution time of the completed part of
  the existing job S_k
  Note that the sum is taken over the sporadic jobs with earlier deadlines than S_i,
  since sporadic jobs are ordered according to EDF
▶ The job cannot be accepted if σ_{S,i}(t) < 0
▶ If σ_{S,i}(t) ≥ 0, the scheduler checks whether any existing sporadic
  job S_k with a deadline equal to, or after d_{S,i} may be adversely
  affected by the acceptance of S_i, i.e. it checks whether σ_{S,k}(t) ≥ e_{S,i}
142

Real-Time Scheduling
Resource Access Control
[Some parts of this lecture are based on a real-time systems course
of Colin Perkins
http://csperkins.org/teaching/rtes/index.html]
143

Current Assumptions
▶ Single processor
▶ Individual jobs
  (that possibly belong to periodic/aperiodic/sporadic tasks)
▶ Jobs can be preempted at any time and never suspend
  themselves
▶ Jobs are scheduled using a priority-driven algorithm
  i.e., jobs are assigned priorities, and the scheduler executes jobs according to
  these priorities
▶ n resources R_1, …, R_n of distinct types
  ▶ used in a non-preemptable and mutually exclusive manner;
    serially reusable
144

Motivation & Notation
Resources may represent:
IHardware devices such as sensors and actuators
IDisk or memory capacity, buffer space
ISoftware resources: locks, queues, mutexes etc.
Assume a lock-based concurrency control mechanism
IA job wanting to use a resourceRkexecutesL(Rk)to lock the
resourceRk
IWhen the job is nished with the resourceRk, unlocks this
resource by executingU(Rk)
IIf lock request fails, the requesting job isblockedand has to
wait, when the requested resource becomes available, it is
unblocked
In particular, a job holding a lock cannot be preempted by a higher
priority job needing that lock
The segment of a job that begins at a lock and ends at a matching
unlock is acritical section(CS)
ICS must be properly nested if a job needs multiple resources145

Example
J_1, J_2, J_3 scheduled according to EDF.
▶ At 0, J_3 is ready and executes
▶ At 1, J_3 executes L(R) and is granted R
▶ J_2 is released at 2, preempts J_3 and begins to execute
▶ At 4, J_2 executes L(R), becomes blocked; J_3 executes
▶ At 6, J_1 becomes ready, preempts J_3 and begins to execute
▶ At 8, J_1 executes L(R), becomes blocked, and J_3 executes
▶ At 9, J_3 executes U(R) and both J_1 and J_2 are unblocked. J_1 has higher
  priority than J_2 and executes
▶ At 11, J_1 executes U(R) and continues executing
▶ At 12, J_1 completes; J_2 has higher priority than J_3 and holds the resource
  R, thus executes
▶ At 16, J_2 executes U(R) and continues executing
▶ At 17, J_2 completes; J_3 executes until completion at 18
146

Priority Inversion
Definition 23
Priority inversion occurs when
▶ a high-priority job
▶ is blocked by a low-priority job (holding a resource),
▶ which is subsequently preempted by a medium-priority job.
Then effectively the medium-priority job executes with higher
priority than the high-priority job, even though the two do not
contend for resources.
There may be arbitrarily many medium-priority jobs that
preempt the low-priority job ⇒ uncontrolled priority inversion
147

Priority Inversion – Example
Uncontrolled priority inversion:
A high-priority job (J_1) can be blocked by a low-priority job (J_3) for
an unknown amount of time, depending on the middle-priority jobs (J_2)
148

Deadlock
Definition 24 (suitable for resource access control)
A deadlock occurs when there is a set of jobs D such that each
job of D is waiting for a resource previously allocated by
another job of D.
Deadlocks can be
▶ detected: regularly check for deadlock, e.g. regularly search for
  cycles in a resource allocation graph
▶ avoided: postpone unsafe requests for resources even
  though they are available (banker's algorithm,
  priority-ceiling protocol)
▶ prevented: many methods invalidating the sufficient conditions
  for deadlock (e.g., impose a locking order on resources)
See your operating systems course for more information …
149

Deadlock – Example
Deadlock can result from piecemeal acquisition of resources: the classic
example of two jobs J_1 and J_2 both needing both resources R and R′
▶ J_2 locks R′ and J_1 locks R
▶ J_1 tries to get R′ and is blocked
▶ J_2 tries to get R and is blocked
150

Timing Anomalies due to Resources
In the previous example, the critical section of J_3 has length 4
… the critical section of J_3 shortened to 2.5
… but the response of J_1 becomes longer!
151

Controlling Timing Anomalies
Contention for resources causes timing anomalies, priority
inversion and deadlock
Several protocols exist to control the anomalies
▶ Non-preemptive CS
▶ Priority-inheritance protocol
▶ Priority-ceiling protocol
▶ …
Terminology:
▶ A job J_h is blocked by a job J_k when
  ▶ the priority of J_k is lower than the priority of J_h and
  ▶ J_k holds a resource R and
  ▶ J_h executes L(R).
In such a situation we sometimes say that J_h is blocked by
the corresponding critical section of J_k.
152

Non-preemptive Critical Sections
The protocol: when a job locks a resource, it is scheduled with a
priority higher than that of all other jobs (i.e., it is non-preemptive)
Example 25
Jobs J_1, J_2, J_3 with release times 2, 5, 0, resp., and with
execution times 4, 5, 7, resp.
153

Non-preemptive Critical Sections – Features
▶ no deadlock, as no job holding a resource is ever preempted
▶ no priority inversion:
  ▶ A job J_h can be blocked (by a lower-priority job) only at
    release time.
    (Indeed, if J_h is not blocked at its release time r_h, then no
    lower-priority job holds any resource at r_h. However, no lower-priority
    job can execute before the completion of J_h, and thus no
    lower-priority job may block J_h.)
  ▶ If J_h is blocked at release time, then once the blocking
    critical section completes, no lower-priority job can block J_h.
  ▶ It follows that any job can be blocked only once, at release
    time; the blocking time is bounded by the duration of one critical
    section of a lower-priority job.
Advantage: very simple; easy to implement with both fixed and dynamic
priorities; no prior knowledge of the resource demands of jobs needed
Disadvantage: every job can be blocked by every lower-priority job
with a critical section, even if there is no resource conflict
154

Priority-Inheritance Protocol
Idea: adjust the scheduling priorities of jobs during resource
access, to reduce the duration of timing anomalies
(As opposed to the non-preemptive CS protocol, this time the priority is not
always increased to the maximum)
Notation:
▶ assigned priority = the priority assigned to a job according to
  a standard scheduling algorithm
▶ At any time t, each ready job J_k is scheduled and executes
  at its current priority π_k(t), which may differ from its
  assigned priority and may vary with time
  ▶ The current priority π_k(t) of a job J_k may be raised to
    the higher priority π_h(t) of another job J_h
  ▶ In such a situation, the lower-priority job J_k is said to inherit
    the priority of the higher-priority job J_h, and J_k executes at
    its inherited priority π_h(t)
155

Priority-Inheritance Protocol
▶ Scheduling rules:
  ▶ Jobs are scheduled in a preemptable priority-driven manner
    according to their current priorities
  ▶ At release time, the current priority of a job is equal to its
    assigned priority
  ▶ The current priority remains equal to the assigned priority,
    except when the priority-inheritance rule is invoked
▶ Priority-inheritance rule:
  ▶ When a job J_h becomes blocked on a resource R, the job
    J_k which blocks J_h inherits the current priority π_h(t) of J_h;
  ▶ J_k executes at its inherited priority until it releases R;
    at that time, the priority of J_k is set to the highest priority of
    all jobs still blocked by J_k after releasing R.
    (the resulting priority may still be an inherited priority)
▶ Resource allocation: when a job J requests a resource R at time t:
  ▶ If R is free, R is allocated to J until J releases it
  ▶ If R is not free, the request is denied and J is blocked
    (Note that J is only denied R if the resource is held by another job.)
156
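
In POSIX, the priority-inheritance rule is available as a mutex protocol; a minimal sketch using the real pthreads API (on systems supporting _POSIX_THREAD_PRIO_INHERIT):

#include <pthread.h>

pthread_mutex_t lock;

int init_pi_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* the holder of `lock` inherits the priority of the
       highest-priority thread blocked on it */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    int err = pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return err;
}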

Priority-Inheritance Simple Example
non-preemptive CS:
priority inheritance:
▶ At 3, J_1 is blocked by J_3; J_3 inherits the priority of J_1
▶ At 5, J_2 is released but cannot preempt J_3, since the inherited priority of
  J_3 is higher than the (assigned) priority of J_2
157

Priority-Inheritance Example
▶ At 0, J_5 starts executing at priority 5; at 1 it executes L(Black)
▶ At 2, J_4 preempts J_5 and executes
▶ At 3, J_4 executes L(Shaded); J_4 continues to execute
▶ At 4, J_3 preempts J_4; at 5, J_2 preempts J_3
▶ At 6, J_2 executes L(Black) and is blocked by J_5. Thus J_5 inherits the
  priority 2 of J_2 and executes
▶ At 8, J_1 executes L(Shaded) and is blocked by J_4. Thus J_4 inherits the
  priority 1 of J_1 and executes
▶ At 9, J_4 executes L(Black) and is blocked by J_5. Thus J_5 inherits the
  current priority 1 of J_4 and executes
▶ At 11, J_5 executes U(Black); its priority returns to 5 (the priority before
  locking Black). Now J_4 has the highest priority (1) and executes
  the Black critical section.
  Later, when J_4 executes U(Black), the priority of J_4 remains 1 (since
  Shaded blocks J_1), and J_4 also finishes the Shaded critical section
  (at 13).
▶ At 13, J_4 executes U(Shaded); its priority returns to 4. J_1 now has the
  highest priority and executes
▶ At 15, J_1 completes; J_2 is granted Black, has the highest priority, and
  executes
▶ At 17, J_2 completes; afterwards J_3, J_4, J_5 complete.
158

Properties of Priority-Inheritance Protocol
▶ Simple to implement, does not require prior knowledge of
  resource requirements
▶ Jobs exhibit two types of "blocking"
  ▶ (Direct) blocking
    i.e., a job J_ℓ locks a resource R; a job J_h that executes L(R) is directly
    blocked by J_ℓ on R
  ▶ Priority-inheritance "blocking"
    i.e., a job J_h is preempted by a lower-priority job that inherited a
    higher priority
▶ Jobs may exhibit transitive blocking
  In the previous example, at 9, J_5 blocks J_4 and J_4 blocks J_1, hence J_5
  inherits the priority of J_1
▶ Deadlock is not prevented
  In the previous example, let J_5 request Shaded at 6.5; then J_4 and J_5
  become deadlocked
▶ Can reduce blocking time (see the next slide) compared to
  non-preemptable CSs but does not guarantee to minimize
  blocking
159

Priority-Inheritance – Blocking Time (Optional)
z_{ℓ,k} = the k-th critical section of J_ℓ
A job J_h is blocked by z_{ℓ,k} if J_h has higher assigned priority than J_ℓ but
has to wait for J_ℓ to exit z_{ℓ,k} in order to continue
β_{h,ℓ} = the set of all maximal critical sections z_{ℓ,k} that may block J_h,
i.e., which correspond to resources that are (potentially) used by jobs
with priorities equal to or higher than J_h.
(recall that CSs are properly nested; a maximal CS which may block J_h is
one which is not contained within any other CS which may block J_h)
Theorem 26
Let J_h be a job and let J_{h+1}, …, J_{h+m} be jobs with lower priority than
J_h. Then J_h can be blocked for at most the duration of one critical
section in each of β_{h,ℓ} where ℓ ∈ {h+1, …, h+m}.
The theorem is a direct consequence of the next lemma.
160

Lemma 27
J_h can be blocked by J_ℓ only if J_ℓ is executing within a critical
section z_{ℓ,k} of β_{h,ℓ} when J_h is released
▶ Assume that J_h is released at t and J_ℓ is in no CS of β_{h,ℓ} at t. We
  show that J_ℓ never executes between t and the completion of J_h:
  ▶ If J_ℓ is not in any CS at t, then its current priority at t is
    equal to its assigned priority and cannot increase. Thus, J_ℓ
    has to wait for the completion of J_h, as the current priority of J_h
    is always higher than the assigned priority of J_ℓ.
  ▶ If J_ℓ is still in a CS at t, then this CS does not belong to β_{h,ℓ},
    and thus it cannot block J_h before completing, and J_ℓ cannot
    execute before the completion of J_h.
▶ Assume that J_ℓ leaves z_{ℓ,k} ∈ β_{h,ℓ} at time t. We show that J_ℓ never
  executes between t and the completion of J_h:
  ▶ If J_ℓ is not in any CS at t, then, as above, J_ℓ never executes
    before the completion of J_h and cannot block J_h.
  ▶ If J_ℓ is still in a CS at t, then this CS does not belong to β_{h,ℓ},
    because otherwise z_{ℓ,k} would not be maximal. Thus J_ℓ
    cannot block J_h, and J_ℓ is never executed before the
    completion of J_h.
161

Priority-Inheritance – The Worst Case
J_1 is blocked for the total duration of all critical sections of all lower-priority
jobs.
162

Priority-Ceiling Protocol
The goal: to further reduce blocking times due to resource contention
and to prevent deadlock
▶ in its basic form, the priority-ceiling protocol works under the
  assumption that the priorities of jobs and the resources required by
  all jobs are known a priori
  can be extended to dynamic priority (job-level fixed priority), see later
Notation:
▶ The priority ceiling of any resource R_k is the highest priority of all
  the jobs that require R_k and is denoted by Π(R_k)
▶ At any time t, the current priority ceiling Π̂(t) of the system is
  equal to the highest priority ceiling of the resources that are in
  use at the time
▶ If all resources are free, Π̂(t) is equal to Ω, a newly introduced
  priority level that is lower than the lowest priority level of all jobs
163

Priority-Ceiling Protocol
The scheduling and priority-inheritance rules are the same as for the
priority-inheritance protocol
▶ Scheduling rules:
  ▶ Jobs are scheduled in a preemptable priority-driven manner
    according to their current priorities
  ▶ At release time, the current priority of a job is equal to its
    assigned priority
  ▶ The current priority remains equal to the assigned priority,
    except when the priority-inheritance rule is invoked
▶ Priority-inheritance rule:
  ▶ When a job J_h becomes blocked on a resource R, the job J_k
    which blocks J_h inherits the current priority π_h(t) of J_h;
  ▶ J_k executes at its inherited priority until it releases R;
    at that time, the priority of J_k is set to the highest priority of
    all jobs still blocked by J_k after releasing R.
    (which may still be an inherited priority)
164

Priority-Ceiling Protocol
Resource allocation rules:
▶ When a job J requests a resource R held by another job, the
  request fails and the requesting job blocks
▶ When a job J requests a resource R at time t, and that resource
  is free:
  ▶ If J's priority π(t) is strictly higher than the current priority
    ceiling Π̂(t), R is allocated to J
  ▶ If J's priority π(t) is not higher than Π̂(t), R is allocated to J
    only if J is the job holding the resource(s) whose priority
    ceiling is equal to Π̂(t); otherwise J is blocked
    (Note that only one job may hold the resources whose priority
    ceiling is equal to Π̂(t))
Note that unlike the priority-inheritance protocol, the priority-ceiling
protocol can deny access to an available resource.
165
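
POSIX also offers a ceiling-based mutex protocol; note that PTHREAD_PRIO_PROTECT implements the immediate ceiling-priority variant (the holder's priority is raised to the ceiling as soon as it locks), which is simpler than, but closely related to, the protocol described on these slides. A minimal sketch with the real pthreads API:

#include <pthread.h>

pthread_mutex_t lock;

int init_ceiling_mutex(int ceiling) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    /* `ceiling` must be the highest priority of any thread
       that will ever lock this mutex */
    pthread_mutexattr_setprioceiling(&attr, ceiling);
    int err = pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return err;
}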

Priority-Ceiling Protocol
▶ At 1, Π̂(t) = Ω; J_5 executes L(Black) and continues executing
▶ At 3, Π̂(t) = 2; J_4 executes L(Shaded); because the ceiling of the
  system Π̂(t) is higher than the current priority of J_4, job J_4 is blocked; J_5
  inherits J_4's priority and executes at priority 4
▶ At 4, J_3 preempts J_5; at 5, J_2 preempts J_3. At 6, J_2 requests Black and
  is directly blocked by J_5. Consequently, J_5 inherits priority 2 and
  executes until preempted by J_1
▶ At 8, J_1 executes L(Shaded); its priority is higher than Π̂(t) = 2, so its
  request is granted and J_1 executes; at 9, J_1 executes U(Shaded) and at
  10 completes
▶ At 11, J_5 releases Black and its priority drops to 5; J_2 becomes
  unblocked, is allocated Black and executes
▶ At 14, J_2 and J_3 complete; J_4 is granted Shaded (because its priority is
  higher than Π̂(t) = Ω) and executes
▶ At 16, J_4 executes L(Black), which is free; the priority of J_4 is not higher
  than Π̂(16) = 1, but J_4 is the job holding the resource whose priority
  ceiling is equal to Π̂(16). Thus J_4 gets Black and continues to execute;
  the rest is clear
166

Priority-Ceiling Protocol
Theorem 28
Assume a system of preemptable jobs with fixed assigned
priorities. Then
▶ a deadlock may never occur,
▶ a job can be blocked for at most the duration of one critical
  section.
167

These situations cannot occur with priority ceiling protocol:
168

Differences between the priority-inheritance and
priority-ceiling protocols
▶ Priority inheritance is greedy, while priority ceiling is not
  The priority-ceiling protocol may withhold access to a free
  resource, i.e., a job can be prevented from execution by a
  lower-priority job which does not hold the requested
  resource – avoidance "blocking"
▶ The priority-ceiling protocol forces a fixed order onto
  resource accesses, thus eliminating deadlock
169

Resources in Dynamic Priority Systems
The priority-ceiling protocol assumes fixed and known priorities
In a dynamic priority system, the priorities of the periodic tasks
change over time, while the set of resources required by
each task remains constant
▶ As a consequence, the priority ceiling of each resource
  changes over time
What happens if T_1 uses resource X, but T_2 does not?
▶ The priority ceiling of X is 1 for 0 ≤ t ≤ 4, becomes 2 for
  4 ≤ t ≤ 5, etc., even though the set of resources required
  by the tasks remains unchanged
170

Resources in Dynamic Priority Systems
▶ If a system is job-level fixed priority, but task-level dynamic
  priority, a priority-ceiling protocol can still be applied
  ▶ Each job in a task has a fixed priority once it is scheduled,
    but may be scheduled at a different priority than other jobs of
    the task (e.g. EDF)
  ▶ Update the priority ceilings of all resources each time a new
    job is introduced; use them until they are updated on the next job
    release
▶ This has been proven to prevent deadlocks, and no job is ever
  blocked for longer than the length of one critical section
▶ But: very inefficient, since the priority ceilings are updated
  frequently
▶ It may be better to use priority inheritance and accept the longer
  blocking
171

Schedulability Tests with Resources
How to adjust schedulability tests?
Add the blocking times to the execution times of jobs; then run the
test as normal
The blocking time b_i of a job J_i can be determined for all three
protocols:
▶ non-preemptable CS ⇒ b_i is bounded by the maximum
  length of a critical section in lower-priority jobs
▶ priority inheritance ⇒ b_i is bounded by the total length of
  the m longest critical sections, where m is the number of
  jobs that may block J_i
  (For a more precise formulation see Theorem 26.)
▶ priority ceiling ⇒ b_i is bounded by the maximum length of
  a single critical section
172

Mars Pathfinder vs Priority Inversion
▶ Mars Pathfinder = a US spacecraft that landed
  on Mars on July 4th, 1997.
▶ It consisted of a lander and a lightweight
  wheeled robotic Mars rover called Sojourner
▶ What happened:
  ▶ A few days into the mission, not long after Pathfinder started
    gathering meteorological data, it began experiencing total
    system resets, each resulting in losses of data.
  ▶ Apparently a software problem caused these resets.
▶ The system:
  ▶ Pathfinder used the well-known real-time embedded
    systems kernel VxWorks by Wind River.
  ▶ VxWorks uses preemptive priority-based scheduling, in this
    case a deadline-monotonic algorithm.
  ▶ Pathfinder contained an "information bus" (a shared
    memory) used for communication, synchronized by locks.
173

Mars Pathfinder – The Problem
▶ Problematic tasks:
  ▶ A high-priority bus management task ran frequently to
    move data in/out of the bus. If the bus had been locked,
    then this thread itself had to wait.
  ▶ A meteorological data gathering task ran as an infrequent,
    low-priority thread, and used the bus to publish its data.
  ▶ The bus was also used by a communications task
    with medium priority.
▶ Occasionally the communications task was
  invoked at the precise time when the bus management task
  (high priority) was blocked by the meteorological
  task (low priority) – priority inversion!
▶ The resulting long blocking of the bus management task
  caused a watchdog
  timer to go off, notice that the bus management task had not
  been executed for some time – which typically means that
  something has gone drastically wrong – and initiate a total system
  reset.
174

Mars Pathfinder – Solution
▶ JPL (Jet Propulsion Laboratory) engineers spent hours and
  hours running the system on a spacecraft replica.
▶ Early in the morning, after all but one engineer had gone home,
  the remaining engineer finally reproduced a system reset on the replica.
Solution: Turn priority inheritance on!
This was done online using a C language interpreter which allowed
engineers to execute C functions on-the-fly.
A short piece of code changed a mutex initialization parameter from FALSE to
TRUE.
175
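
The slides do not show the actual code; as an illustration only (assuming VxWorks-style mutex semaphores), the fix corresponds to creating the shared-bus mutex with the inversion-safe option, which enables priority inheritance:

#include <semLib.h>   /* VxWorks semaphore library */

/* before: priority inversion possible */
SEM_ID busMutexOld = semMCreate(SEM_Q_PRIORITY);

/* after: the holder inherits the priority of the
   highest-priority blocked task */
SEM_ID busMutexNew = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);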

Real-Time Scheduling
Multiprocessor Real-Time Systems
176

Multiprocessor Real-time Systems
▶ Many embedded systems are composed of many processors
  (control systems in cars, aircraft, industrial systems etc.)
▶ Today most processors in computers have multiple cores
  The main reason is that increasing the frequency of a single processor is
  no longer feasible (mostly due to power consumption problems, growing
  leakage currents, memory problems etc.)
Applications must be developed specifically for multiprocessor
systems.
177

Multiprocessor Frustration
In the case of real-time systems, multiple processors bring serious
difficulties concerning correctness, predictability and efficiency.
The "root of all evil" in global scheduling (Liu, 1969):
Few of the results obtained for a single processor generalize
directly to the multiple processor case; bringing in additional
processors adds a new dimension to the scheduling problem.
The simple fact that a task can use only one processor even
when several processors are free at the same time adds a
surprising amount of difficulty to the scheduling of multiple
processors.
178

The Model
▶ A job is a unit of work that is scheduled and executed by
  a system
  (characterised by its release time r_i, execution time e_i and deadline d_i)
▶ A task is a set of related jobs which jointly provide some
  system function
▶ Jobs execute on processors
  In this lecture we consider m processors
▶ Jobs may use some (shared) passive resources
179

Schedule
A schedule assigns, at every time instant, processors and resources to
jobs.
A schedule is feasible if all jobs with hard real-time constraints
complete before their deadlines.
A set of jobs is schedulable if there is a feasible schedule for the set.
A scheduling algorithm is optimal if it always produces a feasible
schedule whenever such a schedule exists
(and, if a cost function is given, minimizes the cost).
We also consider online scheduling algorithms that do not use any
knowledge about jobs that will be released in the future but are given
complete information about the jobs that have been released.
(e.g. EDF is online)
180

Multiprocessor Taxonomy
▶ Identical processors: all processors are identical and have the same
  computing power
▶ Uniform processors: each processor is characterized by its own
  computing capacity σ; it completes σ·t units of execution after
  executing for t time units
▶ Unrelated processors: there is an execution rate σ_{ij} associated
  with each job-processor pair (J_i, P_j) so that J_i completes σ_{ij}·t
  units of execution by executing on P_j for t time units
In addition, the cost of communication can be included, etc.
181

Assumptions – Priority Driven Scheduling
Throughout this lecture we assume:
▶ Unless otherwise stated, we consider m identical processors
▶ Jobs can be preempted at any time and never suspend
  themselves
▶ Context switch overhead is negligibly small
  i.e. assumed to be zero
▶ There is an unlimited number of priority levels
▶ For simplicity, we assume independent jobs that do not contend
  for resources
Unless otherwise stated, we assume that scheduling decisions take
place only when a job is released, or completed.
182

Multiprocessor Scheduling Taxonomy
Multiprocessor scheduling attempts to solve two problems:
▶ the allocation problem, i.e., on which processor a given job
  executes
▶ the priority problem, i.e., when and in what order the jobs
  execute
183

Issues
Which results from single-processor scheduling remain valid in the
multiprocessor setting?
▶ Are there simple optimal scheduling algorithms?
▶ Are there optimal online scheduling algorithms
  (i.e. those that do not know what jobs come in the future)?
▶ Are there efficient tests for schedulability?
In this lecture we consider:
▶ Individual jobs
▶ Periodic tasks
We start with n individual jobs {J_1, …, J_n}
184

Individual Jobs – Timing Anomalies
Priority order: J_1 ≻ J_2 ≻ J_3 ≻ J_4
185

Individual Jobs – EDF
EDF on m identical processors: at any time instant, the jobs with
the earliest absolute deadlines are executed on the available processors.
(Recall: no job can be executed on more than one processor at a given time!)
Is this optimal?
NO!
Example:
J_1, J_2, J_3 where
▶ r_i = 0 for i ∈ {1, 2, 3}
▶ e_1 = e_2 = 1 and e_3 = 5
▶ d_1 = 1, d_2 = 2, d_3 = 5
186

Individual Jobs – Online Scheduling
Theorem 29
No optimal on-line scheduler can exist for a set of jobs with two or
more distinct deadlines on any m > 1 processor system.
Proof.
Assume m = 2 and consider three jobs J_1, J_2, J_3 released at time
0 with the following parameters:
▶ e_1 = e_2 = 2 and e_3 = 4
▶ d_1 = d_2 = 4 and d_3 = 8
Depending on the scheduling in [0, 2], new jobs J_4, J_5 are released at
time 2 or 4:
▶ If J_3 is executed in [0, 2], then at 2 release J_4, J_5 with d_4 = d_5 = 4
  and e_4 = e_5 = 2.
▶ If J_3 is not executed in [0, 2], then at 4 release J_4, J_5 with
  d_4 = d_5 = 8 and e_4 = e_5 = 4.
In either case the schedule produced is not feasible. However, if the
scheduler is given either of the sets {J_1, …, J_5} at the beginning, then
there is a feasible schedule.
187

Individual Jobs – Speedup Helps(?)
Theorem 30
If a set of jobs is feasible on m identical processors, then the same
set of jobs will be scheduled to meet all deadlines by EDF on m identical
processors in which the individual processors are (2 − 1/m) times as
fast as in the original system.
The result is tight for EDF (assuming dynamic job priority):
Theorem 31
There are sets of jobs that can be feasibly scheduled on m identical
processors, but EDF cannot schedule them on m processors that are
only (2 − 1/m − ε) times faster, for every ε > 0.
… there are also general lower bounds for online algorithms:
Theorem 32
There are sets of jobs that can be feasibly scheduled on m (here m is
even) identical processors, but no online algorithm can schedule
them on m processors that are only (1 + ε) times faster, for every ε < 1/5.
[Optimal Time-Critical Scheduling Via Resource Augmentation, Phillips et al., STOC 1997]
188

Reactive Systems
Consider a fixed number, n, of independent periodic tasks
T = {T_1, …, T_n}
i.e. there is no dependency relation among jobs
▶ Unless otherwise stated, assume no phases and deadlines equal
  to periods
▶ Ignore aperiodic tasks
▶ No sporadic tasks unless otherwise stated
Utilization u_i of a periodic task T_i with period p_i and execution time e_i
is defined by u_i := e_i/p_i
u_i is the fraction of time a periodic task with period p_i and execution time e_i
keeps a processor busy
Total utilization U_T of a set of tasks T = {T_1, …, T_n} is defined as the
sum of the utilizations of all tasks of T, i.e. U_T := Σ_{i=1}^{n} u_i
Given a scheduling algorithm ALG, the schedulable utilization U_ALG of
ALG is the maximum number U such that for all T: U_T ≤ U implies that T
is schedulable by ALG.
189

Multiprocessor Scheduling Taxonomy
Allocation (migration type)
▶ No migration: each task is allocated to a processor
▶ (Task-level migration: the jobs of a task may execute on different
  processors; however, each job is assigned to a single processor)
▶ Job-level migration: a single job can migrate and execute on
  different processors
  (however, parallel execution of one job is not permitted, and migration
  takes place only when the job is rescheduled)
Priority type
▶ Fixed task-level priority
▶ Fixed job-level priority
▶ (Dynamic job-level priority)
Partitioned scheduling = no migration
Global scheduling = job-level migration
190

Fundamental Limit – Fixed Job-Level Priority
Consider m processors and m + 1 tasks T = {T_1, …, T_{m+1}}, each
T_i = (2L − 1, L).
Then
U_T = Σ_{i=1}^{m+1} L/(2L − 1) = (m + 1)·L/(2L − 1)
For very large L, this number is close to (m + 1)/2.
The set T is not schedulable using any fixed job-level priority
algorithm.
In other words, the schedulable utilization of fixed job-level priority
algorithms is at most (m + 1)/2, i.e., half of the processors' capacity.
There are variants of EDF achieving this bound (see later slides).
191

Partitioned vs Global Scheduling
Most algorithms up to the end of the 1990s were based on partitioned
scheduling
▶ no migration
From the end of the 1990s, there have been many results concerning global
scheduling
▶ job-level migration
Task-level migration has not been studied much, so it is not covered in
this lecture.
We consider fixed job-level priority (e.g. EDF) and fixed
task-level priority (e.g. RM).
As before, we ignore dynamic job-level priority.
192

Partitioned Scheduling & Fixed Job-Level Priority
The algorithm proceeds in two phases:
1.Allocate tasks to processors, i.e., partition the set of tasks intom
possibly emptymodules M1; : : : ;Mm
2.Schedule tasks of eachMion the processoriaccording to
a given single processor algorithm
The quality of task assignment is determined by the number of
assigned processors
IUse EDF to schedule modules
ISufces to test whether the total utilization of each module is1
(or, possibly,
^
Uwhere
^
U<1 in order to accomodate aperiodic jobs ...)
Finding an optimal schedule is equivalent to a simpleuniform-size
bin-packing problem(and hence is NP-complete)
Similarly, we may use RM for xed task-level priorities (total utilization in
moduleslog 2, etc.)
193
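
A minimal sketch of the allocation phase with a first-fit heuristic (illustrative C; the task utilizations are made up; a module accepts a task as long as its total utilization stays ≤ 1, so that EDF can schedule it on one processor):

#include <stdio.h>

#define M 4  /* processors */
#define N 6  /* tasks */

int main(void) {
    double u[N] = {0.6, 0.5, 0.4, 0.4, 0.3, 0.2}; /* task utilizations */
    double load[M] = {0};
    for (int i = 0; i < N; i++) {
        int placed = 0;
        for (int j = 0; j < M && !placed; j++)
            if (load[j] + u[i] <= 1.0) {  /* per-module EDF test */
                load[j] += u[i];
                printf("task %d -> processor %d\n", i, j);
                placed = 1;
            }
        if (!placed)
            printf("task %d cannot be placed\n", i);  /* heuristic fails */
    }
    return 0;
}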

Global Scheduling
▶ All ready jobs are kept in a global queue
▶ When selected for execution, a job can be assigned to any
  processor
▶ When preempted, a job goes back to the global queue (i.e., it
  forgets on which processor it executed)
194

Global Scheduling – Fixed Job-Level Priority
Dhall's effect:
▶ Consider m > 1 processors
▶ Let ε > 0
▶ Consider a set of tasks T = {T_1, …, T_m, T_{m+1}} such that
  ▶ T_i = (1, 2ε) for 1 ≤ i ≤ m
  ▶ T_{m+1} = (1 + ε, 1)
▶ T is schedulable
▶ The standard EDF and RM schedules are not feasible (whiteboard)
However,
U_T = m·(2ε/1) + 1/(1 + ε)
which means that for small ε the utilization U_T is close to 1
(i.e., U_T/m is very small for large numbers m of processors)
195

How to avoid Dhall's effect?
▶ Note that RM and EDF only account for task periods and
  ignore the execution times!
▶ (Partial) solution: Dhall's effect can be avoided by giving
  high priority to tasks with high utilization
  Then, in the previous example, T_{m+1} is executed whenever
  it arrives, and the other tasks are assigned to the remaining
  processors – this produces a feasible schedule
196

Global Scheduling – Fixed Job-Level Priority
Apparently there is a problem with long jobs due to Dhall's effect.
There is an improved version of EDF called EDF-US(1/2) which
▶ assigns the highest priority to tasks with u_i > 1/2
▶ assigns priorities to the rest according to deadlines
and which reaches the generic schedulable utilization bound (m + 1)/2.
197

Partitioned vs Global
Advantages of global scheduling:
▶ Load is automatically balanced
▶ Better average response time (follows from queueing theory)
Disadvantages of global scheduling:
▶ Problems caused by migration (e.g. increased cache misses)
▶ Schedulability tests are more difficult (an active area of research)
Is either of the approaches better from the schedulability standpoint?
198

Global Beats Partitioned
There are sets of tasks schedulable only with a global scheduler:
▶ T = {T_1, T_2, T_3} where T_1 = (2, 1), T_2 = (3, 2), T_3 = (3, 2)
  can be scheduled on two processors using a global scheduler:
▶ No feasible partitioned schedule exists: always at least one
  processor gets tasks with total utilization higher than 1.
199

Partitioned Beats Global
There are task sets that can be scheduled only with a partitioned
scheduler (assuming fixed task-level priority assignment):
▶ T = {T_1, …, T_4} where
  T_1 = (3, 2), T_2 = (4, 3), T_3 = (15, 5), T_4 = (20, 5) can be
  scheduled using a fixed task-level priority partitioned schedule:
▶ Global scheduling (fixed job-level priority): there are 9 jobs
  released in the interval [0, 12). Any of the 9! possible priority
  assignments leads to a deadline miss.
200

Optimal Algorithm?
There IS an optimal algorithm in the case of job-level migration &
dynamic job-level priority. However, the algorithm is time driven.
The proportionate fair (PFair) algorithm is optimal for periodic systems with
deadlines equal to periods
Idea (of PFair): in any interval (0, t], jobs of a task T_i with utilization u_i
execute for an amount of time W so that u_i·t − 1 < W < u_i·t + 1
(Here every parameter is assumed to be a natural number)
This is achieved by cutting time into small quanta and scheduling jobs
in these quanta so that the execution times are always (more or less)
in proportion.
There are other optimal algorithms; all of them suffer from a large
number of preemptions/migrations.
No optimal algorithms are known for more general settings: deadlines
bounded by periods, arbitrary deadlines.
Recall that no optimal on-line scheduling is possible
201

Real-Time Programming & RTOS
Concurrent and real-time programming tools
202

Concurrent Programming
Concurrency in real-time systems
▶ typical architecture of an embedded real-time system:
  ▶ several input units
  ▶ computation
  ▶ output units
  ▶ data logging/storing
  ▶ i.e., handling several concurrent activities
▶ concurrency occurs naturally in real-time systems
Support for concurrency in programming languages (Java, Ada, …)
advantages: readability, OS independence, checking of interactions by the
compiler, embedded computer may not have an OS
Support by libraries and the operating system (C/C++ with POSIX)
advantages: multi-language composition; a language's model of concurrency
may be difficult to implement on top of an OS; OS API standards imply portability
203

Processes and Threads
Process
▶ a running instance of a program,
▶ executes on its own virtual machine to avoid interference from
  other processes,
▶ contains information about program resources and
  execution state, e.g.:
  ▶ environment, working directory, …
  ▶ program instructions,
  ▶ registers, heap, stack,
  ▶ file descriptors,
  ▶ signal actions, inter-process communication tools (pipes,
    message boxes, etc.)
Thread
▶ exists within a process, uses process resources,
▶ can be scheduled by the OS and run as an independent entity,
▶ keeps its own: execution stack, local data, etc.
▶ shares global data and resources with the other threads of
  the same process
204

Processes and threads in UNIX
205

Process (Thread) States
206

Communication and Synchronization
Communication
▶ the passing of information from one process (thread) to
  another
▶ typical methods: shared variables, message passing
Synchronization
▶ the satisfaction of constraints on the interleaving of the actions of
  processes
  e.g. an action of one process has to occur after an action of another one
▶ typical methods: semaphores, monitors
Communication and synchronization are linked:
▶ communication requires synchronization
▶ synchronization corresponds to communication without
  content
207

Communication: Shared Variables
Consistency problems:
▶ unrestricted use of shared variables is unreliable
▶ multiple update problem
  example: shared variable X, assignment X := X + 1
  ▶ load the current value of X into a register
  ▶ increment the value of the register
  ▶ store the value of the register back to X
  ▶ two processes executing these instructions ⇒ certain
    interleavings can produce inconsistent results
Solution:
▶ parts of the process that access shared variables (i.e. critical
  sections) must be executed indivisibly with respect to each other
▶ the required protection is called mutual exclusion
… one may use a special mutual exclusion protocol (e.g. Peterson's) or a
synchronization mechanism – semaphores, monitors
208
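
A small runnable illustration of the multiple update problem and its standard fix (POSIX threads; without the mutex, some of the two million increments would typically be lost):

#include <pthread.h>
#include <stdio.h>

long X = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&m);    /* enter critical section */
        X = X + 1;                 /* load, increment, store */
        pthread_mutex_unlock(&m);  /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("X = %ld\n", X);  /* always 2000000 with the mutex */
    return 0;
}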

Synchronization: Semaphores
A semaphore contains an integer variable that, apart from
initialization, is accessed only through two standard operations:
wait() and signal().
▶ the semaphore is initialized to a non-negative value (typically 1)
▶ wait() operation: decrements the semaphore value if the value
  is positive; otherwise, if the value is zero, the caller becomes
  blocked
▶ signal() operation: increments the semaphore value; if the
  value is not positive, then one process blocked on the
  semaphore is unblocked (usually in FIFO order)
▶ both wait and signal are atomic
Semaphores are an elegant low-level primitive but error prone and hard
to debug (deadlock, missed signal, etc.)
For more details see an operating systems course.
209
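
A minimal POSIX counterpart of these operations (sem_wait corresponds to wait(), sem_post to signal(); an initial value of 1 gives a binary semaphore used for mutual exclusion):

#include <semaphore.h>

sem_t s;

void example(void) {
    sem_init(&s, 0, 1);  /* 0 = shared among threads of this process */
    sem_wait(&s);        /* decrement; blocks while the value is 0 */
    /* ... critical section ... */
    sem_post(&s);        /* increment; may unblock a waiting thread */
    sem_destroy(&s);
}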

Synchronization: Monitors
▶ encapsulation and efficient condition synchronization
▶ critical regions are written as procedures, all encapsulated in
  a single object or module
▶ procedure calls into the module are guaranteed to be mutually
  exclusive
▶ shared resources are accessible only through these procedures
For more details (such as condition variables) see an operating
systems course.
210

Communication: Message Passing
Communication among two or more processes where there is no
shared region between the processes. Instead, they communicate
by passing messages.
▶ synchronous (rendezvous): the send and receive operations are
  blocking, no buffer is required
▶ asynchronous (no-wait): the send operation is not blocking,
  requires buffer space (a mailbox)
▶ remote invocation (extended rendezvous): the sender is blocked
  until a reply is received
211

Synchronous Message Passing
212

Asynchronous Message Passing
213

Asynch. Message Passing with Bounded Buffer
214

Concurrent Programming is Complicated
Multi-threaded applications with shared data may have
numerous flaws.
▶ Race condition
  Two or more threads try to access the same shared data; the result
  depends on the exact order in which their instructions are executed
▶ Deadlock
  occurs when two or more threads wait for each other, forming a cycle
  and preventing all of them from making any forward progress
▶ Starvation
  an indefinite delay or permanent blocking of one or more runnable
  threads in a multithreaded application
▶ Livelock
  occurs when threads are scheduled but are not making forward
  progress because they are continuously reacting to each other's state
  changes
It is usually difficult to find such bugs and verify correctness.
215

Real-Time Aspects
▶ time-aware systems make explicit references to the time
  frame of the enclosing environment
  e.g. a bank safe's door is to be locked from midnight to nine o'clock
  ▶ the "real time" of the environment must be available
▶ reactive systems are typically concerned with relative
  times
  an output has to be produced within 50 ms of an associated input
  ▶ must be able to measure intervals
  ▶ usually must synchronize with the environment: input sampling
    and output signalling must be done very regularly with
    controlled variability
216

The Concept of Time
Real-time systems must have a concept of time – but what is time?
▶ Measure of a time interval
  ▶ Units?
    seconds, milliseconds, CPU cycles, system "ticks"
  ▶ Granularity, accuracy, stability of the clock source
  ▶ Is "one second" a well-defined measure?
    "A second is the duration of 9,192,631,770 periods of the
    radiation corresponding to the transition between the two
    hyperfine levels of the ground state of the caesium-133
    atom."
    ▶ … temperature dependencies and relativistic effects
      (the above definition refers to a caesium atom at rest, at
      mean sea level and at a temperature of 0 K)
▶ Skew and divergence among multiple clocks
  Distributed systems and clock synchronization
▶ Measuring time
  ▶ external source (GPS, NTP, etc.)
  ▶ internal – hardware clocks that count the number of
    oscillations that occur in a quartz crystal
217

Requirements for Interaction with "time"
For RT programming, it is desirable to have:
▶ access to clocks and a representation of time
▶ delays
▶ timeouts
▶ (deadline specification and real-time scheduling)
218

Access to Clock and Representation of Time
▶ requires a hardware clock that can be read like a regular
  external device
▶ mostly offered by an OS service, if direct interfacing to
  the hardware is not allowed
Example of time representation: (POSIX high-resolution clock, counting
seconds and nanoseconds since 1970 with known resolution)

struct timespec {
    time_t tv_sec;
    long   tv_nsec;
};

int clock_gettime(clockid_t clock_id, struct timespec *tp);
int clock_settime(clockid_t id, const struct timespec *tp);

Time is often kept by incrementing an integer variable; one needs to take
care of overflows (i.e. jumps to the past).
219
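
A short usage sketch: measuring an interval with the monotonic clock (CLOCK_MONOTONIC is preferable to CLOCK_REALTIME for intervals, since it is not affected by setting the clock):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    /* ... work to be measured ... */
    clock_gettime(CLOCK_MONOTONIC, &b);
    double elapsed = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    printf("elapsed: %.9f s\n", elapsed);
    return 0;
}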

Delays
In addition to having access to a clock, we need the ability to
▶ Delay execution until an arbitrary calendar time
  What about daylight saving time changes? Problems with leap seconds.
▶ Delay execution for a relative period of time
  ▶ Delay for t seconds
  ▶ Delay for t seconds after event e begins
220

A Repeated Task (An Attempt)
The goal is to do work repeatedly every 100 time units

while (1) {
    delay(100);
    do_work();
}

Does it work as intended? No, it accumulates drift …
Each turn of the loop will take at least 100 + x milliseconds,
where x is the time taken to perform do_work()
221

A Repeated Task (An Attempt)
The goal is to do work repeatedly every 100 time units

while (1) {
    delay(100);
    do_work();
}

Does it work as intended? No, it accumulates drift …
The delay is just a lower bound, and a delaying process is not guaranteed
access to the processor (the delay does not compensate for this)
222
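
One common POSIX remedy for the accumulating drift (a sketch, assuming clock_nanosleep is available): sleep until an absolute wake-up time that is advanced by exactly one period per iteration, so that delays do not accumulate:

#include <time.h>

#define PERIOD_NS 100000000L  /* 100 ms */

extern void do_work(void);

void periodic_loop(void) {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        next.tv_nsec += PERIOD_NS;          /* next release time */
        if (next.tv_nsec >= 1000000000L) {  /* normalize the timespec */
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* absolute-time sleep: oversleeping one turn does not
           shift the following release times */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        do_work();
    }
}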

Eliminating (Part of) The Drift: Timers
▶ Set an alarm clock, do some work, and then wait for
  whatever time is left before the alarm rings
▶ This is done with timers
▶ The thread is told to wait until the next firing of the timer –
  accumulating drift is eliminated
▶ Two types of timers
  ▶ one-shot
    After a specified interval, call an associated function.
  ▶ periodic (also called an auto-reload timer in FreeRTOS)
    Call the associated function repeatedly, always after the specified
    interval.
▶ Even with timers, drift may still occur, but it does not
  accumulate (local drift)
223

Timeouts
Synchronous blocking operations can include timeouts
▶ Synchronization primitives
  Semaphores, locks, etc.
  … a timeout usually generates an error/exception
▶ Networking and other I/O calls
  E.g. select() in POSIX
  Monitors the readiness of multiple file descriptors; a descriptor is ready when
  the corresponding operation with the file descriptor is possible without blocking.
  Has a timeout argument that specifies the minimum interval that
  select() should block waiting for a file descriptor to become ready.
May also provide an asynchronous timeout signal
▶ Detect time overruns during the execution of periodic and
  sporadic tasks
224
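
A minimal select() timeout sketch (waiting at most 2 seconds for input on file descriptor 0):

#include <stdio.h>
#include <sys/select.h>

int main(void) {
    fd_set rfds;
    struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };
    FD_ZERO(&rfds);
    FD_SET(0, &rfds);
    int n = select(1, &rfds, NULL, NULL, &tv);
    if (n == 0)
        printf("timeout\n");          /* nothing ready within 2 s */
    else if (n > 0)
        printf("fd 0 is readable\n");
    return 0;
}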

Deadline Specification and Real-Time Scheduling
Clock-driven scheduling is trivial to implement via a cyclic executive.
Other scheduling algorithms need OS and/or language support:
▶ System calls to create, destroy, suspend and resume tasks.
▶ Implement tasks as either threads or processes.
  Threads are usually more beneficial than processes (which have separate
  address spaces and memory protection):
  ▶ Processes are not always supported by the hardware
  ▶ Processes have longer context switch times
  ▶ Threads can communicate using shared data (fast and
    more predictable)
▶ Scheduling support:
  ▶ Preemptive scheduler with multiple priority levels
  ▶ Support for aperiodic tasks (at least background
    scheduling)
  ▶ Support for sporadic tasks with acceptance tests, etc.
225

Jobs, Tasks and Threads
- In theory, a system comprises a set of (abstract) tasks; each task is a series of jobs
  - tasks are typed, have various parameters, react to events, etc.
  - an acceptance test is performed before admitting new tasks
- In practice, a thread (or a process) is the basic unit of work handled by the scheduler
  - threads are the instantiation of tasks that have been admitted to the system
How to map tasks to threads?
226

Periodic Tasks
Real-time tasks are defined to execute periodically: T = (φ, p, e, D)
It is clearly inefficient if the thread is created and destroyed repeatedly every period
- Some operating systems (FunkOS) and programming languages (Real-Time Java & Ada) support periodic threads
  - the kernel (or VM) reinitializes such a thread and puts it to sleep when the thread completes
  - the kernel releases the thread at the beginning of the next period
  - this provides a clean abstraction but needs support from the OS
- Thread instantiated once; performs its job, sleeps until the next period, repeats
  - lower overhead, but relies on the programmer to handle timing
  - hard to avoid timing drift due to sleep overruns
    (see the discussion of delays earlier in this lecture)
  - most common approach
227

Sporadic and Aperiodic Tasks
Events trigger sporadic and aperiodic tasks
- Might be external (hardware) interrupts
- Might be signalled by another task
Usual implementation:
- OS executes a periodic server thread
  (background server, deferrable server, etc.)
- OS maintains a "server queue" = a list of pointers giving the starting addresses of functions to be executed by the server
- Upon occurrence of an event that releases an aperiodic or sporadic job, the event handler (usually an interrupt routine) inserts a pointer to the corresponding function into the list
228

Real-Time Programming & RTOS
Real-Time Operating Systems
229

Operating Systems – What You Should Know ...
An operating system is a collection of software that manages
computer hardware resources and provides common services
for computer programs.
Basic components of a multi-purpose OS:
- Program execution & process management
  processes (threads), IPC, scheduling, ...
- Memory management
  segmentation, paging, protection, ...
- Storage & other I/O management
  file systems, device drivers, ...
- Network management
  network drivers, protocols, ...
- Security
  user IDs, privileges, ...
- User interface
  shell, GUI, ...
230


Implementing Real-Time Systems
- Key fact from scheduling theory: we need predictable behavior
- Raw performance is less critical than consistent and predictable performance; hence the focus on scheduling algorithms and schedulability tests
- We don't want to fairly share resources – be unfair to ensure deadlines are met
- Need to run on a wide range of – often custom – hardware
- Often resource constrained:
  limited memory, CPU, power consumption, size, weight, budget
- Closed set of applications
  (Do we need a wristwatch to play DVDs?)
- Strong reliability requirements – may be safety critical
- How to upgrade software in a car engine? In a DVD player?
232

Implications on Operating Systems
- General purpose operating systems are not well suited for real-time
  - They assume plentiful resources, fairly shared amongst untrusted users
  - They serve multiple purposes
  - Exactly the opposite of an RTOS!
- Instead we want an operating system that is:
  - Small and light on resources
  - Predictable
  - Customisable, modular and extensible
  - Reliable
  ... and that can be demonstrated or proven to be so
233

Implications on Operating Systems
- Real-time operating systems typically use either cyclic executive or microkernel designs, rather than a traditional monolithic kernel
  - Limited and well-defined functionality
  - Easier to demonstrate correctness
  - Easier to customise
- Provide rich support for concurrency & real-time control
- Expose low-level system details to the applications
  control of scheduling, interaction with hardware devices, ...
234

Cyclic Executive without Interrupts
- The simplest real-time systems use a "nanokernel" design
- Provides a minimal time service: a scheduled clock pulse with a fixed period
- No tasking, virtual memory / memory protection, etc.
- Allows implementation of a static cyclic schedule, provided:
  - Tasks can be scheduled in a frame-based manner
  - All interactions with hardware are done on a polled basis
- The operating system becomes a single-task cyclic executive (a sketch follows)
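A minimal cyclic executive sketch (the frame tick and the task functions are hypothetical; the tick would come from a polled hardware timer or a flag set by a clock interrupt):

    /* Four frames per major cycle; taskA runs every frame,
       taskB and taskC at lower rates. */
    extern void wait_for_frame_tick(void);
    extern void taskA(void), taskB(void), taskC(void);

    void cyclic_executive(void) {
        for (;;) {
            wait_for_frame_tick(); taskA(); taskB();  /* frame 0 */
            wait_for_frame_tick(); taskA();           /* frame 1 */
            wait_for_frame_tick(); taskA(); taskC();  /* frame 2 */
            wait_for_frame_tick(); taskA();           /* frame 3 */
        }
    }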
235

Microkernel Architecture
- Cyclic executives are widely used in low-end embedded devices
  - 8-bit processors with kilobytes of memory
  - Often programmed in (something like) C via a cross-compiler, or in assembler
  - Simple hardware interactions
  - Fixed, simple, and static task set to execute
  - Clock-driven scheduler
- But many real-time embedded systems are more complex and need a sophisticated operating system with priority scheduling
- Common approach: a microkernel with a priority scheduler
  Configurable and robust, since it is architected around interactions between cooperating system servers, rather than a monolithic kernel with ad-hoc interactions
236

Microkernel Architecture
A microkernel RTOS typically provides:
- Timing services, interrupt handling, support for hardware interaction
- Task management, scheduling
- Messaging, signals
- Synchronization and locking
- Memory management (and sometimes also protection)
237

Example RTOS: FreeRTOS
- RTOS for embedded devices (ported to many microcontrollers from more than 20 manufacturers)
- Distributed under GPL
- Written in C; the kernel consists of 3+1 C source files
  (approx. 9000 lines of code including comments)
- Largely configurable
238

Example RTOS: FreeRTOS
- The OS is (more or less) a library of object modules; the application and OS modules are linked together into the resulting executable image
- Prioritized scheduling of tasks
  - tasks correspond to threads (they share the same address space but have their own execution stacks)
  - highest priority executes; same priority → round robin
  - an implicit idle task executes when no other task can → may be assigned the functionality of a background server
- Synchronization using semaphores
- Communication using message queues
- Memory management
  - no memory protection in the basic version (can be extended)
  - various implementations of memory management:
    memory can/cannot be freed after allocation, best fit vs. combining adjacent memory blocks into a single one
That's (almost) all ....
239

Example RTOS: FreeRTOS
Tiny memory requirements (e.g. IAR STR71x ARM7 port, full optimisation, minimum configuration, four priorities):
- size of the scheduler = 236 bytes
- each queue adds 76 bytes + its storage area
- each task adds 64 bytes + its stack size
240

Details of FreeRTOS Scheduling
- The scheduler must be explicitly invoked by calling void vTaskStartScheduler(void) from main().
  The scheduler may also stop, either due to an error or if one of the tasks calls void vTaskEndScheduler(void).
- It is possible to create a new task by calling (a usage sketch follows)

    portBASE_TYPE xTaskCreate(
        pdTASK_CODE pvTaskCode,
        const char * const pcName,
        unsigned short usStackDepth,
        void *pvParameters,
        unsigned portBASE_TYPE uxPriority,
        xTaskHandle *pvCreatedTask);

pvTaskCode is a pointer to the function that will be executed as the task, pcName is a human-readable name of the task, usStackDepth indicates how many words must be reserved for the task stack, pvParameters is a pointer to the task's parameters (not interpreted by the OS), uxPriority is the assigned priority of the task (see the resource control lecture 7), pvCreatedTask is the task handle used by OS routines.
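A minimal usage sketch with the classic API names shown above (newer FreeRTOS versions spell these BaseType_t, TaskHandle_t, etc.; toggle_led() is a hypothetical board-specific helper):

    #include "FreeRTOS.h"
    #include "task.h"

    extern void toggle_led(void);

    /* The task function: an endless loop, never returning. */
    void vBlinkTask(void *pvParameters) {
        (void)pvParameters;
        for (;;) {
            toggle_led();
            vTaskDelay(100);  /* block for 100 ticks */
        }
    }

    int main(void) {
        xTaskHandle xHandle;
        xTaskCreate(vBlinkTask, "Blink", configMINIMAL_STACK_SIZE,
                    NULL, tskIDLE_PRIORITY + 1, &xHandle);
        vTaskStartScheduler();  /* does not return unless there is an error */
        return 0;
    }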
241

Details of FreeRTOS Scheduling
- A task can be deleted by means of void vTaskDelete(xTaskHandle pxTaskToDelete)
- Like most other (non-POSIX-compliant) small real-time systems, FreeRTOS does not provide a task cancellation mechanism, i.e. tasks cannot decline or postpone deletion – the deletion is immediate.
- Memory is not freed immediately; only the idle task can do that, so it must be executed occasionally.
- A shared resource owned by a deleted task remains locked.
- Priorities are handled by means of uxTaskPriorityGet and vTaskPrioritySet. FreeRTOS implements the priority inheritance protocol; the returned priorities are the current ones.
- Tasks can be suspended by vTaskSuspend or vTaskSuspendAll (suspends all but the calling one), and resumed by vTaskResume or xTaskResumeAll.
  Suspend/resume all can be used to implement non-preemptable critical sections.
242

Clocks & Timers in FreeRTOS
- portTickType xTaskGetTickCount(void)
  Get the current time, in ticks, since the scheduler was started.
  The frequency of ticks is determined by configTICK_RATE_HZ, set according to the HW port.
- void vTaskDelay(portTickType xTicksToDelay)
  Blocks the calling task for the specified number of ticks.
- void vTaskDelayUntil(portTickType *pxPreviousWakeTime, portTickType xTimeIncrement)
  Blocks the calling task until xTimeIncrement ticks after pxPreviousWakeTime.
  (At wakeup, pxPreviousWakeTime is incremented by xTimeIncrement so that it can be readily used to implement periodic tasks; see the sketch below.)
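A periodic-task sketch using vTaskDelayUntil – the drift-free counterpart of the naive delay loop criticised earlier (do_work() is a hypothetical application function):

    #include "FreeRTOS.h"
    #include "task.h"

    extern void do_work(void);

    void vPeriodicTask(void *pvParameters) {
        const portTickType xPeriod = 100;  /* period in ticks */
        portTickType xLastWakeTime = xTaskGetTickCount();
        for (;;) {
            do_work();
            /* Sleeps until exactly xPeriod ticks after the previous
               release time, so drift does not accumulate. */
            vTaskDelayUntil(&xLastWakeTime, xPeriod);
        }
    }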
243

Real-Time Programming & RTOS
Real-Time Programming Languages
Brief Overview
244

C and POSIX
IEEE 1003 POSIX
- "Portable Operating System Interface"
- Defines a subset of Unix functionality; various (optional) extensions added to support real-time scheduling, signals, message queues, etc.
- Widely implemented:
  - Unix variants and Linux
  - Dedicated real-time operating systems
  - Limited support in Windows
Several POSIX standards cover real-time scheduling:
- POSIX 1003.1b ("real-time extensions")
- POSIX 1003.1c ("pthreads")
- POSIX 1003.1d ("additional real-time extensions")
- They support a subset of the scheduler features we have discussed
245

POSIX Scheduling API (Threads)
struct sched_param typically contains only sched_priority.
pthread_join suspends execution of the calling thread until the target thread terminates; the retval of the terminating thread is made available to any successfully joined thread.
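A sketch of creating a thread with an explicit real-time priority (assuming POSIX 1003.1b/1c support and sufficient privileges; rt_fun is a hypothetical thread body):

    #include <pthread.h>
    #include <sched.h>

    extern void *rt_fun(void *arg);

    int spawn_rt_thread(pthread_t *tid) {
        pthread_attr_t attr;
        struct sched_param sp;

        pthread_attr_init(&attr);
        /* Do not inherit the creator's policy; use the attributes below. */
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        sp.sched_priority = sched_get_priority_min(SCHED_FIFO) + 1;
        pthread_attr_setschedparam(&attr, &sp);

        return pthread_create(tid, &attr, rt_fun, NULL);
    }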
246

Threads: Example I
#include <pthread.h>

pthread_t id;

void *fun(void *arg) {
    /* Some code sequence */
    return NULL;
}

int main(void) {
    pthread_create(&id, NULL, fun, NULL);
    /* Some other code sequence */
    return 0;
}
247

Threads: Example II
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 5

void *PrintHello(void *threadid)
{
    /* The thread id was smuggled in via the pointer value. */
    printf("%ld: Hello World!\n", (long)threadid);
    pthread_exit(NULL);
}

int main(int argc, char *argv[])
{
    pthread_t threads[NUM_THREADS];
    int rc;
    long t;
    for (t = 0; t < NUM_THREADS; t++) {
        printf("Creating thread %ld\n", t);
        rc = pthread_create(&threads[t], NULL, PrintHello, (void *)t);
        if (rc) {
            printf("ERROR; return code from pthread_create() is %d\n", rc);
            exit(-1);
        }
    }
    pthread_exit(NULL);  /* let the created threads finish */
}
248

POSIX: Synchronization and Communication
- Synchronization:
  - mutexes
    (variables that can be locked/unlocked by threads),
  - (counting) semaphores,
  - condition variables
    (used to wait for some condition; the waiting thread is put into a queue until it is signaled on the condition variable by another thread – see the sketch below),
  - ...
- Communication:
  - signals (kill(pid, sig)),
  - message passing,
  - shared memory.
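A condition-variable sketch (a hypothetical ready flag guarded by a mutex):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int ready = 0;  /* the condition being waited for */

    void wait_until_ready(void) {
        pthread_mutex_lock(&lock);
        while (!ready)                        /* re-check: wakeups may be spurious */
            pthread_cond_wait(&cond, &lock);  /* releases lock while waiting */
        pthread_mutex_unlock(&lock);
    }

    void set_ready(void) {
        pthread_mutex_lock(&lock);
        ready = 1;
        pthread_cond_signal(&cond);  /* wake one waiting thread */
        pthread_mutex_unlock(&lock);
    }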
249

POSIX: Real-Time Support
Getting Time
- time() = seconds since Jan 1, 1970
- gettimeofday() = seconds + microseconds since Jan 1, 1970
- struct tm = structure for holding human-readable time
- POSIX requires at least one clock of minimum resolution 50 Hz (20 ms)
250

POSIX: High Resolution Time & Timers
High resolution clock, with known clock resolution:

    struct timespec {
        time_t tv_sec;
        long   tv_nsec;
    };

    int clock_gettime( clockid_t clock_id, struct timespec *tp );
    int clock_settime( clockid_t clock_id, const struct timespec *tp );

Simple waiting: sleep(), or

    int nanosleep( const struct timespec *req, struct timespec *remaining );

Sleeps for the interval specified by req. May return earlier due to a signal (in which case remaining gives the remaining delay).
The accuracy of the delay is not known (and not necessarily correlated to the clock_getres() value).
251

POSIX: Timers
- type timer_t; a timer can be set with:
  - relative/absolute time
  - a single alarm time and an optional repetition period
  - the timer "rings" according to sevp (e.g. by sending a signal)

    int timer_create( clockid_t clockid, struct sigevent *sevp,
                      timer_t *timerid );
    int timer_settime( timer_t timerid, int flags,
                       const struct itimerspec *new_value,
                       struct itimerspec *old_value );

where

    struct itimerspec {
        struct timespec it_interval;  /* Timer interval */
        struct timespec it_value;     /* Initial expiration */
    };
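A usage sketch: a periodic 100 ms timer that "rings" by sending SIGRTMIN (assuming POSIX timers are available; error checking omitted):

    #include <signal.h>
    #include <time.h>

    timer_t make_periodic_timer(void) {
        timer_t tid;
        struct sigevent sev = { 0 };
        struct itimerspec its = { 0 };

        sev.sigev_notify = SIGEV_SIGNAL;  /* "ring" by sending a signal */
        sev.sigev_signo  = SIGRTMIN;
        timer_create(CLOCK_MONOTONIC, &sev, &tid);

        its.it_value.tv_nsec    = 100000000;  /* first expiration: 100 ms */
        its.it_interval.tv_nsec = 100000000;  /* then every 100 ms */
        timer_settime(tid, 0, &its, NULL);
        return tid;
    }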
252

POSIX Scheduling API
- Four scheduling policies:
  - SCHED_FIFO = fixed priority, pre-emptive, FIFO on the same priority level
  - SCHED_RR = fixed priority, pre-emptive, round robin on the same priority level
  - SCHED_SPORADIC = sporadic server
  - SCHED_OTHER = unspecified (often the default time-sharing scheduler)
- A process can sched_yield() or otherwise block at any time
- POSIX 1003.1b provides (largely) fixed priority scheduling
  - Priority can be changed using sched_setparam(), but this has high overhead and is intended for reconfiguration rather than for dynamic scheduling
  - No direct support for dynamic priority algorithms (e.g. EDF)
- Limited set of priorities:
  - Use sched_get_priority_min() and sched_get_priority_max() to determine the range
  - At least 32 priority levels are guaranteed
253

Using POSIX Scheduling: Rate Monotonic
Rate monotonic and deadline monotonic schedules can be naturally implemented using POSIX primitives:
1. Assign priorities to tasks in the usual way for RM/DM
2. Query the range of allowed system priorities using sched_get_priority_min() and sched_get_priority_max()
3. Map the task set onto system priorities (see the sketch after this list)
   Care needs to be taken if there are large numbers of tasks, since some systems only support a few priority levels
4. Start tasks using the assigned priorities and SCHED_FIFO
There is no explicit support for indicating deadlines or periods.
EDF scheduling is not supported by POSIX.
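A sketch of step 3 (a hypothetical helper; task index 0 = shortest period = highest RM priority):

    #include <sched.h>

    /* Map a task index onto the SCHED_FIFO priority range,
       descending with the index. Clamps at the low end when there
       are more tasks than available priority levels. */
    int rm_priority(int task_index) {
        int lo = sched_get_priority_min(SCHED_FIFO);
        int hi = sched_get_priority_max(SCHED_FIFO);
        int prio = hi - task_index;
        return (prio < lo) ? lo : prio;
    }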
254

Using POSIX Scheduling: Sporadic Server
POSIX 1003.1d defines a hybrid sporadic/background server:
When the server has budget, it runs at sched_priority; otherwise it runs as a background server at sched_ss_low_priority.
Set sched_ss_low_priority to be lower priority than the real-time tasks, but possibly higher than other non-real-time tasks in the system.
The standard also defines the replenishment period and the initial budget after replenishment.
255

POSIX-compliant RTOS
Examples of POSIX-compliant implementations:
- commercial:
  - VxWorks
  - QNX
  - OSE
- Linux-related:
  - RTLinux
  - RTAI
256

Latency
(Some) sources of hard-to-predict latency caused by the system:
- Interrupts
  see the next slide
- System calls
  the RTOS should characterise their WCET; the kernel should be preemptable
- Memory management: paging
  avoid it – either use segmentation with a fixed memory management scheme, or memory locking
- Caches
  may introduce non-determinism; there are techniques for computing WCET with processor caches
- DMA
  competes with the processor for the memory bus; it is hard to predict who wins
257

Interrupts
The amount of time required to handle an interrupt varies.
Thus, in most operating systems, interrupt handling is divided into two steps:
- Immediate interrupt service
  very short; invokes a scheduled interrupt handling routine
- Scheduled interrupt service
  preemptable, scheduled as an ordinary job at a suitable priority
258

Immediate Interrupt Service
Interrupt latency is the time between an interrupt request and the execution of the first instruction of the interrupt service routine.
The total delay caused by an interrupt is the sum of the following factors:
- the time the processor takes to complete the current instruction, do the necessary chores (flush the pipeline and read the interrupt vector), and jump to the trap handler and interrupt dispatcher
- the time the kernel takes to disable interrupts
- the time required to complete the immediate interrupt service routines of higher-priority interrupts (if any) that occurred simultaneously with this one
- the time the kernel takes to save the context of the interrupted thread, identify the interrupting device, and get the starting address of the interrupt service routine
- the time the kernel takes to start the interrupt service routine
259

Event Latency
260

Java
- object-oriented
- developed by Sun Microsystems in the early 1990s
- compiled to virtual machine code (bytecode), which is compiled to native machine code at runtime
- the syntax of Java is largely derived from C/C++
261

Concurrency: Threads
- the predefined class java.lang.Thread provides the mechanism by which threads are created
- to avoid all threads having to be child classes of Thread, it also uses a standard interface:

    public interface Runnable {
        public abstract void run();
    }

- any class which wishes to express concurrent execution must implement this interface and provide the run() method
262

Threads: Creation & Termination
Creation:
- dynamic thread creation; arbitrary data can be passed as parameters
- thread hierarchies and thread groups can be created
Termination:
- one thread can wait for another thread (the target) to terminate by issuing a join() call on the target's thread object
- the isAlive method allows a thread to determine whether the target thread has terminated
- garbage collection cleans up objects which can no longer be accessed
- the main program terminates when all its user threads have terminated
263

Synchronized Methods
- monitors are implemented in the context of classes and objects
- each object has an associated lock; the lock cannot be accessed directly by the application but is affected by
  - the method modifier synchronized
  - block synchronization
- synchronized method – access to the method can only proceed once the lock associated with the object has been obtained
- non-synchronized methods do not require the lock and can be called at any time
264

Waiting and Notifying
- wait() always blocks the calling thread and releases the lock associated with the object
- notify() wakes up one waiting thread
  which thread is woken is not defined
- notifyAll() wakes up all waiting threads
- if no thread is waiting, then notify() and notifyAll() have no effect
265

Real-Time Java
- Standard Java is not enough to handle real-time constraints
  - Java (and the JVM) lacks semantics for standard real-time programming techniques.
  - The Embedded Java Specification existed, but it was merely a subset of the standard Java API.
  - There was a gap for a language that is real-time capable and equipped with all of Java's powerful advantages.
- IBM, Sun and other partners formed the Real-Time for Java Expert Group, sponsored by NIST, in 1998
  - It came up with the Real-Time Specification for Java (RTSJ) to fill this gap for real-time systems
  - RTSJ proposed seven areas of enhancement to standard Java
266

RTSJ – Areas of Enhancement
1. Thread scheduling and dispatching
   see the next slides
2. Memory management
   immortal memory (no garbage collection), threads not preemptable by the garbage collector
3. Synchronization and resource sharing
   priority inheritance and ceiling protocols
4. Asynchronous event handling, asynchronous transfer of control, asynchronous thread termination
   reaction to OS-level signals (POSIX), hardware interrupts, and custom events defined and fired by the application
5. Physical memory access
The resulting real-time extension needs a modified Java virtual machine, due to changes to the memory model, garbage collector and thread scheduling.
267

Real-Time Java: Time
- java.lang.System.currentTimeMillis returns the number of milliseconds since Jan 1, 1970
- Real-Time Java adds real-time clocks with high resolution time types
Timers
- one-shot timers (javax.realtime.OneShotTimer)
- periodic timers (javax.realtime.PeriodicTimer)
Constructor:
Timer(HighResolutionTime t, Clock c, AsyncEventHandler handler)
... creates a timer that fires at time t, according to Clock c, and is handled by the specified handler.
268

Real-Time Thread Scheduling
- Extends Java with a Schedulable interface and a RealtimeThread class, plus numerous supporting libraries
  → Allows definition of timing and scheduling parameters, and memory requirements of threads
- Abstract Scheduler and SchedulingParameters classes are defined
  - Allows a range of schedulers to be developed
  - Current standards only allow system-defined schedulers; one cannot write a new scheduler without modifying the JVM
  - Current standards provide a pre-emptive fixed priority scheduler (the PriorityScheduler class)
- Allows monitoring of execution times, missed deadlines, and CPU budgets
- Allows thread priority to be changed programmatically
- Limited support for acceptance tests (isFeasible())
269

Real-Time Thread Scheduling
- Class hierarchy to express release timing parameters
- Deadline monitoring:
  missHandler invoked if the deadline is exceeded
- Execution time monitoring:
  - cost = needed CPU time
  - overrunHandler invoked if the execution time budget is exceeded
270

Real-Time Thread Scheduling
- The RealtimeThread class extends Thread with extra methods and parameters
- Direct support for periodic threads
  - the run() method will be a loop ending in a waitForNextPeriod() call
  - ... i.e. periodic behaviour does not have to be implemented using sleep as, e.g., with the POSIX API
271

Ada
- designed for the United States Department of Defense during 1977–1983
- targeted at embedded and real-time systems
- Ada 95 revision
- used in critical systems (avionics, weapon systems, spacecraft)
- free compiler: gnat
Ada Lovelace (1815–1852)
... see the lecture by Petr Holub.
272