Processes and Scheduling Algorithms in OS

About This Presentation

A brief overview of processes and CPU scheduling.


Slide Content

Unit 2
Processes and CPU Scheduling

2.1 Process Definition
 A process is a sequence of instructions executed in
a predefined order; any program in execution is
termed a process.
 A program becomes a process when an executable
file is loaded into memory.
 A process in the OS is managed through its Process
Control Block (PCB).
A system consists of a collection of processes:
 System processes
 User processes
Process Control Block

The OS uses a Process Control Block (PCB) to manage
each process. Every process in the OS has a PCB
associated with it.
A PCB keeps track of a process by storing information
such as its state, I/O status, and CPU-scheduling
details. Its main fields are listed below and sketched
as a C structure after the list.
 Process ID:
An identifier used to identify and locate a
process.
 Process state:
Identifies the state the process is currently
in: new, ready, running, waiting, or terminated.
 Program counter:
Holds the address of the next instruction to
be executed for the process.
 CPU registers:
The contents of the processor registers are
saved here when the process is switched out, so that
execution can later resume from the same point.

 CPU-scheduling information:
A process must be scheduled for execution;
based on this scheduling it moves from ready to
running. This field contains the process priority,
pointers to scheduling queues, and various other
scheduling parameters.
 Accounting information:
Contains details such as CPU time used, real
time used by the process, and job or process numbers.
 Memory-management information:
Contains the values of the base and limit
registers and details of the page or segment tables,
depending on the memory system of the OS in use.
 I/O status information:
Comprises I/O-related information, including
the list of I/O devices allocated to the process, their
status, etc.
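These fields map naturally onto a structure inside the
kernel. The following is a minimal, hypothetical C
sketch of a PCB; all field names are illustrative, and
real kernels (for example, Linux's task_struct) hold
far more information:

/* A hypothetical, simplified PCB; field names are illustrative. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process ID                    */
    enum proc_state state;            /* current process state         */
    unsigned long   program_counter;  /* next instruction to execute   */
    unsigned long   registers[16];    /* saved CPU register contents   */
    int             priority;         /* CPU-scheduling information    */
    struct pcb     *next_in_queue;    /* link for a scheduling queue   */
    unsigned long   base, limit;      /* memory-management information */
    unsigned long   cpu_time_used;    /* accounting information        */
    int             open_devices[8];  /* I/O status information        */
};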

As a process executes, it changes state:
 new: The process is being created.
 running: Instructions are being executed.
 waiting: The process is waiting for some event to
occur.
 ready: The process is waiting to be assigned to a
processor.
 terminated: The process has finished execution.

Structure of a Process in Memory

 Stack -
Holds temporary data such as function
arguments, return addresses, and local variables.
 Heap -
Memory that is dynamically allocated to the
process during its execution.
 Data -
Contains the global and static variables.
 Text -
Contains the compiled program code; the
current activity is reflected by the program counter
value and the contents of the processor's registers.
(A short program after this list prints one address
from each region.)
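A small C program can make these regions visible by
printing one address from each of them. This is only an
illustration: the exact values and their ordering are
platform- and loader-dependent.

/* Illustrative only: prints one address from each region. */
#include <stdio.h>
#include <stdlib.h>

int global_var = 42;            /* data section: initialized global        */

void text_example(void) {}      /* function code lives in the text section */

int main(void) {
    int  local_var = 0;                          /* stack */
    int *heap_var  = malloc(sizeof *heap_var);   /* heap  */

    printf("text  : %p\n", (void *)text_example);
    printf("data  : %p\n", (void *)&global_var);
    printf("heap  : %p\n", (void *)heap_var);
    printf("stack : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}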

2.2 Process Scheduling
Process scheduling is an important part of
multiprogramming operating systems.
 It is the activity of removing the running task
from the processor and selecting another task for
processing.
 It moves a process between states such as ready,
waiting, and running.
Categories of Scheduling in OS
There are two categories of scheduling:
1. Non-preemptive:
In non-preemptive scheduling, the CPU cannot
be taken away from a process until the process
completes execution.
Switching occurs only when the running
process terminates or moves to a waiting state.

2. Preemptive:
In preemptive scheduling, the OS allocates the
CPU to a process for a fixed amount of time.
The process may switch from the running state
to the ready state, or from the waiting state to the
ready state.
This switching occurs because the scheduler
may replace the running process with a
higher-priority process at any time.

Process Scheduling Queues
A process passes through multiple states during its
execution, and the OS maintains a separate queue for
each state, holding the process control blocks (PCBs)
of the processes in that state.
When the state of a process changes, its PCB is
unlinked from its current queue and moved to the
queue for the new state.
These process scheduling queues are:
1. Job queue: Holds all the processes in the system.
2. Ready queue: Holds the processes residing in main
memory that are ready and waiting to execute; a
new process is placed in the ready queue.
3. Device queue: Holds the processes blocked while
waiting for an I/O device; each device has its own
queue.
The OS uses different policies to manage each queue,
and the scheduler decides how to move processes
between the ready queue and the run queue, which
holds at most one entry per processor core.
The stages a process goes through are:

 A new process is first placed in the ready queue,
where it waits to be dispatched for execution.
 The CPU is allocated to one of the processes for
execution.
 If the process issues an I/O request, the OS places
it in the appropriate device (I/O) queue.
 The process may also create a new subprocess and
wait for the subprocess's termination.
 If the process is removed forcibly from the CPU by
an interrupt, it goes back to the ready queue once
the interrupt has been handled.

There are two states in the two-state process model:
1. Running
A process in this state is currently being
executed on the CPU.
2. Not Running
Processes that are not running, including newly
created ones, are kept in a queue until their turn to
execute arrives; each entry in the queue points to a
particular process.

If a process completes or is aborted, the OS discards
it; if it is interrupted, the OS returns it to the
not-running queue. In either case, the dispatcher then
selects a process from the queue to execute.
Schedulers
Schedulers are special system software that handle
process scheduling in various ways. Their main task is
to select the jobs to be submitted into the system and
to decide which process to run.
Schedulers are of three types:
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

1. Long-term scheduler
The long-term scheduler is also known as the job
scheduler. It chooses processes from the pool
(secondary memory) and places them in the ready
queue maintained in primary memory.
The long-term scheduler mainly controls the degree
of multiprogramming. Its purpose is to choose a good
mix of I/O-bound and CPU-bound processes from the
jobs present in the pool.

If the job scheduler chooses mostly I/O-bound
processes, then all of the jobs may sit in the blocked
state most of the time and the CPU will remain idle,
reducing the effective degree of multiprogramming.
The job of the long-term scheduler is therefore
critical and may affect the system for a long time.
2. Short-term scheduler
The short-term scheduler is also known as the CPU
scheduler. It selects one of the jobs from the ready
queue and dispatches it to the CPU for execution.
A scheduling algorithm is used to select which job
will be dispatched. The job of the short-term
scheduler is critical: if it repeatedly selects jobs
with very long CPU bursts, the jobs behind them may
wait in the ready queue for a very long time.
This problem is called starvation, and it can arise
when the short-term scheduler makes poor choices
while selecting jobs.
3. Medium-term scheduler
The medium-term scheduler takes care of swapped-out
processes. If a running process needs I/O time to
complete, its state must change from running to
waiting.
The medium-term scheduler handles this: it removes
such a process from memory to make room for other
processes. These are the swapped-out processes, and
the procedure is called swapping. The medium-term
scheduler is responsible for suspending and resuming
processes.
Swapping temporarily reduces the degree of
multiprogramming, but it is necessary to maintain a
good mix of processes in the ready queue.

Context Switching
Context switching is the mechanism of storing and
restoring the state (context) of the CPU in the process
control block so that a process's execution can be
resumed from the same point at a later time. Using
this technique, a context switcher enables multiple
processes to share a single CPU; context switching is
an essential feature of a multitasking operating
system.
When the scheduler switches the CPU from executing
one process to executing another, the state of the
currently running process is stored into its process
control block. The state of the process to run next is
then loaded from its own PCB and used to set the
program counter, registers, and so on. At that point,
the second process can start executing.

Context switches are computationally intensive, since
register and memory state must be saved and
restored. To reduce context-switching time, some
hardware systems employ two or more sets of
processor registers. When a process is switched out,
the following information is stored for later use (a
user-space sketch follows the list):
 Program counter
 Scheduling information
 Base and limit register values
 Currently used registers
 Changed process state
 I/O state information
 Accounting information
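Context switching can be illustrated in user space
with the POSIX <ucontext.h> API, which saves and
restores exactly this kind of register and stack
state. The sketch below is an analogy to what the
kernel does, not kernel code; the API is obsolescent
in POSIX but still widely available on Linux:

/* User-space context switching with <ucontext.h>. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];   /* private stack for the task */

static void task(void) {
    printf("task: first run\n");
    swapcontext(&task_ctx, &main_ctx);   /* save task state, resume main */
    printf("task: resumed from saved context\n");
}

int main(void) {
    getcontext(&task_ctx);                   /* initialize the context     */
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;   /* where to go when task ends */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);   /* save main state, run task      */
    printf("main: back after first switch\n");
    swapcontext(&main_ctx, &task_ctx);   /* resume task where it stopped   */
    printf("main: task finished\n");
    return 0;
}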
Mode Switch in OS
A mode switch usually happens when a system call is
made or a fault occurs, i.e., when the processor's
privilege level changes.

A mode switch is necessary when a user process needs
to access something that is accessible only to the
kernel. The executing process does not change during
a mode switch.
A mode switch must occur before a process context
switch can take place, since only the kernel can
perform a context switch.
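A concrete illustration: every system call entails a
mode switch. In this tiny program, the write() call
traps from user mode into kernel mode and back, while
the executing process never changes:

/* write() triggers a mode switch: user mode -> kernel mode
   (the kernel performs the I/O) -> back to user mode. */
#include <unistd.h>

int main(void) {
    write(1, "hello\n", 6);   /* fd 1 is standard output */
    return 0;
}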

2.4 Cooperating Processes
Cooperating processes are processes that depend on
other processes and work together to achieve a
common task in an operating system. They interact
with each other by sharing resources such as CPU,
memory, and I/O devices to complete the task.
The processes in a computer system can be either
independent or cooperating.
A process is independent when no other process
operating on the system can impact it; independent
processes do not share data with other processes. By
contrast, a cooperating process may be affected by
any other process executing on the system, and it
shares data with other processes.

Advantages of Cooperating Processes in an Operating
System
Some advantages of cooperating processes are as
follows:
1. Information Sharing
Cooperating processes can be used to share
information between processes, for example access to
the same files. A mechanism is necessary so that the
processes can access the files concurrently.
2. Modularity
Modularity refers to the division of complex tasks into
smaller subtasks. Different cooperating processes can
complete these smaller subtasks. As a result, the
required tasks are completed more quickly and
efficiently.
3. Computation Speedup
Cooperating processes can work on the subtasks of a
single task simultaneously, so the overall task is
accomplished faster. However, this is only possible
if the system contains several processing elements.
4. Convenience
A user often needs to perform multiple tasks at once,
such as printing, compiling, and editing. It is more
convenient if these activities can be managed through
cooperating processes.

Concurrent execution of cooperating processes
requires mechanisms that enable the processes to
communicate and to synchronize their actions.
Methods of Cooperating Processes
Cooperating processes may coordinate with each
other by sharing data or by exchanging messages. The
methods are given below:
1. Cooperation by sharing
The processes may cooperate by sharing data,
including variables, memory, databases, etc. Critical
sections provide data integrity, and writing is made
mutually exclusive to avoid inconsistent data. For
example, processes P1 and P2 may cooperate through
shared data such as files, databases, variables, and
memory.
2. Cooperation by communication
The cooperating processes may cooperate by
exchanging messages. If every process waits for a
message from another process before executing its
task, it may cause a deadlock; if a process never
receives a message, it may starve. For example,
processes P1 and P2 may cooperate by sending
messages to each other, as in the pipe sketch below.
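A minimal sketch of cooperation by communication,
using a POSIX pipe between a parent and a child
process; the message text is illustrative and error
handling is omitted for brevity:

/* Message passing between two processes via a pipe. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                    /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {           /* child acts as the message sender    */
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                /* parent acts as the receiver         */
    char buf[64];
    read(fd[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                  /* reap the child process              */
    return 0;
}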

To summarize, a cooperating process is one that can
affect or be affected by other processes executing in
the system. Cooperating processes can:
1. Directly share a logical address space (i.e., code
and data), as threads do. This may result in data
inconsistency.
2. Share data only through files or messages. Here
the concern is the orderly execution of
cooperating processes so that data consistency is
maintained.
Example: the producer-consumer problem

There is a producer and a consumer: the producer
produces an item and places it into a buffer, and the
consumer removes and consumes that item. For
example, a print program produces characters that
are consumed by the printer driver; a compiler may
produce assembly code that is consumed by an
assembler; the assembler, in turn, may produce object
modules that are consumed by the loader.
Producer process
while (true)
{
    /* produce an item in next_produced */
    while (counter == buffer_size)
        ;                  /* busy-wait while the buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % buffer_size;
    counter++;
}
Consumer process
while (true)
{
    while (counter == 0)
        ;                  /* busy-wait while the buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % buffer_size;
    counter--;
}
Here,
 the in variable is used by the producer to identify
the next empty slot in the buffer;

 the out variable is used by the consumer to
identify the slot from which it must consume the
next item;
 counter is used by both the producer and the
consumer to track the number of filled slots in the
buffer.

Shared resources:
1. buffer
2. counter
When the producer and consumer are executed
concurrently without any control, inconsistency
arises: the value of counter, which both the producer
and the consumer update, can end up wrong, as the
interleaving below shows. The producer and consumer
processes share the following variables:
var n;
type item = .....;
var buffer : array [0..n-1] of item;
    in, out : 0..n-1;
with the variables in and out initialized to the value 0.
In the shared buffer there are two logical pointers,
in and out, implemented over a circular array. The in
variable points to the next free position in the
buffer and the out variable points to the first full
position. When in = out the buffer is empty, and when
(in + 1) mod n = out the buffer is full.
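To see why counter can end up wrong, note that on most
machines counter++ and counter-- each compile to three
instructions (load, modify, store). Starting from
counter = 5, one possible interleaving of the
producer's counter++ and the consumer's counter-- is:

T0: producer  register1 = counter        (register1 = 5)
T1: producer  register1 = register1 + 1  (register1 = 6)
T2: consumer  register2 = counter        (register2 = 5)
T3: consumer  register2 = register2 - 1  (register2 = 4)
T4: producer  counter = register1        (counter = 6)
T5: consumer  counter = register2        (counter = 4)

The correct final value is 5, but this interleaving
leaves 4 (and swapping T4 and T5 leaves 6). This race
condition is why updates to counter must be made
mutually exclusive.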
2.5 Threads in Operating System (OS)
 A thread is a single sequential flow of execution
of the tasks of a process, so it is also known as a
thread of execution.
 Threads execute inside a process; a process may
contain one or more threads.
 Each thread of the same process has its own
program counter and its own stack of activation
records and control blocks.
 A thread is often referred to as a lightweight
process.

Need for Threads
 It takes far less time to create a new thread in an
existing process than to create a new process.
 Threads share the process's common data, so they
do not need to use inter-process communication
(see the pthread sketch after this list).
 Context switching is faster between threads than
between processes.
 It takes less time to terminate a thread than a
process.
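A minimal sketch of thread creation with POSIX
pthreads (compile with -pthread). The two threads
share the global variable directly, with no
inter-process communication; the joins serialize the
accesses, so no locking is needed in this tiny example:

/* Two threads of one process sharing a global variable. */
#include <stdio.h>
#include <pthread.h>

static int shared = 0;            /* visible to every thread */

static void *worker(void *arg) {
    int id = *(int *)arg;
    shared++;                     /* safe here: accesses are serialized */
    printf("thread %d sees shared = %d\n", id, shared);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t1, NULL);       /* wait before starting the next one */
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t2, NULL);
    return 0;
}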

Difference between Process and thread

Types of Threads
In the operating system, there are two types of
threads:
 Kernel-level threads
 User-level threads
User-level threads
 The operating system does not recognize user-
level threads; the kernel knows nothing about
them.
 User-level threads are easy to implement, and
they are implemented entirely in user space.
 If a user-level thread performs a blocking
operation, the whole process is blocked.

Advantages of User-level threads
 User-level threads are easier to implement than
kernel-level threads.
 User-level threads can be used on operating
systems that do not support threads at the kernel
level.
 They are fast and efficient; context-switch time
is shorter than for kernel-level threads.
 They do not require modifications to the
operating system.
Disadvantages of User-level threads
 User-level threads lack coordination between the
thread and the kernel.
 If a thread causes a page fault, the entire process
is blocked.

Kernel-level threads
 The operating system recognizes kernel-level
threads: there is a thread control block for each
thread in addition to a PCB for each process.
 Kernel-level threads are implemented by the
operating system; the kernel knows about all the
threads and manages them.
 The OS offers system calls to create and manage
kernel-level threads from user space.
 Kernel threads are more difficult to implement
than user threads.

Advantages of Kernel-level threads
 The kernel is fully aware of all threads.

 The scheduler may decide to give more CPU time
to a process that has a large number of threads.
 Kernel-level threads are good for applications
that block frequently.
Disadvantages of Kernel-level threads
 The kernel must manage and schedule all threads,
which adds overhead.
 Kernel threads are more difficult to implement
than user threads.
 Kernel-level threads are slower to create and
switch than user-level threads.
Benefits of Threads
 Enhanced throughput of the system: When a
process is split into many threads, and each
thread is treated as a job, the number of jobs
completed per unit time increases, so system
throughput increases.
 Effective utilization of multiprocessor systems:
When a process has more than one thread, the
threads can be scheduled on more than one
processor.
 Faster context switch: Context switching between
threads takes less time than context switching
between processes, which carries more overhead
for the CPU.

 Responsiveness: When a process is split into
several threads, the process can respond as soon
as any one of its threads completes its work.
 Communication: Communication between threads
is simple because the threads share the same
address space, whereas communication between
processes requires dedicated inter-process
communication mechanisms.
 Resource sharing: Resources such as code, data,
and files can be shared among all threads within
a process.

Difference between user level thread and kernel
level thread

Multithreading Models
 Multithreading allows an application to divide its
task into individual threads.
 In multithreading, the same process or task is
carried out by a number of threads; with the use
of multithreading, multitasking can be achieved.
One-to-one multithreading model
 In this model there is a one-to-one relationship
between user threads and kernel threads, so
multiple threads can run in parallel on multiple
processors. The drawback is that creating a user
thread requires creating a corresponding kernel
thread.

 Because each user thread is backed by a different
kernel thread, if one user thread makes a blocking
system call, the other user threads are not
blocked.

Many-to-one multithreading model
 In this model, multiple user threads are mapped to
one kernel thread. When any user thread makes a
blocking system call, the entire process blocks.
Because there is only one kernel thread and only
one user thread can access the kernel at a time,
the threads cannot run on multiple processors
simultaneously.
 Thread management is done at the user level, so it
is more efficient.

Many-to-many multithreading model
 In this model, multiple user threads are
multiplexed onto an equal or smaller number of
kernel-level threads.
 The number of kernel-level threads is specific to
the machine. The advantage of this model is that
if one user thread blocks, another user thread can
be scheduled onto a different kernel thread, so
the whole system does not block when a particular
thread blocks.

Operating System Scheduling Algorithms
A process scheduler assigns different processes to
the CPU based on particular scheduling algorithms.
There are six main types of process scheduling
algorithms:
 First Come First Serve (FCFS)
 Shortest-Job-First (SJF) Scheduling
 Shortest Remaining Time
 Priority Scheduling
 Round Robin Scheduling
 Multilevel Queue Scheduling

 These algorithms are either non-preemptive or
preemptive.
 Non-preemptive algorithms are designed so that
once a process enters the running state, it cannot
be preempted until it completes its allotted time.
 Preemptive scheduling is based on priority: a
scheduler may preempt a low-priority running
process at any time when a high-priority process
enters the ready state.
First Come First Serve
 FCFS stands for First Come First Serve. It is the
simplest CPU scheduling algorithm.
 In this algorithm, the process that requests the
CPU first is allocated the CPU first. The method
can be managed with a FIFO queue.
 As a process enters the ready queue, its PCB
(Process Control Block) is linked to the tail of
the queue. When the CPU becomes free, the CPU is
assigned to the process at the head of the queue.

Characteristics of the FCFS method
 It is a non-preemptive scheduling algorithm.
 Jobs are always executed on a first-come, first-
served basis.
 It is easy to implement and use.
 However, its performance is poor, and the average
wait time is quite high (see the worked sketch
after this list).
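A short simulation with hypothetical burst times shows
how FCFS waiting times add up. With bursts of 24, 3,
and 3 ms arriving in that order, the waiting times are
0, 24, and 27 ms, for an average of 17 ms; serving the
short jobs first would cut that sharply:

/* FCFS waiting-time calculation for hypothetical bursts. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};   /* CPU bursts in ms, in arrival order */
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total += wait;
        wait += burst[i];       /* the next process waits for this one too */
    }
    printf("average waiting time = %.2f ms\n", (double)total / n);
    return 0;
}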
Shortest Job First
 SJF (Shortest Job First) is a scheduling algorithm
in which the process with the shortest execution
time is selected for execution next.

 This scheduling method can be preemptive or
non-preemptive.
 It significantly reduces the average waiting time
of the processes awaiting execution.

Characteristics of SJF scheduling

 When the CPU is available, the process or job with
the shortest next CPU burst is executed first.
 It is commonly implemented with a non-preemptive
policy.
 The method is useful for batch-type processing,
where waiting for jobs to complete is not critical.
 It improves throughput by running shorter jobs
first, since these mostly have a shorter
turnaround time; a worked example follows.
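A worked example with hypothetical bursts: suppose
four jobs arrive together with CPU bursts of 6, 8, 7,
and 3 ms. SJF runs them in the order P4 (3), P1 (6),
P3 (7), P2 (8), giving waiting times of 3, 16, 9, and
0 ms and an average of (3 + 16 + 9 + 0) / 4 = 7 ms.
FCFS on the same jobs would average 10.25 ms.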
Shortest Remaining Time
 SRT stands for Shortest Remaining Time.
 It is also known as preemptive SJF scheduling.
 In this method, the CPU is allocated to the
process that is closest to completion.
 A newly arrived process with a shorter remaining
time can preempt the currently running process.
Characteristics of the SRT scheduling method
 This method is mostly applied in batch
environments where short jobs need to be given
preference.
 It is not an ideal method for a shared system
where the required CPU time is unknown; the
worked example below illustrates preemption.
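A worked example with hypothetical arrivals: P1
arrives at t = 0 with an 8 ms burst, P2 at t = 1
(4 ms), P3 at t = 2 (9 ms), and P4 at t = 3 (5 ms).
P1 runs from 0 to 1 and is preempted by P2, whose
4 ms burst is shorter than P1's remaining 7 ms; P2
runs from 1 to 5, P4 from 5 to 10, P1 resumes from 10
to 17, and P3 runs from 17 to 26. The waiting times
are 9, 0, 15, and 2 ms, an average of 26 / 4 = 6.5 ms.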
Priority Based Scheduling

 Priority scheduling is, in its simplest form, a
non-preemptive algorithm and one of the most
common scheduling algorithms in batch systems.
 Each process is assigned a priority; the process
with the highest priority is executed first, and
so on.
 Processes with the same priority are executed on
a first-come, first-served basis.
 Priority can be decided based on memory
requirements, time requirements, or any other
resource requirement, as in the worked example
below.
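A worked example with hypothetical values: five
processes arrive together with bursts of 10, 1, 2, 1,
and 5 ms and priorities 3, 1, 4, 5, and 2 (a smaller
number meaning higher priority). They run in the
order P2, P5, P1, P3, P4, giving waiting times of 6,
0, 16, 18, and 1 ms, an average of 41 / 5 = 8.2 ms.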

Round Robin Scheduling
 Round Robin is a preemptive process scheduling
algorithm.
 Each process is given a fixed time to execute,
called a quantum (or time slice).
 Once a process has executed for its quantum, it is
preempted, and another process executes for its
own quantum.
 Context switching is used to save the states of
preempted processes; a simulation sketch follows.
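The sketch below simulates Round Robin for
hypothetical bursts of 24, 3, and 3 ms with a 4 ms
quantum, all processes arriving at t = 0. Scanning the
processes cyclically is equivalent to a FIFO ready
queue in this simple case; the waiting times come out
to 6, 4, and 7 ms:

/* Round Robin simulation; all processes arrive at t = 0. */
#include <stdio.h>

int main(void) {
    int burst[]  = {24, 3, 3};       /* total CPU demand, ms */
    int remain[] = {24, 3, 3};       /* work still to do     */
    int finish[3];
    int n = 3, q = 4, t = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0) continue;
            int slice = remain[i] < q ? remain[i] : q;
            t += slice;              /* process i runs one quantum (or less) */
            remain[i] -= slice;
            if (remain[i] == 0) { finish[i] = t; done++; }
        }
    }
    for (int i = 0; i < n; i++)      /* waiting = turnaround - burst */
        printf("P%d waiting time = %d ms\n", i + 1, finish[i] - burst[i]);
    return 0;
}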

Multiple-Level Queues Scheduling
 This algorithm partitions the ready queue into
several separate queues.
 Processes are assigned to a queue based on a
specific property of the process, such as process
priority or memory size.
 However, this is not an independent scheduling
algorithm: it relies on other algorithms to
schedule the jobs within each queue.
Characteristics of Multiple-Level Queues Scheduling
 Multiple queues are maintained for processes with
common characteristics.
 Each queue may have its own scheduling
algorithm.

 Priorities are assigned to the queues themselves.
Purpose of a Scheduling Algorithm
 The OS uses scheduling to improve CPU efficiency.
 It allocates resources among competing processes.
 Maximum CPU utilization is obtained through
multiprogramming.
 The processes waiting to be executed are kept in
the ready queue.