Operating System ppt part 2 - Process Management Concepts as per NEP syllabus by Dr. K. Adisesha
Introduction to Operating Systems - 2
Prof. K. Adisesha
Process management
➢Process Concept
➢Threads
➢Process Synchronization
➢CPU Scheduling Algorithm
➢Inter-process Communication
Introduction
Introduction to Process:
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
➢When a program is loaded into the main memory it becomes a process.
➢A process in memory can be divided into four sections:
❖Stack: The process Stack contains the temporary data such as
method/function parameters, return address and local variables
❖Heap: This is dynamically allocated memory to a process during its run
time
❖Text: This includes the current activity represented by the value of
Program Counter and the contents of the processor's registers.
❖Data: This section contains the global and static variables.
Process Life Cycle:
When a process executes, it passes through different states. These stages may differ in
different operating systems, and the names of these states are also not standardized.
➢A process can have one of the following five states at a time.
❖Start
❖Ready
❖Running
❖Waiting
❖Terminated or Exit
Process Life Cycle:
➢Start: This is the initial state when a process is first started/created.
➢Ready: The process is waiting to be assigned to a processor. Ready processes are
waiting to have the processor allocated to them by the OS so that they can run.
➢Running: Once the process has been assigned to a processor by the OS scheduler, the
process state is set to running and the processor executes its instructions.
➢Waiting: Process moves into the waiting state if it needs to wait for a resource, such as
waiting for user input, or waiting for a file to become available.
➢Terminated or Exit: Once the process finishes its execution, or it is terminated by the
operating system, it is moved to the terminated state where it waits to be removed from
main memory.
Process Control Block
Process Control Block (PCB):
A Process Control Block is a data structure maintained by the Operating System for
every process.
➢The PCB is identified by an integer process ID (PID).
➢A PCB keeps all the information needed to keep track of
a process.
➢The PCB is maintained for a process throughout its
lifetime, and is deleted once the process terminates.
➢Information associated with each process
❖Process state
❖Program counter
❖CPU registers
❖CPU scheduling information
❖Memory-management information
❖Accounting information
❖I/O status information
CPU Switch From Process to Process: (figure showing the CPU saving the state of one process into its PCB and reloading the saved state of the next)
Process Communication
Context Switch:
When CPU switches to another process, the system must save the state of the old
process and load the saved state for the new process via a context switch.
➢Context of a process represented in the PCB
➢Context-switch time is overhead; the system does no useful work while switching
❖The more complex the OS and the PCB -> longer the context switch
➢Time dependent on hardware support
❖Some hardware provides multiple sets of registers per CPU -> multiple contexts
loaded at once
Process Creation:
A parent process creates children processes, which, in turn, create other processes, forming a tree of processes.
➢Generally, process identified and managed via a process identifier (pid)
➢Resource sharing
❖Parent and children share all resources
❖Children share subset of parent’s resources
❖Parent and child share no resources
➢Execution
❖Parent and children execute concurrently
❖Parent waits until children terminate
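The creation and "parent waits" execution models above can be sketched with the POSIX fork() call; a minimal Python sketch assuming a POSIX system (the helper name spawn_child is illustrative):

```python
import os

def spawn_child() -> int:
    """Fork a child; the parent waits until the child terminates."""
    pid = os.fork()              # returns 0 in the child, the child's PID in the parent
    if pid == 0:
        # Child process: runs concurrently with the parent until it exits
        os._exit(7)              # terminate the child with exit status 7
    # Parent process: wait until the child terminates
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)   # extract the child's exit status

if __name__ == "__main__":
    print("child exit status:", spawn_child())
```

After fork(), parent and child have separate copies of the address space, which matches the "parent and child share no resources" option above.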
Process Termination:
Process executes last statement and asks the operating system to delete it (exit).
➢Output data from child to parent (via wait)
➢Process’ resources are deallocated by operating system
➢Parent may terminate execution of children processes (abort)
❖Child has exceeded allocated resources
❖Task assigned to child is no longer required
❖If parent is exiting
▪Some operating systems do not allow child to continue if its parent terminates
▪All children terminated - cascading termination
Inter process Communication (IPC):
Processes within a system may be independent or cooperating; a cooperating process can affect or be affected by other processes, including by sharing data.
➢Reasons for cooperating processes:
❖Information sharing
❖Computation speedup
❖Modularity
❖Convenience
➢Cooperating processes need interprocess communication (IPC)
➢Two models of IPC
❖Shared memory
❖Message passing
Inter process Communication (IPC):
Inter process communication is a mechanism which allows processes to communicate
with each other and synchronize their actions.
➢The communication between these processes can be seen as a method of co-operation
between them.
➢Some of the methods to provide IPC:
❖Message Queue.
❖Shared Memory.
❖Signal.
❖Shared Files and Pipe
❖Socket
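As an illustration of one of these mechanisms, a pipe can carry a message from a child process to its parent; a minimal sketch assuming a POSIX system (the function name send_via_pipe is illustrative):

```python
import os

def send_via_pipe(message: bytes) -> bytes:
    """The parent reads a message that a child process writes into a pipe."""
    r, w = os.pipe()              # r: read end, w: write end
    pid = os.fork()
    if pid == 0:                  # child: the writer
        os.close(r)
        os.write(w, message)
        os.close(w)
        os._exit(0)
    os.close(w)                   # parent: the reader
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)            # reap the terminated child
    return data

if __name__ == "__main__":
    print(send_via_pipe(b"hello from child"))
```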
Threading
Thread:
A thread is a single sequential flow of execution within a process, so it is also known as a thread of execution or a thread of control.
➢Each thread of the same process makes use of a separate program counter and a stack of
activation records and control blocks.
➢Thread is often referred to as a lightweight process.
➢Need for threads:
❖It takes far less time to create a new thread in an existing process than to create a new
process.
❖Threads share common data, so they do not need to use inter-process communication.
❖Context switching is faster when working with threads.
❖It takes less time to terminate a thread than a process.
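The data-sharing point above can be sketched with Python's threading module (the names results and run_squares are illustrative):

```python
import threading

results = []   # an ordinary object, directly visible to every thread of the process

def square(n: int) -> None:
    # Threads read and write shared data directly -- no pipes, queues, or sockets
    results.append(n * n)

def run_squares(count: int) -> list:
    threads = [threading.Thread(target=square, args=(i,)) for i in range(count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)

if __name__ == "__main__":
    print(run_squares(5))
```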
Thread:
Threads are implemented in the following two ways:
➢User Level Threads − User managed threads.
❖The operating system does not recognize the user-level thread.
❖User-level threads are easy to implement, and they are implemented by the user.
❖If a user performs a user-level thread blocking operation, the whole process is
blocked.
➢Kernel Level Threads − Kernel threads are recognized and managed by the operating system.
❖The system keeps a thread control block for each thread and a process control block for each process.
❖The kernel-level thread is implemented by the operating system.
❖The kernel knows about all the threads and manages them.
Multithreading Models:
Some operating systems provide a combined user-level and kernel-level thread facility.
➢In a combined system, multiple threads within the same application can run in parallel on
multiple processors and a blocking system call need not block the entire process.
➢The idea is to achieve parallelism by dividing a process into multiple threads.
➢For example, in a browser, multiple tabs can be different threads.
➢Multithreading models are three types
❖Many to many relationship
❖Many to one relationship
❖One to one relationship
Thread:
Difference between User-Level and Kernel-Level Threads:
❖User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
❖User-level threads are implemented by a thread library at the user level; kernel threads are created and supported by the operating system.
❖A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
❖Multi-threaded applications built on user-level threads alone cannot take advantage of multiprocessing; kernel routines themselves can be multithreaded.
Advantages of Thread:
The various advantages of using Thread are:
➢Threads minimize the context switching time.
➢Use of threads provides concurrency within a process.
➢Efficient communication.
➢It is more economical to create and context switch threads.
➢Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Swapping
Swapping:
Swapping is a mechanism in which a process can be swapped temporarily out of main
memory to secondary storage and make that memory available to other processes.
➢Swapping is also known as a technique for memory compaction.
➢The total time taken by the swapping process includes the time it takes to move the entire process to secondary storage and then to copy the process back into memory.
Process Scheduling
Process Scheduling:
Maximize CPU use, quickly switch processes onto CPU for time sharing.
➢Process scheduler selects among available processes for next execution on CPU
➢Maintains scheduling queues of processes
❖Job queue – set of all processes in the system
❖Ready queue – set of all processes residing in main memory, ready and waiting to
execute
❖Device queues – set of processes waiting for an I/O device
❖Processes migrate among the various queues
Process Scheduling:
The process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU on the basis of a particular strategy.
➢Process scheduling is an essential part of multiprogramming operating systems, used to select among available processes for the next execution on the CPU.
➢The OS maintains all PCBs in Process Scheduling Queues.
❖Job queue: This queue keeps all the processes in the
system.
❖Ready queue: This queue keeps a set of all processes
residing in main memory, ready and waiting to execute.
❖Device queues: The processes which are blocked due to
unavailability of an I/O device constitute this queue
Representation of Process Scheduling:
Processes can be described as either:
➢I/O-bound process – spends more time
doing I/O than computations, many short
CPU bursts
➢CPU-bound process – spends more time
doing computations; few very long CPU
bursts
Schedulers:
Schedulers are special system software which handle process scheduling in various ways.
➢The main task is to select the jobs to be submitted into the system and to decide which process to run.
➢Schedulers are of three types:
❖Long-Term Scheduler:
▪It is also called a job scheduler.
▪A long-term scheduler determines which programs are admitted to the system for processing.
❖Short-Term Scheduler:
▪It is also called the CPU scheduler.
▪Its main objective is to increase system performance in accordance with the chosen set of criteria.
❖Medium-Term Scheduler:
▪Medium-term scheduling is a part of swapping.
▪It removes the processes from the memory. It reduces the degree of multiprogramming.
Scheduling Criteria:
The scheduler selects the processes in memory that are ready to execute and allocates the CPU based on certain scheduling criteria.
➢Scheduling Criteria are based on:
❖CPU utilization – keep the CPU as busy as possible
❖Throughput – No. of processes that complete their execution per time unit
❖Turnaround time – amount of time to execute a particular process
❖Waiting time – amount of time a process has been waiting in the ready queue
❖Response time – amount of time it takes from when a request was submitted until the
first response is produced, not output (for time-sharing environment)
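Given a Gantt chart, turnaround and waiting times follow directly from these definitions; a small sketch for a non-preemptive schedule (the function name schedule_metrics is illustrative):

```python
def schedule_metrics(arrival, burst, start):
    """Compute completion, turnaround, and waiting times for a non-preemptive schedule.
    arrival[i], burst[i]: arrival time and CPU burst of process i.
    start[i]: time process i first gets the CPU (read off a scheduler's Gantt chart)."""
    completion = [s + b for s, b in zip(start, burst)]          # finish time
    turnaround = [c - a for c, a in zip(completion, arrival)]   # finish - arrival
    waiting = [t - b for t, b in zip(turnaround, burst)]        # turnaround - burst
    return completion, turnaround, waiting

if __name__ == "__main__":
    # Two processes: P0 arrives at 0 (burst 4, starts at 0), P1 arrives at 1 (burst 3, starts at 4)
    print(schedule_metrics([0, 1], [4, 3], [0, 4]))
```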
Scheduling Algorithms
Scheduling algorithms:
A Process Scheduler schedules different processes to be assigned to the CPU based on
particular scheduling algorithms.
➢These algorithms are either non-preemptive or preemptive
➢Popular process scheduling algorithms include:
❖First-Come, First-Served (FCFS) Scheduling
❖Shortest-Job-Next (SJN) Scheduling
❖Priority Scheduling
❖Round Robin(RR) Scheduling
❖Multiple-Level Queues Scheduling.
First Come First Serve (FCFS):
In First Come First Serve (FCFS) scheduling, Jobs are executed on first come, first
serve basis.
➢It is a non-preemptive scheduling algorithm.
➢Easy to understand and implement.
➢Its implementation is based on a FIFO queue.
➢Poor in performance, as the average wait time is high.
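The FIFO behaviour and the resulting waiting times can be seen in a short simulation (a sketch; the name fcfs is illustrative):

```python
def fcfs(arrival, burst):
    """First-Come, First-Served: run processes in arrival order, no preemption.
    Returns per-process waiting times and the average waiting time."""
    order = sorted(range(len(arrival)), key=lambda i: arrival[i])
    clock, waiting = 0, [0] * len(arrival)
    for i in order:
        clock = max(clock, arrival[i])     # CPU may sit idle until the job arrives
        waiting[i] = clock - arrival[i]    # time spent in the ready (FIFO) queue
        clock += burst[i]                  # run the job to completion
    return waiting, sum(waiting) / len(waiting)

if __name__ == "__main__":
    print(fcfs([0, 1, 2], [5, 3, 8]))
```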
Shortest Job Next (SJN):
This is also known as shortest job first: associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time.
➢This is a non-preemptive algorithm; its preemptive variant is known as Shortest Remaining Time First.
➢Best approach to minimize waiting time.
➢Easy to implement in batch systems where the required CPU time is known in advance.
➢Impossible to implement in interactive systems where the required CPU time is not known.
➢The processor must know in advance how much time the process will take.
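The selection rule above can be sketched as a short simulation (the name sjn is illustrative; burst lengths are assumed known, as the bullets require):

```python
def sjn(arrival, burst):
    """Shortest Job Next (non-preemptive): among the arrived jobs, run the
    one with the shortest CPU burst to completion. Returns waiting times."""
    n = len(arrival)
    done, clock, waiting = [False] * n, 0, [0] * n
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= clock]
        if not ready:                       # CPU idle: jump to the next arrival
            clock = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= clock]
        i = min(ready, key=lambda j: burst[j])   # shortest burst first
        waiting[i] = clock - arrival[i]
        clock += burst[i]
        done[i] = True
    return waiting

if __name__ == "__main__":
    print(sjn([0, 0, 0], [6, 8, 3]))
```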
Priority Scheduling:
Priority scheduling is a priority based algorithm and one of the most common
scheduling algorithms in batch systems.
➢Each process is assigned a priority. Process with highest priority is to be executed first
and so on.
➢Processes with same priority are executed on first come first served basis.
➢Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
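These rules (highest priority first, ties broken first-come first-served) can be sketched as follows; the name priority_schedule is illustrative, and a lower number is assumed to mean a higher priority:

```python
def priority_schedule(arrival, burst, priority):
    """Non-preemptive priority scheduling: among arrived jobs, run the one with
    the highest priority (lowest number); ties go to the earlier arrival."""
    n = len(arrival)
    done, clock, waiting = [False] * n, 0, [0] * n
    for _ in range(n):
        ready = [i for i in range(n) if not done[i] and arrival[i] <= clock]
        if not ready:                       # CPU idle until the next arrival
            clock = min(arrival[i] for i in range(n) if not done[i])
            ready = [i for i in range(n) if not done[i] and arrival[i] <= clock]
        i = min(ready, key=lambda j: (priority[j], arrival[j]))
        waiting[i] = clock - arrival[i]
        clock += burst[i]
        done[i] = True
    return waiting

if __name__ == "__main__":
    print(priority_schedule([0, 0, 0], [4, 2, 1], [2, 1, 3]))
```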
Round Robin Scheduling:
Each process gets a small unit of CPU time (time quantum), after this time has elapsed,
the process is preempted and added to the end of the ready queue.
➢Round Robin is a preemptive process scheduling algorithm.
➢Each process is provided a fixed time to execute, called a quantum.
➢Once a process is executed for a given time period, it is preempted and other process
executes for a given time period.
➢Context switching is used to save states of preempted processes.
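The quantum-and-requeue behaviour can be sketched as a short simulation (the name round_robin is illustrative; all processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(burst, quantum):
    """Round Robin: each process runs for at most one quantum, then is
    preempted and rejoins the tail of the ready queue."""
    n = len(burst)
    remaining = list(burst)
    queue = deque(range(n))
    clock, finish = 0, [0] * n
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run one time slice (or less)
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # preempted: back to the end of the queue
        else:
            finish[i] = clock              # completed
    waiting = [finish[i] - burst[i] for i in range(n)]  # all arrivals at time 0
    return finish, waiting

if __name__ == "__main__":
    print(round_robin([5, 3], 2))
```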
Multiple-Level Queues Scheduling:
Multiple-level queues are not an independent scheduling algorithm.
➢They make use of other existing algorithms to group and schedule jobs with common
characteristics.
❖Multiple queues are maintained for processes with common characteristics.
❖Each queue can have its own scheduling algorithms.
❖Priorities are assigned to each queue.
Multiprocessor Scheduling
Multiprocessor Scheduling:
Multiple processor scheduling, or multiprocessor scheduling, focuses on designing the scheduling function for a system that contains more than one processor.
➢In multiple-processor scheduling multiple CPU’s are available and hence Load Sharing
becomes possible.
➢In multi-core processors, multiple processor cores are placed on the same physical chip.
➢There are two approaches to multiple processor scheduling in the operating system:
❖Symmetric Multiprocessing: It is used where each processor is self-scheduling. All
processes may be in a common ready queue, or each processor may have its private queue
for ready processes.
❖Asymmetric Multiprocessing: It is used when all the scheduling decisions and I/O
processing are handled by a single processor called the Master Server.
Thread Scheduling
Thread Scheduling:
Lightweight processes (LWP) are threads in user space that act as an interface for user-level threads (ULT) to access the physical CPU resources.
➢Threads share common data, so they do not need to use inter-process communication.
➢Context switching is faster when working with threads, and it takes less time to terminate a thread than a process.
➢Scheduling of threads involves two levels of boundary scheduling:
❖Scheduling of user-level threads (ULT) to kernel-level threads (KLT) via
lightweight process (LWP) by the application developer.
❖Scheduling of kernel-level threads by the system scheduler to perform different
unique OS functions.
Real Time Operating System
Real-time operating systems (RTOS):
Real-time systems are systems that carry real-time tasks. These tasks need to be
performed immediately with a certain degree of urgency.
➢This system is time-bound and has a fixed deadline. The processing in this type of system must occur within the specified constraints; otherwise, it will lead to system failure.
➢RTOS applications include industrial control, telephone switching equipment, flight control, and real-time simulations.
➢The real-time operating systems can be of 2 types –
➢Hard Real-Time Operating System: These operating systems guarantee that critical tasks are
completed within a range of time.
➢Soft real-time operating system: This operating system provides some relaxation in the time limit.
Process Synchronization
Process Synchronization:
Process Synchronization means sharing system resources by processes in such a way that concurrent access to shared data is handled, thereby minimizing the chance of inconsistent data.
➢Process Synchronization ensures proper coordination among processes.
➢Maintaining data consistency demands mechanisms to ensure synchronized execution
of cooperating processes.
➢Process Synchronization can be provided by using several different tools like:
❖Semaphores
❖Mutual Exclusion or Mutex
❖Monitor
➢The process synchronization problem arises with cooperative processes because they share resources.
➢On the basis of synchronization, processes are categorized as one of the following two types:
❖Independent Process: Execution of one process does not affect the execution of other processes.
❖Cooperative Process: Execution of one process affects the execution of other
processes.
➢There are a number of benefits to process synchronization, including:
❖Data integrity: Process synchronization helps to ensure that data is not corrupted
when multiple processes are accessing it.
❖Resource efficiency: Process synchronization helps to ensure that resources are not
wasted when multiple processes are trying to use them at the same time.
❖Increased performance: Process synchronization can help to improve the performance
of applications by reducing the amount of time that processes spend waiting for
resources.
Race Condition:
A race condition is a situation that develops when a device or system tries to perform two or more operations at the same time.
➢When several processes access and manipulate the same data at the same time, they may enter into a race condition.
➢Race conditions occur among processes that share common storage for reading and writing.
➢Race conditions occur due to improper synchronization of shared memory access.
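The classic lost-update race can be shown deterministically by simulating an unlucky interleaving of two non-atomic "read, then write" increments (the name lost_update_demo is illustrative):

```python
def lost_update_demo() -> int:
    """Simulate two processes each doing a non-atomic 'counter = counter + 1'
    with an unlucky interleaving: both read the old value before either
    writes back, so one of the two updates is lost."""
    counter = 0
    a = counter        # process A reads counter (sees 0)
    b = counter        # process B reads counter (also sees 0), before A writes
    counter = a + 1    # A writes back 1
    counter = b + 1    # B overwrites with 1 -- A's increment is lost
    return counter     # 1, even though two increments "ran"

if __name__ == "__main__":
    print(lost_update_demo())
```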
Critical section problem:
The critical section problem in operating systems is an issue that arises when shared
resources are accessed by concurrent processes.
➢Critical section is a code segment that can be accessed by only one process at a time.
➢Critical section contains shared variables which need to be synchronized to maintain
consistency of data variables.
➢Any solution to the critical section problem must satisfy three requirements:
❖Mutual Exclusion
❖Progress
❖Bounded Waiting
Critical section problem:
In order to synchronize the cooperative processes, our main task is to solve the critical
section problem.
➢Mutual Exclusion: By Mutual Exclusion, we mean that if one process is executing
inside critical section then the other process must not enter in the critical section.
➢Progress: If no process is executing in its critical section, a process that does not wish to enter should not prevent other processes from getting into the critical section.
➢Bounded Waiting: There must be a bound on the waiting time for every process to get into the critical section; no process should wait endlessly to enter.
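One way to satisfy mutual exclusion in practice is to guard the critical section with a mutex lock; a minimal Python sketch (the name increment_many is illustrative):

```python
import threading

def increment_many(num_threads: int = 4, per_thread: int = 1000) -> int:
    """Guard the critical section ('counter += 1') with a mutex so that only
    one thread at a time executes it (mutual exclusion)."""
    counter = 0
    mutex = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(per_thread):
            with mutex:        # entry section: acquire the lock
                counter += 1   # critical section; lock released on exiting 'with'

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

if __name__ == "__main__":
    print(increment_many())
```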
Semaphores:
A semaphore is a signaling mechanism and a thread that is waiting on a semaphore
can be signaled by another thread.
➢A semaphore uses two atomic operations, wait and signal for process synchronization.
➢Classical problems of Synchronization with Semaphore Solution:
❖Bounded-buffer (or Producer-Consumer) Problem
❖Dining- Philosophers Problem
❖Readers and Writers Problem
❖Sleeping Barber Problem
Bounded-buffer (or Producer-Consumer) Problem:
The Bounded Buffer problem is also called the Producer-Consumer problem.
➢Solution to this problem is, creating two counting semaphores “full” and “empty” to
keep track of the current number of full and empty buffers respectively.
➢Producers produce items and consumers consume them, but both use one of the buffer slots at a time.
Bounded-buffer (or Producer-Consumer) Problem:
One solution of this problem is to use semaphores. The semaphores which will be used
here are:
➢mutex, a binary semaphore which is used to acquire and release the lock.
➢empty, a counting semaphore whose initial value is the number of slots in the buffer,
since, initially all slots are empty.
➢full, a counting semaphore whose initial value is 0.
Bounded-buffer (or Producer-Consumer) Problem:
The Producer operation:

do {
    // wait until empty > 0 and then decrement 'empty'
    wait(empty);
    // acquire lock
    wait(mutex);
    /* perform the insert operation in a slot */
    // release lock
    signal(mutex);
    // increment 'full'
    signal(full);
} while(TRUE);

The Consumer operation:

do {
    // wait until full > 0 and then decrement 'full'
    wait(full);
    // acquire the lock
    wait(mutex);
    /* perform the remove operation in a slot */
    // release the lock
    signal(mutex);
    // increment 'empty'
    signal(empty);
} while(TRUE);
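The pseudocode above maps directly onto Python's threading.Semaphore; a runnable sketch (the buffer size and the name run_bounded_buffer are illustrative):

```python
import threading
from collections import deque

BUFFER_SIZE = 3
buffer = deque()
mutex = threading.Semaphore(1)            # binary semaphore guarding the buffer
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts filled slots

def producer(items):
    for item in items:
        empty.acquire()       # wait(empty)
        mutex.acquire()       # wait(mutex)
        buffer.append(item)   # insert into a slot
        mutex.release()       # signal(mutex)
        full.release()        # signal(full)

def consumer(count, out):
    for _ in range(count):
        full.acquire()        # wait(full)
        mutex.acquire()       # wait(mutex)
        out.append(buffer.popleft())   # remove from a slot
        mutex.release()       # signal(mutex)
        empty.release()       # signal(empty)

def run_bounded_buffer(items):
    consumed = []
    p = threading.Thread(target=producer, args=(items,))
    c = threading.Thread(target=consumer, args=(len(items), consumed))
    p.start(); c.start()
    p.join(); c.join()
    return consumed

if __name__ == "__main__":
    print(run_bounded_buffer([1, 2, 3, 4, 5]))
```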
Dining- Philosophers Problem:
The Dining Philosophers Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers.
➢There is one chopstick between each pair of philosophers.
➢A philosopher may eat if he can pick up the two chopsticks adjacent to him.
➢A chopstick may be picked up by either of its adjacent philosophers, but not by both.
➢This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.
Dining- Philosophers Problem:
Semaphore Solution to Dining Philosophers.
➢There are three states of the philosopher: THINKING, HUNGRY, and EATING.
➢Here there are two semaphores: a mutex and a semaphore array for the philosophers.
➢The mutex is used so that no two philosophers may pick up or put down chopsticks at the same time.
➢The array is used to control the behavior of each philosopher.

process P[i]:
    while true do {
        THINK;
        PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
        EAT;
        PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) mod 5]);
    }
Readers and Writers Problem:
The reader-writer problem is a classic synchronization problem in operating systems
where multiple processes require access to a shared resource.
➢In this problem, some processes may only read the resource while others may write to it.
➢Problem parameters:
❖One set of data is shared among a number of processes.
❖Once a writer is ready, it performs its write. Only one writer
may write at a time.
❖If a process is writing, no other process can read it.
❖If at least one reader is reading, no other process can write.
❖Readers may not write and only read.
Readers and Writers Problem:
The solution of readers and writers can be implemented using binary semaphores.
➢We use two binary semaphores, "w" (write) and "m" (mutex), where a binary semaphore is an integer variable that takes only the values 0 and 1.
➢Functions for semaphore :
❖wait() : decrements the semaphore value.
❖signal() : increments the semaphore value.
➢Writer process:
❖The writer requests entry to the critical section.
❖If allowed, i.e. wait() returns, it enters and performs the write; if not allowed, it keeps on waiting.
❖It exits the critical section.

do {
    // writer requests the critical section
    wait(w);
    // performs the write
    // leaves the critical section
    signal(w);
} while(true);
Readers and Writers Problem:
Reader process:
➢Reader requests the entry to critical section.
➢If allowed:
❖it increments the count of number of readers inside the critical
section. If this reader is the first reader entering, it locks the w
semaphore to restrict the entry of writers if any reader is inside.
❖It then signals mutex, as any other reader is allowed to enter while others are already reading.
❖After performing reading, it exits the critical section. When
exiting, it checks if no more reader is inside, it signals the
semaphore “w” as now, writer can enter the critical section.
➢If not allowed, it keeps on waiting.
while(TRUE)
{
    // acquire lock
    wait(m);
    read_count++;
    if(read_count == 1)
        wait(w);
    // release lock
    signal(m);

    /* perform the reading operation */

    // acquire lock
    wait(m);
    read_count--;
    if(read_count == 0)
        signal(w);
    // release lock
    signal(m);
}
Sleeping Barber Problem:
The analogy is based upon a hypothetical barber shop with one barber: the shop has one barber, one barber chair, and n chairs for waiting customers.
➢Barber shop with one barber, one barber chair and N chairs to wait in.
➢When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in.
➢While the barber is cutting hair, new customers take the empty seats to wait, or leave if there is no vacancy.
Sleeping Barber Problem:
In the solution, we use three semaphores:
➢One for customers, which counts the number of waiting
customers, excludes the customer in the barber chair
since he isn’t waiting.
➢Another one for the situation of the barber, whether he’s
waiting or busy.
➢And the last one for mutual exclusion, since there are some critical sections that should be thread-safe.
➢We can start with the semaphores and some constants, such as the number of chairs in the waiting room.
➢When the barber comes to the barbershop, he executes the barber routine. Since there may be no customers, he waits on the customers semaphore until a customer arrives; this provides one of the important synchronization conditions.
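The three semaphores above can be sketched in Python; this is a minimal sketch (all names, such as run_shop, are illustrative) in which customers arrive one at a time, so no one is turned away:

```python
import threading

WAITING_CHAIRS = 3                       # n chairs in the waiting room
customers = threading.Semaphore(0)       # counts waiting customers; barber sleeps on it
barber_ready = threading.Semaphore(0)    # a customer waits here for the barber chair
mutex = threading.Lock()                 # protects the 'waiting' counter
waiting = 0

def barber(total_customers: int) -> None:
    global waiting
    for _ in range(total_customers):
        customers.acquire()              # sleep until a customer arrives
        with mutex:
            waiting -= 1                 # take one customer out of the waiting room
        barber_ready.release()           # cut this customer's hair

def customer(name: int, log: list) -> None:
    global waiting
    with mutex:
        if waiting >= WAITING_CHAIRS:
            log.append((name, "left"))   # no free chair: leave
            return
        waiting += 1
    customers.release()                  # wake the barber if he is asleep
    barber_ready.acquire()               # wait for the haircut
    log.append((name, "cut"))

def run_shop(n: int) -> list:
    log = []
    b = threading.Thread(target=barber, args=(n,))
    b.start()
    for i in range(n):                   # customers arrive one at a time
        customer(i, log)
    b.join()
    return log

if __name__ == "__main__":
    print(run_shop(3))
```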
Deadlocks
Deadlock:
Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
➢In operating systems, deadlock arises when two or more processes hold some resources and wait for resources held by the other(s).
➢Example:
❖Process 1 is holding Resource 1 and waiting for resource 2
which is acquired by process 2, and process 2 is waiting
for resource 1.
Example of a Deadlock:
Deadlocks occur when multiple tasks or threads cannot make progress because each
task is waiting for a lock held by another task that is also stuck.
➢Suppose that two tasks, Task 1 and Task 2, write
values to the shared resources, sharedVar1 and
sharedVar2.
➢The write operations are protected with two mutual
exclusion locks, Mutex #1 and Mutex #2, held in
sequence.
➢The important thing to note is that the sequence of
lock acquisition is reversed in the two tasks.
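A common fix for the reversed-order problem described above is to make every task acquire the locks in the same order; a Python sketch (the names mirror the sharedVar and mutex names in the example, and run_tasks is illustrative):

```python
import threading

mutex1, mutex2 = threading.Lock(), threading.Lock()
shared_var1, shared_var2 = 0, 0

def task(value: int) -> None:
    """Acquire the locks in ONE fixed order (mutex1, then mutex2).
    Because no task ever holds mutex2 while waiting for mutex1, the
    circular-wait condition cannot arise, so the tasks cannot deadlock."""
    global shared_var1, shared_var2
    with mutex1:
        with mutex2:
            shared_var1 = value
            shared_var2 = value

def run_tasks() -> bool:
    t1 = threading.Thread(target=task, args=(1,))
    t2 = threading.Thread(target=task, args=(2,))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return shared_var1 == shared_var2    # the pair is always updated together

if __name__ == "__main__":
    print(run_tasks())
```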
Difference between Starvation and Deadlock:
❖Deadlock is a situation where a set of processes is blocked and no process proceeds; starvation is a situation where low-priority processes are blocked while high-priority processes proceed.
❖Deadlock is an infinite wait; starvation is a long wait, but not infinite.
❖Every deadlock implies starvation, but every starvation need not be a deadlock.
❖In deadlock, the requested resource is blocked by another process; in starvation, the requested resource is continuously used by higher-priority processes.
❖Deadlock happens when mutual exclusion, hold and wait, no preemption, and circular wait occur simultaneously; starvation occurs due to uncontrolled priority and resource management.
Deadlock:
Necessary conditions for Deadlocks:
➢Mutual Exclusion: A resource can only be shared in a mutually exclusive manner; two
processes cannot use the same resource at the same time.
➢Hold and Wait: A process waits for some resources while holding another resource at the
same time.
➢No Preemption: A resource cannot be forcibly taken away from a process; the process
releases it only voluntarily, after completing its task.
➢Circular Wait: All the processes wait for resources in a cyclic manner, so that
the last process is waiting for a resource held by the first process.
Mutual Exclusion:
A resource can only be shared in a mutually exclusive manner; two processes
cannot use the same resource at the same time.
➢This condition requires that at least one resource be held in a non-shareable mode,
which means that only one process can use the resource at any given time.
➢In our scenario, both Resource 1 and Resource 2 are non-shareable, and only one
process can have exclusive access to each resource at any given time.
➢As an example:
❖Process P1 obtains Resource 1.
❖Process P2 acquires Resource 2.
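Mutual exclusion is exactly what a lock provides. A small illustrative sketch (the names `resource_lock`, `use_resource`, and `log` are hypothetical), showing that two threads can never be inside the protected critical section at the same time:

```python
import threading

resource_lock = threading.Lock()   # models a non-shareable resource
log = []

def use_resource(name):
    with resource_lock:            # only one thread may hold the resource
        log.append(f"{name} start")
        log.append(f"{name} end")

t1 = threading.Thread(target=use_resource, args=("P1",))
t2 = threading.Thread(target=use_resource, args=("P2",))
t1.start(); t2.start()
t1.join(); t2.join()

# Because access is mutually exclusive, each "start" entry is immediately
# followed by the matching "end" -- the critical sections never interleave.
print(all(log[i].split()[0] == log[i + 1].split()[0] for i in range(0, 4, 2)))
```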
Hold and Wait:
A process waits for some resources while holding another resource at the same time.
➢The hold-and-wait condition specifies that a process must be holding at least one
resource while waiting for resources that are currently held
by other processes.
➢In our example:
❖Process P2 holds one resource (R1) while requesting
another resource (R2) that is already acquired.
No preemption:
A resource, once allocated, cannot be forcibly taken away from a process; it is
released only when the process is finished with it.
➢Preemption is the act of taking a resource from a process before it has finished its task.
➢According to the no-preemption condition, resources cannot be taken forcibly from a
process; a process can only release resources voluntarily after completing its task.
❖In our scenario, neither Process P1 nor Process P2 can be forced to release the
resources in their possession.
➢The processes can only release resources voluntarily.
Circular Wait:
All the processes wait for resources in a cyclic manner, so that the last
process is waiting for the resource which is being held by the first process.
➢This condition implies that a circular chain of processes must exist, with each process
waiting for a resource held by the next process in the chain.
➢In our scenario:
❖A set of waiting processes {P1, P2, P3} exists such
that P1 is waiting for a resource held by P2, and P2 is
waiting for a resource held by P3.
❖Likewise, P3 is waiting for a resource held by P1, closing the cycle.
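A circular wait like {P1, P2, P3} can be detected mechanically by searching for a cycle in the wait-for graph. A minimal sketch (the function name and the dictionary encoding of the graph are illustrative assumptions, not from the slides):

```python
def has_cycle(wait_for):
    """Detect circular wait via depth-first search with three colours.

    `wait_for` maps each process to the list of processes it waits on.
    A GREY node reached again on the current DFS path is a back edge,
    i.e. a cycle (a circular wait).
    """
    WHITE, GREY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GREY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GREY:
                return True                 # back edge: circular wait found
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 -> P2 -> P3 -> P1 reproduces the circular wait from the slide.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))       # False
```

This is the essence of deadlock detection for single-instance resources, one of the handling strategies described on the next slide.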
Deadlock:
Methods for handling deadlock:
➢There are three ways to handle deadlock:
❖Deadlock prevention or avoidance (e.g. Banker's Algorithm): The idea is to never let
the system enter a deadlock state. The avoidance strategy requires advance
knowledge, such as the maximum resources each process may request.
❖Deadlock detection and recovery: Let deadlock occur, detect it, and then recover,
for example by preempting resources.
❖Deadlock ignorance (Ostrich Method): In this approach, the operating system
assumes that deadlock never occurs and simply ignores it. This approach is best
suited to single-user systems where deadlock is very rare: if it happens,
reboot the system.
Banker’s Algorithm:
The Banker's algorithm is used to avoid deadlock by allocating resources safely to each
process in the computer system.
➢It is also known as a deadlock-avoidance algorithm in the operating
system.
➢The algorithm is named after a banker who must decide whether a loan can be safely
sanctioned, which is exactly what the algorithm simulates when allocating resources.
❖Suppose the number of account holders in a particular bank is 'n', and the total money in
the bank is 'T'.
❖When an account holder applies for a loan, the bank first subtracts the loan amount from
the total cash and then checks whether the remaining cash is still enough to satisfy the
needs of all its account holders; only then does it approve the loan.
❖This lets the bank operate without ever reaching a state where it cannot meet its
obligations.
Banker’s Algorithm:
It is a banker algorithm used to avoid deadlock and allocate resources safely to each
process in the computer system.
➢To work, the Banker's algorithm needs to know three things:
❖How much of each resource each process may request at most.
✓This is denoted by the [MAX] request.
❖How much of each resource each process is currently holding.
✓This is denoted by the [ALLOCATED] resource.
❖How much of each resource is currently available in the system.
✓This is denoted by the [AVAILABLE] resource.
Banker’s Algorithm:
The characteristics of Banker's algorithm are as follows:
❖If a process requests a resource that cannot be safely granted, it has to wait.
❖The algorithm requires advance knowledge of each process's maximum resource
allocation.
❖The system has a limited number of resources.
❖If a process gets all the resources it needs, it must return
them within a restricted period.
❖The resources maintained in the system can always fulfill the needs of at
least one client.
➢The Banker's algorithm comprises two algorithms:
❖Safety algorithm
❖Resource-request algorithm
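The safety algorithm can be sketched as follows, using the [MAX], [ALLOCATED] and [AVAILABLE] quantities defined above. The function name, the matrix layout, and the sample numbers are the usual textbook illustration, not values from these slides:

```python
def is_safe(available, max_need, allocated):
    """Banker's safety check: return True if the system is in a safe state.

    available : free units of each resource type
    max_need  : max_need[i][j]  = max demand of process i for resource j
    allocated : allocated[i][j] = units currently held by process i
    """
    n, m = len(max_need), len(available)
    work = list(available)
    # Need = Max - Allocated: what each process may still request.
    need = [[max_need[i][j] - allocated[i][j] for j in range(m)]
            for i in range(n)]
    finished = [False] * n

    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and
                # releases everything it holds.
                for j in range(m):
                    work[j] += allocated[i][j]
                finished[i] = True
                progressed = True
    return all(finished)      # safe iff every process can finish

# Classic textbook instance: 5 processes, 3 resource types.
available = [3, 3, 2]
max_need  = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocated = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocated))   # True: a safe sequence exists
```

The resource-request algorithm then grants a request only if pretending to grant it still leaves `is_safe` returning True; otherwise the requesting process waits.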
Deadlock Prevention and Avoidance:
Prevention aims to eliminate the possibility of deadlocks altogether, whereas detection
and recovery lets deadlocks occur, detects them, and then recovers from them.
➢We can prevent a Deadlock by eliminating any of the above four conditions.
❖Eliminate Mutual Exclusion
❖Eliminate Hold and Wait
❖Eliminate No preemption
❖Eliminate Circular wait
Discussion
Queries ?
Prof. K. Adisesha
9449081542