Unit-2(Operating Systems)-Updated(Final).ppt


About This Presentation

Operating Systems powerpoint presentation, Unit-2.


Slide Content

Operating Systems
(AIML-301)
Unit - 2
by
Dr. Neelam Sharma
(MAIT, New Delhi)

•A process is a program in execution.
•A process is the unit of work in a system.
•Systems consist of a collection of processes:
Operating-system processes execute system code, and
User processes execute user code
All these processes may execute concurrently.
•A process needs certain resources - such as CPU time, memory, files, and
I/O devices - to accomplish its task.
Process

•Modern operating systems support processes that have multiple
threads.
•The operating system is responsible for the following activities in
connection with process and thread management:
The creation and deletion of both user and system processes;
The scheduling of processes; and
The provision of mechanisms for synchronization, communication, and
deadlock handling for processes.
Process (contd…)

•What should all these CPU activities be called?
A batch system executes jobs;
A time-shared system has user programs, or tasks.
The terms job and process are used almost interchangeably.
•Much of operating-system theory and terminology was developed
during a time when the major activity of operating systems was job
processing.
Process Concept

•Informally, a process is a program in execution.
•However, a process is more than the program
code. It includes:
Text Section (Program Code)
Stack Section (Temporary Data such as Function
Parameters, Return Addresses, and Local
Variables)
Data Section (Global Variables)
Heap (Memory that is Dynamically Allocated
during Process Run Time)
•Process also includes the current activity, as
represented by the value of the program counter
and the contents of the processor’s registers.
Process in Memory

•A program by itself is not a process.
•A program is a passive entity, such as a file containing a list of
instructions stored on disk (often called an executable file)
•A process is an active entity, with a program counter specifying the next
instruction to execute and a set of associated resources.
•A program becomes a process when an executable file is loaded into
memory.
•Two common techniques for loading executable files are double-clicking
an icon representing the executable file and entering the name of the
executable file on the command line (as in prog.exe or a.out.)
Process vs. Program

•Although two processes may be associated with the same program,
they are considered two separate execution sequences.
The same user may invoke many copies of the Web Browser program.
Each of these is a separate process.
Although the text sections are equivalent, the data, heap, and stack
sections vary.
Process Concept (contd…)

•As a process executes, it changes state.
•Each process may be in one of the following states:
New - The process is being created.
Running - Instructions are being executed.
Waiting - The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
Ready - The process is waiting to be assigned to a processor.
Terminated - The process has finished execution.
Process State

Process State (contd…)

•Each process is represented in the operating system by a
Process Control Block (PCB) – A Data Structure.
•A PCB contains a variety of information, including:
Process State (The state may be new, ready, running, etc.)
Program Counter (The counter indicates the address of the
next instruction to be executed for this process.)
CPU Registers (The registers vary in number and type,
depending on the computer architecture. They include
accumulators, index registers, stack pointers, etc.)
CPU-scheduling Information
Accounting Information (This information includes amount
of CPU and real time used, job/process numbers, etc.)
I/O Status Information (This information includes the list of
I/O devices allocated to the process, a list of open files, etc.)
Process Control Block

•The process control block in the Linux operating system is represented by the C
structure task_struct.
•All active processes are represented using a doubly linked list of task_struct,
and the kernel maintains a pointer - current - to the process currently executing
on the system.
Process Representation

•The objective of multiprogramming is to have some process running at all
times, to maximize CPU utilization.
•The objective of time sharing is to switch the CPU among processes so
frequently that users can interact with each program while it is running.
•To meet these objectives, the process scheduler selects an available process
(possibly from a set of several available processes) for execution on the
CPU.
•For a single-processor system, there will never be more than one running
process.
•If there are more processes, the rest will have to wait until the CPU is free
and can be rescheduled.
Process Scheduling

CPU Switching from Process to Process

•As processes enter the system, they are put into a job queue, which
consists of all processes in the system.
•The processes that are residing in main memory and are ready and
waiting to execute are kept on a list called the ready queue.
This queue is generally stored as a linked list.
A ready-queue header contains pointers to the first and final PCBs in the list.
Each PCB includes a pointer field that points to the next PCB in the ready queue.
•A process may have to wait for a shared device, such as a disk. The list
of processes waiting for a particular I/O device is called a device queue.
Each device has its own device queue.
Scheduling Queues

Ready Queue and various I/O Device Queues

Process Scheduling: Queuing Diagram
•Two types of queues are present: the ready queue and a set of device queues.
•The circles represent the resources that serve the queues.
•The arrows indicate the flow of processes in the system.

•A process migrates among the various scheduling queues throughout its
lifetime.
•The operating system selects processes from these queues with appropriate
scheduler.
•Often, in a batch system, more processes are submitted than can be executed
immediately.
These processes are spooled to a mass-storage device (typically a disk), where
they are kept for later execution.
The long-term scheduler, or job scheduler, selects processes from this pool and
loads them into memory for execution.
The short-term scheduler, or CPU scheduler, selects from among the processes
that are ready to execute and allocates the CPU to one of them.
Schedulers

•The short-term scheduler must select a new process for the CPU
frequently.
•Often, the short-term scheduler executes at least once every 100
milliseconds.
•Because of the short time between executions, the short-term
scheduler must be fast.
If it takes 10 milliseconds to decide to execute a process for 100 milliseconds, then
10/(100 + 10) = 9 percent of the CPU is being used (wasted) simply for scheduling
the work.
The short-term scheduler controls multitasking.
Short-term Scheduler

•The long-term scheduler executes much less frequently.
Minutes may separate the creation of one new process and the next.
•The long-term scheduler controls the degree of multiprogramming (the
number of processes in memory).
If the degree of multiprogramming is stable, then the average rate of process
creation must be equal to the average departure rate of processes leaving the
system.
Thus, the long-term scheduler may need to be invoked only when a process leaves
the system.
Because of the longer interval between executions, the long-term scheduler can
afford to take more time to decide which process should be selected for
execution.
Long-term Scheduler

•Most processes can be described as either I/O bound or CPU bound.
An I/O-bound process is one that spends more of its time doing I/O than it spends
doing computations.
A CPU-bound process, in contrast, generates I/O requests infrequently, using more
of its time doing computations.
•It is important that the long-term scheduler select a good process mix
of I/O-bound and CPU-bound processes.
If all processes are I/O bound, the ready queue will almost always be empty, and
the short-term scheduler will have little to do.
If all processes are CPU bound, the I/O waiting queue will almost always be empty,
devices will go unused, and again the system will be unbalanced.
The system with the best performance will thus have a combination of CPU-bound
and I/O-bound processes.
Long-term Scheduler (contd…)

•Some time-sharing systems may introduce an additional scheduler (medium-
term scheduler).
•Sometimes it can be advantageous to remove processes from memory, and thus
reduce the degree of multiprogramming.
•Later, the process can be reintroduced into memory, and its execution can be
continued where it left off.
•This scheme is called swapping.
•The process is swapped out, and is later swapped in, by the medium-term
scheduler.
•Swapping may be necessary to improve the process mix or because a change in
memory requirements has overcommitted available memory, requiring memory
to be freed up.
Medium-term Scheduler

Medium-term Scheduler
Addition of medium-term scheduling to the queuing diagram

•When an interrupt occurs, the system needs to save the current context
of the process running on the CPU so that it can restore that context
when its processing is done.
•Switching the CPU to another process requires performing a state save
of the current process and a state restore of a different process.
•This task is known as a context switch.
•The context is represented in the PCB of the process.
•Context-switch time is pure overhead.
•Context-switch times are highly dependent on hardware support.
Context Switch

•A process may create several new processes during the course of
execution.
The creating process is called a parent process.
The new processes are called the children of that process.
Each of these new processes may in turn create other processes, forming a
tree of processes.
•Most operating systems identify processes according to a unique process
identifier (or pid), which is typically an integer number.
•When a process creates a new process, two possibilities exist for execution:
The parent continues to execute concurrently with its children.
The parent waits until some or all of its children have terminated.
Operations on Processes: Creation

•There are also two possibilities for the address space of the new
process:
The child process is a duplicate of the parent process (it has the same
program and data as the parent).
The child process has a new program loaded into it.
•In UNIX, the fork() system call creates a new process that consists of a copy of the
address space of the original process. This mechanism allows the parent
process to communicate easily with its child process.
•In Windows, the CreateProcess() system call loads a specified program into the
address space of the child process at process creation.
UNIX systems implement this as a second step, using the exec() system call.
Process Creation (contd…)
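To make the sequence above concrete, here is a minimal UNIX sketch of fork(), exec(), and wait() (illustrative only: error handling is omitted, and /bin/ls is just an example program):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                        /* child gets a copy of the parent's address space */
    if (pid == 0) {
        execlp("/bin/ls", "ls", (char *)NULL); /* child: load a new program into its address space */
        exit(1);                               /* reached only if exec fails */
    } else if (pid > 0) {
        wait(NULL);                            /* parent waits until the child terminates */
        printf("Child complete\n");
    }
    return 0;
}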

•A process terminates when it finishes executing its final statement and asks
the operating system to delete it by using an appropriate system call (such as exit()).
•At that point, the process may return a status value (typically an integer) to its
parent process.
•All the resources of the process - including physical and virtual memory, open
files, and I/O buffers - are deallocated by the operating system.
•Termination can occur in other circumstances as well.
A process can cause the termination of another process via an appropriate system
call.
o Usually, such a system call can be invoked only by the parent of the process that is to be terminated.
Operations on Processes: Termination

•A parent may terminate the execution of one of its children for a variety of
reasons:
The child has exceeded its usage of some of the resources that it has been
allocated.
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.
•Some systems do not allow a child to exist if its parent has terminated.
In such systems, if a process terminates (either normally or abnormally), then all its
children must also be terminated.
•In UNIX, however, if a parent terminates, its children are assigned the init process
as their new parent. Thus, the children still have a parent to collect their status
and execution statistics.
Process Termination (contd…)

•A zombie process is also known as a "dead" process.
Ideally, when a process completes its execution, its entry should be removed from the
process table, but this does not happen in the case of a zombie process.
•Analogy: A zombie, in mythology, is a dead person revived physically. Similarly, a zombie process
in an operating system is a dead process (it has completed its execution) but is still "revived"
(its entry is still present in the process table).
Zombie Process in Operating System

•The wait() system call is used for the removal of zombie processes.
The wait() call makes the parent sit idle (not execute) until the child
process has completed; the child's process-table entry is then removed.
What happens with the Zombie Process
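As a hedged illustration of the idea above, the sketch below deliberately lets a zombie linger: the child exits immediately, but its process-table entry is removed only when the sleeping parent finally calls wait():

#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        exit(0);     /* child terminates at once and becomes a zombie */
    sleep(30);       /* parent is alive but has not reaped the child yet (visible with ps) */
    wait(NULL);      /* collects the child's status and removes its process-table entry */
    return 0;
}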

•A process that is executing (alive) but whose parent process has terminated
(is dead) is called an orphan process.
Orphan Process in Operating System

•A process is independent if it cannot affect or be affected by the other
processes executing in the system.
Any process that does not share data with any other process is independent.
•A process is cooperating if it can affect or be affected by the other
processes executing in the system.
Any process that shares data with other processes is a cooperating process.
Interprocess Communication

•Information Sharing
Several users may be interested in the same piece of information (for instance, a
shared file)
We must provide an environment to allow concurrent access to such information.
•Computation Speedup
We may break a task into subtasks, each of which will be executing in parallel with
the others.
Such a speedup can be achieved only with multiple processing elements.
•Modularity
Dividing the system functions into separate processes or threads.
•Convenience
An individual user may work on many tasks at the same time.
Reasons for Cooperation

•Cooperating processes require an Interprocess Communication (IPC)
mechanism.
IPC allows cooperating processes to exchange data and information.
•Models of Interprocess Communication:
Shared Memory
Message Passing
Interprocess Communication (contd…)

Models of Interprocess Communication
Message Passing Shared Memory

Communicating processes establish a region of shared memory.
A shared-memory region resides in the address space of the process that creates
the shared-memory region.
Other processes that wish to communicate using this shared-memory region must
attach it to their own address space.
Normally, the operating system tries to prevent one process from accessing
another process’s memory.
Shared memory requires that two or more processes agree to remove this
restriction.
The form of data and location are determined by cooperating processes and are
not under the operating system’s control.
The processes are also responsible for ensuring that they are not writing to the
same location simultaneously.
Shared-Memory System
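One common way to establish such a region is POSIX shared memory. The sketch below shows only the creating (producer) side; the object name "/u2demo" and the message are illustrative, and a consumer would shm_open() and mmap() the same name to read it:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    const char *name = "/u2demo";                          /* illustrative object name */
    const int SIZE = 4096;                                 /* size of the shared region */

    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);       /* create the shared-memory object */
    ftruncate(fd, SIZE);                                   /* set its size */
    char *ptr = mmap(0, SIZE, PROT_WRITE, MAP_SHARED, fd, 0); /* attach it to this address space */

    sprintf(ptr, "Hello from the producer");               /* write into the shared region */
    return 0;
}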

Producer–Consumer Problem: A Paradigm for Cooperating Processes
A producer process produces information that is consumed by a consumer
process.
A Web server produces HTML files and images, which client computers consume
through the browser.
A compiler produces assembly code, which is consumed by an assembler.
•The use of shared memory is one solution to the producer-consumer problem.
A buffer is maintained in the shared memory region, that is filled by a producer
and emptied by the consumer.
A producer can produce one item while the consumer is consuming another item.
The producer and consumer must be synchronized, so that the consumer does not try
to consume an item that has not yet been produced.
Shared-Memory System (contd…)

Two types of buffers can be used:
Unbounded Buffer
o The consumer may have to wait for a new item; however, there is no restriction on how many items the producer can produce.
Bounded Buffer
o Fixed buffer size.
o If the buffer is empty, the consumer must wait for a new item.
o When the buffer is full, the producer must wait until it can place new items.
o A bounded buffer can be implemented as a circular queue over an array, as sketched below.
Shared-Memory System (contd…)
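A sketch of the bounded buffer as a circular queue over an array, in the usual shared-memory style (BUFFER_SIZE and the item type are placeholders; in a real program buffer, in, and out would all live in the shared region):

#define BUFFER_SIZE 10
typedef int item;                             /* placeholder item type */

item buffer[BUFFER_SIZE];                     /* circular buffer */
int in = 0;                                   /* next free slot */
int out = 0;                                  /* next full slot */

void produce(item next_produced)
{
    while (((in + 1) % BUFFER_SIZE) == out)
        ;                                     /* buffer full: do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

item consume(void)
{
    while (in == out)
        ;                                     /* buffer empty: do nothing */
    item next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return next_consumed;
}

With this scheme the buffer can hold at most BUFFER_SIZE - 1 items, since in == out is reserved to mean "empty".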

•Message passing is useful in a distributed environment, where the communicating processes
may reside on different computers connected by a network.
•A message-passing facility provides at least two operations: send(message)
and receive(message).
•If processes P and Q want to communicate, they must send messages to and
receive messages from each other.
•A communication link exists between the communicating processes.
Logically, the link can be implemented with several methods:
o Direct or Indirect Communication
o Synchronous or Asynchronous (Blocking or Non-blocking) Communication
o Automatic or Explicit Buffering
Message-Passing Systems
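The send() and receive() primitives above are abstract. As one concrete, hedged stand-in, the sketch below passes a single message from a parent (sender) to its child (receiver) through an ordinary UNIX pipe:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char msg[] = "hello";
    char buf[16];

    pipe(fd);                              /* fd[0] is the read end, fd[1] the write end */
    if (fork() == 0) {                     /* child acts as the receiver */
        close(fd[1]);
        read(fd[0], buf, sizeof(buf));     /* receive(message) */
        printf("received: %s\n", buf);
        return 0;
    }
    close(fd[0]);
    write(fd[1], msg, strlen(msg) + 1);    /* send(message) */
    wait(NULL);
    return 0;
}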

•Each process that wants to communicate must explicitly name the recipient or
sender of the communication.
send(P, message) - Send a message to process P.
receive(Q, message) - Receive a message from process Q.
•Properties of Communication Link:
A link is established between every pair of processes that want to communicate.
The processes need to know only each other’s identity to communicate.
A link is associated with exactly two processes.
Between each pair of processes, there exists exactly one link.
•Changing the identifier of a process may necessitate examining all other
process definitions.
Direct Communication

•The messages are sent to and received from mailboxes or ports.
A mailbox can be viewed abstractly as an object into which messages can
be placed by processes and from which messages can be removed.
Each mailbox has a unique identification.
A process can communicate with some other process via a number of
different mailboxes.
Two processes can communicate only if the processes have a shared
mailbox.
o send(A, message) - Send a message to mailbox A.
o receive(A, message) - Receive a message from mailbox A.
Indirect Communication

•Properties of Communication Link:
A link is established between a pair of processes only if both members of the pair
have a shared mailbox (port).
A link may be associated with more than two processes.
Between each pair of communicating processes, there may be a number of
different links, with each link corresponding to one mailbox.
•A mailbox may be owned either by a process or by the operating system.
Suppose that processes P1, P2, and P3 all share mailbox A. Process P1 sends a
message to A, while both P2 and P3 execute a receive() from A. Which process will
receive the message sent by P1?
The answer depends on the scheme that we choose:
(a) Allow a link to be associated with at most two processes.
(b) Allow at most one process at a time to execute a receive() operation.
(c) Allow the system to select arbitrarily which process will receive the message.
Indirect Communication (contd…)

•Communication between processes takes place through calls to send() and
receive() primitives.
•There are different design options for implementing each primitive.
•Message passing may be either blocking or non-blocking - also known as
synchronous and asynchronous.
Blocking send - The sending process is blocked until the message is received
by the receiving process or by the mailbox.
Non-blocking send - The sending process sends the message and resumes
operation.
Blocking receive - The receiver blocks until a message is available.
Non-blocking receive - The receiver retrieves either a valid message or a null.
Synchronization

•Whether communication is direct or indirect, messages exchanged by
communicating processes reside in a temporary queue.
•Such queues can be implemented in three ways:
Zero Capacity - This queue cannot keep any message waiting in it. For this, a
sending process must be blocked until the receiving process receives the
message. It is also known as no buffering.
Bounded Capacity - This queue has finite length n. Thus it can have n
messages waiting in it. If the queue is not full, new message can be placed in
the queue, and a sending process is not blocked. It is also known as automatic
buffering.
Unbounded Capacity - The queue’s length is potentially infinite; thus, any
number of messages can wait in it. The sender never blocks.
Buffering

Threads: Motivation
Many software packages that run on modern desktop PCs are
multithreaded.
An application typically is implemented as a separate process with
several threads of control.
A Web browser might have one thread that displays images or text while
another thread retrieves data from the network.
A word processor may have a thread for displaying graphics, another
thread for responding to keystrokes from the user, and a third thread for
performing spelling and grammar checking in the background.
A Web server runs as a single process and creates a separate thread for each
client request.
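A minimal POSIX-threads sketch of one process with two threads of control (compile with -pthread; the worker function is purely illustrative):

#include <stdio.h>
#include <pthread.h>

void *worker(void *arg)                      /* each thread runs this function */
{
    printf("thread %s running\n", (char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "A");  /* both threads share the process's */
    pthread_create(&t2, NULL, worker, "B");  /* address space and open files     */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}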

Single-threaded and Multithreaded Process

Benefits of Multithreading
Responsiveness
A multithreaded application may allow a program to continue running even if part
of it is blocked or is performing a lengthy operation.
A multithreaded web browser could allow user interaction in one thread while an
image was being loaded in another thread.
•Resource Sharing
Threads share the memory and the resources of the process to which they belong
by default.
The benefit of sharing code and data is that it allows an application to have several
different threads of activity within the same address space.
•Economy
Because threads share the resources of the process to which they belong, it is more
economical to create and context-switch threads than processes.

CPU Scheduling
The objective of multiprogramming is to have some process running at
all times.
A process is executed until it must wait, typically for the completion of
some I/O request.
When one process has to wait, the operating system takes the CPU away
from that process and gives the CPU to another process.
Every time one process has to wait, another process can take over use of
the CPU.

CPU-I/O Burst Cycle
Process execution consists of a cycle of
CPU execution and I/O wait.
Processes alternate between these two
states.
Process execution begins with a CPU
burst.
That is followed by an I/O burst, which
is followed by another CPU burst, then
another I/O burst, and so on.
•Eventually, the final CPU burst ends
with a system request to terminate
execution.

CPU-I/O Burst Cycle (contd…)
The durations of CPU bursts vary greatly from process to process and
from computer to computer.
However, they tend to follow a common frequency curve:
o This curve is generally characterized as exponential, with a large number of short CPU
bursts and a small number of long CPU bursts.
•An I/O-bound program typically
has many short CPU bursts.
•A CPU-bound program might have
a few long CPU bursts.

CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one
of the processes in the ready queue to be executed.
The selection process is carried out by the short-term scheduler (or CPU
scheduler).
The scheduler selects a process from the processes in memory that are
ready to execute and allocates the CPU to that process.
The ready queue is not necessarily a first-in, first-out (FIFO) queue.
The records in the queues are generally PCBs of the processes.

Circumstances for Scheduling Decisions
CPU-scheduling decisions may take place under the following circumstances:
1. When a process switches from the running state to the waiting state (for example, as the
result of an I/O request or an invocation of wait() for the termination of one of the child
processes).
2.When a process switches from the running state to the ready state (for example, when an
interrupt occurs).
3.When a process switches from the waiting state to the ready state (for example, at
completion of I/O).
4.When a process terminates.
For situations 1 and 4, there is no choice in terms of scheduling.
•A new process (if one exists in the ready queue) must be selected for execution.
•There is a choice, however, for situations 2 and 3.
When scheduling takes place only under circumstances 1 and 4, scheduling
scheme is called non-preemptive; otherwise, it is preemptive.

Preemptive and Non-Preemptive Scheduling
Non-Preemptive Scheduling
The CPU is allocated to the process until it terminates or switches to the waiting
state.
This scheduling method was used by Microsoft Windows 3.x.
Preemptive Scheduling
The CPU is allocated to a process for a limited time.
Windows 95 introduced preemptive scheduling, and all subsequent versions of
Windows operating systems have used preemptive scheduling.
The Mac OS X operating system for the Macintosh also uses preemptive
scheduling.

Preemptive vs. Non-Preemptive Scheduling
Preemptive: The CPU is allocated to a process for a limited time.
Non-Preemptive: The CPU is allocated to a process until it terminates or switches to the waiting state.

Preemptive: The processor can be preempted to execute a different process in the middle of the execution of the current process.
Non-Preemptive: Once the processor starts to execute a process, it must finish it before executing another; it cannot be paused in the middle.

Preemptive: CPU utilization is higher than with non-preemptive scheduling.
Non-Preemptive: CPU utilization is lower than with preemptive scheduling.

Preemptive: Waiting time and response time are lower.
Non-Preemptive: Waiting time and response time are higher.

Preemptive: If high-priority processes frequently arrive in the ready queue, low-priority processes may starve.
Non-Preemptive: If a process with a long CPU burst is running, processes with shorter bursts may starve.

Preemptive: Preemptive scheduling is flexible.
Non-Preemptive: Non-preemptive scheduling is rigid.

Dispatcher
The dispatcher (a component of CPU-scheduling function) gives control
of the CPU to the process selected by the short-term scheduler.
The functions of the dispatcher involve:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart the program
•The dispatcher should be as fast as possible, since it is invoked during
every process switch.
•The time taken by the dispatcher to stop one process and start another
running is known as the dispatch latency.

Scheduling Criteria
CPU Utilization
•We want to keep the CPU as busy as possible. CPU utilization can range
from 0 to 100 percent.
Throughput
•If the CPU is busy executing processes, then work is being done.
Throughput refers to the number of processes completed per time unit.
Turnaround Time
•The interval from the time of submission of a process to the time of
completion is called turnaround time.
•Turnaround time is the sum of the periods spent waiting to get into
memory, waiting in the ready queue, executing on the CPU, and doing I/O.

Scheduling Criteria (contd…)
Response Time
•In an interactive system, turnaround time may not be the best criterion.
•Response time is the time it takes to start responding, not the time it takes
to output the response.
•It is desirable to:
•Maximize CPU utilization and throughput
•Minimize turnaround time, waiting time and response time

Scheduling Algorithms
First-Come, First-Served Scheduling
Shortest-Job-First Scheduling
Priority Scheduling
Round-Robin Scheduling
Multilevel Queue Scheduling
Multilevel Feedback Queue Scheduling
Refer to PDF File
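As a small illustration of how these algorithms are compared, the sketch below computes waiting and turnaround times for First-Come, First-Served when all processes arrive at time 0 (the burst values 24, 3, 3 are the usual textbook example, giving an average waiting time of (0 + 24 + 27)/3 = 17):

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};                  /* illustrative CPU-burst times */
    int n = 3, start = 0;

    for (int i = 0; i < n; i++) {              /* processes are served in arrival order */
        int waiting = start;                   /* time spent waiting in the ready queue */
        int turnaround = start + burst[i];     /* completion time minus arrival time (0) */
        printf("P%d: waiting = %d, turnaround = %d\n", i + 1, waiting, turnaround);
        start += burst[i];
    }
    return 0;
}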

Multilevel Queue Scheduling
The processes can be classified into different groups where each group has its
own scheduling needs.
A common classification is:
Foreground (Interactive) Processes
Background Processes
These two types of processes have different requirements and so may have
different scheduling needs.
A multilevel queue scheduling algorithm partitions the ready queue into
several separate queues.
The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type.
Each queue has its own scheduling algorithm.

Multilevel Queue Scheduling (contd…)
Separate queues might be used for different categories of processes.

Multilevel Queue Scheduling (contd…)
Scheduling among the queues is commonly implemented as fixed-
priority preemptive scheduling.
Each queue has absolute priority over lower-priority queues.
For example, no process in the batch queue could run unless the queues for
system processes, interactive processes, and interactive editing processes were all
empty.
If an interactive editing process entered the ready queue while a batch process
was running, the batch process would be preempted.
•Another possibility is to time-slice among the queues.
Each queue gets a certain portion of the CPU time, which it can then schedule
among its various processes.

Multilevel Queue Scheduling: Example
Process      : P1   P2   P3   P4
Arrival Time : 0    0    0    10
Burst Time   : 4    3    8    5
Queue No.    : 1    1    2    1
The priority of Queue 1 is greater than that of Queue 2.
Queue 1 uses Round Robin (Time Quantum = 2) and Queue 2 uses First-Come, First-Served.
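A possible schedule for this example (assuming that the arrival of P4 in the higher-priority Queue 1 immediately preempts the Queue 2 process P3):

Gantt chart: P1 (0-2) | P2 (2-4) | P1 (4-6) | P2 (6-7) | P3 (7-10) | P4 (10-15) | P3 (15-20)

Completion times: P1 = 6, P2 = 7, P4 = 15, P3 = 20.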

Questions
•Three processes P1, P2, and P3 arrive at time zero. The total time spent by
the process in the system is 10ms, 20ms, and 30ms respectively. They
spent first 20% of their execution time in doing I/O and the rest 80% in
CPU processing. What is the percentage utilization of CPU using FCFS
scheduling algorithm?
•Three processes P1, P2, and P3 arrive at time zero. Their total execution
time is 10ms, 15ms, and 20ms respectively. They spent first 20% of their
execution time in doing I/O, next 60% in CPU processing and the last
20% again doing I/O. For what percentage of time was the CPU free?
Use Round robin algorithm with time quantum 5ms.

Multilevel Feedback Queue Scheduling
The multilevel feedback queue scheduling algorithm allows a process to
move between queues.
The idea is to separate processes according to the characteristics of their
CPU bursts.
If a process uses too much CPU time, it will be moved to a lower-priority
queue.
A process that waits too long in a lower-priority queue may be moved to a
higher-priority queue.

Multilevel Feedback Queue Scheduling
A process entering the ready queue is put in
Queue 0.
A process in Queue 0 is given a time
quantum of 8 milliseconds. If it does not
finish within this time, it is moved to the tail
of Queue 1.
If Queue 0 is empty, the process at the head
of Queue 1 is given a quantum of 16
milliseconds. If it does not complete, it is
preempted and is put into Queue 2.
Processes in Queue 2 are run on an FCFS
basis but are run only when Queues 0 and 1
are empty.

Process Synchronization: Background
•Both the producer and consumer routines are correct separately.
•They may not function correctly when executed concurrently.
Producer
Consumer

Process Synchronization: Background
An incorrect state may arise because both processes are allowed to
manipulate the variable counter concurrently.
A situation, where several processes access and manipulate the same
data concurrently and the outcome of the execution depends on the
particular order in which the access takes place, is called a race
condition.
In our example, to guard against the race condition, we need to ensure
that only one process at a time can manipulate the variable counter.
To make such a guarantee, the processes must be synchronized in some
way.
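For example (register names are illustrative), the single statements counter++ and counter-- are each compiled into three machine instructions, and an unlucky interleaving loses an update:

counter++ is implemented as:          counter-- is implemented as:
    register1 = counter                   register2 = counter
    register1 = register1 + 1             register2 = register2 - 1
    counter = register1                   counter = register2

One interleaving, starting with counter = 5:
    T0: producer   register1 = counter         (register1 = 5)
    T1: producer   register1 = register1 + 1   (register1 = 6)
    T2: consumer   register2 = counter         (register2 = 5)
    T3: consumer   register2 = register2 - 1   (register2 = 4)
    T4: producer   counter = register1         (counter = 6)
    T5: consumer   counter = register2         (counter = 4, although the correct value is 5)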

The Critical-Section Problem
Consider a system consisting of n processes {P0, P1, ..., Pn-1}.
Each process has a segment of code, called a critical section, in which
the process may be changing common variables, updating a table, etc.
When one process is executing in its critical section, no other process is
to be allowed to execute in its critical section.
No two processes are executing in their critical sections at the same time.

The Critical-Section Problem (contd…)
The solution of critical-section problem involves design of a protocol that the processes
can use to cooperate.
Each process must request permission to enter its critical section.
The section of code implementing this request is the entry section.
The critical section may be followed by an exit section.
The remaining code is the remainder section.

Solution of Critical-Section Problem
A solution to the critical-section problem must satisfy the following three
requirements:
Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
Progress - If no process is executing in its critical section and some processes
wish to enter their critical sections, then only those processes that are not
executing in their remainder sections should decide which will enter its critical
section next, in a finite time.
Bounded Waiting - After a process makes a request for getting into its critical
section, there is a limit for how many other processes can get into their critical
section, before this process's request is granted. So after the limit is reached,
system must grant the process permission to get into its critical section.

Critical Section Problem: Algorithm 1
This algorithm works only for two processes.
Given: int turn; (shared variable)

Process Pi:
turn = i;
do {
    while (turn != i);      <-- ENTRY SECTION
    CRITICAL SECTION
    turn = j;               <-- EXIT SECTION
    REMAINDER SECTION
} while (TRUE);

Critical Section Problem: Algorithm 1 (contd…)
This algorithm does not satisfy the progress requirement because there is strict
alternation between the processes.
Given: int turn; (shared variable)

Process Pi:
turn = i;
do {
    while (turn != i);
    CRITICAL SECTION
    turn = j;
    REMAINDER SECTION
} while (TRUE);

Process Pj:
turn = j;
do {
    while (turn != j);
    CRITICAL SECTION
    turn = i;
    REMAINDER SECTION
} while (TRUE);

Critical Section Problem: Algorithm 2
This algorithm also works only for two processes.
Given: boolean flag[2]; flag[0] = FALSE; flag[1] = FALSE;

Process Pi:
do {
    flag[i] = TRUE;
    while (flag[j]);        <-- ENTRY SECTION
    CRITICAL SECTION
    flag[i] = FALSE;        <-- EXIT SECTION
    REMAINDER SECTION
} while (TRUE);

Critical Section Problem: Algorithm 2 (contd…)
This algorithm can fail the progress requirement if both processes set their flags to
true and then both execute the while loop.
Given: boolean flag[2]; flag[0] = FALSE; flag[1] = FALSE;

Process Pi:
do {
    flag[i] = TRUE;
    while (flag[j]);
    CRITICAL SECTION
    flag[i] = FALSE;
    REMAINDER SECTION
} while (TRUE);

Process Pj:
do {
    flag[j] = TRUE;
    while (flag[i]);
    CRITICAL SECTION
    flag[j] = FALSE;
    REMAINDER SECTION
} while (TRUE);

Critical Section Problem: Peterson’s Solution
Peterson’s Solution is a classical software based solution to the critical
section problem.
Peterson’s algorithm is used to synchronize two processes.
Peterson’s solution requires the two processes to share two data items:
int turn;
boolean flag[2];
Variable turn indicates whose turn it is to enter its critical section.
o If turn == i, then process Pi is allowed to execute in its critical section.
The flag array is used to indicate if a process wants to enter its critical section.
o If flag[i] == true, it indicates that Pi wants to enter its critical section.

Structure of Process Pi in Peterson’s Solution

Structure of Processes Pi and Pj
•Peterson's algorithm satisfies the three requirements (mutual exclusion, progress,
bounded waiting) of a solution to the critical-section problem.

Given: boolean flag[2]; int turn; flag[0] = FALSE; flag[1] = FALSE;

Process Pi:
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] == TRUE && turn == j);
    CRITICAL SECTION
    flag[i] = FALSE;
    REMAINDER SECTION
} while (TRUE);

Process Pj:
do {
    flag[j] = TRUE;
    turn = i;
    while (flag[i] == TRUE && turn == i);
    CRITICAL SECTION
    flag[j] = FALSE;
    REMAINDER SECTION
} while (TRUE);

Limitations of Peterson’s Solution
•Works only for TWO Processes.
•Busy Waiting
Busy waiting, also known as spinning, or busy looping is a process synchronization
technique in which a process/task waits and constantly checks for a condition to be
satisfied before proceeding with its execution.

Waiting Approaches
•There are two general approaches to waiting in operating systems:
Busy Waiting - A process/task can continuously check for the condition to
be satisfied while consuming the processor.
Sleeping (Blocked Waiting or Sleep Waiting) - A process can wait without
consuming the processor. In such a case, the process/task is alerted or
awakened when the condition is satisfied.

Busy Waiting
•Busy looping is usually used to achieve mutual exclusion in operating
systems.
•Busy waiting can be inefficient because the looping procedure is a waste of
computer resources
•Although inefficient, busy waiting can be beneficial in mutual exclusion if the
waiting time is short and insignificant.
•Additionally, busy waiting is quick and simple to understand and implement.
---
•A workaround solution for the inefficiency of busy waiting that is
implemented in most operating systems is the use of a delay function.

Sleep Waiting
•In this case, the process/task is alerted or awakened when the condition is
satisfied.
•A delay function (sleep system call) places the process involved in busy
waiting into an inactive state for a specified amount of time.
•In this case, resources are not wasted as the process is “asleep”.
•After the sleep time has elapsed, the process is awakened to continue its
execution.
•If the condition is still not satisfied, the sleep time is incremented until the
condition can be satisfied.

Bakery Algorithm
Bakery Algorithm is a critical section solution for n processes.
Each process, wanting to enter critical section, gets a token number.
The token numbering scheme always generates numbers in increasing order
of enumeration; i.e., 1, 2, 3, 3, 4, 5, …
The process with the lowest token number enters the critical section first.
The algorithm preserves the first-come, first-served property.
If two processes have same token number, the process with lower
process id (PId) will enter the critical section.
The token number of process is set to 0 when it finishes execution.

Bakery Algorithm: Structure of Process Pi

Shared data:
boolean choosing[N] = {FALSE, ..., FALSE};
int number[N] = {0, ..., 0};

Process Pi:
do {
    choosing[i] = TRUE;
    number[i] = MAX(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] = FALSE;
    for (j = 0; j < n; j++) {
        while (choosing[j]);
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
    }
    CRITICAL SECTION
    number[i] = 0;
    REMAINDER SECTION
} while (TRUE);

Solution using Lock
do {
while (Lock != 0);
Lock = 1;
CRITICAL SECTION
Lock = 0;
REMAINDER SECTION
}while(TRUE);

Synchronization Hardware
Uniprocessor Environment
The critical-section problem could be solved if interrupts could be prevented
from occurring while a shared variable was being modified.
o This is often the approach taken by non-preemptive kernels.
Multiprocessor Environment
Disabling interrupts on a multiprocessor can be time consuming.
With special atomic hardware instructions, the critical-section problem can be solved:
o TestAndSet()
o Swap()

TestAndSet()
Definition of TestAndSet(): it is executed atomically (that is, as one uninterruptible unit).
If two TestAndSet() instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order.
Implementation of mutual exclusion with TestAndSet(): the algorithm satisfies the mutual-exclusion requirement, but does not satisfy the bounded-waiting requirement.
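The figures referred to above follow the standard textbook form; a sketch of both the definition and its use (lock is shared by all processes and initialized to FALSE):

boolean TestAndSet(boolean *target)    /* executed atomically by the hardware */
{
    boolean rv = *target;              /* remember the old value ...          */
    *target = TRUE;                    /* ... and set the lock                */
    return rv;
}

/* Mutual exclusion using TestAndSet(); shared: boolean lock = FALSE; */
do {
    while (TestAndSet(&lock))
        ;                              /* spin until the old value was FALSE  */
    CRITICAL SECTION
    lock = FALSE;                      /* release the lock                    */
    REMAINDER SECTION
} while (TRUE);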

Swap
Definition of Swap(): it is executed atomically (that is, as one uninterruptible unit).
Implementation of mutual exclusion with Swap(): the algorithm satisfies the mutual-exclusion requirement, but does not satisfy the bounded-waiting requirement.
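Again following the standard textbook form that the figures show, a sketch of Swap() and its use (lock is shared and initialized to FALSE; key is local to each process):

void Swap(boolean *a, boolean *b)      /* executed atomically by the hardware */
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

/* Mutual exclusion using Swap(); shared: boolean lock = FALSE; local: boolean key; */
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);             /* loop until lock was FALSE */
    CRITICAL SECTION
    lock = FALSE;                      /* release the lock */
    REMAINDER SECTION
} while (TRUE);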

Semaphore
Proposed by Dijkstra in 1965, a semaphore is a mechanism that can be used to
provide synchronization of tasks (to solve the critical-section problem).
A semaphore S is an integer variable that, apart from initialization, is accessed
only through two standard atomic operations: wait() and signal().
The initial value of S depends on how many processes are allowed in the critical section.

wait(S)
{
    while (S <= 0);
    S--;
}

signal(S)
{
    S++;
}

Note: the testing of the integer value of S (S <= 0), as well as its possible
modification (S--), must be executed without interruption.

Types of Semaphores
Binary Semaphore
•It is a special form of semaphore used for implementing mutual exclusion,
hence it is often called a Mutex Lock.
•A binary semaphore is initialized to 1 and only takes the values 0 and 1
during execution of a program.
•It can be used to deal with the critical-section problem for multiple
processes.

Types of Semaphores
•Counting Semaphore
Counting semaphores can be used to control access to a given resource
consisting of a finite number of instances.
It is initialized to the number of resources available.
Each process that wishes to use a resource performs a wait() operation on the
semaphore (decrementing the count).
When a process releases a resource, it performs a signal() operation
(incrementing the count).
When the count for the semaphore goes to 0, all resources are being used.
After that, processes that wish to use a resource will block until the count
becomes greater than 0.
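A hedged POSIX sketch of a counting semaphore guarding, say, three identical resource instances (compile with -pthread; the resource itself is left abstract):

#include <semaphore.h>

sem_t resources;                       /* counting semaphore */

void setup(void)
{
    sem_init(&resources, 0, 3);        /* three instances available initially */
}

void use_resource(void)
{
    sem_wait(&resources);              /* wait(): blocks while the count is 0 */
    /* ... use one instance of the resource ... */
    sem_post(&resources);              /* signal(): return the instance */
}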

Implementation of Semaphore
Mutual-Exclusion Implementation with Semaphores
•The main disadvantage of the semaphore definition given here is that it
requires busy waiting.
While a process is in its critical section, any other process that tries to
enter its critical section must loop continuously in the entry code.
do {
    wait(S);
    CRITICAL SECTION
    signal(S);
    REMAINDER SECTION
} while (TRUE);

wait(S) {
    while (S <= 0);
    S--;
}

signal(S) {
    S++;
}

Spinlock
•Busy waiting wastes CPU cycles that some other process might be able
to use productively.
•A semaphore that uses this busy-waiting mechanism is also called a spinlock,
because the process “spins” while waiting for the lock.
•The spinlock mechanism has an advantage in that no context switch is
required when a process waits on a lock.
When locks are expected to be held for short times, spinlocks are useful.
Spinlocks are often employed on multiprocessor systems where one thread
can “spin” on one processor while another thread performs its critical
section on another processor.

Solution of Spinlock (Busy Waiting)
•To overcome the need for busy waiting, we can modify the definition of
the wait() and signal() semaphore operations.
•When a process executes the wait() operation and finds that the
semaphore value is not positive, it must wait.
However, rather than engaging in busy waiting, the process can block itself.
The block operation places a process into a waiting queue associated with
the semaphore, and the state of the process is switched to the waiting
state. Then control is transferred to the CPU scheduler, which selects
another process to execute.

Solution of Spinlock (Busy Waiting)
•To overcome the need for busy waiting, we can modify the definition of
the wait() and signal() semaphore operations.
The block() operation
suspends the process that
invokes it. The wakeup(P)
operation resumes the
execution of a blocked process
P. These two operations are
provided by the operating
system as basic system calls.
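In this definition (following the usual textbook outline), a semaphore carries an integer value and a list of waiting processes; a sketch of wait() and signal() built on the block() and wakeup() calls described above:

typedef struct {
    int value;
    struct process *list;              /* processes waiting on this semaphore */
} semaphore;

void wait(semaphore *S)
{
    S->value--;
    if (S->value < 0) {
        /* add this process to S->list */
        block();                       /* suspend the calling process */
    }
}

void signal(semaphore *S)
{
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);                     /* move P to the ready queue */
    }
}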

Solution of Spinlock (Busy Waiting)
•In this blocking implementation, semaphore values may be negative.
•Semaphore values are never negative under the classical definition of
semaphores with busy waiting.
•If a semaphore value is negative, its magnitude is the number of
processes waiting on that semaphore.
•The list of waiting processes can be easily implemented by a link field in
each process control block (PCB).
•To ensure bounded waiting, a FIFO queue or any other queuing strategy
may be used.

Limitations of Semaphores
•Deadlocks
Consider a system consisting of two processes, P0 and P1, each accessing
two semaphores, S and Q, set to the value 1:
•Suppose that P0 executes wait(S)
and then P1 executes wait(Q).
•When P0 executes wait(Q), it must
wait until P1 executes signal(Q).
•Similarly, when P1 executes wait(S),
it must wait until P0 executes
signal(S).
•Since these signal() operations
cannot be executed, P0 and P1 are
deadlocked.

Limitations of Semaphores
•Indefinite Blocking or Starvation
Indefinite blocking may occur if we remove processes from the list
associated with a semaphore in LIFO (last-in, first-out) order.

Classic Problems of Synchronization
Bounded-Buffer Problem
Readers–Writers Problem
Dining-Philosophers Problem
Solution using Semaphore

Bounded-Buffer Problem
•The bounded-buffer problem is also known as the producer-consumer problem.
•Both the producer and consumer routines are correct separately.
•They may not function correctly when executed concurrently.
Producer
Consumer

Bounded-Buffer Problem - Solution
•Assume,
Buffer Size = n slots
semaphore mutex (for mutual exclusion)
mutex = 1
semaphore empty (number of empty slots in buffer)
empty = n
semaphore full (number of full slots in buffer)
full = 0

Bounded-Buffer Problem - Solution
Producer and consumer process structures:
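A sketch of the standard semaphore solution, using mutex = 1, empty = n, and full = 0 as defined above (the buffer operations themselves are left abstract):

Producer:
do {
    /* produce an item in next_produced */
    wait(empty);                       /* wait for a free slot   */
    wait(mutex);                       /* enter critical section */
    /* add next_produced to the buffer */
    signal(mutex);                     /* leave critical section */
    signal(full);                      /* one more full slot     */
} while (TRUE);

Consumer:
do {
    wait(full);                        /* wait for a full slot   */
    wait(mutex);                       /* enter critical section */
    /* remove an item from the buffer into next_consumed */
    signal(mutex);                     /* leave critical section */
    signal(empty);                     /* one more free slot     */
    /* consume the item in next_consumed */
} while (TRUE);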

Readers–Writers Problem
•Consider a situation where a shared resource (such as file or dataset) is
to be accessed by multiple concurrent processes.
•Some of these processes may want only to read the database, whereas
others may want to update (that is, to read and write) the database.
•We distinguish between these two types of processes - reader and
writer.
If two readers access the shared data simultaneously, no adverse effects will
result.
If a writer and some other process (either a reader or a writer) access the
database simultaneously, inconsistency may occur.
•This synchronization problem is referred to as the readers–writers problem.

Readers–Writers Problem - Solution
•Assume,
int readcount = 0
The readcount variable keeps track of how many processes are currently reading
the object.
semaphore mutex
mutex is used for mutual exclusion when the variable readcount is updated.
mutex = 1
semaphore wrt
wrt functions as a mutual-exclusion semaphore for the writers.
wrt = 1

Readers–Writers Problem - Solution
Writer and reader process structures:
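A sketch of the standard (first) readers-writers solution, using wrt, mutex, and readcount as defined on the previous slide:

Writer:
do {
    wait(wrt);                         /* exclusive access for the writer */
    /* ... writing is performed ... */
    signal(wrt);
} while (TRUE);

Reader:
do {
    wait(mutex);                       /* protect readcount */
    readcount++;
    if (readcount == 1)
        wait(wrt);                     /* the first reader locks out writers */
    signal(mutex);
    /* ... reading is performed ... */
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);                   /* the last reader lets writers in */
    signal(mutex);
} while (TRUE);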

Dining-Philosophers Problem

Dining-Philosophers Problem
•Consider a situation where five philosophers share a circular table surrounded by
five chairs, each belonging to one philosopher.
•They are in thinking-eating cycle.
•In the center of the table is a bowl of rice, and the table is laid with five single
chopsticks.
•A philosopher may pick up only one chopstick at a time. (Obviously, he/she
cannot pick up a chopstick that is already in the hand of a neighbor.)
•From time to time, a philosopher gets hungry and tries to pick up the two
chopsticks that are closest to him/her (the chopsticks between him/her and
his/her left and right neighbors).
•The problem is how a philosopher can obtain the two chopsticks (the one on
his/her left and the one on his/her right) without deadlock or starvation.

Dining-Philosophers Problem - Solution
•The Dining-Philosopher Problem is a simple representation of the need
to allocate several resources among several processes in a deadlock-free
and starvation-free manner.
•The problem can be solved by representing each chopstick with a
semaphore.
A philosopher tries to grab a chopstick by executing a wait() operation on
that semaphore.
He/she releases his/her chopsticks by executing the signal() operation on
the appropriate semaphores.
semaphore chopstick[5]; (each element initialized to 1)

Dining-Philosophers Problem - Solution
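The figure shows the standard structure of philosopher i; a sketch using the chopstick semaphores defined above:

/* Structure of philosopher i (0 <= i <= 4) */
do {
    wait(chopstick[i]);                /* pick up the left chopstick  */
    wait(chopstick[(i + 1) % 5]);      /* pick up the right chopstick */
    /* ... eat ... */
    signal(chopstick[i]);              /* put down the left chopstick  */
    signal(chopstick[(i + 1) % 5]);    /* put down the right chopstick */
    /* ... think ... */
} while (TRUE);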

Limitations of Solution
•Although this solution guarantees that no two neighbors are eating
simultaneously, it could create a deadlock.
Suppose that all five philosophers become hungry simultaneously and each grabs
her left chopstick.
All the elements of chopstick will now be equal to 0.
When each philosopher tries to grab her right chopstick, she will be delayed
forever.
•Possible Solutions:
Allow at most four philosophers to be sitting simultaneously at the table.
Allow a philosopher to pick up her chopsticks only if both chopsticks are available.