Process Synchronization in Operating Systems

jarmanjeetsinghpb786 · 84 slides · Aug 23, 2024

PROCESS SYNCHRONIZATION

Process synchronization is a technique used to coordinate processes that use shared data. There are two types of processes in an operating system:

Independent Process – A process that neither affects nor is affected by any other process during its execution. Example: a process that does not share any variable, database, or file with another process.

Cooperating Process – A process that affects or is affected by other processes during its execution. Example: processes that share a file, variable, or database.

Process synchronization is mainly needed for cooperating processes, because concurrent access to shared data can result in data inconsistency. It matters most in a multi-process system where several processes run together and more than one of them tries to access the same shared resource or data at the same time. For example, if process A changes the data in a memory location while process B is reading from the same location, there is a high probability that the data read by B will be erroneous.

Sections of a Program
With respect to the critical section, a program has four essential parts:

Entry Section: The part of the process that requests permission to enter the critical section.
Critical Section: The segment of code in which the process changes common variables, updates a table, writes to a file, and so on. When one process is executing in its critical section, no other process is allowed to execute in its critical section.
Exit Section: The code through which a finishing process leaves the critical section; it allows the processes waiting in the entry section to enter.
Remainder Section: All other code, not belonging to the entry, critical, or exit sections.

Here is an example of a race condition. The current balance is Rs.5000. Person A calls withdraw(Rs.1000) and reads the current balance of Rs.5000. Before the new balance is stored, person B calls deposit(Rs.1000) and also reads the current balance of Rs.5000. A's program then sets the shared variable Current Balance to Rs.4000, after which B's program sets it to Rs.6000. With one withdrawal of Rs.1000 and one deposit of Rs.1000, the balance should still be Rs.5000, but it is Rs.6000. Such a situation is called a race condition and is handled by process synchronization.

RACE CONDITION: When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place, so the output or the value of the shared variable may be wrong. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction; proper process synchronization using locks or atomic variables also prevents them.
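The lost-update scenario above can be reproduced in Python's threading module (used here purely as an illustration; the slide itself is language-neutral). A barrier forces both threads to read the old balance before either writes, making the race deterministic, and a lock then fixes it:

```python
import threading

# --- unsafe version: the lost-update race, made deterministic with a barrier ---
balance = 5000
barrier = threading.Barrier(2)

def unsafe_update(delta):
    global balance
    old = balance            # read the shared balance
    barrier.wait()           # force both threads to read before either writes
    balance = old + delta    # write back: one of the two updates is lost

t1 = threading.Thread(target=unsafe_update, args=(-1000,))
t2 = threading.Thread(target=unsafe_update, args=(+1000,))
t1.start(); t2.start(); t1.join(); t2.join()
unsafe_result = balance      # 4000 or 6000, never the correct 5000

# --- safe version: a lock makes the read-modify-write atomic ---
balance = 5000
lock = threading.Lock()

def safe_update(delta):
    global balance
    with lock:               # only one thread at a time runs this section
        balance += delta

t3 = threading.Thread(target=safe_update, args=(-1000,))
t4 = threading.Thread(target=safe_update, args=(+1000,))
t3.start(); t4.start(); t3.join(); t4.join()
print(unsafe_result, balance)
```

The unsafe run ends at Rs.4000 or Rs.6000 depending on which write lands last, while the locked run always ends at the correct Rs.5000.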

Critical Section
A critical section is a section of the program in which a process accesses shared resources during its execution.

The Critical-Section Problem: Every process has a segment of code known as its critical section, in which it may change common variables, update tables, write files, and so on. The key property of the critical section is that when one process is executing in its critical section, no other process may execute in its critical section. Each process must request permission before entering its critical section; the code implementing this request is the entry section, the code at the end is the exit section, and the remaining code is the remainder section. In semaphore-based solutions, entry to the critical section is handled by the wait() operation and exit by the signal() operation.

Requirements of Synchronization Mechanisms / Critical-Section Solutions

1. Mutual Exclusion – The mechanism must ensure that processes access the critical section in a mutually exclusive manner: only one process is inside the critical section at any time, and no other process can enter until the process already inside completes.

2. Progress – If no process is executing in the critical section and other processes are waiting to enter, then only those processes that are not executing in their remainder section may participate in deciding which will enter next, and this selection cannot be postponed indefinitely. In particular: a process can freely enter the critical section if no other process is inside it, a process enters only if it wants to, and a process is never forced to enter when it does not want to. This keeps system utilization high.

3. Bounded Waiting – The wait of a process to enter the critical section is bounded: after a process makes a request to enter its critical section, there is a limit on how many times other processes may enter their critical sections before this request is granted. Once the limit is reached, the system must grant the waiting process entry.

Note 1: Mutual exclusion and progress are mandatory criteria; every synchronization mechanism must satisfy them.
Note 2: Bounded waiting is an optional criterion, but it is recommended to satisfy it where possible.

Two-Process Solutions

Solution 1 (strict alternation): turn = 0 initially. turn == 0 means it is the turn of process P0 to enter the critical section; turn == 1 means it is the turn of process P1.

    P0:
    while (1) {
        while (turn != 0);   // trap
        // critical section
        turn = 1;            // exit
        // remainder section
    }

    P1:
    while (1) {
        while (turn != 1);   // trap
        // critical section
        turn = 0;            // exit
        // remainder section
    }

Each process gives the turn to the other, so mutual exclusion holds. Bounded waiting also holds, since the processes execute strictly turn-wise and each is guaranteed a chance; no process starves. However, progress fails: the processes are forced to enter the critical section alternately whether they want to or not, because if one process does not enter, the other never gets another chance to execute.

Solution 2 (interest flags): flag[0] = flag[1] = F initially (Boolean variables). flag[i] = True means process Pi is interested in entering the critical section; flag[i] = False means it is not.

    P0:
    while (1) {
        flag[0] = T;
        while (flag[1]);     // if flag[1] = T, do nothing
        // critical section
        flag[0] = F;         // exit
        // remainder section
    }

    P1:
    while (1) {
        flag[1] = T;
        while (flag[0]);     // if flag[0] = T, do nothing
        // critical section
        flag[1] = F;         // exit
        // remainder section
    }

Here progress holds, since a process is not forced to enter the critical section. But the solution suffers from deadlock if flag[0] = flag[1] = T at the same time, and it does not guarantee bounded waiting.
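Solution 1 can be simulated with two Python threads (a didactic sketch; the busy-wait relies on CPython's interpreter switching threads so each spin eventually observes the other thread's write). Because the alternation is strict, the order of critical-section entries is fully deterministic:

```python
import threading

ROUNDS = 5
turn = 0        # shared: whose turn it is to enter the critical section
trace = []      # order in which the processes entered the critical section

def proc(i):
    global turn
    for _ in range(ROUNDS):
        while turn != i:      # trap: busy-wait until it is my turn
            pass
        trace.append(i)       # critical section
        turn = 1 - i          # exit: hand the turn to the other process

t0 = threading.Thread(target=proc, args=(0,))
t1 = threading.Thread(target=proc, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(trace)   # strict alternation: [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

The trace shows exactly the weakness described above: P0 and P1 must take turns even if one of them has nothing to do.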

Solution 3: Peterson's Solution. Uses flag[0] = flag[1] = F and turn = 0.
(i) Boolean flag[i]: initialized to FALSE, since initially no process wants to enter the critical section. When process i wants to enter its critical section, it sets flag[i] to true.
(ii) int turn: indicates whose turn it is to enter the critical section. If turn == i, process i is allowed into its critical section.

    P0:
    while (1) {
        flag[0] = T;
        turn = 1;
        while (turn == 1 && flag[1] == T);   // trap
        // critical section
        flag[0] = F;                         // exit
        // remainder section
    }

    P1:
    while (1) {
        flag[1] = T;
        turn = 0;
        while (turn == 0 && flag[0] == T);   // trap
        // critical section
        flag[1] = F;                         // exit
        // remainder section
    }

Mutual exclusion is guaranteed, since at any time only one process can be in the critical section. Progress is guaranteed, since a process outside the critical section cannot block another process from entering. Bounded waiting is guaranteed, since every process gets a fair chance to enter the critical section.
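Peterson's algorithm can be exercised with two Python threads. This is a sketch only: the algorithm assumes sequentially consistent memory, which CPython's global interpreter lock happens to provide; on real hardware, compiler and CPU reordering break a naive implementation without memory fences. The shared counter comes out exact because mutual exclusion holds:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy-waits stay short

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # tie-breaker: whose turn it is to yield
count = 0               # shared counter, updated only inside the critical section
N = 2000

def proc(i):
    global turn, count
    other = 1 - i
    for _ in range(N):
        flag[i] = True                        # declare interest
        turn = other                          # politely give priority to the other
        while flag[other] and turn == other:
            pass                              # trap: busy-wait
        count += 1                            # critical section (read-modify-write)
        flag[i] = False                       # exit section

threads = [threading.Thread(target=proc, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 4000: no increment was lost
```

If the entry protocol were removed, the unprotected `count += 1` could lose updates under contention; with Peterson's protocol the total is always 2 × N.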

Semaphores
A generalized solution to the critical-section problem is provided by semaphores. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().
wait() was originally termed P (from the Dutch proberen, to test).
signal() was originally termed V (from verhogen, to increment).
Because wait() and signal() are atomic, if a process P is executing either wait() or signal(), no other process can preempt P until it finishes the operation.

Wait
The wait operation decrements the value of its argument S if S is positive. If S is zero or negative, the process busy-waits and no decrement is performed.

    wait(S) {
        while (S <= 0);
        S--;
    }

Working of wait(): For a semaphore protecting a critical section, the initial value is 1, meaning the resource is free and a process may enter the critical section. A value of 0 means some other process is in its critical section, so the current process must wait. wait() is called before the critical section: the process checks the value of S, and if it is less than or equal to 0, the process spins in the while loop and cannot leave wait() or enter the critical section. But if the value of S is 1, it exits the while loop, decrements S to 0, and enters the critical section.

Signal
The signal operation increments the value of its argument S.

    signal(S) {
        S++;
    }

Working of signal(): Once a process has finished its critical section, it calls signal(), which increments the value of S, thereby signalling a waiting process that it may enter its critical section.

Implementation with a semaphore:

    do {
        wait(mutex);
        // critical section
        signal(mutex);
        // remainder section
    } while (TRUE);

Usage of Semaphores
Semaphores are of two types: binary and counting. In a binary semaphore, the value of S can be only 0 or 1. Binary semaphores are also called mutex locks because they provide mutual exclusion; they are used when there is only one instance of a resource, with 1 meaning free and 0 meaning busy. A counting semaphore is used when there are multiple instances of the same resource. For example, with 5 printers there are 5 instances of the resource Printer, so 5 processes can print simultaneously; the initial value of the semaphore should therefore be 5, equal to the number of available instances.
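The printer example maps directly onto Python's `threading.Semaphore` (a counting semaphore); the bookkeeping below, with a hypothetical `print_job` worker, just verifies that the semaphore never admits more jobs than there are printer instances (two here, to keep the run short):

```python
import threading
import time

printers = threading.Semaphore(2)   # counting semaphore: two printer instances
state = threading.Lock()            # protects the bookkeeping below
active = 0                          # printers in use right now
max_active = 0                      # high-water mark of concurrent printing

def print_job(job_id):
    global active, max_active
    printers.acquire()              # wait(): blocks when both printers are busy
    with state:
        active += 1
        max_active = max(max_active, active)
    time.sleep(0.01)                # simulate printing
    with state:
        active -= 1
    printers.release()              # signal(): free a printer

jobs = [threading.Thread(target=print_job, args=(i,)) for i in range(6)]
for j in jobs:
    j.start()
for j in jobs:
    j.join()
print(max_active, active)   # max_active never exceeds 2; active is 0 at the end
```

Six jobs contend for two printers, yet the high-water mark stays at or below the semaphore's initial value — exactly the "number of instances" rule stated above.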

Counting Semaphores
This type of semaphore uses a count that allows a resource to be acquired and released numerous times. If the initial count is 0, the counting semaphore is created in the unavailable state. If the count is greater than 0, the semaphore is created in the available state, and the number of tokens it holds equals its count.

Binary Semaphores (also known as mutex locks, as they provide mutual exclusion)
Binary semaphores are quite similar to counting semaphores, but their value is restricted to 0 and 1: the wait operation proceeds only when the semaphore is 1, and the signal operation succeeds when the semaphore is 0. Binary semaphores are easier to implement than counting semaphores.

Problem with using semaphores for process synchronization: busy waiting
Suppose a process P1 is in its critical section and another process P2 also wants to enter. P2 calls wait() but gets stuck in the while loop, doing nothing except checking the condition again and again. This only wastes CPU cycles and performs no useful work: the CPU is kept busy while P2 waits for P1 to come out of the critical section, cycles that some other process might have used productively. Semaphores that busy-wait in this way are also referred to as spinlocks.

Solution to Busy Waiting
A sleep/wake solution is better than busy waiting: in the busy-waiting solution, a process keeps executing the while loop until it enters the critical section, whereas in the sleep/wake solution the process sleeps (goes into the waiting state) until another process exits the critical section.

block() – place the process invoking the operation on the appropriate waiting queue.
wakeup(P) – remove process P from the waiting queue and place it in the ready queue.

    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;   // the waiting queue
            block();
        }
    }

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;   // the waiting queue
            wakeup(P);
        }
    }
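The block()/wakeup() pseudocode above can be sketched in Python, using a `threading.Condition` to stand in for the waiting queue plus block()/wakeup(). This is a teaching sketch, not production code (Python already ships a semaphore); a negative value records how many processes are asleep:

```python
import threading

class BlockingSemaphore:
    """Sleep/wake semaphore sketch: a negative value records how many
    processes are blocked on the waiting queue (here, a Condition)."""

    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()  # plays the role of S->list + block()/wakeup()

    def wait(self):
        with self._cond:
            self._value -= 1
            if self._value < 0:
                self._cond.wait()    # block(): go to sleep instead of spinning

    def signal(self):
        with self._cond:
            self._value += 1
            if self._value <= 0:
                self._cond.notify()  # wakeup(P): move one sleeper to the ready queue

# usage: protect a shared counter with the blocking semaphore
sem = BlockingSemaphore(1)
count = 0

def worker():
    global count
    for _ in range(1000):
        sem.wait()
        count += 1      # critical section
        sem.signal()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 3000
```

No CPU cycles are burned spinning: a thread that finds the semaphore unavailable sleeps until a signal() wakes it.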

Classical Problems of Synchronization
- Bounded Buffer (Producer-Consumer) Problem
- Dining Philosophers Problem
- Readers-Writers Problem

Bounded Buffer Problem
What is the problem statement? There is a buffer of n slots, each capable of storing one unit of data. Two processes, a producer and a consumer, operate on the buffer: the producer tries to insert data into an empty slot, and the consumer tries to remove data from a filled slot. These two processes will not produce the expected output if they execute concurrently without coordination, so there must be a way to make the producer and consumer work independently yet correctly. Because the buffer pool has a maximum size, this is often called the bounded-buffer problem.

One solution uses semaphores:
- mutex, a binary semaphore used to acquire and release the lock on the buffer;
- empty, a counting semaphore whose initial value is the number of slots in the buffer, since initially all slots are empty;
- full, a counting semaphore whose initial value is 0.
At any instant, the current value of empty is the number of empty slots in the buffer, and full is the number of occupied slots.

The Producer Operation

    do {
        produce();
        wait(empty);     // wait until empty > 0, then decrement 'empty'
        wait(mutex);     // acquire lock
        append();        // perform the insert operation in a slot
        signal(mutex);   // release lock
        signal(full);    // increment 'full'
    } while (TRUE);

The producer first waits until there is at least one empty slot. It decrements the empty semaphore because there will now be one less empty slot, since the producer is going to insert data into one of them. It then acquires the lock on the buffer, so that the consumer cannot access the buffer until the producer completes its operation. After performing the insert, the lock is released and the value of full is incremented, because the producer has just filled a slot in the buffer.

The Consumer Operation

    do {
        wait(full);      // wait until full > 0, then decrement 'full'
        wait(mutex);     // acquire the lock
        take();          // perform the remove operation in a slot
        signal(mutex);   // release the lock
        signal(empty);   // increment 'empty'
        use();
    } while (TRUE);

The consumer waits until there is at least one full slot in the buffer. It decrements the full semaphore because the number of occupied slots will decrease by one after the consumer completes its operation. It then acquires the lock on the buffer, removes the data from one of the full slots, and releases the lock. Finally, the empty semaphore is incremented by 1, because the consumer has just removed data from an occupied slot, making it empty.

To summarize, consider a fixed-size buffer that can store n items, shared between a producer process that creates items (one at a time) and a consumer process that processes them (one at a time). A producer cannot produce unless there is an empty buffer slot to fill; a consumer cannot consume unless there is at least one produced item.

    Semaphore empty = N, full = 0, mutex = 1;

    process producer {
        while (true) {
            empty.acquire();
            mutex.acquire();
            // produce
            mutex.release();
            full.release();
        }
    }

    process consumer {
        while (true) {
            full.acquire();
            mutex.acquire();
            // consume
            mutex.release();
            empty.release();
        }
    }
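The scheme above runs as-is with Python semaphores. The names `append`/`take` from the slides become deque operations; with one producer and one consumer the items must come out in FIFO order, which the test below checks:

```python
import threading
from collections import deque

N = 4                                # buffer capacity (n slots)
ITEMS = 20
buffer = deque()
mutex = threading.Semaphore(1)       # binary semaphore: lock on the buffer
empty = threading.Semaphore(N)       # counts empty slots, initially N
full = threading.Semaphore(0)        # counts occupied slots, initially 0
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()              # wait(empty): need an empty slot
        mutex.acquire()              # wait(mutex): lock the buffer
        buffer.append(item)          # append(): insert into a slot
        mutex.release()              # signal(mutex)
        full.release()               # signal(full): one more occupied slot

def consumer():
    for _ in range(ITEMS):
        full.acquire()               # wait(full): need an occupied slot
        mutex.acquire()              # wait(mutex)
        consumed.append(buffer.popleft())  # take(): remove from a slot
        mutex.release()              # signal(mutex)
        empty.release()              # signal(empty): one more empty slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)   # [0, 1, ..., 19] in order; the buffer never overflows
```

Note the acquire order: counting semaphore first, then mutex. Reversing it (mutex before empty/full) could leave a process holding the lock while blocked, stalling the other process.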

Dining Philosophers Problem
The dining philosophers problem is another classic synchronization problem, used to evaluate situations where multiple resources must be allocated to multiple processes. It asks for an allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.

What is the problem statement? Five philosophers sit around a circular dining table with five chopsticks and a bowl of rice in the middle. At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses two chopsticks, one from his left and one from his right; when he wants to think, he puts both chopsticks back in their original places. A philosopher may pick up only one chopstick at a time and, obviously, cannot pick up a chopstick already in the hand of a neighbor. The dining philosophers problem is a simple representation of a large class of concurrency-control problems: the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

    // represent each chopstick with a semaphore
    Semaphore chopstick[] = new Semaphore[5];   // all initialized to 1

    process philosopher_i {
        while (true) {
            chopstick[i].acquire();           // pick up left chopstick
            chopstick[(i+1) % 5].acquire();   // pick up right chopstick
            // eat
            chopstick[i].release();           // put down left chopstick
            chopstick[(i+1) % 5].release();   // put down right chopstick
            // think
        }
    }

When a philosopher wants to eat the rice, he waits for the chopstick on his left and picks it up, then waits for the right chopstick to become available and picks it up too. After eating, he puts both chopsticks down. But if all five philosophers become hungry simultaneously and each picks up one chopstick, a deadlock occurs: each will wait for the other chopstick forever. Possible solutions:
- Allow a philosopher to pick up the chopsticks only if both the left and right chopsticks are available.
- Allow only four philosophers to sit at the table. Then, even if all four pick up one chopstick each, one chopstick remains on the table, so one philosopher can start eating and eventually two chopsticks become available. In this way, deadlock is avoided.

    do {
        wait( chopstick[i] );
        wait( chopstick[(i+1) % 5] );
        // EATING THE RICE
        signal( chopstick[i] );
        signal( chopstick[(i+1) % 5] );
        // THINKING
    } while (1);
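The "only four at the table" fix is easy to implement with one extra counting semaphore initialized to 4. A sketch in Python (the `room` semaphore is the addition; the rest follows the chopstick code above), with every philosopher eating a fixed number of meals so the run terminates:

```python
import threading

N = 5
MEALS = 10
chopstick = [threading.Semaphore(1) for _ in range(N)]   # one semaphore per chopstick
room = threading.Semaphore(N - 1)    # at most four philosophers reach for chopsticks
meals = [0] * N

def philosopher(i):
    for _ in range(MEALS):
        room.acquire()                        # sit down (max 4 at the table)
        chopstick[i].acquire()                # pick up left chopstick
        chopstick[(i + 1) % N].acquire()      # pick up right chopstick
        meals[i] += 1                         # eat
        chopstick[i].release()                # put down left chopstick
        chopstick[(i + 1) % N].release()      # put down right chopstick
        room.release()                        # leave the table and think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # [10, 10, 10, 10, 10]: everyone ate, no deadlock
```

Without the `room` semaphore, this program could hang forever when all five philosophers grab their left chopstick at once; with it, at least one philosopher can always complete a meal.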

Readers-Writers Problem
The readers-writers problem concerns an object, such as a file, shared between multiple processes. Some of these processes are readers, i.e. they only want to read the data from the object, and some are writers, i.e. they want to write into the object. The readers-writers problem is about managing synchronization so that the object's data is never corrupted. For example, if two readers access the object at the same time there is no problem; however, if two writers, or a reader and a writer, access the object at the same time, there may be problems. To handle this, a writer must get exclusive access to the object: while a writer is accessing the object, no other reader or writer may access it. Multiple readers, however, may access the object at the same time.

    Case     Process 1   Process 2   Allowed / Not Allowed
    Case 1   Writing     Writing     Not Allowed
    Case 2   Reading     Writing     Not Allowed
    Case 3   Writing     Reading     Not Allowed
    Case 4   Reading     Reading     Allowed

Three variables are used to implement the solution: mutex, wrt, and readcount.

    semaphore mutex, wrt;   // mutex ensures mutual exclusion when readcount is updated,
                            // i.e. when any reader enters or exits the critical section;
                            // wrt is used by both readers and writers
    int readcount;          // number of processes currently reading in the critical
                            // section; initially 0

Semaphore operations:
- wait(): decrements the semaphore value.
- signal(): increments the semaphore value.

Writer process: the writer requests entry to the critical section. If allowed, i.e. wait(wrt) returns, it enters and performs the write; if not, it keeps waiting. It then exits the critical section.

    do {
        wait(wrt);      // writer requests the critical section
        // performs the write
        signal(wrt);    // leaves the critical section
    } while (true);

Recall that a semaphore S is an integer variable that, apart from initialization, is accessed only through the two standard atomic operations wait and signal:

    wait(S) {
        while (S <= 0);
        S--;
    }

    signal(S) {
        S++;
    }

Reader process:

    static int readcount = 0;

    // entry section
    wait(mutex);
    readcount++;            // on each entry of a reader, increment readcount
    if (readcount == 1)
        wait(wrt);          // the first reader locks out writers
    signal(mutex);

    // READ THE FILE

    // exit section
    wait(mutex);
    readcount--;            // on every exit of a reader, decrement readcount
    if (readcount == 0)
        signal(wrt);        // the last reader lets writers in
    signal(mutex);

In this reader code, mutex and wrt are semaphores with an initial value of 1, while the readcount variable has an initial value of 0. Both semaphores are shared by the reader and writer code: mutex ensures mutual exclusion on readcount, and wrt handles the writing mechanism. readcount denotes the number of readers accessing the file concurrently. The moment readcount becomes 1, the first reader performs wait(wrt), decreasing its value by one; this means a writer is no longer allowed to access the file. On completing a read, each reader decrements readcount; when readcount becomes 0, the last reader performs signal(wrt), permitting a writer to access the file.
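The reader and writer protocols above translate directly into Python semaphores. The `violations` list and `active_writers` counter are instrumentation added here to check the invariant (no reader ever overlaps a writer); they are not part of the classic solution:

```python
import threading
import time

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # writers need exclusive access
readcount = 0
shared_value = 0                 # the "file"
active_writers = 0
violations = []                  # records any reader/writer overlap we observe

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:           # first reader locks out writers
        wrt.acquire()
    mutex.release()
    if active_writers:           # a writer running now would be a bug
        violations.append('reader overlapped a writer')
    time.sleep(0.001)            # READ THE FILE
    mutex.acquire()
    readcount -= 1
    if readcount == 0:           # last reader readmits writers
        wrt.release()
    mutex.release()

def writer():
    global shared_value, active_writers
    wrt.acquire()
    active_writers += 1
    shared_value += 1            # WRITE INTO THE FILE
    time.sleep(0.001)
    active_writers -= 1
    wrt.release()

threads = [threading.Thread(target=reader) for _ in range(6)] + \
          [threading.Thread(target=writer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_value, readcount, violations)   # 4 0 []
```

Note that this is the "first readers-writers" variant, which favors readers: a steady stream of readers can starve writers, since wrt is released only when readcount drops to 0.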

Code for the Writer Process

    wait(wrt);
    // WRITE INTO THE FILE
    signal(wrt);

If a writer wishes to access the file, it performs wait(wrt), which decrements wrt to 0, so no other writer can access the file. On completion of the writing job, the writer performs signal(wrt).

CASE 1: WRITING – WRITING → NOT ALLOWED. When two or more processes want to write, this is not allowed. Let us check that the code behaves accordingly.

Explanation: the initial value of the semaphore wrt is 1. Suppose two processes P0 and P1 want to write, and P0 enters the writer code first. The moment P0 enters, wait(wrt) decrements wrt by one, so wrt = 0, and P0 continues to WRITE INTO THE FILE. Now suppose P1 wants to write at the same time. Will it be allowed? P1 performs wait(wrt); since wrt is already 0, by the definition of wait it goes into an infinite loop (the trap), so P1 can never write while P0 is writing. Once P0 has finished, it performs signal(wrt), incrementing wrt to 1; if P1 now wants to write, it can, since wrt > 0. This proves that if one process is writing, no other process is allowed to write.

CASE 2: READING – WRITING → NOT ALLOWED. While one or more processes are reading the file, writing by another process is not allowed.

Initially mutex = 1 and readcount = 0. Suppose two processes P0 and P1 are in the system: P0 wants to read while P1 wants to write. P0 enters the reader code first:

    wait(mutex);          // decrements mutex, now mutex = 0
    readcount++;          // readcount = 1
    if (readcount == 1)   // evaluates to TRUE
        wait(wrt);        // decrements wrt to 0: while one or more readers are
                          // reading, no writer is allowed
    signal(mutex);        // increments mutex back to 1, so other readers may enter

P0 then continues to READ THE FILE. Suppose a writer now wants to enter its code: since the first reader executed wait(wrt), the value of wrt is 0, so the writer's wait(wrt) traps in an infinite loop and no writer is allowed. This proves that while a process is reading, no other process is allowed to write.

Now suppose P0 wants to stop reading and exit. The following sequence of instructions executes:

    wait(mutex);          // mutex = 0
    readcount--;          // readcount = 0: no one is currently reading
    if (readcount == 0)   // evaluates to TRUE
        signal(wrt);      // wrt = 1
    signal(mutex);        // mutex = 1

Now, if any writer wants to write, it can do so, since wrt > 0.

CASE 3: WRITING – READING → NOT ALLOWED.

The initial value of wrt is 1. Suppose two processes P0 and P1 are in the system: P0 wants to write while P1 wants to read. P0 enters the writer code first. The moment P0 enters, wait(wrt) decrements wrt to 0, and P0 continues to WRITE INTO THE FILE.

Now suppose P1 wants to read at the same time. Will it be allowed? P1 enters the reader code (initially mutex = 1 and readcount = 0):

    wait(mutex);          // mutex = 0
    readcount++;          // readcount = 1
    if (readcount == 1)   // evaluates to TRUE
        wait(wrt);        // wrt is already 0, so P1 traps in an infinite loop and
                          // cannot proceed: while a writer is writing, no reader
                          // is allowed

This proves that while a process is writing, no other process is allowed to read. The moment the writer finishes and exits, it executes signal(wrt), incrementing wrt to 1; P1 can now read, since wrt > 0.

CASE 4: READING – READING → ALLOWED.

Initially mutex = 1 and readcount = 0. Suppose three processes P0, P1, and P2 are in the system and all three want to read. P0 enters the reader code first:

    wait(mutex);          // mutex = 0
    readcount++;          // readcount = 1
    if (readcount == 1)   // evaluates to TRUE
        wait(wrt);        // wrt = 0: while one or more readers are reading, no
                          // writer is allowed
    signal(mutex);        // mutex = 1, so other readers may enter

P0 continues to READ THE FILE.

Now P1 wants to enter the reader code (current values: mutex = 1, readcount = 1):

    wait(mutex);          // mutex = 0
    readcount++;          // readcount = 2
    if (readcount == 1)   // evaluates to FALSE: the if block is skipped
    signal(mutex);        // mutex = 1, so other readers may enter

Now P0 and P1 both continue to READ THE FILE.

Now P2 wants to enter the reader code (current values: mutex = 1, readcount = 2):

    wait(mutex);          // mutex = 0
    readcount++;          // readcount = 3
    if (readcount == 1)   // evaluates to FALSE: the if block is skipped
    signal(mutex);        // mutex = 1, so other readers may enter

Now P0, P1, and P2 all READ THE FILE concurrently. If any writer wants to enter its code now, it cannot: the first reader P0 executed wait(wrt), so wrt is 0, and the writer's wait(wrt) traps in an infinite loop; no writer is allowed.

Now suppose P0 wants to leave the system (stop reading):

    wait(mutex);          // mutex = 0
    readcount--;          // readcount = 2
    if (readcount == 0)   // evaluates to FALSE: the if block is skipped
    signal(mutex);        // mutex = 1

Then P1 wants to leave the system (stop reading):

    wait(mutex);          // mutex = 0
    readcount--;          // readcount = 1
    if (readcount == 0)   // evaluates to FALSE: the if block is skipped
    signal(mutex);        // mutex = 1

Finally P2, the last reading process, wants to leave the system:

    wait(mutex);          // mutex = 0
    readcount--;          // readcount = 0
    if (readcount == 0) { // evaluates to TRUE: enter the if block
        signal(wrt);      // wrt = 1: P2 was the last reader, so on its way out it
                          // allows a writer to write
    }
    signal(mutex);        // mutex = 1

DEADLOCKS

Every process needs some resources to complete its execution, and resources are granted in a sequential order: the process requests a resource; the OS grants the resource if it is available, otherwise the process waits; the process uses the resource and releases it on completion.

A deadlock is a situation in which each of a set of processes waits for a resource that is assigned to another process in the set. None of the processes can execute, since the resource each one needs is held by another process that is itself waiting for some other resource to be released. Deadlock is a common problem in multiprocessing, where several processes share a specific type of mutually exclusive resource.

Example of deadlock: consider traffic crossing a one-lane bridge, where the bridge is the resource. When a deadlock happens, it can be resolved if one car backs up (preempting resources and rolling back). Several cars may have to back up if a deadlock occurs, so starvation is possible.

Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, while Process 2 is waiting for Resource 1. Coffman Conditions: A deadlock can arise only if the following four Coffman conditions hold simultaneously. Mutual Exclusion: There must be a resource that can be held by only one process at a time. In the diagram, there is a single instance of Resource 1 and it is held by Process 1 only. At least one resource must be held in a non-sharable mode; if any other process requests this resource, that process must wait for the resource to be released.

Hold and Wait: A process can hold multiple resources and still request more resources that are held by other processes. In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1. No Preemption: A resource cannot be taken from a process unless the process releases it; a resource cannot be preempted by force, only released voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be released when Process 1 relinquishes it voluntarily after its execution is complete.

Circular Wait: A process is waiting for the resource held by a second process, which is waiting for the resource held by a third process, and so on, until the last process is waiting for a resource held by the first process. This forms a circular chain. For example: Process 1 is allocated Resource 2 and is requesting Resource 1, while Process 2 is allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop. Formally, a set of processes { P0, P1, P2, ..., PN } must exist such that every P[i] is waiting for P[(i + 1) % (N + 1)].
Deadlock vs. Starvation:
In deadlock, no process proceeds; in starvation, low-priority processes remain blocked while high-priority processes proceed.
Deadlock is infinite waiting; starvation is long waiting, but not infinite.
Every deadlock is also starvation, but every starvation need not be a deadlock.
In deadlock, the requested resource is held by another blocked process; in starvation, the requested resource is continuously used by higher-priority processes.
Deadlock happens when mutual exclusion, hold and wait, no preemption and circular wait occur simultaneously; starvation occurs due to uncontrolled priority and resource management.

The various strategies for handling deadlock are: Deadlock Prevention: This strategy involves designing a system that violates one of the four necessary conditions for deadlock, which ensures that the system remains deadlock-free. The idea is to prevent a deadlock before it can occur: the system checks every request before granting it to make sure it cannot lead to a deadlock situation. The conditions may be attacked as follows. Mutual Exclusion: To violate this condition, all system resources would have to be usable in a shareable mode. In any system there are always some resources that are mutually exclusive by nature, so this condition cannot be violated in general. Shared resources such as read-only files do not lead to deadlocks, but unfortunately some resources, such as printers and tape drives, require exclusive access by a single process. Spooling: For a device like a printer, spooling can work. There is a memory area associated with the printer that stores the jobs from each process; the printer then collects the jobs and prints each one according to FCFS. With this mechanism a process does not have to wait for the printer: it can continue its work and collect the output later. Although spooling can be an effective way to sidestep mutual exclusion, it suffers from two problems: after some time a race condition may arise between processes for space in the spool, and we cannot force a resource to be used by more than one process at the same time, since that would be unfair and could cause serious performance problems. Therefore, mutual exclusion cannot practically be violated.

2. Hold and Wait- This condition can be violated in the following ways. Approach-01: A process must request all the resources it requires for execution up front; only once it has acquired all of them can it start executing. This ensures that the process does not hold some resources while waiting for others. The drawbacks of this approach are that it is less efficient, and that it is often not implementable, since it is not always possible to predict in advance which resources will be required during execution. Approach-02: A process is allowed to acquire whatever resources it desires at the current moment, and after acquiring them it starts its execution; but before making any new request, it must release all the resources it currently holds. This approach is efficient and implementable. Approach-03: A timer is set after the process acquires a resource; after the timer expires, the process must release the resource. 3. No Preemption- Preempting resource allocations can prevent this condition, when preemption is possible. One approach: if a process is forced to wait when requesting a new resource, then all resources previously held by this process are implicitly released (preempted), forcing the process to re-acquire the old resources along with the new resources in a single request, similar to the previous discussion.

Another approach: when a resource is requested and is not available, the system looks to see which other processes currently hold those resources and are themselves blocked waiting for some other resource. If such a process is found, some of its resources may be preempted and added to the list of resources for which the requesting process is waiting. Either of these approaches may be applicable for resources whose state is easily saved and restored, such as registers and memory, but they are generally not applicable to devices such as printers and tape drives. A process is allowed to forcefully preempt resources held by another process only if it is a high-priority process or a system process, and the victim process is in the waiting state. 4. Circular Wait- This condition can be violated by not allowing processes to wait for resources in a cyclic manner. A natural number is assigned to every resource, and each process may request resources only in increasing (or only in decreasing) order of resource number. If increasing order is followed and a process requires a lower-numbered resource, it must first release all resources with a larger number, and vice versa. This is the most practical and implementable approach; it may cause starvation, but it can never lead to deadlock.
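The resource-numbering rule for breaking circular wait can be sketched as follows. This is a minimal illustration, with Python locks standing in for resources and the list index playing the role of the assigned resource number (both are assumptions of the sketch, not from the text).

```python
# Breaking circular wait by always acquiring resources in increasing
# order of their assigned number (here: the index into the list).
import threading

resources = [threading.Lock() for _ in range(3)]

def acquire_in_order(*indices):
    """Acquire the requested resources in increasing index order.
    No process can then hold a higher-numbered resource while waiting
    for a lower-numbered one, so a cycle of waits cannot form."""
    ordered = sorted(indices)
    for i in ordered:
        resources[i].acquire()
    return ordered

def release_all(indices):
    # Release in reverse order of acquisition.
    for i in reversed(indices):
        resources[i].release()

# Two processes that both need resources 0 and 2 will both lock 0 first,
# so the "P1 holds R0, waits R2 / P2 holds R2, waits R0" cycle is impossible.
held = acquire_in_order(2, 0)
print(held)  # acquired as [0, 2] regardless of the order requested
release_all(held)
```

The same idea is used in real kernels as a global lock-ordering discipline.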

Deadlock Avoidance- This strategy maintains a set of data using which a decision is made whether to entertain a new request or not. If entertaining the new request would move the system into an unsafe state, the request is rejected. This strategy requires every process to declare its maximum requirement of each resource type in advance; the main challenge is predicting the requirements of processes before execution. Banker's Algorithm is an example of a deadlock avoidance strategy. Deadlock Detection and Recovery- This strategy involves waiting until a deadlock occurs; after it occurs, the system state is recovered. The main challenge with this approach is detecting the deadlock. Deadlock Ignorance- This strategy ignores deadlock and assumes that it does not exist. It is best suited to a single-user system where the user uses the machine only for browsing and other everyday tasks, and it avoids the extra overhead of deadlock handling. It is also called the Ostrich approach. There is always a tradeoff between correctness and performance: operating systems like Windows and Linux mainly focus on performance, and the performance of the system decreases if it runs a deadlock handling mechanism all the time. If deadlock happens only 1 time out of 100, it is unnecessary to use the deadlock handling mechanism continuously; in such systems, the user simply restarts the computer in the case of a deadlock.

Deadlock Avoidance: In deadlock avoidance, a request for a resource is granted only if the resulting state of the system does not cause a deadlock. The state of the system is continuously checked for safe and unsafe states. To avoid deadlocks, each process must tell the OS the maximum number of each resource it may request to complete its execution. The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never arise. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes. Safe State: When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state. The system is in a safe state if there exists a sequence <P1, P2, ..., Pn> of ALL the processes in the system such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i. That is: if Pi's resource needs are not immediately available, Pi can wait until all Pj have finished; when Pj is finished, Pi can obtain its needed resources, execute, return the allocated resources, and terminate; when Pi terminates, Pi+1 can obtain its needed resources, and so on. Basic facts: If a system is in a safe state ⇒ no deadlocks. If a system is in an unsafe state ⇒ possibility of deadlock. Avoidance ⇒ ensure that the system never enters an unsafe state.

In an unsafe state, the operating system cannot prevent processes from requesting resources in a way that leads to deadlock. Not every unsafe state is a deadlock, but an unsafe state may lead to one. Avoidance algorithms: for a single instance of each resource type, use a resource-allocation graph; for multiple instances of a resource type, use the banker's algorithm. Example 1: 12 instances of a resource. At time t0, P0 holds 5 (maximum 10), P1 holds 2 (maximum 4), P2 holds 2 (maximum 9); Available = 3 free instances. Is the system in a safe state? Can we find a safe sequence? Yes, the sequence <P1, P0, P2> is safe:
• P1 requests its maximum (currently has 2, so needs 2 more) and holds 4; 1 instance remains free
• P1 then releases all of its held resources, so there are 5 free
• Next, P0 requests its maximum (currently has 5, so needs 5 more) and holds 10; 0 remain free
• P0 releases all of its held resources, so there are 10 free
• Finally P2 requests its maximum of 9 (currently has 2, so needs 7 more), leaving 3 free, and then releases them all
Thus the sequence <P1, P0, P2> is safe: even in the worst case, each process in the sequence can request its maximum resources and then release them all for the next process in the sequence.
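The worst-case reasoning of Example 1 can be checked mechanically. A minimal sketch, with the holdings and maximum claims taken from the example (12 instances total, 3 free):

```python
# Checking candidate sequences for Example 1 (single resource type).
alloc = {"P0": 5, "P1": 2, "P2": 2}
maxes = {"P0": 10, "P1": 4, "P2": 9}
available = 3

def is_safe_sequence(order, alloc, maxes, available):
    """Simulate the worst case: each process in turn demands its full
    remaining need, finishes, and releases everything it held."""
    free = available
    for p in order:
        need = maxes[p] - alloc[p]
        if need > free:          # this process could get stuck
            return False
        free += alloc[p]         # finished: its allocation comes back
    return True

print(is_safe_sequence(["P1", "P0", "P2"], alloc, maxes, available))  # True
print(is_safe_sequence(["P0", "P1", "P2"], alloc, maxes, available))  # False: P0 needs 5, only 3 free
```

Note that `free += alloc[p]` is the net effect of granting the need and then receiving back the full maximum.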

Example 2: At time t1, process P2 requests and is given 1 more instance of the resource, so Available = 2 free instances. Is the system in a safe state? P1 can request its maximum (it currently holds 2 and needs 2 more) of 4, leaving 0 free; P1 then releases all its held resources, so Available = 4 free. Now neither P0 nor P2 can obtain its maximum (P0 needs 5 more, P2 needs 6 more, and there are only 4 free). Both would have to wait, so there could be a deadlock: the system is deemed unsafe. The mistake was granting P2 an extra resource at time t1; forcing P2 to wait until P0 or P1 released their resources would have avoided the potential deadlock. Resource Allocation Graph: A Resource Allocation Graph (RAG) represents the state of the system graphically. It contains the complete information about all processes that hold resources or are waiting for resources, answering questions such as: How many processes exist in the system? How many instances of each resource type exist? How many instances of each resource type are allocated? How many instances of each resource type are still available? How many instances of each resource type are held by each process? How many instances of each resource type does each process need for execution?

The resource allocation graph records all the processes that are holding or waiting for resources, together with all the instances of every resource, whether available or in use. Vertices are of two types: a process is represented by a circle, while a resource is represented by a rectangle. A resource can have more than one instance; each instance is represented by a dot inside the rectangle. Edges in the RAG are also of two types: an assignment edge (from a resource instance to the process holding it) and a request edge (from a process to the resource it is waiting for).

It gives the following information: there exist three processes in the system, namely P1, P2 and P3; there exist two resources, namely R1 and R2; there is a single instance of resource R1 and two instances of resource R2; process P1 holds one instance of R1 and is waiting for an instance of R2; process P2 holds one instance of R2 and is waiting for an instance of R1; process P3 holds one instance of R2 and is not waiting for anything. Using a Resource Allocation Graph, it can easily be detected whether the system is in a deadlock state or not. Rule-01: In a Resource Allocation Graph where all the resources are single instance, if a cycle is formed, the system is in a deadlock state; if no cycle is formed, the system is not in a deadlock state. Rule-02: In a Resource Allocation Graph where the resources are NOT all single instance, if a cycle is formed, the system may be in a deadlock state, and the Banker's Algorithm is applied to confirm whether it is or not; if no cycle is formed, the system is not in a deadlock state. The presence of a cycle is a necessary but not a sufficient condition for the occurrence of deadlock.
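Rule-01 (single-instance RAG: cycle means deadlock) can be sketched as a plain cycle search. The dictionary encoding below is an assumption of this sketch: a request edge goes process → resource, an assignment edge goes resource → process.

```python
# Deadlock detection for a single-instance RAG via DFS cycle detection.
def has_cycle(graph):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack:      # back edge: a cycle exists
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

# P1 holds R1 and waits for R2; P2 holds R2 and waits for R1 => cycle.
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
# P1 waits for R2, which P3 holds; P3 waits for nothing => no cycle.
cycle_free = {"P1": ["R2"], "R2": ["P3"], "R1": ["P1"]}
print(has_cycle(deadlocked), has_cycle(cycle_free))  # True False
```

For multi-instance resources this check only shows deadlock is possible (Rule-02); the banker's algorithm must confirm it.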

PRACTICE PROBLEMS BASED ON DETECTING DEADLOCK USING RAG
Find if the system is in a deadlock state; otherwise find a safe sequence.
Solution- Method-01: The given resource allocation graph is single instance with a cycle contained in it. Thus, the system is definitely in a deadlock state.
Method-02: Using the given resource allocation graph, we have-

Process      Allocation      Need
             R1    R2        R1    R2
P1           1     0         0     1
P2           0     1         1     0

Available = [ R1 R2 ] = [ 0 0 ]
There are no instances available currently, and both processes require a resource to execute. Therefore, neither process can execute and both keep waiting infinitely. Thus, the system is in a deadlock state.

Solution- The given resource allocation graph is multi instance with a cycle contained in it. So, the system may or may not be in a deadlock state. Using the given resource allocation graph, we have-

Process      Allocation      Need
             R1    R2        R1    R2
P1           1     0         0     1
P2           0     1         1     0
P3           0     1         0     0

Available = [ R1 R2 ] = [ 0 0 ]
Since process P3 does not need any resource, it executes. After execution, process P3 releases its resources. Then, Available = [ 0 0 ] + [ 0 1 ] = [ 0 1 ]. With the instances available currently, only the requirement of process P1 can be satisfied, so process P1 is allocated the requested resources. It completes its execution and then frees up the instances of resources held by it. Then, Available = [ 0 1 ] + [ 1 0 ] = [ 1 1 ]. With the instances available currently, the requirement of process P2 can be satisfied, so process P2 is allocated the requested resources. It completes its execution and then frees up the instances of resources held by it. Then, Available = [ 1 1 ] + [ 0 1 ] = [ 1 2 ]. There exists a safe sequence P3, P1, P2 in which all the processes can be executed. So, the system is in a safe state.

The given resource allocation graph is multi instance with a cycle contained in it. So, the system may or may not be in a deadlock state. Using the given resource allocation graph, we have-

Process      Allocation            Need
             R1    R2    R3        R1    R2    R3
P0           1     0     1         0     1     0
P1           1     1     0         1     0     0
P2           0     1     0         0     0     1
P3           0     1     0         2     0     0

Available = [ R1 R2 R3 ] = [ 0 0 1 ]
Step-01: With the instances available currently, only the requirement of process P2 can be satisfied. So, process P2 is allocated the requested resources. It completes its execution and then frees up the instances of resources held by it. Then- Available = [ 0 0 1 ] + [ 0 1 0 ] = [ 0 1 1 ]
Step-02: With the instances available currently, only the requirement of process P0 can be satisfied. So, process P0 is allocated the requested resources. It completes its execution and then frees up the instances of resources held by it. Then- Available = [ 0 1 1 ] + [ 1 0 1 ] = [ 1 1 2 ]
Step-03: With the instances available currently, only the requirement of process P1 can be satisfied. So, process P1 is allocated the requested resources. It completes its execution and then frees up the instances of resources held by it. Then- Available = [ 1 1 2 ] + [ 1 1 0 ] = [ 2 2 2 ]
Step-04: With the instances available currently, the requirement of process P3 can be satisfied. So, process P3 is allocated the requested resources. It completes its execution and then frees up the instances of resources held by it. Then- Available = [ 2 2 2 ] + [ 0 1 0 ] = [ 2 3 2 ]
Thus, there exists a safe sequence P2, P0, P1, P3 in which all the processes can be executed. So, the system is in a safe state.

Example: Let's consider 3 processes P1, P2 and P3, and two types of resources R1 and R2, with 1 instance of each resource. According to the graph, R1 is being used by P1, P2 is holding R2 and waiting for R1, and P3 is waiting for R1 as well as R2. The graph is deadlock-free since no cycle is formed in it. Deadlock Detection using RAG: If a cycle is formed in a resource allocation graph where every resource has a single instance, then the system is deadlocked. In a resource allocation graph with multi-instance resource types, a cycle is a necessary condition for deadlock but not a sufficient one. The following example contains three processes P1, P2, P3 and three resources R1, R2, R3, each with a single instance.

Banker’s Algorithm in Operating System: Banker's algorithm is a deadlock avoidance algorithm, named after the way a bank decides whether a loan can safely be granted. It tests, by simulating the allocation, whether resources can be safely allocated up to the predetermined maximum for all processes, and it checks the possible activity sequences before deciding whether an allocation should proceed. For example, suppose a bank has X account holders whose combined balance is G. When the bank processes a car loan, the software subtracts the loan amount from the total money the bank holds (G + fixed deposits + monthly income schemes + gold, etc.) and checks that the remainder is still at least G. The bank processes the car loan only if it would still have sufficient money even if all account holders withdrew G simultaneously. Whenever a new process is created, it must specify exactly the maximum number of instances of each resource type that it will ever need. Characteristics of Banker's Algorithm: if a process's request cannot be safely granted, the process has to wait; the algorithm reasons about the maximum resource claims in advance; there are limited resources in the system; if a process gets all the resources it needs, it is expected to return them within a finite period; and the resources maintained by the algorithm can always fulfill the needs of at least one process.

Let us assume that there are n processes and m resource types. Data structures used to implement the Banker’s Algorithm:
1. Available: an array of length m representing the number of available resources of each type. If Available[j] = k, then k instances of resource type Rj are available.
2. Max: an n x m matrix representing the maximum number of instances of each resource that each process can request. If Max[i][j] = k, then process Pi can request at most k instances of resource type Rj.
3. Allocation: an n x m matrix representing the number of resources of each type currently allocated to each process. If Allocation[i][j] = k, then process Pi is currently allocated k instances of resource type Rj.
4. Need: an n x m matrix indicating the remaining resource need of each process. If Need[i][j] = k, then process Pi may need k more instances of resource type Rj to complete its task. Need[i][j] = Max[i][j] - Allocation[i][j]
The Banker’s algorithm comprises two algorithms: the safety algorithm and the resource-request algorithm.
SAFETY ALGORITHM- The safety algorithm determines whether or not the system is in a safe state. The algorithm is as follows:
1. Let Work and Finish be vectors of length m and n, respectively. Initially, Work = Available and Finish[i] = false for i = 0, 1, ..., n - 1. That is, initially no process has finished and the number of available resources is given by the Available array.

2. Find an index i such that both Finish[i] == false and Need_i <= Work. If no such i exists, go to step 4. In other words, find an unfinished process whose needs can be satisfied by the available resources; if no such process exists, go to step 4.
3. Perform the following: Work = Work + Allocation_i; Finish[i] = true; go to step 2. When an unfinished process is found, it is assumed to run to completion and release its resources, and the loop repeats for the remaining processes.
4. If Finish[i] == true for all i, then the system is in a safe state: all processes can finish. This algorithm may require on the order of m x n² operations to determine whether a state is safe.
RESOURCE REQUEST ALGORITHM- The resource-request algorithm determines whether a request can be safely granted. Let Request_i be the request vector for process Pi; if Request_i[j] == k, then process Pi wants k instances of resource type Rj. When process Pi makes a request, the following actions are taken:
1. If Request_i <= Need_i, go to step 2; otherwise raise an error condition, since the process has exceeded its maximum claim.
2. If Request_i <= Available, go to step 3; otherwise Pi must wait, since the resources are not available.

3. Assume that the resources are assigned to process Pi and update the state:
Available = Available - Request_i;
Allocation_i = Allocation_i + Request_i;
Need_i = Need_i - Request_i;
If the resulting resource-allocation state is safe, the transaction is completed and process Pi is allocated its resources. If the new state is unsafe, Pi must wait for Request_i and the old resource-allocation state is restored.
Let us consider the following snapshot for understanding the banker's algorithm:

Process      Allocation         Max
             A    B    C        A    B    C
P0           1    1    2        4    3    3
P1           2    1    2        3    2    2
P2           4    0    1        9    0    2
P3           0    2    0        7    5    3
P4           1    1    2        1    1    2

Available = [ A B C ] = [ 2 1 0 ]
1. Calculate the content of the Need matrix.
2. Check whether the system is in a safe state.
3. Determine the total amount of each resource type.
The content of the Need matrix can be calculated as: Need = Max – Allocation

2. Let us now check for a safe state.
For process P0: Need = (3, 2, 1), Available = (2, 1, 0). Need <= Available is False, so the system moves to the next process.
For process P1: Need = (1, 1, 0), Available = (2, 1, 0). Need <= Available is True, so the request of P1 is granted. Available = Available + Allocation = (2, 1, 0) + (2, 1, 2) = (4, 2, 2) (new Available).
For process P2: Need = (5, 0, 1), Available = (4, 2, 2). Need <= Available is False, so the system moves to the next process.
For process P3: Need = (7, 3, 3), Available = (4, 2, 2). Need <= Available is False, so the system moves to the next process.
For process P4: Need = (0, 0, 0), Available = (4, 2, 2). Need <= Available is True, so the request of P4 is granted. Available = (4, 2, 2) + (1, 1, 2) = (5, 3, 4) (new Available).
Checking again for process P2: Need = (5, 0, 1), Available = (5, 3, 4). Need <= Available is True, so the request of P2 is granted. Available = (5, 3, 4) + (4, 0, 1) = (9, 3, 5) (new Available).
Checking again for process P3: Need = (7, 3, 3), Available = (9, 3, 5). Need <= Available is True, so the request of P3 is granted. Available = (9, 3, 5) + (0, 2, 0) = (9, 5, 5).
Checking again for process P0: Need = (3, 2, 1), Available = (9, 5, 5). Need <= Available is True, so the request of P0 is granted.
Safe sequence: <P1, P4, P2, P3, P0>. The system can allocate all needed resources to each process, so the system is in a safe state.
The total amount of resources = sum of the Allocation columns + Available = [8 5 7] + [2 1 0] = [10 6 7]
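The safety algorithm can be run on this same snapshot with a short Python sketch. One caveat: the greedy scan below restarts from P0 after each grant, so it discovers a different but equally valid safe sequence, <P1, P0, P2, P3, P4>; any safe sequence proves the state safe.

```python
# Banker's safety algorithm (steps 1-4) on the snapshot above.
allocation = [[1, 1, 2], [2, 1, 2], [4, 0, 1], [0, 2, 0], [1, 1, 2]]
maximum    = [[4, 3, 3], [3, 2, 2], [9, 0, 2], [7, 5, 3], [1, 1, 2]]
available  = [2, 1, 0]

def safe_sequence(allocation, maximum, available):
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work, finish, order = list(available), [False] * n, []
    while len(order) < n:
        for i in range(n):
            # Step 2: an unfinished process whose Need fits in Work.
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: Pi finishes and releases its allocation.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                order.append(i)
                break
        else:
            return None  # Step 4 failed: no runnable process, unsafe state
    return order

print(safe_sequence(allocation, maximum, available))  # [1, 0, 2, 3, 4]

# Total of each resource type = column sums of Allocation + Available.
total = [sum(allocation[i][j] for i in range(5)) + available[j] for j in range(3)]
print(total)  # [10, 6, 7]
```

Returning `None` corresponds to the state being unsafe.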

A system has 4 processes (A, B, C, D) and 5 allocatable resource types. The current allocation and maximum needs are as follows. If Available = [ 0 0 X 1 1 ], what is the smallest value of X for which this is a safe state?
Solution- Let us calculate the additional instances of each resource type needed by each process: Need = Maximum – Allocation

Process      Allocation           Maximum
A            1  0  2  1  1        1  1  2  1  3
B            2  0  1  1  0        2  2  2  1  0
C            1  1  0  1  1        2  1  3  1  1
D            1  1  1  1  0        1  1  2  2  1

Process      Need
A            0  1  0  0  2
B            0  2  1  0  0
C            1  0  3  0  0
D            0  0  1  1  1

Case-01: For X = 0, Available = [ 0 0 0 1 1 ]. With the instances available currently, the requirement of no process can be satisfied. So, for X = 0, no process is guaranteed to finish, which is an unsafe state.

Case-02: For X = 1, Available = [ 0 0 1 1 1 ].
Step-01: With the instances available currently, only the requirement of process D can be satisfied. So, process D is allocated the requested resources, completes its execution, and then frees up the instances of resources held by it. Then- Available = [ 0 0 1 1 1 ] + [ 1 1 1 1 0 ] = [ 1 1 2 2 1 ]
With the instances available now, the requirement of no remaining process can be satisfied. So, for X = 1, the state is unsafe.
Case-03: For X = 2, Available = [ 0 0 2 1 1 ].
Step-01: With the instances available currently, only the requirement of process D can be satisfied. So, process D is allocated the requested resources, completes its execution, and frees up its resources. Then- Available = [ 0 0 2 1 1 ] + [ 1 1 1 1 0 ] = [ 1 1 3 2 1 ]
Step-02: With the instances available currently, only the requirement of process C can be satisfied. So, process C is allocated the requested resources, completes its execution, and frees up its resources. Then- Available = [ 1 1 3 2 1 ] + [ 1 1 0 1 1 ] = [ 2 2 3 3 2 ]
Step-03: With the instances available currently, the requirements of both processes A and B can be satisfied. So, processes A and B are allocated the requested resources one by one; they complete their execution and then free up the instances of resources held by them. Then- Available = [ 2 2 3 3 2 ] + [ 1 0 2 1 1 ] + [ 2 0 1 1 0 ] = [ 5 2 6 5 3 ]
So, the system is in a safe state. Thus, the minimum value of X that ensures the system is in a safe state = 2.
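The case analysis above can be replayed as a small search. This sketch takes the Allocation and Maximum matrices as read from the problem and tests successive values of X with the standard banker's safety check:

```python
# Smallest X for which Available = [0, 0, X, 1, 1] is a safe state.
allocation = [[1, 0, 2, 1, 1],   # A
              [2, 0, 1, 1, 0],   # B
              [1, 1, 0, 1, 1],   # C
              [1, 1, 1, 1, 0]]   # D
maximum    = [[1, 1, 2, 1, 3],
              [2, 2, 2, 1, 0],
              [2, 1, 3, 1, 1],
              [1, 1, 2, 2, 1]]

def is_safe(available):
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work, finish = list(available), [False] * n
    progress = True
    while progress:                       # keep finishing runnable processes
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = progress = True
    return all(finish)                    # safe iff everyone can finish

x = next(x for x in range(10) if is_safe([0, 0, x, 1, 1]))
print(x)  # 2: the smallest safe value
```

For X = 0 and X = 1 the loop stalls (D alone, or no one, can finish), matching the case analysis.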

An operating system uses the banker’s algorithm for deadlock avoidance when managing the allocation of three resource types X, Y and Z to three processes P0, P1 and P2. The table given below presents the current system state: the Allocation matrix shows the current number of resources of each type allocated to each process, and the Max matrix shows the maximum number of resources of each type required by each process during its execution.

Process      Allocation         Max
             X    Y    Z        X    Y    Z
P0           0    0    1        8    4    3
P1           3    2    0        6    2    0
P2           2    1    1        3    3    3

There are 3 units of type X, 2 units of type Y and 2 units of type Z still available. The system is currently in a safe state. Consider the following independent requests for additional resources in the current state-
REQ1: P0 requests 0 units of X, 0 units of Y and 2 units of Z
REQ2: P1 requests 2 units of X, 0 units of Y and 0 units of Z
Which of the following is TRUE?
(A) Only REQ1 can be permitted
(B) Only REQ2 can be permitted
(C) Both REQ1 and REQ2 can be permitted
(D) Neither REQ1 nor REQ2 can be permitted

Answer: (B)
AVAILABLE: X = 3, Y = 2, Z = 2

         Max            Allocation
         X   Y   Z      X   Y   Z
P0       8   4   3      0   0   1
P1       6   2   0      3   2   0
P2       3   3   3      2   1   1

Now, if request REQ1 is permitted, the state would become:
AVAILABLE: X = 3, Y = 2, Z = 0

         Max            Allocation     Need
         X   Y   Z      X   Y   Z      X   Y   Z
P0       8   4   3      0   0   3      8   4   0
P1       6   2   0      3   2   0      3   0   0
P2       3   3   3      2   1   1      1   2   2

With the current availability, we can service the need of P1. The state would become:
AVAILABLE: X = 6, Y = 4, Z = 0

         Max            Allocation     Need
         X   Y   Z      X   Y   Z      X   Y   Z
P0       8   4   3      0   0   3      8   4   0
P1       6   2   0      3   2   0      0   0   0
P2       3   3   3      2   1   1      1   2   2

With the resulting availability, it is not possible to service the need of either P0 or P2, owing to the lack of resource Z. No safe sequence exists, so the resulting state is unsafe. ⇒ We cannot permit REQ1.

Now, at the given safe state, if we accept REQ2:
AVAILABLE: X = 1, Y = 2, Z = 2

         Max            Allocation     Need
         X   Y   Z      X   Y   Z      X   Y   Z
P0       8   4   3      0   0   1      8   4   2
P1       6   2   0      5   2   0      1   0   0
P2       3   3   3      2   1   1      1   2   2

With this availability, we service P1 (P2 could also be serviced). The state becomes:
AVAILABLE: X = 6, Y = 4, Z = 2

         Max            Allocation     Need
         X   Y   Z      X   Y   Z      X   Y   Z
P0       8   4   3      0   0   1      8   4   2
P1       6   2   0      5   2   0      0   0   0
P2       3   3   3      2   1   1      1   2   2

With the current availability, we service P2. The state becomes:
AVAILABLE: X = 8, Y = 5, Z = 3

         Max            Allocation     Need
         X   Y   Z      X   Y   Z      X   Y   Z
P0       8   4   3      0   0   1      8   4   2
P1       6   2   0      5   2   0      0   0   0
P2       3   3   3      2   1   1      0   0   0

Finally, we service P0. The state now becomes:
AVAILABLE: X = 8, Y = 5, Z = 4

         Max            Allocation     Need
         X   Y   Z      X   Y   Z      X   Y   Z
P0       8   4   3      0   0   1      0   0   0
P1       6   2   0      5   2   0      0   0   0
P2       3   3   3      2   1   1      0   0   0

The state so obtained is a safe state. ⇒ REQ2 can be permitted. So, only REQ2 can be permitted. Hence, (B) is the correct choice.
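The whole question can be replayed with a small sketch of the resource-request algorithm: tentatively grant the request, then run the safety check and roll back if the result is unsafe. The matrices are those of the problem.

```python
# Resource-request algorithm applied to REQ1 and REQ2.
allocation = [[0, 0, 1], [3, 2, 0], [2, 1, 1]]   # P0, P1, P2 (X, Y, Z)
maximum    = [[8, 4, 3], [6, 2, 0], [3, 3, 3]]
available  = [3, 2, 2]

def is_safe(alloc, avail):
    n, m = len(alloc), len(avail)
    need = [[maximum[i][j] - alloc[i][j] for j in range(m)] for i in range(n)]
    work, finish = list(avail), [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + alloc[i][j] for j in range(m)]
                finish[i] = progress = True
    return all(finish)

def can_grant(pid, request):
    """Pretend to allocate the request, then test safety (step 3)."""
    new_alloc = [row[:] for row in allocation]
    new_alloc[pid] = [new_alloc[pid][j] + request[j] for j in range(3)]
    new_avail = [available[j] - request[j] for j in range(3)]
    return is_safe(new_alloc, new_avail)

print(can_grant(0, [0, 0, 2]))  # REQ1: False, resulting state is unsafe
print(can_grant(1, [2, 0, 0]))  # REQ2: True
```

The two checks reproduce the answer (B): only REQ2 can be permitted.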

Memory Management in Operating System: Computer memory is a collection of data represented in binary format. Main memory refers to the physical, internal memory of the computer, also known as RAM. The computer can directly operate only on data that is in main memory; therefore, every program we execute and every file we access must be copied from a storage device into main memory. All programs are loaded into main memory for execution. Sometimes the complete program is loaded into memory, but sometimes a certain part or routine of the program is loaded into main memory only when it is called by the program; this mechanism is called Dynamic Loading, and it enhances performance. Also, at times one program is dependent on another program. In such a case, rather than loading all the dependent programs, the CPU links a dependent program to the main executing program when it is required. This mechanism is known as Dynamic Linking. Memory Management is the process of coordinating and controlling a computer's memory: assigning blocks of memory to the various running programs in order to optimize the overall performance of the system. It keeps track of each and every memory location, whether allocated to some process or free; decides which process gets memory at what time; keeps count of how much memory can be allocated to a process; and, since it tracks everything, updates the status whenever some memory is freed or unallocated.

Need for Memory Management in OS This technique helps place programs in memory in such a way that memory is utilized to its fullest extent. It helps protect processes from each other so that they do not interfere with each other's operations. It helps allocate space to different application routines. It decides how much memory should be allocated to each process and which process should get memory at what time. It keeps track of each memory location, whether it is free or allocated. Whenever memory gets freed or unallocated, it updates the status accordingly. Contiguous Memory Allocation It allows storing a process only in a contiguous fashion. Thus, the entire process has to be stored as a single entity at one place inside the memory.

Static Partitioning- Static partitioning is a fixed size partitioning scheme. In this technique, main memory is pre-divided into fixed size partitions. The size of each partition is fixed and cannot be changed. Each partition is allowed to store only one process. Example- Under a fixed size partitioning scheme, a memory of size 10 KB may be divided into a set of fixed size partitions. These partitions are allocated to the processes as they arrive. The partition allocated to an arriving process depends on the algorithm followed. Algorithms for Partition Allocation- First Fit Algorithm Best Fit Algorithm Worst Fit Algorithm

1. First Fit Algorithm- This algorithm scans the partitions serially from the start. When an empty partition that is big enough to store the process is found, it is allocated to the process. The partition size has to be greater than or at least equal to the process size. 2. Best Fit Algorithm- This algorithm first scans all the empty partitions. It then allocates the smallest partition that can hold the process. 3. Worst Fit Algorithm- This algorithm first scans all the empty partitions. It then allocates the largest partition to the process. Point-01: For static partitioning, the Best Fit Algorithm works best. This is because the space left inside the partition after allocation is very small. Thus, internal fragmentation is least. Point-02: For static partitioning, the Worst Fit Algorithm works worst. This is because the space left inside the partition after allocation is very large. Thus, internal fragmentation is maximum. Internal Fragmentation It occurs when space is left inside a partition after allocating the partition to a process. This space is called internally fragmented space. It cannot be allocated to any other process, because static partitioning allows only one process per partition. Internal fragmentation occurs only in static partitioning. External Fragmentation It occurs when the total amount of empty space required to store the process is available in main memory, but because the space is not contiguous, the process cannot be stored.
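The three allocation policies can be sketched as follows. This is an illustrative Python sketch (not from the slides): each function takes a list of free partition sizes and a process size, and returns the index of the chosen partition, or None if no partition fits:

```python
# First fit: first partition large enough, scanning from the start.
def first_fit(partitions, size):
    for i, p in enumerate(partitions):
        if p >= size:
            return i
    return None

# Best fit: smallest partition that still fits (least internal fragmentation).
def best_fit(partitions, size):
    candidates = [(p, i) for i, p in enumerate(partitions) if p >= size]
    return min(candidates)[1] if candidates else None

# Worst fit: largest partition (most internal fragmentation).
def worst_fit(partitions, size):
    candidates = [(p, i) for i, p in enumerate(partitions) if p >= size]
    return max(candidates)[1] if candidates else None

free = [100, 500, 200, 300, 600]   # hypothetical partition sizes in KB
print(first_fit(free, 212))        # 1 (500 KB, first big enough)
print(best_fit(free, 212))         # 3 (300 KB, smallest that fits)
print(worst_fit(free, 212))        # 4 (600 KB, largest)
```

For a 212 KB process, best fit leaves 88 KB unused inside the partition while worst fit leaves 388 KB, illustrating Point-01 and Point-02 above.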

Translating Logical Address into Physical Address- The CPU always generates a logical address. A physical address is needed to access the main memory. Step-01: The translation scheme uses two registers that are under the control of the operating system. During context switching, the values corresponding to the process being loaded are set in the registers. These two registers are- Relocation Register Limit Register The Relocation Register stores the base address or starting address of the process in main memory. The Limit Register stores the size or length of the process. Step-02: The CPU generates a logical address containing the address of the instruction that it wants to read. Step-03: The logical address generated by the CPU is compared with the limit of the process. Now, two cases are possible- Case-01: Generated Address >= Limit If the address is found to be greater than or equal to the limit, a trap is generated. This helps to prevent unauthorized access. Case-02: Generated Address < Limit The address must always lie in the range [0, limit-1]. If the address is found to be smaller than the limit, the request is treated as a valid request. Then, the generated address is added to the base address of the process. The result obtained after addition is the address of the memory location storing the required word.
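The relocation/limit check above can be sketched in a few lines. This is an illustrative sketch with hypothetical register values; using a Python exception to stand in for the hardware trap:

```python
# Base-and-limit translation: trap if the logical address is out of range,
# otherwise add the relocation (base) register to form the physical address.
def translate(logical_address, relocation, limit):
    if logical_address >= limit:
        raise MemoryError("trap: address out of range")  # stands in for the hardware trap
    return relocation + logical_address

# Process loaded at base 4000 with size 1000 (hypothetical values):
print(translate(350, 4000, 1000))   # valid: 4000 + 350 = 4350
```

A reference to logical address 1000 or beyond would raise the trap, since valid addresses lie in [0, 999].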

Advantages- It is simple and easy to implement. It supports multiprogramming, since multiple processes can be stored inside the main memory. Only one memory access is required, which reduces the access time. Disadvantages- It suffers from both internal fragmentation and external fragmentation. It utilizes memory inefficiently. The degree of multiprogramming is limited to the number of partitions. There is a limitation on process size, since a process larger than the largest partition cannot be stored and executed.

Non-Contiguous Memory Allocation- It allows storing parts of a single process in a non-contiguous fashion. Thus, different parts of the same process can be stored at different places in the main memory. There are two popular techniques used for non-contiguous memory allocation- Paging Segmentation Paging- Paging is a fixed size partitioning scheme. In paging, secondary memory and main memory are divided into equal fixed size partitions. The partitions of secondary memory are called pages. The partitions of main memory are called frames. Each process is divided into parts where the size of each part is the same as the page size. The pages of a process are stored in the frames of main memory depending upon their availability.

Translating Logical Address into Physical Address- The CPU always generates a logical address. A physical address is needed to access the main memory. Step-01: The CPU generates a logical address consisting of two parts- Page Number Page Offset The Page Number specifies the specific page of the process from which the CPU wants to read the data. The Page Offset specifies the specific word on the page that the CPU wants to read. Step-02: For the page number generated by the CPU, the Page Table provides the corresponding frame number (base address of the frame) where that page is stored in the main memory. Step-03: The frame number combined with the page offset forms the required physical address. The frame number specifies the specific frame where the required page is stored. The page offset specifies the specific word that has to be read from that page.
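The three steps above can be sketched as follows. This is an illustrative Python sketch with a hypothetical page table; a real MMU does the same split-lookup-combine in hardware:

```python
# Paging translation: split the logical address into (page, offset),
# look the page up in the page table, and combine frame number with offset.
PAGE_SIZE = 4096  # 4 KB, an assumed page size

page_table = {0: 5, 1: 2, 2: 7}   # hypothetical page number -> frame number map

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)   # Step-01
    frame = page_table[page]                            # Step-02: page table lookup
    return frame * PAGE_SIZE + offset                   # Step-03: physical address

print(translate(4100))   # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```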

For Main Memory- Physical Address Space = Size of main memory. Size of main memory = Total number of frames x Page size. Frame size = Page size. If the number of frames in main memory = 2^X, then the number of bits in the frame number = X bits. If the page size = 2^X bytes, then the number of bits in the page offset = X bits. If the size of main memory = 2^X bytes, then the number of bits in the physical address = X bits. For Process- Virtual Address Space = Size of process. Number of pages the process is divided into = Process size / Page size. If the process size = 2^X bytes, then the number of bits in the virtual address = X bits. For Page Table- Size of page table = Number of entries in page table x Page table entry size. Number of entries in page table = Number of pages the process is divided into. Page table entry size = Number of bits in frame number + Number of bits used for optional fields, if any. NOTE- In general, if the given address consists of n bits, then 2^n locations are possible. Then, size of memory = 2^n x Size of one location. If the memory is byte-addressable, then size of one location = 1 byte; thus, size of memory = 2^n bytes. If the memory is word-addressable where 1 word = m bytes, then size of one location = m bytes; thus, size of memory = 2^n x m bytes.
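These relations can be checked numerically. A quick sketch with assumed, byte-addressable sizes (64 MB main memory, 4 KB pages, the same figures used in Problem-04 later):

```python
import math

page_size = 2**12           # 4 KB
main_memory = 2**26         # 64 MB

# Frame size = page size, so:
frames = main_memory // page_size

print(int(math.log2(frames)))        # bits in frame number      -> 14
print(int(math.log2(page_size)))     # bits in page offset       -> 12
print(int(math.log2(main_memory)))   # bits in physical address  -> 26
```

Note that frame-number bits plus offset bits equal the physical-address bits (14 + 12 = 26), as the relations above imply.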

Problem-01: Calculate the size of memory if its address consists of 22 bits and the memory is 2-byte addressable. Solution- Number of locations possible with 22 bits = 2^22 locations. It is given that the size of one location = 2 bytes. Thus, Size of memory = 2^22 x 2 bytes = 2^23 bytes = 8 MB. Problem-02: Calculate the number of bits required in the address for memory having size of 16 GB. Assume the memory is 4-byte addressable. Solution- Let n bits be required. Then, Size of memory = 2^n x 4 bytes. Since the given memory has size of 16 GB, we have- 2^n x 4 bytes = 16 GB, so 2^n x 2^2 = 2^34, so 2^n = 2^32. ∴ n = 32 bits

Problem-03: Consider a system with byte-addressable memory, 32 bit logical addresses, 4 kilobyte page size and page table entries of 4 bytes each. The size of the page table in the system in megabytes is _____. 2 4 8 16 Solution- Number of bits in logical address = 32 bits, Page size = 4 KB, Page table entry size = 4 bytes. Process Size- Number of bits in logical address = 32 bits. Thus, Process size = 2^32 B = 4 GB. Number of Entries in Page Table- Number of pages the process is divided into = Process size / Page size = 4 GB / 4 KB = 2^20 pages. Thus, Number of entries in page table = 2^20 entries. Page Table Size- = Number of entries in page table x Page table entry size = 2^20 x 4 bytes = 4 MB

Problem-04: Consider a machine with 64 MB physical memory and a 32 bit virtual address space. If the page size is 4 KB, what is the approximate size of the page table? 16 MB 8 MB 2 MB 24 MB Solution- Size of main memory = 64 MB, Number of bits in virtual address = 32 bits, Page size = 4 KB. Number of Bits in Physical Address- Size of main memory = 64 MB = 2^26 B. Thus, Number of bits in physical address = 26 bits. Number of Frames in Main Memory- = Size of main memory / Frame size = 64 MB / 4 KB = 2^26 B / 2^12 B = 2^14. Thus, Number of bits in frame number = 14 bits. Number of Bits in Page Offset- Page size = 4 KB = 2^12 B. Thus, Number of bits in page offset = 12 bits. Process Size- Number of bits in virtual address = 32 bits. Process size = 2^32 B = 4 GB. Number of Entries in Page Table- = Process size / Page size = 4 GB / 4 KB = 2^20 pages. Thus, Number of entries in page table = 2^20 entries. Page Table Size- = Number of entries in page table x Page table entry size = Number of entries in page table x Number of bits in frame number = 2^20 x 14 bits ≈ 2^20 x 16 bits (approximating 14 bits ≈ 16 bits) = 2^20 x 2 bytes = 2 MB

Problem-05: In a virtual memory system, the size of the virtual address is 32-bit, the size of the physical address is 30-bit, the page size is 4 Kbyte and the size of each page table entry is 32-bit. The main memory is byte addressable. Which one of the following is the maximum number of bits that can be used for storing protection and other information in each page table entry? 10 12 14 Solution- Number of bits in virtual address = 32 bits, Number of bits in physical address = 30 bits, Page size = 4 KB, Page table entry size = 32 bits. Size of Main Memory- Number of bits in physical address = 30 bits. Size of main memory = 2^30 B = 1 GB. Number of Frames in Main Memory- = Size of main memory / Frame size = 1 GB / 4 KB = 2^30 B / 2^12 B = 2^18. Thus, Number of bits in frame number = 18 bits. Number of Bits used for Storing Other Information- Maximum number of bits that can be used for storing protection and other information = Page table entry size – Number of bits in frame number = 32 bits – 18 bits = 14 bits

Optimal Page Size | Overhead in Paging- In a paging scheme, there are mainly two overheads- 1. Overhead of Page Tables- Paging requires each process to maintain a page table. So, there is an overhead of maintaining a page table for each process. 2. Overhead of Wasted Pages- There is an overhead of wasting the last page of each process if it is not completely filled. On average, half a page is wasted per process. Thus, Total overhead for one process = Size of its page table + (Page size / 2). Optimal Page Size- Optimal page size is the page size that minimizes the total overhead. It is given as- Optimal page size = (2 x Process size x Page table entry size)^(1/2). Problem-01: In a paging scheme, virtual address space is 4 KB and page table entry size is 8 bytes. What should be the optimal page size? Solution- Given- Virtual address space = Process size = 4 KB, Page table entry size = 8 bytes. Optimal page size = (2 x 4 KB x 8 bytes)^(1/2) = (2^16 bytes^2)^(1/2) = 2^8 bytes = 256 bytes. Thus, Optimal page size = 256 bytes.

Problem-02: In a paging scheme, virtual address space is 16 MB and page table entry size is 2 bytes. What should be the optimal page size? Solution- Virtual address space = Process size = 16 MB, Page table entry size = 2 bytes. Optimal page size = (2 x Process size x Page table entry size)^(1/2) = (2 x 16 MB x 2 bytes)^(1/2) = (2^26 bytes^2)^(1/2) = 2^13 bytes = 8 KB. Thus, Optimal page size = 8 KB. Problem-03: In a paging scheme, virtual address space is 256 GB and page table entry size is 32 bytes. What should be the optimal page size? Solution- Virtual address space = Process size = 256 GB, Page table entry size = 32 bytes. Optimal page size = (2 x Process size x Page table entry size)^(1/2) = (2 x 256 GB x 32 bytes)^(1/2) = (2^44 bytes^2)^(1/2) = 2^22 bytes = 4 MB. Thus, Optimal page size = 4 MB.
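The formula used in the three problems above, p = (2 x s x e)^(1/2) with s the process size and e the page table entry size (both in bytes), can be checked numerically:

```python
import math

# Optimal page size minimizing (s*e/p + p/2); helper name is ours.
def optimal_page_size(process_size, entry_size):
    return math.isqrt(2 * process_size * entry_size)

print(optimal_page_size(4 * 2**10, 8))      # Problem-01: 4 KB, 8 B    -> 256
print(optimal_page_size(16 * 2**20, 2))     # Problem-02: 16 MB, 2 B   -> 8192 (8 KB)
print(optimal_page_size(256 * 2**30, 32))   # Problem-03: 256 GB, 32 B -> 4194304 (4 MB)
```

The formula comes from minimizing the total overhead s*e/p + p/2 over the page size p; setting the derivative to zero gives p = sqrt(2*s*e).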

Problem-01: Consider a single level paging scheme. The virtual address space is 4 MB and page size is 4 KB. What is the maximum page table entry size possible such that the entire page table fits well in one page? Solution- For the page table to fit in one page, we must have- Page table size <= Page size. Number of Pages of Process- = Process size / Page size = 4 MB / 4 KB = 2^10 pages. Page Table Size- Let page table entry size = B bytes. Page table size = Number of entries in the page table x Page table entry size = Number of pages the process is divided into x Page table entry size = 2^10 x B bytes. According to the above condition, we must have- 2^10 x B bytes <= 4 KB, so 2^10 x B <= 2^12, so B <= 4. Thus, maximum page table entry size possible = 4 bytes.

Problem-02: Consider a single level paging scheme. The virtual address space is 4 GB and page size is 128 KB. What is the maximum page table entry size possible such that the entire page table fits well in one page? Solution- For the page table to fit in one page, we must have- Page table size <= Page size. Number of Pages of Process- = Process size / Page size = 4 GB / 128 KB = 2^32 B / 2^17 B = 2^15 pages. Page Table Size- Let page table entry size = B bytes. Page table size = Number of entries in the page table x Page table entry size = Number of pages the process is divided into x Page table entry size = 2^15 x B bytes. According to the above condition, we must have- 2^15 x B bytes <= 128 KB, so 2^15 x B <= 2^17, so B <= 4. Thus, maximum page table entry size possible = 4 bytes. Problem-03: Consider a single level paging scheme. The virtual address space is 128 TB and page size is 32 MB. What is the maximum page table entry size possible such that the entire page table fits well in one page? Solution- For the page table to fit in one page, we must have- Page table size <= Page size. Number of Pages of Process- = Process size / Page size = 128 TB / 32 MB = 2^47 B / 2^25 B = 2^22 pages. Page Table Size- Let page table entry size = B bytes. Page table size = Number of entries in the page table x Page table entry size = Number of pages the process is divided into x Page table entry size = 2^22 x B bytes. According to the above condition, we must have- 2^22 x B bytes <= 32 MB, so 2^22 x B <= 2^25, so B <= 8. Thus, maximum page table entry size possible = 8 bytes.

Problem-04: Consider a single level paging scheme. The virtual address space is 256 MB and page table entry size is 4 bytes. What is the minimum page size possible such that the entire page table fits well in one page? Solution- For the page table to fit in one page, we must have- Page table size <= Page size. Let page size = B bytes. Number of Pages of Process- = Process size / Page size = 256 MB / B bytes = 2^28 / B. Page Table Size- = Number of entries in the page table x Page table entry size = Number of pages the process is divided into x Page table entry size = (2^28 / B) x 4 bytes = (2^30 / B) bytes. According to the above condition, we must have- (2^30 / B) bytes <= B bytes, so B^2 >= 2^30, so B >= 2^15. Thus, minimum page size possible = 2^15 bytes or 32 KB. Problem-05: Consider a single level paging scheme. The virtual address space is 512 KB and page table entry size is 2 bytes. What is the minimum page size possible such that the entire page table fits well in one page? Solution- For the page table to fit in one page, we must have- Page table size <= Page size. Let page size = B bytes. Number of Pages of Process- = Process size / Page size = 512 KB / B bytes = 2^19 / B. Page Table Size- = Number of entries in the page table x Page table entry size = Number of pages the process is divided into x Page table entry size = (2^19 / B) x 2 bytes = (2^20 / B) bytes. According to the above condition, we must have- (2^20 / B) bytes <= B bytes, so B^2 >= 2^20, so B >= 2^10. Thus, minimum page size possible = 2^10 bytes or 1 KB.
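The "page table fits in one page" condition can also be checked by brute force. This sketch (helper name is ours) finds the smallest power-of-two page size B with (process size / B) x entry size <= B, matching Problems 04 and 05:

```python
# Smallest power-of-two page size whose page table fits inside one page.
def min_page_size(process_size, entry_size):
    b = 1
    while (process_size // b) * entry_size > b:   # page table still bigger than a page
        b *= 2
    return b

print(min_page_size(256 * 2**20, 4))   # Problem-04: 32768 (32 KB)
print(min_page_size(512 * 2**10, 2))   # Problem-05: 1024 (1 KB)
```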

Disadvantage Of Paging- It increases the effective access time due to the increased number of memory accesses. One memory access is required to get the frame number from the page table. Another memory access is required to get the word from the page. Translation Lookaside Buffer- The Translation Lookaside Buffer (TLB) is a solution that tries to reduce the effective access time. Being a hardware cache, the TLB has a much smaller access time than main memory. Structure- The TLB consists of two columns- Page Number Frame Number Translating Logical Address into Physical Address- In a paging scheme using a TLB, the logical address generated by the CPU is translated into the physical address using the following steps- Step-01: The CPU generates a logical address consisting of two parts- Page Number Page Offset Step-02: The TLB is checked to see if it contains an entry for the referenced page number. The referenced page number is compared with all the TLB entries at once. Now, two cases are possible- Case-01: If there is a TLB hit- If the TLB contains an entry for the referenced page number, a TLB hit occurs. In this case, the TLB entry is used to get the corresponding frame number for the referenced page number.

Case-02: If there is a TLB miss- If TLB does not contain an entry for the referenced page number, a TLB miss occurs. In this case, page table is used to get the corresponding frame number for the referenced page number. Then, TLB is updated with the page number and frame number for future references . Step-03: After the frame number is obtained, it is combined with the page offset to generate the physical address. Then, physical address is used to read the required word from the main memory.
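The hit and miss cases above can be sketched together. An illustrative Python sketch with hypothetical mappings, using a dictionary to stand in for the hardware TLB:

```python
PAGE_SIZE = 4096
tlb = {}                           # page number -> frame number (small, fast cache)
page_table = {0: 3, 1: 8, 2: 1}   # full per-process mapping (lives in main memory)

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page in tlb:                      # TLB hit: no page-table access needed
        frame = tlb[page]
    else:                                # TLB miss: consult the page table
        frame = page_table[page]
        tlb[page] = frame                # update the TLB for future references
    return frame * PAGE_SIZE + offset

print(translate(8200))   # miss: page 2 -> frame 1 -> 4104
print(translate(8300))   # hit this time: 4204
```

The second access to page 2 is served from the TLB, skipping the page-table lookup, which is exactly why the TLB reduces the effective access time.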

The following flowchart illustrates the above steps of translating a logical address into a physical address- Point-01: Unlike page tables, there exists only one TLB in the system. So, whenever context switching occurs, the entire content of the TLB is flushed. The TLB is then updated with entries for the currently running process. Point-02: When a new process gets scheduled- Initially, the TLB is empty, so TLB misses are frequent. With every access from the page table, the TLB is updated. After some time, TLB hits increase and TLB misses reduce. Point-03: The time taken to update the TLB after getting the frame number from the page table is negligible. Also, the TLB is updated in parallel while fetching the word from the main memory.

Advantages- The TLB reduces the effective access time. Only one memory access is required when a TLB hit occurs. Disadvantages- On every context switch, the TLB has to be flushed and then refilled for the newly scheduled process; this happens again and again. Other disadvantages are- The TLB can hold the data of only one process at a time. When context switches occur frequently, the performance of the TLB degrades due to a low hit ratio. As it is special hardware, it involves additional cost. Effective Access Time- In single level paging using a TLB, the effective access time is given as- Effective Access Time = Hit ratio x (TLB access time + Main memory access time) + Miss ratio x (TLB access time + 2 x Main memory access time). Problem-01: A paging scheme uses a Translation Lookaside Buffer (TLB). A TLB access takes 10 ns and a main memory access takes 50 ns. What is the effective access time (in ns) if the TLB hit ratio is 90% and there is no page fault? Solution- TLB access time = 10 ns, Main memory access time = 50 ns, TLB hit ratio = 90% = 0.9. Calculating TLB Miss Ratio- = 1 – TLB hit ratio = 1 – 0.9 = 0.1. Calculating Effective Access Time- = 0.9 x { 10 ns + 50 ns } + 0.1 x { 10 ns + 2 x 50 ns } = 0.9 x 60 ns + 0.1 x 110 ns = 54 ns + 11 ns = 65 ns
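The effective-access-time formula can be sketched and checked against the worked problems (the helper name is ours; times in nanoseconds):

```python
# EAT = hit * (tlb + mem) + miss * (tlb + 2 * mem)
# A hit costs one TLB search plus one memory access; a miss costs the TLB
# search, one page-table access, and one access for the word itself.
def eat(hit_ratio, tlb, mem):
    return hit_ratio * (tlb + mem) + (1 - hit_ratio) * (tlb + 2 * mem)

print(round(eat(0.9, 10, 50), 2))   # Problem-01 above -> 65.0 ns
print(round(eat(0.6, 10, 80), 2))   # TLB problem later in the slides -> 122.0
```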

Problem-02: A paging scheme uses a Translation Look aside buffer (TLB). The effective memory access takes 160 ns and a main memory access takes 100 ns. What is the TLB access time (in ns) if the TLB hit ratio is 60% and there is no page fault? Solution- Effective access time = 160 ns Main memory access time = 100 ns TLB Hit ratio = 60% = 0.6 Calculating TLB Miss Ratio-   = 1 – TLB Hit ratio = 1 – 0.6 = 0.4   Calculating TLB Access Time-   Let TLB access time = T ns. Substituting values in the above formula, we get- 160 ns = 0.6 x { T + 100 ns } + 0.4 x { T + 2 x 100 ns } 160 = 0.6 x T + 60 + 0.4 x T + 80 160 = T + 140 T = 160 – 140 T = 20

Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is _________. Solution- TLB hit ratio = 0.6, therefore TLB miss ratio = 0.4. Time taken to access TLB (t) = 10 ms. Time taken to access main memory (m) = 80 ms. Effective Access Time (EAT) = 0.6 x (10 + 80) + 0.4 x (10 + 80 + 80) = 0.6 x 90 + 0.4 x 170 = 54 + 68 = 122 ms. Page Fault- When a page referenced by the CPU is not found in the main memory, it is called a page fault. When a page fault occurs, the required page has to be fetched from the secondary memory into the main memory. A page has to be replaced if all the frames of main memory are already occupied. Page Replacement- Page replacement is the process of swapping out an existing page from a frame of main memory and replacing it with the required page. Page replacement is required when all the frames of main memory are already occupied, and a page has to be replaced to create room for the required page. Page Replacement Algorithms- Page replacement algorithms help to decide which page must be swapped out from the main memory to create room for the incoming page. Various page replacement algorithms are- FIFO Page Replacement Algorithm LIFO Page Replacement Algorithm LRU Page Replacement Algorithm Optimal Page Replacement Algorithm Random Page Replacement Algorithm A good page replacement algorithm is one that minimizes the number of page faults.

NOTE- Page replacement comes into play only after all the frames get occupied; from then on, each incoming page replaces exactly one existing page. FIFO Page Replacement Algorithm- As the name suggests, this algorithm works on the principle of “First In First Out“. It replaces the oldest page, the one that has been present in the main memory for the longest time. It is implemented by keeping track of all the pages in a queue. LIFO Page Replacement Algorithm- As the name suggests, this algorithm works on the principle of “Last In First Out“. It replaces the newest page, the one that arrived last in the main memory. It is implemented by keeping track of all the pages in a stack. LRU Page Replacement Algorithm- As the name suggests, this algorithm works on the principle of “Least Recently Used“. It replaces the page that has not been referenced by the CPU for the longest time. Optimal Page Replacement Algorithm- This algorithm replaces the page that will not be referenced by the CPU for the longest time in the future. It is practically impossible to implement, because the pages that will not be used for the longest time in the future cannot be predicted. However, it is the best known algorithm and gives the least number of page faults. Hence, it is used as a performance measure criterion for other algorithms. Random Page Replacement Algorithm- As the name suggests, this algorithm replaces a randomly chosen page. So, this algorithm may behave like any other algorithm such as FIFO, LIFO, LRU, Optimal etc.
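The FIFO and LRU policies described above can be simulated in a few lines. An illustrative sketch (function names are ours), run on the reference string used in the problems that follow:

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    queue, faults = [], 0
    for page in refs:
        if page not in queue:               # page fault
            faults += 1
            if len(queue) == frames:
                queue.pop(0)                # evict the oldest page
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    recent = OrderedDict()                  # key order = recency order
    faults = 0
    for page in refs:
        if page in recent:
            recent.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1
            if len(recent) == frames:
                recent.popitem(last=False)  # evict the least recently used page
        recent[page] = True
    return faults

refs = [4, 7, 6, 1, 7, 6, 1, 2, 7, 2]
print(fifo_faults(refs, 3))   # 6, matching Problem-01 below
print(lru_faults(refs, 3))    # 6, matching Problem-02 below
```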

Problem-01: A system uses 3 page frames for storing process pages in main memory. It uses the First In First Out (FIFO) page replacement policy. Assume that all the page frames are initially empty. What is the total number of page faults that will occur while processing the page reference string given below- 4, 7, 6, 1, 7, 6, 1, 2, 7, 2. Also calculate the hit ratio and miss ratio. Solution- Total number of page faults occurred = 6. Calculating Hit Ratio- Total number of page hits = Total number of references – Total number of page faults = 10 – 6 = 4. Thus, Hit ratio = Total number of page hits / Total number of references = 4 / 10 = 0.4 or 40%. Calculating Miss Ratio- Total number of page misses or page faults = 6. Thus, Miss ratio = Total number of page misses / Total number of references = 6 / 10 = 0.6 or 60%. Alternatively, Miss ratio = 1 – Hit ratio = 1 – 0.4 = 0.6 or 60%

Problem-02: A system uses 3 page frames for storing process pages in main memory. It uses the Least Recently Used (LRU) page replacement policy. Assume that all the page frames are initially empty. What is the total number of page faults that will occur while processing the page reference string given below- 4 , 7, 6, 1, 7, 6, 1, 2, 7, 2. Also calculate the hit ratio and miss ratio . Total number of page faults occurred = 6 Hit ratio = 0.4 or 40% Miss ratio = 0.6 or 60 % Problem-03: A system uses 3 page frames for storing process pages in main memory. It uses the Optimal page replacement policy. Assume that all the page frames are initially empty. What is the total number of page faults that will occur while processing the page reference string given below- 4 , 7, 6, 1, 7, 6, 1, 2, 7, 2. Also calculate the hit ratio and miss ratio. Total number of page faults occurred = 5 Hit ratio = 0.5 or 50% Miss ratio = 0.5 or 50%

Belady’s Anomaly- Belady’s Anomaly is the phenomenon in which the number of page faults increases when the number of frames in main memory is increased. NOTE- “An algorithm suffers from Belady’s Anomaly” does not mean that the number of page faults will always increase on increasing the number of frames in main memory. This unusual behavior is observed only sometimes. Consider the reference string 0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4 under the FIFO policy. Now, observe the following two cases- Case-01: When frame size = 3, Number of page faults = 9. Case-02: When frame size = 4, Number of page faults = 10. Hence, FIFO suffers from Belady’s Anomaly here.
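The anomaly can be reproduced by simulation. A sketch of FIFO replacement run on the slide's reference string with 3 and then 4 frames:

```python
# FIFO page replacement: count faults for a given number of frames.
def fifo_faults(refs, frames):
    queue, faults = [], 0
    for page in refs:
        if page not in queue:       # page fault
            faults += 1
            if len(queue) == frames:
                queue.pop(0)        # evict the oldest page
            queue.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(fifo_faults(refs, 3))   # 9 page faults
print(fifo_faults(refs, 4))   # 10 page faults: more frames, yet more faults
```

Adding a fourth frame raises the fault count from 9 to 10 on this string, which is exactly Belady's Anomaly.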