Process management in Operating System_Unit-2

mohanaps 103 views 56 slides May 14, 2024

About This Presentation

This operating systems presentation covers:
Process Concept; Process Control Block; Process Scheduling; CPU Scheduling - Basic Concepts; Scheduling Algorithms – FIFO; RR; SJF; Multi-level; Multi-level feedback. Process Synchronization and deadlocks: The Critical Section Problem; Synchronization ha...


Slide Content

Process management Ms. Mohana Priya Department of Computer Science [UG]

Process A process is a program in execution. The execution of a process must progress in a sequential fashion. In simple terms, we write a computer program in a text file, and when we execute that program it becomes a process, which performs all the tasks specified in the program. When a program is loaded into memory and becomes a process, it can be divided into four sections: stack, heap, text and data.

Stack – contains temporary data such as method/function parameters, return addresses and local variables.
Heap – memory dynamically allocated to the process during its run time.
Text – the compiled program code; the current activity is represented by the value of the program counter and the contents of the processor's registers.
Data – contains the global and static variables.

Process Life Cycle (Process States) When a process executes, it passes through different states.

Start – the initial state when a process is first started/created.
Ready – the process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate the CPU to them so that they can run. A process may enter this state after the Start state, or while running if it is interrupted by the scheduler so that the CPU can be assigned to another process.
Running – once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
Waiting – the process moves into the waiting state if it needs to wait for a resource, such as user input or a file becoming available.
Terminated or Exit – once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
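The transitions between these states can be sketched as a small lookup table. A minimal Python illustration (the state and event names here are labels chosen for this sketch, not part of any OS API):

```python
# Allowed process state transitions, following the life cycle described above.
# State and event names are illustrative labels, not an operating-system API.
TRANSITIONS = {
    ("start", "admit"): "ready",
    ("ready", "dispatch"): "running",    # scheduler assigns the CPU
    ("running", "interrupt"): "ready",   # preempted by the scheduler
    ("running", "io_wait"): "waiting",   # blocks on I/O or a resource
    ("waiting", "io_done"): "ready",     # resource became available
    ("running", "exit"): "terminated",
}

def next_state(state, event):
    """Return the next state, or raise if the transition is illegal."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} --{event}-->")
```

Note that a waiting process can never move directly to running: it must first become ready and be dispatched by the scheduler.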

Context Switching A context switch is the mechanism to store and restore the state (context) of a CPU in the Process Control Block so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system. Process Control Block: when the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored in its process control block.

CPU Scheduling Algorithms A process scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. We discuss the following five popular process scheduling algorithms in this chapter:
First-Come, First-Served (FCFS) Scheduling
Shortest-Job-First (SJF) Scheduling
Round Robin (RR) Scheduling
Multiple-Level Queues Scheduling
Multi-level Feedback Scheduling

Algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted; it keeps the CPU until it terminates or blocks. Preemptive scheduling is based on priority: a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.

CPU Scheduling in Operating Systems
Arrival Time: time at which the process arrives in the ready queue.
Completion Time: time at which the process completes its execution.
Burst Time: time required by a process for CPU execution.
Turn Around Time: difference between completion time and arrival time.

Important formulas
Turn Around Time = Completion Time – Arrival Time
Waiting Time (W.T): difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
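As a worked illustration of these formulas, here is a short Python sketch computing completion, turnaround and waiting times for three processes under FCFS scheduling (the process data is invented for the example):

```python
# FCFS: processes run in arrival order; each starts when the CPU frees up.
# Process data (pid, arrival time, burst time) is invented for illustration.
procs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]

time, results = 0, {}
for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):
    time = max(time, arrival) + burst    # completion time of this process
    tat = time - arrival                 # Turn Around Time = CT - AT
    wt = tat - burst                     # Waiting Time = TAT - BT
    results[pid] = (time, tat, wt)
```

For these processes, P1 completes at 5 (TAT 5, WT 0), P2 at 8 (TAT 7, WT 4) and P3 at 16 (TAT 14, WT 6).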

Objectives of Process Scheduling Algorithm Max CPU utilization [Keep CPU as busy as possible] Fair allocation of CPU. Max throughput [Number of processes that complete their execution per time unit] Min turnaround time [Time taken by a process to finish execution] Min waiting time [Time a process waits in ready queue] Min response time [Time when a process produces first response]

Process Synchronization There are two ways processes can execute. Concurrent execution – the CPU scheduler switches rapidly between processes; a process may be stopped at any point and the processor assigned to another process, so only one instruction executes at a time. Parallel execution – two or more instructions of different processes execute simultaneously on different processing cores.

Why is Process Synchronization Important? When several processes share data while running in parallel on different cores, changes made by one process may override changes made by another process running in parallel, resulting in inconsistent data. Such processes must therefore be synchronized; handling system resources and processes to avoid this situation is known as process synchronization.

Classic Banking Example – Suppose your bank account has $5000. You try to withdraw $4000 using net banking and simultaneously try to withdraw via ATM too. For net banking, at time t = 0 ms the bank checks that you have $5000 as balance; you are trying to withdraw $4000, which is less than your available balance, so it lets you proceed, and at time t = 1 ms it connects you to the server to transfer the amount. Meanwhile, for the ATM, at time t = 0.5 ms the bank checks your available balance, which still reads $5000, and thus lets you enter your ATM password and the withdrawal amount. At time t = 1.5 ms the ATM dispenses $4000, and at time t = 2 ms the net banking transfer of $4000 completes.

Now, due to concurrent access and the processing time the computer takes in both channels, you were able to withdraw $3000 more than your balance: $8000 in total was taken out while the balance was only $5000.
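The interleaving above can be re-enacted deterministically in Python. A sketch of the unsafe check-then-act pattern, plus a fix that serializes check and update with a lock (the lock-based fix uses Python's `threading.Lock`, which is not mentioned on the slides):

```python
import threading

# Deterministic re-enactment of the banking race described above:
# both channels read the balance *before* either writes it back.
balance = 5000

def withdraw_unsafe(read_balance, amount):
    # check-then-act on a stale snapshot: the race window
    if read_balance >= amount:
        return read_balance - amount  # balance this channel believes results
    return read_balance

# t = 0 ms: net banking reads 5000; t = 0.5 ms: ATM also reads 5000
net_view = withdraw_unsafe(balance, 4000)   # thinks the balance is now 1000
atm_view = withdraw_unsafe(balance, 4000)   # also thinks the balance is now 1000
# Whichever write lands last wins: the account ends at 1000,
# yet $8000 was dispensed from a $5000 balance.

# Fix: make the check and the update one atomic critical section.
lock = threading.Lock()
account = {"balance": 5000}

def withdraw_safe(amount):
    with lock:                       # critical section: check and update atomically
        if account["balance"] >= amount:
            account["balance"] -= amount
            return True
        return False
```

With the lock in place, the second $4000 withdrawal is rejected because the first one has already reduced the balance to $1000.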

How to Solve this Situation To avoid such situations, process synchronization is used: a concurrent process P2 is made aware of the existing concurrent process P1 and is not allowed to proceed while P1 is running; P2 may execute only once P1 completes. Process synchronization also prevents race conditions – the condition in which several processes access and manipulate the same data, so that the outcome of execution depends on the particular order in which the accesses take place.

What is a Race Condition? A race condition occurs when more than one process tries to access and modify the same shared data or resource. Because many processes modify the shared data concurrently, there is a high chance of a process ending up with a wrong result: the value of the shared data depends on the execution order of the processes. The race condition is associated with the critical section. It can be handled by ensuring that only one process at a time executes the critical section; such a section is then called an atomic section.

Types of Process Synchronization On the basis of synchronization, processes are categorized as one of the following two types: Independent Process: execution of one process does not affect the execution of other processes. Cooperative Process: execution of one process affects the execution of other processes.

Elements of the Process
Entry Section – to enter the critical section, a process must request permission; the entry section code implements this request.
Critical Section – the segment of code where the process changes common variables, updates a table, writes to a file, and so on. When one process is executing in its critical section, no other process is allowed to execute in its critical section.
Exit Section – the code that follows the critical section and marks its end.
Remainder Section – the remaining code of the process.

Critical Section Problem A solution to the critical section problem must satisfy the following three conditions: Mutual Exclusion:  If a process is executing in its critical section, then no other process is allowed to execute in the critical section. Progress:  If no process is executing in the critical section and other processes are waiting outside the critical section, then only those processes that are not executing in their remainder section can participate in deciding which will enter in the critical section next, and the selection cannot be postponed indefinitely. Bounded Waiting:  A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Mechanisms for solving the critical section problem: Semaphores, Test and Set, Mutex.

Semaphores A semaphore is simply a non-negative variable shared between threads; it is another solution to the critical section problem. It is a signaling mechanism: a thread waiting on a semaphore can be signaled by another thread. It uses two atomic operations, 1) wait and 2) signal, for process synchronization. Example:
wait(S):
  while (S <= 0);   // busy wait
  S = S - 1;
signal(S):
  S = S + 1;
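The wait and signal operations correspond to `acquire` and `release` on Python's `threading.Semaphore`. A minimal sketch (the non-blocking acquires are used only to observe the counter without hanging):

```python
import threading

S = threading.Semaphore(1)   # non-negative counter, initially 1

S.acquire()                  # wait(S): decrements; blocks while the count is 0
# ... critical section ...
S.release()                  # signal(S): increments, waking one waiting thread

# acquire(blocking=False) lets us observe the count without blocking:
got = S.acquire(blocking=False)    # True: count was 1, now 0
again = S.acquire(blocking=False)  # False: count is 0, wait(S) would block here
S.release()                        # restore the count to 1
```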

Classical Problems of Synchronization
The Bounded Buffer Problem (also called the Producer-Consumer Problem)
The Readers-Writers Problem
The Dining Philosophers Problem
These problems are used to test nearly every newly proposed synchronization scheme or primitive.

The Bounded Buffer Problem (Producer-Consumer Problem) Consider:
a buffer which can store n items
a producer process which creates the items (one at a time)
a consumer process which processes them (one at a time)
A producer cannot produce unless there is an empty buffer slot to fill. A consumer cannot consume unless there is at least one produced item.

Semaphore empty = N, full = 0, mutex = 1;

process producer {
  while (true) {
    empty.acquire();
    mutex.acquire();
    // produce
    mutex.release();
    full.release();
  }
}

process consumer {
  while (true) {
    full.acquire();
    mutex.acquire();
    // consume
    mutex.release();
    empty.release();
  }
}

The semaphore mutex provides mutual exclusion for access to the buffer.
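The pseudocode above translates almost line for line into Python, with a bounded run instead of infinite loops so the sketch terminates (buffer size and item count are chosen arbitrarily for illustration):

```python
import threading
from collections import deque

N = 3                                  # buffer capacity
buffer = deque()
empty = threading.Semaphore(N)         # counts empty slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Semaphore(1)         # mutual exclusion on the buffer
consumed = []

def producer():
    for item in range(10):
        empty.acquire()                # wait for an empty slot
        with mutex:
            buffer.append(item)        # produce
        full.release()                 # one more filled slot

def consumer():
    for _ in range(10):
        full.acquire()                 # wait for a produced item
        with mutex:
            consumed.append(buffer.popleft())  # consume
        empty.release()                # one more empty slot

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start(); t1.join(); t2.join()
```

With only three slots, the producer blocks on `empty.acquire()` whenever it gets three items ahead of the consumer, exactly as the problem statement requires.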

Readers-Writers Problem A data item such as a file is shared among several processes. Each process is classified as either a  reader  or  writer . Multiple readers may access the file simultaneously. A writer must have exclusive access (i.e., cannot share with either a reader or another writer).

A solution gives priority to either readers or writers.
Readers' priority: no reader is kept waiting unless a writer has already obtained permission to access the database.
Writers' priority: if a writer is waiting to access the database, no new readers may start reading.
A solution to either version may cause starvation: in the readers' priority version, writers may starve; in the writers' priority version, readers may starve.

A semaphore solution to the readers' priority version (without addressing starvation):

Semaphore mutex = 1;
Semaphore db = 1;
int readerCount = 0;

process writer {
  db.acquire();
  // write
  db.release();
}

process reader {
  // protecting readerCount
  mutex.acquire();
  ++readerCount;
  if (readerCount == 1)
    db.acquire();
  mutex.release();
  // read
  // protecting readerCount
  mutex.acquire();
  --readerCount;
  if (readerCount == 0)
    db.release();
  mutex.release();
}
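The reader-side logic above can be exercised step by step in Python to show that the first reader locks writers out and the last reader lets them back in (a sketch; the non-blocking acquire stands in for a writer attempting entry):

```python
import threading

mutex = threading.Semaphore(1)   # protects readerCount
db = threading.Semaphore(1)      # exclusive access to the shared data
readerCount = 0

def start_read():
    global readerCount
    with mutex:
        readerCount += 1
        if readerCount == 1:
            db.acquire()         # first reader locks out writers

def end_read():
    global readerCount
    with mutex:
        readerCount -= 1
        if readerCount == 0:
            db.release()         # last reader lets writers back in

start_read(); start_read()                  # two readers share the data
writer_ok = db.acquire(blocking=False)      # a writer must wait: False
end_read(); end_read()                      # both readers leave
writer_ok_now = db.acquire(blocking=False)  # no readers left: True
db.release()
```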

Dining Philosophers Problem The dining philosophers problem states that there are five philosophers sharing a circular table, and they alternately eat and think. There is a bowl of rice for each philosopher and five chopsticks. A philosopher needs both their right and left chopsticks to eat; a hungry philosopher may eat only if both chopsticks are available. Otherwise the philosopher puts down any chopstick held and begins thinking again.

Solution of the Dining Philosophers Problem One solution is to use a semaphore to represent each chopstick. A chopstick is picked up by executing a wait operation on its semaphore and released by executing a signal operation. The chopsticks are declared as: semaphore chopstick[5]; Initially the elements of chopstick are initialized to 1, as the chopsticks are on the table and not picked up by any philosopher.

The structure of philosopher i is given as follows:

do {
  wait( chopstick[i] );
  wait( chopstick[(i+1) % 5] );
  // EATING THE RICE
  signal( chopstick[i] );
  signal( chopstick[(i+1) % 5] );
  // THINKING
} while(1);

First, wait operations are performed on chopstick[i] and chopstick[(i+1) % 5], meaning philosopher i has picked up the chopsticks on both sides. Then eating is performed. After that, signal operations are performed on chopstick[i] and chopstick[(i+1) % 5], meaning philosopher i has eaten and put down both chopsticks; the philosopher then goes back to thinking.
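Note that the structure above can deadlock if all five philosophers pick up their left chopstick at the same instant. A common remedy, sketched below in Python (the resource-ordering variant is my addition, not shown on the slide), is to always acquire the lower-numbered chopstick first, which breaks the circular-wait condition:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # all on the table
meals = [0] * N

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    # Resource-ordering variant: always pick up the lower-numbered
    # chopstick first, so no circular wait (hence no deadlock) can form.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        chopstick[first].acquire()    # wait(chopstick[first])
        chopstick[second].acquire()   # wait(chopstick[second])
        meals[i] += 1                 # EATING THE RICE
        chopstick[second].release()   # signal(chopstick[second])
        chopstick[first].release()    # signal(chopstick[first])
        # THINKING

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```

All five threads finish their three rounds of eating, which would not be guaranteed under the naive left-then-right acquisition order.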

Deadlock A deadlock happens in an operating system when two or more processes each need, in order to complete their execution, a resource that is held by another of the processes.

Coffman Conditions A  deadlock  occurs if the four Coffman conditions hold true . Mutual Exclusion Hold and wait No Preemption Circular wait

Mutual Exclusion There should be a resource that can only be held by one process at a time. In the diagram below, there is a single instance of Resource 1 and it is held by Process 1 only.

Hold and Wait - A process can hold multiple resources and still request more resources from other processes which are holding them. In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting the Resource 1 which is held by Process 1.

No Preemption A resource cannot be preempted from a process by force. A process can only release a resource voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be released when Process 1 relinquishes it voluntarily after its execution is complete.

Circular Wait - A process is waiting for a resource held by a second process, which is waiting for a resource held by a third process, and so on, until the last process is waiting for a resource held by the first process, forming a circular chain. For example: Process 1 is allocated Resource 2 and is requesting Resource 1, while Process 2 is allocated Resource 1 and is requesting Resource 2. This forms a circular wait loop.

Deadlock Prevention Eliminate Mutual Exclusion Eliminate Hold and wait Eliminate No Preemption Eliminate Circular Wait

Eliminate Mutual Exclusion It is not possible to violate mutual exclusion, because some resources, such as a tape drive or printer, are inherently non-shareable. Eliminate No Preemption Preempt resources from a process when those resources are required by other, higher-priority processes.

Eliminate Hold and Wait Allocate all required resources to a process before it starts executing; this eliminates the hold-and-wait condition, but leads to low device utilization. For example, if a process requires a printer only at a later time, and we allocate the printer before its execution starts, the printer remains blocked until the process completes. Alternatively, a process may make a new request for resources only after releasing its current set of resources; this solution may lead to starvation.

Eliminate Circular Wait Each resource is assigned a numerical ordering, and a process may request resources only in increasing order of that numbering. For example, if process P1 has been allocated resource R5, a subsequent request by P1 for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources numbered higher than R5 will be granted.

Deadlock Avoidance Deadlock avoidance can be done with the Banker's Algorithm. Banker's Algorithm The Banker's Algorithm is a resource allocation and deadlock avoidance algorithm that tests every resource request made by processes. It checks for a safe state: if the system remains in a safe state after granting a request, the request is allowed; if granting it would leave no safe state, the request is denied.

Inputs to the Banker's Algorithm:
The maximum need of resources by each process.
The resources currently allocated to each process.
The resources currently available in the system.

A request is granted only under the following conditions:
The request made by the process is less than or equal to the maximum need of that process.
The request made by the process is less than or equal to the resources freely available in the system.

The Banker's Algorithm is the combination of a safety algorithm and a resource request algorithm, used to control the processes and avoid deadlock in a system:
Safety algorithm
Resource request algorithm

Safety Algorithm The safety algorithm checks whether or not the system is in a safe state, i.e., whether a safe sequence exists:
Step 1: Let Work and Finish be vectors of length m and n respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, 2, ..., n - 1.
Step 2: Find an i such that both:
Need[i] <= Work
Finish[i] == false
If no such i exists, go to Step 4.
Step 3: Work = Work + Allocation[i] // the process can finish and release its resources
Finish[i] = true
Go to Step 2 to check the next process.
Step 4: If Finish[i] == true for all i, the system is in a safe state.

Resource Request Algorithm The resource request algorithm determines how the system behaves when process P[i] makes a resource request Request(i):
Step 1: If Request(i) <= Need(i), go to Step 2; otherwise raise an error, since process P[i] has exceeded its maximum claim for the resource.
Step 2: If Request(i) <= Available, go to Step 3; otherwise process P[i] must wait, since the resources are not available.
Step 3: Pretend the requested resources are allocated to P[i] by changing the state as shown below, then run the safety algorithm; the request is granted only if the resulting state is safe.

When the requested resources are allocated to the process, the state changes as:
Available = Available - Request(i)
Allocation(i) = Allocation(i) + Request(i)
Need(i) = Need(i) - Request(i)

Consider a system that contains five processes P1, P2, P3, P4, P5 and three resource types A, B and C: A has 10 instances, B has 5 and C has 7. The snapshot of the system (the Allocation and Max values used in the Need computation below, with Available = 3, 3, 2) is:

Process  Allocation (A B C)  Max (A B C)
P1       0 1 0               7 5 3
P2       2 0 0               3 2 2
P3       3 0 2               9 0 2
P4       2 1 1               2 2 2
P5       0 0 2               4 3 3

Answer the following questions using the Banker's Algorithm:
1. What is the content of the need matrix?
2. Determine whether the system is safe or not.
3. If a resource request of (1, 0, 2) arrives for process P1, can the system accept this request immediately?

Ans. 1: The content of the need matrix is as follows:
Need[i] = Max[i] - Allocation[i]
Need for P1: (7, 5, 3) - (0, 1, 0) = (7, 4, 3)
Need for P2: (3, 2, 2) - (2, 0, 0) = (1, 2, 2)
Need for P3: (9, 0, 2) - (3, 0, 2) = (6, 0, 0)
Need for P4: (2, 2, 2) - (2, 1, 1) = (0, 1, 1)
Need for P5: (4, 3, 3) - (0, 0, 2) = (4, 3, 1)

Ans. 2: Apply the Banker's Algorithm. The available resources of A, B and C are 3, 3 and 2. We check whether each process's need can be met from the available resources.
Step 1: For process P1: Need <= Available, i.e., (7, 4, 3) <= (3, 3, 2), is false. So we examine the next process, P2.
Step 2: For process P2: Need <= Available, i.e., (1, 2, 2) <= (3, 3, 2), is true.
New Available = Available + Allocation = (3, 3, 2) + (2, 0, 0) = (5, 3, 2)

Step 3: For process P3: Need <= Available, i.e., (6, 0, 0) <= (5, 3, 2), is false. So we examine the next process, P4.
Step 4: For process P4: Need <= Available, i.e., (0, 1, 1) <= (5, 3, 2), is true.
New Available = Available + Allocation = (5, 3, 2) + (2, 1, 1) = (7, 4, 3)
Next we examine process P5.

Step 5: For process P5: Need <= Available, i.e., (4, 3, 1) <= (7, 4, 3), is true.
New Available = Available + Allocation = (7, 4, 3) + (0, 0, 2) = (7, 4, 5)
Now we re-examine the remaining processes, P1 and P3.
Step 6: For process P1: Need <= Available, i.e., (7, 4, 3) <= (7, 4, 5), is true.
New Available = Available + Allocation = (7, 4, 5) + (0, 1, 0) = (7, 5, 5)
So we examine the remaining process, P3.

Step 7: For process P3: Need <= Available, i.e., (6, 0, 0) <= (7, 5, 5), is true.
New Available = Available + Allocation = (7, 5, 5) + (3, 0, 2) = (10, 5, 7)
All processes can finish, so the system is in a safe state, with safe sequence P2, P4, P5, P1, P3.
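The safety check carried out step by step above can be expressed directly in code. A Python sketch run on this example's snapshot (the Allocation and Max rows are the values used in the Need computation; repeated forward passes mirror the order in which the steps above examine P1 through P5):

```python
# Banker's safety algorithm on the example snapshot (processes P1..P5).
allocation = [(0, 1, 0), (2, 0, 0), (3, 0, 2), (2, 1, 1), (0, 0, 2)]
maximum    = [(7, 5, 3), (3, 2, 2), (9, 0, 2), (2, 2, 2), (4, 3, 3)]
available  = [3, 3, 2]

# Need[i] = Max[i] - Allocation[i]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

work, finish, sequence = list(available), [False] * 5, []
progress = True
while progress and len(sequence) < 5:
    progress = False
    for i in range(5):
        if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
            # Process i can finish; it releases its allocated resources.
            work = [w + a for w, a in zip(work, allocation[i])]
            finish[i] = True
            sequence.append(f"P{i+1}")
            progress = True
    # if a full pass finds no runnable process, the state is unsafe
```

Running this reproduces the hand computation: the safe sequence is P2, P4, P5, P1, P3 and the final Work vector is (10, 5, 7), the total instances of A, B and C.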

Ans. 3: To grant the request (1, 0, 2) for P1, we first check that Request <= Need, i.e., (1, 0, 2) <= (7, 4, 3), which is true, and that Request <= Available, i.e., (1, 0, 2) <= (3, 3, 2), which is also true. The request therefore passes the preliminary checks; the Banker's Algorithm then pretends to allocate the resources (Available becomes (2, 3, 0)) and re-runs the safety algorithm on the resulting state before the request is finally granted.