UNIT 2-UNDERSTANDING THE SYNCHRONIZATION PROCESS.pptx
LeahRachael
Mar 04, 2024
About This Presentation
Operating System: Synchronization process
Size: 1.18 MB
Language: en
Added: Mar 04, 2024
Slides: 33 pages
Slide Content
Operating System UNIT 2: UNDERSTANDING THE SYNCHRONIZATION PROCESS
Race Conditions SSCR- Canlubang Campus 6-Process Synchronization Race conditions usually occur if two or more processes are allowed to modify the same shared variable at the same time. To prevent race conditions, the operating system must perform process synchronization to guarantee that only one process is updating a shared variable at any one time.
Critical Section A critical section is the part of a process that contains the instruction or instructions that access a shared variable or resource.
Critical Section The solution to this problem must satisfy three requirements. Mutual exclusion: only one process may execute in its critical section at a time. If a process is currently executing its critical section, any other process that attempts to enter its own critical section must wait until the first process leaves its critical section. Progress: if a process wants to enter its critical section and no other process is in its critical section, the process must eventually be able to execute in its critical section. Bounded waiting: no process may wait for an indefinite amount of time before it can enter its critical section.
Critical Section Software solutions to the critical section problem: The first solution uses a global or shared variable called TurnToEnter. If TurnToEnter = 0, P0 can execute its critical section; P0 changes its value to 1 to indicate that it has finished executing its critical section. If TurnToEnter = 1, P1 can execute its critical section; P1 changes its value to 0 to indicate that it has finished executing its critical section. The figure below illustrates this.
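As a hedged illustration of this first, strict-alternation solution (the slides give no actual code; the names turn_to_enter, shared_counter, and the use of pthreads with C11 atomics are my assumptions), the two processes can be modeled as two threads that take turns incrementing a shared variable:

```c
/* Strict alternation between two "processes" P0 and P1, modeled as threads.
 * Sketch only: identifiers and iteration counts are illustrative. */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define ITERATIONS 100000

static _Atomic int turn_to_enter = 0; /* 0 => P0 may enter, 1 => P1 may enter */
static int shared_counter = 0;        /* the shared variable being protected */

static void *alternation_worker(void *arg) {
    int id = *(int *)arg; /* 0 or 1 */
    for (int i = 0; i < ITERATIONS; i++) {
        while (atomic_load(&turn_to_enter) != id)
            ; /* busy-wait until it is this process's turn */
        shared_counter++;                     /* critical section */
        atomic_store(&turn_to_enter, 1 - id); /* hand the turn to the other process */
    }
    return NULL;
}

int run_strict_alternation(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, alternation_worker, &id0);
    pthread_create(&t1, NULL, alternation_worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_counter; /* with mutual exclusion: 2 * ITERATIONS */
}
```

Because the processes must alternate strictly, this solution enforces mutual exclusion but violates the progress requirement: a process cannot enter twice in a row even if the other process has no interest in its critical section.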
Critical Section The second solution uses a two-element Boolean array called WantToEnter. If a process wants to enter its critical section, it sets its element to true: WantToEnter[0] = true means P0 wants to enter its critical section, and WantToEnter[1] = true means P1 wants to enter its critical section. Before a process enters its critical section, it first checks the WantToEnter element of the other process; if that element is also true, it waits until it becomes false before proceeding into its critical section. A process sets its own WantToEnter element back to false once it exits its critical section.
Critical Section The third solution, known as Peterson's Algorithm, uses both the TurnToEnter and WantToEnter variables. Suppose P0 wants to enter its critical section: it sets WantToEnter[0] = true and TurnToEnter = 1. P0 then checks whether P1 also wants to enter its critical section (WantToEnter[1] == true) and whether it is P1's turn to enter (TurnToEnter == 1). P0 proceeds into its critical section only if at least one of these conditions does not hold.
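Peterson's Algorithm can be sketched in C as follows. This is an illustration under stated assumptions, not code from the slides: the variables are declared _Atomic with sequentially consistent operations, because on modern hardware Peterson's Algorithm is only correct if the flag and turn accesses are not reordered.

```c
/* Peterson's Algorithm for two threads, using C11 seq_cst atomics.
 * Variable names mirror the slides' TurnToEnter / WantToEnter. */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define ITERATIONS 100000

static _Atomic int want_to_enter[2]; /* WantToEnter[0], WantToEnter[1] */
static _Atomic int turn_to_enter;    /* TurnToEnter */
static int shared_counter = 0;

static void *peterson_worker(void *arg) {
    int id = *(int *)arg, other = 1 - id;
    for (int i = 0; i < ITERATIONS; i++) {
        atomic_store(&want_to_enter[id], 1);  /* declare intent to enter */
        atomic_store(&turn_to_enter, other);  /* politely yield the turn */
        while (atomic_load(&want_to_enter[other]) &&
               atomic_load(&turn_to_enter) == other)
            ; /* wait while the other wants in AND it is the other's turn */
        shared_counter++;                     /* critical section */
        atomic_store(&want_to_enter[id], 0);  /* exit: withdraw intent */
    }
    return NULL;
}

int run_peterson(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, peterson_worker, &id0);
    pthread_create(&t1, NULL, peterson_worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_counter; /* expected: 2 * ITERATIONS */
}
```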
Critical Section The solution to the critical section problem involving several processes is often called the Bakery Algorithm. It follows the system used by bakeries in attending to their customers: each customer that enters the bakery gets a number, and the numbers increase by one as customers enter. The customer holding the lowest number is served first. Once finished, the customer discards the assigned number and must get a new number if he or she wants to be served again.
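The bakery analogy above can be sketched for n threads as follows. This is a hedged, simplified rendering (the slides show no code): ties between equal ticket numbers are broken by thread id, and seq_cst atomics stand in for the memory fences a real implementation needs.

```c
/* Lamport's Bakery Algorithm sketch for THREADS threads (C11 atomics). */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define THREADS 3
#define LOOPS 10000

static _Atomic int choosing[THREADS]; /* thread j is picking a number */
static _Atomic int number[THREADS];   /* 0 = not interested; else ticket */
static int shared_counter = 0;

static void bakery_lock(int i) {
    atomic_store(&choosing[i], 1);
    int max = 0; /* take a ticket one higher than any current ticket */
    for (int j = 0; j < THREADS; j++) {
        int n = atomic_load(&number[j]);
        if (n > max) max = n;
    }
    atomic_store(&number[i], max + 1);
    atomic_store(&choosing[i], 0);
    for (int j = 0; j < THREADS; j++) {
        if (j == i) continue;
        while (atomic_load(&choosing[j]))
            ; /* wait until j has finished picking its number */
        for (;;) { /* wait while j holds a smaller (ticket, id) pair */
            int nj = atomic_load(&number[j]);
            int ni = atomic_load(&number[i]);
            if (nj == 0 || nj > ni || (nj == ni && j > i)) break;
        }
    }
}

static void bakery_unlock(int i) { atomic_store(&number[i], 0); }

static void *bakery_worker(void *arg) {
    int id = *(int *)arg;
    for (int k = 0; k < LOOPS; k++) {
        bakery_lock(id);
        shared_counter++; /* critical section */
        bakery_unlock(id);
    }
    return NULL;
}

int run_bakery(void) {
    pthread_t t[THREADS];
    static int ids[THREADS];
    for (int i = 0; i < THREADS; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, bakery_worker, &ids[i]);
    }
    for (int i = 0; i < THREADS; i++) pthread_join(t[i], NULL);
    return shared_counter; /* expected: THREADS * LOOPS */
}
```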
Critical Section Hardware solutions to the critical section problem: Disabling interrupts. A process disables interrupts before it starts modifying a shared variable. While interrupts are disabled, the CPU will not be switched from one process to another, so the operating system simply allows the process to finish executing its critical section even if its time quantum has expired. Upon exiting the critical section, the process re-enables interrupts. (Note that this works only on single-processor systems: disabling interrupts on one CPU does not stop processes running on other CPUs.) Special hardware instructions. Some machine instructions allow a process to modify a variable or memory location atomically; if a process executes an atomic operation, no other process can preempt it partway through.
Critical Section An example of such a special instruction is the test_and_set instruction, illustrated in the figure below.
Critical Section This algorithm uses a shared Boolean variable called lock. The test_and_set instruction is used to test whether lock is true or false: lock is true while a process is inside its critical section; otherwise, lock is false.
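In C11, the test_and_set idea is available directly as atomic_flag_test_and_set, which atomically sets the flag and returns its previous value. A minimal spinlock sketch in that spirit (thread counts and names are illustrative, not from the slides):

```c
/* A test_and_set spinlock using C11's atomic_flag. */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define ITERATIONS 100000

static atomic_flag lock = ATOMIC_FLAG_INIT; /* the shared Boolean "lock" */
static int shared_counter = 0;

static void *tas_worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        /* test_and_set returns the OLD value: true means someone else
         * already holds the lock, so keep spinning. */
        while (atomic_flag_test_and_set(&lock))
            ;
        shared_counter++;          /* critical section */
        atomic_flag_clear(&lock);  /* set lock back to false on exit */
    }
    return NULL;
}

int run_test_and_set(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, tas_worker, NULL);
    pthread_create(&t1, NULL, tas_worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_counter; /* expected: 2 * ITERATIONS */
}
```

Note that this lock busy-waits: a waiting process burns CPU time spinning, which is the drawback semaphores (below) are designed to avoid.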
what is process Synchronization Process Synchronization is the coordination of execution of multiple processes in a multi-process system to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve the problem of race conditions and other synchronization issues in a concurrent system.
Objective of process synchronization To ensure that multiple processes access shared resources without interfering with each other and to prevent the possibility of inconsistent data due to concurrent access. To achieve this, various synchronization techniques such as critical sections, semaphores, and monitors are used.
Process Synchronization in multi-process system To ensure data consistency and integrity, and to avoid the risk of deadlocks and other synchronization problems. Ensuring the correct and efficient functioning of multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types: Independent process: the execution of one process does not affect the execution of other processes. Cooperative process: a process that can affect or be affected by other processes executing in the system. The process synchronization problem arises with cooperative processes because they share resources.
Semaphores A semaphore is a tool that can easily be used to solve more complex synchronization problems and, when implemented with blocking queues, does not require busy waiting (the simple definition below does busy-wait). Two operations are defined on a semaphore S:
1. wait(S) waits for the value of the semaphore S to become greater than 0, then decrements it by 1:
   wait(S): while S <= 0 { }; S--;
2. signal(S) increments the value of semaphore S by 1:
   signal(S): S++;
The wait operation is atomic: once semaphore S is found to be greater than 0, the test and the decrement cannot be interrupted. The signal operation is likewise performed atomically.
Semaphore provides mutual exclusion:
Semaphore mutex; // initialized to 1
do {
    wait(mutex);
    // Critical Section
    signal(mutex);
    // remainder section
} while (TRUE);
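The pseudocode above maps directly onto POSIX semaphores, where sem_wait and sem_post play the roles of wait and signal. A hedged sketch (assuming a POSIX system; the thread counts and names are mine):

```c
/* Mutual exclusion with a POSIX semaphore initialized to 1. */
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

#define ITERATIONS 100000

static sem_t mutex;            /* counting semaphore used as a mutex */
static int shared_counter = 0;

static void *sem_worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        sem_wait(&mutex);   /* wait(mutex): blocks until value > 0, then decrements */
        shared_counter++;   /* critical section */
        sem_post(&mutex);   /* signal(mutex): increments, waking a waiter if any */
    }
    return NULL;
}

int run_semaphore_demo(void) {
    sem_init(&mutex, 0, 1); /* shared between threads, initial value 1 */
    pthread_t t0, t1;
    pthread_create(&t0, NULL, sem_worker, NULL);
    pthread_create(&t1, NULL, sem_worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    sem_destroy(&mutex);
    return shared_counter; /* expected: 2 * ITERATIONS */
}
```

Unlike the spinning definition, sem_wait puts the caller to sleep instead of busy-waiting, which is the practical advantage semaphores offer.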
Monitor A monitor is one of the ways to achieve process synchronization. Monitors are supported by programming languages to achieve mutual exclusion between processes; for example, Java's synchronized methods, together with its wait() and notify() constructs. A monitor is a collection of condition variables and procedures combined together in a special kind of module or package. Processes running outside the monitor cannot access its internal variables, but they can call the monitor's procedures, and only one process at a time can execute code inside the monitor. The syntax of a monitor is shown in the figure below.
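In C there is no monitor keyword, but the same shape can be built by hand: a struct whose private state is only touched by procedures that lock a mutex on entry, plus a condition variable for waiting. This sketch is an analogy to the monitor concept, not code from the slides; all names are illustrative.

```c
/* A hand-rolled "monitor": a one-slot mailbox whose state is only
 * accessible through procedures that hold the monitor lock. */
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

typedef struct {
    pthread_mutex_t lock;      /* enforces one process inside at a time */
    pthread_cond_t  not_empty; /* condition variable: a value is present */
    int has_value;             /* "internal variables" of the monitor */
    int value;
} monitor_t;

void monitor_init(monitor_t *m) {
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->not_empty, NULL);
    m->has_value = 0;
}

void monitor_put(monitor_t *m, int v) {  /* a monitor procedure */
    pthread_mutex_lock(&m->lock);        /* enter the monitor */
    m->value = v;
    m->has_value = 1;
    pthread_cond_signal(&m->not_empty);  /* like Java's notify() */
    pthread_mutex_unlock(&m->lock);      /* leave the monitor */
}

int monitor_take(monitor_t *m) {         /* another monitor procedure */
    pthread_mutex_lock(&m->lock);
    while (!m->has_value)                /* like Java's wait() in a loop */
        pthread_cond_wait(&m->not_empty, &m->lock);
    int v = m->value;
    m->has_value = 0;
    pthread_mutex_unlock(&m->lock);
    return v;
}

int monitor_demo(void) {
    monitor_t m;
    monitor_init(&m);
    monitor_put(&m, 42);
    return monitor_take(&m); /* returns the value put in: 42 */
}
```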
Classic Synchronization Problems The Dining Philosophers Problem: five philosophers sit around a table, alternating between thinking and eating, with a single fork placed between each pair of neighbors. Restrictions: A philosopher cannot start eating unless he has both forks. A philosopher cannot pick up both forks at the same time; he has to pick them up one at a time. He cannot take a fork that is being used by the philosopher to his right or to his left.
Classic Synchronization Problems A possible solution is to use a semaphore to represent each fork. The wait operation is used when picking up a fork and the signal operation when putting it down. The mutual exclusion requirement is satisfied, since each fork is represented by a semaphore that guarantees only one philosopher can use a particular fork at a time. Note, however, that if every philosopher picks up his left fork at the same moment, each will wait forever for his right fork, so this simple scheme can deadlock.
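One standard way around that deadlock, sketched below, is to make every philosopher pick up the lower-numbered fork first, breaking the circular wait. This ordering trick is a common textbook remedy rather than something the slides prescribe, and pthread mutexes stand in for the per-fork semaphores.

```c
/* Dining philosophers with one lock per fork; deadlock is avoided by
 * always acquiring the lower-numbered fork first (resource ordering). */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define PHILOSOPHERS 5
#define MEALS 2000

static pthread_mutex_t fork_lock[PHILOSOPHERS]; /* one "fork" per mutex */
static _Atomic int meals_eaten = 0;

static void *philosopher(void *arg) {
    int id = *(int *)arg;
    int left = id, right = (id + 1) % PHILOSOPHERS;
    int first  = left < right ? left : right;  /* lower-numbered fork first */
    int second = left < right ? right : left;
    for (int i = 0; i < MEALS; i++) {
        pthread_mutex_lock(&fork_lock[first]);  /* wait(fork[first])  */
        pthread_mutex_lock(&fork_lock[second]); /* wait(fork[second]) */
        atomic_fetch_add(&meals_eaten, 1);      /* eating */
        pthread_mutex_unlock(&fork_lock[second]); /* signal(...) */
        pthread_mutex_unlock(&fork_lock[first]);
    }
    return NULL;
}

int run_dining_philosophers(void) {
    pthread_t t[PHILOSOPHERS];
    static int ids[PHILOSOPHERS];
    for (int i = 0; i < PHILOSOPHERS; i++) {
        ids[i] = i;
        pthread_mutex_init(&fork_lock[i], NULL);
        pthread_create(&t[i], NULL, philosopher, &ids[i]);
    }
    for (int i = 0; i < PHILOSOPHERS; i++)
        pthread_join(t[i], NULL);
    return atomic_load(&meals_eaten); /* expected: PHILOSOPHERS * MEALS */
}
```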
Classic Synchronization Problems In the Readers-Writers Problem, a database is shared among several processes: reader processes only examine the data, while writer processes update it, so any number of readers may access the database at the same time but a writer needs exclusive access. The solution in which no reader is kept waiting unless a writer has already obtained access is often called the readers-preference solution. If a steady stream of incoming reader processes wants to access the database, this may cause the starvation of writer processes: writers may have to wait for an indefinite period of time before they can access the database.
Classic Synchronization Problems A solution that favors the writer processes is called the writers-preference solution. It still does not grant a writer access to the database if there are readers or writers already in the database. However, a reader is denied access if a writer is currently accessing the database or if there is a waiting writer process (even if other reader processes are already in the database). If a steady stream of incoming writer processes wants to access the database, this may cause the starvation of reader processes.
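The classic readers-preference scheme keeps a reader count guarded by a mutex: the first reader locks the database against writers and the last reader releases it. A hedged sketch (the semaphore stands in for the database lock because its release may come from a different thread than its acquirer; all names and counts are illustrative):

```c
/* Readers-preference readers-writers: first reader locks out writers,
 * last reader lets them back in; writers take the database exclusively. */
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

#define READERS 2
#define WRITERS 2
#define OPS 1000

static pthread_mutex_t rc_lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t db_sem;             /* exclusive access to the "database" */
static int read_count = 0;
static int database_value = 0;   /* the shared database */

static void *reader(void *arg) {
    (void)arg;
    for (int i = 0; i < OPS; i++) {
        pthread_mutex_lock(&rc_lock);
        if (++read_count == 1) sem_wait(&db_sem); /* first reader blocks writers */
        pthread_mutex_unlock(&rc_lock);
        volatile int v = database_value; (void)v; /* "reading" */
        pthread_mutex_lock(&rc_lock);
        if (--read_count == 0) sem_post(&db_sem); /* last reader admits writers */
        pthread_mutex_unlock(&rc_lock);
    }
    return NULL;
}

static void *writer(void *arg) {
    (void)arg;
    for (int i = 0; i < OPS; i++) {
        sem_wait(&db_sem);
        database_value++;        /* "writing": exclusive access */
        sem_post(&db_sem);
    }
    return NULL;
}

int run_readers_writers(void) {
    sem_init(&db_sem, 0, 1);
    pthread_t r[READERS], w[WRITERS];
    for (int i = 0; i < READERS; i++) pthread_create(&r[i], NULL, reader, NULL);
    for (int i = 0; i < WRITERS; i++) pthread_create(&w[i], NULL, writer, NULL);
    for (int i = 0; i < READERS; i++) pthread_join(r[i], NULL);
    for (int i = 0; i < WRITERS; i++) pthread_join(w[i], NULL);
    sem_destroy(&db_sem);
    return database_value; /* expected: WRITERS * OPS */
}
```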
Advantages of Process Synchronization Ensures data consistency and integrity Avoids race conditions Prevents inconsistent data due to concurrent access Supports efficient and effective use of shared resources
Disadvantages of Process Synchronization Adds overhead to the system This can lead to performance degradation Increases the complexity of the system Can cause deadlocks if not implemented properly.
What is Deadlock? A deadlock is a situation that occurs when a process enters a waiting state because a requested resource is being held by another waiting process, which in turn is waiting for a resource held by yet another waiting process. If a process is unable to change its state indefinitely because the resources it has requested are being used by other waiting processes, the system is said to be in a deadlock.
A practical example of Deadlock: you can't get a job without experience, and you can't get experience without a job. To visualize deadlock, imagine two processes competing for two resources in opposite order. At first a single process goes through while the later process waits. A deadlock occurs when the first process locks the first resource at the same time as the second process locks the second resource. The deadlock can be resolved by cancelling and restarting the first process.
Necessary Conditions There are four conditions that are necessary to achieve deadlock:
Mutual Exclusion - At least one resource must be held in a non-sharable mode; if any other process requests this resource, that process must wait for the resource to be released.
Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process.
No Preemption - Once a process is holding a resource (i.e., once its request has been granted), that resource cannot be taken away from the process until it voluntarily releases it.
Circular Wait - A set of processes { P0, P1, P2, ..., PN } must exist such that every P[i] is waiting for P[(i + 1) % (N + 1)]. (Note that this condition implies the hold-and-wait condition, but it is easier to deal with the conditions if the four are considered separately.)
METHODS FOR HANDLING DEADLOCK:- Generally speaking, there are three ways of handling deadlocks:
Deadlock prevention or avoidance - Do not allow the system to get into a deadlocked state.
Deadlock detection and recovery - Abort a process or preempt some resources when deadlocks are detected.
Ignore the problem altogether - If deadlocks only occur once a year or so, it may be better to simply let them happen and reboot as necessary than to incur the constant overhead and system performance penalties associated with deadlock prevention or detection. This is the approach that both Windows and UNIX take.
Deadlock prevention:- One way to handle deadlocks is to ensure that at least one of the four necessary conditions for deadlock is prevented by design; this is deadlock prevention. The deadlock prevention approach is to design a system in such a way that the possibility of deadlock is excluded. Deadlocks can be prevented by preventing at least one of the four required conditions: Mutual Exclusion, Hold and Wait, No Preemption, or Circular Wait.
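A common way to prevent the circular-wait condition in practice is to impose a global ordering on locks and always acquire them in that order. The sketch below is an illustration of this technique (the lock names and iteration counts are mine, not from the slides): because both threads take lock_a before lock_b, the opposite-order interleaving that causes deadlock can never arise.

```c
/* Circular-wait prevention by global lock ordering: every thread
 * acquires lock_a before lock_b, so no cycle of waiting can form. */
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

#define ITERATIONS 50000

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static int shared_total = 0;

static void *ordered_worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&lock_a); /* always first in the global order */
        pthread_mutex_lock(&lock_b); /* always second */
        shared_total++;              /* work needing both resources */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
    }
    return NULL;
}

int run_lock_ordering(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, ordered_worker, NULL);
    pthread_create(&t1, NULL, ordered_worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return shared_total; /* expected: 2 * ITERATIONS */
}
```

If one thread instead took lock_b first, the two could each hold one lock while waiting for the other, exactly the deadlock scenario visualized earlier.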
Deadlock avoidance:- The basic idea of deadlock avoidance is to grant only those requests for available resources that cannot possibly result in deadlock. A decision is made dynamically as to whether granting the current resource request could lead to deadlock. If it cannot, the resource is granted to the requesting process; otherwise, the requesting process is suspended until the time when its pending request can be safely granted. The two approaches followed for deadlock avoidance are:
Do not start a process if its demands might lead to deadlock.
Do not grant an incremental resource request to a process if this allocation might result in deadlock.
Deadlock detection and recovery:- In this approach, the available resources are granted freely and the system is checked occasionally for deadlocks. Detection means discovering a deadlock; if a deadlock exists, the system must break it or recover from it. That is, resources are granted freely, but the system state is occasionally examined for deadlock and remedial action is taken when required, which is why this is called deadlock detection and recovery. The approach involves two steps: first, the deadlocked processes are identified; next, the deadlock is broken or recovered from. The various strategies for recovery from deadlock are:
Abort all deadlocked processes.
Back up each deadlocked process to some previously defined checkpoint and restart it.
Successively abort deadlocked processes until the deadlock no longer exists.
Successively preempt resources until the deadlock no longer exists.