Resource Preemption To recover from deadlock using resource preemption, we preempt some resources from processes and give those resources to other processes until the deadlock cycle is broken. This method raises three issues. Selecting a victim: we must determine which resources and which processes are to be preempted. Rollback: one simple approach is total rollback, that is, aborting the process and restarting it.
Starvation: in a system, it may happen that the same process is always picked as a victim. This situation is called starvation and must be avoided. Starvation is a resource-management problem in which a process never obtains the resources it needs because they are always held by, or granted to, other processes. One solution is to ensure that a process can be picked as a victim only a finite number of times (for example, by including the number of rollbacks in the cost factor).
Advantages of Resource Preemption It can help break a deadlock without terminating any process. It is more efficient than process termination because it targets only the resources causing the deadlock. It can avoid the need to restart the system.
Disadvantages of Resource Preemption It may lead to increased overhead due to the need for determining which resources and processes should be preempted. It may cause further problems if the preempted resources were critical to the system’s operation. It may cause delays in the completion of processes if resources are frequently preempted.
Process Management And Synchronization: The main objective of process synchronization is to ensure that multiple processes access shared resources without interfering with each other. To achieve this, various synchronization techniques such as semaphores, monitors, and critical sections are used.
Critical Section Problem Consider a system consisting of n processes {P0, P1, …, Pn - 1}. Each process has a segment of code, called its critical section, in which it accesses shared resources such as common variables and files and may perform write operations on them (updating a table, writing a file, and so on). When one process is executing in its critical section, no other process is allowed to execute in its critical section.
Requirements: Mutual exclusion - only one process can execute in its critical section at any time. Progress - if no process is executing in its critical section and some processes wish to enter theirs, the selection of the next process to enter cannot be postponed indefinitely. Bounded waiting - there is a bound on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Synchronization With Hardware In solving the critical-section problem in a uniprocessor environment, we can simply disable interrupts while a shared variable is being modified. In a multiprocessor environment, however, disabling interrupts on every processor is time consuming and decreases system efficiency, so atomic hardware instructions such as TestAndSet and Swap are used instead.
Process synchronization problems occur when two processes running concurrently share the same data or the same variable. The value of that variable may not be updated correctly before it is used by the second process. Such a condition is known as a race condition (sometimes called the "race around condition").
Test and Set: The shared variable is lock which is initialized to false . The first process will enter the critical section at once as TestAndSet (lock) will return false . The other processes cannot enter now as lock is set to true and so the while loop continues to be true. Mutual exclusion is ensured. Once the first process gets out of the critical section, lock is changed to false . So, now the other processes can enter one by one. There is no queue maintained, so any new process that finds the lock to be false again can enter. So bounded waiting is not ensured.
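The behaviour described above can be sketched in Python. This is an illustrative simulation, not the hardware instruction itself: an ordinary threading.Lock stands in for the atomicity that a real TestAndSet instruction provides, and the names test_and_set, lock_flag, and worker are chosen for this sketch.

```python
import threading

lock_flag = False                  # the shared Boolean variable "lock"
atomic = threading.Lock()          # stands in for the hardware's atomicity
counter = 0

def test_and_set():
    """Return the old value of lock_flag and set it to True, atomically."""
    global lock_flag
    with atomic:                   # real hardware does this in one instruction
        old = lock_flag
        lock_flag = True
        return old

def worker():
    global counter, lock_flag
    for _ in range(10000):
        while test_and_set():      # spin while the old value was True
            pass
        counter += 1               # critical section
        lock_flag = False          # leave: reset lock so others may enter

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                     # 40000: every increment was protected
```

With four workers each performing 10000 protected increments, the final count is exactly 40000, showing that mutual exclusion held; note, as the text says, that nothing about this spin loop guarantees bounded waiting.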
lock: a shared Boolean variable initialized to false. It serves as the mutual exclusion lock. key: a local Boolean variable, one per process, initialized to false; it indicates whether the process has obtained the lock. waiting[i]: an array of Boolean variables, one for each process, initialized to false; waiting[i] indicates whether process i is waiting to enter the critical section.
1. Each process sets its waiting flag waiting[i] to true, indicating that it wants to enter the critical section. 2. It then sets its own key to true, indicating that it has not yet obtained the lock. 3. The process enters a while loop that continues as long as both conditions hold: waiting[i] (the process is still waiting to enter the critical section) and key (it has not yet obtained the lock).
4. Within the while loop, the process attempts to obtain the lock using the TestAndSet operation. 5. If the process successfully acquires the lock, it sets its waiting flag waiting[i] to false and proceeds to the critical section, where it executes its critical code. 6. After leaving the critical section, the process chooses which process may enter next, as follows.
7. The process now looks for the next process (j) to give the opportunity to enter the critical section. It does this by scanning through the other processes in a circular manner. 8. If it finds that no other process is waiting to enter the critical section ( waiting[j] is false for all j), it releases the lock ( lock = false ) so that another process can enter the critical section. 9. If it finds that another process (j) is waiting to enter the critical section ( waiting[j] is true ), it sets the waiting[j] flag to false , giving the opportunity to process j to enter the critical section.
10. The process then enters the remainder section, where it performs non-critical operations, before going back to the beginning of the loop. Throughout, the TestAndSet operation is what updates the lock variable atomically and so avoids race conditions.
Swap operates on two Boolean variables: one global shared variable, lock, and one local variable per process, key. Both are initially set to false. When no process is in the critical section, lock is false, allowing a process to enter.
Instead of directly setting lock to true, each process sets key to true and then swaps it with lock. The first process executes while(key): since key = true, the swap takes place, making lock = true and key = false. On the next iteration of while(key), key is false, so the first process enters the critical section. When another process now tries to enter, it again sets key = true, so the while(key) loop runs and the swap takes place, leaving lock = true and key = true.
On every subsequent iteration while(key) remains true, so the second process keeps spinning and cannot enter the critical section. Mutual exclusion is therefore ensured. When the first process leaves, lock is changed to false, and whichever process next performs the swap and finds it false enters the critical section; since no queue is maintained, bounded waiting is not ensured.
// Shared Boolean variable lock initialized to false;
// each process has a local Boolean variable key
boolean lock = false;

void swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}

// Structure of process Pi
while (true) {
    key = true;
    while (key)
        swap(lock, key);   // busy-wait until lock was false
    // critical section
    lock = false;
    // remainder section
}
3. Unlock and Lock Unlock and lock algorithm uses the TestAndSet method to control the value of lock. Unlock and lock algorithm uses a variable waiting[ i ] for each process i . waiting[ i ] checks if the process i is waiting to enter into the critical section. All the processes are maintained in a ready queue before entering into the critical section. The processes are added to the queue with respect to their process number. The queue is the circular queue .
In the unlock and lock algorithm, the lock is not simply set to false when a process comes out of the critical section. Instead, once the ith process comes out of the critical section, the algorithm checks the waiting queue for the next process waiting to enter, say the jth process. If a jth process is waiting in the ready queue to enter the critical section, waiting[j] is set to false, releasing it into the critical section. If no process is waiting in the ready queue, the algorithm sets lock to false so that any other arriving process can enter the critical section easily.
// Shared variable lock initialized to false;
// each process i has a local Boolean key and a shared waiting[i]
boolean lock = false;
boolean key;
boolean waiting[n];

while (1) {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;

    // critical section

    j = (i + 1) % n;               // scan for the next waiting process
    while (j != i && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;              // no one is waiting: release the lock
    else
        waiting[j] = false;        // hand the lock directly to process j

    // remainder section
}
Semaphore in OS is a synchronization tool used to regulate access to shared resources such as files, memory, or network connections. It is a variable that controls access to the shared resource. Semaphores can be used to ensure that only one process can access the shared resource at a time. The value of the semaphore determines whether a process can access the shared resource or not.
Operations in Semaphores Wait() and signal() are the two basic operations used to manipulate semaphores in an operating system. Wait(): When a process performs a wait() operation on a semaphore, it checks the current value of the semaphore . If the value is positive , the process acquires the semaphore and decrements its value. If the value is zero, the process is blocked and added to a queue of waiting processes until the semaphore’s value becomes positive.
Signal(): When a process or thread performs a signal() operation on a semaphore, it increments the semaphore's value. If there are any processes waiting on the semaphore, one of them is unblocked and allowed to acquire the semaphore.
Types of Semaphores in OS There are two types of semaphores: Binary Semaphores: binary semaphores are also known as mutex locks. A binary semaphore can only have two states: 0 or 1. It is typically used to implement mutual exclusion, which ensures that only one process or thread can access a shared resource at any given time.
Counting Semaphores: counting semaphores can have any non-negative integer value. They are typically used to coordinate access to a pool of resources, and the semaphore is initialized to the number of free resources. The count is then incremented atomically when a resource is released and decremented atomically when a resource is acquired.
If a resource has three instances, the initial value of the semaphore is 3 (i.e., S = 3). Whenever a process needs to access the resource, it calls the wait operation, which decrements the semaphore value by one; thus three processes can access the resource at the same time. When a fourth process needs it, it is blocked and put in a waiting queue, and it wakes up only when one of the executing processes performs the signal operation, which increments the semaphore by 1.
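The S = 3 scenario can be sketched with Python's threading.Semaphore, whose acquire and release correspond to wait() and signal(). This is an illustrative sketch; the counters in_use and peak are added here purely to observe how many workers are inside at once.

```python
import threading, time

resource = threading.Semaphore(3)   # S = 3: three instances available
guard = threading.Lock()
in_use = 0
peak = 0

def worker():
    global in_use, peak
    resource.acquire()              # wait(): blocks once all 3 are taken
    with guard:
        in_use += 1
        peak = max(peak, in_use)    # record how many are inside at once
    time.sleep(0.05)                # hold the resource briefly
    with guard:
        in_use -= 1
    resource.release()              # signal(): wakes one blocked process

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)                         # never exceeds 3
```

Six workers compete for three instances: the fourth and later acquirers block exactly as the text describes, so peak stays at or below 3.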
Advantages of Semaphores in OS Enforce mutual exclusion to prevent race conditions. Synchronize process execution. Help coordinate resource access so that deadlocks can be avoided (though careless use of semaphores can itself cause deadlock). Efficiently manage system resources.
Producer-Consumer Problem The Producer-Consumer problem is a classic synchronization problem in operating systems. The problem is defined as follows: there is a fixed-size buffer and a Producer process , and a Consumer process . The Producer process creates an item and adds it to the shared buffer . The Consumer process takes items out of the shared buffer and “ consumes ” them.
Conditions for Producer and the Consumer processes for data synchronization: The Producer process must not produce an item if the shared buffer is full . The Consumer process must not consume an item if the shared buffer is empty . Access to the shared buffer must be mutually exclusive ; this means that at any given instance, only one process should be able to access the shared buffer and make changes to it.
The solution to the Producer-Consumer problem involves three semaphore variables. semaphore Full: Tracks the space filled by the Producer process. It is initialized with a value of 0 as the buffer will have 0 filled spaces at the beginning. semaphore Empty : Tracks the empty space in the buffer. It is initially set to buffer_size as the buffer is empty at the beginning. semaphore mutex : Used for mutual exclusion so that only one process can access the shared buffer at a time.
Producer Process The Producer process waits on the Empty semaphore. This means the Producer is kept busy-waiting if the Empty semaphore value is 0, indicating that there are no empty spaces available; the Producer must then wait for the Consumer to consume some items from the buffer and make space available. The Producer then waits on the mutex semaphore, which ensures that once a process has entered the critical section of the code, no other process can access it.
The add() function appends the item to the shared buffer. The Producer process then uses the signal() operation on the Full semaphore, increasing its value by 1 to indicate that an item has been added to the shared buffer. Finally, it uses signal() to increase the value of the mutex semaphore by one, so that any other process busy-waiting on the mutex semaphore may access the critical section.
Consumer Process The Consumer waits for the Full semaphore . If the Full semaphore value is 0, it indicates that there are no items to consume, and it must wait for the Producer process to produce an item and add it to the shared buffer for consumption. Mutex semaphore ensures mutually exclusive access to the critical section of the code so that the shared buffer is only accessed by one Process at a time for data synchronization.
The Consumer process then reaches the critical section of the code, i.e., it executes the consume() function and takes one item from the shared buffer. It next uses signal(mutex) to release the mutex semaphore, allowing other processes that were busy-waiting on the mutex to access the critical section. Finally, the Consumer uses signal(Empty) to increase the value of the Empty semaphore by one, indicating that a free slot has been created in the shared buffer; any Producer process that was waiting on the Empty semaphore is now allowed to add an item.
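The three-semaphore solution above can be sketched in Python, with threading.Semaphore standing in for the semaphores and a deque as the shared buffer. The buffer size, item count, and the consumed list are illustrative choices for this sketch.

```python
import threading
from collections import deque

BUF_SIZE = 5
buffer = deque()                        # the shared buffer
mutex = threading.Semaphore(1)          # mutual exclusion on the buffer
empty = threading.Semaphore(BUF_SIZE)   # counts empty slots, starts full
full  = threading.Semaphore(0)          # counts filled slots, starts at 0
consumed = []

def producer():
    for item in range(10):
        empty.acquire()        # wait(Empty): block if no free slot
        mutex.acquire()        # wait(mutex): enter critical section
        buffer.append(item)    # add()
        mutex.release()        # signal(mutex)
        full.release()         # signal(Full): one more filled slot

def consumer():
    for _ in range(10):
        full.acquire()         # wait(Full): block if nothing to consume
        mutex.acquire()
        consumed.append(buffer.popleft())   # consume()
        mutex.release()        # signal(mutex)
        empty.release()        # signal(Empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)                # items arrive in production order
```

Because the buffer is first-in-first-out and there is a single producer and consumer, the consumed list comes out as 0 through 9 in order, and the Empty/Full counts guarantee the buffer never overflows or underflows.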
2) Dining-Philosophers Problem: the dining-philosophers problem was posed by Dijkstra in 1965. Five philosophers are seated around a circular table. The life of each philosopher consists of thinking and eating. For eating there are five plates, one for each philosopher. There is a big serving bowl present in the middle of the table with enough food in it. Only five forks are available in all. Each philosopher needs to use the two forks on either side of the plate to eat food.
Now the problem is that the algorithm must satisfy mutual exclusion (i.e., no two philosophers can use the same fork at the same time) while avoiding deadlock and starvation. Various solutions are available for this problem. 1. Each philosopher first picks up the fork on the left and then the fork on the right; after eating, the two forks are replaced on the table. In this case, if all the philosophers are hungry, all will sit down and pick up the fork on their left, and all will reach out for the other fork, which is not there. In this undignified position, all philosophers will starve: this is the deadlock situation.
2. Buy five additional forks. 3. Eat with only one fork. 4. Allow only four philosophers at the table at a time; with this restriction, at least one philosopher will have two forks. He will eat and then replace the forks, and the replaced forks can then be used by the other philosophers (washing of the forks is implicit).
5. Allow a philosopher to pick up the two forks only if both are available. 6. An asymmetric solution: an odd-numbered philosopher picks up first the left fork and then the right fork, whereas an even-numbered philosopher picks up the right fork first and then the left.
Using semaphore to solve problem In this solution, each fork is represented by a semaphore, and a philosopher must acquire both the semaphore for the fork to their left and the semaphore for the fork to their right before they can begin eating. If a philosopher cannot acquire both semaphores, they must wait until they become available.
1. Initialize the semaphores for each fork to 1 (indicating that they are available). 2. Initialize a binary semaphore (mutex) to 1 to ensure that only one philosopher can attempt to pick up a fork at a time. 3. For each philosopher process, create a separate thread that executes the following code: While true: Think for a random amount of time. Acquire the mutex semaphore to ensure that only one philosopher can attempt to pick up a fork at a time. Attempt to acquire the semaphore for the fork to the left.
If successful, attempt to acquire the semaphore for the fork to the right. If both forks are acquired successfully, eat for a random amount of time and then release both semaphores. If not successful in acquiring both forks, release the semaphore for the fork to the left (if acquired) and then release the mutex semaphore and go back to thinking.
Program dining philosophers
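A runnable sketch of the semaphore solution, combined with the asymmetric ordering of solution 6 so that the deadlock of solution 1 cannot occur. One semaphore per fork plays the role described above; the meals counter is an illustrative addition used to confirm every philosopher eats.

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]   # one semaphore per fork
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # asymmetric ordering (solution 6): odd philosophers take the left
    # fork first, even philosophers the right, breaking the circular wait
    first, second = (left, right) if i % 2 else (right, left)
    for _ in range(3):
        forks[first].acquire()     # wait on the first fork's semaphore
        forks[second].acquire()    # then the second
        meals[i] += 1              # eat
        forks[second].release()    # put both forks back
        forks[first].release()
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)                       # all five philosophers finished eating
```

All threads terminate (no deadlock) and each philosopher records three meals; with the symmetric left-then-right ordering of solution 1, this program could instead hang with every philosopher holding one fork.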
3) Readers and Writers Problem: A data object (i.e., a file or record) is to be shared among several concurrent processes. Some of these processes may want only to read the content of the shared object, whereas others may want to update (i.e., read and write) it. In this context, if two readers access the shared data object simultaneously, there is no problem at all. If a writer and some other process (either a reader or a writer) access the shared object simultaneously, a conflict arises.
For solving this problem various solutions exist: 1. Assign higher priority to the reader processes than to the writer processes. 2. The writers have exclusive access to the shared object. 3. No reader should wait for other readers to finish; here the problem is that writers may starve. 4. If a writer is waiting to access the object, no new readers may start reading; here readers may starve.
General structure of a writer process
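A sketch of the classic first readers-writers solution (readers have priority, matching option 1 above), using two semaphores: wrt for the writers' exclusive access and rc_mutex to protect the reader count. The shared dictionary and log are illustrative stand-ins for the shared data object.

```python
import threading

read_count = 0
rc_mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)        # grants exclusive access to writers
shared = {"value": 0}               # the shared data object
log = []

def reader():
    global read_count
    rc_mutex.acquire()
    read_count += 1
    if read_count == 1:
        wrt.acquire()               # first reader locks out writers
    rc_mutex.release()
    log.append(("read", shared["value"]))   # reading section
    rc_mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()               # last reader lets writers back in
    rc_mutex.release()

def writer():
    wrt.acquire()                   # exclusive access
    shared["value"] += 1            # writing section
    wrt.release()

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])              # all three writes were applied
```

Multiple readers may hold the object at once, yet every write happens alone, so the final value is exactly 3; as noted above, a steady stream of readers could starve the writers under this scheme.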
4) Sleeping Barber Problem: A barber shop has one barber chair for the customer currently being served and a few chairs for waiting customers (if any). The barber manages his time efficiently: when there are no customers, the barber goes to sleep in the barber chair. As soon as a customer arrives, he has to wake up the sleeping barber and ask for a haircut. If more customers arrive while the barber is serving a customer, they either sit down in the waiting chairs or simply leave the shop.
Solution: The solution to this problem uses three semaphores. The first, customers, counts the number of customers present in the waiting room (the customer in the barber chair is not included because he is not waiting). The second, barber (0 or 1), tells whether the barber is idle or working. The third, mutex, provides the mutual exclusion required for the processes to execute. The solution also keeps a count, waiting, of the customers in the waiting room; if the number of waiting customers equals the number of chairs, an arriving customer leaves the barbershop.
The barber executes the procedure barber and blocks on the semaphore customers because it is initially 0; the barber thus goes to sleep until the first customer arrives. When a customer arrives, he executes the customer procedure, acquiring the mutex to enter the critical region. If another customer enters shortly thereafter, the second one will not be able to do anything until the first one has released the mutex.
The customer then checks the chairs in the waiting room: if the number of waiting customers is less than the number of chairs, he stays; otherwise he releases the mutex and leaves. If a chair is available, the customer sits in the waiting room, increments the variable waiting, and signals the customers semaphore, which wakes up the barber if he is sleeping. At this point both the customer and the barber are awake, and the barber is ready to give that person a haircut. When the haircut is over, the customer exits the procedure, and if there are no customers in the waiting room, the barber sleeps.
Code:
#define CHAIRS 4
typedef int semaphore;
semaphore customers = 0;
semaphore barber = 0;
semaphore mutex = 1;
int waiting = 0;

void barber(void)
{
    while (TRUE) {
        wait(customers);        /* sleep if no customer is waiting */
        wait(mutex);            /* protect access to waiting */
        waiting = waiting - 1;
        signal(barber);         /* the barber is now ready to cut hair */
        signal(mutex);
        cut_hair();
    }
}
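The full interaction, including the customer side described above, can be sketched in Python. The served and turned_away counters and the timeout-based shutdown are illustrative additions for this sketch, not part of the classic solution.

```python
import threading

CHAIRS = 4
customers = threading.Semaphore(0)      # waiting customers
barber_ready = threading.Semaphore(0)   # barber invites the next customer
mutex = threading.Semaphore(1)          # protects the waiting count
waiting = 0
served = 0
turned_away = 0
done = threading.Event()

def barber():
    global waiting, served
    while not done.is_set():
        # sleep until a customer arrives (timeout only to allow shutdown)
        if not customers.acquire(timeout=0.5):
            continue
        mutex.acquire()
        waiting -= 1
        barber_ready.release()          # call the customer to the chair
        mutex.release()
        served += 1                     # cut_hair()

def customer():
    global waiting, turned_away
    mutex.acquire()
    if waiting < CHAIRS:
        waiting += 1
        customers.release()             # wake the barber if asleep
        mutex.release()
        barber_ready.acquire()          # wait to be called for the haircut
    else:
        turned_away += 1                # no free chair: leave the shop
        mutex.release()

b = threading.Thread(target=barber)
b.start()
cs = [threading.Thread(target=customer) for _ in range(8)]
for c in cs: c.start()
for c in cs: c.join()
done.set()
b.join()
print(served + turned_away)             # every customer was accounted for
```

Each of the eight customers either sits down and is eventually served or finds all four chairs taken and leaves, so served plus turned_away always totals 8.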
Critical Regions The critical-region construct guards against certain simple errors associated with semaphores. Critical regions do not eliminate all synchronization errors, but they reduce them.
All processes share a semaphore variable mutex, which is initialized to 1; each process must execute wait(mutex) before entering its critical section and signal(mutex) afterward. Difficulties: suppose that a process interchanges the order of the wait and signal operations; then several processes may be executing in their critical sections simultaneously, violating the mutual exclusion requirement.
This error may be discovered only when several processes are simultaneously active in their critical sections. Suppose instead that a process replaces signal(mutex) with wait(mutex); here a deadlock will occur. If a process omits the wait(mutex) or the signal(mutex), or both, then either mutual exclusion is violated or a deadlock occurs. One solution is to use high-level synchronization constructs called the critical region and the monitor.
Monitors A monitor is another high-level synchronization construct: an abstraction over semaphores. The main purpose of the abstraction is to hide the working complexity and technical details of the synchronization from the programmer. Writing a monitor is similar to programming in a high-level programming language, so monitors are easy to program. The compiler of a programming language usually implements them, reducing the scope for programming errors.
A monitor is a group or collection of data items, a set of procedures to operate on the data items and a set of special variables known as ‘condition variables’. The condition variables are similar to semaphores. The operations possible on a condition variable are signal and wait just like a semaphore.
The client process cannot access the data items inside a monitor directly. The monitor guards them closely. The processes, which utilize the services of the monitor, need not know about the internal details of the monitors.
At any given time, only one process can be active inside a monitor. For example, consider the following situation: a monitor M contains a shared variable V, a procedure R that operates on V, and a condition variable C. Two processes P1 and P2 want to execute the procedure R. In this situation the following events take place:
Process P1 is scheduled by the operating system. P1 invokes procedure R to manipulate V. While P1 is inside the monitor, its time slice expires and process P2 is scheduled. P2 attempts to invoke R, but it is blocked because P1 is not yet out of the monitor. As a result, P2 performs a wait operation on C.
P1 is scheduled again; it exits the monitor and performs a signal operation on C. The next time P2 is scheduled, it can enter the monitor, invoke the procedure R, and manipulate V. The monitor concept alone, however, cannot guarantee that the preceding access sequence will be observed.
Problems A process may access a resource without first gaining access permission to the resource. A process might never release a resource once it has been granted access to it. A process might attempt to release a resource that it never requested. A process may request the same resource twice without first releasing it.
The same difficulties are encountered with the use of semaphores. A possible solution is to include the resource-access operations within the resource-allocator monitor. To ensure that the processes observe the appropriate sequences, we must inspect all the programs; this inspection is possible for a small, static system, but it is not reasonable for a large or dynamic system.
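Python has no language-level monitor, but a class whose methods all run under one lock, with a threading.Condition for wait/signal, approximates the construct described above. The BoundedCounter class and its limit are illustrative choices for this sketch.

```python
import threading

class BoundedCounter:
    """Monitor-style class: one lock guards the shared data, and a
    condition variable provides wait/signal inside the monitor."""
    def __init__(self, limit):
        self._lock = threading.Lock()          # one process inside at a time
        self._can_grow = threading.Condition(self._lock)
        self._value = 0
        self._limit = limit

    def increment(self):
        with self._lock:                       # enter the monitor
            while self._value >= self._limit:
                self._can_grow.wait()          # wait on the condition variable
            self._value += 1

    def decrement(self):
        with self._lock:                       # enter the monitor
            self._value -= 1
            self._can_grow.notify()            # signal a waiting process

counter = BoundedCounter(limit=2)
counter.increment()
counter.increment()            # counter is now at its limit

blocked_inc = threading.Thread(target=counter.increment)  # must wait
blocked_inc.start()
dec = threading.Thread(target=counter.decrement)          # frees a slot
dec.start()
dec.join(); blocked_inc.join()
print(counter._value)          # back at the limit after wait/signal
```

The third increment waits on the condition variable until the decrement signals it, mirroring the P1/P2 sequence above: only one thread is ever active inside the object, and waiters are resumed by an explicit signal.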
Interprocess communication Processes executing concurrently in the operating system may be either independent processes or cooperating processes. Independent processes: they cannot affect or be affected by the other processes executing in the system. Cooperating processes: they can affect or be affected by the other processes executing in the system. Note: any process that shares data with other processes is called a cooperating process.
Reasons for providing an environment that allows process cooperation. Information sharing: several users may be interested in the same piece of information, and we need to provide an environment for such scenarios. Computational speed-up: a single task is broken into subtasks that run concurrently; the subtasks are separate processes of the main task and must communicate with each other to achieve the computational speed-up. Modularity: dividing the system into sub-modules, which need to communicate with each other while building up the main system. Convenience: running multiple things concurrently in a single system without delay.
Cooperating processes require an interprocess communication mechanism that allows them to exchange data and information. The two fundamental models of IPC are: Shared memory: a region of memory shared by the cooperating processes is established; the processes exchange information by reading and writing data in the shared region. Message passing: communication takes place by means of messages exchanged between the cooperating processes.
Process Synchronization: When multiple processes access shared resources or execute critical sections, synchronization mechanisms are required to avoid conflicts and ensure consistency. Techniques like locks, condition variables, and atomic operations are used to enforce synchronization among processes. Process Resource Management: The operating system manages resources allocated to processes, including CPU time, memory, I/O devices, and files. It tracks resource usage, enforces resource limits, and provides mechanisms for process resource requests and releases.
Process Communication and Signals: Processes can communicate and signal each other through inter process communication channels or signals. Signals are used to notify processes of events, such as interrupt requests, termination signals, or custom-defined signals. Process Monitoring and Debugging: The operating system provides tools and utilities to monitor and debug processes. This includes tracking process performance, resource usage, and identifying and resolving issues like deadlocks or abnormal process behavior. These operations collectively enable the management, execution, and coordination of processes within an operating system, ensuring efficient utilization of resources and proper execution of programs.
Inter process communication Inter process communication is the mechanism provided by the operating system that allows processes to communicate with each other. This communication could involve a process letting another process know that some event has occurred or the transferring of data from one process to another.
The message-passing model allows multiple processes to read and write data to a message queue without being directly connected to each other. Messages are stored in the queue until their recipient retrieves them. Message queues are quite useful for interprocess communication and are provided by most operating systems.
Both the processes P1 and P2 can access the message queue to store and retrieve data.
Pipe Pipe is a communication medium between two or more related or interrelated processes. Pipe mechanism can be viewed with a real-time scenario such as filling water with the pipe into some container, say a bucket, and someone retrieving it, say with a mug. The filling process is nothing but writing into the pipe and the reading process is nothing but retrieving from the pipe.
Example program 1 − Program to write and read two messages using pipe. Step 1 − Create a pipe. Step 2 − Send a message to the pipe. Step 3 − Retrieve the message from the pipe and write it to the standard output. Step 4 − Send another message to the pipe. Step 5 − Retrieve the message from the pipe and write it to the standard output.
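The five steps above can be sketched with os.pipe, which returns the read and write file descriptors of an anonymous pipe. The message strings are illustrative.

```python
import os

r, w = os.pipe()                          # Step 1: create a pipe

os.write(w, b"first message")             # Step 2: send a message to the pipe
msg1 = os.read(r, 100).decode()           # Step 3: retrieve the message
print(msg1)

os.write(w, b"second message")            # Step 4: send another message
msg2 = os.read(r, 100).decode()           # Step 5: retrieve it as well
print(msg2)

os.close(r)
os.close(w)
```

The messages come out in the order they were written, which is the defining behaviour of a pipe.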
Two-way Communication Using Pipes Pipe communication is one-way: either the parent process writes and the child process reads, or vice versa, but not both. If both the parent and the child need to write and read from the pipes simultaneously, the solution is two-way communication using pipes. Two pipes are required to establish two-way communication.
Step 1 − Create two pipes. First one is for the parent to write and child to read, say as pipe1. Second one is for the child to write and parent to read, say as pipe2. Step 2 − Create a child process. Step 3 − Close unwanted ends as only one end is needed for each communication. Step 4 − Close unwanted ends in the parent process, read end of pipe1 and write end of pipe2. Step 5 − Close the unwanted ends in the child process, write end of pipe1 and read end of pipe2. Step 6 − Perform the communication as required.
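The six steps above can be sketched with os.pipe and os.fork (Unix-only). The "ping"/reply exchange is an illustrative choice for this sketch.

```python
import os

# Step 1: create two pipes -- pipe1 for parent-to-child,
# pipe2 for child-to-parent
p1_r, p1_w = os.pipe()
p2_r, p2_w = os.pipe()

pid = os.fork()                       # Step 2: create a child process
if pid == 0:
    # Step 5 (child): close the write end of pipe1 and read end of pipe2
    os.close(p1_w); os.close(p2_r)
    data = os.read(p1_r, 100)         # Step 6: read the parent's message...
    os.write(p2_w, data.upper())      # ...and send back a transformed reply
    os.close(p1_r); os.close(p2_w)
    os._exit(0)

# Step 4 (parent): close the read end of pipe1 and write end of pipe2
os.close(p1_r); os.close(p2_w)
os.write(p1_w, b"ping")               # Step 6: parent writes on pipe1...
reply = os.read(p2_r, 100).decode()   # ...and reads the reply on pipe2
os.close(p1_w); os.close(p2_r)
os.waitpid(pid, 0)
print(reply)
```

Closing the unwanted ends (Steps 4 and 5) matters: a reader only sees end-of-file when every write descriptor for that pipe is closed.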
socket A socket is a tool provided by the operating system that enables two separate processes to communicate with each other, whether they run on the same machine or on different machines across a network.
Socket Types: Sockets are classified into two main types: stream sockets (also known as TCP sockets) and datagram sockets (also known as UDP sockets). Stream Sockets (TCP): stream sockets provide a reliable, connection-oriented communication channel; they ensure that data is delivered in the correct order and without errors. Datagram Sockets (UDP): datagram sockets offer a connectionless service, without delivery or ordering guarantees, which makes them suitable for applications such as real-time multimedia streaming and online gaming.
Socket API: The socket API (Application Programming Interface) is a set of functions provided by the operating system that allows applications to create, configure, and manage sockets. Programming with sockets involves using functions like socket(), bind(), listen(), accept(), connect(), send(), and recv() to establish and manage network connections.
Server-Client Model: Sockets are often used in a server-client model. A server listens for incoming connections on a specific port, while clients connect to the server to send or receive data. This architecture is commonly used for services like web servers, where clients (browsers) request web pages from a server.
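The server-client model above can be sketched with Python's socket module: the server binds, listens, and accepts, while the client connects and sends. Running the server in a thread on the loopback address, and the tiny echo protocol, are illustrative choices for this sketch.

```python
import socket, threading

def server(srv):
    conn, _ = srv.accept()              # wait for an incoming connection
    data = conn.recv(1024)              # read the client's request
    conn.sendall(b"echo: " + data)      # send a reply
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # stream (TCP) socket
srv.bind(("127.0.0.1", 0))              # port 0: let the OS pick a free port
srv.listen(1)                           # listen for incoming connections
port = srv.getsockname()[1]

t = threading.Thread(target=server, args=(srv,))
t.start()

cli = socket.create_connection(("127.0.0.1", port))   # client connects
cli.sendall(b"hello")
reply = cli.recv(1024).decode()
cli.close()
t.join()
srv.close()
print(reply)
```

This mirrors the web-server pattern described above in miniature: the server waits on a known port, and any client that connects to that port can exchange data with it.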
Shared memory Shared memory is a region of memory shared between two or more processes. It is the fastest interprocess communication mechanism: the operating system maps the same memory segment into the address space of several processes, so each can read and write in that segment.
For applications that exchange large amounts of data, shared memory is far superior to message passing. To use shared memory, we have to perform two basic steps: Request a memory segment that can be shared between processes to the operating system. Associate a part of that memory or the whole memory with the address space of the calling process.
A shared memory segment is a portion of physical memory that is shared by multiple processes, each of which can perform reads and writes on it. When a shared memory region is established in two or more processes, there is no guarantee that the region will be placed at the same base address in each. Semaphores can be used when synchronization of access to the region is required.
For example, one process might have the shared region starting at address 0x60000 while the other process uses 0x70000. It is critical to understand that these two addresses refer to the exact same piece of data. So storing the number 1 in the first process's address 0x60000 means the second process has the value of 1 at 0x70000. The two different addresses refer to the exact same location.
FIFO A FIFO (First-In-First-Out) is a type of inter-process communication (IPC) mechanism in operating systems that allows processes to communicate by using a special type of file known as a named pipe . Named pipes provide a way for unrelated processes to exchange data in a structured manner.
Named Pipe Creation: A named pipe (FIFO) is a file-like object that is created in the file system using a specific name. The mkfifo system call is typically used to create a named pipe. This command specifies the name of the pipe and creates a special file entry in the file system
Reading and Writing: Processes can open the named pipe using standard file I/O operations (e.g., open(), read(), write(), close()). One or more processes can write data into the pipe, and other processes can read that data in the order (hence the term "First-In-First-Out").
Blocking Behavior: Reading from a named pipe will block if there is no data available to read. Similarly, writing to a named pipe will block if there is no space available for writing data.
Synchronous Communication: Named pipes provide synchronous communication, meaning the reading and writing processes must be coordinated to ensure proper communication. If a reader attempts to read before any data is written to the pipe, it will block until data is available.
Unidirectional Communication: Named pipes are unidirectional, meaning that a separate named pipe is needed for each direction of communication (one for reading, one for writing). Two unrelated processes can open a named pipe with the same name to establish communication between them.
Cleanup: When processes are done using a named pipe, they should close it using the close() system call. The named pipe persists in the file system until it is explicitly removed using the unlink() system call or until the system is restarted.
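The FIFO life cycle described above can be sketched with os.mkfifo (Unix-only). A thread stands in for the unrelated writer process, and the temporary path and message are illustrative; note how both open() calls block until the other side of the pipe is opened, as described under Blocking Behavior.

```python
import os, tempfile, threading

path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(path)                         # create the named pipe in the
                                        # file system

def writer():
    fd = os.open(path, os.O_WRONLY)     # blocks until a reader opens the FIFO
    os.write(fd, b"via fifo")
    os.close(fd)

t = threading.Thread(target=writer)
t.start()

fd = os.open(path, os.O_RDONLY)         # blocks until a writer opens the FIFO
data = os.read(fd, 100).decode()        # data arrives first-in-first-out
os.close(fd)
t.join()
print(data)

os.unlink(path)                         # explicit cleanup of the file entry
```

After unlink() the special file entry is gone; without it, the FIFO would persist in the file system until removed or until the system restarts, exactly as the Cleanup paragraph states.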