Threads A thread (or lightweight process) is a basic unit of CPU utilization; it consists of a program counter, a register set, and stack space. A thread shares with the other threads belonging to the same process its code section, data section, and operating-system resources, collectively known as a task. A traditional (heavyweight) process is equal to a task with one thread. A multithreaded (lightweight) process has multiple threads within a single program.
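A minimal sketch (assuming the POSIX Pthreads library; the variable and function names are illustrative) of the idea: two threads run inside one process, both see the same global data, and each has its own stack and program counter.

#include <pthread.h>
#include <stdio.h>

int shared = 0;                    /* data section: visible to all threads of the process */

void *worker(void *arg) {
    int local = *(int *)arg;       /* stack variable: private to this thread */
    shared += local;               /* unsynchronized update, for illustration only */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);
    return 0;
}

Compile with something like gcc -pthread; the point is only that the threads share the code and data sections while keeping separate stacks.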
Single and Multithreaded Processes
Benefits Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. Resource Sharing: By default, threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space. Economy: Creating and managing threads is much faster than performing the same tasks for processes, and context switching between threads takes less time. Utilization of Multiprocessor Architectures: The benefits of multithreading can be greatly increased on a multiprocessor architecture, where threads may run in parallel on different processors. Multithreading on a multi-CPU machine increases concurrency.
Multithread models There are two types of threads to be managed in a modern system: user threads and kernel threads. User threads are supported above the kernel and are managed without kernel support. Kernel threads are supported within the kernel of the OS itself. All modern operating systems support kernel-level threads, allowing the kernel to perform multiple tasks simultaneously. There are three common ways of establishing the relationship between user threads and kernel threads: Many-to-one, One-to-one, Many-to-many.
Many-to-one Many user-level threads are mapped onto a single kernel thread. If a blocking system call is made by one of the threads, the entire process blocks. Only one user thread can access the kernel at a time, as there is only one kernel thread, so the threads are unable to run in parallel on multiprocessors.
One-to-one model The one-to-one model creates a separate kernel thread to handle each user thread. It overcomes the problems listed above involving blocking system calls and the splitting of processes across multiple CPUs. However, creating a kernel thread for every user thread adds overhead and can slow the system down.
Many-to-many model Multiplexes any number of user threads onto an equal or smaller number of kernel threads. Blocking kernel system calls do not block the entire process. Processes can be split across multiple processors. Individual processes may be allocated variable numbers of kernel threads, depending on the number of CPUs present and other factors.
Two-level Model A variation of the many-to-many model that also allows a user thread to be bound to a kernel thread.
Threading Issues Semantics of fork() and exec() system calls Thread cancellation Signal handling Thread pools Thread specific data Scheduler activations
fork() and exec() system calls Does fork() duplicate only the calling thread or all the threads of the process? Some UNIX systems provide two versions of fork(): one that duplicates all threads and one that duplicates only the thread that invoked fork(). exec() – the program specified in the parameters to exec() replaces the entire process, including all its threads.
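A small sketch (standard POSIX calls; the program being executed is illustrative) of the usual behaviour: fork() gives the child a copy of the parent, and exec() replaces the calling process image entirely, which is why a fork() followed by exec() makes duplicating all threads unnecessary.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                  /* child becomes a copy of the parent */
    if (pid == 0) {
        execlp("ls", "ls", "-l", NULL);  /* replaces the child's entire image */
        perror("execlp");                /* reached only if exec fails */
        exit(1);
    }
    wait(NULL);                          /* parent waits for the child */
    return 0;
}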
Thread Cancellation Thread cancellation is the task of terminating a thread before it has finished. A thread that is to be canceled is often referred to as the target thread. Two general approaches: Asynchronous cancellation – one thread immediately terminates the target thread. Deferred cancellation – the target thread periodically checks whether it should be cancelled, allowing it to terminate itself in an orderly fashion. The difficulty with cancellation occurs where resources have been allocated to a canceled thread, or where a thread is canceled while in the midst of updating data it shares with other threads.
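A sketch of deferred cancellation with Pthreads (the loop body is a placeholder): pthread_cancel() only requests cancellation, and the target honours the request at a cancellation point such as pthread_testcancel().

#include <pthread.h>
#include <unistd.h>

void *target(void *arg) {
    for (;;) {
        /* ... perform one unit of work ... */
        pthread_testcancel();      /* safe point to act on a pending cancellation */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, target, NULL);
    sleep(1);                      /* let the target run briefly */
    pthread_cancel(tid);           /* deferred cancellation request */
    pthread_join(tid, NULL);       /* returns once the target has terminated */
    return 0;
}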
Signal Handling Signals are used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously. Synchronous signal – delivered to the same process that performed the operation causing the signal, e.g. illegal memory access or division by zero. Asynchronous signal – generated by an event external to the running process, e.g. Ctrl-C. A signal handler is used to process signals: a signal is generated by a particular event, the signal is delivered to a process, and the signal is handled. A signal can be handled in one of two ways: by the default signal handler or by a user-defined signal handler.
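A sketch of installing a user-defined handler (standard POSIX calls; SIGINT is the asynchronous signal generated by Ctrl-C):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t got_sigint = 0;

void handler(int signo) {              /* user-defined signal handler */
    got_sigint = 1;
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = handler;
    sigaction(SIGINT, &sa, NULL);      /* replace the default handler for SIGINT */
    while (!got_sigint)
        pause();                       /* sleep until some signal is delivered */
    printf("SIGINT was handled\n");
    return 0;
}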
Options for delivering a signal in a multithreaded process: Deliver the signal to the thread to which the signal applies. Deliver the signal to every thread in the process. Deliver the signal to certain threads in the process. Assign a specific thread to receive all signals for the process. Thread Pools
Creating a separate thread for every request in a multithreaded server causes problems: time is consumed in creating the thread, and a limit has to be placed on the number of active threads in the system, since unlimited thread creation may exhaust system resources. Thread pool: create a number of threads in a pool where they await work. Advantages: servicing a request with an existing thread is usually faster than waiting to create a thread, and a thread pool limits the number of threads that exist at any one point. This is particularly important on systems that cannot support a large number of concurrent threads.
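A compact sketch of a thread pool in Pthreads (queue size, worker count, and the hello task are illustrative, and full-queue handling is omitted): a fixed set of workers sleeps on a condition variable, and submitting work only enqueues a function pointer instead of creating a thread.

#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define QSIZE    16

static void (*queue[QSIZE])(void);            /* pending tasks */
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

void submit(void (*task)(void)) {             /* called by clients; never creates a thread */
    pthread_mutex_lock(&m);
    queue[tail] = task;
    tail = (tail + 1) % QSIZE;
    count++;
    pthread_cond_signal(&nonempty);           /* wake one waiting worker */
    pthread_mutex_unlock(&m);
}

void *worker(void *arg) {
    for (;;) {
        pthread_mutex_lock(&m);
        while (count == 0)
            pthread_cond_wait(&nonempty, &m); /* sleep until work arrives */
        void (*task)(void) = queue[head];
        head = (head + 1) % QSIZE;
        count--;
        pthread_mutex_unlock(&m);
        task();                               /* run the task outside the lock */
    }
    return NULL;
}

void hello(void) { printf("task ran\n"); }    /* example task */

int main(void) {
    pthread_t tids[NWORKERS];
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < 8; i++)
        submit(hello);
    pthread_join(tids[0], NULL);              /* workers loop forever in this sketch */
    return 0;
}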
Thread-Specific Data Data that belongs to a thread and is not shared with other threads is called thread-specific data. Most major thread libraries (Pthreads, Win32, Java) provide support for thread-specific data.
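A sketch using the Pthreads thread-specific data calls (no destructor is registered, and the worker logic is illustrative): every thread stores and reads its own value under the same key.

#include <pthread.h>
#include <stdio.h>

static pthread_key_t key;

void *worker(void *arg) {
    pthread_setspecific(key, arg);         /* the stored value is private to this thread */
    int *mine = pthread_getspecific(key);
    printf("this thread sees %d\n", *mine);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_key_create(&key, NULL);        /* one key shared by all threads */
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}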
Scheduler Activations Scheduler activation is a technique for communication between the user-thread library and the kernel. It works as follows: the kernel must inform an application about certain events. This procedure is known as an upcall. Upcalls are handled by the thread library with an upcall handler, and upcall handlers must run on a virtual processor.
Basic Concepts The objective of multiprogramming is to have some process running at all times so as to maximize CPU utilization. When one process has to wait, the operating system takes the CPU away from that process and gives it to another process. CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait. The period during which a process is executing on the CPU is called a CPU burst, and the period during which it is issuing and waiting for I/O is called an I/O burst.
Alternating Sequence of CPU And I/O Bursts
CPU Scheduler Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. The selection is carried out by the short-term scheduler (or CPU scheduler). Dispatcher The dispatcher module gives control of the CPU to the process selected by the short-term scheduler. Its functions involve: switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.
Preemptive and non-preemptive scheduling CPU scheduling decisions may take place when a process: 1. Switches from running to waiting state 2. Switches from running to ready state 3. Switches from waiting to ready state 4. Terminates
Scheduling under 1 and 4 is non-preemptive (or cooperative). Scheduling under 2 and 3 is preemptive. Non-preemptive scheduling – once the CPU has been allocated to a process, the process keeps the CPU until it releases it either by terminating or by switching to the waiting state. Preemptive scheduling – the process under execution may have the CPU taken away from it in the middle of its execution, for example when a higher-priority process becomes ready or its time slice expires.
Scheduling Criteria Max CPU utilization : The CPU must be kept as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 to 90 percent . Max throughput : One measure of work is the number of processes that are completed per time unit, called throughput Min turnaround time : The interval from the time of submission of a process to the time of completion is the turnaround time Min waiting time : The total amount of time the process spends waiting in the ready queue. Min response time : The time taken from the submission of a request until the first response is produced is called the response time
Scheduling Algorithms First-Come, First-Served Scheduling The process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The average waiting time under the FCFS policy, however, is often quite long.
First-Come, First-Served (FCFS) Scheduling
Process   Burst Time
P1        24
P2        3
P3        3
Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17 ms
Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3 ms, much better than the previous case. FCFS scheduling is non-preemptive. Advantages: it is more predictable than other schemes, and the code for FCFS scheduling is simple to write and understand.
Disadvantages: Short jobs (processes) may have to wait a long time. Important jobs (with higher priority) have to wait. It cannot guarantee good response time. The average waiting time and turnaround time are often quite long. It gives lower CPU and device utilization. Convoy effect – all the other processes have to wait for one long process to complete its execution.
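Since every FCFS waiting time is just the sum of the bursts ahead of it, the first example above can be checked with a few lines of C (the burst values are taken from that example, and all processes are assumed to arrive at time 0):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                /* P1, P2, P3 in arrival order */
    int n = 3, wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total += wait;                       /* each process waits for all earlier bursts */
        wait += burst[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total / n);
    return 0;
}

Running it reproduces the waiting times 0, 24, and 27 ms and the 17 ms average.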
Shortest-Job-First (SJF) Scheduling Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time. Two schemes: Non-preemptive – once the CPU is given to a process it cannot be preempted until it completes its CPU burst. Preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
Non-preemptive SJF: consider the following set of processes, all arriving at time 0, with the length of the CPU burst given in milliseconds:
Process   Burst Time
P1        6
P2        8
P3        7
P4        3
The Gantt chart for the schedule is:
| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
Waiting time for P1 = 3 ms, P2 = 16 ms, P3 = 9 ms, P4 = 0 ms
Average waiting time = (3 + 16 + 9 + 0)/4 = 7 ms
Preemptive SJF (SRTF): consider the following set of processes, with arrival times and CPU burst lengths given in milliseconds:
Process   Arrival time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
The Gantt chart for the schedule is:
| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |
Waiting time = (time the process last starts executing) − (time it has already executed) − (arrival time)
Waiting time for P1 = 10 − 1 − 0 = 9
Waiting time for P2 = 1 − 0 − 1 = 0
Waiting time for P3 = 17 − 0 − 2 = 15
Waiting time for P4 = 5 − 0 − 3 = 2
Average waiting time = (9 + 0 + 15 + 2)/4 = 6.5 ms
SJF is optimal – it gives the minimum average waiting time for a given set of processes. The real difficulty with the SJF algorithm is knowing the length of the next CPU request. Although the SJF algorithm is optimal, it cannot be implemented exactly at the level of short-term CPU scheduling. One approach is to approximate SJF scheduling: we may not know the length of the next CPU burst, but we may be able to predict its value, as in the sketch below.
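The usual prediction is an exponential average of the measured bursts, tau(n+1) = alpha * t(n) + (1 − alpha) * tau(n). A small sketch (alpha = 0.5, the initial guess, and the burst history are illustrative values):

#include <stdio.h>

int main(void) {
    double alpha = 0.5, tau = 10.0;              /* tau_0: initial guess for the first burst */
    double bursts[] = {6, 4, 6, 4, 13, 13, 13};  /* measured CPU burst lengths */
    for (int i = 0; i < 7; i++) {
        printf("predicted %.1f, actual %.1f\n", tau, bursts[i]);
        tau = alpha * bursts[i] + (1 - alpha) * tau;   /* fold in the latest measurement */
    }
    return 0;
}

With alpha = 0 recent history is ignored; with alpha = 1 only the most recent burst counts.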
Priority Scheduling A priority number (integer) is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. Priorities are generally indicated by some fixed range of numbers, such as 0 to 7 or 0 to 4,095; however, there is no general agreement on whether 0 is the highest or lowest priority. Priority scheduling can be either preemptive or non-preemptive. Preemptive – a preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. Non-preemptive – a non-preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue. SJF is a priority scheduling algorithm where the priority is the (predicted) next CPU burst time.
Non-preemptive priority scheduling: consider the following set of processes, with the length of the CPU burst given in milliseconds, all arriving at time 0 in the order P1, P2, P3, P4, P5 (a smaller priority number means higher priority):
Process   Priority   Burst Time
P1        3          10
P2        1          1
P3        4          2
P4        5          1
P5        2          5
The Gantt chart for the schedule is:
| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
Waiting time for P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms
Preemptive priority scheduling: consider the following set of processes, with arrival times and CPU burst lengths given in milliseconds (a smaller priority number means higher priority):
Process   Priority      Arrival time   Burst Time
P1        2             0              11
P2        0 (highest)   5              28
P3        3             12             2
P4        1             2              10
P5        4             9              16
The Gantt chart for the schedule is:
| P1 (0-2) | P4 (2-5) | P2 (5-33) | P4 (33-40) | P1 (40-49) | P3 (49-51) | P5 (51-67) |
Waiting time = (time the process last starts executing) − (time it has already executed) − (arrival time)
Waiting time for P1 = 40 − 2 − 0 = 38
Waiting time for P2 = 5 − 0 − 5 = 0
Waiting time for P3 = 49 − 0 − 12 = 37
Waiting time for P4 = 33 − 3 − 2 = 28
Waiting time for P5 = 51 − 0 − 9 = 42
Average waiting time = (38 + 0 + 37 + 28 + 42)/5 = 29 ms
A major problem with priority scheduling algorithms is indefinite blocking, or starvation . A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low- priority processes waiting indefinitely. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU. solution to the problem of indefinite blockage of low-priority processes is aging . Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time . For example, if priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes. Eventually, even a process with an initial priority of 127 would have the highest priority in the system and would be executed
Round Robin (RR ) scheduling The round-robin (RR) scheduling algorithm is designed especially for timesharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue , allocating the CPU to each process for a time interval of up to 1 time quantum
To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.
One of two things will then happen. The process may have a CPU burst of less than 1 time quantum, in which case the process itself will release the CPU voluntarily. Or, if the process has a CPU burst longer than 1 time quantum, the timer will go off and cause an interrupt to the operating system; a context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue.
Consider the following set of processes, with the length of the CPU burst given in milliseconds, all arriving at time 0, with a time quantum of 4 ms; calculate the waiting time and turnaround time.
Process   Burst Time
P1        24
P2        3
P3        3
The Gantt chart for the schedule is:
| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |
Waiting time for P1 = 10 − 4 = 6, P2 = 4, P3 = 7
Average waiting time = (6 + 4 + 7)/3 = 5.66 ms
Turnaround time = completion time − arrival time; waiting time = turnaround time − burst time.
Process   Burst Time   Completion time   Turnaround time   Waiting time
P1        24           30                30 − 0 = 30       30 − 24 = 6
P2        3            7                 7 − 0 = 7         7 − 3 = 4
P3        3            10                10 − 0 = 10       10 − 3 = 7
Average turnaround time = (30 + 7 + 10)/3 = 15.67 ms
Average waiting time = (6 + 4 + 7)/3 = 5.66 ms
Multilevel Queue scheduling This class of scheduling algorithms has been created for situations in which processes are easily classified into different groups, for example foreground (interactive) processes and background (batch) processes. These two types of processes have different response-time requirements and so may have different scheduling needs. A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm.
Multilevel Queue Scheduling
Multilevel Feedback Queue scheduling The idea is to separate processes according to the characteristics of their CPU bursts . If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues . In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation
Multilevel Feedback Queues
A multilevel feedback queue scheduler is defined by the following parameters: 1. the number of queues 2. the scheduling algorithm for each queue 3. the method used to determine when to upgrade a process to a higher-priority queue 4. the method used to determine when to demote a process to a lower-priority queue 5. the method used to determine which queue a process will enter when that process needs service
Multiple-Processor Scheduling Approaches to Multiple-Processor Scheduling Asymmetric multiprocessing – all scheduling decisions, I/O processing, and other system activities are handled by a single processor, the master server; the other processors execute only user code. This approach is simple. Symmetric multiprocessing (SMP) – each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Processor Affinity SMP systems try to avoid migration of processes from one processor to another and instead attempt to keep a process running on the same processor. This is known as processor affinity.
There are two forms of processor affinity: Soft affinity – the operating system has a policy of attempting to keep a process running on the same processor, but does not guarantee that it will do so. Hard affinity – the system provides system calls that allow a process to specify that it is not to migrate to other processors. Load Balancing There are two general approaches to load balancing: Push migration – a specific task periodically checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving (pushing) processes from overloaded to idle or less-busy processors. Pull migration – occurs when an idle processor pulls a waiting task from a busy processor. Push and pull migration need not be mutually exclusive and are in fact often implemented in parallel on load-balancing systems.
Symmetric Multithreading Allowing several threads to run concurrently by providing multiple logical processors is known as symmetric multithreading (SMT); it is also known as hyper-threading technology on Intel processors. Each logical processor has its own architecture state, which includes general-purpose and machine-state registers. SMT is provided in hardware, not software.
Thread Scheduling Contention Scope In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP, a scheme known as process-contention scope (PCS). To decide which kernel thread to schedule onto a CPU, the kernel uses system-contention scope (SCS). PCS is done according to priority – the scheduler selects the runnable thread with the highest priority to run.
pthread Scheduling Pthreads identifies the following contention scope values: PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling; PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling. The Pthread API provides two functions for getting and setting the contention scope policy: pthread_attr_setscope(pthread_attr_t *attr, int scope) and pthread_attr_getscope(pthread_attr_t *attr, int *scope).
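A short sketch of using these two calls (some systems support only one of the two scopes, so the set call may return an error; the thread body is a stub):

#include <pthread.h>
#include <stdio.h>

void *run(void *arg) { return NULL; }

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);
    if (pthread_attr_getscope(&attr, &scope) == 0)       /* read the default contention scope */
        printf("default scope: %s\n",
               scope == PTHREAD_SCOPE_PROCESS ? "PCS" : "SCS");
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);  /* request SCS scheduling */
    pthread_create(&tid, &attr, run, NULL);
    pthread_join(tid, NULL);
    return 0;
}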
Process Synchronization Background The Critical-Section Problem Peterson’s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization Monitors
Background Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes Suppose that we wanted to provide a solution to the consumer-producer problem that fills all the buffers. We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. It is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.
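A sketch of why the shared count is dangerous (the iteration counts are arbitrary): count++ and count-- each expand to a load, modify, store sequence, so a producer and a consumer running concurrently can interleave those steps and lose updates.

#include <pthread.h>
#include <stdio.h>

int count = 0;                         /* number of full buffers */

void *producer(void *arg) {
    for (int i = 0; i < 100000; i++)
        count++;                       /* register1 = count; register1++; count = register1 */
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 100000; i++)
        count--;                       /* register2 = count; register2--; count = register2 */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("count = %d (expected 0)\n", count);   /* frequently nonzero because of the race */
    return 0;
}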
Solution to Critical-Section Problem Requirements : Mutual Exclusion - If process P i is executing in its critical section, then no other processes can be executing in their critical sections. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely. 3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted .
Peterson’s Solution Two process solution Assume that the LOAD and STORE instructions are atomic; that is, cannot be interrupted. The two processes share two variables: int turn ; Boolean flag[2] The variable turn indicates whose turn it is to enter the critical section. The flag array is used to indicate if a process is ready to enter the critical section. flag[ i ] = true implies that process P i is ready!
Algorithm for Process Pi:
do {
    flag[i] = TRUE;            /* Pi announces it is ready to enter */
    turn = j;                  /* give the other process the turn */
    while (flag[j] && turn == j)
        ;                      /* busy-wait while Pj wants in and it is Pj's turn */
    /* critical section */
    flag[i] = FALSE;           /* Pi leaves its critical section */
    /* remainder section */
} while (TRUE);
Synchronization Hardware Many systems provide hardware support for critical section code Uniprocessors – could disable interrupts Currently running code would execute without preemption Generally too inefficient on multiprocessor systems Operating systems using this not broadly scalable Modern machines provide special atomic hardware instructions Atomic = non-interruptible Either test memory word and set value Or swap contents of two memory words
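The two instructions referred to above are usually written as follows (the C is only a description of their effect; the hardware executes each as a single atomic operation):

boolean TestAndSet(boolean *target) {
    boolean rv = *target;     /* return the old value ... */
    *target = TRUE;           /* ... and set the word to TRUE, atomically */
    return rv;
}

void Swap(boolean *a, boolean *b) {
    boolean temp = *a;        /* exchange the contents of two words atomically */
    *a = *b;
    *b = temp;
}

These definitions are what the Swap- and TestAndSet-based solutions below rely on.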
Solution to Critical-section Problem Using Locks do { acquire lock critical section release lock remainder section } while (TRUE);
Solution using Swap Shared Boolean variable lock, initialized to FALSE; each process has a local Boolean variable key. Solution:
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);     /* key becomes FALSE only when the lock was free */
    /* critical section */
    lock = FALSE;              /* release the lock */
    /* remainder section */
} while (TRUE);
Bounded-waiting Mutual Exclusion with TestAndSet()
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);   /* spin until the lock is acquired or handed over */
    waiting[i] = FALSE;
    /* critical section */
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;           /* scan for the next waiting process */
    if (j == i)
        lock = FALSE;              /* nobody is waiting: release the lock */
    else
        waiting[j] = FALSE;        /* pass the critical section directly to Pj */
    /* remainder section */
} while (TRUE);
Semaphore Semaphore S – integer variable. Two standard operations modify S: wait() and signal(), originally called P() and V(). A semaphore can only be accessed via these two indivisible (atomic) operations:
wait(S) {
    while (S <= 0)
        ;        /* no-op: busy-wait until S becomes positive */
    S--;
}
signal(S) {
    S++;
}
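In practice the same pair of operations is available as POSIX semaphores; a minimal sketch (sem_wait and sem_post play the roles of wait and signal, and the worker body is illustrative):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;
int shared = 0;

void *worker(void *arg) {
    sem_wait(&s);             /* wait(S): blocks while the value is 0 */
    shared++;                 /* critical section */
    sem_post(&s);             /* signal(S) */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);       /* initial value 1, so the semaphore behaves as a mutex */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);
    return 0;
}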
Semaphore as General Synchronization Tool Counting semaphore – the integer value can range over an unrestricted domain. Binary semaphore – the integer value can range only between 0 and 1, and can be simpler to implement; also known as a mutex lock. A counting semaphore S can be implemented using binary semaphores. A binary semaphore provides mutual exclusion:
Semaphore mutex;    // initialized to 1
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);
Semaphore Implementation Disadvantage: busy waiting while waiting to enter the critical section. Busy waiting wastes CPU cycles; this type of semaphore is also called a spinlock, because the process spins while waiting for the lock. To avoid busy waiting, two operations are used: block – place the process invoking the operation on the appropriate waiting queue; wakeup – remove one of the processes in the waiting queue and place it in the ready queue.
Semaphore Implementation with no Busy Waiting With each semaphore there is an associated waiting queue. Each semaphore has two data items: a value (of type integer) and a pointer to a list of waiting processes:
typedef struct {
    int value;
    struct process *list;   /* queue of processes blocked on this semaphore */
} semaphore;
Semaphore Implementation with no Busy Waiting (Cont.) Implementation of wait:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
Implementation of signal:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Deadlock and Starvation Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Let S and Q be two semaphores initialized to 1:
P0:                  P1:
wait(S);             wait(Q);
wait(Q);             wait(S);
  ...                  ...
signal(S);           signal(Q);
signal(Q);           signal(S);
If P0 acquires S and P1 acquires Q, each then waits forever for the semaphore held by the other. Starvation – indefinite blocking: a process may never be removed from the semaphore queue in which it is suspended.
Classical Problems of Synchronization Bounded-Buffer Problem Readers and Writers Problem Dining-Philosophers Problem
Bounded-Buffer Problem N buffers, each can hold one item Semaphore mutex initialized to the value 1 Semaphore full initialized to the value 0 Semaphore empty initialized to the value N.
Bounded Buffer Problem (Cont.) The structure of the producer process:
do {
    // produce an item in nextp
    wait(empty);
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
} while (TRUE);
Bounded Buffer Problem (Cont.) The structure of the consumer process:
do {
    wait(full);
    wait(mutex);
    // remove an item from buffer to nextc
    signal(mutex);
    signal(empty);
    // consume the item in nextc
} while (TRUE);
Readers-Writers Problem A data set is shared among a number of concurrent processes. Readers – only read the data set; they do not perform any updates Writers – can both read and write Problem – allow multiple readers to read at the same time. Only one single writer can access the shared data at the same time. Shared Data Data set Semaphore mutex initialized to 1 (controls access to readcount ) Semaphore wrt initialized to 1 (writer access) Integer readcount initialized to 0 (how many processes are reading object)
Readers-Writers Problem (Cont.) The structure of a writer process:
do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (TRUE);
Readers-Writers Problem (Cont.) The structure of a reader process:
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);         // the first reader locks out writers
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);       // the last reader lets writers in again
    signal(mutex);
} while (TRUE);
Dining-Philosophers Problem Shared data Bowl of rice (data set) Semaphore chopstick [5] initialized to 1
Dining-Philosophers Problem (Cont.) The structure of philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);
More Problems with Semaphores Incorrect use of semaphore operations, for example: signal(mutex) ... wait(mutex) – mutual exclusion is violated; wait(mutex) ... wait(mutex) – a deadlock occurs; omitting wait(mutex) or signal(mutex), or both.
Monitors A high-level abstraction that provides a convenient and effective mechanism for process synchronization. Only one process may be active within the monitor at a time.
monitor monitor-name {
    // shared variable declarations
    function P1 (...) { ... }
    ...
    function Pn (...) { ... }
    initialization code (...) { ... }
}
Schematic view of a Monitor
Condition Variables condition x, y; Two operations on a condition variable: x.wait () – a process that invokes the operation is suspended. x.signal () – resumes one of processes (if any) that invoked x.wait ()
Monitor with Condition Variables
Solution to Dining Philosophers
monitor DP {
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();    // wait until both neighbors stop eating
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }
Solution to Dining Philosophers (Cont.)
    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
Monitor Implementation Using Semaphores Variables:
semaphore mutex;    // (initially = 1)
semaphore next;     // (initially = 0)
int next_count = 0;
Each procedure F will be replaced by:
wait(mutex);
...
body of F;
...
if (next_count > 0)
    signal(next);
else
    signal(mutex);
Mutual exclusion within a monitor is ensured.
Monitor Implementation For each condition variable x, we have:
semaphore x_sem;    // (initially = 0)
int x_count = 0;
The operation x.wait can be implemented as:
x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;
Monitor Implementation The operation x.signal can be implemented as:
if (x_count > 0) {
    next_count++;
    signal(x_sem);
    wait(next);
    next_count--;
}
A Monitor to Allocate a Single Resource
monitor ResourceAllocator {
    boolean busy;
    condition x;

    void acquire(int time) {
        if (busy)
            x.wait(time);    // conditional wait: time determines the wake-up order
        busy = TRUE;
    }

    void release() {
        busy = FALSE;
        x.signal();
    }

    initialization code() {
        busy = FALSE;
    }
}