Mutual exclusion is a fundamental concept in computer science and concurrent programming that ensures that only one process or thread can access a shared resource at a time. It prevents multiple processes from simultaneously executing a critical section of code, which can lead to race conditions and data inconsistencies. Various algorithms have been developed to achieve mutual exclusion, and in this essay, we will explore some of the key concepts and techniques used in designing mutual exclusion algorithms.
To begin, let's define some basic terms. A critical section is a portion of code that accesses shared resources, such as variables or data structures. The goal of mutual exclusion is to ensure that only one process can execute its critical section at any given time, while other processes are blocked or waiting for access.
One of the simplest approaches to mutual exclusion is the concept of locks. A lock is a synchronization primitive that can be acquired by a process to gain exclusive access to a resource. When a process wants to enter its critical section, it attempts to acquire the lock. If the lock is currently held by another process, the requesting process is blocked until the lock becomes available. Once a process finishes executing its critical section, it releases the lock, allowing another process to acquire it.
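As an illustration, here is a minimal Python sketch of lock-based mutual exclusion using the standard threading module (the counter, thread count, and iteration count are arbitrary choices for the demo):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Acquire the lock before the critical section; the with-block
        # releases it on exit so another thread can acquire it.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: with the lock, no increments are lost
```

Without the lock, two threads could read the same value of counter and both write back the same incremented value, losing an update.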
A widely used software mutual exclusion algorithm is Peterson's algorithm, proposed by Gary L. Peterson in 1981 for two processes. It relies on shared variables: a turn variable and one flag per process. The turn variable indicates whose turn it is to enter the critical section when both processes contend, while each flag signals its owner's intention to enter the critical section.
While Peterson's algorithm provides mutual exclusion for two processes, it has some limitations. First, it relies on busy-waiting: a waiting process continuously re-checks the shared flag and turn variables, wasting CPU cycles, which makes it unsuitable when processes may wait for long durations. Second, Peterson's algorithm assumes sequentially consistent memory; on modern multi-core processors with relaxed memory ordering, it requires explicit memory barriers to remain correct.
It is worth comparing Peterson's algorithm with the earlier Dekker's algorithm, attributed to Th. J. Dekker and published by Dijkstra in 1965. Dekker's algorithm was the first known correct software solution to the two-process mutual exclusion problem, and Peterson's algorithm is essentially a simpler reformulation of it. Like Peterson's, Dekker's algorithm uses one flag per process together with a turn variable, and it also relies on busy-waiting: a process that finds the turn variable favoring its rival temporarily withdraws its flag and spins until its turn arrives, then raises its flag again. Both algorithms guarantee mutual exclusion under sequentially consistent memory, and both require explicit memory barriers on modern multi-core hardware.
Mutual Exclusion?
•A condition in which there is a set of processes, only one of which is able to access a given resource or perform a given function at any time
Centralized Systems
Mutual exclusion via:
–Test & set
–Semaphores
–Messages
–Monitors
Distributed Mutual Exclusion
•Assume there is agreement on how a resource is identified
–Pass identifier with requests
•Create an algorithm to allow a process to obtain exclusive access to a resource
Centralized algorithm
•Mimic single processor system
•One process elected as coordinator
[Figure: process P sends request(R) to coordinator C, which replies grant(R); P later sends release(R)]
1. Request resource
2. Wait for response
3. Receive grant
4. Access resource
5. Release resource
Centralized algorithm
If another process has claimed the resource:
–Coordinator does not reply until release
–Maintain queue
•Service requests in FIFO order
[Figure: P0 sends request(R) to coordinator C and receives grant(R); P1 and P2 then send request(R) and are placed on C's queue; when P0 sends release(R), C sends grant(R) to P1]
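The request/grant/release protocol with a FIFO queue can be sketched as a simple coordinator object. This is a single-process simulation: the method calls and return strings stand in for network messages, and the process names are illustrative.

```python
from collections import deque

class Coordinator:
    """Central coordinator: grants a resource to one process at a time,
    queueing later requests and serving them in FIFO order."""
    def __init__(self):
        self.holder = None
        self.waiting = deque()

    def request(self, pid):
        if self.holder is None:
            self.holder = pid
            return "grant"        # reply immediately
        self.waiting.append(pid)  # no reply until a release frees the resource
        return "queued"

    def release(self, pid):
        assert self.holder == pid
        # Hand the resource to the longest-waiting process, if any.
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder        # pid now granted the resource, or None

c = Coordinator()
print(c.request("P0"))  # grant
print(c.request("P1"))  # queued (coordinator stays silent toward P1)
print(c.request("P2"))  # queued
print(c.release("P0"))  # P1: next grant goes out in FIFO order
print(c.release("P1"))  # P2
```

The blocked requester simply never hears back until the grant arrives, which is exactly why, as the next slide notes, it cannot distinguish waiting from a dead coordinator.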
Centralized algorithm
Benefits
•Fair
–All requests processed in order
•Easy to implement, understand, verify
Problems
•Process cannot distinguish being blocked from a dead coordinator
•Centralized server can be a bottleneck
Token Ring algorithm
Assume known group of processes
–Some ordering can be imposed on group
–Construct logical ring in software
–Process communicates with neighbor
[Figure: logical ring of processes P0 through P5]
Token Ring algorithm
•Initialization
–Process 0 gets token for resource R
•Token circulates around ring
–From P_i to P_(i+1) mod N
•When process acquires token
–Checks to see if it needs to enter critical section
–If no, send token to neighbor
–If yes, access resource
•Hold token until done
[Figure: logical ring of P0 through P5 with token(R) passing around it]
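The token-passing logic above can be simulated in a few lines of Python. This is a single-process sketch: the requests set stands in for the processes that currently want the critical section, and each "access" is just recorded in a list.

```python
def run_token_ring(n, requests):
    """Simulate token circulation around a ring of n processes.
    requests: ids of processes that want the critical section.
    Returns the order in which they access the resource."""
    order = []
    holder = 0                     # initialization: P0 gets the token
    pending = set(requests)
    while pending:
        if holder in pending:
            order.append(holder)   # access resource while holding the token
            pending.discard(holder)
        holder = (holder + 1) % n  # pass token to the neighbor
    return order

print(run_token_ring(6, {4, 1, 3}))  # [1, 3, 4]
```

Note the result follows ring order, not the order in which processes asked, illustrating the "does not guarantee FIFO order" point on the next slide.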
Token Ring algorithm
•Only one process at a time has token
–Mutual exclusion guaranteed
•Order well-defined
–Starvation cannot occur
•If token is lost (e.g. process died)
–It will have to be regenerated
•Does not guarantee FIFO order
Ricart & Agrawala algorithm
•Distributed algorithm using reliable multicast and logical clocks
•Process wants to enter critical section:
–Compose message containing:
•Identifier (machine ID, process ID)
•Name of resource
•Timestamp (totally-ordered Lamport)
–Send request to all processes in group
–Wait until everyone gives permission
–Enter critical section / use resource
Ricart & Agrawala algorithm
•When process receives request:
–If receiver not interested:
•Send OK to sender
–If receiver is in critical section:
•Do not reply; add request to queue
–If receiver just sent a request as well:
•Compare timestamps of received & sent messages
•Earliest wins
•If receiver is loser, send OK
•If receiver is winner, do not reply; queue the request
•When done with critical section:
–Send OK to all queued requests
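The receiver's decision rule can be sketched as a pure function. This is illustrative Python: the state names are my own, and ties on timestamp are broken by process id, following the totally-ordered Lamport timestamps the request carries.

```python
def on_request(my_state, my_timestamp, my_id, req_timestamp, req_id):
    """React to an incoming request(req_timestamp, req_id).
    my_state: 'RELEASED' (not interested), 'HELD' (in critical section),
    or 'WANTED' (has an outstanding request of its own).
    Returns 'reply' (send OK now) or 'defer' (queue until release)."""
    if my_state == 'RELEASED':
        return 'reply'                       # not interested: OK immediately
    if my_state == 'HELD':
        return 'defer'                       # in critical section: queue it
    # my_state == 'WANTED': compare the two requests; earliest wins,
    # with process id as the tie-breaker.
    if (req_timestamp, req_id) < (my_timestamp, my_id):
        return 'reply'                       # we lose: send OK
    return 'defer'                           # we win: queue, OK on release

print(on_request('RELEASED', 0, 1, 5, 2))   # reply
print(on_request('HELD', 3, 1, 5, 2))       # defer
print(on_request('WANTED', 3, 1, 5, 2))     # defer: our request is earlier
print(on_request('WANTED', 7, 1, 5, 2))     # reply: their request is earlier
```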
Ricart & Agrawala algorithm
•N points of failure: entry requires a reply from every process
•A lot of messaging traffic
•Demonstrates that a fully distributed algorithm is possible
Lamport’s Mutual Exclusion
Each process maintains request queue
–Contains mutual exclusion requests
Requesting critical section:
–Process P_i sends request(i, T_i) to all nodes, where T_i is its Lamport timestamp
–Places request on its own queue
–When a process P_j receives a request, it returns a timestamped ack
Lamport’s Mutual Exclusion
Entering critical section (accessing resource):
–P_i received a message (ack or release) from every other process with a timestamp larger than T_i
–P_i's request has the earliest timestamp in its queue
Difference from Ricart-Agrawala:
–Everyone responds, always; there is no hold-back
–Process decides to go based on whether its request is the earliest in its queue
Lamport’s Mutual Exclusion
Releasing critical section:
–Remove request from its own queue
–Send a timestamped release message
–When a process receives a release message:
•Removes the request for that process from its queue
•This may cause its own entry to have the earliest timestamp in the queue, enabling it to access the critical section
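The per-node state and entry condition can be sketched as an illustrative Python class. Message delivery is abstracted away: on_message stands in for receiving a request, ack, or release, and the class and method names are my own, not from the slides.

```python
import heapq

class LamportProcess:
    """One node's state in Lamport's mutual exclusion (sketch)."""
    def __init__(self, pid, n):
        self.pid, self.n = pid, n
        self.queue = []       # heap of (timestamp, pid) requests
        self.last_seen = {}   # pid -> timestamp of latest message from pid
        self.my_request = None

    def request(self, ts):
        # Broadcast request(pid, ts); also place it on our own queue.
        self.my_request = (ts, self.pid)
        heapq.heappush(self.queue, self.my_request)

    def on_message(self, ts, sender, kind):
        self.last_seen[sender] = ts
        if kind == 'request':
            heapq.heappush(self.queue, (ts, sender))  # and return an ack
        elif kind == 'release':
            self.queue = [r for r in self.queue if r[1] != sender]
            heapq.heapify(self.queue)
        # 'ack' only updates last_seen above

    def can_enter(self):
        # Enter when our request heads the queue AND every other process
        # has sent some message (ack or release) timestamped after T_i.
        if self.my_request is None:
            return False
        acks_newer = all(self.last_seen.get(p, -1) > self.my_request[0]
                         for p in range(self.n) if p != self.pid)
        return self.queue[0] == self.my_request and acks_newer

p = LamportProcess(pid=0, n=3)
p.on_message(2, 1, 'request')  # process 1 requested first (timestamp 2)
p.request(ts=4)
p.on_message(5, 1, 'ack')
p.on_message(6, 2, 'ack')
print(p.can_enter())           # False: process 1's request heads the queue
p.on_message(7, 1, 'release')  # process 1 leaves the critical section
print(p.can_enter())           # True: our request is now earliest
```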
Characteristics of Decentralized Algorithms
No machine has complete information about the system state
Machines make decisions based only on local information
Failure of one machine does not ruin the algorithm
There is no implicit assumption that a global clock exists
Decentralized Algorithm
•Based on the Distributed Hash Table (DHT) system structure previously introduced
–Peer-to-peer
–Object names are hashed to find the successor node that will store them
•Here, we assume that n replicas of each object are stored
Placing the Replicas
•The resource is known by a unique name: rname
–Replicas: rname-0, rname-1, …, rname-(n-1)
–rname-i is stored at succ(rname-i), where names and site names are hashed as before
–If a process knows the name of the resource it wishes to access, it also can generate the hash keys that are used to locate all the replicas
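Generating the replica keys can be sketched as follows. This is illustrative Python: SHA-1 stands in for whatever hash function the DHT actually uses, and the digest is truncated to 8 hex digits purely for readability.

```python
import hashlib

def replica_keys(rname, n):
    """Derive replica names rname-0 .. rname-(n-1) and a hash key for each;
    in the DHT, replica rname-i is stored at succ(hash(rname-i))."""
    return {f"{rname}-{i}":
            hashlib.sha1(f"{rname}-{i}".encode()).hexdigest()[:8]
            for i in range(n)}

for name, key in replica_keys("printer", 3).items():
    print(name, "->", key)
```

Any process that knows the resource name "printer" and n can recompute these same keys and thus locate all the replica coordinators.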
The Decentralized Algorithm
•Every replica has a coordinator that controls access to it (the coordinator is the node that stores it)
•For a process to use the resource it must receive permission from m > n/2 coordinators
•This guarantees exclusive access as long as a coordinator only grants access to one process at a time
The Decentralized Algorithm
•The coordinator notifies the requester when it has been denied access as well as when it is granted
–Requester must “count the votes”, and decide whether or not overall permission has been granted or denied
•If a process (requester) gets fewer than m votes it will wait for a random time and then ask again
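Counting the votes reduces to checking for a strict majority. A small sketch (the vote lists are illustrative):

```python
def has_permission(n, votes):
    """votes: one boolean per replica coordinator contacted (True = granted).
    Overall permission requires a strict majority m > n/2 of n coordinators."""
    m = n // 2 + 1          # smallest integer m with m > n/2
    return sum(votes) >= m

print(has_permission(5, [True, True, True, False, False]))   # True: 3 of 5
print(has_permission(5, [True, True, False, False, False]))  # False
```

A requester for which this returns False would back off for a random time and then ask again, as described above. Since two requesters cannot both collect strict majorities from coordinators that each grant one process at a time, access is exclusive.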
Analysis
•If a resource is in high demand, multiple requests will be generated
•It’s possible that processes will wait a long time to get permission
•Deadlock?
•Resource usage drops
Analysis
•More robust than the central coordinator approach and the distributed approaches. If one coordinator goes down, others are available.
–If a coordinator fails and resets, it will not remember having granted access to one requester, and may then give access to another. According to the authors, it is highly unlikely that this will lead to a violation of mutual exclusion. (See the text for a probabilistic argument.)