Mutual-Exclusion Algorithm.ppt

About This Presentation

Mutual exclusion is a fundamental concept in computer science and concurrent programming that ensures that only one process or thread can access a shared resource at a time. It prevents multiple processes from simultaneously executing a critical section of code, which can lead to race conditions and...


Slide Content

Page 1
Mutual Exclusion*
Distributed Systems
Prepared By:
Er. Shikha Manrai

Page 2
Mutual Exclusion?
• A condition in which there is a set of processes, only one of which is able to access a given resource or perform a given function at any time

Page 3
Centralized Systems
Mutual exclusion via:
– Test & set
– Semaphores
– Messages
– Monitors
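
As a minimal single-machine sketch (assuming shared memory and Python's threading module, which the slides do not specify), mutual exclusion with a binary semaphore, one of the primitives listed above, might look like this:

import threading

resource_guard = threading.Semaphore(1)    # binary semaphore: at most one holder
shared_counter = 0                         # the shared resource

def worker(iterations):
    global shared_counter
    for _ in range(iterations):
        resource_guard.acquire()           # enter critical section
        try:
            shared_counter += 1            # only one thread touches the resource at a time
        finally:
            resource_guard.release()       # leave critical section

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_counter)                      # 40000: no lost updates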

Page 4
Distributed Mutual Exclusion
• Assume there is agreement on how a resource is identified
  – Pass identifier with requests
• Create an algorithm to allow a process to obtain exclusive access to a resource

Page 5
Distributed Mutual Exclusion
• Centralized Algorithm
• Token Ring Algorithm
• Distributed Algorithm
• Decentralized Algorithm

Page 6
Centralized algorithm
• Mimic single-processor system
• One process elected as coordinator
[Diagram: process P sends request(R) to coordinator C, receives grant(R), and later sends release(R)]
1. Request resource
2. Wait for response
3. Receive grant
4. Access resource
5. Release resource

Page 7
Centralized algorithm
If another process claimed resource:
– Coordinator does not reply until release
– Maintain queue
  • Service requests in FIFO order
[Diagram: P0 holds R after grant(R); requests from P1 and P2 are placed on coordinator C's queue; after release(R) from P0, C sends grant(R) to the next queued process]
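
A hedged sketch of the coordinator's bookkeeping described on the last two slides; the message transport is omitted and the names (Coordinator, request, release) are illustrative, not fixed by the slides:

from collections import deque

class Coordinator:
    """Centralized mutual-exclusion coordinator (sketch; no real messaging)."""
    def __init__(self):
        self.holder = None          # process currently holding the resource
        self.waiting = deque()      # FIFO queue of blocked requesters

    def request(self, pid):
        """Grant immediately if the resource is free, otherwise queue the request."""
        if self.holder is None:
            self.holder = pid
            return "grant"          # grant(R) sent right away
        self.waiting.append(pid)    # coordinator stays silent until a release
        return "queued"

    def release(self, pid):
        """Holder returns the resource; the next queued process (if any) gets grant(R)."""
        assert self.holder == pid
        self.holder = self.waiting.popleft() if self.waiting else None
        return self.holder          # who should now receive grant(R), or None

For example, request("P0") returns "grant", a following request("P1") returns "queued", and release("P0") returns "P1", the process that should now be granted the resource.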

Page 8
Centralized algorithm
Benefits
• Fair
  – All requests processed in order
• Easy to implement, understand, verify
Problems
• Process cannot distinguish being blocked from a dead coordinator
• Centralized server can be a bottleneck

Page 9
Token Ring algorithm
Assume known group of processes
– Some ordering can be imposed on group
– Construct logical ring in software
– Process communicates with neighbor
[Diagram: logical ring of processes P0 through P5]

Page 10
Token Ring algorithm
• Initialization
  – Process 0 gets token for resource R
• Token circulates around ring
  – From P_i to P_(i+1) mod N
• When process acquires token
  – Checks to see if it needs to enter critical section
  – If no, send token to neighbor
  – If yes, access resource
    • Hold token until done
[Diagram: token(R) circulating around the ring P0 through P5]
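
A small sketch of the token-passing rule on this slide, assuming the ring is simulated inside one program; the class and function names are illustrative:

class RingProcess:
    """One node P_i of the logical ring (sketch)."""
    def __init__(self, pid):
        self.pid = pid
        self.wants_resource = False

    def on_token(self):
        """Called when token(R) arrives; use the resource only if needed."""
        if self.wants_resource:
            print(f"P{self.pid} enters critical section")  # holds the token while using R
            self.wants_resource = False

def circulate(ring, rounds=1):
    """Token moves from P_i to P_(i+1) mod N; each holder may access the resource."""
    for _ in range(rounds):
        for p in ring:              # visiting the list in order models the ring hops
            p.on_token()

ring = [RingProcess(i) for i in range(6)]   # P0 .. P5
ring[2].wants_resource = True
circulate(ring)                             # only P2 enters its critical section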

Page 11
Token Ring algorithm
• Only one process at a time has token
  – Mutual exclusion guaranteed
• Order well-defined
  – Starvation cannot occur
• If token is lost (e.g. process died)
  – It will have to be regenerated
• Does not guarantee FIFO order

Page 12
Ricart & Agrawala algorithm
• Distributed algorithm using reliable multicast and logical clocks
• Process wants to enter critical section:
  – Compose message containing:
    • Identifier (machine ID, process ID)
    • Name of resource
    • Timestamp (totally-ordered Lamport)
  – Send request to all processes in group
  – Wait until everyone gives permission
  – Enter critical section / use resource

Page 13
Ricart & Agrawala algorithm
• When process receives request:
  – If receiver not interested:
    • Send OK to sender
  – If receiver is in critical section:
    • Do not reply; add request to queue
  – If receiver just sent a request as well:
    • Compare timestamps of received & sent messages
    • Earliest wins
    • If receiver is loser, send OK
    • If receiver is winner, do not reply; queue the request
• When done with critical section:
  – Send OK to all queued requests
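
A sketch of the receive-side rule just listed, assuming requests carry a (Lamport timestamp, process id) pair so that ties are totally ordered; the multicast layer is left out and send_ok is an assumed callback, not part of the slides:

class RAProcess:
    """Ricart & Agrawala receiver logic (sketch; transport omitted)."""
    def __init__(self, pid):
        self.pid = pid
        self.state = "RELEASED"      # RELEASED, WANTED, or HELD
        self.my_request = None       # (timestamp, pid) of our own pending request
        self.deferred = []           # senders we have not yet answered

    def request_cs(self, ts):
        """Ask for the critical section with our Lamport timestamp (multicast not shown)."""
        self.state = "WANTED"
        self.my_request = (ts, self.pid)

    def on_request(self, ts, sender, send_ok):
        """Handle request(ts, sender); send_ok(sender) delivers the OK reply."""
        we_win_tie = self.state == "WANTED" and self.my_request < (ts, sender)
        if self.state == "HELD" or we_win_tie:
            self.deferred.append(sender)     # in the CS, or our request is earlier: defer
        else:
            send_ok(sender)                  # not interested, or we lost the tie

    def on_exit(self, send_ok):
        """Leaving the critical section: answer everyone we deferred."""
        self.state = "RELEASED"
        for sender in self.deferred:
            send_ok(sender)
        self.deferred.clear()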

Page 14
Ricart & Agrawala algorithm
• N points of failure
• A lot of messaging traffic
• Demonstrates that a fully distributed algorithm is possible

Page 15
Lamport’s Mutual Exclusion
Each process maintains request queue
– Contains mutual exclusion requests
Requesting critical section:
– Process P_i sends request(i, T_i) to all nodes, where T_i is its Lamport timestamp
– Places request on its own queue
– When a process P_j receives a request, it returns a timestamped ack

Page 16
Lamport’s Mutual Exclusion
Entering critical section (accessing resource):
– P_i received a message (ack or release) from every other process with a timestamp larger than T_i
– P_i's request has the earliest timestamp in its queue
Difference from Ricart-Agrawala:
– Everyone responds … always; no hold-back
– Process decides to go based on whether its request is the earliest in its queue

Page 17
Lamport’s Mutual Exclusion
Releasing critical section:
– Remove request from its own queue
– Send a timestamped release message
– When a process receives a release message:
  • Removes request for that process from its queue
  • This may cause its own entry to have the earliest timestamp in the queue, enabling it to access the critical section
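
A sketch of the queue bookkeeping from the last three slides, assuming requests are ordered by (timestamp, process id) and that acks and releases are recorded locally rather than sent over a real network:

import heapq

class LamportProcess:
    """Lamport mutual-exclusion bookkeeping (sketch; message transport omitted)."""
    def __init__(self, pid, peers):
        self.pid = pid
        self.queue = []                          # heap of (timestamp, pid) requests
        self.latest = {p: 0 for p in peers}      # newest ack/release timestamp per peer
        self.my_ts = None

    def request(self, ts):
        """Send request(i, T_i) to all nodes (not shown) and queue it locally."""
        self.my_ts = ts
        heapq.heappush(self.queue, (ts, self.pid))

    def on_message(self, ts, sender, kind):
        """kind is 'request', 'ack', or 'release' from another process."""
        self.latest[sender] = max(self.latest[sender], ts)
        if kind == "request":
            heapq.heappush(self.queue, (ts, sender))
        elif kind == "release":
            self.queue = [(t, p) for (t, p) in self.queue if p != sender]
            heapq.heapify(self.queue)

    def may_enter(self):
        """Both slide conditions: heard from everyone after T_i, and own request is earliest."""
        if self.my_ts is None:
            return False
        heard_from_all = all(t > self.my_ts for t in self.latest.values())
        mine_is_earliest = bool(self.queue) and self.queue[0][1] == self.pid
        return heard_from_all and mine_is_earliest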

Page 18
Characteristics of Decentralized Algorithms
No machine has complete information about the system state
Machines make decisions based only on local information
Failure of one machine does not ruin the algorithm
There is no implicit assumption that a global clock exists

Page 19
Decentralized Algorithm
•Based on the Distributed Hash Table (DHT)
system structure previously introduced
–Peer-to-peer
–Object names are hashed to find the successor
node that will store them
•Here, we assume that nreplicas of each
object are stored

Page 20
Placing the Replicas
• The resource is known by a unique name: rname
  – Replicas: rname-0, rname-1, …, rname-(n-1)
  – rname-i is stored at succ(rname-i), where names and site names are hashed as before
  – If a process knows the name of the resource it wishes to access, it can also generate the hash keys that are used to locate all the replicas
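
A sketch of how a process can regenerate the replica keys from the resource name alone, assuming a SHA-1 based hash onto the identifier ring (the slides only say the names are "hashed as before"); hash_key and the ring size are illustrative:

import hashlib

def hash_key(name, bits=32):
    """Hash a name onto the DHT identifier ring (assumed convention)."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** bits)

def replica_keys(rname, n):
    """Keys for rname-0 .. rname-(n-1); replica i lives at succ(hash(rname-i))."""
    return [hash_key(f"{rname}-{i}") for i in range(n)]

# Any process that knows the resource name can locate all n replicas:
print(replica_keys("printer", n=5))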

Page 21
The Decentralized Algorithm
• Every replica has a coordinator that controls access to it (the coordinator is the node that stores it)
• For a process to use the resource it must receive permission from m > n/2 coordinators
• This guarantees exclusive access as long as a coordinator only grants access to one process at a time

Page 22
The Decentralized Algorithm
• The coordinator notifies the requester when it has been denied access as well as when it is granted
  – Requester must “count the votes” and decide whether or not overall permission has been granted or denied
• If a process (requester) gets fewer than m votes, it will wait for a random time and then ask again
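
A sketch of the requester's side of the voting scheme described on the last two slides; ask(c) is an assumed callback that returns True for a grant and False for a denial, since the slides do not fix a message API:

import random
import time

def try_acquire(coordinators, ask):
    """Ask every replica coordinator; permission needs m > n/2 grants."""
    n = len(coordinators)
    m = n // 2 + 1                               # strict majority of the n coordinators
    votes = sum(1 for c in coordinators if ask(c))
    return votes >= m

def acquire_with_backoff(coordinators, ask, max_attempts=10):
    """Retry after a random wait whenever fewer than m votes are obtained."""
    for _ in range(max_attempts):
        if try_acquire(coordinators, ask):
            return True                          # majority granted: use the resource
        time.sleep(random.uniform(0.05, 0.5))    # wait a random time, then ask again
    return False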

Page 23
Analysis
• If a resource is in high demand, multiple requests will be generated
• It’s possible that processes will wait a long time to get permission
• Deadlock?
• Resource usage drops

Page 24
Analysis
• More robust than the central coordinator approach and the distributed approaches; if one coordinator goes down, others are available
  – If a coordinator fails and resets, it will not remember having granted access to one requester, and may then give access to another. According to the authors, it is highly unlikely that this will lead to a violation of mutual exclusion. (See the text for a probabilistic argument.)