rameshwarchintamani
Oct 08, 2025
About This Presentation
Replication: Data-Centric Consistency Models, Client-Centric
Consistency Models, Reasons for replication. Replica management:
Finding the best server location, Content replication and placement,
Content distribution, Managing replicated objects.
Consistency protocols: Primary-based protocols, replicated-write
protocols.
Fault Tolerance: Introduction to fault tolerance, Reliable client-server
communication, Reliable group communication, Distributed commit,
Recovery – Checkpointing, Message logging.
Case Study: Caching and replication in the Web.
Size: 1.85 MB
Language: en
Added: Oct 08, 2025
Slides: 70 pages
Slide Content
Fault Tolerance Basic Concepts Fault tolerance means dealing successfully with partial failure within a distributed system. Being fault tolerant is strongly related to what are called dependable systems. Dependability implies the following: availability, reliability, safety, and maintainability.
Dependability Basic Concepts Availability – the system is ready to be used immediately. It refers to the probability that the system is operating correctly at any given moment. Reliability – the system can run continuously without failure. Safety – if a system fails, nothing catastrophic will happen. Maintainability – when a system fails, it can be repaired easily and quickly (sometimes, without its users noticing the failure).
But, What Is “Failure”? A system is said to “fail” when it cannot meet its promises. An error is a part of a system’s state that may lead to failure. For example, when transmitting packets across a network, it is to be expected that some packets will be damaged by the time they arrive at the receiver. Damaged in this context means that the receiver may incorrectly sense a bit value (e.g., reading a 1 instead of a 0). A failure is brought about by the existence of “errors” in the system. The cause of an error is a “fault”.
Types of Fault There are three main types of ‘fault’: Transient Fault – appears once, then disappears. Intermittent Fault – occurs, vanishes, reappears; but: follows no real pattern (worst kind). Permanent Fault – once it occurs, only the replacement/repair of a faulty component will allow the DS to function normally.
Failure Models
Types of “Failure”? Crash failure: the server halts, and nothing more is heard from it. An example of a crash failure is an OS that comes to a grinding halt and for which there is only one solution: reboot. Omission failure: occurs when a server fails to respond to a request; several things might go wrong. Timing failure: occurs when the response lies outside a specified real-time interval. A more serious type of failure is a response failure, in which the server’s response is simply incorrect. There are two kinds of response failure: in a value failure the server provides the wrong reply to a request, while a state-transition failure happens when the server reacts unexpectedly to an incoming request.
Failure Masking by Redundancy If a system is to be fault tolerant, the best it can do is try to hide the occurrence of failures from other processes. The key technique for masking faults is redundancy.
Failure Masking by Redundancy Strategy: hide the occurrence of failures from other processes using redundancy. Three main types: Information Redundancy – add extra bits to allow for error detection/recovery (e.g., Hamming codes and the like). Time Redundancy – perform an operation and, if need be, perform it again. Think about how transactions work (BEGIN/END/COMMIT/ABORT). Physical Redundancy – add extra (duplicate) hardware and/or software to the system. Extra processes can be added so that if a small number of them crash, the system can still function; by replicating processes, a high degree of fault tolerance may be achieved.
Failure Masking by Redundancy Physical redundancy is a well-known technique for providing fault tolerance. It is used in biology (mammals have two eyes, two ears, two lungs) and in aviation (an aircraft with 4 engines can fly on 3). It has also been used in electronic circuits for years. Consider, for example, the circuit of fig. (a). Here a signal passes through devices A, B, and C in sequence; if one of them is faulty, the final result will be incorrect. In fig. (b) each device is replicated three times, and following each stage in the circuit is a triplicated voter. Each voter is a circuit that has three inputs and one output. If two or three of the inputs are the same, the output is equal to that input; if all three are different, the output is undefined. This kind of design is known as Triple Modular Redundancy (TMR).
Failure Masking by Redundancy Suppose that element A2 fails. Each of the voters V1, V2, V3 gets two good (identical) inputs and one rogue input, and each of them outputs the correct value to the second stage. The effect of A2 failing is thus completely masked, so the inputs to B1, B2, B3 are exactly the same as they would have been had no fault occurred. Now consider what happens if B3 and C1 are also faulty, in addition to A2. These effects are also masked, so the three final outputs are still correct. Why are three voters needed at each stage? A voter is also a component and can itself be faulty. Suppose, for example, that V1 malfunctions. The input to B1 will then be wrong, but as long as everything else works, B2 and B3 will produce the same output and V4, V5, and V6 will all produce the correct result in stage 3.
Failure Masking by Redundancy Triple modular redundancy
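The voting logic of TMR can be sketched in a few lines. This is an illustrative software model only — the `good` and `bad` device functions below are hypothetical stand-ins for circuit elements, not real hardware description code:

```python
from collections import Counter

def voter(a, b, c):
    """3-input majority voter: if two or three inputs agree, the output
    equals that value; if all three differ, the output is undefined (None)."""
    (value, count), = Counter([a, b, c]).most_common(1)
    return value if count >= 2 else None

def tmr_stage(replicas, signal):
    """One TMR stage: three replicated devices feeding three voters."""
    outputs = [device(signal) for device in replicas]   # e.g. A1, A2, A3
    return [voter(*outputs) for _ in range(3)]          # V1, V2, V3

good = lambda x: x + 1    # a correctly working device (hypothetical)
bad = lambda x: -999      # a failed device emitting a rogue value

# With one faulty replica (A2), the failure is completely masked:
assert tmr_stage([good, bad, good], 5) == [6, 6, 6]
```

The assertion mirrors the slide's argument: the two good inputs outvote the rogue one at every voter, so the next stage never sees the fault.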
DS Fault Tolerance Topics Process Resilience Reliable Client/Server Communications Reliable Group Communication Distributed Commit Recovery Strategies
Process Resilience The key approach to tolerating a faulty process is redundancy: processes can be made fault tolerant by arranging to have a group of processes, with each member of the group being identical. A message sent to the group is delivered to all of the “copies” of the process (the group members), and then only one of them performs the required service. If one of the processes fails, it is assumed that one of the others will still be able to function (and service any pending request or operation). New groups can be created and old ones destroyed; a process can join a group or leave one during system operation, and a process can be a member of several groups at the same time. Mechanisms are therefore needed to manage groups and group membership.
Process Resilience The purpose of a group is to allow processes to deal with a single abstraction: a process can send a message to a group of servers without knowing the individual members. In some groups all the processes are equal: no one is boss, and all decisions are made collectively. In other groups some kind of hierarchy exists: one process is the coordinator and all the others are workers. In this model, when a request is generated, either by an external client or by one of the workers, it is sent to the coordinator, which then decides which worker is best suited to carry it out and forwards it there. A flat group is symmetrical and has no single point of failure; in a hierarchical group, if the coordinator fails, the entire group is brought to a halt.
Flat Groups versus Hierarchical Groups (a) Communication in a flat group. (b) Communication in a simple hierarchical group.
Process Resilience: Group Membership For group communication some method is needed for creating and deleting groups, as well as for allowing processes to join and leave groups. One possible approach is to have a group server to which all these requests can be sent; the group server maintains a complete database of all the groups and their membership. There are two ways to manage group membership: 1. Centralized 2. Distributed. The centralized approach introduces a single point of failure. In the distributed approach, a process can send a message to all group members in order to join a group; to leave a group, a member just sends a goodbye message to everyone.
Failure Masking and Replication By organizing a fault tolerant group of processes , we can protect a single vulnerable process. There are two approaches to arranging the replication of the group: Primary (backup) Protocols Replicated-Write Protocols
The Goal of Agreement Algorithms “To have all non-faulty processes reach consensus on some issue (quickly).” The two-army problem shows that, even with non-faulty processes, agreement between even two processes is not possible in the face of unreliable communication.
History Lesson: The Byzantine Empire Time : 330-1453 AD. Place : Balkans and Modern Turkey. Endless conspiracies, intrigue, and untruthfulness were alleged to be common practice in the ruling circles of the day ( sounds strangely familiar … ). That is: it was typical for intentionally wrong and malicious activity to occur among the ruling group. A similar occurrence can surface in a DS, and is known as ‘Byzantine failure’. Question : how do we deal with such malicious group members within a distributed system?
Agreement in Faulty Systems (1) Possible cases: Synchronous (lock-step) versus asynchronous systems. Communication delay is bounded (by a globally known, predetermined maximum time) or not. Message delivery is ordered (in real time) or not. Message transmission is done through unicasting or multicasting.
Agreement in Faulty Systems (2) Circumstances under which distributed agreement can be reached
Agreement in Faulty Systems (3) The Byzantine agreement problem for three non-faulty and one faulty process. (a) Each process sends their value to the others.
Agreement in Faulty Systems (4) The Byzantine agreement problem for three non-faulty and one faulty process. (b) The vectors that each process assembles based on (a). (c) The vectors that each process receives in step 3.
Agreement in Faulty Systems (5) The same as before, except now with two correct processes and one faulty process.
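The three-step exchange shown in these figures can be simulated directly. The sketch below is an illustrative model (process indices and the private values 10/20/40 are invented): four processes, one arbitrarily faulty, exchange values, forward the vectors they assembled, and then take a per-entry majority, after which all non-faulty processes agree on each other's values:

```python
import random
from collections import Counter

N = 4
FAULTY = 2                        # index of the arbitrarily faulty process
values = {0: 10, 1: 20, 3: 40}   # private values of the non-faulty processes

def said(sender, receiver, truth):
    """What `sender` tells `receiver`: a faulty sender lies arbitrarily,
    and may tell different receivers different things."""
    return random.randint(0, 9) if sender == FAULTY else truth

# Step 1: every process sends its value to every other process.
vectors = {p: {} for p in range(N)}       # vectors[p][q] = what q told p
for p in range(N):
    for q in range(N):
        if p != q:
            vectors[p][q] = said(q, p, values.get(q))

# Step 2: every process forwards the vector it assembled to the others.
reports = {p: {} for p in range(N)}       # reports[p][q] = q's forwarded vector
for p in range(N):
    for q in range(N):
        if p != q:
            reports[p][q] = {k: said(q, p, v) for k, v in vectors[q].items()}

# Step 3: each process takes a per-entry majority over what it was told.
def decide(p):
    out = {p: values[p]}                  # a process knows its own value
    for src in range(N):
        if src == p:
            continue
        votes = [vectors[p][src]]         # what src told p directly
        votes += [reports[p][q][src] for q in range(N) if q not in (p, src)]
        out[src] = Counter(votes).most_common(1)[0][0]
    return out

decisions = [decide(p) for p in range(N) if p != FAULTY]
# All non-faulty processes agree on the non-faulty members' values,
# no matter what the faulty process says:
assert all(d[0] == 10 and d[1] == 20 and d[3] == 40 for d in decisions)
```

For each non-faulty source, a process holds three reports of that source's value, of which at most one passed through the faulty process, so the majority is always correct — which is exactly why four processes tolerate one traitor.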
Reliable Client/Server Communications In addition to process failures, a communication channel may exhibit crash, omission, timing, and/or arbitrary failures. In practice, the focus is on masking crash and omission failures. For example : the point-to-point TCP masks omission failures by guarding against lost messages using ACKs and retransmissions. However, it performs poorly when a crash occurs (although a DS may try to mask a TCP crash by automatically re-establishing the lost connection).
RPC Semantics and Failures The RPC mechanism works well as long as both the client and server function perfectly. Five classes of RPC failure can be identified: The client cannot locate the server, so no request can be sent. The client’s request to the server is lost, so no response is returned by the server to the waiting client. The server crashes after receiving the request, and the service request is left acknowledged, but undone. The server’s reply is lost on its way to the client; the service has completed, but the results never arrive at the client. The client crashes after sending its request, and the server sends a reply to a newly restarted client that may not be expecting it.
The Five Classes of Failure (1) A server in client-server communication: The normal case. Crash after service execution. Crash before service execution.
The Five Classes of Failure (2) An appropriate exception handling mechanism can deal with a missing server. However, such technologies tend to be very language-specific, and they also tend to be non-transparent (which is a big DS ‘no-no’). Dealing with lost request messages can be dealt with easily using timeouts. If no ACK arrives in time, the message is resent. Of course, the server needs to be able to deal with the possibility of duplicate requests.
The Five Classes of Failure (3) Server crashes are dealt with by implementing one of three possible implementation philosophies: At-least-once semantics: a guarantee is given that the RPC occurred at least once, but (also) possibly more than once. At-most-once semantics: a guarantee is given that the RPC occurred at most once, but possibly not at all. No semantics: nothing is guaranteed, and clients and servers take their chances! It has proved difficult to provide exactly-once semantics.
Server Crashes (1) Remote operation: print some text and (when done) send a completion message. Three events that can happen at the server: Send the completion message (M), Print the text (P), Crash (C).
Server Crashes (2) These three events can occur in six different orderings: M →P →C: a crash occurs after sending the completion message and printing the text. M →C (→P): a crash happens after sending the completion message, but before the text could be printed. P →M →C: the text is printed and the completion message sent, after which a crash occurs. P →C (→M): the text is printed, after which a crash occurs before the completion message could be sent. C (→P →M): a crash happens before the server could do anything. C (→M →P): a crash happens before the server could do anything.
Server Crashes (3) Different combinations of client and server strategies in the presence of server crashes
The Five Classes of Failure (4) Lost replies are difficult to deal with. Why was there no reply? Is the server dead, slow, or did the reply just go missing? A request that can be repeated any number of times without any nasty side-effects is said to be idempotent. (For example: a read of a static web page is idempotent.) Nonidempotent requests (for example, the electronic transfer of funds) are a little harder to deal with. A common solution is to employ unique sequence numbers. Another technique is the inclusion of additional bits in a retransmission to identify it as such to the server.
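For nonidempotent requests, the sequence-number idea above amounts to a per-client reply cache on the server: a retransmission carrying an already-seen sequence number is answered from the cache instead of being re-executed. A minimal sketch (the `BankServer` class and `transfer` operation are invented for illustration):

```python
class BankServer:
    """The server remembers, per client, the sequence number and reply of
    the last request it executed; a retransmission with the same sequence
    number is answered from this cache instead of being re-executed."""
    def __init__(self):
        self.balance = 100
        self.last = {}               # client_id -> (seq, cached reply)

    def transfer(self, client_id, seq, amount):
        cached = self.last.get(client_id)
        if cached and cached[0] == seq:
            return cached[1]         # duplicate: replay, don't debit again
        self.balance -= amount       # the nonidempotent side effect
        self.last[client_id] = (seq, self.balance)
        return self.balance

s = BankServer()
assert s.transfer("c1", seq=1, amount=30) == 70
assert s.transfer("c1", seq=1, amount=30) == 70   # retransmission: no new debit
assert s.balance == 70
```

The client simply reuses the same sequence number when it retries after a timeout, so an at-least-once retry loop behaves, from the application's point of view, like at-most-once execution.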
The Five Classes of Failure (5) When a client crashes and an ‘old’ reply arrives later, such a reply is known as an orphan. Four orphan solutions have been proposed: extermination (the orphan is simply killed off); reincarnation (each client session has an epoch associated with it, making orphans easy to spot); gentle reincarnation (when a new epoch is identified, an attempt is made to locate a request’s owner; otherwise the orphan is killed); expiration (if the RPC cannot be completed within a standard amount of time, it is assumed to have expired). In practice, however, none of these methods is desirable for dealing with orphans. Research continues …
Reliable Group Communication Reliable communication means that a message sent to a process group should be delivered to each member of that group. Reliable multicast services guarantee that all messages are delivered to all members of a process group. For a small group, multiple reliable point-to-point channels will do the job; however, such a solution scales poorly as the group membership grows. Also: what happens if a process joins the group during communication? Worse: what happens if the sender of the multiple reliable point-to-point channels crashes halfway through sending the messages? To cover such situations, a distinction should be made between reliable communication in the presence of faulty processes and in the presence of only nonfaulty ones.
Reliable Group Communication In the first case, multicasting is considered reliable when it can be guaranteed that all nonfaulty group members receive the message, assuming agreement exists on who is a member of the group. If we assume that processes do not fail, and that processes do not join or leave the group while communication is going on, reliable multicasting simply means that every message should be delivered to each current group member; there is no requirement that all group members receive messages in the same order. A simple solution is shown in the figure. The sending process assigns a sequence number to each message it multicasts, and we assume that messages are received in the order they are sent, so it is easy for a receiver to detect that a message is missing. Each multicast message is stored locally in a history buffer at the sender. Assuming the receivers are known to the sender, the sender simply keeps the message in its buffer until each receiver has returned an acknowledgement.
Reliable Group Communication If a receiver detects it is missing a message, it may return a NACK, requesting the sender to retransmit. Alternatively, the sender may automatically retransmit the message when it has not received all acknowledgements within a certain time.
Basic Reliable-Multicasting Schemes A simple solution to reliable multicasting when all receivers are known and are assumed not to fail. (a) Message transmission. (b) Reporting feedback.
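The scheme in the figure — sender-assigned sequence numbers, a history buffer, ACKs, and NACK-triggered retransmission — can be sketched as follows. This is an illustrative model, not a real network stack; message loss is simulated with a hypothetical `lose_to` parameter:

```python
class Sender:
    def __init__(self, receivers):
        self.receivers = receivers
        self.history = {}            # seq -> message, kept until all ACK
        self.acked = {}              # seq -> set of receivers that ACKed
        self.next_seq = 0

    def multicast(self, msg, lose_to=()):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        self.history[seq] = msg
        self.acked[seq] = set()
        for r in self.receivers:
            if r not in lose_to:     # simulate loss of this copy
                r.deliver(self, seq, msg)

    def ack(self, receiver, seq):
        self.acked[seq].add(receiver)
        if self.acked[seq] == set(self.receivers):
            del self.history[seq]    # everyone has it: discard from buffer

    def retransmit(self, receiver, seq):   # answer a NACK from the buffer
        receiver.deliver(self, seq, self.history[seq])

class Receiver:
    def __init__(self):
        self.expected = 0            # next sequence number we should see
        self.delivered = []

    def deliver(self, sender, seq, msg):
        while self.expected < seq:   # gap detected: NACK the missing ones
            sender.retransmit(self, self.expected)
        if seq == self.expected:
            self.delivered.append(msg)
            self.expected += 1
            sender.ack(self, seq)

r1, r2 = Receiver(), Receiver()
s = Sender([r1, r2])
s.multicast("m0", lose_to=[r2])      # m0 is lost on its way to r2
s.multicast("m1")                    # the gap at r2 triggers a NACK
assert r1.delivered == r2.delivered == ["m0", "m1"]
```

Note how the sender only discards a message from its history buffer once every receiver has acknowledged it — exactly the condition the slide states.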
SRM: Scalable Reliable Multicasting Receivers never acknowledge successful delivery. Only missing messages are reported. NACKs are multicast to all group members. This allows other members to suppress their feedback, if necessary. To avoid “retransmission clashes”, each member is required to wait a random delay prior to NACKing.
SRM: Scalable Reliable Multicasting The key issue for scalable reliable multicasting is to reduce the number of feedback messages that are returned to the sender; a popular model is feedback suppression. In SRM, receivers never acknowledge the successful delivery of a multicast message, but report only when they are missing a message; only NACKs are returned as feedback. Whenever a receiver notices that it missed a message, it multicasts its feedback to the rest of the group. Multicasting feedback allows other group members to suppress their own feedback. Suppose several receivers missed message m. Each of them would need to return a NACK to the sender S so that m can be retransmitted; however, if we assume that retransmissions are always multicast to the entire group, it is sufficient that only a single request for retransmission reaches S. This scheme is shown in the figure.
Nonhierarchical Feedback Control Several receivers have scheduled a request for retransmission, but the first retransmission request leads to the suppression of others
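The random-delay suppression just described can be modeled in a few lines. All names here are invented for illustration, and the per-member timers are simulated simply by firing in order of the drawn random delays:

```python
import random

class Member:
    """A receiver that missed a message schedules a NACK after a random
    delay, and cancels it if it hears someone else's NACK first."""
    def __init__(self, name):
        self.name = name
        self.pending = None          # (random delay, missing seq) or None

    def notice_loss(self, seq):
        self.pending = (random.uniform(0, 1.0), seq)

    def hear_nack(self, seq):
        if self.pending and self.pending[1] == seq:
            self.pending = None      # suppressed: someone already asked

def run_round(members, lost_seq):
    """Simulate one feedback round: timers fire in order of their delay."""
    for m in members:
        m.notice_loss(lost_seq)
    sent = []
    for m in sorted(members, key=lambda m: m.pending[0]):
        if m.pending:                # timer fired before any suppression
            sent.append(m.name)
            m.pending = None
            for other in members:    # the NACK is multicast to the group
                other.hear_nack(lost_seq)
    return sent

members = [Member(f"r{i}") for i in range(5)]
# However many receivers missed the message, only one NACK reaches S:
assert len(run_round(members, lost_seq=3)) == 1
```

The first timer to expire multicasts the NACK, which suppresses every other pending timer for the same message — so the sender sees one retransmission request instead of one per receiver.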
Hierarchical Feedback Control Feedback suppression is basically a nonhierarchical solution. Achieving scalability for very large groups of receivers requires that hierarchical approaches be adopted. Assume there is only a single sender that needs to multicast messages to a very large group of receivers. The receivers are partitioned into a number of subgroups, which are organized into a tree. Within each subgroup, any reliable multicasting scheme that works for small groups can be used. Each subgroup appoints a local coordinator, which is responsible for handling retransmissions to the receivers in its subgroup; each local coordinator has its own history buffer. If the coordinator itself has missed a message m, it asks the coordinator of its parent subgroup to retransmit m.
Hierarchical Feedback Control In a scheme based on acknowledgements, a local coordinator sends an ACK to its parent if it has received the message. If a coordinator has received ACKs for message m from all members in its subgroup, as well as from its children, it can remove m from its history buffer. The main problem with the hierarchical solution is the construction of the tree; in many cases the tree needs to be constructed dynamically.
Hierarchical Feedback Control The essence of hierarchical reliable multicasting. Each local coordinator forwards the message to its children and later handles retransmission requests.
Atomic multicast
Virtual Synchrony Reliable multicast in the presence of process failures can be accurately defined in terms of process groups and changes to group membership. We make a distinction between receiving and delivering a message, adopting a model in which the DS consists of a communication layer, as shown in the figure. Within this communication layer messages are sent and received; a received message is locally buffered in the communication layer until it can be delivered to the application, which is logically placed at a higher layer.
Virtual Synchrony (1) The logical organization of a distributed system to distinguish between message receipt and message delivery.
Virtual Synchronous Multicast Consider the four processes shown in the figure. At a certain point in time process P1 joins the group, which then consists of P1, P2, P3, and P4. After some messages have been multicast, process P3 crashes; however, before crashing it succeeded in multicasting a message to processes P2 and P4, but not to P1. Virtual synchrony guarantees that the message is not delivered at all, effectively establishing the situation that the message had never been sent before P3 crashed. After P3 has been removed from the group, communication proceeds between the remaining group members. Later, when P3 recovers, it can join the group again, after its state has been brought up to date.
Virtual Synchrony (2) The principle of virtual synchronous multicast
Message Ordering (1) Four different orderings are distinguished: Unordered multicasts FIFO-ordered multicasts Causally-ordered multicasts Totally-ordered multicasts
Message Ordering (2) Three communicating processes in the same group. The ordering of events per process is shown along the vertical axis.
Message Ordering (3) Four processes in the same group with two different senders, and a possible delivery order of messages under FIFO-ordered multicasting
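FIFO-ordered multicasting, as in the figure, only constrains messages from the same sender: each receiver keeps a per-sender hold-back queue and releases a message only when all earlier messages from that sender have been delivered. A minimal sketch (class and sender names invented for illustration):

```python
from collections import defaultdict

class FifoReceiver:
    """Per-sender hold-back queue: a message is delivered only after all
    earlier messages from the same sender have been delivered."""
    def __init__(self):
        self.next_seq = defaultdict(int)    # expected seq, per sender
        self.hold_back = defaultdict(dict)  # buffered messages, per sender
        self.delivered = []

    def receive(self, sender, seq, msg):
        self.hold_back[sender][seq] = msg
        # release as many consecutive messages from this sender as possible
        while self.next_seq[sender] in self.hold_back[sender]:
            n = self.next_seq[sender]
            self.delivered.append(self.hold_back[sender].pop(n))
            self.next_seq[sender] = n + 1

r = FifoReceiver()
r.receive("P1", 1, "m2")    # arrives out of order: held back
r.receive("P1", 0, "m1")    # releases m1, then the buffered m2
r.receive("P2", 0, "x1")    # messages from other senders interleave freely
assert r.delivered == ["m1", "m2", "x1"]
```

Because the sequence counters are kept per sender, nothing constrains the relative order of P1's and P2's messages — which is exactly what distinguishes FIFO ordering from total ordering.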
Implementing Virtual Synchrony (1) Six different versions of virtually synchronous reliable multicasting
Implementing Virtual Synchrony (2) (a) Process 4 notices that process 7 has crashed and sends a view change
Implementing Virtual Synchrony (3) (b) Process 6 sends out all its unstable messages, followed by a flush message
Implementing Virtual Synchrony (4) (c) Process 6 installs the new view when it has received a flush message from everyone else
Distributed Commit General Goal: We want an operation to be performed by all group members, or none at all. [In the case of atomic multicasting, the operation is the delivery of the message.] There are three types of “commit protocol”: single-phase commit two-phase commit three-phase commit
Commit Protocols One-Phase Commit Protocol : An elected coordinator tells all the other processes to perform the operation in question. But, what if a process cannot perform the operation? There’s no way to tell the coordinator! Whoops … The solutions : The Two-Phase and Three-Phase Commit Protocols .
The Two-Phase Commit Protocol First developed in 1978!!! Summarized: GET READY, OK, GO AHEAD. The coordinator sends a VOTE_REQUEST message to all group members. The group member returns VOTE_COMMIT if it can commit locally, otherwise VOTE_ABORT . All votes are collected by the coordinator. A GLOBAL_COMMIT is sent if all the group members voted to commit. If one group member voted to abort, a GLOBAL_ABORT is sent. The group members then COMMIT or ABORT based on the last message received from the coordinator.
Two-Phase Commit (1) (a) The finite state machine for the coordinator in 2PC. (b) The finite state machine for a participant.
Two-Phase Commit (2) Actions taken by a participant P when residing in state READY and having contacted another participant Q
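The vote-collection logic of 2PC (ignoring crashes and timeouts) can be sketched as follows. The message strings follow the slides' vocabulary; the classes themselves are an illustrative model, not a production commit protocol:

```python
class Participant:
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = "INIT"

    def vote_request(self):           # phase 1: coordinator asks for votes
        self.state = "READY" if self.can_commit else "ABORT"
        return "VOTE_COMMIT" if self.can_commit else "VOTE_ABORT"

    def decide(self, decision):       # phase 2: coordinator's global decision
        if self.state == "READY":
            self.state = "COMMIT" if decision == "GLOBAL_COMMIT" else "ABORT"

def two_phase_commit(participants):
    """Coordinator: collect votes, then broadcast the global decision."""
    votes = [p.vote_request() for p in participants]
    decision = ("GLOBAL_COMMIT"
                if all(v == "VOTE_COMMIT" for v in votes)
                else "GLOBAL_ABORT")
    for p in participants:
        p.decide(decision)
    return decision

ps = [Participant(), Participant(), Participant(can_commit=False)]
assert two_phase_commit(ps) == "GLOBAL_ABORT"   # one abort vote aborts all
assert all(p.state == "ABORT" for p in ps)
```

The blocking problem discussed on the next slides is visible here too: a participant that has voted VOTE_COMMIT sits in READY and, in a real system, could do nothing but wait if the coordinator crashed before sending the decision.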
Big Problem with Two-Phase Commit It can lead to both the coordinator and the group members blocking , which may lead to the dreaded deadlock . If the coordinator crashes, the group members may not be able to reach a final decision , and they may, therefore, block until the coordinator recovers … Two-Phase Commit is known as a blocking-commit protocol for this reason. The solution? The Three-Phase Commit Protocol.
Three-Phase Commit (1) The main problem with 2PC is that when the coordinator has crashed, participants may not be able to reach a final decision; participants remain blocked until the coordinator recovers. In 3PC, the states of the coordinator and each participant satisfy the following two conditions: There is no single state from which it is possible to make a transition directly to either a COMMIT or an ABORT state. There is no state in which it is not possible to make a final decision, and from which a transition to a COMMIT state can be made.
Three-Phase Commit (2) (a) The finite state machine for the coordinator in 3PC. (b) The finite state machine for a participant.
Recovery Strategies Once a failure has occurred, it is essential that the process where the failure happened recovers to a correct state. Recovery from an error is fundamental to fault tolerance. Two main forms of recovery: Backward Recovery : return the system to some previous correct state (using checkpoints ), then continue executing. Forward Recovery : bring the system into a correct state, from which it can then continue to execute.
Forward and Backward Recovery Major disadvantage of Backward Recovery : Checkpointing can be very expensive (especially when errors are very rare). [Despite the cost, backward recovery is implemented more often. The “logging” of information can be thought of as a type of checkpointing.]. Major disadvantage of Forward Recovery : In order to work, all potential errors need to be accounted for up-front . When an error occurs, the recovery mechanism then knows what to do to bring the system forward to a correct state.
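Backward recovery as described above can be illustrated with a toy process that checkpoints its state after each successful step and rolls back to the last checkpoint when a (simulated) transient fault strikes. All names here are invented for illustration:

```python
import copy

class Process:
    """Backward recovery: checkpoint after each successful step, and roll
    back to the last checkpoint when a transient fault strikes."""
    def __init__(self):
        self.state = {"counter": 0}
        self.checkpoint = copy.deepcopy(self.state)

    def step(self, fail):
        self.state["counter"] += 1       # do some work
        if fail:
            raise RuntimeError("simulated transient fault")

    def run(self, schedule):             # schedule says which steps fail
        for fail in schedule:
            try:
                self.step(fail)
                self.checkpoint = copy.deepcopy(self.state)
            except RuntimeError:
                self.state = copy.deepcopy(self.checkpoint)  # roll back

p = Process()
p.run([False, False, True, False])   # the third step hits a transient fault
assert p.state["counter"] == 3       # the faulty step's work was undone;
                                     # the final step resumed from the checkpoint
```

The cost the slide mentions is visible here as the `deepcopy` after every step — real systems amortize it by checkpointing periodically and replaying a log of the steps in between.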
Recovery Example Consider as an example: Reliable Communications Retransmission of a lost/damaged packet is an example of a backward recovery technique. When a lost/damaged packet can be reconstructed as a result of the receipt of other successfully delivered packets, then this is known as Erasure Correction . This is an example of a forward recovery technique.
Checkpointing A recovery line
Independent Checkpointing The domino effect – Cascaded rollback
Characterizing Message-Logging Schemes Incorrect replay of messages after recovery, leading to an orphan process