Congestion control

37 slides, Nov 10, 2016


24-1 DATA TRAFFIC

The main focus of congestion control and quality of service is data traffic. In congestion control we try to avoid traffic congestion; in quality of service we try to create an appropriate environment for the traffic. So, before talking about congestion control and quality of service, we discuss the data traffic itself.

Topics discussed in this section:
- Traffic Descriptor
- Traffic Profiles

Figure 24.1 Traffic descriptors

Figure 24.2 Three traffic profiles

24-2 CONGESTION

Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets a network can handle). Congestion control refers to the mechanisms and techniques used to control congestion and keep the load below the capacity.

Topics discussed in this section:
- Network Performance

Congestion Control Algorithms

Congestion is the situation in which too many packets are present in the subnet.

Causes of Congestion

Congestion occurs when a router receives data faster than it can send it:
- Insufficient bandwidth
- Slow hosts
- Data simultaneously arriving from multiple lines destined for the same outgoing line

The system is not balanced: correcting the problem at one router will probably just move the bottleneck to another router.

Congestion Causes More Congestion

- Incoming messages must be placed in queues, and the queues have a finite size.
- Overflowing queues will cause packets to be dropped.
- Long queue delays will cause packets to be resent.
- Dropped packets will cause packets to be resent.
- Senders trying to transmit to a congested destination also become congested: they must continually resend packets that have been dropped or timed out, and they must continue to hold outgoing/unacknowledged messages in memory.

Congestion Control versus Flow Control

Flow control governs point-to-point traffic between a sender and a receiver (e.g., a fast host sending to a slow host). Congestion control governs the traffic throughout the network.

24-3 CONGESTION CONTROL

Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens or remove congestion after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).

Topics discussed in this section:
- Open-Loop Congestion Control
- Closed-Loop Congestion Control

Congestion Control

When one part of the subnet (e.g., one or more routers in an area) becomes overloaded, congestion results. Because routers are receiving packets faster than they can forward them, one of two things must happen:
- The subnet must prevent additional packets from entering the congested region until those already present can be processed.
- The congested routers can discard queued packets to make room for those that are arriving.

Two Categories of Congestion Control

Open-loop solutions attempt to prevent problems rather than correct them; they do not use runtime feedback from the system. Closed-loop solutions use feedback (measurements of system performance) to make corrections at runtime.

November 3, 2016, Veton Këpuska

General Principles of Congestion Control

By analogy with control theory, there are open-loop and closed-loop approaches.

Open-loop approach: the problem is solved in the design cycle; once the system is running, midcourse corrections are NOT made. Tools for open-loop control include:
- Deciding when to accept new traffic
- Deciding when to discard packets, and which ones
- Making scheduling decisions at various points in the network
Note that all these decisions are made without regard to the current state of the network.

Closed-loop approach: based on the principle of a feedback loop. The approach has three parts when applied to congestion control:
- Monitor the system to detect when and where congestion occurs.
- Pass this information to places where action can be taken.
- Adjust system operation to correct the problem.

Figure 24.5 Congestion control categories

Warning Bit / Backpressure

A special bit in the packet header is set by the router to warn the source when congestion is detected. The bit is copied and piggybacked on the ACK sent back to the sender. The sender monitors the number of ACK packets it receives with the warning bit set and adjusts its transmission rate accordingly.
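The slides do not specify how the sender reacts to warning bits, so the following is a hypothetical sketch: the sender looks at the fraction of recent ACKs carrying the warning bit and backs off multiplicatively or probes additively. The thresholds and factors are illustrative assumptions, not part of the original material.

```python
# Hypothetical sketch: adjust a send rate from the fraction of recent ACKs
# that carried the congestion warning bit. All constants are assumptions.

def adjust_rate(rate, recent_acks, min_rate=1.0, max_rate=1000.0):
    """recent_acks: list of booleans, True if that ACK had the warning bit set."""
    if not recent_acks:
        return rate
    warned = sum(recent_acks) / len(recent_acks)
    if warned > 0.5:            # mostly warnings: back off multiplicatively
        rate *= 0.5
    elif warned == 0.0:         # no warnings at all: probe with additive increase
        rate += 10.0
    return max(min_rate, min(max_rate, rate))

print(adjust_rate(100.0, [True, True, True, False]))  # → 50.0
print(adjust_rate(100.0, [False] * 4))                # → 110.0
```

The multiplicative-decrease/additive-increase shape mirrors how many feedback-based schemes react quickly to congestion signals while probing capacity slowly.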

Figure 24.6 Backpressure method for alleviating congestion

Choke Packets

A more direct way of telling the source to slow down. A choke packet is a control packet generated at a congested node and transmitted to the source to restrict traffic flow. The source, on receiving the choke packet, must reduce its transmission rate by a certain percentage. An example of a choke packet is the ICMP Source Quench packet.

Figure 24.7 Choke packet

Open-Loop Control

Network performance is guaranteed to all traffic flows that have been admitted into the network. Initially developed for connection-oriented networks. Key mechanisms:
- Admission Control
- Policing
- Traffic Shaping
- Traffic Scheduling

[Figure: typical bit rate (bits/second over time) demanded by a variable bit rate information source, showing peak rate and average rate]

Admission Control

- Flows negotiate a contract with the network, specifying their requirements: peak, average, and minimum bit rate; maximum burst size; delay and loss requirements.
- The network computes the resources needed ("effective" bandwidth).
- If the flow is accepted, the network allocates resources to ensure the QoS is delivered as long as the source conforms to its contract.
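The admission decision described above can be sketched as: admit a flow only if the total allocated effective bandwidth stays within link capacity. The slides do not give an effective-bandwidth formula, so the weighting between average and peak rate below is purely an illustrative assumption.

```python
# Sketch of admission control, assuming a made-up effective-bandwidth formula:
# something between the flow's average and peak rate. The 0.3 weight is an
# illustrative assumption, not a real effective-bandwidth computation.

def effective_bandwidth(avg_rate, peak_rate, burstiness_weight=0.3):
    return avg_rate + burstiness_weight * (peak_rate - avg_rate)

class AdmissionController:
    def __init__(self, capacity):
        self.capacity = capacity      # link capacity, bits/second
        self.allocated = 0.0          # effective bandwidth already promised

    def admit(self, avg_rate, peak_rate):
        need = effective_bandwidth(avg_rate, peak_rate)
        if self.allocated + need <= self.capacity:
            self.allocated += need    # reserve resources for this flow
            return True
        return False                  # reject: contract cannot be honored

ac = AdmissionController(capacity=10_000_000)             # assume a 10 Mb/s link
print(ac.admit(avg_rate=1_000_000, peak_rate=4_000_000))  # → True (needs 1.9 Mb/s)
print(ac.admit(avg_rate=8_000_000, peak_rate=9_000_000))  # → False (would exceed capacity)
```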

Policing

The network monitors traffic flows continuously to ensure they meet their traffic contract. When a packet violates the contract, the network can discard it or tag it with lower priority; if congestion occurs, tagged packets are discarded first. The Leaky Bucket Algorithm is the most commonly used policing mechanism:
- The bucket has a specified leak rate for the average contracted rate.
- The bucket has a specified depth to accommodate variations in the arrival rate.
- An arriving packet is conforming if it does not cause the bucket to overflow.
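The three bullet points above can be sketched directly, assuming byte/second units: the bucket level drains at the contracted leak rate, and a packet conforms only if adding its size does not overflow the bucket depth.

```python
# Sketch of leaky-bucket policing as described above. Units (bytes, seconds)
# are assumptions; a violating packet would be discarded or tagged.

class LeakyBucketPolicer:
    def __init__(self, leak_rate, depth):
        self.leak_rate = leak_rate    # bytes/second: contracted average rate
        self.depth = depth            # bytes: tolerated variation in arrivals
        self.level = 0.0
        self.last_time = 0.0

    def conforming(self, arrival_time, packet_size):
        # Drain the bucket for the time elapsed since the last arrival.
        elapsed = arrival_time - self.last_time
        self.level = max(0.0, self.level - self.leak_rate * elapsed)
        self.last_time = arrival_time
        if self.level + packet_size <= self.depth:
            self.level += packet_size # conforming: packet fills the bucket
            return True
        return False                  # violation: discard or tag the packet

p = LeakyBucketPolicer(leak_rate=1000, depth=2000)
print(p.conforming(0.0, 1500))   # → True: bucket has room
print(p.conforming(0.1, 1500))   # → False: only 100 bytes drained so far
```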

Traffic Shaping

Another method of congestion control is to "shape" the traffic before it enters the network. Traffic shaping controls the rate at which packets are sent (not just how many), and is used in ATM and Integrated Services networks. At connection set-up time, the sender and carrier negotiate a traffic pattern (shape). Two traffic shaping algorithms are:
- Leaky Bucket
- Token Bucket

The Leaky Bucket Algorithm

The Leaky Bucket Algorithm is used to control the rate at which traffic enters the network. It is implemented as a single-server queue with constant service time. If the bucket (buffer) overflows, packets are discarded.

The Leaky Bucket Algorithm. (a) A leaky bucket with water. (b) A leaky bucket with packets.

Leaky Bucket Algorithm, cont.

The leaky bucket enforces a constant output rate (the average rate) regardless of the burstiness of the input, and does nothing when the input is idle. The host injects one packet per clock tick onto the network, resulting in a uniform flow of packets that smooths out bursts and reduces congestion.

When packets are all the same size (as with ATM cells), one packet per tick works well. For variable-length packets, it is better to allow a fixed number of bytes per tick. For example, 1024 bytes per tick allows one 1024-byte packet, two 512-byte packets, or four 256-byte packets on one tick.
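The byte-counting variant described above can be sketched as a per-tick byte budget: on each tick, queued packets are released as long as they fit within the budget, reproducing the 1024-byte example from the slide.

```python
# Sketch of the byte-counting leaky bucket: each tick grants a fixed byte
# budget, and queued packets are sent while they fit within it.

from collections import deque

def leaky_bucket_tick(queue, bytes_per_tick):
    """Release queued packets (given by size in bytes) that fit in this tick."""
    budget = bytes_per_tick
    sent = []
    while queue and queue[0] <= budget:
        size = queue.popleft()    # packet at the head of the queue goes out
        budget -= size
        sent.append(size)
    return sent                   # remaining packets wait for the next tick

print(leaky_bucket_tick(deque([1024]), 1024))          # → [1024]
print(leaky_bucket_tick(deque([512, 512, 512]), 1024)) # → [512, 512]
```

Note that an oversized packet at the head of the queue simply waits; this sketch never splits packets across ticks.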

Figure 24.19 Leaky bucket

Figure 24.20 Leaky bucket implementation

Note: A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may drop packets if the bucket is full.

Note: The token bucket allows bursty traffic at a regulated maximum rate.

Leaky Bucket Traffic Shaper

[Figure: incoming traffic enters a buffer of size N; a server plays packets out as shaped traffic]

- Incoming packets are buffered and played out periodically to conform to the traffic parameters.
- Surges in arrivals are buffered and smoothed out.
- Packet loss is possible due to buffer overflow.
- This is too restrictive, since conforming traffic does not need to be completely smooth.

Token Bucket Algorithm

In contrast to the leaky bucket, the Token Bucket Algorithm allows the output rate to vary, depending on the size of the burst. In the token bucket algorithm, the bucket holds tokens; to transmit a packet, the host must capture and destroy one token. Tokens are generated by a clock at the rate of one token every Δt sec. Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to send larger bursts later.
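The one-token-per-packet scheme above can be sketched as follows: tokens accumulate at one per Δt seconds up to the bucket size, and a send succeeds only by consuming a token. The specific Δt and bucket size are illustrative assumptions.

```python
# Sketch of the token bucket as described: one token generated every delta_t
# seconds, saved up to bucket_size; sending a packet consumes one token.

class TokenBucket:
    def __init__(self, delta_t, bucket_size):
        self.delta_t = delta_t          # seconds per generated token
        self.bucket_size = bucket_size  # maximum tokens an idle host can save
        self.tokens = 0.0
        self.last_time = 0.0

    def try_send(self, now):
        # Accumulate tokens generated since the last call, capped at bucket size.
        self.tokens = min(self.bucket_size,
                          self.tokens + (now - self.last_time) / self.delta_t)
        self.last_time = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0          # capture and destroy one token
            return True
        return False                    # no token: the packet must wait

tb = TokenBucket(delta_t=0.1, bucket_size=5)
print(tb.try_send(0.0))   # → False: no tokens accumulated yet
print(tb.try_send(1.0))   # → True: 10 tokens were generated, capped at 5
print([tb.try_send(1.0) for _ in range(5)])  # → [True, True, True, True, False]
```

The cap on saved tokens is what bounds the burst: after a long idle period, at most `bucket_size` packets can go out back to back.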

The Token Bucket Algorithm. (a) Before. (b) After.

Figure 24.21 Token bucket

Token Bucket Traffic Shaper

[Figure: incoming traffic enters a buffer of size N; tokens arrive periodically into a token bucket of size K; a server releases packets as shaped traffic]

- The token rate regulates the transfer of packets.
- If sufficient tokens are available, packets enter the network without delay.
- K determines how much burstiness is allowed into the network.
- An incoming packet must have sufficient tokens before admission into the network.
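A byte-granularity sketch of this shaper, under stated assumptions: tokens are counted in bytes and arrive at a fixed rate up to depth K, and a packet of length L is admitted without delay only when at least L byte-tokens are available. Starting with a full bucket is an assumption made to show the burst behavior.

```python
# Sketch of the token bucket shaper at byte granularity: the depth K bounds
# the burst admitted at full speed. All parameter values are illustrative.

class TokenBucketShaper:
    def __init__(self, token_rate, depth_k):
        self.token_rate = token_rate  # byte-tokens added per second
        self.depth_k = depth_k        # K: burst size allowed into the network
        self.tokens = depth_k         # assume a full bucket to start
        self.last_time = 0.0

    def admit(self, now, packet_len):
        # Accrue byte-tokens for the elapsed time, capped at depth K.
        self.tokens = min(self.depth_k,
                          self.tokens + self.token_rate * (now - self.last_time))
        self.last_time = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True               # enters the network without delay
        return False                  # must wait in the buffer for tokens

s = TokenBucketShaper(token_rate=1000, depth_k=3000)
print(s.admit(0.0, 3000))   # → True: a full-depth burst passes immediately
print(s.admit(0.5, 1000))   # → False: only 500 byte-tokens have accrued
```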

Leaky Bucket vs Token Bucket

- The leaky bucket discards packets; the token bucket does not. The token bucket discards tokens.
- With the token bucket, a packet can only be transmitted if there are enough tokens to cover its length in bytes.
- The leaky bucket sends packets at an average rate; the token bucket allows large bursts to be sent faster by speeding up the output.
- The token bucket allows saving up tokens (permissions) to send large bursts; the leaky bucket does not allow saving.

Load Shedding

When buffers become full, routers simply discard packets. Which packet is chosen as the victim depends on the application and on the error strategy used in the data link layer:
- For a file transfer, for example, we cannot discard older packets, since this would cause a gap in the received data.
- For real-time voice or video, it is probably better to throw away old data and keep new packets.
- One approach is to get the application to mark packets with a discard priority.
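The application-marked discard priority can be sketched with a priority queue: when the buffer overflows, the router sheds the packet marked most expendable. The class and priority values below are hypothetical illustrations, not from the slides.

```python
# Sketch of priority-based load shedding: on buffer overflow, drop the queued
# packet with the highest discard priority (the one the application marked
# most expendable). Names and priority values are illustrative.

import heapq

class Router:
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []          # entries: (-discard_priority, seq, packet)
        self.seq = 0            # tie-breaker so packets never compare directly

    def enqueue(self, packet, discard_priority):
        """Higher discard_priority = more willing to drop. Returns shed packet or None."""
        heapq.heappush(self.heap, (-discard_priority, self.seq, packet))
        self.seq += 1
        if len(self.heap) > self.capacity:
            _, _, victim = heapq.heappop(self.heap)  # most expendable packet
            return victim
        return None

r = Router(capacity=2)
print(r.enqueue("video-old", discard_priority=9))   # → None: buffer has room
print(r.enqueue("file-data", discard_priority=1))   # → None
print(r.enqueue("video-new", discard_priority=5))   # → video-old is shed
```

This matches the voice/video case above: old real-time data gets a high discard priority, while file-transfer data, which cannot tolerate gaps, gets a low one.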