MUX, ALOHA, and CSMA Schemes in Networking

circularsuom · 32 slides · Oct 16, 2024

About This Presentation

Topic: multiplexing

Slide Content

Multiple Access Protocols
The Data Link Layer is responsible for
transmission of data between two nodes. Its
main functions are:
Data Link Control
Multiple Access Control

Data Link Control – handles framing, flow
control, and error control between the two nodes.

Multiple Access Control – protocols that
coordinate access to a shared medium; they
are required to decrease collisions and avoid
crosstalk.

Channelization
In channelization, the available bandwidth of the
link is shared in time, frequency, or code among
multiple stations, which can then access the
channel simultaneously.

Frequency Division Multiplexing
(FDM)

This is possible when the
transmission medium has a greater
bandwidth than the combined bandwidth
of the signals to be transmitted.

A number of signals can be
transmitted at the same time.

Each source is allotted a frequency
range in which it can transfer its
signals, and a suitable frequency gap
is kept between two adjacent signals
to avoid overlapping.

This type of multiplexing is
commonly seen in cable TV
networks.
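The band allocation described above can be sketched in a few lines. This is a minimal model, not a real radio planner; the function name and the FM-style example frequencies are mine:

```python
def fdm_plan(base_hz, signal_bw_hz, guard_hz, n_sources):
    """Assign each source a non-overlapping frequency band, leaving a
    guard gap between adjacent bands to avoid overlap and crosstalk."""
    plan = []
    low = base_hz
    for source in range(n_sources):
        plan.append((source, low, low + signal_bw_hz))  # (source, low edge, high edge)
        low += signal_bw_hz + guard_hz                  # next band starts past the guard gap
    return plan

# Example: three 200 kHz signals starting at 88 MHz with 50 kHz guard gaps.
bands = fdm_plan(88_000_000, 200_000, 50_000, 3)
```

Each source can then transmit continuously inside its own band, which is why all the signals travel at the same time.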

Time Division Multiplexing (TDM)

This is possible when the data transmission rate of the medium is
much higher than the data rate of the sources.

Multiple signals can be transmitted if each signal is allowed to
be transmitted for a definite amount of time.

These time slots are so small that all transmissions appear to
be in parallel.

Synchronous TDM

Time slots are preassigned and
fixed.

Each source is given its own time slot
every time its turn comes around.

This turn may be once per cycle, or
several turns per cycle if it has a high
data transfer rate, or once in a
number of cycles if it is slow.

The slot is given even if the source is
not ready with data, in which case the
slot is transmitted empty.
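The fixed-slot behaviour can be sketched as follows. The queue-of-messages representation of each source is an assumption for illustration, not part of the slides:

```python
def synchronous_tdm_frame(source_queues):
    """Build one synchronous TDM frame: every source gets its
    preassigned slot in every cycle, and a source with no data ready
    still occupies its slot, which then travels empty (None)."""
    return [(i, queue.pop(0) if queue else None)
            for i, queue in enumerate(source_queues)]

# Source 1 has nothing to send, so its slot is transmitted empty.
frame = synchronous_tdm_frame([["A1"], [], ["C1", "C2"]])
```

The empty `(1, None)` slot is exactly the wasted capacity that asynchronous TDM, described next, tries to reclaim.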

Asynchronous TDM

In this method, slots are not fixed.

They are allotted dynamically depending on the
speed of the sources, and on whether they are ready
for transmission.

Random Access Protocol
In this, all stations have the same priority; that is, no station has
more priority than another station.
Any station can send data depending on the medium's state (idle or
busy). It has two features:
There is no fixed time for sending data
There is no fixed sequence of stations sending data

Aloha Protocols

The Aloha protocol was designed as part of a project at the University
of Hawaii.
--- It provided data transmission between computers on several of the
Hawaiian Islands using radio transmissions.
1. Communication was typically between remote stations and a
central site named Menehune, or vice versa.
2. All messages to the Menehune were sent using the same frequency.
3. When it received a message intact, the Menehune would broadcast
an ack on a distinct outgoing frequency.
4. The outgoing frequency was also used for messages from the
central site to remote computers.
5. All stations listened for messages on this second frequency.

Pure Aloha

Pure Aloha is an unslotted,
fully-decentralized
protocol.

It is extremely simple and
trivial to implement.

The ground rule is - "when you want
to talk, just talk!".

So, a node that wants to
transmit will go ahead and send
the packet on its broadcast channel,
with no consideration whatsoever as
to whether anybody else is
transmitting or not.

Drawbacks

One serious drawback here is that you don't know whether what
you are sending has been received properly or not (so to
speak, "whether you've been heard and understood?").

To resolve this, in Pure Aloha, when one node finishes
speaking, it expects an acknowledgement in a finite amount of
time - otherwise it simply retransmits the data.

This scheme works well in small networks where the load is not
high.

But in large, load intensive networks where many nodes may
want to transmit at the same time, this scheme fails miserably.

This led to the development of Slotted Aloha.
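The transmit, wait-for-ack, retransmit loop described above can be sketched like this. The `try_send` callback, which returns True when an acknowledgement arrives within the timeout, is a hypothetical stand-in for the radio and timer machinery:

```python
import random

def pure_aloha_send(frame, try_send, max_backoff=8, max_tries=16):
    """Pure ALOHA sender sketch: "when you want to talk, just talk",
    then wait a finite time for an ack; if none arrives, back off a
    random number of frame times and retransmit.
    Returns the number of transmissions it took."""
    for attempt in range(1, max_tries + 1):
        if try_send(frame):              # transmitted and ack arrived in time
            return attempt
        random.randint(1, max_backoff)   # stand-in for the random backoff wait
    raise RuntimeError("no ack after max_tries transmissions")

# A channel that loses the first two transmissions:
outcomes = iter([False, False, True])
attempts = pure_aloha_send(b"data", lambda f: next(outcomes))
```

The random backoff matters: if two colliding nodes both retransmitted after a fixed delay, they would simply collide again.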

Slotted Aloha

This is quite similar to Pure
Aloha, differing only in the way
transmissions take place.

Instead of transmitting right at
demand time, the sender waits
for some time.

This delay is specified as follows
- the timeline is divided into equal
slots and then it is required that
transmission should take place
only at slot boundaries.

To be more precise, slotted
ALOHA makes the following
assumptions:
Nodes must send only at the beginning of
the time slot
Nodes must wait until the next time slot if
they miss the current one
The vulnerable time is halved from the
Pure ALOHA case
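The halved vulnerable time can be checked with a small model. The function names are mine, and times are measured in units of one frame/slot:

```python
def collides_pure(t1, t2, frame_time=1.0):
    """Pure ALOHA: frames go out the instant they arrive, so two
    frames overlap whenever their start times are less than one frame
    time apart - a vulnerable period of 2 * frame_time."""
    return abs(t1 - t2) < frame_time

def collides_slotted(t1, t2, frame_time=1.0):
    """Slotted ALOHA: a frame arriving at time t is held until the
    next slot boundary, so two frames collide only if they arrive in
    the same slot - a vulnerable period of just frame_time."""
    return int(t1 // frame_time) == int(t2 // frame_time)

# Arrivals at t=0.9 and t=1.1 collide in pure ALOHA but not in
# slotted ALOHA, since they fall into different slots.
```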

Carrier Sense Multiple Access Protocols
In both slotted and pure ALOHA, a
node's decision to transmit is made
independently of the activity of the
other nodes attached to the
broadcast channel
As humans, we have human
protocols that allow us not only
to behave with more civility,
but also to decrease the amount of
time spent "colliding" with each other
in conversation, consequently
increasing the amount of data we
exchange in our conversations.
There are two important rules for polite human
conversation:
1.Listen before speaking:
If someone else is speaking, wait until they are
done.
In the networking world, this is termed carrier
sensing - a node listens to the channel before
transmitting.
If a frame from another node is currently being
transmitted into the channel, a node then
waits ("backs off") a random amount of time
and then again senses the channel.
If the channel is sensed to be idle, the node then
begins frame transmission.
Otherwise, the node waits another random
amount of time and repeats this process.

2. If someone else begins talking at the same
time, stop talking.

In the networking world, this is termed collision
detection - a transmitting node listens to the
channel while it is transmitting.

If it detects that another node is transmitting an
interfering frame, it stops transmitting and uses
some protocol to determine when it should next
attempt to transmit.
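Putting the "listen before speaking" rule into code, a minimal carrier-sensing sketch might look like this. `sense_idle` and `transmit` are hypothetical callbacks, and the actual timed wait is elided:

```python
import random

def csma_send(frame, sense_idle, transmit, max_wait=1.0):
    """Carrier-sensing sketch: sense the channel; while it is busy,
    back off a random amount of time and sense again; transmit only
    once the channel is sensed idle.
    Returns how many times the sender had to back off."""
    backoffs = 0
    while not sense_idle():
        random.uniform(0, max_wait)  # stand-in for the random backoff wait
        backoffs += 1
    transmit(frame)
    return backoffs

# A channel that is busy twice, then idle:
states = iter([False, False, True])
sent = []
waits = csma_send("frame", lambda: next(states), sent.append)
```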

CSMA - Carrier Sense Multiple Access

This is the simplest version of the CSMA protocol,
as described above.

It does not specify any collision detection or
handling.

So this is not a very good protocol for large,
load-intensive networks.

We need an improvement over CSMA - this
led to the development of CSMA/CD.

CSMA/CD- CSMA with Collision
Detection

In this protocol, while transmitting the data, the
sender simultaneously tries to receive it.

So, as soon as it detects a collision (it doesn't
receive its own data), it stops transmitting.

Thereafter, the node waits for some time interval
before attempting to transmit again.

Simply put, "listen while you talk".

But, how long should one wait for the carrier to
be freed?

There are three schemes to handle this:

1-Persistent:
In this scheme, transmission proceeds
immediately if the carrier is idle.
However, if the carrier is busy, the sender
continues to sense the carrier until it becomes
idle.
The main problem here is that if more than one
transmitter is ready to send, a collision is
guaranteed!

Non-Persistent:
In this scheme, the broadcast channel is not monitored
continuously.
The sender polls it at random time intervals and transmits
whenever the carrier is idle.
This decreases the probability of collisions.
But it is not efficient in a low-load situation, where the number of
collisions is small anyway.
The problems it entails are:
-If back-off time is too long, the idle time of carrier is wasted in
some sense.
-It may result in long access delays.

p-Persistent:
Even if a sender finds the carrier to be idle, it uses a probabilistic distribution to
determine whether to transmit or not.
Put simply, "toss a coin to decide".
If the carrier is idle, then transmission takes place with a probability p and the
sender waits with a probability 1-p.
This scheme is a good trade-off between the non-persistent and 1-persistent
schemes.
So, for low-load situations p can be high (p = 1 reduces to 1-persistent), and
for high-load situations p may be lower.
Clearly, the value of p plays an important role in determining the performance of
this protocol.
Also, the same p is likely to give different performance at different loads.
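The coin toss can be written down directly; the `rng` parameter exists only so this sketch is deterministic to test:

```python
import random

def p_persistent_decision(p, rng=random.random):
    """p-persistent CSMA on an idle carrier: transmit with
    probability p, defer to the next slot with probability 1 - p.
    With p = 1 this reduces exactly to the 1-persistent scheme."""
    return "transmit" if rng() < p else "defer"
```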

CSMA/CD doesn't work in some wireless scenarios called "hidden node"
problems.
Consider a situation, where there are 3 nodes - A, B and C
communicating with each other using a wireless protocol.

CSMA with Collision Avoidance
Hidden Node Problem:
In the case of wireless network it is
possible that A is sending a
message to B.
But C is out of its range and hence
while "listening" on the network it
will find the network to be free and
might try to send packets to B at the
same time as A.
So, there will be a collision at B:
B will receive two packets at the
same time and might not be able to
understand either of them.
The problem can be looked upon as if
A and C are hidden from each other.
Hence it is called the "hidden node
problem".
Exposed Node Problem
If C is transmitting a message to D and B wants to transmit a message
to A, B will find the network to be busy, as B hears C transmitting.
Yet even if B had transmitted to A, it would not have been a problem
at A or D.
CSMA/CD would not allow B to transmit to A, while the two
transmissions could have gone on in parallel.

Addressing hidden node problem (CSMA/CA)
Suppose A wants to send a packet to B.
Then it will first send a small packet to B called "Request to
Send" (RTS).
In response, B sends a small packet to A called "Clear to
Send" (CTS).
Only after A receives a CTS, it transmits the actual data.
Now, any of the nodes which can hear either CTS or RTS
assume the network to be busy.
However, even if some other node D, which is out of range of
both A and B, sends an RTS to C (which can hear at least one
of the RTS or CTS between A and B),
C will not send a CTS to it, and hence communication
will not be established between C and D.
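A toy model of this overhearing rule; the linear A-B-C-D topology, the positions, and the one-hop radio range are all assumptions for illustration:

```python
def overheard(control_frames, node, in_range):
    """Control frames (RTS/CTS) that `node` can hear; a node that
    overhears any of them treats the medium as busy and stays silent."""
    return [f for f in control_frames if in_range(f["from"], node)]

# Hypothetical 1-D topology A-B-C-D with a radio range of one hop.
pos = {"A": 0, "B": 1, "C": 2, "D": 3}
in_range = lambda a, b: abs(pos[a] - pos[b]) == 1

# A -> B handshake: A's RTS is heard by B, and B's CTS by A and C.
frames = [{"type": "RTS", "from": "A"}, {"type": "CTS", "from": "B"}]

# C overhears B's CTS, so it will not answer an RTS from D, and the
# C-D exchange is blocked for the duration of the A-B transfer.
c_hears = overheard(frames, "C", in_range)   # C hears B's CTS
d_hears = overheard(frames, "D", in_range)   # D hears nothing at all
```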

Does CSMA/CD work in the wireless
networks ?
Wireless transceivers can't send and receive on the same channel at the same
time, so they can't detect collisions.
This is due to the fact that there's an enormous difference between send
power (generally around 100 mW) and receive sensitivity (commonly
around 0.01 to 0.0001 mW).
The sending would cover up any possible chance of receiving a foreign
signal: no chance of "Collision Detection".
For this reason, Collision Avoidance with control messages is necessary.
On most wired networks (like Ethernet), the voltage is around 1 to 2.5 V,
and both sending and receiving use roughly the same voltage.
So if you're sending a 2.5 V signal, and someone else collides with a -2.5 V
signal, the "Detection" circuitry will see a signal somewhere around 0 V and
know a collision occurred.

What is "Fair" ?
Fairness:
Humans have a "built-
in" sense of fairness
for certain things...
Example 1:

4 children sharing a cake

everyone can eat up the
share he/she receives

In this case, the "fair"
sharing is when each child
receives 1/4 of the cake

Example 2:

4 children sharing a
cake, but:
Child 1 can only eat
1/10 of the cake.

Fact: it is not wise to
give child 1 more
than 1/10 of the cake,
because the extra
cake (resource) is
wasted.

In this case, the "fair"
sharing is:

Child 1 gets 1/10

The other 3 children
get 3/10 each

This sense of "fairness" is called
max-min fair:

The least demanding one will
get its fair share first

After this, the next least
demanding one will get its fair
share first

And so on..

Max-Min Fairness Algorithm
A sharing of a network resource is
"max-min fair" when:
The lowest demand on data rate
is maximized;
After the lowest demand on the
network resource has been
satisfied, only then is the second
lowest demand on the network
resource maximized;
After the second lowest demand
on the network resource has been
satisfied, only then is the third lowest
demand on the network resource
maximized;
and so on.
1. Compute the fair share of each
unsatisfied flow
2. Assign the fair share to each
unsatisfied flow
3. Some flows may have received
more than their request
If there are over-assignments:
1. Take back the over-assigned
shares; the sum of the
over-assigned shares is the residual
amount
2. Assign the residual amount to
the remaining unsatisfied flows
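The steps above can be implemented directly. A minimal sketch (the function name is mine), run on the demands used in the example that follows (2, 2.6, 4, and 5 Mbps over a 10 Mbps bottleneck):

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation: repeatedly split the remaining
    capacity equally among the unsatisfied flows, cap each flow at
    its demand, and redistribute the over-assigned residual."""
    alloc = [0.0] * len(demands)
    unsatisfied = list(range(len(demands)))
    remaining = capacity
    while unsatisfied and remaining > 1e-12:
        share = remaining / len(unsatisfied)   # fair share this iteration
        remaining = 0.0
        still_unsatisfied = []
        for i in unsatisfied:
            need = demands[i] - alloc[i]
            if need <= share:
                alloc[i] = demands[i]          # satisfied: take back the excess
                remaining += share - need      # ...into the residual pool
            else:
                alloc[i] += share
                still_unsatisfied.append(i)
        unsatisfied = still_unsatisfied
    return alloc

# 10 Mbps bottleneck shared by flows demanding 2, 2.6, 4, and 5 Mbps:
allocation = max_min_fair(10, [2, 2.6, 4, 5])
```

Each pass of the while loop corresponds to one "iteration" in the worked example below.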

Example of a Max-Min Fair
Bandwidth assignment
Consider the following 4
flows sharing a
common bottle neck
link:

The bottleneck
link has a
bandwidth of 10
Mbps

There are 4 flows
sharing the
bottleneck link

The demands of
the flows are given
in the figure

Iteration 1:

Compute the fair share of each unsatisfied flow:
10 Mbps / 4 = 2.5 Mbps (per flow)

Assignment:
1. Flow 1: 2 Mbps (with 0.5 Mbps over-assignment)
2. Flow 2: 2.5 Mbps
3. Flow 3: 2.5 Mbps
4. Flow 4: 2.5 Mbps

Residual:
Unused bandwidth = 0.5 Mbps
The flow with the minimum demand (i.e., flow 1 with its 2 Mbps demand) has
been maximized.

Iteration 2:

Compute the fair share of each unsatisfied flow:
0.5 Mbps / 3 = 0.16666 Mbps (per flow)

Assignment:
1. Flow 2: 2.5 + 0.1 Mbps = 2.6 Mbps (because its demand = 2.6 Mbps)
2. Flow 3: 2.5 + 0.16666 Mbps = 2.66666 Mbps
3. Flow 4: 2.5 + 0.16666 Mbps = 2.66666 Mbps

Residual:
Unused bandwidth = 0.06666 Mbps

After the minimum demand (i.e., flow 1 with 2 Mbps) has been maximized,
the second lowest demand (i.e., flow 2 with 2.6 Mbps) is now maximized.

Iteration 3:

Compute the fair share of each unsatisfied flow:
0.06666 Mbps / 2 = 0.03333 Mbps (per flow)

Assignment:
1. Flow 3: 2.66666 + 0.03333 Mbps = 2.7 Mbps
2. Flow 4: 2.66666 + 0.03333 Mbps = 2.7 Mbps

Residual:
Unused bandwidth = 0.0 Mbps

Max-min fair assignment:
Flow 1: 2 Mbps
Flow 2: 2.6 Mbps
Flow 3: 2.7 Mbps
Flow 4: 2.7 Mbps
the lowest demand (= flow 1 with its 2 Mbps) is maximized;
the second lowest demand (= flow 2 with its 2.6 Mbps) is maximized;
the third lowest demand (= flow 3 with its 4 Mbps) is maximized;
(Note that maximized is not the same as satisfied. We gave flow 3 the highest possible assignment that is
fair)
the fourth lowest demand (= flow 4 with its 5 Mbps) is maximized;
(Note that maximized is not the same as satisfied. We gave flow 4 the highest possible assignment that is
fair)