[music] Good day, viewers.
In this segment we'll talk about the
sliding window algorithm for reliable data
transfer.
So the sliding window is one of the most
well known algorithms in computer
networking.
It combines pipelining for performance
with reliability.
And it does this by building on what we've
seen before, the Stop-and-Wait algorithm,
which used ARQ automatic repeat request,
with a window size of one.
Okay, so let me go back there, and start
at the beginning.
I said that ARQ, automatic repeat request, well that's a mechanism that provides reliability by retransmitting packets, or segments, or frames, depending on what layer it's operating at, until they're acknowledged by the other side. So it ensures they'll get through.
Now with Stop-and-Wait, which we saw
before, only one message can be
outstanding at a time.
Here's the time sequence diagram for the
normal case where nothing is lost.
You can see the sender sends the frame, numbers it number 0, and an ACK comes back. And then we're happy; the sender knows that frame has made it to the receiver. So the sender can move on and send the next frame. That's frame 1, and an ACK will come back for it eventually, and so forth.
There's a big limitation of Stop-and-Wait, however, and that's the fact that only one frame can be outstanding in the network at any given time. Now, this is fine if we're talking about a local network, because the propagation delay over a local area network is usually small enough that sending a single packet fills that network and uses it well.
On the other hand, imagine an internet path where the bandwidth-delay product might be much greater than one packet. In this case the packet will only fill a very small portion of the network path. It will be using the network inefficiently.
This is similar to the picture you can see
here in which there's just this small
packet in the midst of a long network path
between two hosts.
So let's take a look at how much of a
limitation this is.
Imagine that we have a network with a rate of 1 megabit per second; that's the capacity of the network. And the propagation delay is 50 milliseconds, so the round trip time will be 100 milliseconds. How many packets a second can we transfer reliably over this network using Stop-and-Wait, which will turn out to be a sliding window of size one? Well, I can see by inspection that if it takes 100 milliseconds, a tenth of a second, to send a bit to the other side and back, then at most we can send 10 packets a second.
Now, if every packet is 1250 bytes, chosen so that it's 10 kilobits, that is going to be an information transfer rate of 100 kilobits a second.
That's not much, considering that the
network path could handle a megabit per
second.
So we're only using a fraction of that.
We're using about ten percent of the network path's bandwidth.
Suppose we upgrade the link to be ten
megabits per second because we just want
everything to be faster.
How many packets could we send?
Again, the answer is 10 packets a second. Why? Because we're constrained by the round trip time; the bandwidth really didn't affect it at all.
This is going to be the same, 100 kilobits
a second, and now it's actually going to
be more like 1% of the capacity instead of
10%.
So for fast and long networks, Stop-and-Wait really uses them inefficiently.
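If you want to check that arithmetic, here's a small Python sketch of it, just for illustration; the numbers are the ones from this example, and it ignores transmission time, as we did above.

# Stop-and-Wait sends one packet per round trip time, no matter how fast the link is.
packet_bits = 1250 * 8           # a 1250-byte packet is 10 kilobits
rtt = 2 * 50e-3                  # 50 ms each way gives a 100 ms round trip time

for link_rate in (1e6, 10e6):    # the 1 Mbps and 10 Mbps links from the example
    throughput = packet_bits / rtt            # bits per second with a window of one
    utilization = throughput / link_rate
    print(f"{link_rate / 1e6:.0f} Mbps link: {1 / rtt:.0f} packets/s, "
          f"{throughput / 1e3:.0f} kb/s, {utilization:.0%} used")

That prints 10 packets a second and 100 kilobits a second in both cases, which is 10% of the slower link and 1% of the faster one.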
Okay, so sliding window generalizes Stop-and-Wait and considers that to be the special case of a window size of one. Sliding window will allow a window of up to W packets to be outstanding at any given time.
By the way, I'll talk for the sliding window about frames, packets, or segments as is convenient, since this algorithm could be used in either the link, network, or transport layers. We're really using it at the transport layer, and that's what TCP does. But since we're building on ARQ, and we saw that at the link layer, you'll still see frames, and I'll also sometimes talk about packets.
Now with a sliding window and a window size of W, that means we'll be able to send W packets per round trip time. The pipelining that we're getting by sending multiple packets at once is what's improving performance for us on the network, letting us use the entire network path.
So if I have a rate of 10 megabits per second, to use all of that I would want the window to be large enough that it is the same as 2BD expressed in packets. The reason for this is that we said you could send W packets per round trip time, and a round trip time is really 2D. If we want to send at the bandwidth, the full rate, then we want to send B bits per second for RTT seconds. Multiplying it all together, we get 2D times B for the bandwidth, which is 2BD. That's the amount of data that can be transferred over the path in one round trip time. So if we set W to that, we'll be able to send at that rate.
Let's see what kind of window sizes this
implies to get it working.
So what window size, w, would we like to
use to fill up the network to use it to
its capacity?
For the example from before, with a rate of 1 megabit per second and a delay of 50 milliseconds, we would like W to be equal to 2BD, that is, twice the bandwidth times the delay. I'm just going to multiply the bandwidth first: that's 10 to the 6 bits per second, and twice the delay is 100 milliseconds, or 100 times 10 to the minus 3 seconds, so that's 100 kilobits. Now expressed in packets, if a packet is about 10 kilobits, that's 1250 bytes, this is about 10 packets. So to use the path fully, we need a sliding window size W of about 10 packets rather than the 1 packet in Stop-and-Wait.
And this is what you might imagine, since we worked out before that we were only utilizing about 10% of its bandwidth, so we were off by a factor of 10.
Now, what if we increase the rate to 10
megabits per second?
Well, I could do my W equals 2BD calculation again. The only thing that's changed is the rate, which is the B in this equation; it has gone up by a factor of 10. So the window is going to be 1,000 kilobits, or approximately 100 packets. So you can see that I would need a reasonably sized sliding window to fully utilize this link, and the sliding window would just have to get bigger for faster and longer links.
Okay.
So here's all of this tidied up.
And hopefully, I got all of the math
right.
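As a sanity check on that math, here's the same W = 2BD calculation as a small Python sketch, again using just the example numbers from this lecture.

# Window needed to fill the path: W = 2BD, the data that fits in one round trip.
packet_bits = 1250 * 8        # about 10 kilobits per packet
delay = 50e-3                 # D = 50 ms one-way propagation delay

for bandwidth in (1e6, 10e6):                     # B = 1 Mbps, then 10 Mbps
    window_bits = 2 * bandwidth * delay           # 2BD in bits
    window_packets = window_bits / packet_bits
    print(f"B = {bandwidth / 1e6:.0f} Mbps: W = {window_bits / 1e3:.0f} kb, "
          f"about {window_packets:.0f} packets")

That gives 100 kilobits, about 10 packets, for the 1 megabit link, and 1,000 kilobits, about 100 packets, for the 10 megabit link.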
So, let's now talk about the sliding
window protocol in a little more detail.
We've just said that there are these w
packets outstanding.
But how, exactly, does it work?
Actually, there are many variations of the
sliding window protocol.
Depending on exactly the way you want to
handle the buffering, the
acknowledgements, and the retransmissions.
I'm going to sketch for you two versions.
One is called Go-Back-N.
This is the simplest version.
It's not terribly efficient when there are errors, but it's quite simple, and you don't have to do much at the receiver.
And I'll also talk about a version called
Selective Repeat.
Which is a little more involved especially
at the receiver.
But on the other hand it gives us slightly
better performance.
Or actually it could be a lot better
performance, in the case where there are
errors.
So this is what happens at the sender.
Now the picture below is showing you the
buffering as a function of the sequence
number space.
Sequence numbers go up to the right here, so the segments of data go along to the right in this picture. The sender is allowed to buffer up to W segments of data until they're acknowledged. So this is information that's been sent and not yet been acknowledged. The sender needs to buffer this because if it's not acknowledged, the sender is going to have to retransmit it, so it had better have a copy.
We'll express the sliding window with a couple of state variables.
One is going to be LFS, that's the last
frame sent.
That's here, so it's the highest numbered
segment which has actually been sent out
but not yet acknowledged.
The other variable is LAR, the last
acknowledgement received.
That's the number of the highest in
sequence acknowledgement which has been
received from the other side.
The sliding window allows us to have a buffer of up to W packets, so the difference between LAR and LFS, that's the number of packets we've sent into the network that haven't been acknowledged, can be up to W packets. You can see the sliding window here, and we actually have four packets out in the network, and the window is five. So actually we could fit one more.
All of the segments before the left side of this window, we're finished with them. They've all been sent through the network and acknowledged by the other side. We've got the acknowledgement. They're done, we're finished. And to the right side of this sliding window are segments that we can't yet send, because our window's not big enough.
Okay, so this is the sliding window; it evolves as we send new packets and receive ACKs.
So a couple of things could happen here,
what could happen?
Well one thing that could happen is the
transport layer could accept another
segment from the application, and that
would be this segment here.
And in this case it can go ahead and send
it.
Why? Because we only had four packets outstanding in the network, and our sliding window size is five. If we send this one, now we have five.
And you can see I've updated LFS here.
I haven't updated LAR because we haven't
received any acknowledgements.
So now our window is full.
At this stage, we can't send any more
segments, new segments into the network,
until some of the old segments which were
in the network are acknowledged.
Here's something else that can happen.
We can receive the next higher
acknowledgement.
Now you might have noticed that LAR has
moved along.
We actually received an acknowledgement
for this packet here, this segment here.
So LAR now points here.
I've not colored that pink anymore,
because we can free that buffer and throw
it away.
So in fact, now we only have these four
segments outstanding in the network, and
you can see that this sliding window is
still size 5, but it's slid to the right.
That's why it's called a sliding window,
it moves along.
That's opened up space for us to send one
new segment into the network.
And there you are.
That's how the sliding window is operating
in the normal case.
At the sender, you can see it moving along
sending new segments into the network as
old segments are acknowledged.
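To make that bookkeeping concrete, here's a minimal sketch of the sender side in Python. It's not how any real protocol stack is written; send_segment is a hypothetical callback standing in for whatever actually transmits a segment, and sequence numbers just count up from 1.

class SlidingWindowSender:
    def __init__(self, window_size, send_segment):
        self.W = window_size
        self.send_segment = send_segment  # hypothetical hook that puts a segment on the wire
        self.LAR = 0                      # Last Acknowledgement Received
        self.LFS = 0                      # Last Frame Sent
        self.buffer = {}                  # unacknowledged segments, kept for retransmission

    def can_send(self):
        # At most W segments may be outstanding, so LFS - LAR must stay below W.
        return self.LFS - self.LAR < self.W

    def send(self, data):
        if not self.can_send():
            return False                  # window is full; wait for ACKs to slide it
        self.LFS += 1
        self.buffer[self.LFS] = data      # keep a copy in case we must retransmit
        self.send_segment(self.LFS, data)
        return True

    def on_ack(self, ack_num):
        # A cumulative ACK means everything up to ack_num arrived, so slide the window.
        while self.LAR < ack_num:
            self.LAR += 1
            self.buffer.pop(self.LAR, None)   # free buffers we no longer need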
Let's look at what goes on at the receiver, because we've just seen the sending half.
And recall I was going to describe two
different variations for you.
The first variation is Go-Back-N, and it's very simple.
It's actually the simplest sliding window
receiver implementation.
Here the receiver only keeps a single packet buffer, a segment buffer, to put the next segment in. When it gets the next segment, the right segment, it passes it up to the application. So the application sees data in order. All we need here is one state variable, LAS, the last ACK sent.
What we're looking for is to receive the segment which is the next one beyond that. So this is what happens when a segment is received. We look at the sequence number; if it is the next one, then we accept it and put it in the buffer. We pass it up to the app as soon as the app can take it. We update LAS, which has now climbed up by one, and then we send an acknowledgement to the other side to say, yep, got it. If the sequence number is anything else, then we can simply discard it and not worry about it.
In effect, there has been some kind of error; a segment has been lost in the network. And the sender's going to work that out, and retransmit the lost segments.
We'll see that in a little while.
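Here's a minimal sketch of that Go-Back-N receiver in Python, just to pin down the rule. The deliver and send_ack callbacks are hypothetical stand-ins for handing data to the app and transmitting an ACK.

class GoBackNReceiver:
    def __init__(self, deliver, send_ack):
        self.deliver = deliver      # hypothetical hook: pass data up to the application
        self.send_ack = send_ack    # hypothetical hook: send an ACK number back
        self.LAS = 0                # Last ACK Sent, the highest in-order segment received

    def on_segment(self, seq, data):
        if seq == self.LAS + 1:     # exactly the next expected segment
            self.deliver(data)      # so the app sees data strictly in order
            self.LAS = seq
            self.send_ack(self.LAS)
        # anything else is simply discarded; the sender will time out and resend it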
Well now, let's look at a different version of the receiver, the Selective Repeat receiver.
This is a receiver that does a little more
work.
But can use the network capacity more
effectively when there's loss.
So here, the receiver still passes data up to the app in order, but it buffers out-of-order segments to reduce retransmissions.
So instead of having a buffer of size one, it's going to have a larger buffer. And it will take segments into that buffer and hold onto them even if they're not the next one expected. That way, when it later gets the segment that was missing, it will be able to pass the segments up to the application in order.
The ACK number we use is still going to convey the highest in-order segment that's been received. So it's still like LAS; it's that bottom edge of the sliding window. Plus, depending on the variation, it's quite common to send hints about which out-of-order segments have or haven't arrived. This is going to allow the sender to do a better job by understanding what packets or segments might have been lost.
TCP actually uses a Selective Repeat design.
And we'll see some of the details later
on, when we get to TCP.
But let me say a little more about
selective repeat at the receiver here.
I haven't really said how it works.
Here's how it works.
It has a buffer for up to w segments.
W is the same size as the sliding window.
That's the largest number of segments
which could be outstanding at any time.
And we're going to use the same single state variable, LAS, the last acknowledgement sent. When you receive a segment, if it fits in the window from the next segment, LAS plus 1, well actually any of the next W segments, from LAS plus 1 to LAS plus W, then you buffer it. You put it in the buffer you have.
Then you look at your buffer and you see if you have any segments in order starting at LAS plus 1 and onwards. If you do, you pass up those in-order segments to the app and you update LAS, pushing it up to the highest segment which has been delivered to the app.
And then you send an ACK for this LAS,
regardless of whether it's a new LAS
because it's moved, or if it's the old
LAS.
If it's the old LAS, you're really making the channel a little more reliable by repeating the acknowledgement to the sender. Even if some ACKs have been lost, since every ACK here is really telling the sender that the receiver got all segments up to and including this segment, if the sender gets any of them it has some idea of what's going on on the other side; it doesn't have to get them all.
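And here's the Selective Repeat receiver as a comparable sketch, with the same hypothetical deliver and send_ack hooks. It only shows the cumulative ACK; the optional hints about out-of-order segments are left out.

class SelectiveRepeatReceiver:
    def __init__(self, window_size, deliver, send_ack):
        self.W = window_size
        self.deliver = deliver
        self.send_ack = send_ack
        self.LAS = 0                 # Last ACK Sent, the highest in-order segment delivered
        self.buffer = {}             # out-of-order segments held for later

    def on_segment(self, seq, data):
        # Accept anything in the window LAS+1 .. LAS+W, even if it's out of order.
        if self.LAS < seq <= self.LAS + self.W:
            self.buffer[seq] = data
        # Deliver any in-order run starting at LAS+1, advancing LAS as we go.
        while self.LAS + 1 in self.buffer:
            self.deliver(self.buffer.pop(self.LAS + 1))
            self.LAS += 1
        # Always send the ACK for LAS, whether or not it moved.
        self.send_ack(self.LAS)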
Okay, so now we've seen the receiver a
little bit.
We've seen the sender in the normal case.
We haven't seen the sender with
retransmissions.
What happens when there is loss?
Well this is what happens.
In the case of the Go-Back-N sender, we
can simply use a single timer to detect
losses.
If a segment is not acknowledged after
some timeout, then we assume that it's
lost.
In that case, all segments after it will
have been thrown away by the receiver.
So we simply start resending the buffered packets from LAR plus 1, the next acknowledgement which we wanted to receive. So we're going back N; we're basically going back up to the whole window.
On the other hand, with a Selective Repeat sender, because the receiver on the other side is buffering packets or segments that arrive out of order, we can retransmit them individually.
So we can have a timer for each different segment, and when it goes off we can retransmit that segment if it hasn't been acknowledged.
The hope here is that we'll resend fewer of the segments. We might time out and resend one segment, but if many other higher segments have been received by the receiver, pretty soon we'll get acknowledgements telling us that those segments have been received, so we won't need to resend them.
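As a rough sketch of that difference, continuing the hypothetical sender above and assuming some timer facility calls these handlers, the timeout handling might look like this.

# Go-Back-N: a single timer; on timeout, resend everything still outstanding.
def on_timeout_go_back_n(sender):
    for seq in range(sender.LAR + 1, sender.LFS + 1):
        sender.send_segment(seq, sender.buffer[seq])

# Selective Repeat: one timer per segment; resend only the segment that timed out.
def on_timeout_selective_repeat(sender, seq):
    if seq in sender.buffer:                        # still unacknowledged
        sender.send_segment(seq, sender.buffer[seq])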
Okay, so now you've seen the sliding
window operation for a couple of these
variations.
One more detail I need to tell you about
the sliding window is the sequence
numbers.
Remember that for Stop-and-Wait, we simply needed two sequence numbers, 0 and 1, that we could alternate between. That's not going to work for a sliding window of size W, because there can be more than two packets in a window; we need different sequence numbers for every packet in the window, at least.
So, how many sequence numbers do we need?
Well, for Selective Repeat, if we're sending up to W packets, then W packets can be outstanding in the network at one time, and we need at least W sequence numbers for these outstanding packets. At the same time, the network can also be carrying up to W acknowledgements for earlier packets that were sent with earlier sequence numbers. Altogether, W plus W, we need at least 2W sequence numbers.
It turns out that with Go-Back-N you can need fewer, because we won't have all of those ACKs outstanding at one time; there's really only an ACK for one thing outstanding at a time. But don't worry about it too much, because usually we're not too tight on the sequence numbers we need.
Typically, sequence numbers would be implemented with an N-bit counter. That's a number that would simply climb up toward some power of two, and just as it's about to get there, it wraps and goes to zero again. So for instance, an 8-bit counter might count up through 253, 254, 255. That's the highest number you can represent with 8 bits, and then it will wrap. You go back to 0, 1, 2, 3 again, and so forth, so it's a modular counter.
As long as we have enough bits to cover
the range here, we'll be fine with our
sequence numbers.
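Here's a tiny sketch of that modular counter in Python, using the 8-bit example.

SEQ_BITS = 8
SEQ_SPACE = 2 ** SEQ_BITS      # 256 sequence numbers, 0 through 255

def next_seq(seq):
    # Climb toward 255, then wrap back around to 0: a modular counter.
    return (seq + 1) % SEQ_SPACE

seq = 254
for _ in range(4):
    print(seq)                 # prints 254, 255, 0, 1
    seq = next_seq(seq)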
And finally, just to show you a little more about how sliding windows work, we're going to look at a sequence-time plot for the sliding window. This graph shows the evolution of the sequence number, on the y axis, over time, on the x axis.
Now if you look at these curves, this blue curve gives you a line for the transmissions of the sender, and you can see here that the sequence number is going up over time. Over time it's sending newer, more highly numbered segments.
There's also another staircase here, and
that's for the receiver.
These are the acknowledgements coming
back.
And you can see that after some delay, the
Acks come back for the same number of
packets that were already sent.
This delay here, which is when the receiver sends the ACKs, will be D, or the round trip time over 2, because it will take about that long for the segment to go through the network and for it to be acknowledged.
So that's how you read these diagrams.
And now, something funny is happening
here.
And we'll look at that in just a minute.
Okay, so let's look at it.
This is a hypothetical sequence number plot for a Go-Back-N scenario.
If everything's going fine and there's no loss, we have these two parallel lines, or staircases; they're going up in fine steps. We're sending, and we're being acknowledged a short while later. But now something funny happens here. What happens here? Well, we must have loss. That's because the acknowledgement number stops going up, so something's been lost and there's a hole at the receiver. Since this is a Go-Back-N scenario, the receiver can't accept the segments after the hole, so it's not sending further ACKs, and this ACK number is not climbing.
What's going to happen then?
Well, when this segment was sent, no ACK comes back for it. After some amount of time, there's a timeout, and the sender will work out that there's a problem. This timeout is going to be approximately the RTT at a minimum, because that's the shortest amount of time it could take to send a packet and get an ACK back from the other side. So a timeout needs to be longer than that; the RTT is the minimum. But you can see here it's about one and a half times the delay of 2D, which is the RTT.
After that time, the sender will time out, and it will start sending these things again. So up through here, these are all retransmissions. That is, these are segments that we sent previously, and we're now sending them again. And after another short delay, here at the bottom, the receiver will start sending acknowledgements for them. And it looks like we get those acknowledgements, because we keep climbing up here; there's no more loss and we keep going.
So as you can see there's a lot of
information in this time sequence diagram
if you know how to read it.
This is just one scenario for the possible
evolution of a sliding window.
There are many others, but I hope from this lecture you have a sense of how the sliding window works. Oh, and you can see here I've cleaned it up, and hopefully I've gotten all of those inferences about what's going on correct.
Wyszukiwarka
Podobne podstrony:
sliding windowVisual Studio 05 Programowanie z Windows API w jezyku C vs25pw2006 05 Password Tricks Customizing the Password Popup Window05 4?1 Power Windows05 5?1 Sliding Tilting SunroofWindows XP SP3 Pro PL Multiboot FIX 04 05 2008 [edycja SyMiLiOn]Wykład 05 Opadanie i fluidyzacjaPrezentacja MG 05 2012windowsInstalacja systemu Windows z pendrive a2011 05 P05 2więcej podobnych podstron