To close this section, we're going to work through a full example of what a memory system might look like. This will be a fairly small virtual memory system, but one that should serve as a good example of what goes on in real systems.
Right, so here's our very simple memory system. We're going to have a 14-bit virtual address and a 12-bit physical address. Not all that many bits, but you get the idea; just think about replacing that with 32 or 64 bits for the virtual address. The page size in this case is going to be 64 bytes. That's also a very tiny page size, not what we typically see; it would be more like something on the order of 8 KB, for example.
So, if we have these values, then here is our 14-bit virtual address and here's our 12-bit physical address. Given that we have a 64-byte page size, our virtual page offset will be six bits, as will our physical page offset, because those pages are exactly the same size. The remaining bits are the virtual page number and the physical page number: eight and six bits respectively.
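To make that split concrete, here is a minimal sketch in C of the field widths implied by those parameters; the macro and variable names are just illustrative, and the example address is the one we'll translate in a moment.

```c
#include <stdio.h>

/* Field widths for this toy system, taken from the stated parameters:
 * 14-bit virtual address, 12-bit physical address, 64-byte pages. */
#define VA_BITS   14
#define PA_BITS   12
#define VPO_BITS  6                     /* log2 of the 64-byte page size */
#define VPN_BITS  (VA_BITS - VPO_BITS)  /* 8-bit virtual page number     */
#define PPN_BITS  (PA_BITS - VPO_BITS)  /* 6-bit physical page number    */

int main(void) {
    unsigned va  = 0x03D4;                      /* the address used in the first example later */
    unsigned vpo = va & ((1u << VPO_BITS) - 1); /* low 6 bits: the page offset  */
    unsigned vpn = va >> VPO_BITS;              /* high 8 bits: the page number */
    printf("VPN = 0x%02X, VPO = 0x%02X\n", vpn, vpo);  /* VPN = 0x0F, VPO = 0x14 */
    return 0;
}
```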
The page table for this system is going to have three columns: a virtual page number, the corresponding physical page number, and a valid bit signifying whether that particular entry is valid or not. Here on this diagram I'm only showing the first 16 entries. Of course, there would be 256 total entries in the page table, because our VPN, our virtual page number, is eight bits, so we would have 2^8 entries in our page table, or 256.
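As a rough sketch, assuming a plain array indexed by virtual page number, the page table for this toy system could be modeled like this; the struct layout is only an illustration, and the one entry filled in comes from the slide (VPN 00 maps to PPN 28 and is valid).

```c
#include <stdio.h>

/* Toy model of this system's page table: 2^8 = 256 entries, one per
 * virtual page number, each holding a valid bit and a 6-bit physical
 * page number. Illustrative layout only, not how an OS stores it. */
typedef struct {
    unsigned char valid;  /* 1 if the page is resident in physical memory   */
    unsigned char ppn;    /* physical page number, meaningful only if valid */
} pte_t;

static pte_t page_table[1 << 8];  /* 2^8 = 256 entries, indexed by VPN */

int main(void) {
    /* One entry from the slide: VPN 0x00 maps to PPN 0x28 and is valid. */
    page_table[0x00] = (pte_t){ .valid = 1, .ppn = 0x28 };
    printf("VPN 0x00 -> valid=%u, PPN=0x%02X\n",
           page_table[0x00].valid, page_table[0x00].ppn);
    return 0;
}
```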
Now, you can see that if we had a much, much larger virtual address space, the number of page table entries could be huge. And in fact that's a whole other discussion: what do we do when we have very, very large page tables that we couldn't actually fit in physical memory? We'll defer that discussion, however. For now, keep in mind that this is only 16 out of 256 entries, so there are many more going all the way to FF for our virtual page number.
Okay.
Our TLB for this simple memory system is going to be four-way set associative. That's actually pretty typical for TLBs; we want some flexibility in how we can store these page table entries in the cache, so we do not typically use direct-mapped caches for TLBs. In this case there are only 16 entries, so this is also a very, very tiny TLB; a more typical number would be 256 or maybe 1K entries. But again, here we only have 16 entries, four-way set associative, meaning there are four entries per set. That means we'll be using two bits for our set index in this cache, and the remaining bits, in this case the remaining six bits of the virtual page number, are the tag.
And you notice that we're only using the bits of the virtual page number for this TLB, because the virtual page offset will not enter into things. We're not translating that; we're not looking anything up for it. It just carries straight down to become the physical page offset. So the TLB only needs to cache the virtual page numbers and their corresponding physical page numbers.
So here at the bottom you see typical contents for this TLB. It has four sets with four entries per set, for a total of 16, some of which are valid, some invalid. For the ones that are valid, we have a corresponding physical page number. Okay? And of course the tag, to check that the high-order six bits of the virtual page number match the one we're trying to access, just like any basic cache would do.
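Here is a small sketch of that TLB geometry, assuming illustrative names and layout: 16 entries in four sets of four ways, so the low two bits of the VPN select the set and the high six bits form the tag.

```c
#include <stdio.h>

/* Toy model of the TLB geometry: 16 entries, 4-way set associative,
 * so 4 sets. The low 2 bits of the 8-bit VPN select the set and the
 * high 6 bits are the tag. Names and layout are illustrative. */
#define TLB_WAYS       4
#define TLB_SETS       4
#define TLB_INDEX_BITS 2   /* log2(TLB_SETS) */

typedef struct {
    unsigned char valid;
    unsigned char tag;   /* high 6 bits of the VPN       */
    unsigned char ppn;   /* cached physical page number  */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_SETS][TLB_WAYS];   /* contents would come from the slide */

int main(void) {
    unsigned vpn = 0x0F;                   /* the VPN from the first example below */
    unsigned set = vpn & (TLB_SETS - 1);   /* low 2 bits  -> 3    */
    unsigned tag = vpn >> TLB_INDEX_BITS;  /* high 6 bits -> 0x03 */
    printf("TLB set = %u, TLB tag = 0x%02X\n", set, tag);
    (void)tlb;
    return 0;
}
```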
The last part of our example system is the system cache, the main-memory-to-CPU cache. In this case, it will be using physical addresses, so we'll be taking our physical address and breaking it up into the bits we need for these cache accesses. There will be a block offset of only two bits, because we're going to use a block size of four bytes. Okay? We only need two bits to index into that block. We will then have four bits for the set index, and that's because this particular cache only has a total of 16 lines and it's direct mapped. That means the remaining part of the physical address, those last six high-order bits, will be the tag component. And here you see some sample contents for that cache; here we're showing the entire cache as well.
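As a sketch of that split, assuming illustrative names, the following pulls the block offset, set index, and tag out of a 12-bit physical address; the example value is the physical address we'll derive in the first translation below.

```c
#include <stdio.h>

/* Toy model of the direct-mapped data cache's address split: 4-byte blocks
 * give a 2-bit block offset, 16 lines give a 4-bit set index, and the
 * remaining 6 bits of the 12-bit physical address form the tag. */
#define BLOCK_OFFSET_BITS 2   /* log2(4-byte block) */
#define SET_INDEX_BITS    4   /* log2(16 lines)     */

int main(void) {
    unsigned pa     = 0x354;   /* physical address produced in the first example below */
    unsigned offset = pa & ((1u << BLOCK_OFFSET_BITS) - 1);
    unsigned set    = (pa >> BLOCK_OFFSET_BITS) & ((1u << SET_INDEX_BITS) - 1);
    unsigned tag    = pa >> (BLOCK_OFFSET_BITS + SET_INDEX_BITS);
    printf("offset = %u, set = %u, tag = 0x%02X\n", offset, set, tag);  /* 0, 5, 0x0D */
    return 0;
}
```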
Okay, so when we put all of those together, here's our system cache, here's our TLB, and here are the first 16 entries of our page table. Remember, the page table has a total of 256 entries; we're only showing 16 here. Now, you might want to save this page and have it open in another window as we do some example address translations, using the data that we've put into the tables and caches shown on this slide. So keep this one around; have it open in another window.
Alright, so let's take a look at our first example. We're going to go to the address 03D4 in hex and see what happens. First we need to map it to a physical address, and then access our cache. So how do we do that? Well, let's take that virtual address and break it up into its components. Okay. The lower six bits are the offset, that virtual page offset; the high-order eight bits are the virtual page number. Those bits are further divided into the TLB index and the TLB tag, and you can see in this case that our TLB index is three and our TLB tag is 03. So that's the first part of our lookup.
Now, is this a TLB hit or not? Well, we go to that particular set in the TLB and look for that tag; if the tag is present, we then want to know whether it's a valid entry or not. And we'll see that, if we refer back to our TLB contents, this is in fact a hit, and it is not a page fault, because that is a valid entry. So let's take a look at that real quick: you'll notice that in set three we have the tag 03, and in fact the valid bit is set. So that's why that worked out. We can now pick out the physical page number as being 0D. That is now our physical page number.
Okay, now that we have that, we have everything we need to put our physical address together. And here it is; here's the 0D. Okay, and then of course we're just moving the virtual page offset down to the physical page offset. We can now break things up into the components for the system cache, and we see here that those correspond to a block offset of zero, a set index of five, and a tag of 0D.
Okay.
Now, it's purely coincidental in this case that the tag is the same as the physical page number. That's just the way the number of bits for each of the components worked out; it would not happen in every case.
Now, is this a hit? Well, let's go look at the cache that we have and see whether in set five we have a valid entry with the tag 0D. And what we'll see is that in fact we do. We then want the particular byte at offset zero, and that byte is 36. Okay, and you should make sure to verify that on that example page that I asked you to save earlier.
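If you want to check those numbers yourself, here is a short sketch that redoes the bit arithmetic of this walkthrough; the PPN 0D is simply taken from the TLB contents on the slide rather than actually looked up.

```c
#include <stdio.h>

/* Redo the bit arithmetic of example 1 (VA 0x03D4). The PPN 0x0D is taken
 * from the TLB hit on the slide; this sketch does not model the lookups. */
int main(void) {
    unsigned va  = 0x03D4;
    unsigned vpo = va & 0x3F;         /* 6-bit page offset: 0x14 */
    unsigned vpn = va >> 6;           /* 8-bit VPN:         0x0F */
    unsigned ti  = vpn & 0x3;         /* TLB index:         3    */
    unsigned tt  = vpn >> 2;          /* TLB tag:           0x03 */

    unsigned ppn = 0x0D;              /* from the TLB entry on the slide */
    unsigned pa  = (ppn << 6) | vpo;  /* physical address:  0x354 */

    unsigned off = pa & 0x3;          /* cache block offset: 0    */
    unsigned set = (pa >> 2) & 0xF;   /* cache set index:    5    */
    unsigned tag = pa >> 6;           /* cache tag:          0x0D */

    printf("VPN=0x%02X VPO=0x%02X TLB(set=%u, tag=0x%02X)\n", vpn, vpo, ti, tt);
    printf("PA=0x%03X cache(offset=%u, set=%u, tag=0x%02X)\n", pa, off, set, tag);
    return 0;
}
```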
Alright, let's do another example. In this case the address is 0B8F, and you can see here we've already broken it down into our TLB index and TLB tag. Of course, when we go and look, it's not in the TLB, so we have a TLB miss, and that means we don't know what to do at this point. We don't have our page table entry, so we don't even know if this page is in memory or not; hence the question mark here. And we of course don't have any handle on a physical page number. So what we're going to have to do is go and read that page table entry and bring it into the TLB to resolve this. Okay, and until we do that we really can't go any further.
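For reference, here is the same bit arithmetic applied to this address; on a TLB miss the physical page number is simply unknown until the page table entry has been fetched, so the sketch stops at the VPN split.

```c
#include <stdio.h>

/* The breakdown for example 2 (VA 0x0B8F). On a TLB miss the PPN is unknown
 * until the page table entry has been fetched, so we stop at the VPN split. */
int main(void) {
    unsigned va  = 0x0B8F;
    unsigned vpo = va & 0x3F;   /* 0x0F */
    unsigned vpn = va >> 6;     /* 0x2E */
    unsigned ti  = vpn & 0x3;   /* TLB index: 2    */
    unsigned tt  = vpn >> 2;    /* TLB tag:   0x0B */
    printf("VPN=0x%02X VPO=0x%02X TLB(set=%u, tag=0x%02X) -> miss, PPN unknown\n",
           vpn, vpo, ti, tt);
    return 0;
}
```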
Let's take a look at a third example, this time the address 0020 hex. Again, here we're looking at a TLB index of zero and a TLB tag of 00. If we go look in the TLB, we'll see that this is also a TLB miss; the valid bit is set to 0 for this particular entry. But unlike the previous example, we have the page table here to go look things up in. So if we take care of that and access the page table for this first page table entry, namely the one for virtual page number zero, we see that in fact it's not a page fault. The page is in memory; we just didn't happen to have it cached in the TLB. And the physical page number corresponding to this is 28 hex. So that gives us enough to put together our physical address: here's our two and eight from the page table entry, and the physical page offset carried down.
And then if we go to our cache and look at set eight with the tag 28, we'll see that in fact we have a miss, and we're going to have to go to memory to read that location. So this is a case where we got a TLB miss, but we were able to read the page table entry quickly because we had it available; and once we had the physical address, we got a miss in the cache and have to go bring that in from memory.
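And here is the corresponding arithmetic for this third example; the PPN 28 comes from the page table entry for virtual page number zero on the slide, and the TLB miss and cache miss are read from the slide contents rather than modeled.

```c
#include <stdio.h>

/* Bit arithmetic for example 3 (VA 0x0020). The PPN 0x28 comes from the
 * page table entry for VPN 0 on the slide; the TLB miss and cache miss
 * are taken from the slide contents, not modeled here. */
int main(void) {
    unsigned va  = 0x0020;
    unsigned vpo = va & 0x3F;         /* 0x20 */
    unsigned vpn = va >> 6;           /* 0x00 */

    unsigned ppn = 0x28;              /* from the page table entry on the slide */
    unsigned pa  = (ppn << 6) | vpo;  /* 0xA20 */

    unsigned set = (pa >> 2) & 0xF;   /* cache set index: 8    */
    unsigned tag = pa >> 6;           /* cache tag:       0x28 */
    printf("VPN=0x%02X PA=0x%03X cache(set=%u, tag=0x%02X) -> miss, read from memory\n",
           vpn, pa, set, tag);
    return 0;
}
```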
Okay.
So those are a few examples, and there are many more available in the text and so on. I encourage you to take a look at those.
To summarize this section: the programmer's view of virtual memory is that each process has its own private linear address space that cannot be corrupted by other processes, although we might be sharing some parts of it with other processes, mostly read-only sections like library code, for example.
The system view of virtual memory is that we're using memory efficiently by caching virtual memory pages, so we're making good use of that small physical memory that we have. And it's efficient only because locality works; remember, when we're accessing one part of memory, we're likely to access other parts of memory nearby.
This level of indirection that we use to implement virtual memory simplifies memory management and sharing, and it provides a good way of enforcing protection by interposing a place to check permissions: we look at the page table entries and see which bits are set to determine how we can access these different memory pages.
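As a rough illustration of that idea, a page table entry could carry permission bits that are checked on every access; the bit names and layout here are hypothetical, not taken from any particular architecture.

```c
#include <stdio.h>

/* Hypothetical permission bits in a page table entry, checked on each access.
 * Bit names and layout are illustrative only. */
#define PTE_VALID 0x1
#define PTE_READ  0x2
#define PTE_WRITE 0x4
#define PTE_EXEC  0x8

/* Return 1 if an access with the requested permission is allowed. */
static int access_ok(unsigned pte_flags, unsigned requested) {
    return (pte_flags & PTE_VALID) && (pte_flags & requested);
}

int main(void) {
    /* A shared, read-only library-code page: readable and executable, not writable. */
    unsigned shared_lib_code = PTE_VALID | PTE_READ | PTE_EXEC;
    printf("write to library code allowed? %d\n", access_ok(shared_lib_code, PTE_WRITE)); /* 0 */
    printf("read from library code allowed? %d\n", access_ok(shared_lib_code, PTE_READ)); /* 1 */
    return 0;
}
```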
Okay.
So, to complete the summary of our memory systems: we have the L1 and L2 memory caches. These are purely a speed-up technique between main memory and the CPU. They're totally invisible to the application programmer, and even to the operating system for the most part; they're implemented completely in hardware, and processors are designed to support these caches directly.
Virtual memory, on the other hand, needs the operating system to step in. It needs the operating system to create and kill processes, to switch between processes, to help it with protection, and to help it with getting pages from disk and bringing them into physical memory. The software that is involved in the virtual memory system allocates and shares those pieces of physical memory among the processes. It has to maintain the page table entries and how they should be shared, and it has to handle exceptions, for example finding victim pages with a replacement algorithm that determines the best way to allocate that small physical memory to the needs of these large virtual memory spaces.
And this is all done through hardware-defined mapping tables that are made to be as fast as possible, because of course we need to use them for every memory access. So that hardware is really critical to making virtual memory practical, and we do a lot of acceleration of that process. An example of that is the Translation Lookaside Buffer: a super specialized little cache just to aid with the virtual memory address translation problem.