Before we get to our example of how virtual memory actually works in practice, let's cover a few more topics. One is how virtual memory helps with memory management, sharing, and protection; the other is how we speed up address translation.
First, how does virtual memory manage multiple processes? Well, the key abstraction, remember, is that each process has its own virtual address space: a simple linear array of bytes, one after the other.
But this linear virtual address space does not need to be contiguous in physical memory. Because we're mapping things at the level of pages, or blocks of virtual memory, we can put any virtual page at any physical page in physical memory.
So we don't have to worry about virtual page one being just before virtual page two, and then virtual page three, and so on, the way we think about them in the virtual address space. These pages can be scattered throughout physical memory, in any order. This helps us really fit things in as needed, without having to move things around so that they're always in the exact same order.
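To make this concrete, here is a minimal sketch in Python, with made-up sizes and mappings (4 KiB pages, an illustrative four-entry page table), showing that consecutive virtual pages can land in scattered, out-of-order physical pages:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages, so the low 12 bits are the page offset

# A hypothetical page table: virtual page number -> physical page number.
# Note the physical pages are scattered and out of order.
page_table = {0: 7, 1: 2, 2: 9, 3: 4}

def translate(virtual_addr):
    vpn = virtual_addr // PAGE_SIZE      # virtual page number
    offset = virtual_addr % PAGE_SIZE    # byte offset within the page
    ppn = page_table[vpn]                # look up the physical page number
    return ppn * PAGE_SIZE + offset      # physical address

# Virtual pages 0 and 1 are adjacent, but their physical pages are not.
print(hex(translate(0x0000)))  # 0x7000 -- virtual page 0 -> physical page 7
print(hex(translate(0x1000)))  # 0x2000 -- virtual page 1 -> physical page 2
```

The offset within the page is carried through unchanged; only the page number is remapped.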
How does virtual memory help with protection and sharing? Well, now we can do things like this: here we have a physical page, physical page six, that might hold some library code that two processes need. They can both have part of their virtual address space mapped to that same physical address, where physical page six is. That way they can share the code for that library routine.
Likewise, we can protect the processes from each other by giving them pages that only they have an address for: one process has the physical address, but the other process does not. In this case, process two cannot access physical page two, because it doesn't have the address for physical page two anywhere in its page tables. So that's a very easy way to keep two processes from stepping on each other: just make sure that they have different physical pages allocated to them.
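We can sketch this sharing-and-isolation idea with two hypothetical per-process page tables (the mappings are invented; only physical page six is shared, matching the example above):

```python
# Hypothetical per-process page tables (virtual page -> physical page).
# Both processes map some virtual page to physical page 6 (shared library
# code); physical page 2 appears only in process 1's table (private data).
process1_table = {0: 2, 1: 6}
process2_table = {0: 5, 1: 6}

def can_access(page_table, physical_page):
    # A process can only reach physical pages that appear in its own table.
    return physical_page in page_table.values()

print(can_access(process1_table, 6))  # True  -- shared library page
print(can_access(process2_table, 6))  # True  -- shared library page
print(can_access(process2_table, 2))  # False -- private to process 1
```

Isolation falls out for free: a physical page that never appears in a process's page table is simply unreachable from that process.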
Okay, lastly, about these page table entries: they don't have to be just addresses in physical memory. We can add additional bits. You've already seen the valid bit, which tells us whether the entry contains a valid address in physical memory or not. But we can also have a bit, for example, that tells us we have write permission to this area, or that we can only read this area.
This would be very useful, for example, in the case of those shared libraries of code, where we might want to be able to read that code but not necessarily be able to write to it.
We can also have a bit, for example, that grants permission to execute code in that physical memory, so that we can protect against code injection attacks like the buffer overflow we saw earlier.
So these bits can be quite useful, and there is special hardware that checks that the kind of memory access we're trying to do (a read, a write, or an execute, that is, an instruction fetch) is actually allowed for this physical address. If it's not allowed, the operating system raises a segmentation fault exception and our program crashes, and we can then go about the debugging process to make sure we're using memory correctly.
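Here is a small sketch of that permission check. The field names and the exception are illustrative (real hardware keeps these as single bits in the PTE and raises a fault the OS turns into a signal), but the logic mirrors what the hardware does:

```python
class PageTableEntry:
    """A sketch of a PTE with a few permission bits (fields are illustrative)."""
    def __init__(self, ppn, valid=True, readable=True,
                 writable=False, executable=False):
        self.ppn = ppn
        self.valid = valid
        self.readable = readable
        self.writable = writable
        self.executable = executable

class SegmentationFault(Exception):
    pass

def check_access(pte, kind):
    """Mimic the hardware check: kind is 'read', 'write', or 'execute'."""
    allowed = {"read": pte.readable,
               "write": pte.writable,
               "execute": pte.executable}
    if not pte.valid or not allowed[kind]:
        raise SegmentationFault(kind)

# Shared library code: readable and executable, but not writable.
lib_pte = PageTableEntry(ppn=6, readable=True, writable=False, executable=True)
check_access(lib_pte, "read")       # allowed
check_access(lib_pte, "execute")    # allowed
try:
    check_access(lib_pte, "write")  # blocked: would be a segmentation fault
except SegmentationFault as e:
    print("segmentation fault on", e)
```

Marking data pages non-executable in the same way is what blocks the injected code in a buffer-overflow attack from ever running.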
Okay, so that's how we implement protection of various forms. Let's go back to address translation and what happens in the case of a page hit, and then we'll look at a page fault as well.
So we start off, of course, with the CPU generating a virtual address from our instructions. This goes to our memory management unit for translation. What the memory management unit has to do is go to the page table entry, and for that it needs to generate a page table entry address. Remember, it looks at the page table base register to get the starting address of the page table, then indexes into it appropriately for the virtual address involved, and reads that entry from main memory.
If it's valid and there's a good address there, it does the mapping, generating the physical address, and accesses memory again to read that location in physical memory. That data comes back to the CPU as the result. So we've done two memory accesses: one to get the page table entry, and one to get the actual data we're interested in.
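The page-hit path above can be sketched like this. The base register value and page table contents are made up, and "memory" is just a dictionary, but the point is visible in the access count: every load costs two memory reads.

```python
PAGE_SIZE = 4096

# "Main memory" holding the page table itself, starting at the address in
# the page table base register (all values are made up for illustration).
memory = {}
PT_BASE = 0x1000            # page table base register
memory[PT_BASE + 0] = 7     # PTE for virtual page 0 -> physical page 7
memory[PT_BASE + 1] = 3     # PTE for virtual page 1 -> physical page 3

accesses = []               # record every memory access the MMU makes

def mem_read(addr):
    accesses.append(addr)
    return memory.get(addr, 0)

def load(virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    pte_addr = PT_BASE + vpn          # index into the page table
    ppn = mem_read(pte_addr)          # access 1: fetch the PTE
    physical_addr = ppn * PAGE_SIZE + offset
    return mem_read(physical_addr)    # access 2: fetch the data

load(0x0004)
print(len(accesses))  # 2 -- one for the PTE, one for the data itself
```

That doubling of memory traffic is exactly the cost the TLB, coming up shortly, is designed to remove.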
All right, what happens on a page fault? Well, on a page fault, we've gone and gotten our page table entry, brought it back to the memory management unit, and found that the page is invalid. So we now have to involve the operating system in getting that page from disk and loading it into physical memory. That causes an exception that goes to the page fault handler, which is a special piece of code in the operating system that knows what to do in these situations.
The page fault handler might write the victim page, the page it has to replace, back to disk: in case we had written anything there, we want to make sure we're saving that away. So it optionally writes the victim back to disk, then gets the new page from disk, the one we really want, and brings it into the location in physical memory where the victim page was.
It now has to update the page table entries to reflect this change in physical memory, and then it can re-execute the instruction, having it generate the virtual address again. It's actually the same virtual address, but issued again, so that now, when we go and read the page table entry, we'll find the valid bit on and can execute the instruction.
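Those handler steps can be sketched as follows. The data structures here are invented (real handlers walk kernel structures and issue actual disk I/O), but the order of operations matches: write back the dirty victim, load the needed page, update both page table entries, then retry.

```python
# A toy model: "disk" holds pages by virtual page number, physical_frames
# maps frame -> (resident vpn, data), and page_table tracks residency.
disk = {3: b"page three data", 5: b"page five data"}
physical_frames = {0: (3, b"page three data")}
page_table = {3: {"valid": True, "frame": 0, "dirty": True},
              5: {"valid": False, "frame": None, "dirty": False}}

def handle_page_fault(faulting_vpn, victim_frame=0):
    victim_vpn, victim_data = physical_frames[victim_frame]
    if page_table[victim_vpn]["dirty"]:
        disk[victim_vpn] = victim_data    # step 1: save the victim to disk
    page_table[victim_vpn] = {"valid": False, "frame": None, "dirty": False}
    # step 2: bring the wanted page into the victim's frame
    physical_frames[victim_frame] = (faulting_vpn, disk[faulting_vpn])
    # step 3: mark the new page resident
    page_table[faulting_vpn] = {"valid": True, "frame": victim_frame,
                                "dirty": False}
    # step 4 (not modeled): re-execute the faulting instruction

handle_page_fault(5)
print(page_table[5]["valid"])   # True  -- the page is now resident
print(page_table[3]["valid"])   # False -- the victim was evicted (and saved)
```

On the retry, the same virtual address now finds a valid PTE, so the access completes as an ordinary page hit.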
So we're doing a lot of operations here to get this memory mapping to happen properly. The MMU accesses memory twice: once to get the PTE for translation, and then again for the actual memory request.
Okay, and we have to remember that since the page table entries are in fact in memory, they can be cached just like any other memory word, but they might also get evicted by other data references, just like any other memory word. This starts to add up, since we're doing it for every memory address. So how can we make this process go faster?
To do that, we're going to create another construct called a Translation Lookaside Buffer, or TLB. This is another cache, a special little tiny cache that the MMU is going to use to store away page table entries, basically to keep them around in case it needs them again.
And remember, because of locality, chances are we'll be accessing many bytes of memory in the same page, and therefore we'll be reusing the same page table entry over and over again. So the TLB is going to be a special cache mapping virtual page numbers to physical page numbers, and it's going to contain these page table entries. Typically a TLB has 128 to 256 entries: not all that many, but something on the order of the working set size.
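A TLB can be sketched as a tiny fixed-capacity map from virtual page numbers to physical page numbers. The LRU replacement policy and the Python `OrderedDict` here are modeling choices of this sketch; a real TLB is a small set-associative hardware structure:

```python
from collections import OrderedDict

class TLB:
    """A tiny VPN -> PPN cache (LRU replacement is an assumption of this
    sketch; real TLBs are small set-associative hardware caches)."""
    def __init__(self, capacity=128):
        self.capacity = capacity
        self.entries = OrderedDict()        # vpn -> ppn, oldest first

    def lookup(self, vpn):
        if vpn in self.entries:
            self.entries.move_to_end(vpn)   # TLB hit: refresh recency
            return self.entries[vpn]
        return None                          # TLB miss

    def insert(self, vpn, ppn):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        self.entries[vpn] = ppn

tlb = TLB(capacity=2)
tlb.insert(0, 7)
print(tlb.lookup(0))  # 7    -- hit
print(tlb.lookup(1))  # None -- miss; must walk the page table in memory
```

A lookup that hits avoids the memory access for the PTE entirely, which is the whole point of the structure.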
Okay, so how does this work now with the TLB in place? Remember, the TLB is there to eliminate the memory access that goes to get the page table entry. So now we generate our virtual address, but the MMU isn't going to go to memory to find the page table entry; it's first going to check in its little cache, the TLB.
And if it finds the entry there, great: it gets it really quickly, ideally within a single cycle, rather than the three or more cycles it might need to go to the cache. Since it has the page table entry now, it can do the translation right away and go immediately to the physical address, and try to get that out of the cache and memory system.
Okay, so that looks a little faster, a little bit better. We're still doing a bit of lookup, but it can be done very quickly now, since we have this tiny little cache helping us out, and we can build special hardware to make it fast.
Okay, now what happens if we go to the TLB for a particular page table entry and it's not there? Well, now we have the same situation we had before: we have to go back out to the cache, read that page table entry at the right address, and bring it back to the MMU. But at the same time, we're going to load it into the TLB in case we need it again. Of course, that involves the TLB finding a place to put it, a spot in its little cache. That could potentially mean replacing some other entry, one we might also need but that will now be gone, because we have to overwrite it with this one.
Okay. TLB misses tend to be pretty rare, so fortunately this doesn't happen that often. But when it does, the way to think about it is that it's just like any other cache: we have to find room for the new element, the new block to be placed in the cache.
Okay. And then, of course, once we have that new page table entry, we can generate the physical address we need to go to memory, with the data coming back to the CPU.
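Putting the hit and miss paths together, here is a sketch of translation with a TLB in front of the page table. The table contents, the tiny capacity, and the evict-any-entry policy are all illustrative; what matters is that repeated accesses to the same page hit in the TLB and skip the page-table walk:

```python
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}   # in-memory page table (values invented)
tlb = {}                           # VPN -> PPN; tiny capacity for illustration
TLB_CAPACITY = 2
stats = {"hits": 0, "misses": 0}

def translate(virtual_addr):
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    if vpn in tlb:
        stats["hits"] += 1
        ppn = tlb[vpn]                   # fast path: no memory access for PTE
    else:
        stats["misses"] += 1
        ppn = page_table[vpn]            # slow path: walk the page table
        if len(tlb) >= TLB_CAPACITY:
            tlb.pop(next(iter(tlb)))     # evict some entry to make room
        tlb[vpn] = ppn                   # fill the TLB for next time
    return ppn * PAGE_SIZE + offset

translate(0x0000)   # miss: first touch of virtual page 0
translate(0x0008)   # hit: same page, PTE reused from the TLB
translate(0x0010)   # hit
print(stats)        # {'hits': 2, 'misses': 1}
```

Because of locality, runs of nearby addresses share one page table entry, so after the first miss the remaining accesses to that page all take the fast path.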