Lab 7: Page tables
Advanced Operating Systems
Zubair Nabi [email protected]
March 27, 2013
Introduction
Page tables allow the OS to:
Multiplex the address spaces of different processes onto a single physical memory space
Protect the memories of different processes
Map the same kernel memory in several address spaces
Map the same user memory more than once in one address space (user pages are also mapped into the kernel's physical view of memory)
Page table structure
An x86 page table contains 2^20 page table entries (PTEs)
Each PTE contains a 20-bit physical page number (PPN) and some flags
The paging hardware translates virtual addresses to physical ones by:
1. Using the top 20 bits of the virtual address to index into the page table to find a PTE
2. Replacing the top 20 bits with the PPN in the PTE
3. Copying the lower 12 bits verbatim from the virtual to the physical address
Translation takes place at the granularity of 2^12-byte (4KB) chunks, called pages
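Seen this way, translation is just index-then-splice arithmetic. The following is a minimal sketch of the conceptual single-level lookup; flat_translate and pagetable are illustrative names, not part of xv6.

#include <stdint.h>

#define PGSIZE 4096          /* 2^12 bytes per page                  */
#define NPTE   (1 << 20)     /* 2^20 entries in a flat page table    */
#define PTE_P  0x001         /* present flag, lowest bit of the PTE  */

/* Hypothetical flat page table: one 32-bit PTE per virtual page
 * (note: 2^20 entries of 4 bytes each is 4MB of table). */
static uint32_t pagetable[NPTE];

/* Translate a virtual address as the slide describes: the top 20 bits
 * pick the PTE, the PTE supplies the PPN, and the low 12 bits (the
 * page offset) are copied verbatim into the physical address. */
uint32_t flat_translate(uint32_t va)
{
    uint32_t pte = pagetable[va >> 12];        /* index with top 20 bits */
    if (!(pte & PTE_P))
        return 0;                              /* would raise a page fault */
    return (pte & ~0xFFFu) | (va & 0xFFFu);    /* PPN | page offset */
}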
Page table structure (2)
A page table is stored in physical memory as a two-level tree
Root of the tree: a 4KB page directory
Each page directory entry (PDE): points to a page table page
Each page table page: 1024 32-bit PTEs
1024 x 1024 = 2^20
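A quick sanity check of that arithmetic, using constants with the same names and values as xv6's mmu.h (a sketch; verify against your tree):

#define PGSIZE      4096   /* bytes per page (2^12)        */
#define NPDENTRIES  1024   /* PDEs per page directory      */
#define NPTENTRIES  1024   /* PTEs per page table page     */

/* 1024 directory entries x 1024 PTEs each = 2^20 PTEs in total,
 * and 2^20 pages x 4KB per page covers the full 4GB of 32-bit
 * virtual address space. */
_Static_assert(NPDENTRIES * NPTENTRIES == 1 << 20, "2^20 PTEs");
_Static_assert((long long)NPDENTRIES * NPTENTRIES * PGSIZE == 1LL << 32, "4GB");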
Translation
Use the top 10 bits of the virtual address to index the page directory
If the PDE is present, use the next 10 bits to index the page table page and obtain a PTE
If either the PDE or the PTE is missing, raise a fault
This two-level structure increases efficiency
How?
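One reason the two-level layout helps: if an entire 4MB range of virtual addresses has no mappings, its single directory entry can simply be left not-present, so no page table page needs to exist for it. A minimal sketch of the two-level walk follows; PDX and PTX mirror the macros in xv6's mmu.h, while lookup_pte and its pgdir parameter are illustrative.

#include <stdint.h>
typedef uint32_t pte_t;
typedef uint32_t pde_t;

#define PTE_P         0x001                            /* present            */
#define PDX(va)       (((uint32_t)(va) >> 22) & 0x3FF) /* top 10 bits        */
#define PTX(va)       (((uint32_t)(va) >> 12) & 0x3FF) /* next 10 bits       */
#define PTE_ADDR(pte) ((uint32_t)(pte) & ~0xFFFu)      /* strip the flag bits */

/* Walk the two-level tree for va; return the PTE, or 0 to signal that
 * a fault would be raised.  Assumes page table pages can be dereferenced
 * directly, a simplification of what xv6's walkpgdir does via P2V. */
pte_t lookup_pte(pde_t *pgdir, uint32_t va)
{
    pde_t pde = pgdir[PDX(va)];
    if (!(pde & PTE_P))
        return 0;                                      /* missing PDE -> fault */
    pte_t *pgtab = (pte_t *)(uintptr_t)PTE_ADDR(pde);
    pte_t pte = pgtab[PTX(va)];
    if (!(pte & PTE_P))
        return 0;                                      /* missing PTE -> fault */
    return pte;
}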
Permissions
Each PTE contains associated flags:
Flag      Description
PTE_P     Whether the page is present
PTE_W     Whether the page can be written to
PTE_U     Whether user programs can access the page
PTE_PWT   Whether the page is write-through or write-back
PTE_PCD   Whether caching is disabled for the page
PTE_A     Whether the page has been accessed
PTE_D     Whether the page is dirty
PTE_PS    Page size
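These flags occupy the low bits of each PTE. The values below are the standard x86 bit positions, written in the style of xv6's mmu.h (a sketch; check the exact definitions in your mmu.h):

#define PTE_P    0x001   /* Present                              */
#define PTE_W    0x002   /* Writeable                            */
#define PTE_U    0x004   /* User-accessible                      */
#define PTE_PWT  0x008   /* Write-through                        */
#define PTE_PCD  0x010   /* Cache-disable                        */
#define PTE_A    0x020   /* Accessed                             */
#define PTE_D    0x040   /* Dirty                                */
#define PTE_PS   0x080   /* Page size (4MB pages, in the PDE)    */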
Process address space
Each process has a private address space which is switched on a context switch (via switchuvm)
Each address space starts at 0 and goes up to KERNBASE, allowing 2GB of space (specific to xv6)
Each time a process requests more memory, the kernel:
1. Finds free physical pages
2. Adds PTEs that point to these physical pages in the process' page table
3. Sets PTE_U, PTE_W, and PTE_P
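The three steps above have roughly the shape of xv6's allocuvm. Below is a simplified sketch, not the real function: it is written as if it lived in xv6's vm.c (which already includes types.h, defs.h, memlayout.h, and mmu.h), grow_process is an illustrative name, and the error handling that undoes partial growth is omitted.

/* Grow a process from oldsz to newsz bytes (sketch of the allocuvm idea).
 * For every new page: grab a free physical page, zero it, and install a
 * PTE mapping the user virtual address to it with PTE_W|PTE_U (mappages
 * also sets PTE_P). Returns the new size, or 0 on allocation failure. */
int grow_process(pde_t *pgdir, uint oldsz, uint newsz)
{
  uint a;
  char *mem;

  for (a = PGROUNDUP(oldsz); a < newsz; a += PGSIZE) {
    mem = kalloc();               /* 1. find a free physical page       */
    if (mem == 0)
      return 0;                   /* out of memory (cleanup omitted)    */
    memset(mem, 0, PGSIZE);
    /* 2 & 3. add a PTE for virtual address a pointing at the new page,
     * with user and write permission. */
    mappages(pgdir, (char*)a, PGSIZE, V2P(mem), PTE_W | PTE_U);
  }
  return newsz;
}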
Process address space (2)
Each process' address space also contains mappings (above
KERNBASE) for the kernel to run. Specically:
KERNBASE:KERNBASE+PHYSTOP is mapped to0:PHYSTOP
The kernel can use its own instructions and data
The kernel can directly write to physical memory (for instance,
when creating page table pages)
A shortcoming of this approach is that the kernel can only make
use of 2GB of memory
PTE_Uis not set for all entries aboveKERNBASE
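Because KERNBASE:KERNBASE+PHYSTOP maps directly onto 0:PHYSTOP, converting between a kernel virtual address and the physical address behind it is a single subtraction or addition. The definitions below follow xv6's memlayout.h (a sketch; check the exact values and casts in your tree):

#define KERNBASE 0x80000000          /* first kernel virtual address        */
#define PHYSTOP  0x0E000000          /* top physical address xv6 will use   */

/* kernel virtual address -> physical address */
#define V2P(a)   (((uint32_t)(a)) - KERNBASE)
/* physical address -> kernel virtual address */
#define P2V(a)   ((void *)(((char *)(a)) + KERNBASE))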
Example: Creating an address space for main
main makes a call to kvmalloc
kvmalloc creates a page table with kernel mappings above KERNBASE and switches to it

void kvmalloc(void)
{
  kpgdir = setupkvm();
  switchkvm();
}
setupkvm
1. Allocates a page of memory to hold the page directory
2. Calls mappages to install the kernel mappings (kmap):
Instructions and data
Physical memory up to PHYSTOP
Memory ranges for I/O devices
Does not install mappings for user memory
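The kmap table that setupkvm walks has roughly this shape in xv6's vm.c (reproduced here as a sketch; EXTMEM, KERNLINK, data, and DEVSPACE come from memlayout.h and the kernel linker script, so verify against your sources):

// One entry per region that every kernel page table must map.
static struct kmap {
  void *virt;         // starting kernel virtual address
  uint phys_start;    // first physical address of the region
  uint phys_end;      // one past the last physical address
  int perm;           // PTE permission bits for the region
} kmap[] = {
  { (void*)KERNBASE, 0,             EXTMEM,    PTE_W },  // I/O space
  { (void*)KERNLINK, V2P(KERNLINK), V2P(data), 0     },  // kernel text + rodata (read-only)
  { (void*)data,     V2P(data),     PHYSTOP,   PTE_W },  // kernel data + free physical memory
  { (void*)DEVSPACE, DEVSPACE,      0,         PTE_W },  // memory-mapped devices
};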
mappages
Installs virtual-to-physical mappings for a range of addresses
For each virtual address:
1. Calls walkpgdir to find the address of the PTE for that address
2. Initializes the PTE with the relevant PPN and the desired permissions
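A simplified sketch of that loop, close in shape to xv6's mappages but with the remap check removed; it is written as if it lived in vm.c, and install_range is an illustrative name.

/* Install mappings for size bytes starting at virtual address va,
 * pointing at physical address pa, with the given permission bits.
 * The real mappages also panics if a page is already mapped. */
static int install_range(pde_t *pgdir, void *va, uint size, uint pa, int perm)
{
  char *a = (char*)PGROUNDDOWN((uint)va);
  char *last = (char*)PGROUNDDOWN((uint)va + size - 1);

  for (;;) {
    pte_t *pte = walkpgdir(pgdir, a, 1);  /* 1. find (or create) the PTE       */
    if (pte == 0)
      return -1;                          /* no memory for a page table page   */
    *pte = pa | perm | PTE_P;             /* 2. PPN + permissions + present    */
    if (a == last)
      break;
    a += PGSIZE;
    pa += PGSIZE;
  }
  return 0;
}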
walkpgdir
1. Uses the upper 10 bits of the virtual address to find the PDE
2. Uses the next 10 bits to find the PTE
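A sketch of that walk, following the shape of xv6's walkpgdir: if the PDE is missing and alloc is set, a fresh page table page is allocated on demand. PDX, PTX, PTE_ADDR, P2V, V2P, and kalloc are the xv6 names; walk_sketch is illustrative and assumes it lives in vm.c.

/* Return a pointer to the PTE for virtual address va in pgdir,
 * allocating the page table page if alloc != 0 (sketch). */
static pte_t *walk_sketch(pde_t *pgdir, const void *va, int alloc)
{
  pde_t *pde = &pgdir[PDX(va)];         /* upper 10 bits -> PDE */
  pte_t *pgtab;

  if (*pde & PTE_P) {
    pgtab = (pte_t*)P2V(PTE_ADDR(*pde));
  } else {
    if (!alloc || (pgtab = (pte_t*)kalloc()) == 0)
      return 0;
    memset(pgtab, 0, PGSIZE);           /* make sure there are no stale mappings */
    /* Generous PDE permissions; the individual PTEs still decide access. */
    *pde = V2P(pgtab) | PTE_P | PTE_W | PTE_U;
  }
  return &pgtab[PTX(va)];               /* next 10 bits -> PTE */
}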
Physical memory allocation
Physical memory between the end of the kernel and PHYSTOP is allocated on the fly
Free pages are maintained through a linked list, struct run *freelist, protected by a spinlock
1. Allocation: remove a page from the list: kalloc()
2. Deallocation: add the page to the list: kfree()

struct {
  struct spinlock lock;
  int use_lock;
  struct run *freelist;
} kmem;
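Allocation and deallocation are then just list pops and pushes. The sketch below follows the shape of xv6's kalloc.c, written as if it lived in that file (the real code also fills freed pages with junk to catch dangling references):

struct run {
  struct run *next;     /* each free page stores the link inside itself */
};

/* Pop one 4KB page off the free list (sketch of kalloc). */
char *kalloc_sketch(void)
{
  struct run *r;

  if (kmem.use_lock)
    acquire(&kmem.lock);
  r = kmem.freelist;
  if (r)
    kmem.freelist = r->next;
  if (kmem.use_lock)
    release(&kmem.lock);
  return (char*)r;      /* 0 if no memory is free */
}

/* Push the page starting at v back onto the free list (sketch of kfree). */
void kfree_sketch(char *v)
{
  struct run *r = (struct run*)v;

  if (kmem.use_lock)
    acquire(&kmem.lock);
  r->next = kmem.freelist;
  kmem.freelist = r;
  if (kmem.use_lock)
    release(&kmem.lock);
}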
exec
Creates the user part of an address space from the program binary, in Executable and Linkable Format (ELF)
Initializes instructions, data, and stack
Today's task
Most operating systems implement anticipatory paging, in which, on a page fault, the next few consecutive pages are also loaded to preemptively reduce future page faults
Chalk out a design to implement this strategy in xv6
Reading(s)
Chapter 2, "Page tables", from xv6: a simple, Unix-like teaching operating system