Virtual memory management in Operating System

3,018 views 40 slides Apr 15, 2024

Slide Content

Virtual Memory

Introduction
- Virtual memory allows the execution of processes that are not completely in memory
- Abstracts main memory into an extremely large, uniform array of storage
- Allows processes to share files easily and to implement shared memory
- Benefits:
  - A program is no longer constrained by the amount of physical memory available
  - More programs can run at the same time
  - Less I/O is needed to load or swap each user program into memory

Virtual Memory
- Involves the separation of logical memory as perceived by users from physical memory
- Virtual address space: the logical (or virtual) view of how a process is stored in memory
- An address space with a hole between the heap and the stack is known as a sparse address space
- Allows files and memory to be shared by two or more processes through page sharing:
  - System libraries can be shared by several processes
  - Enables processes to share memory
  - Allows pages to be shared during process creation with the fork() system call
[Figure: process virtual address space layout, with stack, heap, and data sections between address 0 and max]

Demand Paging
- What is the problem with loading an entire program into memory?
- Demand paging: pages are loaded only when they are demanded during program execution
- A paging system with swapping
- The system uses a lazy swapper: it never swaps a page into memory unless that page will be needed
- The term pager is used rather than swapper
- Avoids reading into memory pages that will not be used anyway
- Decreases the swap time and the amount of physical memory needed

Concept of Demand Paging
- Requires a mechanism to distinguish between the pages that are in memory and the pages that are on the disk
- The valid-invalid bit scheme is used:
  - Valid: the associated page is both legal and in memory
  - Invalid: the page either is not valid, or is valid but is currently on the disk
[Figure: logical memory and its page table with frame numbers and valid-invalid bits; valid pages reside in physical memory frames, invalid ones on disk]

Concept of Demand Paging
- If pages are marked invalid but the process never accesses them, execution is unaffected
- If a process accesses memory-resident pages, execution proceeds normally
- Page-fault trap: the process tries to access a page that was not brought into memory (an access to a page marked invalid)
- How to handle a page fault:
  1. Check an internal table to determine whether the reference is a valid or an invalid memory reference
  2. If invalid, terminate the process; if valid, page it in
  3. Find a free frame
  4. Read the page into the newly allocated frame
  5. Modify the internal table
  6. Restart the instruction

Pure Demand Paging
- Never bring a page into memory until it is required
- Start executing a process with no pages in memory:
  - The process faults for its first page; the page is brought into memory
  - The process continues to execute, faulting as necessary until all the pages it needs are in memory
  - The process then executes with no more page faults
- Hardware support:
  - Page table
  - Secondary memory: a high-speed disk that holds the pages not present in main memory, known as a swap device; the section of disk used is the swap space
- An important requirement is the ability to restart any instruction after a page fault

Performance of Demand Paging
- Effective access time for a demand-paged memory:
  - If no page fault occurs: effective access time = memory access time
  - If page faults can occur: effective access time = (1 - p) × ma + p × page-fault time, where
    - p = probability of a page fault
    - ma = memory access time
    - page-fault time = time required to service a page fault
- It is important to keep the page-fault rate low in a demand-paging system
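The formula above can be sketched directly in code. The numbers in the example below (a 200 ns memory access and an 8 ms fault-service time) are illustrative assumptions, not values from the slides; they show how heavily even a rare fault dominates the average:

```python
def effective_access_time(p, ma, fault_time):
    """Effective access time for demand paging:
    EAT = (1 - p) * ma + p * page_fault_time (all in the same time unit)."""
    return (1 - p) * ma + p * fault_time

# Assumed numbers: ma = 200 ns, fault service = 8 ms = 8,000,000 ns.
# Even one fault per 1,000 accesses blows up the average access time:
print(effective_access_time(0.001, 200, 8_000_000))  # ~8199.8 ns
```

This is why the slide stresses keeping the page-fault rate low: the fault-service term is millions of times larger than a memory access, so p must be tiny for EAT to stay near ma.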

Page Service Routine
[Figure: steps in servicing a page fault]

Example: Consider a demand-paging system with a paging disk that has an average access and transfer time of 20 ms. Addresses are translated through a page table in main memory, with an access time of 1 μs per memory access; thus, each memory reference through the page table takes two accesses. To improve this, an associative memory has been added that reduces the access time to one memory reference when the page-table entry is in the associative memory. 80% of accesses hit the associative memory; of the remaining 20%, one tenth (i.e., 2% of the total) cause a page fault. What is the effective memory access time?

EMAT = (0.80 × 1 μs) + (0.18 × 2 μs) + (0.02 × 20,002 μs) = 0.8 μs + 0.36 μs + 400.04 μs ≈ 401.2 μs
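The three-way split in this calculation can be reproduced as a quick sketch (all times in microseconds):

```python
# Reproducing the slide's calculation, all times in microseconds.
tlb_hit   = 0.80 * 1        # 80%: resolved via the associative memory, 1 us
mem_table = 0.18 * 2        # 18%: walk the in-memory page table, 2 accesses = 2 us
fault     = 0.02 * 20_002   # 2%: 20 ms fault service plus the 2 us table walk
emat = tlb_hit + mem_table + fault
print(round(emat, 1))  # 401.2
```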

Example: In a demand-paging system, it takes 100 ms to service a page fault and 300 ms to replace a dirty page. The memory access time is 1 ms. The probability of a page fault is P; given a page fault, the probability that the page is dirty is also P. The observed average access time is 3 ms. Calculate P.
Solution: EMAT = P × (P × 300 + (1 - P) × 100) + (1 - P) × 1 = 3, which simplifies to 200P² + 99P - 2 = 0, giving P ≈ 0.0194
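Expanding the EMAT equation gives the quadratic 200P² + 99P - 2 = 0, whose positive root matches the slide's answer:

```python
import math

# EMAT = P*(P*300 + (1-P)*100) + (1-P)*1 = 3   (times in ms)
# => 300P^2 + 100P - 100P^2 + 1 - P = 3
# => 200P^2 + 99P - 2 = 0; take the positive root of the quadratic.
a, b, c = 200, 99, -2
P = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(round(P, 4))  # 0.0194
```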

Example: Consider a page-fault service time of 10 ms on a computer whose average memory access time is 20 ns. One page fault is generated for every 10⁶ memory accesses. Calculate the effective memory access time (EMAT).
Solution: EMAT = (1 - 10⁻⁶) × 20 ns + 10⁻⁶ × 10 ms ≈ 30 ns
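Converting everything to nanoseconds (10 ms = 10⁷ ns) makes the arithmetic explicit:

```python
# All times in nanoseconds: 20 ns memory access, 10 ms = 10**7 ns fault
# service, one fault per 10**6 accesses.
p = 1e-6
emat = (1 - p) * 20 + p * 10**7
print(round(emat))  # 30
```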

Need for Page Replacement
- Over-allocation of memory occurs when the degree of multiprogramming is increased
- On a page fault, the system may find no free frame in which to page in the desired page
- Possible solutions:
  - Terminate the process
  - Swap out a process and free all its frames
  - Page replacement

Page Replacement
- "If no frame is free, find one that is not currently being used and free it"
- Page-fault service routine with page replacement: if no frames are free, two page transfers are needed, which doubles the page-fault service time and increases the effective access time
- A modify bit (or dirty bit) is associated with each page; when a page is selected for replacement:
  - If set: the page has been modified since it was read into memory, so it must be written back to disk
  - If not set: the page has not been modified since it was read into memory, so no write-back is needed

Page Replacement
[Figure: page-replacement steps]

Page and Frame Replacement Algorithms
- Frame-allocation algorithm: decides how many frames to allocate to each process
- Page-replacement algorithm: selects the frames that are to be replaced
- Several algorithms exist; evaluation is carried out by running an algorithm on a reference string and computing the number of page faults
- Only page numbers are considered rather than entire addresses
- If there is a reference to a page p, any immediately following references to page p will never cause a page fault

Page and Frame Replacement Algorithms
- Example: for a process we have the address sequence 0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101, 0611, 0102, 0103, 0104, 0101, 0610, 0102, 0103, 0104, 0101, 0609, 0102, 0105 at 100 bytes per page. This sequence reduces to the reference string 1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1
- To determine the number of page faults for a particular reference string and page-replacement algorithm, the number of available page frames must also be known
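The reduction step (divide by the page size, drop immediate repeats) can be sketched as:

```python
def to_reference_string(addresses, page_size=100):
    """Map addresses to page numbers and drop immediately repeated pages,
    since a repeated reference to the same page cannot fault."""
    refs = []
    for addr in addresses:
        page = addr // page_size
        if not refs or refs[-1] != page:
            refs.append(page)
    return refs

addrs = [100, 432, 101, 612, 102, 103, 104, 101, 611, 102, 103, 104,
         101, 610, 102, 103, 104, 101, 609, 102, 105]
print(to_reference_string(addrs))  # [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]
```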

FIFO Page Replacement
- Associates with each page the time when that page was brought into memory
- "The oldest page is chosen to replace"
- Example: reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with three initially empty frames
- Total page faults = 15
[Figure: frame contents after each reference under FIFO replacement]
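A minimal FIFO simulator reproduces the slide's count of 15 faults for this reference string with three frames:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory = deque()              # oldest resident page at the left
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()  # evict the page that arrived earliest
            memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15
```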

Belady's Anomaly
- The FIFO page-replacement algorithm is easy to understand, but its performance is not always good
- Possible problem with FIFO replacement: as the number of page frames increases, the number of page faults can unexpectedly increase as well; this is Belady's anomaly

Optimal Page Replacement
- Has the lowest page-fault rate of all algorithms and never suffers from Belady's anomaly
- Such an algorithm does exist and has been called OPT or MIN
- "Replace the page that will not be used for the longest period of time"
- Example: three initially empty page frames and reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
- Total page faults = 9
[Figure: frame contents after each reference under optimal replacement]
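OPT can be simulated by scanning ahead in the reference string for each resident page's next use and evicting the page used farthest in the future; it reproduces the 9 faults from the slide:

```python
def opt_faults(refs, frames):
    """Count page faults under optimal (OPT/MIN) replacement:
    evict the page whose next use is farthest in the future."""
    memory = []
    faults = 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            def next_use(p):
                # index of p's next reference, or infinity if never used again
                for j in range(i + 1, len(refs)):
                    if refs[j] == p:
                        return j
                return float('inf')
            victim = max(memory, key=next_use)
            memory[memory.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))  # 9
```

The repeated forward scan makes this O(n²); it is fine for a demonstration, and OPT is only used offline for comparison anyway.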

Optimal Page Replacement
- Guarantees the lowest possible page-fault rate for a fixed number of frames
- Difficult to implement: requires future knowledge of the reference string
- Used mainly for comparison studies

LRU Page Replacement
- The FIFO algorithm uses the time when a page was brought into memory; the OPT algorithm uses the time when a page is to be used
- LRU uses the recent past as an approximation of the near future: it associates with each page the time of that page's last use
- "Replace the page that has not been used for the longest period of time"
- LRU is the optimal page-replacement algorithm looking backward in time
- For the same reference string and three frames: total page faults = 12
[Figure: frame contents after each reference under LRU replacement]
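An LRU simulator that keeps resident pages ordered by recency reproduces the 12 faults quoted on the slide:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU replacement.
    The list keeps the least recently used page at the front."""
    memory = []
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # refresh: move to most-recent position
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)     # evict the least recently used page
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12
```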

LRU Page Replacement
- The major problem: how to implement LRU replacement? How do we determine an order for the frames defined by the time of last use?
- Solutions:
  - Counters: a time-of-use field is associated with each page-table entry, and a logical clock or counter is added; the page with the smallest time value is replaced
  - Stack: keep a stack of page numbers; whenever a page is referenced, it is removed from the stack and put on top
- LRU does not suffer from Belady's anomaly

LRU Page Replacement
- OPT and LRU belong to the class of stack algorithms, which can never exhibit Belady's anomaly
- Stack property: the set of pages in memory with n frames is always a subset of the set of pages that would be in memory with n + 1 frames

Example: for the reference string 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6:
- FIFO: 16 page faults
- Optimal: 11 page faults
- LRU: 12 page faults

LRU-Approximation Page Replacement
- Implementing true LRU replacement requires substantial hardware support
- For systems without such support, either other page-replacement algorithms must be used, or the hardware provides help in the form of a reference bit:
  - Set by hardware for a page whenever the page is referenced
  - Reference bits are associated with each entry in the page table
- By examining the reference bits we can determine which pages have been used and which have not, but we do not know the order of use

Additional-Reference-Bits Algorithm
- Additional ordering information can be gained by recording the reference bits at regular intervals
- After each time interval, the OS:
  - Shifts the reference bit for each page into the high-order bit of that page's 8-bit history byte
  - Shifts the other bits right by 1 bit, discarding the low-order bit
- The byte then contains the history of page use for the last eight time periods
- These history values are not guaranteed to be unique
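The shift-register update described above can be sketched in a few lines; a larger history value means more recent use, so the page with the smallest value is the replacement candidate:

```python
def age(history, referenced):
    """Shift the current reference bit into the high-order bit of an
    8-bit history byte; older usage drifts toward the low-order bits."""
    return ((history >> 1) | (0b10000000 if referenced else 0)) & 0xFF

h = 0
h = age(h, True)    # referenced this interval  -> 0b10000000
h = age(h, False)   # idle this interval        -> 0b01000000
h = age(h, True)    # referenced again          -> 0b10100000
print(bin(h))  # 0b10100000
```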

Second-Chance Page Replacement
- Derived from the FIFO page-replacement algorithm
- When a page is selected for replacement, check its reference bit:
  - If 0, replace this page
  - If 1, give the page a second chance and move on to select the next FIFO page
- When a page gets a second chance, its reference bit is cleared and its arrival time is reset to the current time
- Clock algorithm: second chance implemented using a circular queue
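A minimal sketch of the clock variant: the hand sweeps the circular queue, clearing reference bits that are set, until it lands on a page with bit 0, which becomes the victim. The function and page names here are illustrative, not from the slides:

```python
def second_chance_victim(pages, ref_bits, hand):
    """Advance the clock hand past pages with reference bit 1 (clearing
    them: the 'second chance'), stopping at the first page with bit 0.
    Returns (victim index, new hand position)."""
    n = len(pages)
    while ref_bits[hand]:
        ref_bits[hand] = 0        # give this page a second chance
        hand = (hand + 1) % n
    return hand, (hand + 1) % n

pages = ['A', 'B', 'C', 'D']
ref_bits = [1, 1, 0, 1]           # A and B were recently referenced
victim, hand = second_chance_victim(pages, ref_bits, 0)
print(pages[victim], ref_bits)    # C [0, 0, 0, 1]
```

The loop always terminates: after one full sweep every reference bit has been cleared, so the worst case degenerates to plain FIFO, exactly as the slide's description implies.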

[Figure: the clock algorithm as a circular queue of pages with reference bits; the next-victim pointer advances, clearing reference bits that are 1, until it reaches a page whose reference bit is 0, which is selected for replacement]

Enhanced Second-Chance Page Replacement
- Considers the reference bit and the modify bit as an ordered pair:
  - (0, 0): neither recently used nor modified (best page to replace)
  - (0, 1): not recently used but modified (not quite as good: the page must be written out before replacement)
  - (1, 0): recently used but clean (probably will be used again soon)
  - (1, 1): recently used and modified (probably will be used again soon, and the page will need to be written out to disk before it can be replaced)
- May require several passes over the circular buffer:
  1. Cycle through the buffer looking for a (0, 0) page; if one is found, replace it
  2. Cycle through the buffer looking for a (0, 1) page, clearing the reference bit of each page bypassed
  3. If both steps fail, all reference bits are now zero, and repeating steps 1 and 2 is guaranteed to find a page for replacement

Counting-Based Page Replacement
- Keep a counter of the number of references that have been made to each page
- Least Frequently Used (LFU): replace the page with the smallest count
- Most Frequently Used (MFU): replace the page with the largest count
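An LFU counter can be sketched as below. Note that ties between pages with equal counts are broken arbitrarily here (real implementations pick a policy such as FIFO among ties), so the example string is chosen so the victim is unambiguous:

```python
from collections import Counter

def lfu_faults(refs, frames):
    """Count page faults under LFU: evict the resident page with the
    smallest reference count (ties broken arbitrarily)."""
    memory = set()
    counts = Counter()
    faults = 0
    for page in refs:
        counts[page] += 1
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                victim = min(memory, key=lambda p: counts[p])
                memory.remove(victim)
            memory.add(page)
    return faults

# Pages 1 and 2 are reused, page 3 is not, so page 3 is the unique LFU
# victim when page 4 arrives: 4 faults in total (for 1, 2, 3, and 4).
print(lfu_faults([1, 2, 3, 1, 1, 2, 4], 3))  # 4
```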

Allocation of Frames
- How do we allocate the fixed amount of free memory among the various processes?
- Each process must be allocated at least a minimum number of frames
- The more frames a process has, the lower its page-fault rate
- Allocation algorithms:
  - Equal allocation
  - Proportional allocation: allocate frames in proportion to process size
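Proportional allocation gives process i roughly aᵢ = (sᵢ / Σs) × m frames for m total frames. A sketch, using the common textbook illustration of processes of 10 and 127 pages sharing 62 frames (numbers assumed for illustration, not from the slides):

```python
def proportional_allocation(sizes, m):
    """Allocate m frames in proportion to process size:
    a_i = (s_i / sum(sizes)) * m, floored to an integer."""
    total = sum(sizes)
    return [s * m // total for s in sizes]

# Processes of 10 and 127 pages sharing 62 frames:
print(proportional_allocation([10, 127], 62))  # [4, 57]
```

Flooring can leave a frame or two unassigned (4 + 57 = 61 here); a real allocator would hand the leftovers out by some tie-breaking rule and also enforce the per-process minimum.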

Global and Local Allocation
- Local replacement:
  - Requires that each process select only from its own set of allocated frames
  - The number of frames allocated to a process does not change
  - Might hinder a process
- Global replacement:
  - Allows a process to select a replacement frame from the set of all frames
  - Can increase the number of frames allocated to a process
  - A process cannot control its own page-fault rate
  - Results in greater system throughput

Thrashing
- What is thrashing? If a process does not have "enough" frames, it exhibits high paging activity
- A process is thrashing if it is spending more time paging than executing
- Why does thrashing occur?
  1. The OS increases the degree of multiprogramming to increase CPU utilization
  2. A global page-replacement algorithm is used
  3. Processes start faulting for pages; the page-fault rate increases tremendously
  4. Faulting processes must use the paging device to swap pages in and out, and the ready queue empties
  5. CPU utilization decreases; thrashing has occurred and system throughput plunges

Thrashing
- How to limit the effect of thrashing? Use a local replacement algorithm (or priority replacement algorithm):
  - A thrashing process cannot steal frames from another process
  - But the average service time for a page fault still increases, so the effective access time increases even for a process that is not thrashing
- To prevent thrashing, we must provide a process with as many frames as it needs
- How many frames does a process need? The working-set strategy answers this using the locality model of process execution

Working-Set Model
- Based on the assumption of locality
- Examines the most recent Δ page references; Δ is the working-set window
- The set of pages in the most recent Δ page references is the working set, an approximation of the program's current locality
- Total demand for frames: D = Σ WSSᵢ, the sum of the working-set sizes of all processes
- If D > m (the number of available frames), thrashing will occur
- The OS monitors the working set of each process; if the sum of the working-set sizes grows too large, the OS selects a process to suspend: its pages are written out (swapped) and its frames are reallocated to other processes
- Example (Δ = 10): for the reference string ... 2 6 1 5 7 7 7 7 5 1 6 2 3 4 ..., WS(t₁) = {1, 2, 5, 6, 7}; later, when only pages 3 and 4 have been referenced within the window, WS(t₂) = {3, 4}
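The working set at a point in time is just the distinct pages inside the trailing window of Δ references, which reproduces WS(t₁) from the slide's example:

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent delta references ending at
    index t (the working set WS(t) with window size delta)."""
    return set(refs[max(0, t - delta + 1): t + 1])

# The start of the slide's example string, with delta = 10:
refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4]
print(working_set(refs, 9, 10))  # {1, 2, 5, 6, 7}
```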

Working-Set Model
- Optimizes CPU utilization by preventing thrashing while keeping the degree of multiprogramming as high as possible
- Difficulty: keeping track of the working set, since the working-set window is a moving window
- A page is in the working set if it is referenced anywhere within the working-set window

Keeping Track of the Working Set
- Approximate with an interval timer plus a reference bit
- Example: Δ = 10,000 references
  - The timer interrupts after every 5,000 time units
  - Keep 2 history bits in memory for each page
  - On each timer interrupt, copy each page's reference bit into its history bits and then clear all reference bits
  - If any of the bits in memory is 1, the page is considered to be in the working set

Page-Fault Frequency
- A more direct approach than the working-set strategy
- Establish an "acceptable" page-fault frequency (PFF) rate and use a local replacement policy:
  - If the actual rate is too low, the process loses frames
  - If the actual rate is too high, the process gains frames