Main Memory Management in Operating System

About This Presentation

Memory management in OS


Slide Content

Main Memory Management (Module 5)

What is Memory? Memory is a large array of words/bytes, each with its own address, referenced during the instruction execution cycle. Basic hardware: main memory and registers are the storage the CPU accesses directly, and the hardware must provide protection for the user processes. The base register holds the smallest legal physical memory address; the limit register specifies the size of the range. The base and limit registers can be loaded only by the operating system, using a special privileged instruction. (Figure: the memory hierarchy — CPU, cache, main memory, disk.)

Base & Limit Registers. They make sure that each process has a separate memory space by determining the range of legal addresses the process may access. For memory protection, the CPU hardware compares every address generated in user mode against these registers. The base and limit registers can be loaded only by the OS with a special privileged instruction. (Figure: physical memory shared by the operating system and several processes; the highlighted process has base = 42500 and limit = 25005, so its legal addresses run from 42500 up to, but not including, 67505.)

Hardware address protection with base and limit registers: every memory address the CPU generates is checked against the two registers. If the address is greater than or equal to base and less than base + limit, the memory access proceeds; otherwise the hardware traps to the operating-system monitor with an addressing error.
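
A minimal C sketch of the check the hardware performs, under the slide's base/limit model (the struct and function names are illustrative, not from the slides):

```c
#include <stdbool.h>
#include <stdint.h>

/* The MMU allows an access only if base <= addr < base + limit;
   otherwise it traps to the operating system. */
typedef struct {
    uint32_t base;   /* smallest legal physical address */
    uint32_t limit;  /* size of the legal range          */
} protection_regs;

static bool address_is_legal(protection_regs r, uint32_t addr)
{
    return addr >= r.base && addr < r.base + r.limit;
}
```

With base = 42500 and limit = 25005 (the values in the previous slide's figure), addresses 42500 through 67504 pass the check; anything else would trap to the OS.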

Address Binding. For execution, a program has to be brought into memory; processes waiting to be brought in form the input queue. Addresses may be represented in different ways during a program's lifetime: a compiler binds symbolic addresses to relocatable addresses, and the linkage editor (linker) or loader in turn binds the relocatable addresses to absolute addresses. Each binding is a mapping from one address space to another. Binding may occur at compile time (producing absolute code), at load time (producing relocatable code), or at execution time.

Logical vs. Physical Address Space. A logical address is generated by the CPU; a physical address is the one seen by the memory unit. The two are identical under compile-time and load-time binding, but differ under execution-time binding. A logical address is also called a virtual address, and the sets of all such addresses form the virtual (logical) and physical address spaces. The Memory Management Unit (MMU) performs the run-time mapping from virtual to physical addresses. Dynamic relocation uses a relocation register: with the relocation register set to 20000, logical address 356 becomes physical address 20356. The user program sees only logical addresses; it never sees the real physical addresses.
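
A sketch of dynamic relocation with a relocation register, using the slide's numbers (20000 + 356 = 20356); the function name is illustrative:

```c
#include <stdint.h>

/* The MMU adds the relocation (base) register to every logical
   address generated by the CPU; the user program never sees the
   resulting physical address. */
static uint32_t relocate(uint32_t relocation_reg, uint32_t logical)
{
    return relocation_reg + logical;   /* e.g. 20000 + 356 = 20356 */
}
```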

Dynamic Loading. A routine is not loaded until it is called; all routines are kept on disk in a relocatable load format, and the relocatable linking loader brings the desired routine into memory when needed. The advantage is that an unused routine is never loaded, which is useful when large amounts of code handle infrequently occurring cases. Dynamic loading does not require special support from the operating system. Dynamic Linking and Shared Libraries. With static linking, libraries are combined into the program image before execution; with dynamic linking, linking is postponed until execution time. A stub is included in the image for each library-routine reference: a small piece of code indicating how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present. Shared libraries are built on dynamic linking and require help from the operating system.

Swapping. A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution (e.g., under round-robin CPU scheduling). (Figure: main memory holds the operating system and the user-process space; processes are swapped out to the backing store and swapped back in.)

Swapping (continued). With priority-based scheduling this variant is called roll out, roll in. If binding is done at assembly or load time, a process that is swapped out must be swapped back into the same memory space it previously occupied. Swapping requires a backing store, normally a fast disk that provides direct access to the memory images. Context-switch time is high: the major part of the swap time is transfer time, which is directly proportional to the amount of memory swapped. A process must be completely idle before it can be swapped out.

Contiguous Memory Allocation. Memory mapping and protection use a relocation register together with a limit register. The size of the operating system may change dynamically because of transient operating-system code. For memory allocation, memory is divided into several fixed-sized partitions, and a table keeps track of which partitions are occupied and which are available. Initially there is one large block of available memory, a hole; over time a set of holes of various sizes becomes scattered throughout memory. The dynamic storage-allocation problem asks how to satisfy a request of size n from the list of free holes. First fit: allocate the first hole that is big enough. Best fit: allocate the smallest hole that is big enough. Worst fit: allocate the largest hole. A code sketch of the three strategies follows.
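
A compact sketch of the three placement strategies over a list of free hole sizes (the array layout and function names are assumptions made for illustration):

```c
#include <stddef.h>

/* Return the index of the hole chosen for a request of size n,
   or -1 if no hole is large enough. */
static int first_fit(const size_t holes[], int count, size_t n)
{
    for (int i = 0; i < count; i++)
        if (holes[i] >= n) return i;           /* first hole that fits */
    return -1;
}

static int best_fit(const size_t holes[], int count, size_t n)
{
    int best = -1;
    for (int i = 0; i < count; i++)
        if (holes[i] >= n && (best < 0 || holes[i] < holes[best]))
            best = i;                          /* smallest hole that fits */
    return best;
}

static int worst_fit(const size_t holes[], int count, size_t n)
{
    int worst = -1;
    for (int i = 0; i < count; i++)
        if (holes[i] >= n && (worst < 0 || holes[i] > holes[worst]))
            worst = i;                         /* largest hole that fits */
    return worst;
}
```

With holes of 6K, 14K, 19K, 11K, and 13K and a 12K request (the example below), first fit selects the 14K hole, best fit the 13K hole, and worst fit the 19K hole.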

Dynamic Memory Allocation Strategies. (Figure: a sequence of memory maps showing how incoming requests are placed into the free holes under the first-fit, best-fit, and worst-fit strategies.)

Example: a 12K process must be placed, given free holes of 6K, 14K, 19K, 11K, and 13K. First fit puts the 12K process in the 14K hole, best fit puts it in the 13K hole, and worst fit puts it in the 19K hole.

Example: free partitions of 100K, 200K, 300K, 500K, and 600K must satisfy process requests of 300K, 100K, and 400K (in that order). (Figure: the resulting memory maps under first fit, next fit, best fit, and worst fit.)

1. Given five memory partitions of 100Kb, 500Kb, 200Kb, 300Kb, and 600Kb (in that order), how would the first-fit, best-fit, and worst-fit algorithms place processes of 212Kb, 417Kb, 112Kb, and 426Kb (in that order)? Which algorithm makes the most efficient use of memory?
2. Consider requests from processes in the order 300K, 25K, 125K, and 50K. Let there be two blocks of memory available: one of size 150K, followed by one of size 350K. Which of the partition-allocation schemes can satisfy all of these requests?
3. Consider a swapping system in which memory consists of the following hole sizes, in memory order: 10K, 4K, 20K, 15K, and 9K. Which hole is taken for successive segment requests of (a) 8K, (b) 12K, and (c) 10K, for first fit, best fit, and worst fit?

Contiguous Memory Allocation: Fragmentation. The first-fit and best-fit strategies suffer from external fragmentation, which may be a minor or a major problem. Internal fragmentation is also possible: memory that is internal to a partition but is not being used. One solution to external fragmentation is compaction: shuffle the memory contents so as to place all free memory together in one large block. Compaction is not possible if addresses are bound statically at assembly or load time. Another solution is to permit the logical address space of a process to be noncontiguous. (Figure: compaction merges the scattered free holes into a single large block.)

Paging permits the physical address space of a process to be noncontiguous. The backing store, as well as main memory, suffers from fragmentation problems. Basic method: break physical memory into fixed-sized blocks called frames and break logical memory into blocks of the same size called pages. (Figure: paging hardware — the page number p indexes the page table to obtain a frame number f, which is combined with the page offset d to form the physical address.)

Paging (continued). The page size is defined by the hardware and is typically a power of 2, which makes the translation of a logical address into a page number and page offset simple. Suppose the logical address space is 2^m and the page size is 2^n addressing units: the high-order m − n bits of a logical address give the page number p, and the low-order n bits give the page offset d. Paging has no external fragmentation but may have some internal fragmentation. Process size is expressed in pages; the first page of the process is loaded into one of the allocated frames, and that frame number is put into the page table for the process, and likewise for each subsequent page.

Frames: the physical address space divided into a number of fixed-size blocks. Pages: the logical address space divided into a number of fixed-size blocks. Frame size = page size. Number of frames = physical address space / frame size. Number of pages = logical address space / page size.

The address generated by the CPU is divided into a page number (p), the bits needed to identify a page of the logical address space, and a page offset (d), the bits needed to identify a particular location within that page. The physical address is divided into a frame number (f), the bits needed to identify a frame of the physical address space, and a frame offset (d), the bits needed to identify a particular location within that frame.

Physical memory address = frame number × page size + offset. Example: n = 2 and m = 4, so the page number and page offset are 2 bits each; physical memory is 32 bytes and the page size is 4 bytes, giving 8 frames. Logical address 10 is 1010 in binary, i.e. page 2 with offset 2; the page table maps page 2 to frame 7, so the physical address is 7 × 4 + 2 = 30.
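
A sketch of the translation this example performs, assuming a simple in-memory page table indexed by page number (names and the fixed page size are illustrative):

```c
#include <stdint.h>

#define PAGE_SIZE 4u              /* 4-byte pages, as in the example */

/* physical address = frame number * page size + offset */
static uint32_t translate(const uint32_t page_table[], uint32_t logical)
{
    uint32_t page   = logical / PAGE_SIZE;   /* high-order bits */
    uint32_t offset = logical % PAGE_SIZE;   /* low-order bits  */
    return page_table[page] * PAGE_SIZE + offset;
}
```

If the page table maps page 2 to frame 7, then translate(pt, 10) yields 7 × 4 + 2 = 30, matching the slide.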


Example: the physical address is 12 bits, the logical address is 13 bits, and the page size is 1K. How many frames and how many pages are there? Solution: number of frames = physical address space / frame size = 2^12 / 2^10 = 4 = 2^2; number of pages = logical address space / page size = 2^13 / 2^10 = 8 = 2^3.

Let a program consist of 8 pages, let memory have 16 frames, and let a page consist of 4096 words. A logical address therefore has a 3-bit page number and a 12-bit offset, and a physical address has a 4-bit frame number and the same 12-bit offset. The page table maps page 0 to frame 2, page 4 to frame 15, page 6 to frame 5, and page 7 to frame 9; no other pages are in memory. Translate 111000011110000: the page number 111 (page 7) maps to frame 9 = 1001, so the physical address is 1001000011110000. Translate 000000000000000: page 0 maps to frame 2 = 0010, so the physical address is 0010000000000000.

Paging gives a clear separation between the user's view of memory and the actual physical memory. The operating system keeps a frame table recording the status of each page frame and, for allocated frames, which page of which process occupies it. How are page tables implemented? There is a page table for each process, with a pointer to it stored in the PCB. The page table may be held in a set of dedicated registers, which is efficient only when the table is reasonably small. For large page tables, the table is kept in main memory and a page-table base register (PTBR) points to it. The problem: each user memory reference then requires two memory accesses (one for the page-table entry and one for the data).

The standard solution to the two-access problem is a translation look-aside buffer (TLB): an associative, high-speed memory whose entries are (key, value) pairs of page number and frame number. Searching the TLB is fast but the hardware is expensive, so the number of entries is small and the TLB contains only a few of the page-table entries. On a TLB hit the frame number is available immediately; on a TLB miss a memory reference to the page table must be made, the frame number is obtained and used to access memory, and the page number and frame number are added to the TLB. Some TLBs store an address-space identifier (ASID) in each entry, used to provide address-space protection for the process.

Paging hardware with a TLB. (Figure: the page number p is first looked up in the TLB; on a hit the frame number f comes directly from the TLB, on a miss it comes from the page table in memory, and in either case f is combined with the offset d to form the physical address.)
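
A sketch of the lookup sequence the figure shows: consult the TLB first, fall back to the page table on a miss, and install the new entry. The TLB here is a toy, fully associative array with naive replacement; all names are illustrative.

```c
#include <stdint.h>

#define TLB_ENTRIES 16

typedef struct {
    uint32_t page;    /* key: page number    */
    uint32_t frame;   /* value: frame number */
    int      valid;
} tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];
static unsigned  next_victim;     /* naive replacement pointer */

/* Look up 'page'; on a miss, read the page table and cache the result. */
static uint32_t lookup_frame(const uint32_t page_table[], uint32_t page)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;                      /* TLB hit */

    uint32_t frame = page_table[page];                /* TLB miss: extra memory access */
    tlb[next_victim] = (tlb_entry){ page, frame, 1 }; /* install page/frame pair */
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    return frame;
}
```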

Effective Memory Access Time (EAT). The hit ratio H is the percentage of times that a particular page number is found in the TLB. In general, effective access time = hit ratio × (access time on a hit) + miss ratio × (access time on a miss). Let T be the time required to search the TLB, P the time required to access the page table, and M the time required to access memory. Then EAT = H × (T + M) + (1 − H) × (T + P + M).
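
A one-line helper that evaluates this formula; it is purely illustrative and simply reproduces the arithmetic of the examples that follow.

```c
/* EAT = H*(T + M) + (1 - H)*(T + P + M)
   H: TLB hit ratio, T: TLB search time,
   P: page-table access time, M: memory access time */
static double eat(double h, double t, double p, double m)
{
    return h * (t + m) + (1.0 - h) * (t + p + m);
}
/* e.g. eat(0.8, 20, 100, 100) == 140.0 nanoseconds */
```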

Example: an 80 percent hit ratio, 20 nanoseconds to search the TLB, and 100 nanoseconds to access memory. What is the access time on a TLB hit? On a TLB miss? What is the effective access time?

Solution: 1. Access time on a TLB hit = 20 + 100 = 120 ns. 2. Access time on a TLB miss = 20 + 100 + 100 = 220 ns. 3. Effective access time = hit ratio × hit time + miss ratio × miss time = 0.8 × 120 + 0.2 × 220 = 140 ns.

Example: a 60 percent hit ratio, 10 nanoseconds to search the TLB, and 80 nanoseconds to access memory. Effective access time = 0.6 × (10 + 80) + (1 − 0.6) × (10 + 2 × 80) = 0.6 × 90 + 0.4 × 170 = 122 ns.

Example: the effective access time is 180 msec, the TLB access time is 40 msec, and the main memory access time is 120 msec. What is the hit ratio? Solution: 180 = h × (40 + 120) + (1 − h) × (40 + 2 × 120) = 280 − 120h, so h = 100/120 ≈ 0.833.

Memory Protection in Paging. A protection bit is associated with each frame and kept in the page table. One bit can mark a page as read-write or read-only; separate protection bits can also be provided for each kind of access. In addition, a valid-invalid bit is attached to each entry: valid means the associated page is in the process's logical address space and is thus a legal page, while invalid means the page is not in the process's logical address space. The operating system sets this bit for each page to allow or disallow access to the page. A page-table length register (PTLR) can also be used to indicate the size of the page table. (Figure: a page table whose in-range entries are marked v (valid) and whose out-of-range entries are marked i (invalid).)
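
A sketch of how a valid bit and a read-write bit in a page-table entry might be checked before an access completes; the field names and layout are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t frame;
    unsigned valid    : 1;   /* page is in the logical address space */
    unsigned writable : 1;   /* 0 = read-only, 1 = read-write        */
} pte;

/* Trap conditions: an invalid page, or a write to a read-only page. */
static bool access_allowed(pte e, bool is_write)
{
    if (!e.valid)                return false;  /* invalid page: trap     */
    if (is_write && !e.writable) return false;  /* protection violation   */
    return true;
}
```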

Shared Pages. Paging makes it possible to share pages of common code among processes. Such code must be re-entrant code (pure code), that is, non-self-modifying code. Heavily used programs (for example, editors) can be shared this way; to be sharable, the code must be re-entrant. (Figure: three processes share the frames holding the editor code pages ed1 and ed2 through their page tables, while each process maps its own private data page data1, data2, or data3.)

Hierarchical Paging. The page table itself is paged. The logical address is split into a page number and an offset d, and the page number is further split into p1, an index into the outer page table, and p2, an index within the selected page of the page table. Address translation works from the outer table inward, so this is known as a forward-mapped page table.

Two-Level Page-Table Scheme. (Figure: the outer page table points to pages of the page table, whose entries in turn point to page frames in memory.)

Address Translation. (Figure: p1 indexes the outer page table to find a page of the page table, p2 indexes that page to find the frame, and d is the offset within the frame.)

Suppose we have a system with a 32-bit virtual address, a page size of 4 KB, and 4 bytes per page-table entry, and we use two-level paging arranged so that every page table fits into a single page frame. How are the bits of the address divided into outer index, inner index, and offset? Solution: a 4 KB page needs a 12-bit offset; a page frame holds 4 KB / 4 bytes = 1024 page-table entries, so the inner index needs 10 bits; the remaining 32 − 12 − 10 = 10 bits form the outer index. The split is therefore 10, 10, 12.
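
A sketch of extracting the 10/10/12 split from a 32-bit virtual address with shifts and masks (function and parameter names are illustrative):

```c
#include <stdint.h>

/* 32-bit address = 10-bit outer index | 10-bit inner index | 12-bit offset */
static void split_address(uint32_t va,
                          uint32_t *p1, uint32_t *p2, uint32_t *d)
{
    *d  =  va         & 0xFFFu;   /* low 12 bits: offset        */
    *p2 = (va >> 12)  & 0x3FFu;   /* next 10 bits: inner index  */
    *p1 = (va >> 22)  & 0x3FFu;   /* top 10 bits: outer index   */
}
```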

Hashed Page Tables are a common approach for handling address spaces larger than 32 bits. The hash value is the virtual page number, and each hash-table entry holds a linked list of the elements that hash to the same location. Each element consists of: the virtual page number, the value of the mapped frame number, and a pointer to the next element in the list. Algorithm: the virtual page number is hashed into the hash table; it is compared with field 1 of the first element in the list; if there is a match, field 2 (the frame number) is used to form the physical address; if there is no match, subsequent entries in the list are searched for a match.

Hashed Page Tables. (Figure: the virtual page number p is run through the hash function; the chain at that hash bucket is searched for an entry whose page number matches p, and the matching entry's frame number r is combined with the offset d to form the physical address.)
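
A sketch of the lookup algorithm on a chained hash table; the structure, bucket count, and names are illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

#define HASH_BUCKETS 1024

typedef struct hpt_entry {
    uint64_t vpn;              /* field 1: virtual page number      */
    uint64_t frame;            /* field 2: mapped frame number      */
    struct hpt_entry *next;    /* field 3: next element in the list */
} hpt_entry;

static hpt_entry *hash_table[HASH_BUCKETS];

/* Walk the chain for the bucket; return nonzero and set *frame on a match. */
static int hpt_lookup(uint64_t vpn, uint64_t *frame)
{
    for (hpt_entry *e = hash_table[vpn % HASH_BUCKETS]; e; e = e->next) {
        if (e->vpn == vpn) {          /* field 1 matches          */
            *frame = e->frame;        /* field 2 gives the frame  */
            return 1;
        }
    }
    return 0;                         /* no match: page-fault path */
}
```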

Inverted Page Tables. The virtual address is the triple <process-id, page-number, offset>, and each inverted-page-table entry is the pair <process-id, page-number>, where the process-id acts as an address-space identifier (ASID). When a memory reference occurs, the pair <process-id, page-number> is presented to the memory system and the inverted page table is searched for a match. If a match is found at entry i, the physical address <i, offset> is generated; if no match is found, the reference is an illegal address access.
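
A sketch of the search over an inverted page table, with one entry per physical frame; the types and linear-search form are illustrative.

```c
#include <stdint.h>

typedef struct {
    uint32_t pid;    /* address-space identifier (ASID) */
    uint32_t page;   /* virtual page number             */
} ipt_entry;

/* One entry per frame: entry i describes what is stored in frame i.
   Return the frame index on a match, or -1 for an illegal access. */
static int ipt_search(const ipt_entry table[], int nframes,
                      uint32_t pid, uint32_t page)
{
    for (int i = 0; i < nframes; i++)
        if (table[i].pid == pid && table[i].page == page)
            return i;   /* physical address is <i, offset> */
    return -1;
}
```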