Chapter 4 - Memory Management in Operating System.pptx

meghanathani16 · 46 slides · Oct 13, 2025

About This Presentation

This is a memory management concepts presentation made by me for the third-year diploma Operating Systems (OSY) course.


Slide Content

Memory Management in Operating System Chapter-4

AGENDA Basic Memory Management: Partitioning - Fixed and Variable, Free Space Management Techniques: Bit map, Linked List

Memory management is a critical aspect of operating systems that ensures efficient use of the computer's memory resources. It controls how memory is allocated and deallocated to processes, which is key to both performance and stability. Below is a detailed overview of the various components and techniques involved in memory management.

Why is Memory Management Required? To allocate and de-allocate memory before and after process execution. To keep track of the memory space used by processes. To minimize fragmentation issues. To ensure proper utilization of main memory. To maintain data integrity during process execution.

Partitioning Memory partitioning is an operating system technique for dividing a computer's main memory into distinct sections called partitions, each of which can hold a process.  How Partitioning Works Division:  The main memory is divided into sections or blocks called partitions.  Allocation:  Each process is loaded into a single partition.  Management:  The operating system keeps track of which memory locations are free and which are allocated to processes. 

Partitioning - Fixed and Variable Fixed Partitioning (Static Partitioning): How it works: The memory is divided into a fixed number of partitions, which are set up before processes are loaded. The number and size of these partitions do not change. Pros: Simple to implement and requires little overhead for memory allocation. Cons: Leads to internal fragmentation, where memory within a partition is wasted because a process doesn't fill the entire partition. It also limits the maximum process size to the size of the largest partition and the degree of multiprogramming to the number of partitions.

2. Dynamic Partitioning  (Variable Partitioning): How it works:  Partitions are created with sizes that match the exact memory requirements of a process. They are not fixed in size and can change dynamically as processes are loaded or removed.  Pros:  Reduces internal fragmentation because partitions can be exactly sized.  Cons:  Prone to external fragmentation, as free memory can become fragmented into many small, non-contiguous blocks over time. 

Free Space Management in Operating System Free space management involves managing the available storage space on the hard disk or other secondary storage devices. To reuse the space released by deleting files, a free space list is maintained. The free space list can be implemented mainly as:

Bitmap or Bit Vector In this approach, a Bitmap or Bit Vector is a collection of bits where each bit corresponds to a disk block. The bit can take two values, 0 and 1: 0 indicates that the block is free and 1 indicates an allocated block. The instance of disk blocks shown in Figure 1 can be represented by a 16-bit bitmap: 1111000111111001.

Contd. Advantages: Simple to understand. Finding the first free block is efficient: scan the bitmap one word (a group of bits, e.g. 8) at a time for a word that is not all 1s, since such a word must contain at least one free block; the first free block is then the first 0 bit within that word. Disadvantages: To find a free block, the operating system may have to iterate over most of the bitmap, which is time-consuming. The efficiency of this method decreases as the disk size increases.
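The word-by-word scan can be sketched in Python (a minimal sketch, using the slide's 0 = free / 1 = allocated convention; packing the 16-bit map into two 8-bit words is an assumed layout):

```python
# Sketch: find the first free block in a bit-vector free list,
# using the slide's convention (0 = free, 1 = allocated).

def first_free_block(words, bits_per_word=8):
    """Return the index of the first free disk block, or -1 if none."""
    full = (1 << bits_per_word) - 1            # a word with every block allocated
    for w, word in enumerate(words):
        if word != full:                        # this word holds at least one free block
            for b in range(bits_per_word):      # scan from the most significant bit
                if not word & (1 << (bits_per_word - 1 - b)):
                    return w * bits_per_word + b
    return -1

# The slide's 16-bit map 1111000111111001, packed as two 8-bit words:
words = [0b11110001, 0b11111001]
print(first_free_block(words))  # block 4 is the first free block
```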

Linked List In this approach, Free blocks are linked together in a list. Each free block stores the address of the next free block. The list is maintained dynamically as blocks are allocated and freed.

Contd. Advantages: The total available space is used efficiently with this method. Dynamic allocation is easy, so space can be added as required. Disadvantages: As the linked list grows, maintaining the pointers becomes a bigger burden. This method is not efficient when iterating over each block of memory. Disk I/O is required to traverse the free space list.
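A minimal sketch of the linked free list, simulating the per-block "next free" pointers with a dictionary (the `Disk` class and the block numbers are illustrative, not from the slides):

```python
# Minimal sketch of a linked free-space list: each free block stores the
# index of the next free block (None marks the end of the list).

class Disk:
    def __init__(self, free_blocks):
        self.next_free = {}              # block -> next free block (simulated on-disk links)
        self.head = None
        for b in reversed(free_blocks):  # build the chain front-to-back
            self.next_free[b] = self.head
            self.head = b

    def allocate(self):
        """Take the first free block off the list, or return None."""
        b = self.head
        if b is not None:
            self.head = self.next_free.pop(b)
        return b

    def free(self, b):
        """Return block b to the front of the free list."""
        self.next_free[b] = self.head
        self.head = b

d = Disk([2, 3, 4, 5, 8])
print(d.allocate())   # 2 -- each hop down the list would cost one disk read
d.free(2)
print(d.allocate())   # 2 again, since freed blocks rejoin the list head
```

Note how every traversal step touches a block on disk, which is exactly the I/O cost the disadvantage above refers to.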

Swapping in Operating System To increase CPU utilization in multiprogramming, a memory management scheme known as swapping can be used. Swapping is the process of bringing a process into memory, letting it run for a while, and then temporarily copying it out to the disk. The purpose of swapping in an operating system is to move data between the hard disk and RAM so that application programs can use it.

What is Swapping in the Operating System? Swapping in an operating system is a process that moves data or programs between the computer's main memory (RAM) and a secondary storage (usually a  hard disk  or  SSD ). This helps manage the limited space in RAM and allows the system to run more programs than it could otherwise handle simultaneously.

Process of Swapping When the RAM is full and a new program needs to run, the operating system selects a program or data that is currently in RAM but not actively being used. The selected data is moved to secondary storage, freeing up space in RAM for the new program. When the swapped-out program is needed again, it can be swapped back into RAM, replacing another inactive program or data if necessary.

Advantages Swapping minimizes the waiting time for processes to be executed by using the swap space as an extension of RAM, allowing the CPU to keep working efficiently without long delays due to memory limitations. Swapping allows the operating system to free up space in the main memory (RAM) by moving inactive or less critical data to secondary storage (like a hard drive or SSD). This ensures that the available RAM is used for the most active processes and applications, which need it the most for optimal performance.

Disadvantages Risk of data loss during swapping arises because of the dependency on secondary storage for temporary data retention. If the system loses power before this data is safely written back into RAM or saved properly, it can result in the loss of important data, files, or system states.

Compaction in Operating System Compaction is a technique to collect all the free memory present in the form of fragments into one large chunk of free memory, which can be used to run other processes. It does that by moving all the processes towards one end of the memory and all the available free space towards the other end of the memory so that it becomes contiguous.

Before Compaction Before compaction, the main memory has free space scattered between occupied regions. This condition is known as external fragmentation. Because each individual gap of free space is small, large processes cannot be loaded into them.

After Compaction After compaction, all the occupied space has been moved to one end and all the free space to the other. This makes the free space contiguous and removes external fragmentation. Processes with large memory requirements can now be loaded into main memory.
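The before/after effect can be illustrated with a toy simulation (a sketch only; real compaction must physically relocate each process's memory and update its base register, which this model glosses over):

```python
# Sketch: compaction over a simple memory map, where each entry is either
# an allocated chunk (process, size) or free space (None, size). Allocated
# chunks slide to the low end; free space coalesces at the high end.

def compact(memory):
    used = [(pid, size) for pid, size in memory if pid is not None]
    free = sum(size for pid, size in memory if pid is None)
    return used + ([(None, free)] if free else [])

before = [("P1", 100), (None, 50), ("P2", 200), (None, 80), ("P3", 60)]
print(compact(before))
# [('P1', 100), ('P2', 200), ('P3', 60), (None, 130)]
```

The two scattered 50- and 80-unit holes merge into one 130-unit hole, which is now large enough for a process that neither hole could hold alone.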

What is Fragmentation in Operating System? The process of dividing a computer file, such as a data file or an executable program file, into fragments that are stored in different parts of a computer's storage medium, such as its hard disk or RAM, is known as fragmentation in computing. When a file is fragmented, it is stored on the storage medium in non-contiguous blocks, which means that the blocks are not stored next to each other.

Effect of Fragmentation This can reduce system performance and make it more difficult to access the file. It is generally best to defragment your hard disk on a regular basis, a process that rearranges the blocks of data on the disk so that files are stored in contiguous blocks and can be accessed more quickly.

Algorithms: First fit, Best fit, Worst fit In operating systems, First Fit, Best Fit, and Worst Fit are contiguous memory allocation algorithms used to assign free memory blocks to processes. First Fit allocates the first free block that is large enough. Best Fit assigns the smallest possible free block that can accommodate the process. Worst Fit allocates the largest available free block, aiming to leave a large remaining chunk.

First Fit How it works:   The OS scans the memory blocks from the beginning and allocates the first free block that is large enough for the process.  Advantage:   It is a fast algorithm because it stops searching once the first suitable block is found.  Disadvantage:   It can lead to external fragmentation, where small, unusable gaps are left in memory, making it difficult to allocate future larger processes. 

Best Fit  How it works:   The OS searches through the entire list of free memory blocks to find the smallest available block that can hold the process. Advantage:   It tends to utilize memory more efficiently by minimizing the size of the remaining unused space within the chosen block. Disadvantage:   It is slower than First Fit because it must scan the entire list of free blocks. It can also create many small, useless fragments (holes) that may prevent larger processes from being allocated later.

Worst Fit How it works:   The OS searches for and allocates the largest available free block that can accommodate the process.  Advantage:   It is designed to leave the largest possible remaining free space, which can be useful for subsequent large processes.  Disadvantage:   Like Best Fit, it requires searching the entire list of free blocks, making it slow. While it reduces the rate of small gaps, it can break up large free blocks into smaller, less useful ones. 
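The three policies can be sketched as hole-selection functions over a list of free hole sizes (the hole sizes and the 212 KB request are illustrative values, not from the slides):

```python
# Sketch of the three contiguous-allocation policies; each returns the
# index of the chosen free hole, or None if no hole is large enough.

def first_fit(holes, size):
    # stop at the first hole big enough -- fast, single partial scan
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # full scan: smallest hole that still fits
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # full scan: largest hole available
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]       # free hole sizes in KB
print(first_fit(holes, 212))  # 1 -> the 500 KB hole
print(best_fit(holes, 212))   # 3 -> the 300 KB hole
print(worst_fit(holes, 212))  # 4 -> the 600 KB hole
```

The code mirrors the trade-off described above: First Fit can stop early, while Best Fit and Worst Fit must examine every hole.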

Non-contiguous Memory Management Techniques: Non-contiguous memory management techniques like Paging and Segmentation allow a process's memory to be split into smaller, non-adjacent parts, enabling efficient memory usage. In Paging, a process is broken into fixed-size "pages" that are stored in equally sized "frames" in physical memory, with a page table mapping pages to frames; because any free frame can hold any page, paging eliminates external fragmentation. In Segmentation, a process is divided into logical, variable-sized "segments" (e.g., code, data) stored in memory, with a segment table storing each segment's base address and size.

Paging Concept :  Physical memory is divided into fixed-size blocks called frames, and logical memory (processes) is divided into fixed-size blocks called pages.  Mechanism :  Pages are loaded into any available memory frames, not necessarily contiguously.  Management :  A  page table , unique to each process, maps virtual pages to physical frames.  Fragmentation :  Eliminates external fragmentation (scattered small, unused blocks) because any free frame can be allocated to a page, but internal fragmentation (unused space within a page) can still occur.  Purpose :  Primarily to efficiently manage physical memory. 
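Page-table address translation can be sketched as follows (the 1 KB page size and the page-to-frame mapping are assumptions for illustration):

```python
# Sketch: translating a logical address to a physical one with a page table.

PAGE_SIZE = 1024                      # assumed 1 KB pages
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number (example mapping)

def translate(logical_addr):
    # split the address into (page number, offset within the page)
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]          # a missing entry here would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(1050))  # page 1, offset 26 -> frame 2 -> 2*1024 + 26 = 2074
```

The offset passes through unchanged; only the page number is remapped, which is why pages can land in any free frame.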

Segmentation Concept: A program is logically divided into variable-sized blocks or segments, such as code, data, stack, etc. Mechanism: Segments are stored in physical memory without needing to be contiguous with each other. Management: A segment table stores the base address and limit (size) of each segment, mapping logical segments to their physical locations. Fragmentation: Avoids internal fragmentation because each segment is sized exactly to its contents, but the variable-sized holes left between segments can cause external fragmentation. Purpose: To divide a program into logical, manageable units and provide better protection and management at a logical level.
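Segment-table translation, including the base/limit protection check, can be sketched as (the base and limit values are illustrative):

```python
# Sketch: segment-table translation with a base/limit protection check.

segment_table = {0: (1400, 1000),    # code:  base 1400, limit 1000
                 1: (6300, 400),     # data:  base 6300, limit 400
                 2: (4300, 1100)}    # stack: base 4300, limit 1100

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:              # offset past the segment's end is illegal
        raise MemoryError("segmentation fault: offset out of bounds")
    return base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```

Unlike paging, the offset is checked against a per-segment limit, which is how segmentation gives protection at the level of logical units.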

Virtual memory Virtual memory is an operating system technique that uses secondary storage (like a hard drive) to create the illusion of a larger main memory (RAM) than is physically available, enabling the execution of larger processes and more concurrent tasks.  It works by dividing a process into pages, storing frequently used pages in RAM and less-used pages on disk, and transferring them between RAM and disk as needed via processes like demand paging, which also involves the Memory Management Unit (MMU) to translate logical addresses to physical ones. 

How it Works Illusion of Large Memory :  Virtual memory creates a unified, continuous block of memory for applications, even when the physical RAM is limited and not contiguous.  Paging/Segmentation :  The OS divides processes into fixed-size blocks called pages or variable-sized segments.  Demand Paging :  When a CPU requests a page that isn't in RAM (a  page fault ), the OS fetches the page from the secondary storage (hard drive) into a free slot in RAM.  Page Replacement :  If RAM is full, the OS decides which page to move from RAM to disk to make space for the new page.  Address Translation :  The Memory Management Unit (MMU), a hardware component, converts the logical memory addresses used by a process into physical memory addresses. 

Key Benefits Run Larger Programs :  Allows programs that are larger than the available physical RAM to execute.  Increase Multiprogramming :  Allows more processes to be loaded and run simultaneously by only keeping necessary parts of them in main memory.  Efficient Memory Usage :  Optimizes memory usage by not requiring the entire process to be in RAM at once.  Process Isolation :  Provides a layer of isolation between processes, preventing them from interfering with each other's memory. 

Types of Virtual Memory 1. Paging Paging divides memory into small fixed-size blocks called pages. When the computer runs out of RAM, pages that aren't currently in use are moved to an area of the hard drive called a swap file. The swap file acts as an extension of RAM. When a page is needed again, it is swapped back into RAM, a process known as page swapping. This ensures that the operating system (OS) and applications have enough memory to run.

What is Demand Paging in Operating System? Demand paging is a technique used in virtual memory systems where pages enter main memory only when requested or needed by the CPU. The OS loads only the necessary pages of a program into memory at runtime, instead of loading the entire program at the start. A page fault occurs when the program tries to access a page that is not currently in memory. The operating system then loads the required page from the disk into memory and updates the page tables accordingly. This process is transparent to the running program, which continues to run as if the page had always been in memory.

Page Replacement Algorithms in Operating Systems In an operating system that uses paging, a page replacement algorithm is needed when a page fault occurs and no free page frame is available. In this case, one of the existing pages in memory must be replaced with the new page. The virtual memory manager performs this by: Selecting a victim page using a page replacement algorithm. Marking its page table entry as “not present.” If the page was modified (dirty), writing it back to disk before replacement. The efficiency of a page replacement algorithm directly affects the page fault rate, which in turn impacts system performance.

Common Page Replacement Techniques First In First Out (FIFO) Optimal Page Replacement Least Recently Used (LRU) Most Recently Used (MRU)

1. First In First Out (FIFO) This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.

Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults using the FIFO Page Replacement Algorithm. Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots ---> 3 Page Faults. When 3 comes, it is already in memory ---> 0 Page Faults. Then 5 comes; it is not in memory, so it replaces the oldest page, i.e. 1 ---> 1 Page Fault. 6 comes; it is also not in memory, so it replaces the oldest page, i.e. 3 ---> 1 Page Fault. Finally, when 3 comes it is not available, so it replaces 0 ---> 1 Page Fault. Total: 6 page faults.
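The trace above can be checked with a short FIFO simulation (a sketch):

```python
# Sketch: counting page faults under FIFO replacement.
from collections import deque

def fifo_faults(refs, capacity):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == capacity:        # evict the oldest resident page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)                 # newest page joins the back
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6 page faults, as in the trace
```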

2. Optimal Page Replacement In this algorithm, pages are replaced which would not be used for the longest duration of time in the future.

Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults using the Optimal Page Replacement Algorithm. Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots ---> 4 Page Faults. 0 is already there ---> 0 Page Fault. When 3 comes it takes the place of 7 because 7 is not used for the longest duration of time in the future ---> 1 Page Fault. 0 is already there ---> 0 Page Fault. 4 takes the place of 1 ---> 1 Page Fault. The remaining references (2, 3, 0, 3, 2, 3) are already in memory ---> 0 Page Faults. Total: 6 page faults.
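A sketch of the Optimal policy, which looks ahead in the reference string and evicts the page whose next use is farthest away (ties between pages never used again may be broken arbitrarily without changing the fault count here):

```python
# Sketch: Optimal (Belady) page replacement -- evict the resident page
# whose next reference lies farthest in the future, or never occurs.

def optimal_faults(refs, capacity):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                           # hit: nothing to do
        faults += 1
        if len(frames) == capacity:
            future = refs[i + 1:]
            # pages never referenced again sort last (distance = infinity)
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else float("inf"))
            frames.discard(victim)
        frames.add(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```

Optimal is unrealizable in practice (it needs future knowledge) but serves as the lower bound the other algorithms are measured against.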

3. Least Recently Used (LRU) Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults using the LRU Page Replacement Algorithm. Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots ---> 4 Page Faults. 0 is already there ---> 0 Page Fault. When 3 comes it takes the place of 7 because 7 is the least recently used ---> 1 Page Fault. 0 is already in memory ---> 0 Page Fault. 4 takes the place of 1 ---> 1 Page Fault. The rest of the reference string causes 0 Page Faults because those pages are already in memory. Total: 6 page faults.
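LRU can be sketched with an ordered dictionary serving as the recency list:

```python
# Sketch: LRU replacement; the OrderedDict keeps pages in recency order,
# oldest (least recently used) first.
from collections import OrderedDict

def lru_faults(refs, capacity):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)           # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == capacity:
                frames.popitem(last=False)     # evict the least recently used
            frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```

On this particular string LRU matches Optimal's 6 faults, though in general it can do worse.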

4. Most Recently Used (MRU) Example 4:  Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4-page frames. Find number of page faults using MRU Page Replacement Algorithm.

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots ---> 4 Page Faults. 0 is already there ---> 0 Page Fault. When 3 comes it takes the place of 0 because 0 is the most recently used ---> 1 Page Fault. When 0 comes it takes the place of 3 ---> 1 Page Fault. When 4 comes it takes the place of 0 ---> 1 Page Fault. 2 is already in memory ---> 0 Page Fault. When 3 comes it takes the place of 2 ---> 1 Page Fault. When 0 comes it takes the place of 3 ---> 1 Page Fault. When 3 comes it takes the place of 0 ---> 1 Page Fault. When 2 comes it takes the place of 3 ---> 1 Page Fault. When 3 comes it takes the place of 2 ---> 1 Page Fault. Total: 12 page faults.
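MRU can be sketched by tracking recency and, on a fault, evicting from the most recently used end:

```python
# Sketch: MRU replacement -- on a fault with full frames, evict the page
# that was referenced most recently (the opposite choice from LRU).

def mru_faults(refs, capacity):
    frames, recency, faults = set(), [], 0    # recency[-1] is the MRU page
    for page in refs:
        if page in frames:
            recency.remove(page)              # hit: refresh its position below
        else:
            faults += 1
            if len(frames) == capacity:
                frames.discard(recency.pop()) # evict the most recently used
        frames.add(page)
        recency.append(page)                  # this page is now the MRU
    return faults

print(mru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 12
```

The 12 faults match the trace above: on this string MRU performs far worse than FIFO, LRU, and Optimal, which is typical unless the workload cycles through data larger than memory.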

THANK YOU Feroz Khan Pathan HOD GTMC