UNIT 3-EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND PHYSICAL DATA FLOW DIAGRAMS.pptx

LeahRachael 173 views 38 slides Mar 04, 2024

About This Presentation

Memory Management Logical and Physical Data Flow Diagrams


Slide Content

UNIT 3: EXPLAINING THE MEMORY MANAGEMENT LOGICAL AND PHYSICAL DATA FLOW DIAGRAMS

Outline: Address Binding; Overlays; Contiguous Allocation; Non-Contiguous Allocation; Paging and Segmentation Schemes

Memory Memory is central to the operation of a modern computer system. Memory is a large array of words or bytes, each location with its own address. Interaction is achieved through a sequence of reads and writes to specific memory addresses. The CPU fetches the program from the hard disk and stores it in memory. If a program is to be executed, it must be mapped to absolute addresses and loaded into memory.

Memory… In a multiprogramming environment, several processes must be kept in memory in order to improve both CPU utilization and the speed of the computer’s response. Many different memory-management algorithms exist, each suited to a particular situation. Selection of a memory management scheme for a specific system depends upon many factors, but especially upon the hardware design of the system. Each algorithm requires its own hardware support.

Types of memory addresses Logical address Physical address

Logical Address A logical address is generated by the CPU while a program is running. Because it does not exist physically, it is also known as a virtual address. The CPU uses this address as a reference to access the physical memory location. The term Logical Address Space refers to the set of all logical addresses generated by a program. A hardware device called the Memory-Management Unit (MMU) maps each logical address to its corresponding physical address.

Physical Address A physical address identifies the physical location of required data in memory. The user never deals directly with physical addresses but can access data via the corresponding logical addresses. The user program generates logical addresses and runs as though it occupied that logical address space, but the program needs physical memory for its execution; therefore, logical addresses must be mapped to physical addresses by the Memory Management Unit before they are used. The term Physical Address Space refers to the set of all physical addresses corresponding to the logical addresses in a logical address space.
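The logical-to-physical mapping described above can be sketched in a few lines. This is a minimal illustration, not an implementation of real MMU hardware; the base and limit values are invented example numbers:

```python
# Sketch of an MMU that maps a logical (virtual) address to a physical
# address using a relocation (base) register and a limit register.
# Register values below are made-up examples.

class MMU:
    def __init__(self, base, limit):
        self.base = base    # start of the process's physical memory block
        self.limit = limit  # size of the process's logical address space

    def translate(self, logical_addr):
        # Addresses outside the logical address space trap to the OS.
        if not 0 <= logical_addr < self.limit:
            raise MemoryError(f"trap: logical address {logical_addr} out of range")
        return self.base + logical_addr

mmu = MMU(base=14000, limit=3000)
print(mmu.translate(346))  # physical address 14346
```

The process only ever sees logical addresses 0..2999; the MMU adds the base transparently, which is exactly why the program can be relocated by changing one register.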

Differences Between Logical and Physical Address in Operating System

Memory Management – Responsibilities of OS Keep track of which parts of memory are currently being used and by whom. Decide which processes are to be loaded into memory when memory space becomes available. Allocate and deallocate memory space as needed. In a multiprogramming environment, the operating system dynamically allocates memory to multiple processes. Memory therefore plays a significant role in important aspects of a computer system such as performance, software support, reliability and stability.

What is address binding? Address binding refers to the mapping of computer instructions and data to physical memory locations. Both logical and physical addresses are used in computer memory. Address binding assigns a physical memory region to a logical pointer by mapping a logical address, also known as a virtual address, to a physical address. It is also a component of computer memory management that the OS performs on behalf of applications that require memory access.

Types of Address Binding in Operating System There are mainly three types of address binding in the OS. These are as follows: Compile Time Address Binding Load Time Address Binding Execution Time or Dynamic Address Binding

Compile Time Address Binding It is the first type of address binding. It occurs when the compiler is responsible for performing address binding; the compiler interacts with the operating system to perform it. Memory addresses are assigned to a program when it is being compiled. Since the addresses are assigned at compile time, that is, before the program is executed, they are fixed and cannot be changed while the program is running. The binding assigns an address to the beginning of the memory segment that stores the object code. Memory allocation is a long-term decision and can only be modified by recompiling the program.

Load Time Address Binding This type of binding is done when the program is loaded into memory, and it is performed by the operating system’s memory manager, i.e., the loader. Memory addresses are assigned to the program at load time, so they can change between runs, but not while the program is executing.

Execution Time Address Binding In this type of address binding, addresses are assigned to the program while it is running. This means the memory addresses can change during execution. It is also known as run-time binding.
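The difference between these binding times can be made concrete with a small sketch of load-time binding. The instruction format and load address below are invented for illustration; a real loader works on object-file relocation records:

```python
# Hypothetical sketch of load-time address binding: the loader rewrites a
# program's relative addresses by adding the load address chosen when the
# program is placed in memory.

relocatable_code = [("LOAD", 0), ("ADD", 4), ("STORE", 8)]  # offsets from 0

def load(program, load_address):
    # Bind each relative operand to an absolute address at load time.
    return [(op, load_address + offset) for op, offset in program]

print(load(relocatable_code, 5000))
# [('LOAD', 5000), ('ADD', 5004), ('STORE', 5008)]
```

Once loaded this way the addresses are frozen, so moving the process would require reloading it; execution-time binding avoids that by translating every address through MMU hardware instead.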

The Context of Address Binding

Overlays The overlay memory management technique allows multiple programs to be loaded into memory simultaneously, but only a portion of each program is resident in memory at any given time. This increases the overall memory utilization and efficiency of the computer system. The technique swaps different parts of the programs in and out of memory as required. Overlays are commonly used when a program’s memory requirements exceed the available physical memory. In such cases, the operating system loads the program into memory in smaller sections, known as overlays. Each overlay contains the portion of the program code and data required to execute a specific function. When the program needs to execute a different function, the current overlay is swapped out of memory and a new overlay is loaded.

The need for overlays Sometimes a program is larger than even the biggest memory partition; in that case, overlays are used. An overlay is a technique for running a program that is bigger than physical memory by keeping in memory only the instructions and data that are needed at any given time. The program is divided into modules so that not all modules need to be in memory simultaneously. In memory management, overlays work in the following steps: The programmer divides the program into many logical sections. A small portion of the program must remain in memory at all times, while the remaining sections (overlays) are loaded only when needed. Overlays allowed programmers to write programs much larger than physical memory, although memory usage depends on the programmer rather than the operating system.

The need for overlays…

Advantages of using overlays include: Increased memory utilization: overlays allow multiple programs to share the same physical memory space, increasing memory utilization and reducing the need for additional memory. Reduced load time: only the necessary parts of a program are loaded into memory, reducing load time and improving performance. Improved reliability: overlays reduce the risk of memory overflow, which can cause crashes or data loss. Reduced memory requirement. Reduced time requirement.

Disadvantages of using overlays include: Complexity: overlays can be complex to implement and manage, especially for large programs. Performance overhead: loading and unloading overlays can increase CPU and disk usage, which can slow performance. Compatibility issues: overlays may not work on all hardware and software configurations, making it difficult to ensure compatibility across different systems. The overlay map must be specified by the programmer. The programmer must know the program’s memory requirements. Overlapped modules must be completely disjoint. Designing an overlay structure is complex and not possible in all cases.

Types of Overlay Memory Management Fixed Overlay: the size and position of each overlay are predetermined, and the system loads each overlay into a particular region of memory. Shifting Overlay: the system switches overlays in and out of memory as necessary, depending on the program’s current memory needs. Demand Paging Overlay: the system loads only the necessary parts of each overlay into memory, based on the program’s current memory needs. Variable Partition Overlay: the available memory is divided into partitions of varying sizes, with each overlay loaded into its own partition.

Example of Overlays The best example of overlays is a two-pass assembler. Two passes means that at any time it performs only one task: either the first pass or the second pass. It finishes the first pass and then runs the second pass. Assume the available main memory size is 150KB and the total code size is 200KB. As the total code size (200KB) is larger than main memory (150KB), the two passes cannot be in memory together, so the overlay technique is used.

Example of Overlays… Under the overlay concept, only one pass is loaded at a time, and both passes always need the symbol table and the common routines. If the overlay driver is 10KB, what minimum partition size is required? For pass 1, total memory needed is 70KB + 30KB + 20KB + 10KB = 130KB. For pass 2, total memory needed is 80KB + 30KB + 20KB + 10KB = 140KB. So a partition of at least 140KB can run this code easily.
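The arithmetic above can be checked directly. The component names below (symbol table, common routines, driver, pass code) are inferred from the slide's totals:

```python
# Reproducing the slide's overlay-partition arithmetic for the two-pass
# assembler example. Sizes are in KB, taken from the slide's sums.
symbol_table = 30
common_routines = 20
overlay_driver = 10
pass1_code = 70
pass2_code = 80

# Components that must stay resident for either pass.
shared = symbol_table + common_routines + overlay_driver

pass1_total = pass1_code + shared   # 130 KB
pass2_total = pass2_code + shared   # 140 KB

# The partition only has to hold the larger overlay, not the full 200 KB.
min_partition = max(pass1_total, pass2_total)
print(min_partition)  # 140
```

The point of the example: the program totals 200 KB, but because the two passes never run at the same time, a 140 KB partition suffices.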

Memory Allocation

Contiguous Allocation Contiguous memory allocation is a technique where the operating system allocates a contiguous block of memory to a process. This memory is allocated in a single, continuous chunk, making it easy for the operating system to manage and for the process to access the memory. Contiguous memory allocation is suitable for systems with limited memory sizes and where fast access to memory is important.

Contiguous memory allocation can be done in two ways Fixed Partitioning − In fixed partitioning, the memory is divided into fixed-size partitions, and each partition is assigned to a process. This technique is easy to implement but can result in wasted memory if a process does not fit perfectly into a partition. Dynamic Partitioning − In dynamic partitioning, the memory is divided into variable-size partitions, and each partition is assigned to a process. This technique is more efficient, as it allocates only the required memory to the process, but it requires more overhead to keep track of the available memory.
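Dynamic partitioning is usually driven by a placement policy such as first fit. The sketch below is a minimal first-fit allocator over a free list; the hole positions and sizes are invented for illustration:

```python
# Minimal first-fit allocator sketching dynamic partitioning: each request
# carves a variable-size partition out of the first hole big enough for it.
# The free list holds (start, size) holes; values are example numbers.

free_list = [(0, 100), (300, 50), (500, 200)]

def first_fit(free_list, request):
    for i, (start, size) in enumerate(free_list):
        if size >= request:
            if size == request:
                free_list.pop(i)          # hole consumed exactly
            else:
                # Shrink the hole: allocate from its front.
                free_list[i] = (start + request, size - request)
            return start
    return None  # no hole large enough: external fragmentation

print(first_fit(free_list, 120))  # 500 (the two smaller holes are skipped)
```

Note how repeated allocations and frees of different sizes leave behind exactly the scattered small holes that the external-fragmentation discussion below describes.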

Advantages of Contiguous Memory Allocation Simplicity − Contiguous memory allocation is a relatively simple and straightforward technique for memory management. It requires less overhead and is easy to implement. Efficiency − Contiguous memory allocation is an efficient technique for memory management. Once a process is allocated contiguous memory, it can access the entire memory block without any interruption. Low fragmentation − Since the memory is allocated in contiguous blocks, there is a lower risk of memory fragmentation. This can result in better memory utilization, as there is less memory wastage.

Disadvantages of Contiguous Memory Allocation Limited flexibility − Contiguous memory allocation is not very flexible as it requires memory to be allocated in a contiguous block. This can limit the amount of memory that can be allocated to a process. Memory wastage − If a process requires a memory size that is smaller than the contiguous block allocated to it, there may be unused memory, resulting in memory wastage. Difficulty in managing larger memory sizes − As the size of memory increases, managing contiguous memory allocation becomes more difficult. This is because finding a contiguous block of memory that is large enough to allocate to a process becomes challenging. External Fragmentation − Over time, external fragmentation may occur as a result of memory allocation and deallocation, which may result in non-contiguous blocks of free memory scattered throughout the system.

Non-contiguous Memory Allocation Non-contiguous memory allocation, on the other hand, is a technique where the operating system allocates memory to a process in non-contiguous blocks. The blocks of memory allocated to the process need not be contiguous, and the operating system keeps track of the various blocks allocated to the process. Non-contiguous memory allocation is suitable for larger memory sizes and where efficient use of memory is important.

Advantages of Non-Contiguous Memory Allocation Reduced External Fragmentation − One of the main advantages of non-contiguous memory allocation is that it can reduce external fragmentation, as memory can be allocated in small, non-contiguous blocks. Increased Memory Utilization − Non-contiguous memory allocation allows for more efficient use of memory, as small gaps in memory can be filled with processes that need less memory. Flexibility − This technique allows for more flexibility in allocating and deallocating memory, as processes can be allocated memory that is not necessarily contiguous. Memory Sharing − Non-contiguous memory allocation makes it easier to share memory between multiple processes, as memory can be allocated in non-contiguous blocks that can be shared between multiple processes.

Disadvantages of Non-Contiguous Memory Allocation Internal Fragmentation − One of the main disadvantages of non-contiguous memory allocation is that it can lead to internal fragmentation, as memory can be allocated in small, non-contiguous blocks that are not fully utilized. Increased Overhead − This technique requires more overhead than contiguous memory allocation, as the operating system needs to maintain data structures to track memory allocation. Slower Access − Access to memory can be slower than contiguous memory allocation, as memory can be allocated in non-contiguous blocks that may require additional steps to access.

Paging Paging is a memory management technique in which the process address space is broken into blocks of the same size called pages (the size is a power of 2, typically between 512 bytes and 8192 bytes). Paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages. The main idea behind paging is to divide each process into pages and to divide main memory into frames of the same size. One page of the process is stored in one frame of memory. The pages can be stored at different locations in memory, though the system prefers to find contiguous frames or holes where possible. Pages of a process are brought into main memory only when they are required; otherwise they reside in secondary storage.
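Because the page size is a power of two, a logical address splits cleanly into a page number (high bits) and an offset (low bits). A minimal sketch, assuming a 4 KB page size and an invented page table:

```python
# Sketch of paging address translation. Page size is 4 KB (2**12), so the
# offset is the low 12 bits of the logical address. The page-table contents
# below are made-up example frame numbers.

PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE    # high bits select the page
    offset = logical_addr % PAGE_SIZE   # low bits pass through unchanged
    frame = page_table[page]            # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(8300))  # page 2, offset 108 -> frame 7 -> 28780
```

The translation never needs the frames to be adjacent, which is why paging gives non-contiguous allocation with no external fragmentation.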

Paging

Segmentation Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module that contains pieces that perform related functions. Each segment is actually a different logical address space of the program. When a process is to be executed, its corresponding segments are loaded into non-contiguous memory, though every segment is loaded into a contiguous block of available memory. Segmentation works very similarly to paging, but segments are of variable length whereas pages are of fixed size. A program segment contains the program’s main function, utility functions, data structures, and so on. The operating system maintains a segment map table for every process and a list of free memory blocks, along with segment numbers, their sizes and corresponding memory locations in main memory. For each segment, the table stores the starting address of the segment and the length of the segment. A reference to a memory location includes a value that identifies a segment and an offset.
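The segment-table lookup just described can be sketched as follows. The base and limit values are invented example numbers:

```python
# Sketch of segmentation address translation: a logical address is a
# (segment, offset) pair; the segment table maps each segment to a
# (base, limit) entry, and the offset is checked against the limit.

segment_table = {
    0: (1400, 1000),  # segment number -> (base address, segment length)
    1: (6300, 400),
    2: (4300, 1100),
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # Out-of-range offsets trap to the operating system.
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))  # 4300 + 53 = 4353
```

Unlike paging, the offset must be bounds-checked because segments have variable lengths, so the hardware cannot rely on a fixed power-of-two split.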

Segmentation

Advantages of Segmentation Using segmentation, system efficiency can be improved by allowing various sub-parts to run on separate processors. Processes in the operating system are performed concurrently, giving a powerful way of responding to system requests. The CPU is utilized to the maximum for better usage. The problem caused by internal fragmentation is resolved with the help of segmentation. Tracking segments is possible using the segment table, although this takes some memory. Compared to paging, segmentation involves less processing overhead. Moving segments on a disc is easier than moving entire address spaces. Segment tables use less memory than page tables.

Disadvantages If the overall storage capacity is used and some memory is left behind unused, segmentation may experience external fragmentation. As a result, allocating adjacent memory to partitions of different sizes may be challenging. Segmented memory allocation can be costly. Because segments are of different sizes, swapping them is more difficult.

Difference between Paging and Segmentation