OS_Unit-4 Operating system class notes for the reference of the semester exams.pdf

Uploaded by downloadingmoviesato · 79 slides · Oct 15, 2024

About This Presentation

Operating system


Slide Content

MEMORY MANAGEMENT
AND
VIRTUAL MEMORY
MANAGEMENT
Mrs Venkata Naga Rani Bandaru, Assistant Professor, Department of Information
Technology

Memory Management






Introduction
Swapping
Contiguous Memory allocation
Paging
Structure of the page table
Segmentation


Introduction





Basic Hardware
Address Binding
Logical vs Physical Address Space
Dynamic Loading
Dynamic Linking & Shared Libraries


Basic Hardware






Main memory and registers are the only storage that the
CPU can access directly.
A program must be brought (from disk) into main
memory and placed within a process for it to be
run.
Register access takes one CPU clock cycle (or less).
Main memory access can take many cycles.
Cache sits between main memory and CPU registers.
Protection of memory is required to ensure correct
operation.


Base and Limit Registers
◦A pair of base and limit registers define the logical address space


Address Binding









Symbolic names → Logical names → Physical names
Symbolic Names: known in a context or path
file names, program names, printer/device names, user
names
Logical Names: used to label a specific entity
job number, major/minor device numbers, process id
(pid), uid, gid..
Physical Names: address of entity
address on disk or memory
entry point or variable address
PCB address


Address Binding




Address binding of instructions and data to memory
addresses can happen at three different stages
Compile time: If memory location known a priori,
absolute code can be generated; must recompile code if
starting location changes
Load time: Must generate relocatable code if memory
location is not known at compile time
Execution time: Binding delayed until run time if the
process can be moved during its execution from one
memory segment to another. Need hardware support
for address maps (e.g., base and limit registers)

Multi Step Processing of a User Program

Binding Time Tradeoffs












Early binding
compiler - produces efficient code
allows checking to be done early
allows estimates of running time and space
Delayed binding
Linker, loader
produces efficient code, allows separate compilation
portability and sharing of object code
Late binding
VM, dynamic linking/loading, overlaying, interpreting
code less efficient, checks done at runtime
flexible, allows dynamic reconfiguration


Logical vs. Physical Address Space




Logical address – generated by the CPU; also referred
to as virtual address
Physical address – address seen by the memory unit

Logical and physical addresses are the same in compile-
time and load-time address-binding schemes;
logical (virtual) and physical addresses differ in
execution-time address-binding scheme

Memory Management Unit (MMU)



Hardware device that maps virtual addresses to physical addresses

In MMU scheme, the value in the relocation register is added to every
address generated by a user process at the time it is sent to memory

The user program deals with logical addresses; it never sees the real
physical addresses
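A minimal Python sketch of this dynamic relocation, assuming illustrative register values (the 14000/346 figures are just an example):

```python
# Sketch of MMU-style dynamic relocation: the relocation (base) register is
# added to every logical address; the limit register bounds the legal range.

def translate(logical_addr, relocation_reg, limit_reg):
    """Map a logical address to a physical address, trapping on overflow."""
    if logical_addr >= limit_reg:
        raise MemoryError("addressing error: logical address out of range")
    return relocation_reg + logical_addr

# A process loaded at physical address 14000 with a 3000-byte limit:
print(translate(346, 14000, 3000))   # -> 14346
```

The user program only ever sees the logical address 346; the MMU produces 14346 on the way to memory.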


Dynamic Relocation Using a Relocation Register

Dynamic Loading





Routine is not loaded until it is called

Better memory-space utilization; unused routine is never loaded

Useful when large amounts of code are needed to handle infrequently
occurring cases

No special support from the operating system is required; dynamic loading is
implemented through program design
https://www.youtube.com/watch?v=lWVQsld8hMI


Dynamic Linking






Linking postponed until execution time

Small piece of code, stub, used to locate the appropriate memory-resident
library routine

Stub replaces itself with the address of the routine, and executes the
routine

The operating system is needed to check whether the routine is in the
process's memory address space

Dynamic linking is particularly useful for libraries

System also known as shared libraries

Swapping
A process must be in memory to be executed.
A process can be swapped temporarily out of memory to a backing store.
For example, assume a multiprogramming environment with a round-robin
CPU-scheduling algorithm.
A variant of this swapping policy is used for priority based scheduling
algorithms.
This variant of swapping is sometimes called roll out, roll in.


Swapping Cont..
Swapping requires a backing store.
Whenever the CPU scheduler decides to execute a process, it calls the
dispatcher.
The dispatcher checks to see whether the next process in the queue is in
memory.
If it is not, the dispatcher swaps out a process currently in memory and
swaps in the desired process.
It then reloads registers and transfers control to the selected process.



Contiguous Memory Allocation
Introduction
Memory Mapping and Protection
Memory Allocation
Fragmentation

Contiguous Memory Allocation
Contiguous: sharing a common border or together in sequence.
The memory is usually divided into two partitions: one for the resident
operating system and one for the user processes.
We can place the operating system in either low memory or high memory.



Memory Mapping and Protection
When the CPU scheduler selects a process for execution, the dispatcher
loads the relocation and limit registers with the correct values
Because every address generated by the CPU is checked against these
registers, we can protect both the operating system and the other users'
programs and data.


Memory Allocation
Variable Partition Scheme(Dynamic Partitioning):
Partitions are of variable length and number
Each process is allocated exactly as much memory as it
requires
Hole – block of available memory; holes of various size
are scattered throughout memory
Eventually holes are formed in main memory. This is
called external fragmentation
Operating system maintains information about:
a) allocated partitions b) free partitions (hole)



Memory Allocation Cont..
How to satisfy a request of size n from a list of free
holes:
First-fit: Allocate the first hole that is big enough.
Best-fit: Allocate the smallest hole that is big enough;
must search entire list, unless ordered by size. Produces
the smallest leftover hole
Worst-fit: Allocate the largest hole; must also search
entire list
Produces the largest leftover hole

First-fit and best-fit better than worst-fit in terms of
speed and storage utilization
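The three placement strategies above can be sketched in Python; the function and variable names here are illustrative assumptions, with each free hole represented as a (start, size) pair:

```python
# First-fit, best-fit, and worst-fit placement over a list of free holes.

def allocate(holes, n, strategy="first"):
    """Return the start address of an allocated block of size n, or None."""
    candidates = [h for h in holes if h[1] >= n]
    if not candidates:
        return None
    if strategy == "first":
        hole = candidates[0]                        # first hole big enough
    elif strategy == "best":
        hole = min(candidates, key=lambda h: h[1])  # smallest leftover hole
    else:                                           # "worst"
        hole = max(candidates, key=lambda h: h[1])  # largest leftover hole
    holes.remove(hole)
    start, size = hole
    if size > n:                                    # keep the leftover as a new hole
        holes.append((start + n, size - n))
    return start

holes = [(0, 100), (200, 500), (800, 200)]
print(allocate(holes, 150, "best"))   # -> 800 (the 200-byte hole, smallest fit)
```

Note how best-fit leaves a tiny 50-byte hole behind, while worst-fit would have left a large, more reusable one.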



Fragmentation
◦As processes are loaded and removed from memory, the free memory
space is broken into little pieces. Over time, processes cannot be
allocated to these memory blocks because the blocks are too small, and the
memory blocks remain unused. This problem is known as
fragmentation.

Fragmentation
External Fragmentation – Total memory space exists to satisfy a request,
but it is not contiguous
Internal Fragmentation – allocated memory may be slightly larger than
requested memory; this size difference is memory internal to a partition,
but not being used.
Reduce external fragmentation by compaction
Shuffle memory contents to place all free memory together in one large
block


Paging
Introduction
Hardware Support
Protection
Shared Pages


Introduction
Paging is a memory-management scheme
that permits the physical address space of
a process to be noncontiguous.
The basic method for paging involves
breaking physical memory into fixed-sized
blocks called frames and breaking logical
memory into blocks of the same size called
pages.



Cont…
Every address generated by the CPU is divided
into two parts: a page number (p) and a page
offset (d).
The page number is used as an index into a page
table.
The page table contains the base address of each
page in physical memory.
This base address is combined with the page
offset to define the physical memory address
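A small Python sketch of this translation, assuming a toy 4-byte page size and an illustrative page table so the arithmetic is easy to follow:

```python
# Paged address translation: split the logical address into page number (p)
# and offset (d), look up the frame, and recombine.

PAGE_SIZE = 4                           # bytes per page/frame (toy value)
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # page number -> frame number

def paged_translate(logical_addr):
    p = logical_addr // PAGE_SIZE       # page number: index into the page table
    d = logical_addr % PAGE_SIZE        # page offset
    frame = page_table[p]
    return frame * PAGE_SIZE + d        # physical address

print(paged_translate(13))              # page 3, offset 1 -> frame 2 -> 9
```

With power-of-two page sizes, the divide and modulo reduce to a shift and a mask in hardware.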



Free Frames

Hardware Support





Page table is kept in main memory
Page-table base register (PTBR) points to the page table
Page-table length register (PTLR) indicates size of the page table
In this scheme every data/instruction access requires two memory
accesses. One for the page table and one for the data/instruction.
The two memory access problem can be solved by the use of a special
fast-lookup hardware cache called associative memory or translation look-
aside buffers (TLBs)



Hardware Support Cont…
The percentage of times that a particular page number is found in the
TLB is called the hit ratio.
An 80-percent hit ratio means that we find the desired page number in
the TLB 80 percent of the time.
If it takes 20 nanoseconds to search the TLB and 100 nanoseconds to
access memory, then a mapped-memory access takes 120 nanoseconds
when the page number is in the TLB.


Hardware Support Cont…
If we fail to find the page number in the TLB (20 nanoseconds), then we
must first access memory for the page table and frame number (100
nanoseconds) and then access the desired byte in memory (100
nanoseconds), for a total of 220 nanoseconds.
To find the effective memory-access time, we weight each case by its
probability:

effective access time = 0.80 x 120 + 0.20 x 220
= 140 nanoseconds.
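The same weighted-average calculation, written as a small function (names are illustrative):

```python
# Effective access time with a TLB: a hit costs one memory access,
# a miss costs a page-table access plus the data access.

def effective_access_time(hit_ratio, tlb_ns, mem_ns):
    hit = tlb_ns + mem_ns            # TLB hit: 20 + 100 = 120 ns in the example
    miss = tlb_ns + 2 * mem_ns       # TLB miss: 20 + 100 + 100 = 220 ns
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.80, 20, 100))   # ≈ 140 ns, as computed above
```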


Protection
Memory protection implemented by associating
protection bit with each frame

Valid-invalid bit attached to each entry in the page
table.
Valid indicates that the associated page is in the
process’ logical address space, and is thus a legal page
Invalid indicates that the page is not in the process’
logical address space



Shared Pages
Shared code
One copy of read-only (reentrant) code shared among processes (i.e., text
editors, compilers, window systems).

Private code and data
Each process keeps a separate copy of the code and data



Structure of the Page Table



Hierarchical Paging
Hashed Page Tables
Inverted Page Tables

Hierarchical Paging



Break up the logical address space into multiple page tables

A simple technique is a two-level page table


Two-Level Page Table Scheme

Two-Level Paging Example







A logical address (on a 32-bit machine with a 1K page size) is divided
into:
a page number consisting of 22 bits
a page offset consisting of 10 bits

Since the page table is paged, the page number is further divided
into:
a 12-bit page number
a 10-bit page offset

Thus, a logical address is as follows:

       page number         page offset
   +------------+---------+-----------+
   |     p1     |   p2    |     d     |
   +------------+---------+-----------+
      12 bits     10 bits    10 bits

where p1 is an index into the outer page table, and p2 is the
displacement within the page of the outer page table
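The bit-slicing for this 12/10/10 layout can be sketched with shifts and masks (function name is an assumption):

```python
# Split a 32-bit logical address into outer index p1 (12 bits),
# inner index p2 (10 bits), and page offset d (10 bits).

def split_two_level(addr):
    d = addr & 0x3FF            # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF   # next 10 bits: inner page-table index
    p1 = addr >> 20             # top 12 bits: outer page-table index
    return p1, p2, d

print(split_two_level(0x12345678))   # -> (291, 277, 632)
```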

Address Translation Scheme

Three-Level Paging Scheme

Hashed Page Tables





Common in address spaces > 32 bits

The virtual page number is hashed into a page table
This page table contains a chain of elements hashing
to the same location

Virtual page numbers are compared in this chain
searching for a match
If a match is found, the corresponding physical
frame is extracted
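A sketch of this hash-and-chain lookup in Python; the table size and entries are illustrative assumptions:

```python
# Hashed page table with chaining: hash the virtual page number (vpn),
# then walk the chain comparing vpns until a match is found.

TABLE_SIZE = 8
hashed_table = [[] for _ in range(TABLE_SIZE)]   # each slot: chain of (vpn, frame)

def insert(vpn, frame):
    hashed_table[vpn % TABLE_SIZE].append((vpn, frame))

def lookup(vpn):
    for v, frame in hashed_table[vpn % TABLE_SIZE]:  # walk the chain
        if v == vpn:                                 # compare virtual page numbers
            return frame                             # match: extract the frame
    return None                                      # no match: page fault

insert(3, 7); insert(11, 2)    # vpns 3 and 11 collide (both hash to slot 3)
print(lookup(11))              # -> 2
```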



Inverted Page Tables




One entry for each real page of memory

Entry consists of the virtual address of the page stored in that real
memory location, with information about the process that owns that page

Decreases memory needed to store each page table, but increases time
needed to search the table when a page reference occurs

Use hash table to limit the search to one — or at most a few — page-
table entries


Inverted Page Table Architecture

Segmentation



Memory-management scheme that supports user view of memory
A program is a collection of segments
A segment is a logical unit such as:
--main program
--procedure
--method
--object
--local variables, global
variables
--common block
--stack
--symbol table
--arrays

User’s View of a Program

Logical View of Segmentation
(figure: segments 1–4 of the program's logical view placed into physical memory in a different order)

Segmentation Architecture






Logical address consists of a two-tuple:
<segment-number, offset>

Segment table – maps two-dimensional user addresses into one-dimensional
physical addresses; each table entry has:
base – contains the starting physical address where the segments reside in
memory
limit – specifies the length of the segment

Segment-table base register (STBR) points to the segment table’s location in
memory

Segment-table length register (STLR) indicates number of segments used by a
program;
segment number s is legal if s < STLR
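The base/limit translation can be sketched as follows; the segment-table contents are illustrative assumptions:

```python
# Segment-table translation: the offset must be within the segment's limit,
# then the base is added to produce the physical address.

segment_table = {0: (1400, 1000), 1: (6300, 400)}   # s -> (base, limit)

def seg_translate(s, offset):
    base, limit = segment_table[s]
    if offset >= limit:                 # protection check against the limit
        raise MemoryError("trap: segment offset out of range")
    return base + offset

print(seg_translate(1, 53))             # -> 6353
```

Unlike paging, the offset is checked against a per-segment limit, because segments vary in length.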

Segmentation Architecture







Protection
With each entry in segment table associate:
validation bit = 0 → illegal segment
read/write/execute privileges

Protection bits associated with segments; code sharing occurs at
segment level

Since segments vary in length, memory allocation is a dynamic
storage-allocation problem

A segmentation example is shown in the following diagram


Segmentation Hardware

Example of Segmentation

Virtual Memory Management





Virtual Memory
Demand Paging
Page Replacement Algorithms
Allocation of Frames
Thrashing


Virtual Memory







Virtual memory – separation of user logical memory from physical memory.
Only part of the program needs to be in memory for execution
Logical address space can therefore be much larger than physical address space
Allows address spaces to be shared by several processes
Allows for more efficient process creation

Virtual memory can be implemented via:
Demand paging
Demand segmentation


Demand Paging


Basic Concepts
Performance of Demand Paging

Basic Concepts







Bring a page into memory only when it is needed
Less I/O needed
Less memory needed
Faster response
More users

Page is needed --> reference to it
invalid reference --> abort
Not-in-memory --> bring to memory

Lazy swapper – never swaps a page into memory unless page will be needed
Swapper that deals with pages is a pager



Performance of Demand Paging







If there is a reference to a page, the first reference to that page will
trap to the operating system: a page fault

Operating system looks at another table to decide:
Invalid reference → abort
Get empty frame
Swap page into frame
Reset tables
Set validation bit = v
Restart the instruction that caused the page fault



Performance of Demand Paging
Page Fault Rate: 0 ≤ p ≤ 1.0
if p = 0 no page faults
if p = 1, every reference is a fault

Effective Access Time (EAT):
EAT = (1 – p) × memory access
      + p × (page fault overhead
             + swap page out
             + swap page in
             + restart overhead)
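The formula as a one-line sketch; the timing constants are illustrative assumptions (200 ns memory access, 8 ms total fault-service time covering the overheads and swaps above):

```python
# Demand-paging effective access time, weighting the fault case by p.

def demand_eat(p, mem_ns=200, fault_service_ns=8_000_000):
    return (1 - p) * mem_ns + p * fault_service_ns

print(demand_eat(0.001))    # one fault per 1000 accesses: ≈ 8200 ns
```

Even a 0.1% fault rate slows memory access by roughly a factor of 40 under these assumptions, which is why the fault rate must be kept very low.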


Allocation of Frames



Introduction
Equal vs. Proportional Allocation
Global vs. Local Replacement

Introduction










Each process needs a minimum number of frames
Example: IBM 370 – 6 pages to handle the SS MOVE instruction:
the instruction is 6 bytes, so it might span 2 pages
2 pages to handle the from operand
2 pages to handle the to operand
Maximum, of course, is the total number of frames in the system
Two major allocation schemes
Fixed allocation
Proportional allocation


Equal vs. Proportional Allocation



Equal Allocation – For example, if there are 100
frames (after allocating frames for the OS) and 5
processes, give each process 20 frames

Proportional Allocation – Allocate according to the
size of process
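Proportional allocation can be sketched as follows: process i of size s_i gets roughly (s_i / S) × m of the m available frames, where S is the total size (names are assumptions):

```python
# Proportional frame allocation by process size, with integer truncation.

def proportional_allocation(sizes, m):
    total = sum(sizes)
    return [s * m // total for s in sizes]

# 62 frames split between a 10-page and a 127-page process:
print(proportional_allocation([10, 127], 62))   # -> [4, 57]
```

Truncation can leave a frame or two unassigned (4 + 57 = 61 here); a real allocator would hand the remainder out by some tie-breaking rule.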



Global vs. Local Replacement





Global replacement – process selects a replacement frame from the set of
all frames; one process can take a frame from another
But then process execution time can vary greatly
But greater throughput so more common

Local replacement – each process selects from only its own set of allocated
frames
More consistent per-process performance
But possibly underutilized memory

Thrashing






A process is thrashing if it is spending more time paging than executing
If the process does not have the minimum number of frames it will quickly
page-fault.
At this point, it must replace some page.
However, since all its pages are in active use, it must replace a page that will
be needed again right away.
Consequently, it quickly faults again, and again, and again, replacing pages
that it must bring back in immediately.
This high paging activity is called thrashing.




Page Fault Frequency







Thrashing has a high page-fault rate.
When the page fault rate is too high, we know that the process needs
more frames.
Conversely, if the page-fault rate is too low, then the process may have
too many frames.
We can establish upper and lower bounds on the desired page-fault rate.
If the actual page-fault rate exceeds the upper limit, we allocate the
process another frame;
if the page-fault rate falls below the lower limit, we remove a frame
from the process.
Thus, we can directly measure and control the page-fault rate to prevent
thrashing
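The control policy above can be sketched as a small function; the specific bounds here are assumptions, not values from the notes:

```python
# Page-fault-frequency control: grow the allocation when the fault rate
# exceeds the upper bound, shrink it when the rate falls below the lower bound.

def adjust_frames(frames, fault_rate, lower=0.02, upper=0.10):
    if fault_rate > upper:
        return frames + 1          # faulting too often: allocate another frame
    if fault_rate < lower:
        return max(1, frames - 1)  # too many frames: remove one
    return frames                  # within the acceptable band: no change

print(adjust_frames(10, 0.15))     # -> 11
```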
