operating system module 2 presentation notes

ksamjish 141 views 72 slides May 10, 2024

About This Presentation

operating system notes


Slide Content

MODULE 2 PROCESSES

Module 2: Processes
Process states, process control block, threads, scheduling
Operations on processes: process creation and termination
Inter-process communication: shared-memory systems, message-passing systems
Process scheduling: basic concepts, scheduling criteria
Scheduling algorithms: first-come first-served, shortest job first, priority scheduling, round-robin scheduling

Objectives
To introduce the notion of a process -- a program in execution, which forms the basis of all computation
To describe the various operations on processes
To explore inter-process communication using shared memory and message passing
To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
To describe various CPU-scheduling algorithms

PROCESS BASICS

Process A process is a program in execution.

Process A process is a program in execution, and process execution must progress in sequential fashion. A program is a passive entity stored on disk (an executable file); a process is an active entity. A program becomes a process when its executable file is loaded into memory. Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.

Process Concept (cont.) A process in main memory consists of multiple parts:
Text section – the program code
Data section – global and static variables
Stack – temporary data: function parameters, return addresses, local variables
Heap – memory dynamically allocated at run time (managed via calls to new, delete, malloc, free, etc.)
Note that the stack and the heap start at opposite ends of the process's free space and grow towards each other. If they should ever meet, then either a stack overflow error will occur, or a call to new or malloc will fail due to insufficient available memory.

Process State As a process executes, it changes state:
new – the process is being created
running – instructions are being executed
waiting – the process is waiting for some event to occur
ready – the process is waiting to be assigned to a processor
terminated – the process has finished execution

Process Control Block (PCB) Each process is represented in the operating system by a PCB, which holds the information associated with that process. It is also called a task control block. The PCB is kept in a memory area protected from normal user access, because it contains important process information.

Process Control Block (PCB) The PCB includes:
Process state
Process number – process id
Program counter – address of the next instruction to execute
CPU registers – contents of all process-centric registers
Memory-management information – memory allocated to the process
List of open files
CPU-scheduling information – priorities, scheduling queue pointers
Accounting information – CPU used, clock time elapsed since start, time limits
I/O status information – I/O devices allocated to the process

Process Scheduling Process scheduling is the activity of the OS that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is an essential part of multiprogramming operating systems: such systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing. The goals are to maximize CPU use and to switch processes onto the CPU quickly for time sharing. The process scheduler selects among available processes for the next execution on the CPU.

Scheduling Queues The OS maintains scheduling queues of processes: a separate queue for each process state, with the PCBs of all processes in the same execution state placed in the same queue. When the state of a process changes, its PCB is unlinked from its current queue and moved to its new state queue. The different scheduling queues are:
Job queue – set of all processes in the system; a process is entered into the job queue when it is created
Ready queue – set of all processes residing in main memory, ready and waiting to execute
Device queues – sets of processes waiting for an I/O device
Processes migrate among the various queues.

Ready Queue And Various I/O Device Queues Each device has its own device queue.

Queuing Diagram - Representation of Process Scheduling Queueing diagram represents queues, resources, flows

Schedulers Schedulers are special system software which handle process scheduling. Types of process schedulers:
Short-term scheduler
Long-term scheduler
Medium-term scheduler
Short-term scheduler (or CPU scheduler): selects which process should be executed next and allocates the CPU. It is invoked frequently (milliseconds) ⇒ must be fast.

Schedulers Long-term scheduler (or job scheduler): selects which processes should be brought into the ready queue. It is invoked infrequently (seconds, minutes) ⇒ may be slow. The long-term scheduler controls the degree of multiprogramming (the number of processes in memory). Medium-term scheduler: a running process may become suspended if it makes an I/O request, and a suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. Removing a process from memory, storing it on disk, and later bringing it back in from disk to continue execution is called swapping. The medium-term scheduler takes care of process swapping, and it reduces the degree of multiprogramming.

Schedulers Processes can be described as either: I/O-bound – spends more time doing I/O than computations; many short CPU bursts. CPU-bound – spends more time doing computations; few very long CPU bursts. The long-term scheduler strives for a good process mix.

Context Switch Storing the state of the old process and loading the saved state of the new process when the CPU switches to another process is called context switching. This allows multiple processes to share a single CPU and is an essential feature of a multitasking operating system. The context of a process is represented in its PCB.

CPU Switch From Process to Process Context-switch time is overhead; the system does no useful work while switching. The more complex the OS and the PCB ⇒ the longer the context switch.

Tutorial - 3 Threads in Operating System Process vs Thread

OPERATIONS ON PROCESSES

Operations on Processes Operations on processes are: process creation process termination

Process Creation Process creation means the construction of a new process for execution. It may be performed by the system, by a user, or by an existing process. A parent process creates child processes, which in turn create other processes, forming a tree of processes. Generally, processes are identified and managed via a process identifier (pid).

A Tree of Processes in Linux Reading Activity – Page 114, para 3

Process Creation (cont.) Resource (CPU time, memory, files, I/O devices) sharing options:
Parent and children share all resources
Children share a subset of the parent's resources
Parent and child share no resources
Execution options:
Parent and children execute concurrently
Parent waits until children terminate
Address-space options:
Child is a duplicate of the parent (child has the same program and data as the parent)
Child has a new program loaded into it

Process Creation (cont.) UNIX examples: fork() and exec()

fork() system call Creates a new process. The created process is called the child process; it runs concurrently with the process that makes the fork() call (the parent process). After a new child process is created, both processes execute the next instruction following the fork() system call. Their process IDs are different. The child process starts with copies of the parent's program counter, CPU registers, and open files.

Total number of processes = 2^n, where n is the number of fork() system calls (the original process plus 2^n − 1 created children).

fork() system call It takes no parameters and returns an integer value:
Negative value – creation of a child process was unsuccessful
Zero – returned to the newly created child process
Positive value – returned to the parent (caller); the value is the process ID of the newly created child process

exec() system call The exec() system call is used after a fork() to replace the process's memory space with a new program.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main() {
    int status;
    int childpid;
    int pid_child;
    if ((childpid = fork()) == -1) {
        printf("\nCan't fork\n");
    } else if (childpid == 0) {
        printf("\nI am the child with pid=%d\n", getpid());
        execl("/bin/ls", "ls", NULL);  /* replaces the child's image */
        exit(0);
    } else {
        pid_child = wait(&status);
        printf("I am the parent with pid=%d\n", getpid());
        printf("Child with process id=%d completed successfully\n", pid_child);
    }
    return 0;
}

C Program Forking Separate Process https://www.youtube.com/watch?v=IFEFVXvjiHY

Process Termination A process executes its last statement and then asks the operating system to delete it using the exit() system call. Status data is returned from the child to the parent (via wait()), and the process's resources are deallocated by the operating system. A parent may terminate the execution of a child process using the abort() system call. Some reasons for doing so:
The child has exceeded its allocated resources
The task assigned to the child is no longer required
The parent is exiting, and the operating system does not allow a child to continue if its parent terminates

Process Termination (cont.) The parent process may wait for the termination of a child process by using the wait() system call. The call returns status information and the pid of the terminated process: pid = wait(&status); Some operating systems do not allow a child to exist if its parent has terminated: if a process terminates, then all its children must also be terminated. This is cascading termination – all children, grandchildren, etc. are terminated, and the termination is initiated by the operating system.

Process Termination (cont.) Zombie process: a process whose execution is completed but which still has an entry in the process table. Once the parent process reads the child's exit status, the zombie process is eliminated from the process table; this is known as reaping the zombie process. Orphan process: a process that is still executing but whose parent has died.

Other operations on processes Process blocking: when a process invokes an input-output system call, the OS blocks that process. Process preemption: when a timeout occurs (meaning the process has not terminated within the allotted time interval and the next process is ready to execute), the operating system preempts the process. This operation is only valid where CPU scheduling supports preemption.

Multiprocess Architecture – Chrome Browser Many web browsers ran as single process (some still do) If one web site causes trouble, entire browser can hang or crash Google Chrome Browser is multiprocess with 3 different types of processes: Browser process manages user interface, disk and network I/O Renderer process renders web pages, deals with HTML, Javascript. A new renderer created for each website opened Runs in sandbox restricting disk and network I/O, minimizing effect of security exploits Plug-in process for each type of plug-in Reading Activity

INTERPROCESS COMMUNICATION

Interprocess Communication Processes executing concurrently in the OS may be independent processes or cooperating processes. Independent processes cannot affect or be affected by other processes executing in the system. Cooperating processes can affect or be affected by other processes executing in the system; any process that shares data with another process is a cooperating process.

Interprocess Communication Reasons for cooperating processes: Information sharing Computation speedup Modularity Convenience

Interprocess Communication Interprocess communication (IPC) is the mechanism provided by the operating system that allows processes to communicate with each other. There are two models of IPC: Shared memory – a region of memory shared by the cooperating processes is established; processes can then exchange information by reading and writing data to the shared region. Message passing – communication takes place by means of messages exchanged between the cooperating processes.

IPC Models

IPC using shared memory system

Producer-Consumer Problem To illustrate the concept of cooperating processes, consider the producer-consumer problem, a common paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process. Examples: a compiler produces assembly code that is consumed by an assembler; an assembler produces object modules that are consumed by the loader; a web server provides HTML files and images that are consumed by a web browser.

Bounded-Buffer – Shared-Memory Solution Shared data:

#define BUFFER_SIZE 10
typedef struct {
    . . .
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

The solution is correct, but can only use BUFFER_SIZE-1 elements.

Bounded-Buffer – Producer

item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

Bounded Buffer – Consumer

item next_consumed;
while (true) {
    while (in == out)
        ;   /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}

Assignment 2 IPC using message passing

Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Real-Time CPU Scheduling Operating Systems Examples Algorithm Evaluation

Objectives To introduce CPU scheduling, which is the basis for multiprogrammed operating systems To describe various CPU-scheduling algorithms To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system To examine the scheduling algorithms of several operating systems

Basic Concepts Maximum CPU utilization obtained with multiprogramming CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait CPU burst followed by I/O burst CPU burst distribution is of main concern

Histogram of CPU-burst Times

CPU Scheduler The short-term scheduler selects from among the processes in the ready queue and allocates the CPU to one of them; the queue may be ordered in various ways. CPU-scheduling decisions may take place when a process: 1. Switches from running to waiting state 2. Switches from running to ready state 3. Switches from waiting to ready 4. Terminates Scheduling under 1 and 4 is nonpreemptive; all other scheduling is preemptive. For preemptive scheduling, consider access to shared data, preemption while in kernel mode, and interrupts occurring during crucial OS activities.

Dispatcher Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: switching context switching to user mode jumping to the proper location in the user program to restart that program Dispatch latency – time it takes for the dispatcher to stop one process and start another running

Scheduling Criteria CPU utilization – keep the CPU as busy as possible Throughput – # of processes that complete their execution per time unit Turnaround time – amount of time to execute a particular process Waiting time – amount of time a process has been waiting in the ready queue Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)

Scheduling Algorithm Optimization Criteria Max CPU utilization Max throughput Min turnaround time Min waiting time Min response time

First-Come, First-Served (FCFS) Scheduling
Process   Burst Time
P1        24
P2        3
P3        3
Suppose that the processes arrive in the order: P1, P2, P3. The Gantt chart for the schedule is: P1 (0-24), P2 (24-27), P3 (27-30). Waiting time for P1 = 0; P2 = 24; P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17

FCFS Scheduling (cont.) Suppose that the processes arrive in the order: P2, P3, P1. The Gantt chart for the schedule is: P2 (0-3), P3 (3-6), P1 (6-30). Waiting time for P1 = 6; P2 = 0; P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3 – much better than the previous case. Convoy effect: short processes stuck behind a long process; consider one CPU-bound process and many I/O-bound processes.

Shortest-Job-First (SJF) Scheduling Associate with each process the length of its next CPU burst Use these lengths to schedule the process with the shortest time SJF is optimal – gives minimum average waiting time for a given set of processes The difficulty is knowing the length of the next CPU request Could ask the user

Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3
SJF scheduling chart: P4 (0-3), P1 (3-9), P3 (9-16), P2 (16-24). Average waiting time = (3 + 16 + 9 + 0)/4 = 7 (the waiting times here treat all four processes as available at time 0).

Determining Length of Next CPU Burst Can only estimate the length – should be similar to the previous one Then pick process with shortest predicted next CPU burst Can be done by using the length of previous CPU bursts, using exponential averaging Commonly, α set to ½ Preemptive version called shortest-remaining-time-first

Prediction of the Length of the Next CPU Burst

Examples of Exponential Averaging
α = 0: τn+1 = τn – recent history does not count
α = 1: τn+1 = tn – only the actual last CPU burst counts
If we expand the formula, we get:
τn+1 = α tn + (1 − α) α tn−1 + … + (1 − α)^j α tn−j + … + (1 − α)^(n+1) τ0
Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.

Example of Shortest-Remaining-Time-First Now we add the concepts of varying arrival times and preemption to the analysis.
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Preemptive SJF Gantt chart: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26). Average waiting time = [(10−1) + (1−1) + (17−2) + (5−3)]/4 = 26/4 = 6.5 msec

Priority Scheduling A priority number (integer) is associated with each process The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority) Preemptive Nonpreemptive SJF is priority scheduling where priority is the inverse of predicted next CPU burst time Problem ≡ Starvation – low priority processes may never execute Solution ≡ Aging – as time progresses increase the priority of the process

Example of Priority Scheduling
Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
Priority scheduling Gantt chart: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19). Average waiting time = 8.2 msec

Round Robin (RR) Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n−1)q time units. A timer interrupts every quantum to schedule the next process. Performance: if q is large, RR degenerates to FIFO; if q is small, overhead is too high – q must be large with respect to the context-switch time.

Example of RR with Time Quantum = 4
Process   Burst Time
P1        24
P2        3
P3        3
The Gantt chart is: P1 (0-4), P2 (4-7), P3 (7-10), then P1 runs 10-14, 14-18, 18-22, 22-26, 26-30. Typically RR gives higher average turnaround than SJF, but better response. q should be large compared to the context-switch time: q is usually 10 ms to 100 ms, while a context switch takes < 10 μs.

Time Quantum and Context Switch Time

Turnaround Time Varies With The Time Quantum 80% of CPU bursts should be shorter than q

Multilevel Queue The ready queue is partitioned into separate queues, e.g.: foreground (interactive) and background (batch). Each process is permanently assigned to a given queue, and each queue has its own scheduling algorithm: foreground – RR; background – FCFS. Scheduling must also be done between the queues: Fixed-priority scheduling (i.e., serve all from foreground, then from background) – possibility of starvation. Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g., 80% to foreground in RR and 20% to background in FCFS.

Multilevel Queue Scheduling