PROCESS SCHEDULING
Contents: Introduction, Process Behaviour for Scheduling, Scheduling Levels, Process Scheduling Goals, Scheduling Algorithms
Introduction: In computer science, scheduling is the method by which processes or data flows are given access to system resources. This is usually done to share system resources effectively or to achieve a target quality of service. The need for a scheduling algorithm arises from the requirement for most modern systems to perform multitasking (executing more than one process at a time).
Process Behaviour for Scheduling: CPU burst – a period in which a process has long computations to be executed by the processor. I/O burst – a period in which the process performs or waits on an I/O operation. CPU-bound – a process with intensive CPU bursts, i.e. longer CPU cycles and a low number of I/O bursts. I/O-bound – a process with a large number of frequent I/O bursts between its shorter CPU bursts.
Long-term scheduling: Long-term scheduling is done when a new process is created. It initiates processes and so controls the degree of multiprogramming (the number of processes in memory).
Medium-term scheduling: Medium-term scheduling involves suspending or resuming processes by swapping (rolling) them out of or into memory.
Short-term scheduling: Short-term (process or CPU) scheduling occurs most frequently and decides which process to execute next.
Process Scheduling Goals: The goals of scheduling may be categorised as user-based scheduling goals and system-based scheduling goals. User-based goals are the criteria that benefit the user; for example, a user in a multi-user environment expects the system to respond quickly to a submitted job. System-based goals are the criteria that benefit the system, i.e. its performance; for example, how fully the processor is utilised in processing the processes.
User-Based Scheduling Goals: Turnaround time – the time to execute a process from start to completion, i.e. the sum of waiting time, executing time, and time spent blocked waiting for an event (for example, waiting for I/O). Waiting time – the time spent in the ready queue, directly affected by the scheduling algorithm. Response time – the time from submission of a request to production of the first response. Predictability – how well the user can estimate when a process will complete.
System-Based Scheduling Goals: Throughput – the number of processes that complete their execution per unit time. CPU utilization – the percentage of time the CPU is kept busy. Fairness – all processes should be treated equally unless priorities are set. Balance – there should be a balance between I/O-bound and CPU-bound processes.
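To make the user-based metrics concrete, here is a minimal Python sketch (not part of the slides) that computes turnaround, waiting, and response time, assuming turnaround = completion - arrival and waiting = turnaround - CPU burst; the per-process values are taken from the FCFS example that follows.

# Per-process arrival, CPU burst, first-dispatch, and completion times (FCFS example below).
processes = {
    "P1": {"arrival": 0, "burst": 24, "first_run": 0,  "completion": 24},
    "P2": {"arrival": 0, "burst": 3,  "first_run": 24, "completion": 27},
    "P3": {"arrival": 0, "burst": 3,  "first_run": 27, "completion": 30},
}

for name, p in processes.items():
    turnaround = p["completion"] - p["arrival"]   # start (arrival) to completion
    waiting = turnaround - p["burst"]             # time spent in the ready queue
    response = p["first_run"] - p["arrival"]      # submission to first response
    print(name, "turnaround:", turnaround, "waiting:", waiting, "response:", response)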
Scheduling Algorithms: First Come First Served (FCFS). Example: processes P1, P2, P3 with CPU burst times 24, 3, and 3, arriving in the order P1, P2, P3. The Gantt chart for the schedule is P1 from 0 to 24, P2 from 24 to 27, and P3 from 27 to 30. Waiting times: P1 = 0, P2 = 24, P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17.
First Come First Served: The arriving process is added onto the tail of the ready queue, and the process at the head of the queue is dispatched to the processor for execution. A process that has been interrupted by any means does not retain its previous position; when it becomes ready again it rejoins at the tail of the queue.
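A minimal FCFS sketch in Python, using the burst times from the example above and assuming all processes arrive at time 0 in queue order; it reproduces the waiting times and the average of 17.

from collections import deque

# Ready queue in arrival order: (name, CPU burst), values from the example above.
ready_queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])

clock = 0
waiting_times = {}
while ready_queue:                       # dispatch the process at the head of the queue
    name, burst = ready_queue.popleft()
    waiting_times[name] = clock          # time already spent waiting in the queue
    clock += burst                       # run to completion (non-preemptive)

print(waiting_times)                                     # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waiting_times.values()) / len(waiting_times))  # 17.0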
Priority Scheduling: Processes are executed according to their priority. 1. Priority-number-based scheduling – preference is given to processes based on a priority number assigned to each of them. 2. Shortest Process Next (SPN) – the process with the shortest expected execution time is executed first; it is a non-preemptive scheduling algorithm.
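A minimal Python sketch of both selection rules, assuming all listed processes are already in the ready queue; the names, priority numbers, and execution times are hypothetical.

# Each entry: (name, priority_number, expected_execution_time) -- hypothetical values.
ready = [("P1", 3, 10), ("P2", 1, 6), ("P3", 2, 2)]

# 1. Priority-number-based: pick the process with the best (here: lowest) priority number.
next_by_priority = min(ready, key=lambda p: p[1])

# 2. Shortest Process Next: pick the process with the shortest expected execution time.
next_by_spn = min(ready, key=lambda p: p[2])

print(next_by_priority[0], next_by_spn[0])   # P2 P3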
Shortest Remaining Time Next (SRN): This algorithm also considers the execution time of processes, as in SPN, but it is the preemptive version of SPN: the running process is preempted if a newly ready process has a shorter remaining time.
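A minimal preemptive sketch in Python, assuming hypothetical arrival times and bursts and a scheduling decision at every time unit.

# Hypothetical processes: name -> (arrival_time, total_burst).
procs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1)}
remaining = {name: burst for name, (_, burst) in procs.items()}

clock, order = 0, []
while remaining:
    # Processes that have arrived and still need CPU time.
    ready = [n for n in remaining if procs[n][0] <= clock]
    if not ready:
        clock += 1
        continue
    # Preemptive: always run the ready process with the shortest remaining time.
    current = min(ready, key=lambda n: remaining[n])
    order.append(current)
    remaining[current] -= 1
    if remaining[current] == 0:
        del remaining[current]
    clock += 1

print(order)   # execution trace, one entry per time unit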
Round Robin Scheduling: Time slices are assigned to each process in equal portions and in circular order, handling all processes without priority. The design is such that if a process finishes before its time slice expires, the timer stops and raises an interrupt so that the next process can be scheduled immediately.
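A minimal round-robin sketch in Python, assuming a hypothetical quantum of 4 and that all processes are in the ready queue at time 0.

from collections import deque

QUANTUM = 4   # assumed time-slice length
# Hypothetical ready queue: (name, remaining_burst).
queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])

trace = []
while queue:
    name, remaining = queue.popleft()
    run = min(QUANTUM, remaining)        # stop early if the process finishes first
    trace.append((name, run))
    remaining -= run
    if remaining > 0:                    # time slice expired: go to the back of the queue
        queue.append((name, remaining))

print(trace)   # (process, time run) pairs in dispatch order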
Improved Round Robin Scheduling: CPU consumption ratio = actual CPU time consumed / total estimated execution time. Schedule a process if its CPU consumption ratio is greater than 0.60; otherwise schedule the process whose CPU consumption ratio is minimum.
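A minimal Python sketch of this selection rule; the consumed and estimated times are hypothetical.

# Each entry: name -> (cpu_time_consumed, total_estimated_execution_time) -- hypothetical.
ready = {"P1": (7, 10), "P2": (2, 8), "P3": (5, 20)}

def consumption_ratio(name):
    consumed, estimated = ready[name]
    return consumed / estimated

# Prefer a process that has already consumed more than 60% of its estimated time;
# otherwise pick the process with the minimum CPU consumption ratio.
above_threshold = [n for n in ready if consumption_ratio(n) > 0.60]
nxt = above_threshold[0] if above_threshold else min(ready, key=consumption_ratio)
print(nxt)   # P1 (ratio 0.7)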
Highest Response Ratio Next (HRRN): Response ratio = time elapsed in the system / CPU time consumed by the process. The process with the highest response ratio is scheduled next.
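A minimal Python sketch of HRRN selection. Note that the common textbook formulation of the response ratio is (waiting time + expected service time) / expected service time, which is what this sketch uses; the waiting and service times shown are hypothetical.

# Hypothetical ready processes: name -> (waiting_time, expected_service_time).
ready = {"P1": (9, 3), "P2": (1, 10), "P3": (5, 5)}

def response_ratio(name):
    waiting, service = ready[name]
    return (waiting + service) / service

# Highest Response Ratio Next: pick the process with the largest ratio.
nxt = max(ready, key=response_ratio)
print(nxt, response_ratio(nxt))   # P1 4.0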
Multilevel Queue Scheduling: When processes can be readily categorised, multiple separate queues can be established, each implementing whatever scheduling algorithm is most appropriate for that type of job, and/or with different parametric adjustments. Scheduling must also be done between queues, that is, scheduling one queue to get time relative to other queues. Two common options are strict priority (no job in a lower-priority queue runs until all higher-priority queues are empty) and round robin (each queue gets a time slice in turn, possibly of different sizes). Note that under this algorithm jobs cannot switch from queue to queue: once they are assigned a queue, that is their queue until they finish.
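A minimal Python sketch of the strict-priority option with two hypothetical queues: no process in the lower-priority queue is dispatched while the higher-priority queue is non-empty.

from collections import deque

# Hypothetical fixed queues, e.g. system/interactive jobs (higher priority) and batch jobs.
system_queue = deque(["P1", "P2"])   # higher-priority queue
batch_queue = deque(["P3"])          # lower-priority queue

def pick_next():
    # Strict priority between queues: drain the higher queue before touching the lower one.
    if system_queue:
        return system_queue.popleft()
    if batch_queue:
        return batch_queue.popleft()
    return None

print(pick_next(), pick_next(), pick_next())   # P1 P2 P3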
Multilevel Feedback-Queue Scheduling: Multilevel feedback-queue scheduling is similar to the ordinary multilevel queue scheduling described above, except that jobs may be moved from one queue to another for a variety of reasons: if the characteristics of a job change between CPU-intensive and I/O-intensive, it may be appropriate to switch the job from one queue to another; aging can also be incorporated, so that a job that has waited for a long time can get bumped up into a higher-priority queue for a while. Multilevel feedback-queue scheduling is the most flexible, because it can be tuned for any situation, but it is also the most complex to implement because of all the adjustable parameters. Some of the parameters which define such a system include: the number of queues; the scheduling algorithm for each queue; the methods used to upgrade or demote processes from one queue to another (which may be different); and the method used to determine which queue a process enters initially.
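A minimal Python sketch of the feedback behaviour, with hypothetical quanta and processes: a process that uses its full quantum without finishing is demoted to the next lower-priority queue.

from collections import deque

QUANTA = [2, 4, 8]                        # assumed quantum per queue level (0 = highest)
queues = [deque(), deque(), deque()]
queues[0].extend([("P1", 9), ("P2", 3)])  # hypothetical (name, remaining_burst) entries

trace = []
while any(queues):
    level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
    name, remaining = queues[level].popleft()
    run = min(QUANTA[level], remaining)
    trace.append((name, level, run))
    remaining -= run
    if remaining > 0:
        # Used its full quantum without finishing: demote to the next lower queue (if any).
        queues[min(level + 1, len(queues) - 1)].append((name, remaining))

print(trace)   # (process, queue level, time run) in dispatch order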
Fair-share scheduling Fair-share scheduling is a scheduling strategy for computer operating systems in which the CPU usage is equally distributed among system users or groups, as opposed to equal distribution among processes.
Lottery scheduling: In lottery scheduling, processes are each assigned some number of lottery tickets, and the scheduler draws a random ticket to select the next process. The distribution of tickets need not be uniform; granting a process more tickets gives it a relatively higher chance of selection. This technique can be used to approximate other scheduling algorithms, such as shortest job next and fair-share scheduling. Lottery scheduling solves the problem of starvation: giving each process at least one lottery ticket guarantees that it has a non-zero probability of being selected at each scheduling operation.
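A minimal Python sketch of the lottery draw, with hypothetical ticket counts; because every process holds at least one ticket, each has a non-zero chance of being selected.

import random

# Hypothetical ticket allocation: more tickets -> higher chance of being picked.
tickets = {"P1": 10, "P2": 5, "P3": 1}

def draw_next():
    # Draw one ticket uniformly at random across all processes' tickets.
    winner = random.randrange(sum(tickets.values()))
    for name, count in tickets.items():
        if winner < count:
            return name
        winner -= count

print(draw_next())   # P1 most often, but P2 and P3 are still possible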