Operating System Definition and Function
A computer system comprises hardware and software. Hardware understands only machine code (sequences of 0s and 1s), which is meaningless to an ordinary user. We therefore need a system that acts as an intermediary and manages all the processes and resources present in the system. An Operating System can be defined as an interface between the user and the hardware. It is responsible for process execution, resource allocation, CPU management, file management, and many other tasks. The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner.
Structure of a Computer System
A computer system consists of:
- Users (people who use the computer)
- Application programs (compilers, databases, games, video players, browsers, etc.)
- System programs (shells, editors, compilers, etc.)
- Operating system (a special program which acts as an interface between the user and the hardware)
- Hardware (CPU, disks, memory, etc.)
What does an Operating System do?
- Process Management
- Process Synchronization
- Memory Management
- CPU Scheduling
- File Management
- Security
Functions of Operating System
- Memory management: it keeps track of the files saved in the main memory and secondary storage of the computer
- Booting and rebooting: whenever a computer is turned on, the operating system automatically starts to work, so the boot process is an important function of the OS
- It provides a user interface
- It manages the basic peripheral devices
- Using the password protection option of an operating system, the data in the device can be kept secure
- It coordinates between the software and the user
- Easy navigation and organisation of files and programs are managed by the OS
- Any program that needs to run on the system is executed through the operating system
- Errors and bugs that occur while a program runs are detected and reported through the operating system
Components of an Operating System
There are two basic components of an operating system:
- Shell
- Kernel

Shell
The shell is the outermost layer of the operating system, and it handles the interaction with the user. Its main task is to manage the interaction between the user and the OS: it accepts input from the user, interprets that input for the OS, and handles the output from the OS. In this way, it works as the channel of communication between the user and the OS.
Kernel
The kernel is the core component of the operating system. The rest of the components depend on the kernel for the essential services provided by the operating system. The kernel is the primary interface between the operating system and the hardware.

Functions of Kernel
- It controls system calls.
- It handles I/O management.
- It manages applications, memory, etc.
Types of Operating System
1. Batch Operating System
- There is no direct communication between the user and the computer
- An intermediary, the operator, distributes the work into batches and groups similar jobs together
- Multiple users can use it
- It can easily manage a large amount of work

2. Real-Time Operating System
- It has a data processing system
- The processing time between the user's command and the output is very small
- Used in fields where the response needs to be quick and rapid

3. Time-Sharing Operating System
- Multiple people at various terminals can use a program at the same time
- The main motive is to minimize the response time
4. Distributed Operating System
- Two or more systems are connected to each other, and a user can open files that are not present on their own system but on other devices connected to the network
- Its usage has increased over the years
- Multiple central processors are used to serve real-time applications
- Failure of one system does not affect the other systems connected to the network

5. Embedded Operating System
- These special operating systems are built into larger systems
- They are generally limited to a single specific function, as in an ATM

6. Network Operating System
- There is one main server connected to client machines
- Management of files, processing of data, access to shared files, etc. are performed over this network
- It is also a secure operating system for working with multiple users

7. Mobile Operating System
- With advances in technology, smartphones are now released with an operating system
- They are designed so that a small device can work efficiently
System Calls in Operating System
The interface between a process and the operating system is provided by system calls. In general, system calls are available as assembly language instructions, and they are documented in the manuals used by assembly-level programmers. A system call is usually made when a process running in user mode requires access to a resource; the process then requests the kernel to provide the resource via the system call.
System calls are required in the following situations:
- Creating or deleting files in a file system
- Reading from and writing to files
- Creating and managing new processes
- Network connections, including sending and receiving packets
- Accessing hardware devices such as a printer or scanner
Types of System Calls
- Process Control: these system calls deal with processes, such as process creation, process termination, etc.
- File Management: these system calls are responsible for file manipulation, such as creating a file, reading a file, writing into a file, etc.
- Device Management: these system calls are responsible for device manipulation, such as reading from and writing into device buffers.
- Information Maintenance: these system calls handle information and its transfer between the operating system and the user program.
- Communication: these system calls are useful for interprocess communication; they also deal with creating and deleting a communication connection.
Examples of System Calls (Windows vs Linux)

Category | Windows | Linux
Process Control | CreateProcess(), ExitProcess(), WaitForSingleObject() | fork(), exit(), wait()
File Management | CreateFile(), ReadFile(), WriteFile(), CloseHandle() | open(), read(), write(), close()
Device Management | SetConsoleMode(), ReadConsole(), WriteConsole() | ioctl(), read(), write()
Information Maintenance | GetCurrentProcessID(), SetTimer(), Sleep() | getpid(), alarm(), sleep()
Communication | CreatePipe(), CreateFileMapping(), MapViewOfFile() | pipe(), shmget(), mmap()
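On Linux, Python's os module exposes thin wrappers over several of the system calls in the table, so the categories can be exercised directly. A minimal sketch (the file path is an arbitrary choice for the example):

```python
import os

# Information maintenance: getpid()
pid = os.getpid()

# File management: open(), write(), read(), close()
path = "/tmp/syscall_demo.txt"
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"written via raw system calls\n")
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)

# Communication: pipe()
r, w = os.pipe()
os.write(w, b"hello")
msg = os.read(r, 5)
os.close(r)
os.close(w)
```

Each os.* call here traps into the kernel via the corresponding Linux system call, unlike higher-level wrappers such as Python's built-in open(), which add buffering in user space.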
Types of Operating System Structures
- Simple/Monolithic Structure
- Micro-Kernel Structure
- Hybrid-Kernel Structure
- Layered Structure
- Modular Structure
- Virtual Machine
Simple/Monolithic Structure
Such operating systems do not have well-defined structures and are small, simple, and limited. The interfaces and levels of functionality are not well separated. MS-DOS is an example of such an operating system.
Micro-Kernel Structure
A microkernel is an operating system structure that aims to keep the kernel as small and lightweight as possible. It provides only essential services, such as process scheduling and inter-process communication, while moving most non-essential services, like device drivers, into user space.

Hybrid-Kernel Structure
A hybrid-kernel structure is a combination of the monolithic-kernel and micro-kernel structures. It combines the properties of both to create a more practical approach: the speed and design of a monolithic kernel together with the modularity and stability of a microkernel.
Layered Structure
An OS can be broken into pieces while retaining much more control over the system. In this structure, the OS is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user interface. The layers are designed so that each layer uses only the functions of the lower-level layers. This simplifies debugging: if the lower-level layers have already been debugged and an error occurs, the error must be in the layer currently being tested. UNIX is an example of this structure.
Modular Structure
It is considered one of the best approaches for an OS. It involves designing a modular kernel: the kernel has only a set of core components, and other services are added as dynamically loadable modules, either at boot time or at runtime. It resembles the layered structure in that each module has defined and protected interfaces, but it is more flexible than a layered structure because a module can call any other module. Solaris is an example of this structure.
VMs (Virtual Machines)
A virtualization-based kernel, also known as a hypervisor, is an operating system structure that enables multiple operating systems to execute concurrently on the same hardware. A virtual machine (VM) is a virtual environment that functions as a virtual computer system with its own CPU, memory, network interface, and storage, created on a physical hardware system. Multiple operating systems share the same hardware, with resources partitioned between the virtual machines. Benefits include:
- Separate security and configuration identity for each VM
- The ability to move virtual machines between physical host computers as holistically integrated files
Inter-Process Communication (IPC)
A process can be of two types: an independent process or a co-operating process. Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions; the communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other through two models:
- Shared memory
- Message passing
Message Passing Method
Shared Memory Method
Unit II - Process Synchronization
Critical Section Problem in OS
The critical section is the part of a program that accesses shared resources. A shared resource may be any resource in a computer, such as a memory location, a data structure, the CPU, or an I/O device. The critical section must not be executed by more than one process at the same time, so the operating system faces the difficulty of deciding when to allow and disallow processes from entering the critical section. The critical section problem is about designing a set of protocols which ensure that a race condition among the processes never arises.
Mutex & Semaphores
A mutex is a special binary semaphore that synchronizes access to shared resources like memory or I/O. It is a locking mechanism: when multiple threads want to access the critical section, the mutex allows only one thread at a time into it. A mutex thus ensures that the code in the critical section (which uses shared resources) is executed by only a single thread at a time.
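The effect of a mutex can be seen in a minimal sketch using Python's threading.Lock (thread counts and loop sizes are arbitrary for the example): four threads increment a shared counter, and the lock makes the read-modify-write in the critical section atomic, so no updates are lost.

```python
import threading

counter = 0
mutex = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with mutex:        # only one thread at a time in the critical section
            counter += 1   # read-modify-write on the shared resource

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 4 * 100_000; without the lock, updates could be lost
```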
wait (mutex);   // lock
..... Critical Section .....
signal (mutex); // unlock

What is the Producer-Consumer Problem?
The producer-consumer problem is a classic multi-process synchronization problem. It describes two processes, the producer and the consumer, that share a common fixed-size buffer and use it as a queue. The producer's job is to generate data, put it into the buffer, and start again. At the same time, the consumer consumes the data (i.e., removes it from the buffer), one piece at a time.
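The standard semaphore solution can be sketched with Python's threading module (buffer size and item count are arbitrary for the example): an "empty" semaphore counts free slots, a "full" semaphore counts filled slots, and a mutex protects the buffer itself.

```python
import threading
from collections import deque

BUFFER_SIZE = 3
buffer = deque()
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts filled slots
mutex = threading.Lock()                  # protects the buffer itself
consumed = []

def producer():
    for item in range(10):
        empty.acquire()        # wait(empty): block if the buffer is full
        with mutex:
            buffer.append(item)
        full.release()         # signal(full): one more item available

def consumer():
    for _ in range(10):
        full.acquire()         # wait(full): block if the buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()        # signal(empty): one more slot free

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

Because the producer can never overtake the free slots and the consumer can never overtake the filled slots, the buffer never overflows or underflows, and the consumer receives items in the order they were produced.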
Semaphore
A semaphore is a non-negative integer variable that is shared between various threads. A semaphore works on a signaling mechanism: a thread waiting on a semaphore can be signaled by another thread. A semaphore uses two atomic operations for process synchronisation:
- Wait (P)
- Signal (V)
Semaphores are simple integer variables used to coordinate the activities of multiple processes in a computer system. They are used to enforce mutual exclusion, avoid race conditions, and implement synchronization between processes. A semaphore provides two operations: wait (P) and signal (V). The wait operation decrements the value of the semaphore, and the signal operation increments it. When the value of the semaphore is zero, any process that performs a wait operation is blocked until another process performs a signal operation.
- Race condition: occurs when a shared resource is accessed by multiple processes or threads concurrently, leading to unpredictable behavior.
- Critical section: the section of code where shared resources are accessed. Only one process or thread should be allowed to execute the critical section at a time.
- Deadlock: a situation in which two or more processes hold resources and do not release them, each waiting for a resource held by another. Deadlock prevention and detection mechanisms are critical in process synchronization.
- Starvation: one process is unable to access resources because other processes hold onto them for extended periods, often due to priority scheduling or unfair resource allocation.
Types of Semaphores
Semaphores are of two types:
- Binary semaphore: also known as a mutex lock. It can have only two values, 0 and 1, and its value is initialized to 1. It is used to implement solutions to critical section problems with multiple processes.
- Counting semaphore: its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.
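A counting semaphore can be sketched as a pool of identical resource instances (the counts here are arbitrary for the example): the semaphore is initialized to 2, so even with six threads competing, at most two are ever inside the resource-using section at once.

```python
import threading

pool = threading.Semaphore(2)   # counting semaphore: 2 identical instances
lock = threading.Lock()         # protects the bookkeeping counters
in_use = 0
peak = 0

def use_resource():
    global in_use, peak
    pool.acquire()              # wait (P): blocks when both instances are taken
    with lock:
        in_use += 1
        peak = max(peak, in_use)
    # ... use the resource instance here ...
    with lock:
        in_use -= 1
    pool.release()              # signal (V): hand the instance back

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds 2: the semaphore capped concurrent access
```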
Classical Problems of Synchronization with Semaphore Solutions
These problems are used for testing nearly every newly proposed synchronization scheme. The following are considered classical problems of synchronization:
1. Bounded-Buffer (Producer-Consumer) Problem
2. Dining-Philosophers Problem
3. Readers and Writers Problem
4. Sleeping Barber Problem
Dining-Philosophers Problem
The dining philosophers problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its two neighbours, but not by both at once. The problem involves allocating limited resources to a group of processes in a deadlock-free and starvation-free manner.
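One classic deadlock-free solution is resource ordering: every philosopher picks up the lower-numbered chopstick first, which breaks the circular wait. A minimal sketch with Python threads (K = 5 and the meal count are arbitrary for the example):

```python
import threading

K = 5
chopsticks = [threading.Lock() for _ in range(K)]
meals = [0] * K

def philosopher(i):
    left, right = i, (i + 1) % K
    # Resource ordering: always pick the lower-numbered chopstick first.
    # This breaks the circular wait condition, so deadlock cannot occur.
    first, second = min(left, right), max(left, right)
    for _ in range(3):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1   # eat while holding both chopsticks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(K)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every philosopher finished all 3 meals: no deadlock occurred
```

If every philosopher instead grabbed the left chopstick first, all K could hold one chopstick and wait forever for the other; the ordering rule makes philosopher K-1 compete for chopstick 0 first, so at least one neighbour can always proceed.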
Sleeping Barber Problem
A barber shop has one barber, one barber chair, and N chairs for waiting. When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in. While the barber is cutting hair, new customers take the empty seats to wait, or leave if there is no vacancy. This is the sleeping barber problem.
The problem is to write a program that coordinates the actions of the customers and the barber in a way that avoids synchronization problems such as deadlock or starvation. One solution is to use semaphores to coordinate access to the waiting chairs and the barber chair. The solution involves the following steps:
- Initialize two semaphores: one for the waiting chairs and one for the barber chair. The waiting-chairs semaphore is initialized to the number of chairs, and the barber-chair semaphore is initialized to zero.
- A customer acquires the waiting-chairs semaphore before taking a seat in the waiting room. If no chair is available, the customer leaves.
- When the barber finishes cutting a customer's hair, he releases the barber-chair semaphore and checks whether any customers are waiting. If there are, he begins cutting the hair of the next customer in the queue.
- The barber waits (sleeps) when there are no customers waiting.
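The steps above can be sketched with Python threads. This is a simplified variant (chair and customer counts are arbitrary, and the waiting-chair check is done under a lock rather than a separate semaphore): a "customers" semaphore is the barber's sleep/wake signal, and a "barber_chair" semaphore makes each seated customer wait for his haircut.

```python
import threading

CHAIRS = 3
NUM_CUSTOMERS = 5

customers = threading.Semaphore(0)     # barber sleeps here until a customer arrives
barber_chair = threading.Semaphore(0)  # each customer waits here for a haircut
mutex = threading.Lock()               # protects the shared counters
shop_closed = threading.Event()

waiting = 0
served = 0
turned_away = 0

def barber():
    global waiting, served
    while True:
        customers.acquire()            # sleep until woken by a customer
        with mutex:
            if shop_closed.is_set() and waiting == 0:
                return                 # closing-time signal, nobody left to serve
            waiting -= 1
        barber_chair.release()         # call the next customer to the chair
        with mutex:
            served += 1                # haircut done

def customer():
    global waiting, turned_away
    with mutex:
        if waiting < CHAIRS:
            waiting += 1
            customers.release()        # take a seat and wake the barber
        else:
            turned_away += 1           # no free chair: leave the shop
            return
    barber_chair.acquire()             # wait for the haircut

b = threading.Thread(target=barber)
b.start()
threads = [threading.Thread(target=customer) for _ in range(NUM_CUSTOMERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
shop_closed.set()
customers.release()                    # wake the barber so he can go home
b.join()
```

Every customer either sits (and is eventually served) or leaves immediately, so neither the barber nor the customers can deadlock; with 3 chairs and 5 near-simultaneous arrivals, at most 2 customers are turned away.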
The Need for a CPU Scheduling Algorithm
CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main goal of CPU scheduling is to ensure that whenever the CPU would otherwise remain idle, the OS has selected one of the processes available in the ready queue. In multiprogramming, if the long-term scheduler picks mostly I/O-bound processes, then the CPU remains idle most of the time. The task of the operating system is to optimize the utilization of resources. If most of the running processes change their state from running to waiting, there may even be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS schedules jobs to get optimal utilization of the CPU and to avoid the possibility of deadlock.
Objectives of a Process Scheduling Algorithm
- Maximum CPU utilization: keep the CPU as busy as possible
- Fair allocation of the CPU
- Maximum throughput: the number of processes that complete their execution per time unit should be maximized
- Minimum turnaround time: the time taken by a process to finish execution should be as small as possible
- Minimum waiting time: a process should not starve in the ready queue
- Minimum response time: the time until a process produces its first response should be as small as possible
Completion, Turn Around, Response and Waiting Times
Terminologies Used in CPU Scheduling
- Arrival Time (AT): the time at which the process arrives in the ready queue.
- Completion Time (CT): the time at which the process completes its execution.
- Burst Time (BT): the CPU time required by a process.
- Turn Around Time (TAT): the difference between completion time and arrival time.
  Turn Around Time = Completion Time - Arrival Time
- Waiting Time (WT): the difference between turnaround time and burst time.
  Waiting Time = Turn Around Time - Burst Time
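The two formulas can be checked on a hypothetical process (the numbers are made up for illustration): a process that arrives at t=2, needs 4 units of CPU, and finishes at t=10.

```python
# Hypothetical process: arrives at t=2, needs 4 units of CPU, finishes at t=10
arrival_time = 2
burst_time = 4
completion_time = 10

turn_around_time = completion_time - arrival_time   # 10 - 2 = 8
waiting_time = turn_around_time - burst_time        # 8 - 4 = 4
```

The waiting time (4 units) is exactly the time the process spent in the ready queue rather than executing.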
There are mainly two types of scheduling methods:
- Preemptive scheduling: the CPU can be taken away from a running process, for example when a process switches from the running state to the ready state or from the waiting state to the ready state.
- Non-preemptive scheduling: the CPU is released only when a process terminates or switches from the running state to the waiting state.
Types of CPU Scheduling Algorithms
- First Come First Serve
- Shortest Job First (pre-emptive and non-pre-emptive)
- Priority Scheduling
- Round Robin Scheduling
First Come First Serve (FCFS)
- Jobs are executed on a first come, first served basis.
- It is a non-preemptive scheduling algorithm.
- Easy to understand and implement.
- Its implementation is based on a FIFO queue.
- Poor in performance, as the average waiting time is high.
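An FCFS scheduler can be sketched in a few lines (the workload below is a made-up example): processes run to completion in arrival order, and CT, TAT, and WT follow from the formulas above.

```python
def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time); FIFO by arrival."""
    schedule = {}
    time = 0
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at)   # CPU may sit idle until the process arrives
        time += bt             # run to completion (non-preemptive)
        tat = time - at        # Turn Around Time = CT - AT
        schedule[name] = {"CT": time, "TAT": tat, "WT": tat - bt}
    return schedule

# Hypothetical workload
result = fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)])
```

Note how P3, despite needing only 1 unit of CPU, waits behind the longer jobs that arrived first; this "convoy effect" is why FCFS has a high average waiting time.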
Shortest Job Next (SJN)
- This is also known as Shortest Job First (SJF).
- It can be either non-preemptive or preemptive (the preemptive variant is known as Shortest Remaining Time First).
- Best approach to minimize waiting time.
- Easy to implement in batch systems where the required CPU time is known in advance.
- Impossible to implement in interactive systems where the required CPU time is not known: the processor must know in advance how much time the process will take.
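A sketch of the non-preemptive variant (the workload is a made-up example): at each decision point, among the processes that have already arrived, the one with the shortest burst is run to completion.

```python
def sjf(processes):
    """Non-preemptive SJF: among arrived processes, run the shortest burst."""
    remaining = list(processes)   # (name, arrival_time, burst_time)
    time = 0
    schedule = {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                           # CPU idle: jump to next arrival
            time = min(p[1] for p in remaining)
            continue
        name, at, bt = min(ready, key=lambda p: p[2])  # shortest burst first
        time += bt                              # run to completion
        tat = time - at
        schedule[name] = {"CT": time, "TAT": tat, "WT": tat - bt}
        remaining.remove((name, at, bt))
    return schedule

result = sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
```

Here P1 runs first because it is alone at t=0; when it finishes at t=7, the 1-unit job P3 jumps ahead of the two 4-unit jobs, which is exactly how SJF minimizes average waiting time.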
Priority Based Scheduling
- Priority scheduling in its simplest form is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems (preemptive variants also exist).
- Each process is assigned a priority. The process with the highest priority is executed first, and so on.
- Processes with the same priority are executed on a first come, first served basis.
- Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
Given: a table of processes with their arrival time, execution time, and priority. Here we consider 1 to be the lowest priority.
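A non-preemptive sketch under that convention (higher number = higher priority; the process table is a made-up example):

```python
def priority_scheduling(processes):
    """Non-preemptive priority scheduling.
    Higher number = higher priority (1 is the lowest, as above)."""
    remaining = list(processes)   # (name, arrival_time, burst_time, priority)
    time = 0
    order = []                    # execution order of the processes
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                           # CPU idle until next arrival
            time = min(p[1] for p in remaining)
            continue
        # Highest priority first; FCFS (arrival time) breaks ties
        ready.sort(key=lambda p: (-p[3], p[1]))
        name, at, bt, pr = ready[0]
        time += bt                              # run to completion
        order.append(name)
        remaining.remove((name, at, bt, pr))
    return order

run_order = priority_scheduling([("P1", 0, 3, 2), ("P2", 1, 4, 5),
                                 ("P3", 2, 2, 1), ("P4", 3, 1, 4)])
```

P1 runs first only because it is alone at t=0; once it finishes, the scheduler always picks the highest-priority ready process, so the lowest-priority P3 runs last, illustrating how starvation can arise for low-priority processes.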
Round-Robin Scheduling
- It is a preemptive CPU scheduling algorithm.
- The round-robin algorithm is based on the time-sharing technique: each process gets the CPU for a fixed time quantum, after which it is preempted and moved to the back of the ready queue.
- Round-robin scheduling is one of the simplest and oldest algorithms.
- It responds to an event within a bounded time, which suits interactive and real-time use.
- Round robin is widely used in traditional operating systems.
Example (time quantum = 2):

Process No | Arrival Time (AT) | Burst Time (BT)
P1 | 0 | 5
P2 | 1 | 4
P3 | 2 | 2
P4 | 4 | 1

Process No | AT | BT | Completion Time (CT) | Turn Around Time (TAT) | Waiting Time (WT) | Response Time (RT)
P1 | 0 | 5 | 12 | 12 | 7 | 0
P2 | 1 | 4 | 11 | 10 | 6 | 1
P3 | 2 | 2 | 6 | 4 | 2 | 2
P4 | 4 | 1 | 9 | 5 | 4 | 4

Total Turn Around Time = 31 ms; Average Turn Around Time = 31/4 = 7.75 ms
Total Waiting Time = 19 ms; Average Waiting Time = 19/4 = 4.75 ms
Total Response Time = 7 ms; Average Response Time = 7/4 = 1.75 ms
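The example above can be reproduced with a small round-robin simulator, assuming a time quantum of 2 and the convention that a process arriving during a time slice joins the ready queue ahead of the process just preempted.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, arrival_time, burst_time)."""
    remaining = {name: bt for name, at, bt in processes}
    first_run = {}                     # first time each process got the CPU
    completion = {}
    queue = deque()
    pending = sorted(processes, key=lambda p: p[1])   # not yet arrived
    time = 0

    def admit(up_to):
        nonlocal pending
        while pending and pending[0][1] <= up_to:
            queue.append(pending[0][0])
            pending = pending[1:]

    admit(time)
    while queue or pending:
        if not queue:                  # CPU idle until the next arrival
            time = pending[0][1]
            admit(time)
        name = queue.popleft()
        first_run.setdefault(name, time)
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        admit(time)                    # arrivals during this slice queue first
        if remaining[name] > 0:
            queue.append(name)         # preempted: back of the ready queue
        else:
            completion[name] = time
    return completion, first_run

procs = [("P1", 0, 5), ("P2", 1, 4), ("P3", 2, 2), ("P4", 4, 1)]
completion, first_run = round_robin(procs, quantum=2)
```

Response time is first_run minus arrival time, which gives the 0, 1, 2, 4 values in the table; the completion times 12, 11, 6, 9 likewise match the CT column.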