Multiprocessing is the capability of a computer system to execute multiple processes simultaneously by utilizing two or more central processing units (CPUs) or cores. By dividing work into smaller, independent processes that run in parallel, it significantly improves performance for CPU-intensive tasks and makes better use of the available hardware resources.
Key aspects of multiprocessing include:
1. **Parallelism**: It enables true parallel execution by running multiple processes at the same time on different CPU cores.
2. **Process Isolation**: Each process runs independently in its own memory space, reducing the risk of memory corruption and enhancing reliability.
3. **Scalability**: Multiprocessing can scale efficiently with the number of available cores, making it suitable for modern multi-core processors.
However, multiprocessing also introduces challenges such as inter-process communication (IPC) overhead, synchronization issues, and the complexity of managing multiple processes.
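As a concrete sketch of the idea above, the following Python example (names and values are illustrative, not from the text) splits a CPU-bound task across several worker processes using the standard-library `multiprocessing.Pool`:

```python
from multiprocessing import Pool

def square(n):
    """CPU-bound task executed in a separate worker process."""
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # four worker processes
        results = pool.map(square, range(8))   # work divided among workers
    print(results)                             # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because each worker is a separate process with its own memory space, this also illustrates the process-isolation point: a crash or memory corruption in one worker cannot corrupt the others.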
Multiple-Level Queues Scheduling In the scheduling algorithms seen so far, there is only one ready queue. Multiple-level queue scheduling partitions the ready queue into several separate ready queues.
To run the operating system itself, certain processes are required; these are kept in the system-process queue. Processes that need a short response time are kept in the foreground queue, while processes that can tolerate a longer response time are kept in the background queue. Each ready queue has its own scheduling algorithm: foreground processes are scheduled using Round Robin (RR) for faster response, and background processes are scheduled using First-Come, First-Served (FCFS), where a longer response time is acceptable.
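The two-queue scheme above can be sketched as a small simulation. This is a hypothetical illustration (job names, burst times, and the quantum are invented): the foreground queue is served Round Robin with priority over the background queue, which runs FCFS:

```python
from collections import deque

def schedule(foreground, background, quantum=2):
    """Return the order of (job, slice) CPU allocations for two ready queues:
    foreground = RR with the given quantum, background = FCFS,
    and foreground always has priority over background."""
    fg, bg = deque(foreground), deque(background)
    timeline = []
    while fg or bg:
        if fg:                                  # foreground queue has priority
            name, left = fg.popleft()
            run = min(quantum, left)
            timeline.append((name, run))
            if left - run > 0:
                fg.append((name, left - run))   # re-queue unfinished job (RR)
        else:                                   # background runs only when fg is empty
            name, left = bg.popleft()
            timeline.append((name, left))       # FCFS: run to completion
    return timeline
```

For example, `schedule([("A", 3), ("B", 2)], [("X", 5)])` yields `[("A", 2), ("B", 2), ("A", 1), ("X", 5)]`: the background job X only gets the CPU once the foreground queue is empty.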
Multiple-processor scheduling A multiple-processor system consists of several CPUs, and the processing load is distributed as evenly as possible among the available processors. Such systems are classified as asymmetric or symmetric multiprocessing systems.
Asymmetric multiprocessing system One processor is made the master; it handles all scheduling decisions, I/O operations, resource allocation, and other system activities, while the other processors act as slaves and execute only user code. Symmetric multiprocessing system (SMP) Each processor does its own scheduling and selects a process to execute from the global ready queue.
Issues related to SMP 1. Processor Affinity Systems try to avoid migrating processes from one processor to another and instead try to keep a process running on the same processor. This is known as processor affinity. There are 2 types of affinity: 1. Soft affinity 2. Hard affinity
1. Soft affinity When an OS has a policy of attempting to keep a process running on the same processor but does not guarantee that it will do so, this is called soft affinity. 2. Hard affinity Some systems, such as Linux, provide system calls that support hard affinity, allowing a process to specify the set of processors on which it may run and thus preventing it from migrating to other processors.
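On Linux, hard affinity is exposed to Python through `os.sched_setaffinity`. The sketch below (Linux-specific; the affinity calls do not exist on all platforms, so the code checks for them first) pins the calling process to CPU 0:

```python
import os

def pin_to_cpu(cpu):
    """Restrict the calling process to the given CPU (hard affinity),
    returning the new allowed-CPU set, or None if the OS lacks the calls."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {cpu})     # 0 means "the current process"
        return os.sched_getaffinity(0)     # the set of CPUs now allowed
    return None                            # e.g. on Windows or macOS

print(pin_to_cpu(0))                       # e.g. {0} on Linux
```

After this call the Linux scheduler will never migrate the process to another core, which is exactly the hard-affinity guarantee described above.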
2. Load Balancing Load balancing keeps the workload evenly distributed across all processors in an SMP system. It is necessary only on systems where each processor has its own private queue of processes eligible to execute. There are two approaches to load balancing: 1. Push migration 2. Pull migration
Push migration involves a separate process that runs periodically (e.g. every 200 milliseconds) and moves processes from heavily loaded processors onto less loaded ones. Pull migration involves idle processors taking processes from the ready queues of busy processors.
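Push migration can be sketched as a periodic balancer that moves work from the busiest per-CPU run queue to the idlest one. This is a hypothetical simplification (one task moved per pass, queues modeled as lists):

```python
def push_migrate(run_queues):
    """One pass of push migration: move a task from the busiest per-CPU
    queue to the idlest one. Returns True if a task was migrated."""
    busiest = max(run_queues, key=len)
    idlest = min(run_queues, key=len)
    if len(busiest) - len(idlest) > 1:     # imbalance large enough to fix
        idlest.append(busiest.pop())       # migrate one task
        return True
    return False

queues = [["t1", "t2", "t3"], []]          # CPU 0 overloaded, CPU 1 idle
push_migrate(queues)
print(queues)                              # [['t1', 't2'], ['t3']]
```

Pull migration would be the mirror image: the code running on the idle CPU takes a task from a busy queue, rather than a central balancer pushing it.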
Thread Scheduling There are 2 types of threads: user-level threads, managed by a thread library, and kernel-level threads, managed by the OS. To execute a user-level thread on the CPU, it must be mapped to a kernel-level thread. With multiple threads, multiple tasks can be performed at the same time. Selecting which of the threads runs next is known as thread scheduling.
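As a small illustration of kernel-scheduled threads: in CPython, each `threading.Thread` is backed by a kernel-level thread, so the OS (not the interpreter) decides which thread runs when. The example below (values are illustrative) starts four threads and uses a lock to synchronize their access to shared state:

```python
import threading

results = []
lock = threading.Lock()

def worker(i):
    with lock:                 # synchronize access to the shared list
        results.append(i)

# Each Thread maps to a kernel-level thread; the OS schedules them.
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                   # wait for all threads to finish
print(sorted(results))         # [0, 1, 2, 3]
```

The order in which the workers actually ran is up to the OS scheduler, which is why the output is sorted before printing.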
Contention scope One distinction between user-level and kernel-level threads lies in how they are scheduled. PCS (process contention scope): competition for the CPU takes place among threads belonging to the same process. SCS (system contention scope): competition for the CPU takes place among all threads in the system. To decide which kernel-level thread to schedule onto a CPU, the kernel uses system contention scope.
Real-time CPU scheduling Real-time scheduling in operating systems refers to the mechanisms and algorithms used to manage tasks with time constraints in a timely and predictable manner. These tasks often have deadlines that must be met, and failure to meet them can lead to serious consequences. Soft real-time systems There is no guarantee that a process will be executed within a particular time limit. Ex: games, weather forecasting. Hard real-time systems A process must complete within its specified deadline. Ex: airbag deployment systems in cars, pacemakers.
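A common real-time policy (not named in the notes, added here as a hedged illustration) is Earliest-Deadline-First: always run the job whose deadline is nearest, and check which jobs miss. The job names, run times, and deadlines below are invented for the example:

```python
def edf(jobs):
    """Run (name, run_time, deadline) jobs earliest-deadline-first on one CPU;
    return the set of job names that miss their deadlines."""
    time, missed = 0, set()
    for name, run_time, deadline in sorted(jobs, key=lambda j: j[2]):
        time += run_time
        if time > deadline:
            missed.add(name)   # a hard real-time system cannot tolerate this
    return missed

jobs = [("airbag", 1, 2), ("logging", 3, 10), ("sensor", 2, 4)]
print(edf(jobs))               # set() -- every deadline is met
```

In a soft real-time system an occasional entry in the missed set only degrades quality (a dropped game frame); in a hard real-time system, a non-empty set means the schedule is unusable.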