What is a Thread?
In an operating system (OS), a thread is the smallest unit of execution within a process.
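To make this concrete, here is a minimal sketch using Python's standard `threading` module: two threads run inside one process and share its memory (the `results` list below is an illustrative name, not anything from the text).

```python
import threading

results = []

def worker(name):
    # Each thread runs this function independently while sharing
    # the process's memory (the `results` list here).
    results.append(name)

# Create and start two threads within the same process.
threads = [threading.Thread(target=worker, args=(f"thread-{i}",))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```

Because both threads append to the same list, the shared address space of the process is visible directly in the result.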
Thread Scheduling
Thread scheduling is the process of determining the order and timing of execution for the individual threads in a system. Each thread represents a sequence of instructions to be executed, and the scheduler decides which thread should run next and for how long.
Thread scheduling differs for user-level and kernel-level threads.
User-Level Threads: User-level threads are managed by a thread library, and the kernel is unaware of them. Thread creation, scheduling, and management are handled by the application or a user-level library, without involving the operating system. The thread library decides which thread of the process runs on which lightweight process (LWP) and for how long.
Kernel-Level Threads: Kernel-level threads are created and managed by the operating system kernel, which schedules them directly. Lightweight processes act as intermediaries between user-level threads and kernel-level threads. Example: when you open a new tab to visit a website, the browser creates a user-level thread (representing the tab) and associates it with a lightweight process. The lightweight process communicates with the operating system kernel through kernel-level threads to perform tasks such as fetching data over the network, managing local storage, and rendering graphics on the screen.
Contention Scope: The word contention here refers to the competition among user-level threads for access to kernel resources. The contention scope is defined by the application developer using the thread library.
Types:
Process contention scope: Contention takes place among threads within the same process. (Priority is specified by the application developer during thread creation.)
System contention scope: Contention takes place among all threads in the system.
Understanding the priority levels and scheduling algorithms is essential for effective thread management. Proper task allocation and CPU utilization are key factors in achieving optimal performance.
Thread scheduling can be either preemptive or non-preemptive.
Preemptive Scheduling: In preemptive scheduling, the operating system can interrupt a currently running thread and allocate the CPU to another thread. When a higher-priority thread becomes runnable, or the running thread's time slice expires, the scheduler initiates a context switch.
Non-Preemptive Scheduling: In non-preemptive (or cooperative) scheduling, a running thread continues execution until it completes its task, blocks, or voluntarily yields the CPU. The operating system does not forcibly interrupt the running thread. Non-preemptive scheduling can be simpler to implement but may lead to less responsive systems, especially when a high-priority thread must wait for a lower-priority thread to finish.
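The preemptive case can be illustrated with a small tick-by-tick simulation (a sketch, not a real OS scheduler; the workload names and the convention that a larger number means higher priority are assumptions for this example):

```python
def preemptive_priority(jobs):
    """Tick-by-tick sketch of preemptive priority scheduling.
    jobs: {name: (arrival, burst, priority)}; larger priority wins,
    and a newly arrived higher-priority job preempts the running one."""
    remaining = {n: b for n, (a, b, p) in jobs.items()}
    clock, timeline = 0, []
    while any(remaining.values()):
        # Among jobs that have arrived and still need CPU time,
        # always run the highest-priority one for this tick.
        ready = [n for n, (a, b, p) in jobs.items()
                 if a <= clock and remaining[n] > 0]
        if not ready:
            clock += 1
            continue
        run = max(ready, key=lambda n: jobs[n][2])
        timeline.append(run)
        remaining[run] -= 1
        clock += 1
    return timeline

# Hypothetical workload: name -> (arrival, burst, priority)
print(preemptive_priority({"low": (0, 3, 1), "high": (1, 2, 2)}))
```

In the output timeline, "low" starts first but is preempted as soon as "high" arrives at tick 1, then resumes once "high" has finished, which is exactly the context switch described above.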
Scheduling Algorithms
Scheduling algorithms such as Round Robin, Shortest Job First, and Multi-Level Feedback Queue offer different approaches to task dispatching. Each algorithm has its own impact on system performance and fairness.
Round Robin: This algorithm follows a simple, cyclic approach, allocating a fixed time slice (quantum) to each task in a circular manner. It ensures fairness by giving every task an equal opportunity to run, preventing any single task from monopolizing the CPU.
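The cyclic behavior can be sketched with a short simulation (the workload and quantum are hypothetical; the function returns when each task finishes):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate Round Robin; return the completion time of each task."""
    remaining = dict(burst_times)
    queue = deque(burst_times)          # tasks in arrival order
    clock = 0
    completion = {}
    while queue:
        task = queue.popleft()
        run = min(quantum, remaining[task])
        clock += run
        remaining[task] -= run
        if remaining[task] > 0:
            queue.append(task)          # quantum used up: back of the line
        else:
            completion[task] = clock    # task finished
    return completion

# Hypothetical workload: task -> burst time
print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
```

Note how the short task C finishes early even though it arrived last in the queue, because no task can hold the CPU longer than one quantum at a time.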
Shortest Job First: SJF prioritizes tasks based on their burst time, executing the shortest job first. This minimizes average waiting time, but predicting the exact burst time in practical systems is challenging, making SJF sensitive to inaccurate estimates.
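A minimal non-preemptive SJF sketch, assuming all tasks arrive at time 0 (workload values are illustrative):

```python
def sjf_waiting_times(burst_times):
    """Non-preemptive SJF: run the shortest burst first;
    return the waiting time of each task."""
    clock = 0
    waits = {}
    # Sort tasks by burst time (assumes all arrive at time 0).
    for task, burst in sorted(burst_times.items(), key=lambda kv: kv[1]):
        waits[task] = clock   # time spent waiting before the task starts
        clock += burst
    return waits

# Hypothetical workload: task -> burst time
waits = sjf_waiting_times({"A": 6, "B": 2, "C": 4})
print(waits)
print(sum(waits.values()) / len(waits))  # average waiting time
```

Running the shortest jobs first keeps the long job A waiting, but no other ordering of these three bursts yields a lower average waiting time.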
Multi-Level Feedback Queue: MLFQ operates with multiple priority levels, typically high, medium, and low. Each priority level has its own queue, and tasks move between these queues based on their behavior. A task from the highest-priority non-empty queue is given the CPU to execute. If the task yields or blocks before its time quantum expires, it stays at the same priority level or may be promoted to a higher one; if it uses up its entire quantum, it is demoted to a lower priority level.
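A simplified MLFQ simulation with demotion on quantum expiry (the three quanta and the two example tasks are assumptions for illustration; real MLFQ implementations also handle promotion and periodic priority boosts, which are omitted here):

```python
from collections import deque

def mlfq(burst_times, quanta=(2, 4, 8)):
    """Sketch of a Multi-Level Feedback Queue with demotion on quantum
    expiry. quanta[i] is the time slice of priority level i (0 = highest)."""
    remaining = dict(burst_times)
    queues = [deque() for _ in quanta]
    queues[0].extend(burst_times)       # every task starts at the top level
    clock, finish = 0, {}
    while any(queues):
        # Pick the highest non-empty priority level.
        level = next(i for i, q in enumerate(queues) if q)
        task = queues[level].popleft()
        run = min(quanta[level], remaining[task])
        clock += run
        remaining[task] -= run
        if remaining[task] == 0:
            finish[task] = clock
        else:
            # Used its full quantum: demote (bottom level acts as round robin).
            queues[min(level + 1, len(queues) - 1)].append(task)
    return finish

# Hypothetical workload: a short interactive task and a long batch task
print(mlfq({"interactive": 1, "batch": 10}))
```

The short interactive task finishes at the top level almost immediately, while the CPU-bound batch task sinks through the levels, which is the behavior MLFQ is designed to produce.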
Real-Time Scheduling
Rate Monotonic (RM) and Earliest Deadline First (EDF) are popular real-time scheduling algorithms that ensure timely task execution. RM assigns static priorities based on task periods (shorter period, higher priority), while EDF dynamically runs the task whose deadline is nearest.
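The EDF idea can be sketched in a few lines (a non-preemptive, single-release simplification; the job set is hypothetical, and real EDF reevaluates deadlines as jobs arrive):

```python
def edf_order(jobs):
    """Earliest Deadline First (non-preemptive sketch): dispatch jobs in
    order of nearest absolute deadline.
    jobs: list of (name, burst, deadline); all released at time 0 here."""
    pending = sorted(jobs, key=lambda j: j[2])  # nearest deadline first
    clock, schedule, missed = 0, [], []
    for name, burst, deadline in pending:
        clock += burst
        schedule.append(name)
        if clock > deadline:
            missed.append(name)     # finished after its deadline
    return schedule, missed

# Hypothetical job set: (name, burst, deadline)
schedule, missed = edf_order([("A", 2, 10), ("B", 3, 4), ("C", 1, 6)])
print(schedule, missed)
```

Here every job meets its deadline, whereas running the jobs in arrival order (A first) would cause B to miss its deadline of 4.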
Thread Synchronization
Proper synchronization mechanisms such as mutexes and semaphores are essential for avoiding race conditions and ensuring data integrity. Synchronization mechanisms should also be designed to avoid deadlock situations.
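A classic example of the race condition a mutex prevents, using Python's `threading.Lock` (the counter and thread counts are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, this read-modify-write is a race condition:
        # two threads could read the same value and lose an update.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000 with the lock; may be less without it
```

The `with lock:` block makes each increment atomic with respect to the other threads; removing it can silently lose updates, which is exactly the data-integrity problem described above.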
Conclusion
Optimizing thread scheduling is essential for achieving system efficiency and responsiveness. By understanding scheduling algorithms and adapting to evolving workloads, we can ensure optimal resource utilization and performance.