Operating System Practice - Lecture 9.pptx

shodmonbakirov375 4 views 33 slides Oct 27, 2025

About This Presentation

This is one of the main topics in operating systems.


Slide Content

Operating System Topics: Locks, Critical Sections, Spinlocks, Harmony Context Switching, Concurrent Queues Lecture: 9 Instructor: Rabia Tahir

Overview Topics covered: - Locks and critical sections - Spinlocks and time distribution - Harmony context switching - Stop-and-go in locks - Compatibility and concurrent queues

What is Synchronization? Synchronization ensures correct ordering and shared data protection. Prevents race conditions and maintains consistency.

Why Locks are Needed Multiple threads accessing shared data can cause corruption. Locks enforce mutual exclusion to ensure atomic operations.

Identifying Locks - Mutex (Mutual Exclusion Lock) - Binary Semaphore - Spinlock - Read-Write Lock Used based on contention level and system design.

Critical Section with Locks Example: lock.acquire() critical_section() lock.release() Ensures only one thread runs the critical section at a time.
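The acquire/critical-section/release pattern above can be sketched in Python, using the standard threading.Lock as a minimal illustration (the shared counter and the thread/iteration counts are arbitrary choices):

```python
import threading

counter = 0
counter_lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with counter_lock:   # acquire on entry, release on exit
            counter += 1     # critical section: one thread at a time

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 — without the lock, lost updates would be possible
```

The `with` statement guarantees the lock is released even if the critical section raises an exception.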

Deadlock & Livelock Deadlock: two threads wait forever for each other's locks. Livelock: threads keep retrying but make no progress.

Spinlocks A thread repeatedly checks if a lock is available (busy waiting). Efficient for short waits and multi-core processors.

Spinlocks: Advantages - Low overhead for short operations - Avoids kernel context switch - Ideal for multi-core systems

Spinlocks: Disadvantages - CPU wastage while spinning - Poor for long waits - Can cause priority inversion

Time Distribution in Spinlocks Decide when to spin and when to sleep. Use backoff strategies such as exponential backoff to improve fairness. Exponential backoff: when multiple threads compete for the same lock, each failed attempt waits a little longer before retrying (e.g. wait 1 ms, then 2 ms, then 4 ms, ...). This reduces memory contention and gives every thread a fair chance.
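A minimal sketch of a spinlock with exponential backoff, assuming Python threads. The BackoffSpinlock class, the 1 ms starting delay, and the cap are illustrative choices, and a non-blocking threading.Lock stands in for the atomic test-and-set flag a real spinlock would use:

```python
import threading
import time

class BackoffSpinlock:
    """Spinlock sketch with exponential backoff on failed attempts."""
    def __init__(self):
        self._flag = threading.Lock()   # stands in for an atomic test-and-set flag

    def acquire(self):
        delay = 0.001                   # first failed attempt waits 1 ms
        while not self._flag.acquire(blocking=False):   # busy-wait: try to grab the flag
            time.sleep(delay)           # back off instead of hammering memory
            delay = min(delay * 2, 0.05)  # double the wait, capped

    def release(self):
        self._flag.release()

# Demo: two threads contend for the same spinlock.
lock = BackoffSpinlock()
count = 0

def work():
    global count
    for _ in range(200):
        lock.acquire()
        count += 1
        lock.release()

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each failed acquisition doubles the sleep, so heavily contended threads probe the lock less often, which is the fairness and contention benefit described above.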

Harmony Context Switching The Harmony model balances thread scheduling and CPU utilization. It prevents excessive switching and reduces lock contention. Every context switch must save the current thread's state and load the next thread's state, so frequent switching wastes CPU time.

Stop-and-Go Locks Threads spin for a while, then 'go to sleep' if the lock is still unavailable. Combining spinning and sleeping balances performance: spinning gives a fast response for short waits, while sleeping avoids wasting CPU time on long ones.
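The stop-and-go idea can be sketched as a hybrid lock. SpinThenSleepLock and its spin_limit parameter are hypothetical names, and a blocking threading.Lock stands in for the OS-level sleep a real implementation would use:

```python
import threading

class SpinThenSleepLock:
    """Hybrid lock sketch: spin briefly, then block."""
    def __init__(self, spin_limit=100):
        self._inner = threading.Lock()
        self._spin_limit = spin_limit

    def acquire(self):
        # Phase 1: spin — cheap if the current holder releases quickly.
        for _ in range(self._spin_limit):
            if self._inner.acquire(blocking=False):
                return
        # Phase 2: stop spinning and block, letting the OS put this thread to sleep.
        self._inner.acquire()

    def release(self):
        self._inner.release()

# Demo: two threads share the hybrid lock.
lock = SpinThenSleepLock()
total = 0

def bump():
    global total
    for _ in range(1000):
        lock.acquire()
        total += 1
        lock.release()

threads = [threading.Thread(target=bump) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Tuning spin_limit trades CPU waste (too much spinning) against latency (sleeping too eagerly).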

Binary Semaphore A binary semaphore is a synchronization mechanism used in operating systems and concurrent programming to control access to a shared resource. It can have only two values, 0 or 1, just like a binary signal: 1 → resource is available (unlocked); 0 → resource is in use (locked). It ensures mutual exclusion (mutex), i.e. only one process can access the critical section at a time.
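Python's threading.Semaphore initialized to 1 behaves as a binary semaphore; a minimal sketch of using it for mutual exclusion (the shared list and thread counts are arbitrary):

```python
import threading

sem = threading.Semaphore(1)   # binary semaphore: starts at 1 (unlocked)
shared = []

def append_items(items):
    for x in items:
        sem.acquire()          # 1 -> 0: resource now in use
        try:
            shared.append(x)   # critical section on the shared list
        finally:
            sem.release()      # 0 -> 1: resource available again

threads = [threading.Thread(target=append_items, args=(range(100),))
           for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Unlike a mutex, a semaphore has no notion of an owner: any thread may release it, which is why mutexes are usually preferred when strict ownership matters.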

Lock Compatibility Some locks (like read-write locks) allow multiple readers but only one writer. Choose locks based on concurrency requirements: depending on whether your program does more reads or more writes, you can pick a lock type that maximizes concurrency and efficiency.
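A read-write lock is not in Python's standard library; below is a minimal readers-preference sketch (RWLock is a hypothetical class, and a production version would also need to guard against writer starvation):

```python
import threading

class RWLock:
    """Readers-preference read-write lock sketch."""
    def __init__(self):
        self._readers = 0
        self._readers_lock = threading.Lock()  # protects the reader count
        self._writer_lock = threading.Lock()   # held by the writer, or by readers collectively

    def acquire_read(self):
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:        # first reader locks out writers
                self._writer_lock.acquire()

    def release_read(self):
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:        # last reader lets writers back in
                self._writer_lock.release()

    def acquire_write(self):
        self._writer_lock.acquire()       # exclusive access

    def release_write(self):
        self._writer_lock.release()

# Demo: two readers overlap, then a writer gets exclusive access.
rw = RWLock()
rw.acquire_read()
rw.acquire_read()      # multiple readers may hold the lock together
rw.release_read()
rw.release_read()
rw.acquire_write()     # writer proceeds once all readers have left
rw.release_write()
```

This matches the compatibility rule above: read/read is compatible, anything involving a write is exclusive.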

Creating a Concurrent Queue Concurrent queue: a data structure shared by multiple threads. One thread might add (enqueue) while another removes (dequeue). Goal: multiple threads can enqueue/dequeue safely. Use locks to ensure thread-safe operations; without them, threads might modify the same memory at the same time, causing errors.

Displaying a Concurrent Queue Visual: head and tail pointers. Head (dequeue), tail (enqueue). Locks may protect the head and tail separately, allowing enqueue and dequeue operations to proceed concurrently.

Example: Using a Queue Producer-Consumer Model: Producers enqueue tasks, consumers dequeue. Synchronize access using locks or semaphores.

Queue Implementation v1: Single Lock The simplest way to make a concurrent queue is to use one global lock that protects the entire queue. Before adding or removing items, a thread must acquire the lock; after finishing, it releases it. Advantages: easy to implement, and it ensures correctness with no data corruption. Disadvantages: only one thread can use the queue at a time. Even if one thread is adding (enqueue) and another is removing (dequeue), they must wait for each other, reducing efficiency and concurrency.

Code Example: Queue v1

    from threading import Lock

    class QueueV1:
        def __init__(self):
            self.lock = Lock()
            self.items = []

        def enqueue(self, x):
            with self.lock:
                self.items.append(x)

        def dequeue(self):
            with self.lock:
                return self.items.pop(0)

Performance of Queue v1 Low concurrency under high contention. Use profiling tools to measure throughput and latency.

Parallel Queue v2: Two Locks Use separate locks for enqueue and dequeue. Allows parallel operations on opposite ends of queue.

Code Example: Queue v2

    from threading import Lock

    class QueueV2:
        def __init__(self):
            self.head_lock = Lock()
            self.tail_lock = Lock()

        def enqueue(self, item):
            with self.tail_lock:
                enqueue_operation()   # placeholder from the slide

        def dequeue(self):
            with self.head_lock:
                dequeue_operation()   # placeholder from the slide
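Filling in the placeholder operations: the two-lock design is commonly realized as the Michael & Scott two-lock queue, in which a dummy node keeps the head and tail pointers independent so enqueue and dequeue never touch the same node. A minimal sketch (TwoLockQueue and _Node are illustrative names):

```python
import threading

class _Node:
    __slots__ = ("value", "next")
    def __init__(self, value=None):
        self.value = value
        self.next = None

class TwoLockQueue:
    """Michael & Scott two-lock queue sketch: a dummy node separates
    the head (dequeue side) from the tail (enqueue side)."""
    def __init__(self):
        dummy = _Node()
        self.head = dummy                 # dequeue side
        self.tail = dummy                 # enqueue side
        self.head_lock = threading.Lock()
        self.tail_lock = threading.Lock()

    def enqueue(self, item):
        node = _Node(item)
        with self.tail_lock:              # only enqueuers contend here
            self.tail.next = node
            self.tail = node

    def dequeue(self):
        with self.head_lock:              # only dequeuers contend here
            first = self.head.next
            if first is None:
                return None               # queue is empty
            self.head = first             # old dummy becomes garbage
            return first.value

# Demo: FIFO order is preserved.
q = TwoLockQueue()
for i in (1, 2, 3):
    q.enqueue(i)
```

Because the dummy node sits between head and tail, an enqueue and a dequeue can run fully in parallel, which is exactly the advantage the next slide lists.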

Advantages of Queue v2 - Improved throughput - Reduced contention - Ideal for multi-producer, multi-consumer systems

Testing Concurrent Structures Use stress tests and thread analyzers (ThreadSanitizer, Helgrind). Helgrind: a thread error detector (a tool for detecting synchronization errors in C, C++ and Fortran programs that use the POSIX pthreads threading primitives). ThreadSanitizer: a tool that detects data races; it consists of a compiler instrumentation module and a run-time library. Test for data races, deadlocks, and starvation.

When to Use Locks vs Lock-Free Locks: simpler but can block threads. Lock-free: faster but complex and harder to debug.

Practical Tips - Minimize lock duration - Prefer fine-grained locks - Avoid nested locks - Profile before optimizing

Case Study: Producer-Consumer Bounded buffer problem. Use semaphores or locks + condition variables for coordination. Two processes, a producer and a consumer, must share a common, fixed-size buffer. The producer generates data and adds it to the buffer, while the consumer removes data from the buffer for processing

Case Study: Producer-Consumer The Challenge: The processes must be synchronized to prevent data loss or corruption. The producer must wait if the buffer is full. The consumer must wait if the buffer is empty. Both processes cannot access the buffer at the same time.

Case Study: Producer-Consumer Producer Logic: Wait on the empty semaphore; if there are no empty slots, the producer will block. Acquire the mutex to get exclusive access. Add the item to the buffer. Release the mutex. Signal the full semaphore to indicate a new item is available. (Waiting on the semaphore before taking the mutex is essential: a producer that slept on the semaphore while holding the mutex would deadlock the consumer.)

Case Study: Producer-Consumer Consumer Logic: Wait on the full semaphore; if there are no items in the buffer, the consumer will block. Acquire the mutex to get exclusive access. Remove the item from the buffer. Release the mutex. Signal the empty semaphore to indicate a new slot is available. (As with the producer, the semaphore is waited on before the mutex is taken, so the consumer never sleeps while holding the mutex.)
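The producer and consumer steps can be sketched with Python's threading.Semaphore and Lock (the buffer size and item counts here are arbitrary); note that each side waits on its counting semaphore before taking the mutex:

```python
import threading
from collections import deque

BUFFER_SIZE = 5
buffer = deque()
mutex = threading.Lock()
empty = threading.Semaphore(BUFFER_SIZE)  # counts free slots
full = threading.Semaphore(0)             # counts items in the buffer

def producer(items):
    for item in items:
        empty.acquire()          # wait for a free slot BEFORE taking the mutex
        with mutex:
            buffer.append(item)  # exclusive access to the shared buffer
        full.release()           # signal: one more item available

out = []
def consumer(n):
    for _ in range(n):
        full.acquire()           # wait for an item BEFORE taking the mutex
        with mutex:
            out.append(buffer.popleft())
        empty.release()          # signal: one more free slot

p = threading.Thread(target=producer, args=(range(20),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
```

With a single producer and single consumer, items come out in the order they went in, and the buffer never exceeds BUFFER_SIZE items.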

Summary - Locks prevent race conditions - Choose lock type carefully - Harmony & spinlocks optimize performance - Concurrent queues improve parallelism

References 1. MIT OpenCourseWare - Locks & Synchronization 2. Stanford CS140 - Synchronization 3. UC Berkeley CS162 - Synchronization 4. Oxford CS Lecture Notes - Concurrency Control 5. Carnegie Mellon 15-213 - Parallel Programming