Lecture7.pptx

diasgabitov04 4 views 57 slides Nov 02, 2025


Slide Content

Inter-task communication

Outline Recap, concurrent programming issues, race condition, critical section, inter-task communication, mutex, semaphores, condition variables

Cloud to Edge A variety of processing platforms exists from cloud to edge. At the edge: billions of devices, including embedded systems.

Characteristics of Embedded Systems

Internet of Things Embedded systems are interconnected -> IoT

Resource-constrained devices Devices are resource-constrained, yet we need high efficiency and reliability. Thus, wise resource management is required.

Bare-metal programming vs RTOS

Concurrent programming Concurrent programming is a technique for expressing potential parallelism. It divides an overall computation into subcomputations that may be executed concurrently; that is, several computations execute during overlapping time periods.

Advantages of concurrent programming Concurrent programming offers several distinct advantages to programmers and application users. It improves the responsiveness of an application: with concurrent computation, the system appears to respond immediately to every user request, even while it is executing other expensive computation. It improves processor utilization: multiple tasks compete for processor time and keep the processor busy whenever a task is ready to run; if one task gets stuck, others can run. It also provides a convenient structure for failure isolation.

Sequential vs Concurrent vs Parallel programming Using sequential programming to manage multiple tasks forces you to run everything in a fixed loop (cyclic execution). This approach quickly becomes complex, hard to read, and difficult to maintain. Concurrent programming lets multiple tasks share the CPU by taking turns (multiplexing), improving responsiveness and structure. Note: Concurrent → many tasks appear to run together (one CPU). Parallel → many tasks actually run at the same time (multiple CPUs).

POSIX Threads A POSIX thread, often called a pthread, is a standardized programming interface for creating and managing threads, defined by the POSIX (Portable Operating System Interface) standard (specifically IEEE 1003.1c). It provides a portable, low-level API that allows developers to write multi-threaded programs that run on different UNIX-like operating systems (Linux, macOS, BSD, etc.) with minimal changes.

POSIX features A POSIX thread (pthread) is a lightweight execution unit within a process that runs concurrently with other threads of the same process. All threads in a process share the same address space, file descriptors, and global variables, but each has its own stack, registers, and thread ID.

Importance of POSIX threads Pthreads are the foundation for multi-threaded programming on UNIX/Linux. They are widely used in embedded Linux, real-time systems, and server applications. Many RTOS (like FreeRTOS and VxWorks) and even CMSIS-RTOS borrow ideas from pthreads. On Linux, pthreads are implemented by the Native POSIX Thread Library (NPTL) on top of kernel threads.

pthread.h Pthreads are defined as a set of C language programming types and procedure calls, implemented with a pthread.h header file and a thread library. Pthreads API routines are provided for thread management, mutexes, condition variables, and thread synchronization with read/write locks and barriers.

POSIX functions
Thread creation: pthread_create() — starts a new thread executing a given function.
Thread termination: pthread_exit() — ends a thread gracefully; other threads continue.
Thread synchronization: mutexes (pthread_mutex_t), condition variables, and semaphores prevent race conditions.
Thread joining: pthread_join() — one thread waits for another to finish.
Detaching: pthread_detach() — allows a thread to run independently without joining.
Thread attributes: pthread_attr_t — defines stack size, scheduling policy, etc.

Synchronization in Concurrent Programs (I) Concurrent programs must be designed so that  threads exchange information safely  without interfering with each other. Race conditions  occur when multiple threads access shared data simultaneously, leading to unpredictable results. Programs must control the  order (interleaving)  of execution through synchronization.

Synchronization in Concurrent Programs (II) RTOS kernels provide  synchronization objects  to manage shared access: Mutexes  – for mutual exclusion Semaphores  – for signaling and resource control Condition Variables  – for waiting on specific conditions Next, we’ll discuss  race conditions, critical sections, and synchronization mechanisms  used to solve them.

Race Conditions and Critical Sections In concurrent systems, multiple  threads/processes run simultaneously . Programmers  cannot control  when the OS scheduler preempts a task. Threads may be interrupted at  any instruction . This can cause  race conditions  — errors where results depend on timing or execution order.

Definition of a Race Condition A  race condition  occurs when: Two or more processes access  shared data , and The final result depends on the  interleaving  of their operations. The system’s behavior becomes  nondeterministic . Preventing races is a key goal of  synchronization  in RTOS design.

ATM Example (Setup) Scenario: Account balance = $1000 Process P1 (deposit + $200) Process P2 (withdraw – $200) Each process performs: Read balance Modify balance Write balance back If operations don’t overlap, balance remains  $1000 ✅ . If they interleave incorrectly, balance becomes  $800 ❌ .

How Race Occurs (Interleaving) Incorrect sequence: 1️⃣ P1 reads $1000 2️⃣ P2 reads $1000 3️⃣ P1 adds $200 → writes $1200 4️⃣ P2 subtracts $200 → writes $800 Result →  Lost update!  Both operations succeeded logically, but one overwrote the other. Cause:  no mutual exclusion during balance modification.

Critical Sections A  critical section  is a code block that  accesses shared data . Only  one task at a time  may enter a given critical section. While one task is inside, others must  wait . Prevents race conditions by  serializing access  to shared memory.

Best Practices for Critical Sections Keep critical sections  as short as possible . Perform only essential operations on shared data. Avoid blocking calls or infinite loops inside them. Most of a task’s work should be on  local (non-shared) data . Critical-section bugs can cause  deadlines to be missed  in real-time systems.

Synchronization Tools in RTOS Before entering a critical section, a task must  lock  a synchronization object. Common primitives: Mutexes  → Mutual exclusion Semaphores  → Resource counting / signaling Condition variables  → Wait / notify model RTOS kernels provide APIs to manage these safely and efficiently.

Mutexes Mutex  =  Mutual Exclusion Object Ensures  only one task  accesses a shared resource at a time. Used for protecting  global data  or shared I/O operations. Prevents  race conditions  by enforcing exclusive access.

Basic Mutex Operations
LOCK(mutex): blocks the calling task until the mutex becomes available, then locks it.
UNLOCK(mutex): releases the mutex so other tasks can acquire it.
Only the task that locks a mutex should unlock it. A mutex must be initialized before use.

Mutex in Concurrent Programs Each shared resource should have a  dedicated mutex . Tasks  lock the mutex  before using the shared resource. After finishing, they  unlock  it to let others proceed. This mechanism ensures that operations are  mutually exclusive  and safe from race conditions.

POSIX Mutex Functions The POSIX mutex functions return 0 on success, otherwise an error code. A mutex must be initialized with pthread_mutex_init() before use.

Example: Deposit and Withdraw Threads Two threads share a global balance variable. Both use the same mutex (my_mutex) to protect updates. Example flow: 1️⃣ deposit() locks my_mutex, updates balance, unlocks it. 2️⃣ withdraw() locks the same mutex before its update. This ensures only one thread modifies balance at any moment.

Important Programming Notes Critical section = small block of code protected by mutex. Avoid  I/O operations  (like  printf ) inside critical sections → slows execution. Always ensure both threads complete before destroying the mutex. Use  pthread_join()   to wait for threads before cleanup.

Condition Variables: Overview Used for  task synchronization based on data values . Work  together with mutexes  to coordinate thread behavior. Let a task  wait  until another task changes a shared variable. Common in problems like  Producer–Consumer  or  Bounded Buffer .

Why We Need Condition Variables A thread may enter a critical section but  cannot proceed  until another thread performs an action. Instead of busy-waiting, it can  sleep (wait)  on a condition variable. Another thread  signals  when the condition changes. This avoids CPU waste and allows  efficient synchronization .

Two Core Operations
WAIT(condition, mutex): suspends the task until another thread signals the condition.
SIGNAL(condition): wakes up one waiting task (if any).
Condition variables are global and must be protected by a mutex.

POSIX Condition Variable APIs
pthread_cond_wait(): waits on a condition; releases the mutex while waiting, reacquires it on wake-up.
pthread_cond_timedwait(): waits with a time limit; returns an error if the timeout expires.
pthread_cond_signal(): wakes one waiting thread.
pthread_cond_broadcast(): wakes all waiting threads.
All return 0 on success; non-zero = error.

Key Concept: Wait + Signal Interaction 1️⃣ Thread A locks mutex → checks shared data. 2️⃣ If data not ready → calls  pthread_cond_wait()  → releases mutex + sleeps. 3️⃣ Thread B modifies shared data → calls  pthread_cond_signal() . 4️⃣ Thread A wakes, re-locks mutex, continues execution.

Producer-Consumer Problem (I) The  Producer–Consumer Problem  (also called the  Bounded Buffer Problem ) demonstrates how  two or more threads share a common resource (buffer)  safely using synchronization. Producer threads  generate data and place it into a shared buffer. Consumer threads  remove and process data from that buffer. The buffer has a  finite capacity , so producers must stop when it’s full, and consumers must stop when it’s empty. This coordination requires  mutual exclusion  (no simultaneous access) and  condition-based synchronization (wait/signal when buffer state changes).

Producer-Consumer Problem (II) Producers  add data to a shared buffer. Consumers  remove data from the same buffer. Conditions to handle: Buffer full → producer waits. Buffer empty → consumer waits. pthread_cond_signal()   used to wake waiting threads.

Synchronization Logic
When count = buffer size, the buffer is full → the producer blocks (waits on a condition).
When count = 0, the buffer is empty → the consumer blocks (waits on a condition).
Both use the same mutex to protect access to the buffer, and condition variables to block or wake threads when the buffer state changes. Signaling ensures only one thread acts on the shared data at a time.

Key Takeaways Condition variables coordinate  state-based waiting . Must always be used  with a mutex . Help implement  efficient producer–consumer  and similar patterns. Prevent busy-waiting → save CPU time → better real-time performance.

Semaphores Semaphore  = synchronization primitive invented by  Edsger Dijkstra (1960s) . A semaphore is a  shared counter  used to control access to shared resources. Core idea: a semaphore represents the  number of available units  of a resource. Used to prevent  race conditions  and ensure  coordinated task execution .

Two Atomic Operations
P(sem) (Wait / Down): waits until the semaphore value > 0, then decrements it. If the value is 0, the task blocks.
V(sem) (Signal / Up): increments the semaphore value by 1, potentially unblocking a waiting task.
Key property: both operations are atomic (cannot be interrupted).

Example Operations on a semaphore initialized to 1. Start:  value = 1

Semaphore value = number of tickets available. P() ("wait/down"): if value > 0 → take 1 ticket (value--) and continue; if value = 0 → block (wait in line). V() ("signal/up"): if no one is waiting, put 1 ticket back (value++); if someone is waiting, wake exactly one waiter and hand them the ticket immediately → the woken thread finishes its pending P(). Net effect in that case: the value stays the same (the +1 from V and the -1 from the unblocked P cancel out). Initial value = 1 (one ticket in the bowl).

Example Action 1 A: P() → value 0, A continues A takes the only ticket. Value: 1 → 0. Action 2 A: P() → value 0, A blocks There are no tickets left. A tries to take another ticket, so A must  wait . Value stays 0.

Example Action 3 B: V() → value 0, A is unblocked B adds a ticket, but since A is waiting, that ticket is  handed directly to A . V would make value 1,  but  A’s pending P immediately consumes it back to 0. Net: value remains 0,  A becomes runnable  (its second P has now completed).

Example Action 4 A: V() → value 1 A returns one ticket. No one is waiting now, so the ticket goes back to the bowl. Value: 0 → 1.

Example Action 5 B: P() → value 0, B continues B takes the ticket. Value: 1 → 0.

Example Action 6 A: V() → value 1 A returns another ticket. No one is waiting at this moment, so value: 0 → 1. (This balances A’s earlier extra P in step 2.)

Example Action 7 B: V() → value 2 B returns its ticket. No waiters, so value: 1 → 2.

Takeaway This shows why balanced P/V pairs matter. Because A did two P()s but returned the tickets only later, the sequence can temporarily push the semaphore above its initial value if V()s are not paired carefully. With correct usage, the total P()s and V()s per resource should match: in a proper program, every task balances its own P() and V() calls (1 lock → 1 unlock). The duplicate V() actions in the table were included only to illustrate how the value changes, not to describe correct usage.

Summary: Why Inter-Task Communication? Multiple concurrent tasks = concurrent access to shared data Dangers: race conditions, inconsistent state, lost updates Solution:  controlled data passing  via RTOS primitives

Summary: Synchronization Tools

Summary A condition variable is like a signal or announcement mechanism between threads. It lets one thread wait (sleep) until another announces that a certain condition has changed, for example "resource available," "buffer not empty," or "data ready." The mutex ensures consistent access; the condition variable announces changes; a semaphore may still handle resource counting elsewhere in the system. A semaphore already has its own waiting queue: sem_wait() automatically blocks a thread if the counter is 0, and sem_post() automatically unblocks one waiting thread. So a semaphore needs no separate condition variable; it is a self-contained synchronization tool.

Summary: Mutex Lifecycle 1️⃣ Declare → pthread_mutex_t my_mutex; 2️⃣ Initialize → pthread_mutex_init(&my_mutex, NULL); 3️⃣ Use → pthread_mutex_lock() / pthread_mutex_unlock() 4️⃣ Synchronize threads → pthread_join() 5️⃣ Destroy → pthread_mutex_destroy(&my_mutex);

STM32 RTOS