Asynchronous programming is used wherever your application interacts with external entities, be it other threads or other processes. The world is asynchronous by its nature. Asynchronous literally means that anything can happen at any time, unpredictably. How do you write software through asynchronous programming? That is what we shall learn in this course. Big software projects of any reasonable size are asynchronous: a project leaves its synchronous boundaries the moment it starts integrating and communicating with external entities, or goes multithreaded.
Asynchronous programming is widely used in distributed systems, networking protocols, distributed databases, etc.
Prerequisites of this course:
- Thread synchronization concepts: mutexes and condition variables (any thread library, but we will be using pthreads)
- C or C++ is preferable, but if your favorite language is any other mainstream language, that is also fine. Borrow the concepts and implement them in your favorite language.
- Comfort with basic data structures such as linked lists
- Zeal to learn and excel beyond what the crowd is doing (DS/Algo/CP)
The end product of this course shall be a mini yet very powerful library which allows you to develop software through asynchronous programming.
Course Difficulty Level:
The course level is intermediate to advanced. Absolute beginners, please refrain from enrolling in the course.
Software Design Problems Addressed in this Course:
Problem 1. Simulating Concurrency in Single-Threaded Programs
Problem 2. Reducing Multi-threaded Programs to Single-Threaded
Problem 3. Work Deferrals
Problem 4. Asynchronous Communication
Problem 5. Queuing and Processing Incoming Network Packets
Size: 1.35 MB
Language: en
Added: May 13, 2024
Slides: 66 pages
Slide Content
Asynchronous Programming Design Patterns
Programming language: C/C++ (but the concepts are language agnostic). Prerequisites: thread synchronization (mutexes and condition variables), socket programming basics. Website: www.csepracticals.com
Agenda
In this course, we shall understand asynchronous programming, and the problems it solves, through a demo project. Prerequisites: multithreading, thread sync (mutexes, condition variables); C/C++ or any mainstream programming language.
Table of contents:
- Understanding the sync and async programming paradigms
- Understanding sync and async communication
- Goals of asynchronous programming
- Defining the project to practice asynchronous programming concepts
- Asynchronous programming through the Dispatch Queue / Event Loop: design and implementation
- Implementing various asynchronous design patterns
After doing this course you will:
- Understand where to apply async programming techniques, and why
- Say goodbye to forced multithreading
- Say goodbye to forced locking
- Build scalable system software
- Implement async communication
Async vs Sync: Example
Most of the code you have written so far is sync code; all code you write while practicing CP is sync code. Synchronous programming means performing steps in a known, definite order.
Synchronous way: measure ingredients; mix flour, eggs, sugar; heat the oven to the right temperature; bake the cake; wait for the cake to be ready; prepare the cake; eat.
Asynchronous way: measure ingredients; mix flour, eggs, sugar; heat the oven to the right temperature; bake the cake; do independent tasks instead of waiting; receive a notification that the cake is ready; prepare the cake; eat.
No waiting: do something else in the meantime. Asynchronous does not mean multithreading, but the two are closely tied.
Async vs Sync Communication
Conversation over the telephone is synchronous communication: you hear what the other person is saying; you wait until the other side has completed their sentence; when one speaks, the other actively listens; while you speak or listen, you do nothing else.
Conversation over email or chat is asynchronous communication: you shoot your email/message without knowing whether the other side is ready to receive or read it; you do not wait for the reply (at least not instantly); you are notified when a reply has been received.
Asynchronous communication means communication which happens "out of sync." It is almost always bad software design if you need to be blocked for some reason; however, blocking is justified in a few scenarios such as select() or fgets().
Asynchronous vs Multi-Threaded Programming
They tend to appear the same, but they are not. Multithreading is one form of asynchronous programming; asynchronous programming can be done with a single thread as well as with multiple threads. Multi-threading is about workers who work in parallel. Asynchronous programming is about a set of tasks to be finished. For example, an application may have:
Task 1: 10 network packets in a queue to be processed
Task 2: a timer callback pending execution
Task 3: a network packet to send to a remote machine
Task 4: a CLI input from the user to be processed
How is the application going to finish the pending tasks? In what order: in parallel or sequentially? And while the current task executes, more tasks may queue up.
Asynchronous vs Multi-Threaded Programming: Example
Task 1: calculate the sum of numbers 1..10. Task 2: calculate the multiplication of numbers 1..10.
Synchronous single-threaded: Thread 1 runs Sum[0,9], then Mul[0,9].
Synchronous multi-threaded: Thread 1 runs Sum[0,9] while Thread 2 runs Mul[0,9].
Asynchronous single-threaded: Thread 1 interleaves chunks of both tasks: Sum[0,3], Mul[0,5], Sum[4,6], Mul[6,9], Sum[7,9].
Asynchronous multi-threaded: Thread 1 interleaves Sum[0,3], Sum[4,6], Sum[7,9] while Thread 2 interleaves Mul[0,5], Mul[6,9].
Course Layout
I have developed this course on a fast track: I try to enforce learning by doing rather than spilling out theory. We will straightaway discuss one problem statement where asynchronous programming is essential; then we build an asynchronous solution to the problem, called an Event Loop or Dispatch Queue; then we apply that event-loop solution to solve various other problems (design patterns). I will use C to demonstrate the concepts, but the concepts are language agnostic. After doing this course, you will be in a position to design asynchronous systems.
Prerequisites: you must at least know how to implement the producer-consumer problem; if not, please learn thread synchronization basics before attempting this course.
No more theory, let's get our hands dirty.
Design and Implement the Event Loop: A Data Structure for Asynchronous Programming
Introducing Event Loops
An event loop is an async programming structure which schedules computation to be performed in the future (work deferral). It is also called an event dispatcher. Let us understand how an event loop works through its design and implementation. Once we finish the implementation, I will show you how to use the event loop to implement different asynchronous programming models. The output of this section is an event loop library.
Event Loop Internal Functioning
Application thread(s) submit computations via task_create_new_job(argx, Fnx); each computation (Fn + Arg) is queued in the task array, e.g. (F1, arg1), (F2, arg2), (F3, arg3), (F4, arg4), (F1, arg5).
The event loop is a separate, independent thread acting on its task array. It works like a machine: it picks up a computation (Fn + Arg) from the task array and places it on the execution stack. It serializes all computations and discharges them one by one. It continues to do so until the task array is empty, and resumes again when some application thread submits a new computation to the task array.
Event Loop Internal Functioning: Example
An application thread submits foo via task_create_new_job(arg, foo):

    foo(arg) {
        ...
        printf("this ");
        task_t *bar_task = task_create_new_job(arg1, bar);
        printf("is my fav ");
        ...
    }

    bar() { printf("bar"); }

Output: this is my fav bar
Sequence of events (C = computation, EL = event loop thread):
1. The application submits a new C to the EL.
2. C is queued in the task array as a task (foo, arg).
3. The EL is signaled.
4. The first task is removed from the task array.
5. The task is triggered; while running, it creates a new task (bar, arg1).
6. The first task completes.
7. The second task is removed from the task array.
8. The task is triggered.
9. The second task completes.
10. The EL suspends again.
Event Loop Design and Implementation: State Machine
The event loop thread starts in a suspended state (EV_DIS_IDLE). When the application submits a computation, the EL thread is signaled and resumes execution. While the task array is not empty, it dequeues a computation from the task array and triggers it (EV_DIS_TASK_FIN_WAIT). When the task array becomes empty, it suspends again.
Code Access
You can do this course on any POSIX-compliant operating system (Linux or macOS). I will execute and run all my code on Linux. Compiler used: gcc. Programming language: C/C++; if you are doing this course in another language, write equivalent code. You are advised to use GitHub to manage all your code; a 30-minute tutorial from YouTube suffices.
My setup: Visual Studio Code IDE, Ubuntu 20.04 LTS, VMware Workstation Player.
Code access: https://github.com/sachinites/AsyncProgramming (create your GitHub account and fork this repository).
Event Loop Design and Implementation: Data Structures (event_loop.h)
A computation is a function plus an argument (Fn + Arg):

    typedef void (*event_cbk)(void *);

    typedef struct task_ {
        event_cbk ev_cbk;
        void *arg;
        struct task_ *next;
        struct task_ *prev;
    } task_t;

    typedef enum {
        EV_DIS_IDLE,
        EV_DIS_TASK_FIN_WAIT,
    } EV_DISPATCHER_STATE;

    typedef struct event_loop_ {
        task_t *task_array_head;
        pthread_mutex_t ev_loop_mutex;
        EV_DISPATCHER_STATE ev_loop_state;
        pthread_cond_t ev_loop_cv;
        pthread_t *thread;
        task_t *current_task;
    } event_loop_t;
Event Loop Design and Implementation: APIs

    void event_loop_init(event_loop_t *el);  /* initialize the members of the event loop */
    void event_loop_run(event_loop_t *el);   /* run the event loop thread */
Write the following helper APIs in file event_loop.c:

    static task_t *event_loop_get_next_task_to_run(event_loop_t *el);
    static void event_loop_add_task_in_task_array(event_loop_t *el, task_t *new_task);
    static bool task_is_present_in_task_array(task_t *task);
The Event Loop Thread
Start with a stub thread function:

    static void *event_loop_thread(void *arg) {
        while (1) {
            printf("%s() running\n", __FUNCTION__);
            sleep(2);
        }
        return NULL;
    }

The application thread queues new tasks while the event loop thread dequeues the first task; therefore, enqueue/dequeue operations on the EL task array are critical sections and must be protected by the EL mutex.
Algorithm (repeated forever):
1. Lock the EL mutex.
2. Fetch a task from the task array.
3. If no task is found, suspend.
4. If a task is found, fire it.
5. Unlock the EL mutex.
Task Submission to the Event Loop Task Array

    task_t *task_create_new_job(event_loop_t *el, event_cbk cbk, void *arg);

- Create a new task_t object and initialize it.
- Lock the EL mutex.
- Add the task to the EL's task array.
- If the EL is in the IDLE state, signal it; if the EL is already busy, we are done.
- Unlock the EL mutex.

    void event_dispatcher_schedule_task(event_loop_t *el, task_t *task);
Event Loop: Lock-Free
Applications which use event loops are lock free (as long as the application logic is executed through tasks only). First, we need to understand, at the most fundamental level, why we need locks in a multi-threaded environment.
Why Do We Need Locking in a Multi-Threaded Environment?
To put it in one sentence: we need locking because threads context switch at arbitrary points in a program. We do not want other threads to ever see data structures in an inconsistent state; hence, we lock the DS.
Event Loop: Lock-Free (continued)
Applications which use event loops are lock free (as long as the application logic is executed through tasks only). The reason is obvious: if only the event loop thread executes application code, through tasks, then there is one and only one thread in the system executing that code, the event loop thread. So, no locks, because the system is uni-threaded.
But even in this uni-threaded system, tasks switch from T1 to T2 (the upload/download example). Does that mean we need locks or synchronization, since something similar to context switching is happening, namely task switching?
Answer: no. Task switching happens in a controlled fashion, unlike threads, which context switch in an uncontrolled manner at any arbitrary point, so we still do not need any locks. When a task terminates, it is the programmer's responsibility not to leave application data structures in an inconsistent state; why would any programmer return from a task in the middle of deleting a node from a DLL? A new task fired by the event loop will therefore always find the application data structures in a consistent state.
Event Loop: There Is No Parallelism
Concurrency vs parallelism: do you understand the difference? (Refer to Appendix B.) On multi-core systems, independent threads can execute in parallel on multiple cores. An event loop is a single-threaded system: there is only one thread, the event loop thread. Tasks execute one after the other (serially), not in parallel (the sum()/mul() example). If tasks schedule themselves again and again, they appear to execute concurrently (again, the upload/download example). The event loop provides concurrency but no parallelism, even on multi-processor systems.
What Applications Should Use Event Loops
- Applications which should be designed single threaded: no parallelism, lots of overlapping work, too much contention if designed multithreaded
- Applications with listener threads listening for I/O events
- Applications which need to defer work
- Applications running in a single-core environment (embedded devices)
- Communication across different independent components of a single software process
Design and Implement the Event Loop: A Data Structure for Asynchronous Programming. Thank you!
Serializing Multithreaded Flows (Asynchronous Programming Design Pattern 1)
Serializing Multi-threaded Flows
We have a multi-threaded program, and we want to reduce it to a logically single-threaded program (threads T1, T2, T3 reduced to a single thread T). Natural questions arise: if we wanted a single-threaded program, why write a multithreaded program in the first place? What is the need to reduce a multi-threaded program to single threaded? Why not simply refactor the application code so that it does not launch multiple threads? And is this even possible?
Overlapping and Non-Overlapping Work
If thread T1 is doing work W1 and thread T2 is doing work W2, then W1 and W2 are said to be overlapping if they operate on the same data. E.g., W1 sorts array A in ascending order and W2 sorts array A in descending order: since the array is common data on which T1 and T2 operate, W1 and W2 are overlapping work. If threads access the same data structures of a process, say some global variable, the work done by the two threads is overlapping. Applications with overlapping work tend to be single threaded; applications with non-overlapping work tend to be multi threaded.
Overlapping Application Example
A classic single-threaded application design example: networking protocols. Modification of one DS triggers modification of another DS, and modifications often trigger computation. Another example is an OSI layer implementation: processing is done sequentially (from top to bottom, or bottom to top).
Is Multi-Threaded Always Superior?
Is a multi-threaded application always superior to a single-threaded one? Does it always perform better? The greater the degree of isolated data structures and independent responsibilities, the more the application tends toward multi-threading, so that threads can be assigned independent duties and operate on isolated data without interfering with one another (waiting and signaling).
Traffic analogy: when the road (a shared resource) is congested, cars (threads) move slowly, advancing only when nearby cars create room; you will cover 5 km in 1 hour. When the resource is exclusively yours, with no competitor accessing it, you move faster; you will cover 5 km in 10 minutes.
Multithreaded vs Single-Threaded Applications
Abhi wants to design a new application; he is confused about whether to design it as single threaded or multithreaded. Choosing the wrong thread model can be devastating.
An application tends toward single threaded when it is too big in size (wrong thread sync leads to deadlocks), has overlapping data structures (too much locking and unlocking), has no clear demarcation of logic/processing, and has no scope for parallelism (sequential tasks).
An application tends toward multi threaded when it is of moderate size (proper thread sync is manageable), has isolated data structures (independent threads can perform CRUD operations), has clear demarcation of logic/processing (every thread tied to its specific responsibility), and has parallel tasks.
Listener Threads
We need to understand how threads can be used by an application to listen for external events. Refer to Appendix C; complete it before jumping to the next video.
Listener Threads
It is a common scenario that an application needs to constantly listen for external events. Those events can arrive at any time, and the application needs to process them; the application may use threads to listen for those events. Process STP has the overlapping property, meaning it is a single-threaded application by its design, with functions such as foo(), bar(), pkt_process(), ipc_process(), cli_handler(), kernel_events(), timer_event(). It may use a network packet listener thread, a user-input listener thread, a kernel-events listener thread, a process-communication thread (talking to process P1), and a timer thread. Using listener threads, the process can listen for all events at the same time and process each event as it arrives.
Problems here: process STP tends to move toward a multi-threaded design, against its base design and architecture; forced locking; contention.
Listener Threads: Serializing Computation Using the Event Loop
All listener threads, plus the timer thread, queue their computations into the event loop's task array. The event loop then runs and fires all queued computations serially. End result: application logic executes only in the context of the EL thread. Listener threads exist, but concurrency does not; process STP runs in the context of one thread only, the event loop thread, so the single-threaded design is maintained. No locking, no contention.
Getting Started with the Sample Project
Assignment: https://github.com/sachinites/asyncprogramming/, dir: AsyncProject; solution dir: AsyncProjectSoln. Files: stp.c, rt.c, rt.h.
The routing table (rt_table) represents the data structures possessed by the STP process. Sample routing table:

    Dest/Mask       Gateway    OIF
    122.1.1.1/32    10.1.1.1   eth0
    130.1.2.3/32    10.1.2.1   eth1

The routing table can be concurrently updated (CUD: Create, Update, Delete) by the user in the context of the main thread (the user-input listener thread) and by the network in the context of the packet listener thread (pkt_process()). STP is supposed to be single threaded, but this violates that assumption: STP is thread unsafe and error prone.
Event Loop Integration: Solution Design
The event loop must be the only execution unit which executes STP's instructions. Listener/UI threads submit their computation (stp_update_routing_table()) to the event loop instead of executing application logic directly; consequently, the event loop triggers all computations on behalf of the listener threads. Thus, STP is reduced to a logically single-threaded process. Listener threads have just one purpose: to listen for events in a blocking state, not to provide concurrency (the reason threads were invented).
Next: integrating timers, another tool for async programming.
Event Loop Integration Steps
Goal: stp_update_routing_table() must be invoked by the event loop thread instead of by the listener threads; all of STP's listener threads must hand over their respective computations to the event loop thread.
1. The STP process must have an event loop object and initialize it using stp_init_el().
2. Create a new data structure el_rt_table_update_data_t to wrap all arguments into one single object.
3. Write a new API which creates a new task and submits it to the event loop task array:
   task_t *el_stp_update_routing_table(rt_table_t *rt_table, int cmd_code, rt_table_entry_t *rt_entry);
4. Change the application code in stp.c to reroute the computation from direct invocation to the event loop: instead of calling stp_update_routing_table(), call the new API el_stp_update_routing_table().
This is a common pattern for how an application submits computation/data to a general library.
Serializing Timers (Asynchronous Programming Design Pattern 2)
Timers
Timers are one of the important elements of programming which contribute to async programming. A timer allows us to do timer-driven programming: schedule computation to be performed in the future, one shot or periodically. In this section, we first learn the POSIX timer APIs and how to launch a new timer; then we analyse the unwanted concurrency problem introduced by timers in the STP project; finally, we integrate timers with the event loop library so that our STP process stays a logically single-threaded program. See Appendix D to learn POSIX timers; from the next video, I assume you already know how to create and delete POSIX timers. If you already have another timer library which you have been using on the Linux platform, feel free to use it and skip Appendix D.
Timers: STP Project Homework
Routing table entries learned from the network should have an expiration time of 30 seconds; entries created by the user should not have any expiration time. This is your first opportunity to learn to work with timers; conceptually it is the same irrespective of the timer library.
Implement Timer Expiration: rt.h changes

    #define RT_ENTRY_EXP_TIMER 30

Define new members in the structure below:

    struct rt_table_entry_ {
        ...
        Timer_t *exp_timer;
        int exp_timer_msec;
    };

Add a new argument to rt_insert_new_entry(); if 0, do not start the timer, else start it:

    int /* 0 on success, -1 on failure */
    rt_insert_new_entry(rt_table_t *rt, char *dest, char mask,
                        char *gw, char *oif, int exp_timer_in_millisec);
Timers: STP Project Unwanted Concurrency
Timers have introduced unwanted concurrency: an rt table entry is deleted by its timer thread, the user might be updating the rt table through the CLI, and at the same time the packet listener thread might be updating the routing table.
Solution: hand over timer events to the event loop thread, and let the event loop thread delete the RT table entry. Again, all the external threads' work is serialized by the event loop thread. External threads: the main thread, the packet listener thread, and the timer threads for each RT entry.
Event Loops: Summary
- An event loop is an async programming structure which schedules computation to be performed in the future (work deferral).
- The event loop dequeues computations from its queue and triggers them (places them on the execution stack).
- It is a powerful programming structure which helps avoid forced multithreading and forced locking.
- It helps realize concurrency in single-threaded programs (next section).
- The event loop acts as the in-charge, or prime authority, of the application code: no thread other than the event loop is allowed to execute application code.
- Other threads of the application (timers, socket threads, UI threads) place requests for computation with the event loop; the event loop thread then triggers the computation requests submitted by those threads.
- The event loop is also called an event dispatcher.
Preemption (Asynchronous Programming Design Pattern 3)
Preemption
We have already discussed an example of preempting the ongoing task and scheduling it again. In an event loop, preemption is required so that the same job does not occupy the CPU for a very long time: multiple EL jobs/tasks should get a fair share of the CPU so that "progression" is observed. The same technique is used by the OS scheduler when it schedules processes using context switching. So we now observe two levels of scheduling:
Level 1: the OS scheduling processes, including our stp.c.
Level 2: stp.c scheduling EL tasks (EL tasks are invisible to the underlying OS).
Let's do it formally now.
Preemption: Context Save
For a task to be preempted and rescheduled, it must resume from the same state where it left off. The application must save the task state before preempting it and restore it when the task is scheduled again. This is similar to the context switching done by the OS when switching threads (analogy: bookmarking your page while reading a book). Here, task state means remembering how much work the task has completed. Let us reinvent the "print routing table" function, implemented with preemption: on every preemption and reschedule, print 10 entries incrementally from the routing table.
Assignment
Before we proceed further, let's do one assignment. Introduce a new option in the STP main menu:
6. Serialize and send Rt Entry
7. Exit
The new option 6 must do the following:
- Ask the user for the route credentials: destination and mask (= 32).
- Ask for the destination port number (50000).
- Create a new task: the task must search for the RT entry in the RT table and, if found, send it across the network using UDP to 127.0.0.1:50000.
- To encode the RT entry in the message to be sent, use the same message layout as used in udp_client.c with code ROUTE_CREATE.
Run an instance of stp.exe in another terminal window and start a packet listener thread listening on port 50000. Then run another instance of stp.exe and choose option 2 to create a new rt entry, followed by option 6. The link to the solution is shared with the video.
Task Prioritization (Asynchronous Programming Design Pattern 4)
Task Prioritization
Tasks may have a priority associated with them which represents how important the task is. Important tasks must be given the CPU earlier and get more CPU time compared to non-important tasks; tasks which can tolerate delay in getting CPU time should have the least priority. The event loop thread must choose tasks with higher priority over tasks with lower priority when scheduling them on the CPU: a task with priority Pj must be chosen by the EL thread for execution only when there is no task scheduled with priority P such that P > Pj. The event loop thread must maintain one EL queue per priority level and must finish all tasks scheduled with higher priority first.
Enhance the API below to include the priority of the task as a new parameter:

    task_t *task_create_new_job(event_loop_t *el, event_cbk cbk, void *arg,
                                TASK_PRIORITY_T task_priority);

    TASK_PRIORITY         Value  Task Examples
    TASK_PRIORITY_HIGH    0      Packet processing, show commands, packet sending
    TASK_PRIORITY_MEDIUM  1      Config commands, algorithmic computation
    TASK_PRIORITY_LOW     2      Object deletion, garbage collection
Task Prioritization: Code Changes
Enhance the API below to include the priority of the task as a new parameter:

    task_t *task_create_new_job(event_loop_t *el, event_cbk cbk, void *arg,
                                TASK_PRIORITY_T task_priority);

Enhance the API below so that the task with the higher priority is returned first:

    static task_t *event_loop_get_next_task_to_run(event_loop_t *el);
Task Prioritization: Problems Addressed
1. Making show commands the highest-priority tasks enhances user response time.
2. Making packet processing the highest-priority task helps process incoming network packets in a time-bound manner.
3. Making delete tasks the lowest-priority tasks addresses premature deletion of objects.
Problem 1: Making show commands the highest-priority tasks enhances user response time.
Let there be an algorithm A which executes with MEDIUM priority and with preemption; A takes a total of 5 seconds to finish and preempts after every 1 second (chunks A1..A5).
If show_task is MEDIUM priority while algorithm A runs at HIGH: when the user triggers the show command during A's execution, show_task queues behind all of A's chunks (A1 A2 A3 A4 A5 show), and the user experiences a long lag.
If show_task is HIGH priority: the show command runs as soon as the current chunk finishes (A1 A2 A3 show A4 A5), and the lag shrinks to at most one chunk.
Problem 2: Making packet processing the highest-priority task helps process incoming network packets in a time-bound manner.
A client sends an HTTP request to a server. Suppose the server uses event loop scheduling such that algorithm A is HIGH priority and the HTTP request packet-processing task is MEDIUM priority. When the HTTP request packet arrives while A is running, the http_pkt_processing_task queues behind A's chunks (A1 A2 A3 A4 A5 http_pkt_processing_task), and the client experiences a delayed response from the server.
Solution: make the HTTP packet-processing task high priority and algorithm A lower priority.
Asynchronous Programming Design Patterns : Task Prioritization - Problems Addressed

3. Making delete tasks the lowest-priority tasks addresses premature deletion of objects (discussed in the next section).
Asynchronous Programming Design Pattern 5 : Handling Premature Deletion
Asynchronous Programming Design Patterns : Handling Premature Deletion

Premature deletion means deleting a data object from the system while there are still tasks scheduled to operate on that object. The problem of premature deletion is inherent to any concurrent environment, be it a single-threaded concurrent environment driven by an event loop, or a multi-threaded concurrent environment achieved via multi-threading.
Asynchronous Programming Design Patterns : Understanding Premature Deletion

The following events happen in the STP process in exactly this sequence:
1. A new RT table entry is malloc'd; PTR is a pointer to it
2. This RT table entry is inserted into the RT table
3. An expiry timer is started for this rt entry
4. A task TS is scheduled on this rt entry, to serialize it and send it over the network to a remote machine
5. The expiry timer fires: it detaches the rt entry from the RT table and frees it (free(PTR))
6. TS is triggered and performs its operation on PTR
7. STP crashes, because the pointer to the rt table entry, PTR, is now a dangling pointer
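The crash sequence can be sketched as a toy C program. The names (PTR, the expiry timer, task TS) follow the slide; to keep the sketch safe to compile and run, the free is modeled as a freed flag rather than a real free(), so the use-after-free is detected instead of actually crashing.

```c
#include <stdlib.h>

/* Toy model of the STP crash: the expiry timer deletes the rt entry
   before task TS runs, so TS operates on a dangling pointer. */
typedef struct rt_entry_ { int freed; } rt_entry_t;

static rt_entry_t *PTR;   /* pointer to the malloc'd rt table entry */

/* Step 5 on the slide: expiry timer fires and deletes the entry.
   (The real code calls free(PTR); we only mark it here to avoid UB.) */
static void expiry_timer_fires(void) { PTR->freed = 1; }

/* Step 6: task TS runs and touches PTR. Returns -1 when it would have
   dereferenced freed memory, i.e. the premature-deletion crash. */
static int task_TS_runs(void) { return PTR->freed ? -1 : 0; }
```

Because the timer and TS are both serviced by the same event loop, nothing in the sequence itself prevents the deletion from landing before TS runs; that ordering has to be enforced by design, as the next slide does.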
Asynchronous Programming Design Patterns Handling Premature Deletion Solution Solution : Problem to Premature deletion is resolved when : Deletion (free) of Data object D is postponed until all Task scheduled to operate on it has also finished their respective operation on it Let their be Three Tasks T1 , T2 and T3 , Where T3 task is a delete task. Event-loop thread must schedule Task T3 only when Task T1 and T2 have finished their respective operation on D This would ensure no other task in the system ever sees D through Dangling Pointer Hence, Simple Solution is : Always delete objects through Separate Tasks, and not directly Delete Task must be of least Priority Pd Any Non-Delete Task Must have priority P > Pd T1 T2 T3 D D D free(D)
Asynchronous Programming Design Patterns : Handling Premature Deletion - Solution in STP

Changes in the STP process:

Delete single rt table entry (option #4):
- Create a new task with priority LOW
- In the task callback fn:
  - Detach incoming pointers to the rt_entry
  - Detach the rt_entry from its container data structures
  - Release the rt_entry's resources
  - free the rt_entry
  - Return EL_FINISH

Introduce a new option #7, delete rt table: void stp_el_delete_rt_table(rt_table_t *rt_table)
- Create a new task with priority LOW
- In the task callback fn:
  - Detach incoming pointers to the rt_table
  - Detach the rt_table from its container data structures (important)
  - Release the rt_table's resources
  - Preempt after deleting 10 entries, returning EL_CONTINUE
  - Return EL_FINISH once all entries are deleted
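The preempt-after-10-entries deletion loop might look like the sketch below. The EL_CONTINUE / EL_FINISH values and the struct layouts are assumptions based on the slide; the course library's actual definitions may differ.

```c
#include <stdlib.h>

#define EL_CONTINUE 0
#define EL_FINISH   1
#define DELETE_BATCH_SIZE 10   /* preempt after deleting this many entries */

typedef struct rt_entry_ {
    struct rt_entry_ *next;
} rt_entry_t;

typedef struct rt_table_ {
    rt_entry_t *head;
} rt_table_t;

/* LOW-priority task callback: detach and free at most DELETE_BATCH_SIZE
   entries, then yield back to the event loop. Returning EL_CONTINUE asks
   the event loop to reschedule this task; EL_FINISH retires it. */
int rt_table_delete_task_cbk(void *arg)
{
    rt_table_t *rt_table = (rt_table_t *)arg;
    for (int n = 0; n < DELETE_BATCH_SIZE; n++) {
        rt_entry_t *e = rt_table->head;
        if (!e) return EL_FINISH;        /* all entries already deleted */
        rt_table->head = e->next;        /* detach from the container first */
        free(e);                         /* then release the entry */
    }
    return rt_table->head ? EL_CONTINUE : EL_FINISH;
}
```

Batching keeps the low-priority delete task from monopolizing the CPU when the table is large: higher-priority tasks get a chance to run between batches.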
Asynchronous Programming Design Pattern 6 : Avoid Redundant Work
Asynchronous Programming Design Patterns : Avoid Redundant Work

Real-life examples:
- Your mother asks you to mop the floor 5 times in a day, and you mop it 5 times. You end up cleaning the same floor unnecessarily 4 times.
- Your mother asks you to mop the floor 5 times in a day, but you obey only the last order and mop the floor on the last call. Intelligent!

Why do the same work multiple times when the work produces the same result no matter how many times you do it (or when doing it again simply overwrites the result of the previous run)? It is better to do it only on the last call.
Asynchronous Programming Design Patterns : Avoid Redundant Work

Naive approach:

foo() {
    ...
    if (C1) algo1();
    ...
    if (C2) algo1();
    ...
}

The second invocation of algo1() makes the previous invocation of algo1() redundant.

Better approach - call algo1 only once:

foo() {
    bool call_algo1 = FALSE;
    ...
    if (C1) call_algo1 = TRUE;
    ...
    if (C2) call_algo1 = TRUE;
    ...
    if (call_algo1) algo1();
}

This requires tracking via a local bool variable, and will not work if foo() is a deeply nested fn where the call to algo1 is also decided inside deeply nested functions.

Best approach - asynchronous program:

foo() {
    ...
    if (C1) {
        if (!algo_task) {
            algo_task = task_create_new_job(el, algo1, arg);
        }
    }
    ...
    if (C2) {
        if (!algo_task) {
            algo_task = task_create_new_job(el, algo1, arg);
        }
    }
}

This calls algo1 only once, requires tracking via a global algo_task variable, and works in nested fn call cases.
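The best approach can be exercised end to end with a stubbed scheduler, as in the sketch below. The stub task_create_new_job only counts jobs and returns a dummy handle; the real API from the earlier slides takes the event loop as its first parameter.

```c
#include <stddef.h>

static int jobs_scheduled = 0;          /* how many times algo1 was actually queued */
static void *algo_task = NULL;          /* pending-job handle (name from the slide) */

static void algo1(void *arg) { (void)arg; }

/* Stub for the event-loop API: records the scheduling request and
   returns a non-NULL handle, standing in for the real task_t pointer. */
static void *task_create_new_job(void (*cbk)(void *), void *arg)
{
    (void)cbk; (void)arg;
    jobs_scheduled++;
    return &jobs_scheduled;
}

/* Both conditions want algo1, but the pending-task check coalesces the
   two requests into a single scheduled job. */
void foo(int C1, int C2)
{
    if (C1 && !algo_task) algo_task = task_create_new_job(algo1, NULL);
    /* ... deeply nested functions may also request algo1 the same way ... */
    if (C2 && !algo_task) algo_task = task_create_new_job(algo1, NULL);
}
```

In the real library, the event loop would reset algo_task back to NULL after running algo1, re-arming the coalescing for the next burst of requests.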
Asynchronous Programming Design Patterns

Programming Language : C/C++ (but concepts are language agnostic)
Prerequisites : Thread synchronization (mutex and condition variables), socket programming basics
Website : www.csepracticals.com