
mhp821023 — 119 slides — Sep 20, 2024


Slide Content

Stream API Life Cycle

Daemon and Worker Threads

A thread in Java can be a so-called daemon thread or a standard worker thread. When a Java program starts, one thread begins running immediately: the main thread, which executes the main method. We can create child threads from the main thread. Daemon threads are intended as helper threads (for example, garbage collection). The main thread is the last thread to finish execution because it performs various shutdown operations.

Memory Management

There are processes and threads (lightweight processes). The main purpose of synchronization is the sharing of resources without interference, using mutual exclusion. "The typical difference is that threads (of the same process) run in a shared memory space, while processes run in separate memory spaces."

STACK MEMORY: the local variables and method arguments, as well as method calls, are stored on the stack.
HEAP MEMORY: objects are stored on the heap and live as long as they are referenced from somewhere in the application.

EVERY THREAD HAS ITS OWN STACK MEMORY, BUT ALL THREADS SHARE THE HEAP MEMORY (SHARED MEMORY SPACE).

Daemon and Worker Threads

Diagram: inside the Java Virtual Machine (JVM) there is the main thread, the daemon threads (such as the garbage collector), and as many worker threads as we want to create (child threads of the main thread).

Daemon and Worker Threads

Daemon threads are low-priority threads that run in the background to perform tasks such as garbage collection. We usually create daemon threads for I/O operations or services (smartphone services such as NFC or Bluetooth communication); for example, a daemon thread in a smartphone application could look for smartwatches to pair with. Daemon threads are terminated by the JVM when all worker threads have finished execution. This is the main difference: worker threads are not terminated by the JVM, while daemon threads are.
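A minimal sketch of creating a daemon thread (the polling task and class name are illustrative, not from the slides):

```java
public class DaemonDemo {
    // Create a background helper thread; it must be marked as a daemon BEFORE start().
    public static Thread newDaemon() {
        Thread t = new Thread(() -> {
            while (true) {
                // pretend to poll for devices to pair with
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            }
        });
        t.setDaemon(true); // the JVM will not wait for this thread on exit
        return t;
    }

    public static void main(String[] args) {
        Thread t = newDaemon();
        t.start();
        System.out.println(t.isDaemon()); // true
        // main exits here; the JVM terminates the daemon automatically
    }
}
```

Note that calling setDaemon(true) after start() would throw an IllegalThreadStateException.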

Memory Management

There are processes and threads (lightweight processes). "The typical difference is that threads (of the same process) run in a shared memory space, while processes run in separate memory spaces."

Diagram: main memory (RAM) shared by thread 1 and thread 2, each with its own stack and cache.

Memory Management

public void increment() { counter++; }

The increment consists of several steps: reading the number from memory, incrementing the value, writing the number back to memory, and returning. The operation seems atomic in the sense that it requires only a single statement, but this is not the case: it takes some time to finish the increment operation, and during this procedure another thread may call the method as well, using the original counter value.

Intrinsic Lock (Monitor)

public synchronized void increment() { counter++; }

Every object in Java has a so-called intrinsic lock. "A thread that needs exclusive and consistent access to an object's fields has to acquire the object's intrinsic lock before accessing them, and then release the intrinsic lock when it's done with them." Because of the monitor lock, no two threads can execute the same synchronized method on the same object at the same time.

Intrinsic Lock (Monitor)

THE PROBLEM IS THAT EVERY OBJECT HAS A SINGLE MONITOR LOCK. A thread owns the intrinsic lock between the time it acquires the lock and the time it releases it. If we have two independent synchronized methods, the threads still have to wait for each other to release the lock. As long as a thread owns an intrinsic lock, no other thread can acquire the same lock; the other thread blocks when it attempts to acquire it.

Intrinsic Lock (Monitor)

public synchronized void increment() { counter++; }

public void increment() { synchronized (this) { counter++; } }

This is called object-level locking, because we acquire the monitor lock (intrinsic lock) associated with the object itself.

Intrinsic Lock (Monitor)

public static synchronized void increment() { counter++; }

public void increment() { synchronized (SomeClass.class) { counter++; } }

This is called class-level locking, because we acquire the monitor lock (intrinsic lock) associated with the class.

Threads Communication

Diagram: thread 1 and thread 2, each with its own stack and cache, communicating via wait() and notify(). A thread that holds an intrinsic lock (monitor) can release it with wait() until another thread locking on the same monitor calls notify(). Note: the wait() and notify() methods can be called exclusively from synchronized methods or blocks.
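The wait()/notify() handoff above can be sketched as a minimal single-slot exchange between two threads (the class and method names are illustrative):

```java
public class HandoffDemo {
    private Integer slot = null; // shared state guarded by this object's monitor

    public synchronized void put(int value) {
        slot = value;
        notify(); // wake up a thread waiting on this object's monitor
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) {
            wait(); // releases the monitor until notify() is called
        }
        int v = slot;
        slot = null;
        return v;
    }

    public static int runOnce() throws InterruptedException {
        HandoffDemo h = new HandoffDemo();
        Thread producer = new Thread(() -> h.put(42));
        producer.start();
        int received = h.take(); // blocks until the producer calls put()
        producer.join();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnce()); // prints 42
    }
}
```

The wait() call sits in a while loop rather than an if: a thread should always re-check the condition after waking up, since notify() only signals that the state may have changed.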

Stream API Explained

Streams are supported starting in Java 8. Streams essentially introduce functional programming to the Java ecosystem. "In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions." Streams rely heavily on lambda expressions, and we can construct parallel operations quite easily with streams. "Lambda expressions are similar to methods, but they do not need a name and they can be implemented right in the body of a method."

Memory Management

Diagram (animation): counter = 0; thread #1 and thread #2 each execute counter = counter + 1 at the same time; the final value is counter = 1. Note: both threads incremented the value of the counter, but the result is 1 (instead of 2) because they performed the same operation at the same time on the same original value.


Stream API Explained

SOURCE: the source of the stream (a collection of data).
INTERMEDIATE OPERATIONS: return a stream (filtering, sorting, etc.).
TERMINAL OPERATIONS: return a scalar value or void (collect, reduce, etc.).
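The source / intermediate / terminal stages can be sketched in one small pipeline (the example data and class name are illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamPipelineDemo {
    // source -> intermediate operations -> terminal operation
    public static List<Integer> squaresOfEvens(List<Integer> nums) {
        return nums.stream()                   // SOURCE: the collection of data
                .filter(n -> n % 2 == 0)       // INTERMEDIATE: keep even numbers
                .map(n -> n * n)               // INTERMEDIATE: square each value
                .sorted()                      // INTERMEDIATE: sort ascending
                .collect(Collectors.toList()); // TERMINAL: materialize the result
    }

    public static void main(String[] args) {
        System.out.println(squaresOfEvens(List.of(3, 4, 1, 2))); // [4, 16]
    }
}
```

Intermediate operations are lazy: nothing runs until the terminal collect() is reached.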

Threads: Life Cycle

A thread goes through various stages: it is born, it is started, it runs, and it dies. The states are: new, runnable, running, waiting, dead.

1.) NEW: when we instantiate a thread; it stays in this state until we start it with start().
2.) RUNNABLE: after we have started the thread; the thread is executing its task in this state.
3.) WAITING: when a thread is waiting, for example for another thread to finish its task; when the other thread signals, this thread goes back to the runnable state (wait() and sleep()).
4.) DEAD: after the thread finishes its task.
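These life-cycle stages can be observed through Thread.getState(); note that the JDK's own enum names them NEW, RUNNABLE, WAITING, TIMED_WAITING, BLOCKED, and TERMINATED (the slides' "dead" state). A small sketch:

```java
public class ThreadStateDemo {
    public static Thread.State[] observeStates() throws InterruptedException {
        Thread t = new Thread(() -> { /* a short task */ });
        Thread.State before = t.getState(); // NEW: instantiated but not started
        t.start();
        t.join();                           // wait for the thread to finish its task
        Thread.State after = t.getState();  // TERMINATED: the "dead" state
        return new Thread.State[] { before, after };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : observeStates()) System.out.println(s);
    }
}
```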

MULTITHREADING ADVANTAGES

Advantages

We can design more responsive software by doing several things at the same time. We can achieve better resource utilization, and we can improve performance.

Typical use cases: downloading images from the web; I/O operations such as copying files and parsing their content; doing heavy calculations, for example simulations and numerical methods. Each of these can be assigned to its own thread (thread #1, thread #2, thread #3).

MULTITHREADING DISADVANTAGES

Disadvantages

Of course, there are some costs involved in multithreading; multithreading is not always better! Threads manipulate data located in the same memory area, and we have to take that into consideration. Multithreaded software is difficult to design, and errors are hard to detect. EXPENSIVE OPERATION: switching between threads is expensive! The CPU has to save the local data and pointers of the current thread, then load the data of the other thread.

Rule of thumb: for small problems it is unnecessary to use multithreading; it may be slower than a single-threaded application. For example, multithreaded sorting is slower for a small number of items. (Chart: running time versus number of threads.)

MULTITHREADING DEADLOCK AND LIVELOCK

Deadlock

"Deadlock occurs when two or more threads wait forever for a lock or resource held by another of the threads." Deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.

1.) DEADLOCK IN DATABASES: deadlock happens when two processes, each within its own transaction, update two rows of information in the opposite order. For example: process A updates row 1 then row 2 in the exact timeframe that process B updates row 2 then row 1.

Deadlock

2.) DEADLOCK IN OPERATING SYSTEMS: deadlock is a situation which occurs when a process or thread enters a waiting state because a requested resource is being held by another waiting process, which in turn is waiting for another resource held by yet another waiting process.
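The classic two-lock scenario can be sketched in a deadlock-free form by acquiring the locks in one consistent order (the class and field names are illustrative). Had task2 taken lockB first, the two threads could block each other forever:

```java
public class LockOrderingDemo {
    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int work = 0;

    // Both methods acquire lockA before lockB: with a consistent global
    // lock order there is no cyclic wait, so no deadlock can occur.
    public void task1() { synchronized (lockA) { synchronized (lockB) { work++; } } }
    public void task2() { synchronized (lockA) { synchronized (lockB) { work++; } } }

    public static int run() throws InterruptedException {
        LockOrderingDemo d = new LockOrderingDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) d.task1(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) d.task2(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return d.work; // 2000: both threads completed, no deadlock, no lost updates
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```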

Livelock

A thread often acts in response to the action of another thread. If the other thread's action is also a response to the action of another thread, then livelock may occur. Livelocked threads are unable to make further progress; however, the threads are not blocked: they are simply too busy responding to each other to resume work. It is like two people attempting to pass each other in a narrow corridor: A moves to his left to let B pass, while B moves to his right to let A pass. They are still blocking each other, so A moves to his right while B moves to his left... still no good.

How to Handle Deadlock and Livelock?

1.) We should make sure that a thread does not block infinitely if it is unable to acquire a lock; this is why the Lock interface's tryLock() method is extremely convenient and useful.
2.) We should make sure that each thread acquires the locks in the same order, to avoid any cyclic dependency in lock acquisition.
3.) Livelock can be handled with the methods above plus some randomness: threads retry acquiring the locks at random intervals.
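Point 1 can be sketched with ReentrantLock.tryLock(): while one thread holds the lock, another thread's tryLock() returns false immediately instead of blocking forever (names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // Returns whether a second thread could grab the lock while we hold it.
    public static boolean contendedAcquire() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        boolean[] acquired = new boolean[1];
        lock.lock(); // the main thread holds the lock for the whole experiment
        try {
            Thread t = new Thread(() -> {
                // tryLock() returns immediately instead of waiting indefinitely
                acquired[0] = lock.tryLock();
                if (acquired[0]) lock.unlock();
            });
            t.start();
            t.join();
        } finally {
            lock.unlock();
        }
        return acquired[0]; // false: the lock was held, but nobody blocked forever
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(contendedAcquire());
    }
}
```

A timed variant, tryLock(timeout, unit), lets a thread wait a bounded amount of time before giving up, which pairs well with the randomized-retry idea in point 3.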


Parallel versus Multithreading

Diagram: "parallel execution" (thread #1 and thread #2 run simultaneously on separate cores) compared with "multithreaded execution" (thread #1 and thread #2 share one core with time-slicing).

Volatile Keyword

Diagram: main memory (RAM), with thread 1 on CPU 1 (with its cache) and thread 2 on CPU 2 (with its cache). Every read of a volatile variable is served from the RAM, the main memory, and not from the cache. Usually variables are cached for performance reasons, since caches are faster. Do not use the volatile keyword if it is not necessary (it also prevents instruction reordering, which is a performance-boosting technique).
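A common legitimate use of volatile is a stop flag shared between threads: the write by one thread is guaranteed to be visible to the reader (the class name and timing are illustrative):

```java
public class VolatileFlagDemo {
    // Without volatile, the worker thread might never observe the update,
    // because it could keep re-reading a cached value of the flag.
    private static volatile boolean running = true;
    private static long iterations = 0;

    public static long run() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {   // every read of 'running' goes to main memory
                iterations++;
            }
        });
        worker.start();
        Thread.sleep(50);
        running = false;        // this write becomes visible to the worker
        worker.join();          // the loop terminates promptly thanks to volatile
        return iterations;      // safe to read here: join() orders the accesses
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run() >= 0);
    }
}
```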

Volatile

Diagram (animation): counter = 0; thread 1 and thread 2 each execute counter = counter + 1, yet the counter ends up as 1. The counter remained 1 instead of 2: volatile guarantees visibility, not atomicity, so we should still make sure the threads wait for each other to finish the given task on the variables!

Diagram: thread #1 and thread #2 swapping object1 and object2 through an EXCHANGER.

MULTITHREADING STUDENT LIBRARY SIMULATION

Diagram: books b0, b1, b2, b3, b4, b5, b6, b7 shared among students s0, s1, s2, s3.

MULTITHREADING DINING PHILOSOPHERS PROBLEM

Dining Philosophers Problem

It was formulated by Dijkstra in 1965. Five philosophers sit at a table and there are five forks (chopsticks). The philosophers can eat and think. A philosopher can eat when he holds both the left and the right chopstick, and a chopstick can be held by only one philosopher at a given time. The problem: how do we create a concurrent algorithm such that no philosopher will starve? (So the aim is to avoid deadlocks.)
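One way to make the problem deadlock-free is the lock-ordering rule from the deadlock slides: every philosopher picks up the lower-numbered chopstick first, which breaks the cyclic wait. A sketch under that policy (the slides do not prescribe this particular solution):

```java
import java.util.concurrent.locks.ReentrantLock;

public class DiningPhilosophers {
    public static int dine(int n, int meals) throws InterruptedException {
        ReentrantLock[] chopsticks = new ReentrantLock[n];
        for (int i = 0; i < n; i++) chopsticks[i] = new ReentrantLock();
        int[] eaten = new int[n];
        Thread[] phils = new Thread[n];
        for (int p = 0; p < n; p++) {
            final int id = p;
            // philosopher id needs chopsticks id and (id+1)%n;
            // always lock the lower-numbered one first -> no cyclic wait
            ReentrantLock first  = chopsticks[Math.min(id, (id + 1) % n)];
            ReentrantLock second = chopsticks[Math.max(id, (id + 1) % n)];
            phils[p] = new Thread(() -> {
                for (int m = 0; m < meals; m++) {
                    first.lock();
                    try {
                        second.lock();
                        try { eaten[id]++; }          // eat with both chopsticks
                        finally { second.unlock(); }
                    } finally { first.unlock(); }
                }
            });
            phils[p].start();
        }
        int total = 0;
        for (Thread t : phils) t.join();
        for (int e : eaten) total += e;
        return total; // n * meals: every philosopher ate, nobody deadlocked
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(dine(5, 100)); // 500
    }
}
```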

Dining Philosophers Problem

Diagram (animation): philosophers p, p1, p2, p3, p4 seated around the table, with chopsticks c, c1, c2, c3, c4 between them.

MULTITHREADING Locks and synchronization

Locks and Synchronized Blocks

A reentrant lock has the same basic behavior as we have seen for synchronized blocks, with some extended features. We can make a lock fair to prevent thread starvation; synchronized blocks are unfair by default. With reentrant locks we can check whether the given lock is held or not, and we can get the list of threads waiting for the given lock. Synchronized blocks are nicer in one respect: we do not need a try-finally block.
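The fair-lock and try-finally points above can be sketched as follows (the counter scenario is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // fair: FIFO handoff
    private int counter = 0;

    public void increment() {
        lock.lock();            // acquire, like entering a synchronized block
        try {
            counter++;
        } finally {
            lock.unlock();      // always release, even if the body throws
        }
    }

    public static int run() throws InterruptedException {
        ReentrantLockDemo d = new ReentrantLockDemo();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) d.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return d.counter; // 20000: no lost updates
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

The extra introspection mentioned above lives on the same class: isLocked(), isFair(), and getQueueLength() report the lock's state and the number of waiting threads.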

MULTITHREADING Parallel sum

Array: 5 2 8 1 11 17 9 3

Sequential sum (animation): we iterate over the items one at a time.

for i in nums: total = total + nums[i]

Parallel sum: with multiple processors or a multicore processor we can assign a task to every processor (parallel computing). For example, thread #1 sums the first half (sum1 = 16) while thread #2 sums the second half (sum2 = 40).
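The two-thread partial-sum idea above can be sketched directly with raw threads (the class name is illustrative; a fork-join version follows later in the deck):

```java
public class ParallelSumDemo {
    // Each thread sums half of the array; the partial sums are combined at the end.
    public static int parallelSum(int[] nums) throws InterruptedException {
        int mid = nums.length / 2;
        long[] partial = new long[2];
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < mid; i++) partial[0] += nums[i];      // sum1
        });
        Thread t2 = new Thread(() -> {
            for (int i = mid; i < nums.length; i++) partial[1] += nums[i]; // sum2
        });
        t1.start(); t2.start();
        t1.join(); t2.join(); // join() makes the partial sums safely visible here
        return (int) (partial[0] + partial[1]);
    }

    public static void main(String[] args) throws InterruptedException {
        int[] nums = {5, 2, 8, 1, 11, 17, 9, 3};
        System.out.println(parallelSum(nums)); // 56 (16 + 40)
    }
}
```

The two halves are independent, which is exactly the property that makes the task parallelizable.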

MULTITHREADING FORK-JOIN FRAMEWORK

What is the fork-join framework? A concrete implementation for parallel execution!

Fork-Join Framework

This framework helps to make an algorithm parallel; we do not have to bother with low-level synchronization or locks. It is designed for divide-and-conquer algorithms: a larger task can be divided into smaller ones, and the subsolutions can be combined. IMPORTANT: the subtasks have to be independent in order to be executed in parallel. So the main concept is that the fork-join framework breaks the task into smaller subtasks until these subtasks are simple enough to solve without further breakups. For example: parallel merge sort, parallel maximum finding.

RecursiveTask<T>: a task that returns a value of type T. Every task we want to execute in parallel is a subclass of this class; we have to override the compute() method, which returns the solution of the subproblem.

RecursiveAction: a task like RecursiveTask, but without any return value.

ForkJoinPool: basically a thread pool for executing fork-join tasks, with work stealing. A task is not equivalent to a thread! Tasks are lightweight, so fork-join is efficient even when there are a huge number of tasks. ForkJoinPool creates a fixed number of threads, usually the number of CPU cores. These threads execute the tasks, but if a thread has no task it can "steal" a task from busier threads, so tasks are distributed across all threads in the thread pool.
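A minimal RecursiveTask sketch for the parallel-sum example from earlier (the threshold of 4 is an arbitrary illustrative choice; real code would use a much larger one):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 4; // below this size, sum sequentially
    private final int[] nums;
    private final int lo, hi;

    public ForkJoinSum(int[] nums, int lo, int hi) {
        this.nums = nums; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {           // simple enough: solve directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += nums[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        ForkJoinSum left  = new ForkJoinSum(nums, lo, mid);
        ForkJoinSum right = new ForkJoinSum(nums, mid, hi);
        left.fork();                          // run the left half asynchronously
        long rightSum = right.compute();      // compute the right half on this thread
        return rightSum + left.join();        // wait for the left half and combine
    }

    public static long sum(int[] nums) {
        return new ForkJoinPool().invoke(new ForkJoinSum(nums, 0, nums.length));
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{5, 2, 8, 1, 11, 17, 9, 3})); // 56
    }
}
```

Forking one half and computing the other in place is the idiomatic pattern: it keeps the current worker thread busy instead of making it wait.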

fork: split the given task into smaller subtasks that can be executed in a parallel manner.
join: the split subtasks are executed, and after all of them have finished, they are merged into one result.

Diagram (animation): parallel execution of the subtasks compared with sequential execution.

Semaphores and Mutexes

Semaphores were invented by Dijkstra back in 1962. Semaphores are simple variables (or abstract data types) that are used for controlling access to a common resource; it is an important concept in operating systems as well. "It is a record of how many units of a particular resource are available. We have to wait until a unit of the resource becomes available again."

COUNTING SEMAPHORES: allow an arbitrary resource count.
BINARY SEMAPHORES: semaphores that are restricted to the values 0 and 1.


Semaphores

Suppose a library has 10 identical study rooms, each usable by a single student at a time. Students must request a study room from the front desk. If no rooms are free, students have to wait until rooms become available again, that is, until someone relinquishes a study room. When a student has finished using a room, the student must return to the front desk and indicate that one room has become free. THIS PROBLEM CAN BE SOLVED WITH THE HELP OF A SEMAPHORE.
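The study-room scenario maps directly onto java.util.concurrent.Semaphore: the permit count is the number of free rooms (the class name is illustrative):

```java
import java.util.concurrent.Semaphore;

public class StudyRoomsDemo {
    // 10 identical study rooms: a counting semaphore with 10 permits.
    private static final Semaphore rooms = new Semaphore(10);

    public static int useRoom() throws InterruptedException {
        rooms.acquire();                     // wait at the front desk for a free room
        try {
            return rooms.availablePermits(); // rooms still free while we occupy one
        } finally {
            rooms.release();                 // hand the room back to the front desk
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(useRoom()); // 9: one of the 10 rooms was in use
    }
}
```

With new Semaphore(1) the same class behaves as a binary semaphore.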

Semaphores

1.) A semaphore tracks only how many resources are free; it does not keep track of which resources are free.
2.) The semaphore count may serve as a useful trigger for a number of different actions (web servers).
3.) The producer-consumer problem can be solved and implemented with the help of semaphores (Dijkstra's approach).

Mutexes (Mutual Exclusion Objects)  a Lock is designed to enforce a mutual exclusion concurrency control policy „ M utual exclusion  is a property of concurrency control which is instituted for the purpose of preventing race conditions” p rocess synchronization plays an important role in maintaining the consistency of shared data (critical section problem) mutex is very similar to a binary semaphore: while binary semaphore can be used as mutex, a mutex is a more specific use-case

Mutexes (Mutual Exclusion Objects)

SEMAPHORE: a signaling mechanism; threads and processes perform wait() and notify() operations to indicate whether they are acquiring or releasing the resource.
MUTEX: a locking mechanism; a thread or process has to acquire the lock on the mutex object if it wants to acquire the resource.

Mutexes (Mutual Exclusion Objects)

SEMAPHORE: allows multiple program threads to access a finite set of instances of a resource (not just a single resource).
MUTEX: allows multiple program threads to access a single shared resource, but only one at a time.

Mutexes (Mutual Exclusion Objects)

SEMAPHORE: the process or thread blocks itself if no resource is free, until the count of the semaphore becomes greater than 0.
MUTEX: if the lock is already acquired by another thread or process, then the thread waits until the mutex object gets unlocked.

Executors

With the increase in the number of cores available in processors nowadays, multithreading is getting more and more crucial. Java provides its own multithreading framework, the so-called Executor Framework. With the help of this framework we can manage worker threads more efficiently because of thread pools. USING THREAD POOLS MAKES MULTITHREADING EFFICIENT!

Executors

Why use thread pools and the Executor Framework? It will handle everything: scheduling and executing the submitted tasks. Creating and managing threads is expensive, and adding a new thread for each task leads to the creation of a large number of threads. These threads need memory, and the CPU spends too much time switching context when threads are swapped. Thread pools can reuse threads in an extremely efficient manner by keeping the threads alive and reusing them (thread pools are usually backed by queues).

Executors

There are several types of executors:
1.) SingleThreadExecutor: this executor has a single thread, so we can execute tasks in a sequential manner; all submitted tasks are executed by that one worker thread (a replacement thread is created only if it dies due to a failure).
2.) FixedThreadPool(n): this is how we can create a thread pool with n threads; usually n is the number of cores in the CPU. If there are more tasks than n, the waiting tasks are stored in a LinkedBlockingQueue data structure.

Executors

3.) CachedThreadPool: the number of threads is not bounded. If all the threads are busy executing tasks and a new task comes in, the pool creates and adds a new thread to the executor. If a thread remains idle for 60 seconds, it is removed. It is used for short parallel tasks.
4.) ScheduledExecutor: we can execute a given operation at regular intervals, or we can use this executor if we wish to delay a certain task.
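A minimal fixed-pool sketch tying the pieces together: tasks beyond the pool size queue up, and shutdown()/awaitTermination() drains them (the task and class names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedPoolDemo {
    public static int runTasks(int numTasks) throws InterruptedException {
        // A pool sized to the machine's core count, as suggested above.
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < numTasks; i++) {
            pool.execute(completed::incrementAndGet); // excess tasks wait in the queue
        }
        pool.shutdown();                              // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // let queued tasks drain
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(100)); // 100
    }
}
```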

Runnable and Callable Interfaces

Runnable and Callable both run on a different thread than the calling thread, but Callable can return a value and Runnable cannot.

RUNNABLE: a so-called fire-and-forget action; we execute a given operation in the run() method, without a return value.
CALLABLE<T>: we use the Callable interface's call() method if we want to return a value from the given thread. The Callable interface itself will not deliver the value: this is why we need a Future<T> object. The calling thread is blocked until the call() method has executed and Future<T> returns the result.

Runnable and Callable Interfaces

The ExecutorService can handle both interfaces (Runnable and Callable).

executorService.execute(): this method executes a Runnable, so there is no return value (the void run() method).
executorService.submit(): this method can handle Runnable interfaces as well as Callable interfaces; it returns a Future<T>, and we can get the T value with get() on the future object.
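The Callable/Future flow above can be sketched end to end (the summing task is illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static int sumOnWorker() throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Callable<Integer> task = () -> {
                int sum = 0;
                for (int i = 1; i <= 10; i++) sum += i;
                return sum;                  // Callable returns a value...
            };
            Future<Integer> future = pool.submit(task);
            return future.get();             // ...delivered through the Future;
                                             // get() blocks until call() finishes
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOnWorker()); // 55
    }
}
```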