daa-unit-3-greedy method

hodcsencet 6,607 views 84 slides Sep 21, 2021

About This Presentation

DAA


Slide Content

UNIT - 3 Greedy Method

Among all algorithmic approaches, the greedy method is the simplest and most straightforward. Decisions are made on the basis of the information currently available, without worrying about the effect of the current decision on future choices. Greedy algorithms build a solution piece by piece, choosing each next piece so that it gives an immediate benefit. The approach never reconsiders choices made previously.

This approach is mainly used to solve optimization problems. The greedy method is easy to implement and quite efficient in most cases. A greedy algorithm is an algorithmic paradigm based on a heuristic that makes the locally optimal choice at each step, in the hope of finding a globally optimal solution. For many problems it does not produce an optimal solution, though it gives an approximate (near-optimal) solution in reasonable time.

Control abstraction for the Greedy Method

Algorithm GreedyMethod(a, n)
// a is an array of n inputs
{
    Solution := Ø;
    for i := 1 to n do
    {
        s := Select(a);
        if Feasible(Solution, s) then
            Solution := Union(Solution, s);
        else
            Reject();  // if the solution is not feasible, reject it
    }
    return Solution;
}
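The control abstraction above can be sketched in executable form. This is a minimal illustration, not part of the original slides: `select`, `feasible`, and the candidate list are placeholders that a concrete problem must supply, and the toy instance (greedy coin change for a target of 48) is an assumption chosen just to make the skeleton runnable.

```python
def greedy_method(candidates, feasible, select):
    """Generic greedy skeleton: repeatedly pick the locally best candidate
    and keep it only if the partial solution stays feasible."""
    solution = []
    remaining = list(candidates)
    while remaining:
        s = select(remaining)        # Select: pick the locally best candidate
        remaining.remove(s)
        if feasible(solution, s):    # Feasible: keep it only if still valid
            solution.append(s)       # Union: add it to the partial solution
        # else: reject s and move on (it is never reconsidered)
    return solution

# Toy instance (assumed for illustration): make change for 48
# using coins {25, 10, 5, 1}, always taking the largest coin that fits.
target = 48

def coin_feasible(solution, s):
    return sum(solution) + s <= target

coins = [25, 10, 5, 1] * 10          # plenty of each denomination
result = greedy_method(coins, coin_feasible, select=max)
print(sum(result))  # 48
```

Note that the skeleton mirrors the pseudocode directly: one pass over the candidates, one feasibility test per candidate, and no backtracking.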

Three important activities:
1. A candidate is selected from the given input domain, i.e. s := Select(a).
2. The feasibility of the candidate is checked using Feasible(Solution, s), and all feasible solutions are obtained.
3. From the set of feasible solutions, the particular solution that minimizes or maximizes the given objective function is obtained. Such a solution is called an optimal solution.

Differentiate Greedy and Divide-and-Conquer

Greedy approach:
1. A sequence of decisions is made, building the solution step by step from locally best choices.
2. Follows a bottom-up technique.
3. Splits the input at every possible point rather than at one particular point.
4. Sub-problems are dependent on the main problem.
5. Time taken by this approach is less efficient compared with divide and conquer.
6. Space requirement is less compared with the divide-and-conquer approach.

Divide and conquer:
1. Divides the given problem into many sub-problems, finds the individual solutions, and combines them to get the solution for the main problem.
2. Follows a top-down technique.
3. Splits the input only at specific points (e.g. the midpoint); each sub-problem is independent.
4. Sub-problems are independent of the main problem.
5. Time taken by this approach is more efficient compared with greedy algorithms.
6. Space requirement is much higher compared with the greedy approach.

Areas of Application. The greedy approach is used to solve many problems, such as:
- Job sequencing with deadlines
- Knapsack problem
- Single-source shortest path problem
- Finding the minimal spanning tree in a graph (Prim's algorithm, Kruskal's algorithm)

JOB SEQUENCING WITH DEADLINES. In the job sequencing problem, the objective is to find a sequence of jobs that is completed within their deadlines and gives maximum profit. Consider a set of n given jobs, each associated with a deadline; profit is earned only if a job is completed by its deadline. These jobs need to be ordered in such a way that the profit is maximized. It may happen that not all of the given jobs can be completed within their deadlines.

Assume the deadline of the i-th job Ji is di and the profit received from this job is pi. The optimal solution of this algorithm is a feasible solution with maximum profit. Thus d(i) > 0 for 1 ≤ i ≤ n. Initially, the jobs are ordered according to profit, i.e. p1 ≥ p2 ≥ p3 ≥ … ≥ pn.
Ex: J = [j1, j2, j3, j4], P = [100, 27, 15, 10], D = [2, 1, 2, 1]
J = [j1, j2, j3, j4, j5], P = [20, 15, 10, 5, 1], D = [2, 2, 1, 3, 3]

Example: Let us consider the set of jobs shown in the following table. We have to find a sequence of jobs that will be completed within their deadlines and will give maximum profit. Each job is associated with a deadline and a profit.

Index:    1    2    3    4    5
Job:      J1   J2   J3   J4   J5
Deadline: 2    1    3    2    1
Profit:   60   100  20   40   20

Solution: To solve this problem, the given jobs are sorted by profit in descending order. After sorting, the jobs are ordered as shown in the following table. From this set of jobs, first we select J2, as it can be completed within its deadline and contributes maximum profit. Next, J1 is selected, as it gives more profit compared to J4.

Index:    1    2    3    4    5
Job:      J2   J1   J4   J3   J5
Deadline: 1    2    2    3    1
Profit:   100  60   40   20   20

In the next time slot, J4 cannot be selected as its deadline is over, so J3 is selected as it executes within its deadline. The job J5 is discarded as it cannot be executed within its deadline. Thus, the solution is the sequence of jobs (J2, J1, J3), which are executed within their deadlines and give maximum profit. The total profit of this sequence is 100 + 60 + 20 = 180.

Ex 2: n = 4, (p1, p2, p3, p4) = (100, 10, 15, 27), (d1, d2, d3, d4) = (2, 1, 2, 1). The maximum deadline is 2 units, hence a feasible solution set must have at most 2 jobs.

   Feasible set   Processing sequence   Value
1. (1, 2)         2, 1                  110
2. (1, 3)         1, 3 or 3, 1          115
3. (1, 4)         4, 1                  127
4. (2, 3)         2, 3                  25
5. (3, 4)         4, 3                  42
6. (1)            1                     100
7. (2)            2                     10
8. (3)            3                     15
9. (4)            4                     27

Solution 3 is optimal.

Algorithm JS(d, j, n)
// d[i] ≥ 1, 1 ≤ i ≤ n, are the deadlines.
// The jobs are ordered such that p[1] ≥ p[2] ≥ … ≥ p[n].
// j[i] is the i-th job in the optimal solution, 1 ≤ i ≤ k.
{
    d[0] := j[0] := 0;   // initialize
    j[1] := 1;           // include job 1, which has the highest profit
    k := 1;              // number of jobs considered so far
    for i := 2 to n do   // consider jobs in descending order of p[i]
    {
        // Find a position for i and check the feasibility of insertion.
        r := k;
        // Compare the present job with the jobs already in the array,
        // moving the insertion point upward in the list.
        while ((d[j[r]] > d[i]) and (d[j[r]] != r)) do
            r := r - 1;
        if ((d[j[r]] ≤ d[i]) and (d[i] > r)) then  // can the job be inserted?
        {
            // Insert i into j[].
            for q := k to (r + 1) step -1 do
                j[q + 1] := j[q];   // shift existing jobs downward
            j[r + 1] := i;          // insert the job in the sequence
            k := k + 1;             // increase the job count
        }
    }
    return k;
}

Time taken by this algorithm is O(n²).
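The same greedy idea can be sketched in Python. This is an illustrative version, not a transcription of the pseudocode above: it sorts jobs by profit and places each job in the latest free time slot at or before its deadline, which yields the same schedule. The job names and data follow the worked example (deadlines 2, 1, 3, 2, 1 and profits 60, 100, 20, 40, 20).

```python
def job_sequencing(jobs):
    """jobs: list of (name, deadline, profit).
    Returns (schedule in time order, total profit)."""
    # Consider jobs in descending order of profit.
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_d = max(d for _, d, _ in jobs)
    slots = [None] * (max_d + 1)          # slots[1..max_d]; index 0 unused
    for name, deadline, profit in jobs:
        # Place the job in the latest free slot at or before its deadline.
        for t in range(min(deadline, max_d), 0, -1):
            if slots[t] is None:
                slots[t] = (name, profit)
                break                      # job scheduled; otherwise rejected
    schedule = [s for s in slots[1:] if s is not None]
    return [n for n, _ in schedule], sum(p for _, p in schedule)

# Data from the worked example above.
jobs = [("J1", 2, 60), ("J2", 1, 100), ("J3", 3, 20),
        ("J4", 2, 40), ("J5", 1, 20)]
print(job_sequencing(jobs))  # (['J2', 'J1', 'J3'], 180)
```

The result matches the hand trace: jobs J2, J1, J3 are scheduled and J4, J5 are rejected, for a total profit of 180.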

Assignment: Let n = 5, (p1, …, p5) = (20, 15, 10, 5, 1), (d1, …, d5) = (2, 2, 1, 3, 3).

Knapsack problem. The greedy algorithm can be understood very well through the well-known knapsack problem. Although the same problem can be solved by other algorithmic approaches, the greedy approach solves the fractional knapsack problem reasonably well in good time.

In this problem we have a knapsack (bag) with a weight limit M. There are items i1, i2, …, in, each having a weight w1, w2, …, wn and an associated benefit (value or profit) p1, p2, …, pn. Our objective is to maximize the benefit such that the total weight inside the knapsack is at most M; we are also allowed to take a fractional part of an item.

There are n items in the store: the weight of the i-th item is wi > 0, the profit of the i-th item is pi > 0, and the capacity of the knapsack is M. In this version of the knapsack problem, items can be broken into smaller pieces: we take a fraction xi of the i-th item, with 0 ≤ xi ≤ 1. The i-th item then contributes the weight xi·wi to the total weight in the knapsack and the profit xi·pi to the total profit. Hence, the objective of the algorithm is to

maximize Σ (i = 1 to n) xi·pi

subject to the constraint

Σ (i = 1 to n) xi·wi ≤ M

It is clear that an optimal solution must fill the knapsack exactly; otherwise we could add a fraction of one of the remaining items and increase the overall profit. Thus, an optimal solution satisfies Σ (i = 1 to n) xi·wi = M. In this context, we first sort the items according to the value of pi/wi, so that p(i+1)/w(i+1) ≤ pi/wi. Here, x is an array storing the fraction taken of each item.

Example 1: Let us consider that the capacity of the knapsack is M = 60 and the list of provided items is shown in the following table:

Item:           A    B    C    D
Profit:         280  100  120  120
Weight:         40   10   20   24
Ratio (pi/wi):  7    10   6    5

The provided items are not sorted by pi/wi. After sorting by pi/wi, the items are as shown in the following table. First, all of B is chosen, as the weight of B is less than the capacity of the knapsack. Next, item A is chosen, as the available capacity of the knapsack is greater than the weight of A. Now, C is chosen as the next item; however, the whole item cannot be chosen, as the remaining capacity of the knapsack is less than the weight of C. Hence, a fraction of C (i.e. (60 − 50)/20) is chosen.

Item:           B    A    C    D
Profit:         100  280  120  120
Weight:         10   40   20   24
Ratio (pi/wi):  10   7    6    5

Now the capacity of the knapsack equals the total weight of the selected items, so no more items can be selected. The total weight of the selected items is 10 + 40 + 20 × (10/20) = 60, and the total profit is 100 + 280 + 120 × (10/20) = 380 + 60 = 440. This is the optimal solution; we cannot gain more profit by selecting any different combination of items.
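The steps above can be sketched as a short Python routine: sort the items by profit/weight ratio and fill the sack, taking a fraction of the first item that no longer fits. This is an illustrative sketch; the function name is an assumption, and the data is taken from Example 1 (M = 60).

```python
def fractional_knapsack(items, capacity):
    """items: list of (name, profit, weight).
    Returns (total_profit, list of (name, fraction taken))."""
    # Sort by profit-to-weight ratio, best first.
    items = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    profit, picks = 0.0, []
    for name, p, w in items:
        if capacity <= 0:
            break                          # knapsack is exactly full
        frac = min(1.0, capacity / w)      # whole item if it fits, else a fraction
        picks.append((name, frac))
        profit += p * frac
        capacity -= w * frac
    return profit, picks

# Data from Example 1.
items = [("A", 280, 40), ("B", 100, 10), ("C", 120, 20), ("D", 120, 24)]
print(fractional_knapsack(items, 60))
# (440.0, [('B', 1.0), ('A', 1.0), ('C', 0.5)])
```

This reproduces the hand computation: all of B and A, half of C, total profit 440.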

Example 2: knapsack size M = 16.

Item:    1   2   3   4   5   6
Weight:  6   10  3   5   1   3
Profit:  6   2   1   8   3   5

Minimum Cost Spanning Trees. A spanning tree is a subgraph of a graph G that covers all the vertices with the minimum possible number of edges. Hence, a spanning tree does not have cycles and cannot be disconnected.
- Note 1: Every connected, undirected graph G has at least one spanning tree.
- Note 2: A disconnected graph does not have any spanning tree.

Kruskal's Algorithm
Step 1 - Remove all self-loops and parallel edges.
Step 2 - Arrange all edges in increasing order of weight.
Step 3 - Add the edge with the least weight, provided it does not form a cycle.

Example: the first edge selected is (1, 6) with weight 10. The total minimum cost of the spanning tree is 10 + 12 + 14 + 16 + 22 + 25 = 99.

Time complexity: O(E log E) or O(E log V). Sorting the edges takes O(E log E) time. After sorting, we iterate through all edges and apply the find-union algorithm; the find and union operations take at most O(log V) time each. So the overall complexity is O(E log E + E log V). Since E can be at most O(V²), O(log V) and O(log E) are the same. Therefore, the overall time complexity is O(E log E) or O(E log V).
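The three Kruskal steps can be sketched with a simple union-find structure. This is an illustrative sketch: the 4-vertex example graph is an assumption, since the graph from the slides' figure is not reproduced here, and a basic union without rank is used for brevity.

```python
def kruskal(n, edges):
    """n: number of vertices (0..n-1); edges: list of (weight, u, v).
    Returns (total_cost, list of chosen (u, v, weight) edges)."""
    parent = list(range(n))

    def find(x):                      # find the root, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):     # Step 2: edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                  # Step 3: add only if no cycle forms
            parent[ru] = rv
            total += w
            chosen.append((u, v, w))
    return total, chosen

# Assumed 4-vertex example graph.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # (6, [(0, 1, 1), (1, 3, 2), (1, 2, 3)])
```

The sort dominates the running time, matching the O(E log E) bound above.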

Prim's Algorithm. Prim's algorithm treats the growing spanning tree as a single tree and keeps adding new nodes to it from the given graph.
- Step 1 - Remove all self-loops and parallel edges.
- Step 2 - Choose any arbitrary node as the root node.
- Step 3 - Examine the edges leaving the tree and select the one with the least cost.
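A heap-based sketch of these steps, growing the tree from an arbitrary root and always taking the cheapest edge leaving the tree. The adjacency list is an assumed example graph, not the one from the slides' figure.

```python
import heapq

def prim(adj, root=0):
    """adj: {u: [(weight, v), ...]} for an undirected graph.
    Returns the total cost of the spanning tree."""
    visited = {root}
    heap = list(adj[root])            # candidate edges leaving the tree
    heapq.heapify(heap)
    total = 0
    while heap and len(visited) < len(adj):
        w, v = heapq.heappop(heap)    # cheapest edge out of the tree
        if v in visited:
            continue                  # both endpoints already in the tree
        visited.add(v)
        total += w
        for edge in adj[v]:           # new candidate edges from v
            heapq.heappush(heap, edge)
    return total

# Assumed example: the same 4-vertex graph, listed in both directions.
adj = {0: [(1, 1), (4, 2)],
       1: [(1, 0), (3, 2), (2, 3)],
       2: [(4, 0), (3, 1), (5, 3)],
       3: [(2, 1), (5, 2)]}
print(prim(adj))  # 6
```

On a connected graph, Prim's and Kruskal's algorithms produce spanning trees of the same total cost.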


Kruskal's vs. Prim's. Prim's algorithm starts from a node, whereas Kruskal's algorithm starts from an edge. Prim's algorithm grows the tree from one node to the next, while Kruskal's algorithm selects edges in such a way that the position of an edge does not depend on the previous step. Prim's algorithm requires the graph to be connected, while Kruskal's can also work on disconnected graphs. Prim's algorithm has a time complexity of O(V²), and Kruskal's has a time complexity of O(E log V).

Single-source shortest path (Dijkstra's algorithm). For a given source node in the graph, the algorithm finds the shortest paths between that node and all other nodes. It can also be used to find the shortest path from a single node to a single destination node, by stopping the algorithm once the shortest path to the destination node has been determined.
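A short heap-based sketch of Dijkstra's algorithm for non-negative edge weights. The example graph is an assumption added for illustration, since the slides' example figure is not reproduced here.

```python
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, weight), ...]}.
    Returns a dict of shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                  # stale heap entry; already improved
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd          # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist

# Assumed example graph.
adj = {"s": [("a", 2), ("b", 5)],
       "a": [("b", 1), ("c", 4)],
       "b": [("c", 1)],
       "c": []}
print(dijkstra(adj, "s"))  # {'s': 0, 'a': 2, 'b': 3, 'c': 4}
```

Note that the greedy choice here (always settling the closest unsettled node) is correct only because edge weights are non-negative.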
