Greedy Algorithms — Design and Analysis of Algorithms

JavedKhan524377 252 views 20 slides May 02, 2024

About This Presentation

Design and analysis of algorithms.


Slide Content

Greedy Algorithm

Optimization and Optimal Value — Any problem that can be solved in more than one way, where each solution has an associated cost or value, is called an optimization problem. For instance, when travelling from one point to another we may have several routes; the route that reaches the destination at the lowest possible cost yields the optimal value.

Commonly used optimization approaches: the Greedy Algorithm and Dynamic Programming.

Greedy Algorithm — Before defining it, let's look at a scenario.

Since she has no pockets in her trousers, she wants as few coins as possible. The problem to be solved here is to give her 85 dollars using the fewest possible coins.

Available coins with Alex — Bags full of 50-, 20-, 10-, and 5-dollar coins only.

What are the available options? We have all of these possible combinations, and more.

Optimal value at each step — The greedy algorithm selects the minimum or maximum (in this case, the largest coin that still fits) at each step.
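The coin scenario above can be sketched in a few lines of Python. This is a minimal illustration, not code from the slides; the function name and structure are my own:

```python
def greedy_coins(amount, denominations):
    """Repeatedly pick the largest coin that still fits until the amount is paid."""
    coins = []
    for coin in sorted(denominations, reverse=True):  # try biggest coins first
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

# Alex's problem: pay 85 dollars with 50-, 20-, 10-, and 5-dollar coins.
print(greedy_coins(85, [50, 20, 10, 5]))  # → [50, 20, 10, 5], i.e. 4 coins
```

For these denominations the greedy choice at each step happens to produce the fewest coins overall.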

What is a Greedy Algorithm? It is an optimization-problem-solving method that selects the locally optimal choice (i.e., the maximum or minimum among the available options) at each step and hopes that this achieves the global optimum. Greedy algorithms work step by step and always choose the step that provides an immediate profit or benefit. They pick the "locally optimal solution" without thinking about future consequences.

What is a Greedy Algorithm? Greedy algorithms may not always lead to the globally optimal solution, because they do not consider the entire input. The choice made by the greedy approach ignores future data and choices. In some cases, making the decision that looks right at the moment does give the best overall solution (greedy works); in other cases, it doesn't.

Greedy Example

Does the Greedy Algorithm always find the global optimum? No.

Dynamic Programming — Most optimization algorithms (e.g., greedy) work on a simple principle: solve the components of the main problem (the subproblems), then combine these sub-solutions into an overall solution. Many subproblems are created and solved repeatedly. To reduce the number of computations, the Dynamic Programming approach aims to solve each subproblem just once. Once the solution to a given subproblem has been computed, it is saved, or "memoized," making it easy to look up the next time the same solution is required.

Dynamic Programming — In daily life we use the approach of dynamic programming without noticing it. We ask a kid to count some sticks; he counts them and correctly answers "4".

We actually applied dynamic programming — We add another stick and ask again; he doesn't count them all again and immediately answers "5". The kid memorized the previous answer, so there was no need to count again.

1. Optimal substructure — If the overall optimal solution of a problem can be determined from the optimal solutions of its subproblems, the problem is said to have the optimal-substructure property. We know that the nth Fibonacci number, Fib(n), is nothing more than the sum of the two preceding Fibonacci numbers: Fib(n) = Fib(n-1) + Fib(n-2). This equation shows that a problem of size 'n' has been divided into subproblems of sizes 'n-1' and 'n-2'.

2. Overlapping subproblems — If the same subproblem occurs more than once while solving the problem, the problem is said to have overlapping subproblems.

An example with Fibonacci numbers — Consider evaluating Fib(5). Fib(5) is calculated as the sum of Fib(4) and Fib(3), Fib(4) as the sum of Fib(3) and Fib(2), and so on. Fib(3), Fib(2), Fib(1), and Fib(0) are all evaluated repeatedly: these are precisely the overlapping subproblems.
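The Fib(5) discussion above can be made concrete with a small sketch (my own code, using Python's standard `functools.lru_cache` for the memoization the slides describe):

```python
from functools import lru_cache

calls = 0  # counts how often the naive version re-enters the recursion

def fib_naive(n):
    """Plain recursion: the same subproblems are recomputed over and over."""
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized recursion: each Fib(k) is computed once, then looked up."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(5))                  # → 5, after 15 recursive calls
print(calls)                         # → 15
print(fib_memo(5))                   # → 5
print(fib_memo.cache_info().misses)  # → 6: only Fib(0)..Fib(5) were computed
```

Even at n = 5 the naive version makes 15 calls for just 6 distinct subproblems; the gap grows exponentially with n.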

Greedy vs Dynamic Programming — Greedy: no guarantee of finding the optimal solution; efficient in terms of memory; fast. Dynamic Programming: the optimal solution is guaranteed, as it generally considers all possible cases and then chooses the best; it stores previous results, so it consumes memory; comparatively slow.
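To illustrate the "optimal solution is guaranteed" side of the comparison, here is a sketch of a bottom-up dynamic-programming solution to the coin-change problem (my own code, not from the slides). Unlike greedy, it considers every coin at every sub-amount:

```python
def min_coins(amount, denominations):
    """Bottom-up DP: best[a] holds the fewest coins that sum to a."""
    INF = float("inf")
    best = [0] + [INF] * amount  # best[0] = 0 coins; everything else unknown
    for a in range(1, amount + 1):
        for coin in denominations:
            # try ending the combination for 'a' with this coin
            if coin <= a and best[a - coin] + 1 < best[a]:
                best[a] = best[a - coin] + 1
    return best[amount]

# The case where greedy fails: DP finds the true optimum (3 + 3).
print(min_coins(6, [1, 3, 4]))  # → 2
```

The table `best` is exactly the "stored previous results" the comparison mentions: each sub-amount is solved once and reused, at the cost of O(amount) memory.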

Thank you