Data Structure and Algorithm (Advanced Algorithm Strategies)

shahghanikhan 43 views 25 slides May 04, 2024

About This Presentation

data structure and algorithm


Slide Content

Advanced Algorithm Analysis - Spring 2016 (CSC-606), Lecture 04: Advanced Algorithm Strategies. Engr. Durr-e-Nayab

Agenda for Today: To understand the notion of algorithm strategies and to study the approaches of the various algorithm paradigms: brute force; greedy algorithms; divide-and-conquer and decrease-and-conquer; dynamic programming; transform-and-conquer; backtracking and branch-and-bound; genetic algorithms; conclusion. Refer to the book and other course materials for more details.

Algorithm Design Paradigms: General approaches to the construction of efficient solutions to problems. Such methods are of interest because they provide templates suited to solving a broad range of diverse problems; they can be translated into common control and data structures provided by most high-level languages; and the temporal and spatial requirements of the resulting algorithms can be precisely analysed. Although more than one technique may be applicable to a specific problem, it is often the case that an algorithm constructed by one approach is clearly superior to equivalent solutions built using alternative techniques. (This and the following slides cite Levitin, “Introduction to the Design & Analysis of Algorithms,” 3rd ed., Ch. 1, ©2012 Pearson Education, Inc., Upper Saddle River, NJ. All rights reserved.)

Algorithm Design Strategies: brute force; divide and conquer; decrease and conquer; transform and conquer; space and time tradeoffs; greedy approach; dynamic programming; iterative improvement; backtracking; branch and bound.

Brute Force Algorithm: Brute force (a.k.a. exhaustive search, or generate and test) is a straightforward approach to solving a problem based on the problem's statement and the definitions of the concepts involved. It is considered one of the easiest approaches to apply and is useful for solving small-size instances of a problem. It is a very general problem-solving technique that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.

Brute Force Algorithm: A brute-force search is simple to implement and will always find a solution if one exists. A brute-force algorithm to find the divisors of a natural number n would enumerate all integers from 1 to n and check whether each of them divides n without remainder. A brute-force approach for the Eight Queens Puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard and, for each arrangement, check whether any (queen) piece can attack any other.
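As an illustration of the divisor example above, here is a minimal Python sketch (not taken from the slides) that enumerates every integer from 1 to n and keeps those that divide n without remainder:

def divisors_brute_force(n):
    """Return all divisors of a natural number n by trying every candidate."""
    result = []
    for candidate in range(1, n + 1):   # enumerate candidates 1..n
        if n % candidate == 0:          # test: does candidate divide n exactly?
            result.append(candidate)
    return result

print(divisors_brute_force(36))  # [1, 2, 3, 4, 6, 9, 12, 18, 36]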

Brute Force Algorithm: In order to apply brute-force search to a specific class of problems, one must implement four procedures: first, next, valid, and output. These procedures take as a parameter the data P for the particular instance of the problem to be solved, and should do the following: first(P) generates a first candidate solution for P; next(P, c) generates the next candidate for P after the current candidate c; valid(P, c) checks whether candidate c is a solution for P; output(P, c) uses the solution c of P as appropriate to the application.
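A generic driver built on these four procedures might look like the following Python sketch; the procedure names mirror the slide, while the loop structure and the use of None to signal "no more candidates" are assumptions made for illustration:

def brute_force_search(P, first, next, valid, output):
    """Enumerate candidates first(P), next(P, c), ... and report every valid one."""
    c = first(P)                 # first candidate for instance P
    while c is not None:         # assume next() returns None when candidates run out
        if valid(P, c):          # does candidate c satisfy the problem statement?
            output(P, c)
        c = next(P, c)           # move on to the following candidate

For example, plugging in first(P) = 1, next(P, c) = c + 1 (or None once c reaches P), valid(P, c) = (P % c == 0), and an output procedure that prints c would reproduce the divisor enumeration shown earlier.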

Brute Force Algorithm (this slide contains an illustration only).

Brute Force Algorithm (Limitations): Its cost is proportional to the number of candidate solutions, which in many practical problems tends to grow very quickly as the size of the problem increases. Therefore, brute-force search is typically used when the problem size is limited, or when simplicity of implementation is more important than speed. The main disadvantage of the brute-force method is that, for many real-world problems, the number of natural candidates is prohibitively large.

Examples: Some examples of brute-force algorithms are: computing a^n (a > 0, n a nonnegative integer) by multiplying a*a*...*a; computing n!; selection sort; bubble sort; sequential search; exhaustive search (Traveling Salesman Problem, Knapsack problem).
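Two of these examples, sketched in Python for concreteness (straightforward implementations, not taken from the slides):

def power_brute_force(a, n):
    """Compute a**n by multiplying by a, n times in a row."""
    result = 1
    for _ in range(n):
        result *= a
    return result

def sequential_search(items, key):
    """Scan the list from left to right; return the index of key, or -1 if absent."""
    for i, item in enumerate(items):
        if item == key:
            return i
    return -1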

Greedy Algorithms: The strategy is "take what you can get now." A greedy algorithm is an algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In greedy algorithms the solution is constructed through a sequence of steps, each expanding the partially constructed solution obtained so far. At each step the choice must be locally optimal; this is the central point of the technique. Greedy techniques are mainly used to solve optimization problems. They do not always give the best solution.

Greedy Algorithms Demonstration: Making change - determine the minimum number of coins to give while making change. These are the steps a human would take to emulate a greedy algorithm to represent 36 cents using only coins with values {1, 5, 10, 20}: at each step, the coin of the highest value no greater than the remaining change owed is the local optimum.
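A small Python sketch of this greedy change-making procedure (an illustration consistent with the slide, not its original code):

def greedy_change(amount, denominations):
    """Repeatedly take the largest coin no greater than the remaining amount."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins

print(greedy_change(36, [1, 5, 10, 20]))  # [20, 10, 5, 1] -> 4 coins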

Greedy Algorithms Limitations: We make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on the choices made so far, but not on future choices or on the solutions to all the subproblems. It iteratively makes one greedy choice after another, reducing each given problem to a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution.

Greedy Algorithms Examples: minimal spanning tree; shortest distance in graphs; the greedy algorithm for the Knapsack problem; the coin exchange problem; Huffman trees for optimal encoding. It has been proven that the greedy algorithms for the minimal spanning tree, the shortest paths, and Huffman codes always give the optimal solution.

Divide and Conquer: These are methods of designing algorithms that (informally) proceed as follows: given an instance of the problem to be solved, split it into several smaller sub-instances (of the same problem), independently solve each of the sub-instances, and then combine the sub-instance solutions so as to yield a solution for the original instance.
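Merge sort, listed among the examples later in the deck, follows this pattern exactly; a compact Python sketch for illustration:

def merge_sort(a):
    """Split the list, sort each half recursively, then merge the sorted halves."""
    if len(a) <= 1:                                           # minimal instance: already sorted
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])    # divide and solve sub-instances
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):                   # combine the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]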

Decrease and Conquer: With the divide-and-conquer method the size of the problem instance is reduced by a factor (e.g. half the input size), whereas in the decrease-and-conquer method the size is reduced by a constant. Sometimes the name "divide and conquer" is also applied to algorithms that reduce each problem to only one subproblem, such as the binary search algorithm for finding a record in a sorted list. These algorithms can be implemented more efficiently than general divide-and-conquer algorithms.
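Binary search, mentioned above, reduces the instance to a single half-sized subproblem at each step; a standard iterative Python sketch:

def binary_search(sorted_list, key):
    """Return the index of key in sorted_list, or -1 if it is absent."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == key:
            return mid
        elif sorted_list[mid] < key:
            lo = mid + 1        # discard the left half
        else:
            hi = mid - 1        # discard the right half
    return -1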

Divide/Decrease and Conquer: Hence, the name "divide and conquer" should be used only when each problem may generate two or more subproblems, while the name "decrease and conquer" has been proposed instead for the single-subproblem class.

Advantages of D & C: solving difficult problems; algorithm efficiency; parallelism; memory access; round-off control.

Issues with D & C: The issues here are two: how to solve the sub-instances, and how to combine the obtained solutions. The answer to the second question depends on the nature of the problem. In most cases the answer to the first question is: using the same method. Here another very important issue arises: when to stop decreasing the problem instance, i.e. what is the minimal instance of the given problem and how to solve it. When we use recursion, the solution of the minimal instance is called the "terminating condition" (the base case).

Examples: Divide and conquer: computing a^n (a > 0, n a nonnegative integer) by recursion; binary search in a sorted array (recursion); the merge sort algorithm; the quicksort algorithm (recursion); the algorithm for solving the fake-coin problem (recursion). Decrease and conquer: insertion sort; topological sorting; binary tree traversals: inorder, preorder and postorder (recursion); computing the length of the longest path in a binary tree (recursion); computing Fibonacci numbers (recursion); reversing a queue (recursion); Warshall's algorithm (recursion).
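The first example, computing a^n by recursion, can be done by halving the exponent and reusing the half-sized result; a Python sketch for illustration:

def power(a, n):
    """Compute a**n with O(log n) multiplications by halving the exponent."""
    if n == 0:
        return 1                      # minimal instance: a**0 == 1
    half = power(a, n // 2)           # solve the half-sized subproblem once
    return half * half if n % 2 == 0 else half * half * a

print(power(2, 10))  # 1024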

Dynamic Programming: One disadvantage of using divide-and-conquer is that the process of recursively solving separate sub-instances can result in the same computations being performed repeatedly, since identical sub-instances may arise. The idea behind dynamic programming is to avoid this pathology by obviating the requirement to calculate the same quantity twice. The method usually accomplishes this by maintaining a table of sub-instance results.
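For instance, the naive recursive Fibonacci computation re-solves the same sub-instances over and over; caching results in a table removes the repetition (a Python sketch, not from the slides):

def fib_memo(n, table=None):
    """Fibonacci with a table of sub-instance results (memoization)."""
    if table is None:
        table = {}
    if n in table:                    # already computed: reuse instead of recomputing
        return table[n]
    table[n] = n if n < 2 else fib_memo(n - 1, table) + fib_memo(n - 2, table)
    return table[n]

print(fib_memo(40))  # 102334155, with about 40 additions instead of hundreds of millions of recursive calls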

Dynamic Programming: DP is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions, ideally in a memory-based data structure. DP is a bottom-up technique in which the smallest sub-instances are explicitly solved first and their results are used to construct solutions to progressively larger sub-instances. In contrast, divide-and-conquer is a top-down technique which logically progresses from the initial instance down to the smallest sub-instances via intermediate sub-instances.

Dynamic Programming: DP algorithms are often used for optimization. A DP algorithm examines the previously solved subproblems and combines their solutions to give the best solution for the given problem. In comparison, a greedy algorithm treats the solution as some sequence of steps and picks the locally optimal choice at each step. Examples: Fibonacci numbers computed by iteration; Warshall's algorithm implemented by iteration.
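The Fibonacci example in its bottom-up, iterative form, solving the smallest instances first (a Python sketch for illustration):

def fib_bottom_up(n):
    """Build the Fibonacci sequence from the smallest instances upward."""
    if n < 2:
        return n
    prev, curr = 0, 1             # fib(0), fib(1)
    for _ in range(2, n + 1):     # each step uses only already-solved instances
        prev, curr = curr, prev + curr
    return curr

print(fib_bottom_up(40))  # 102334155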

Dynamic Programming Demonstration: For example, consider the coin-change problem of finding the minimum number of coins of given denominations needed to make a given amount. A DP algorithm would find an optimal solution for each amount by first finding an optimal solution for each smaller amount and then using these solutions to construct an optimal solution for the larger amount. In contrast, a greedy algorithm might treat the solution as a sequence of coins, starting from the given amount and at each step subtracting the largest coin denomination no greater than the current remaining amount. Hence, if the coin denominations are 1, 4, 5, 15, 20 and the given amount is 23, the greedy algorithm gives the non-optimal solution 20+1+1+1, while the optimal solution is 15+4+4.
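A bottom-up DP coin-change sketch in Python reproducing this example (an illustration, not the slides' code):

def min_coins(amount, denominations):
    """Minimum number of coins for every amount 0..amount, built bottom-up."""
    INF = float("inf")
    best = [0] + [INF] * amount            # best[x] = fewest coins needed to make x
    for x in range(1, amount + 1):
        for coin in denominations:
            if coin <= x and best[x - coin] + 1 < best[x]:
                best[x] = best[x - coin] + 1
    return best[amount]

print(min_coins(23, [1, 4, 5, 15, 20]))    # 3 coins (15 + 4 + 4); greedy would use 4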

Summary