divide and conquer ppt in Design and Analysis of Algorithms

dimpuk1 24 views 24 slides Aug 28, 2024


Slide Content

4-1 Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin, "Introduction to the Design & Analysis of Algorithms," 2nd ed., Ch. 4
Divide-and-Conquer
The best-known algorithm design strategy:
1. Divide an instance of the problem into two or more smaller instances.
2. Solve the smaller instances recursively.
3. Obtain the solution to the original (larger) instance by combining these solutions.
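The deck gives no code for the general strategy; as a minimal illustration (mine, not from the slides), here is the three-step scheme applied to summing an array:

```python
def dc_sum(a):
    """Divide-and-conquer sum, illustrating the three steps above."""
    n = len(a)
    if n == 1:              # instance small enough to solve directly
        return a[0]
    mid = n // 2
    left = dc_sum(a[:mid])  # steps 1-2: divide, solve smaller instances recursively
    right = dc_sum(a[mid:])
    return left + right     # step 3: combine the subsolutions

print(dc_sum([8, 3, 2, 9, 7, 1, 5, 4]))  # 39
```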

Divide-and-Conquer Technique (cont.)
A problem of size n (an instance) is divided into subproblem 1 of size n/2 and subproblem 2 of size n/2; a solution to each subproblem is found, and the two solutions are combined into a solution to the original problem.
It generally leads to a recursive algorithm!

Divide-and-Conquer Examples
• Binary search
• Sorting: merge sort and quick sort
• Binary tree traversals
• Multiplication of large integers
• Finding Maximum and Minimum
• Matrix multiplication: Strassen's algorithm

Merge sort
•Split array A[0..n-1] into about equal halves and
make copies of each half in arrays B and C
•Sort arrays B and C recursively
•Merge sorted arrays B and C into array A as
follows:
–Repeat the following until no elements remain in one of the
arrays:
•compare the first elements in the remaining unprocessed
portions of the arrays
•copy the smaller of the two into A, while incrementing the
index indicating the unprocessed portion of that array
–Once all elements in one of the arrays are processed, copy the
remaining unprocessed elements from the other array into A.

Pseudocode of Mergesort
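The pseudocode image from this slide is not reproduced in the transcription. Below is a Python sketch following the description on the previous slide (function and variable names are mine, not Levitin's):

```python
def mergesort(A):
    """Sort A in place: copy its halves into B and C, sort each
    recursively, and merge them back into A."""
    if len(A) <= 1:
        return
    mid = len(A) // 2
    B, C = A[:mid], A[mid:]   # make copies of each half
    mergesort(B)
    mergesort(C)
    # merge the sorted halves B and C back into A
    i = j = k = 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:
            A[k] = B[i]; i += 1
        else:
            A[k] = C[j]; j += 1
        k += 1
    # copy the remaining unprocessed elements from the non-exhausted array
    A[k:] = B[i:] if i < len(B) else C[j:]

a = [8, 3, 2, 9, 7, 1, 5, 4]
mergesort(a)
print(a)  # [1, 2, 3, 4, 5, 7, 8, 9]
```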

Pseudocode of Merge
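This slide's pseudocode image is also missing. A Python sketch of the merge step alone, following the bullet points on the "Merge sort" slide (names are mine):

```python
def merge(B, C, A):
    """Merge sorted arrays B and C into A: repeatedly compare the first
    elements of the unprocessed portions, copy the smaller into A, then
    append whatever remains in the non-exhausted array."""
    i = j = k = 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:
            A[k] = B[i]; i += 1
        else:
            A[k] = C[j]; j += 1
        k += 1
    A[k:] = B[i:] if i < len(B) else C[j:]

A = [0] * 8
merge([2, 3, 8, 9], [1, 4, 5, 7], A)
print(A)  # [1, 2, 3, 4, 5, 7, 8, 9]
```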

Mergesort Example
8 3 2 9 7 1 5 4
8 3 2 9 7 1 5 4
8 3 2 9 7 1 5 4
8 3 2 9 7 1 5 4
3 8 2 9 1 7 4 5
2 3 8 9 1 4 5 7
1 2 3 4 5 7 8 9
The non-recursive version of Mergesort starts from merging single elements into sorted pairs.

Analysis of Mergesort
•All cases have the same efficiency: Θ(n log n)
T(n) = 2T(n/2) + Θ(n) for n > 1, T(1) = 1

Merge sort Time Complexity
Best Case Complexity – The best case occurs when no rearrangement is required, i.e., when the given array is already sorted. The best-case time complexity of merge sort is O(n log n).
Average Case Complexity – This case occurs when the elements of the array are in jumbled order, neither properly ascending nor properly descending. The average-case time complexity of merge sort is O(n log n).
Worst Case Complexity – The worst case occurs when the array elements must be sorted in reverse order: suppose you have to sort the elements in ascending order, but the array is given in descending order. The time complexity in this case is still O(n log n).
Space Complexity
The space complexity of merge sort is O(n), because merging requires auxiliary arrays holding copies of the two halves.

Quick sort
•Select a pivot (partitioning element) – here, the first element
•Rearrange the list so that all the elements in the first s positions are smaller than or equal to the pivot and all the elements in the remaining n−s positions are larger than or equal to the pivot (see next slide for an algorithm)
•Exchange the pivot with the last element in the first (i.e., ≤) subarray – the pivot is now in its final position
•Sort the two subarrays recursively
After partitioning around pivot p: A[i] ≤ p for positions before the pivot, A[i] ≥ p for positions after it.

Partitioning Algorithm
Time complexity: Θ(r−l) comparisons
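The partitioning pseudocode image is missing from the transcription. Below is a Python sketch of Hoare-style partitioning with the first element as pivot, in the spirit of the description above, plus a quicksort driver (names are mine):

```python
def hoare_partition(A, l, r):
    """Partition A[l..r] around pivot p = A[l]: scan i rightward past
    elements < p and j leftward past elements > p, swapping out-of-place
    pairs; finally place the pivot at its final position j."""
    p = A[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and A[i] < p:
            i += 1
        j -= 1
        while A[j] > p:
            j -= 1
        if i >= j:
            break
        A[i], A[j] = A[j], A[i]
    A[l], A[j] = A[j], A[l]      # pivot into its final position
    return j

def quicksort(A, l=0, r=None):
    if r is None:
        r = len(A) - 1
    if l < r:
        s = hoare_partition(A, l, r)
        quicksort(A, l, s - 1)   # sort the two subarrays recursively
        quicksort(A, s + 1, r)

a = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(a)
print(a)  # [1, 2, 3, 4, 5, 7, 8, 9]
```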

Quicksort Examples
5 3 1 9 8 2 4 7

Time Complexity
Best Case Time Complexity – In quicksort, the best case occurs when the pivot element is the middle element or near the middle element. The best-case time complexity of quicksort is O(n log n).
Average Case Time Complexity – The average case occurs when the elements of the array are in jumbled order, neither properly ascending nor properly descending. In this case the time complexity of quicksort is O(n log n).
Worst Case Time Complexity – In quicksort, the worst case occurs when the pivot element is either the greatest or the smallest element of the array. For example, if the pivot is always the last element, the worst case occurs when the given array is already sorted in ascending or descending order. The worst-case time complexity of quicksort is O(n²).
Space Complexity
The space complexity of quicksort (for the recursion stack) is O(log n) on average and O(n) in the worst case.
 

Analysis of Quicksort
•If all the splits happen in the middle of the corresponding subarrays, we have the best case:
T_best(n) = 2T_best(n/2) + Θ(n) for n > 1, T_best(1) = 0
i.e., Θ(n log n)

Binary Search
Very efficient algorithm for searching in a sorted array:
Compare search key K with the middle element A[m] of A[0] . . . A[m] . . . A[n−1].
If K = A[m], stop (successful search); otherwise, continue searching by the same method in A[0..m−1] if K < A[m] and in A[m+1..n−1] if K > A[m].

l ← 0; r ← n−1
while l ≤ r do
    m ← ⌊(l+r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m−1
    else l ← m+1
return −1
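The pseudocode above transcribes almost line for line into Python (a sketch; −1 signals an unsuccessful search, as in the pseudocode):

```python
def binary_search(A, K):
    """Iterative binary search in a sorted array A: return an index
    of K, or -1 if K is not present."""
    l, r = 0, len(A) - 1
    while l <= r:
        m = (l + r) // 2        # middle index, floor of (l+r)/2
        if K == A[m]:
            return m            # successful search
        elif K < A[m]:
            r = m - 1           # continue in A[l..m-1]
        else:
            l = m + 1           # continue in A[m+1..r]
    return -1                   # unsuccessful search

A = [1, 2, 3, 4, 5, 7, 8, 9]
print(binary_search(A, 7))  # 5
print(binary_search(A, 6))  # -1
```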

Analysis of Binary Search
•Count the number of times the search key is compared with an element of the array (known as three-way comparison)
–worst-case recurrence: T(n) = T(n/2) + 1, T(1) = 1
solution: T(n) = log₂ n + 1 ∈ Θ(log n)
•Optimal for searching a sorted array
•Limitation: the array must be sorted

Conventional Matrix Multiplication
•Brute-force algorithm
c00 c01     a00 a01     b00 b01
c10 c11  =  a10 a11  *  b10 b11

c00 = a00*b00 + a01*b10    c01 = a00*b01 + a01*b11
c10 = a10*b00 + a11*b10    c11 = a10*b01 + a11*b11

8 multiplications
4 additions
Efficiency class in general: Θ(n³)
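A direct transcription of the brute-force definition (an illustrative sketch, not from the slides):

```python
def matmul(A, B):
    """Brute-force n x n matrix product: Theta(n^3) scalar multiplications."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # c_ij = sum_k a_ik * b_kj
    return C

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```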

Strassen’s Matrix Multiplication
The product of two matrices can be computed as follows:

C00 C01     A00 A01     B00 B01
C10 C11  =  A10 A11  *  B10 B11

where
C00 = M1 + M4 − M5 + M7    C01 = M3 + M5
C10 = M2 + M4              C11 = M1 + M3 − M2 + M6

Formulas for Strassen’s Algorithm
M1 = (A00 + A11) × (B00 + B11)
M2 = (A10 + A11) × B00
M3 = A00 × (B01 − B11)
M4 = A11 × (B10 − B00)
M5 = (A00 + A01) × B11
M6 = (A10 − A00) × (B00 + B01)
M7 = (A01 − A11) × (B10 + B11)
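The seven formulas can be checked directly for a 2×2 case (a sketch; in the full n×n algorithm these scalars would be n/2 × n/2 blocks combined recursively):

```python
def strassen_2x2(A, B):
    """Strassen's seven products M1..M7 for a 2x2 matrix product,
    exactly as in the formulas above."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    # combine: only additions/subtractions, no further multiplications
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 + m3 - m2 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```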

Analysis of Strassen’s Algorithm
Consider the number of multiplications:
M(n) = 7M(n/2), M(1) = 1
Solution: M(n) = 7^(log₂ n) = n^(log₂ 7) ≈ n^2.807, vs. n³ for the brute-force algorithm.

Finding Maximum and Minimum
To find the maximum and minimum numbers in a given array A[] of size n, the following algorithm can be used.
Divide and Conquer Approach:
In this approach, the array is divided into two halves. Then, using recursion, the maximum and minimum of each half are found. Finally, return the maximum of the two maxima and the minimum of the two minima.
In this problem, the number of elements in the subarray is y − x + 1, where y ≥ x. MaxMin(x, y) will return the maximum and minimum values of the array A[x...y].
Algorithm: maxmin(x, y)
if y − x ≤ 1 then
    return (max(A[x], A[y]), min(A[x], A[y]))
else
    (max1, min1) := maxmin(x, ⌊(x + y)/2⌋)
    (max2, min2) := maxmin(⌊(x + y)/2⌋ + 1, y)
    return (max(max1, max2), min(min1, min2))
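The pseudocode above translates directly to Python (a sketch; maxmin here takes the array as an explicit parameter rather than a global):

```python
def maxmin(A, x, y):
    """Return (max, min) of A[x..y] by divide and conquer,
    following the algorithm above."""
    if y - x <= 1:               # one or two elements: solve directly
        return max(A[x], A[y]), min(A[x], A[y])
    mid = (x + y) // 2
    max1, min1 = maxmin(A, x, mid)        # left half A[x..mid]
    max2, min2 = maxmin(A, mid + 1, y)    # right half A[mid+1..y]
    return max(max1, max2), min(min1, min2)

A = [8, 3, 2, 9, 7, 1, 5, 4]
print(maxmin(A, 0, len(A) - 1))  # (9, 1)
```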
Analysis
If T(n) represents the number of comparisons, then the recurrence relation can be represented as
T(n) = 2T(n/2) + 2 for n > 2, with T(2) = 1,
which solves to T(n) = 3n/2 − 2 comparisons when n is a power of 2.