Analysis and Design of Algorithms: Module 1
BMS Institute of Technology and Management, Department of ISE

Agenda
- What is an algorithm?
- Algorithm specification
- Analysis framework
- Performance analysis: space complexity, time complexity
- Asymptotic notations: Big-Oh (O), Omega (Ω), Theta (Θ), and Little-oh (o)
- Mathematical analysis of non-recursive and recursive algorithms, with examples
- Important problem types: sorting, searching
- Introduction to NP-Hard and NP-Complete

Learning Outcomes of Module 1
Students will be able to:
- Represent a real-world problem in algorithmic notation.
- Analyze the performance of an algorithm.
- Recognize important problem types.
- Use fundamental data structures.

What is an algorithm?
"Algorithmics: The Spirit of Computing" (David Harel). Another reason for studying algorithms is their usefulness in developing analytical skills. Algorithms can be seen as special kinds of solutions to problems: not answers, but precisely defined procedures for getting answers.

What is an algorithm?
A recipe, process, method, technique, procedure, routine, ... with the following requirements:
- Finiteness: terminates after a finite number of steps
- Definiteness: rigorously and unambiguously specified
- Clearly specified input: valid inputs are clearly specified
- Clearly specified/expected output: can be proved to produce the correct output given a valid input
- Effectiveness: steps are sufficiently simple and basic

Algorithm
An algorithm can be represented in various forms, and must satisfy:
- Unambiguity/clearness
- Effectiveness
- Finiteness/termination
- Correctness

What is an algorithm? An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time. Schematically: a problem yields an algorithm; the algorithm, executed by a "computer" on an input, produces the output.

Why study algorithms?
- Theoretical importance: the core of computer science.
- Practical importance: a practitioner's toolkit of known algorithms, and a framework for designing and analyzing algorithms for new problems.

Euclid's Algorithm
Problem: find gcd(m, n), the greatest common divisor of two nonnegative integers m and n, not both zero.
Examples: gcd(60, 24) = 12, gcd(60, 0) = 60, gcd(0, 0) = ?
Euclid's algorithm is based on repeated application of the equality gcd(m, n) = gcd(n, m mod n) until the second number becomes 0, which makes the problem trivial.
Example: gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12

Two descriptions of Euclid's algorithm
Step 1: If n = 0, return m and stop; otherwise go to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.
Equivalently, in pseudocode:
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
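For concreteness, here is a minimal runnable sketch of the pseudocode above in Python (the function name euclid_gcd is our own choice):

def euclid_gcd(m, n):
    # Repeatedly apply gcd(m, n) = gcd(n, m mod n) until n becomes 0.
    while n != 0:
        m, n = n, m % n
    return m

print(euclid_gcd(60, 24))  # 12
print(euclid_gcd(60, 0))   # 60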

Other methods for computing gcd(m, n)
Consecutive integer checking algorithm:
Step 1: Assign the value of min{m, n} to t.
Step 2: Divide m by t. If the remainder is 0, go to Step 3; otherwise, go to Step 4.
Step 3: Divide n by t. If the remainder is 0, return t and stop; otherwise, go to Step 4.
Step 4: Decrease t by 1 and go to Step 2.
Is this slower than Euclid's algorithm? How much slower?
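A direct Python rendering of these steps might look as follows (a sketch assuming m and n are positive, since t = min{m, n} must be at least 1):

def gcd_consecutive(m, n):
    # Try candidates t = min(m, n), min(m, n) - 1, ... until one divides both.
    t = min(m, n)
    while t > 0:
        if m % t == 0 and n % t == 0:
            return t
        t -= 1

print(gcd_consecutive(60, 24))  # 12

In the worst case this performs on the order of min(m, n) iterations, whereas Euclid's algorithm needs only about log(min(m, n)) steps.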

Other methods for gcd(m, n) [cont.]
Middle-school procedure:
Step 1: Find the prime factorization of m.
Step 2: Find the prime factorization of n.
Step 3: Find all the common prime factors.
Step 4: Compute the product of all the common prime factors and return it as gcd(m, n).
Is this an algorithm? How efficient is it?
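As an illustration only (the slide's point is that the procedure as stated is not fully specified, e.g., how to find the factorizations), one possible Python sketch using trial division:

from collections import Counter

def prime_factors(x):
    # Trial division; returns the multiset of prime factors of x.
    factors = Counter()
    d = 2
    while d * d <= x:
        while x % d == 0:
            factors[d] += 1
            x //= d
        d += 1
    if x > 1:
        factors[x] += 1
    return factors

def gcd_middle_school(m, n):
    # Product of the common prime factors, with multiplicity.
    common = prime_factors(m) & prime_factors(n)
    result = 1
    for p, e in common.items():
        result *= p ** e
    return result

print(gcd_middle_school(60, 24))  # 2*2*3 = 12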

Fundamental steps in solving problems
1. Statement of the problem
2. Development of a mathematical model
3. Design of the algorithm
4. Correctness of the algorithm
5. Analysis of the algorithm for its time and space complexity
6. Implementation
7. Program testing and debugging
8. Documentation

The two common methods to find the running time of a program are:
1. Operation count
2. Step count
Operation Count
The operation count method is a technique used to analyze the time complexity of an algorithm by counting the number of basic operations performed as a function of the input size.
- Identify the basic operations: Begin by identifying the basic operations that the algorithm performs. These can be simple arithmetic operations (e.g., addition, subtraction, multiplication), comparisons (e.g., less than, equal to), assignments, or any other fundamental operations that are executed repeatedly.
- Count the operations: For each basic operation, determine how many times it is executed as a function of the input size n. To do this, you may need to examine the algorithm's loops, recursive calls, and conditional statements. Keep in mind that the number of operations might vary depending on the specific input data and any early-termination conditions.

Example:

def array_sum(arr):
    sum = 0
    for element in arr:
        sum += element
    return sum

Now, let's go through the steps of the operation count method to find the time complexity:
Step 1: Identify the basic operations
- Assigning a value to the variable "sum" (sum = 0).
- The addition inside the loop (sum += element).
Step 2: Count the operations
- The assignment is performed only once.
- The addition inside the loop is executed n times, where n is the size of the input array.
Step 3: Express the operation count
The total number of basic operations can be expressed as a function of the input size n:
T(n) = 1 (assignment) + n (additions inside the loop)

Step 4: Simplify the expression
Since we are interested in the time complexity, we can ignore the constant term (1) and keep only the dominant term (n), because it is the most significant factor as n approaches infinity:
T(n) ≈ n, i.e., T(n) = O(n).

Step Count: here we attempt to account for the time spent in all parts of the program.

Asymptotic Analysis of Algorithms (Growth of Functions)
Resources for an algorithm are usually expressed as a function of the input size. Often this function is messy and complicated to work with, so to study growth efficiently we reduce the function to its important part.
Let f(n) = an² + bn + c. In this function, the n² term dominates once n gets sufficiently large. Dominant terms are what we are interested in when reducing a function: we ignore all constants and coefficients and look at the highest-order term in n.
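A quick numerical check of this dominance (a sketch with arbitrarily chosen coefficients a = 3, b = 5, c = 7):

def f(n, a=3, b=5, c=7):
    return a * n**2 + b * n + c

for n in [10, 100, 10000]:
    # The ratio f(n)/n^2 approaches the leading coefficient a as n grows.
    print(n, f(n) / n**2)
# prints 3.57, 3.0507, 3.00050007: the lower-order terms fade away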

Asymptotic notation
The word "asymptotic" means approaching a value or curve arbitrarily closely (i.e., as some sort of limit is taken). Asymptotic analysis studies functions of a parameter n as n becomes larger and larger without bound. Here we are concerned with how the running time of an algorithm increases with the size of the input.
Asymptotic notations are used to write the fastest and slowest possible running times for an algorithm; these are also referred to as the 'best case' and 'worst case' scenarios, respectively. In asymptotic notation, we derive the complexity in terms of the size of the input (for example, in terms of n). These notations are important because, without computing the actual cost of running the algorithm, we can estimate its complexity.

Asymptotic Notations
Asymptotic notation is a way of comparing functions that ignores constant factors and small input sizes. Three notations are used to describe the running time complexity of an algorithm:
1. Big-Oh notation: Big-Oh is the formal method of expressing the upper bound of an algorithm's running time; it is a measure of the longest amount of time the algorithm can take. The function f(n) = O(g(n)) [read as "f of n is big-oh of g of n"] if and only if there exist positive constants c and n₀ such that
f(n) ≤ c·g(n) for all n ≥ n₀.
Hence, the function g(n) is an upper bound for f(n), as g(n) grows at least as fast as f(n).


For example:
1. 3n + 2 = O(n), since 3n + 2 ≤ 4n for all n ≥ 2.
2. 3n + 3 = O(n), since 3n + 3 ≤ 4n for all n ≥ 3.
Hence, the complexity of f(n) can be represented as O(g(n)).

2. Omega (Ω) notation: The function f(n) = Ω(g(n)) [read as "f of n is omega of g of n"] if and only if there exist positive constants c and n₀ such that
f(n) ≥ c·g(n) for all n ≥ n₀.

For example:
f(n) = 8n² + 2n - 3 ≥ 8n² - 3 = 7n² + (n² - 3) ≥ 7n² for all n ≥ 2.
Thus c = 7 and n₀ = 2. Hence, the complexity of f(n) can be represented as Ω(g(n)) with g(n) = n².

3. Theta (Θ) notation: The function f(n) = Θ(g(n)) [read as "f of n is theta of g of n"] if and only if there exist positive constants k₁, k₂ and n₀ such that
k₁·g(n) ≤ f(n) ≤ k₂·g(n) for all n ≥ n₀.

For example:
3n + 2 = Θ(n), since 3n + 2 ≥ 3n and 3n + 2 ≤ 4n for all n ≥ 2; here k₁ = 3, k₂ = 4, and n₀ = 2.
Hence, the complexity of f(n) can be represented as Θ(g(n)).
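These constant-and-threshold definitions are easy to sanity-check numerically; a small sketch (our own helper) for the Θ example above:

def check_theta(f, g, k1, k2, n0, upto=1000):
    # Verify k1*g(n) <= f(n) <= k2*g(n) for all tested n >= n0.
    return all(k1 * g(n) <= f(n) <= k2 * g(n) for n in range(n0, upto))

print(check_theta(lambda n: 3*n + 2, lambda n: n, k1=3, k2=4, n0=2))  # True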

Recurrence Relations
A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence. Put simply, it describes a function that calls itself repeatedly.
Example: binary search on the sorted array 10 20 30 40 50 60 70, with indices i and j marking the current range and x the element being searched for:
BS(a, i, j, x)
    mid = (i + j) / 2
    if (a[mid] == x)
        return mid
    else if (a[mid] > x)
        return BS(a, i, mid - 1, x)
    else
        return BS(a, mid + 1, j, x)
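A runnable Python version of this recursive binary search (a sketch; it returns -1 when x is absent, a case the pseudocode above leaves implicit):

def binary_search(a, i, j, x):
    # Search for x in the sorted slice a[i..j]; return its index or -1.
    if i > j:
        return -1
    mid = (i + j) // 2
    if a[mid] == x:
        return mid
    elif a[mid] > x:
        return binary_search(a, i, mid - 1, x)
    else:
        return binary_search(a, mid + 1, j, x)

arr = [10, 20, 30, 40, 50, 60, 70]
print(binary_search(arr, 0, len(arr) - 1, 40))  # 3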

There are four methods for solving recurrences:
1. Substitution method
2. Iteration method
3. Recursion tree method
4. Master method

Substitution Method
This means expanding the recurrence and expressing it as a summation of terms of n plus the initial condition.
Example 1: Consider the recurrence for binary search (BS):
T(n) = T(n/2) + C if n > 1
T(n) = 1 if n = 1
From the recurrence:
T(n) = T(n/2) + C        ... (a) (as given)
T(n/2) = T(n/4) + C      ... (b) (as per the binary search algorithm)
T(n/4) = T(n/8) + C      ... (c) (as per the binary search algorithm)
Substituting (b) into (a):
T(n) = T(n/2) + C = T(n/4) + C + C = T(n/2²) + 2C
Similarly, substituting (c):
= T(n/8) + C + 2C = T(n/2³) + 3C
= T(n/2⁴) + 4C
... after k substitutions:
T(n) = T(n/2^k) + kC     ... (d)
This expansion continues, so to terminate the recurrence we must reach T(1) = 1. If 2^k = n, then T(n/2^k) = T(1), and (d) becomes T(n) = T(1) + kC = 1 + kC.
But the time complexity should be in terms of n, not k, so we solve for k. Since 2^k = n, applying log on both sides gives k·log 2 = log n; as log₂2 = 1 (a constant), k = log₂n. Substituting k into (d):
T(n) = 1 + C·log₂n
Thus the order of the time complexity of binary search is O(log n).

Example 2: Solve the recurrence T(n) = T(n/2) + 1 by the substitution method. We have to show that it is asymptotically bounded by O(log n).
Solution: To show T(n) = O(log n), we must find a constant c such that T(n) ≤ c·log n. Assume the bound holds for smaller inputs, i.e., T(n/2) ≤ c·log(n/2). Substituting into the recurrence:
T(n) ≤ c·log(n/2) + 1 = c(log n - log 2) + 1 = c·log n - c + 1 ≤ c·log n for c ≥ 1.
Thus T(n) = O(log n).

2. Iteration Method
This also means expanding the recurrence and expressing it as a summation of terms of n plus the initial condition.
Example 1: Consider the recurrence
T(n) = 1 if n = 1
T(n) = 2T(n - 1) if n > 1

Example 2: Consider the recurrence T(n) = T(n - 1) + 1, with T(1) = Θ(1).

Solution to Example 1, where T(n) = 1 if n = 1 and T(n) = 2T(n - 1) if n > 1:
T(n) = 2T(n - 1)
     = 2[2T(n - 2)] = 2²T(n - 2)
     = 2²[2T(n - 3)] = 2³T(n - 3)
     = 2³[2T(n - 4)] = 2⁴T(n - 4)      ... (Eq. 1)
Repeating the procedure i times:
T(n) = 2^i · T(n - i)
Put n - i = 1, i.e., i = n - 1, in (Eq. 1):
T(n) = 2^(n-1) · T(1) = 2^(n-1) · 1 = 2^(n-1)      {T(1) = 1, given}
Thus T(n) = O(2ⁿ).
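A short sketch to confirm the closed form 2^(n-1) against a direct evaluation of the recurrence:

def T(n):
    # Direct evaluation of T(1) = 1, T(n) = 2*T(n-1).
    return 1 if n == 1 else 2 * T(n - 1)

for n in range(1, 8):
    assert T(n) == 2 ** (n - 1)
print("closed form 2^(n-1) confirmed for n = 1..7")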

3. Recursion Tree Method (for divide-and-conquer recurrences)
Consider T(n) = 2T(n/2) + n². We have to obtain the asymptotic bound using the recursion tree method.
Solution: build the recursion tree for the recurrence by expanding it level by level (detailed steps on the next slide).

Step 1: start from the given root cost n². Step 2: expand the two subproblems of size n/2. Step 3: expand the four subproblems of size n/4. (The recursion tree figure is not reproduced here.)

Example 2: Given T(n) = 2T(n/2) + Cn (the recurrence for merge sort).
In the recursion tree, a problem of size n is divided into two branches of size n/2, and the cost of dividing and combining at each level of the tree sums to C·n. After three levels the accumulated cost is 3C·n, so the total cost is C·n times the height of the tree. As the tree keeps growing, its height reaches log n, giving
T(n) = C·n·log n = O(n log n).

4. Master Theorem
The master theorem applies only when the given recurrence relation has the form
T(n) = a·T(n/b) + f(n), where f(n) = Θ(n^k · log^p n)
and:
n = size of the problem,
a = number of subproblems in the recursion,
n/b = size of each subproblem,
f(n) = work done outside the recursive calls, which includes the cost of dividing the problem and of combining the solutions to the subproblems.

T(n) = a·T(n/b) + Θ(n^k · log^p n), with a ≥ 1, b > 1, k ≥ 0, and p a real number.
Case 1: if a > b^k, then T(n) = Θ(n^(log_b a)).
Case 2: if a = b^k:
    if p > -1, then T(n) = Θ(n^(log_b a) · log^(p+1) n);
    if p = -1, then T(n) = Θ(n^(log_b a) · log log n);
    if p < -1, then T(n) = Θ(n^(log_b a)).
Case 3: if a < b^k:
    if p ≥ 0, then T(n) = Θ(n^k · log^p n);
    if p < 0, then T(n) = Θ(n^k).

Example 1: T(n) = 3T(n/2) + n²
Solution: compare with the master theorem form T(n) = a·T(n/b) + f(n):
a = 3, b = 2, k = 2, p = 0.
Compare a with b^k: b^k = 2² = 4, and a = 3 < 4, so Case 3 applies with p = 0 ≥ 0:
T(n) = Θ(n^k · log^p n) = Θ(n² · log⁰ n) = Θ(n²).
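The case analysis mechanizes easily; a hedged sketch (our own helper, simplified to the p ≥ 0 and p = 0 situations used in this module):

import math

def master_theorem(a, b, k, p=0):
    # T(n) = a*T(n/b) + Theta(n^k * log^p n), with a >= 1, b > 1.
    log_ba = math.log(a, b)
    if a > b ** k:                                  # Case 1
        return f"Theta(n^{log_ba:.3g})"
    if a == b ** k:                                 # Case 2, assuming p > -1
        return f"Theta(n^{k} * log^{p + 1} n)"
    # Case 3
    return f"Theta(n^{k} * log^{p} n)" if p >= 0 else f"Theta(n^{k})"

print(master_theorem(3, 2, 2))  # Theta(n^2 * log^0 n), i.e., Theta(n^2)
print(master_theorem(2, 2, 1))  # Theta(n^1 * log^1 n), i.e., Theta(n log n)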

Mathematical Analysis of Non-Recursive and Recursive Algorithms
1. Determine the parameter(s) indicating the input size.
2. Identify the basic operation.
3. Check the number of times the basic operation is executed.
4. Simplify and find the order of growth.

1. Complexity Analysis of Insertion Sort

ALGORITHM: INSERTION-SORT(A)
1. for j = 2 to A.length
2.     key = A[j]
3.     // Insert A[j] into the sorted sequence A[1 .. j-1]
4.     i = j - 1
5.     while i > 0 and A[i] > key
6.         A[i + 1] = A[i]
7.         i = i - 1
8.     A[i + 1] = key
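The same algorithm in runnable Python (a sketch using 0-based indexing, unlike the 1-based pseudocode above):

def insertion_sort(a):
    for j in range(1, len(a)):
        key = a[j]
        # Shift elements of the sorted prefix a[0..j-1] that exceed key.
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

print(insertion_sort([63, 22, 14, 55, 36]))  # [14, 22, 36, 55, 63]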

Initially, the array is given.
1st iteration: Set key = 22. Compare a1 with a0. Since a0 > a1, swap them.

2nd iteration: Set key = 63. Compare a2 with a1 and a0. Since a2 > a1 > a0, keep the array as it is.

3rd iteration: Set key = 14. Compare a3 with a2, a1 and a0. Since a3 is the smallest among all the elements on the left-hand side, place a3 at the beginning of the array.

4th iteration: Set key = 55. Compare a4 with a3, a2, a1 and a0. As a4 < a3, swap them.

5th iteration: Set key = 36. Compare a5 with a4, a3, a2, a1 and a0. Since a5 < a2, place the elements in their correct positions.

Hence the array is arranged in ascending order, and no more swapping is required.
Logic: if we are given n elements, then in the first pass insertion sort makes n-1 comparisons; in the second pass, n-2; in the third pass, n-3; and so on. Thus, the total number of comparisons is
(n-1) + (n-2) + (n-3) + ... + 1 = n(n-1)/2 = O(n²)
Time complexity:
Best case: O(n)
Average case: O(n²)
Worst case: O(n²)

2. Complexity Analysis of Linear Search
Linear search is also called sequential search; it is the simplest searching algorithm. In linear search, we simply traverse the list completely and match each element of the list with the item whose location is to be found. If a match is found, the location of the item is returned; otherwise, the algorithm returns NULL.
The algorithm for linear search can be broken down into the following steps:
1. Start: begin at the first element of the collection.
2. Compare: compare the current element with the desired element.
3. Found: if the current element is equal to the desired element, return true or the index of the current element.
4. Move: otherwise, move to the next element in the collection.
5. Repeat: repeat steps 2-4 until the end of the collection is reached.
6. Not found: if the end of the collection is reached without finding the desired element, report that the desired element is not in the array.

Linear_Search(a, n, val)   // 'a' is the given array, 'n' is its size, 'val' is the value to search for
Step 1: set pos = -1
Step 2: set i = 1
Step 3: repeat Step 4 while i <= n
Step 4:     if a[i] == val
                set pos = i
                print pos
                go to Step 6
            [end of if]
            set i = i + 1
        [end of loop]
Step 5: if pos = -1
            print "value is not present in the array"
        [end of if]
Step 6: exit
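An equivalent Python sketch (0-based indexing; the sample array below is our own, chosen to contain the value 41 used in the walkthrough that follows):

def linear_search(a, val):
    # Compare each element with the target until a match is found.
    for i, element in enumerate(a):
        if element == val:
            return i
    return -1

print(linear_search([70, 40, 30, 11, 57, 41, 25, 14, 52], 41))  # 5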

Working of linear search
Now, let's see the working of the linear search algorithm on a given array. Let the element to be searched be K = 41. Start from the first element and compare K with each element of the array. The value of K, i.e., 41, does not match the first element of the array, so move to the next element, and follow the same process until the respective element is found.

Now the element to be searched is found, so the algorithm returns the index of the matched element.
Linear search complexity in the best case, average case, and worst case:
Best case: O(1)
Average case: O(n)
Worst case: O(n)

3. Complexity Analysis of Factorial
What is a factorial? The factorial of a whole number n is defined as the product of that number with every whole number less than or equal to n, down to 1.
For example: 3! = 3 × 2 × 1 = 6

Factorial with recursion
ALGORITHM fact(n)
// Computes n! recursively
// Input: a non-negative integer n
// Output: the value of n!
if n == 0
    return 1
else
    return n * fact(n - 1)
Example: n = 3 gives 3! = 6 via the chain of calls 3*fact(2), 2*fact(1), 1*fact(0), which returns 1.

Explanation: let's consider n = 3.
fact(3): 3 == 0 is false, so it calls 3 * fact(2).
fact(2): 2 == 0 is false, so it calls 2 * fact(1).
fact(1): 1 == 0 is false, so it calls 1 * fact(0).
fact(0): 0 == 0 is true, so it returns 1.
Unwinding the calls: fact(1) = 1 * 1 = 1, fact(2) = 2 * 1 = 2, fact(3) = 3 * 2 = 6.
Thus, we get 3! = 6.

Time Complexity
Let's consider the algorithm again:
fact(n)
    if n == 0
        return 1
    else
        return n * fact(n - 1)
Time complexity is the number of elementary operations an algorithm performs in relation to the input size.
Base case: fact(0) is only a comparison (1 unit of time).
Recursive case: fact(n) is 1 comparison, 1 multiplication, 1 subtraction, plus the time for fact(n - 1).
Let the time taken be T(n). Thus we get:
T(0) = 1
T(n) = T(n - 1) + 3 for n > 0

Expanding the recurrence, we get a generic equation:
T(n) = T(n - 1) + 3
     = T(n - 2) + 6
     = T(n - 3) + 9
     = T(n - 4) + 12
     = ...
     = T(n - k) + 3k
As we know T(0) = 1, we need the value of k for which n - k = 0, i.e., k = n:
T(n) = T(0) + 3n = 1 + 3n
We can say that T(n) is proportional to n. Thus, the time complexity is T(n) = O(n).
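A small sketch that counts the elementary operations directly and checks them against T(n) = 1 + 3n (the scheme of 1 comparison + 1 multiplication + 1 subtraction per recursive call is the counting assumption used above):

def fact_counted(n):
    # Returns (n!, number of elementary operations performed).
    if n == 0:
        return 1, 1                      # one comparison
    value, ops = fact_counted(n - 1)
    return n * value, ops + 3            # comparison + multiplication + subtraction

for n in range(6):
    value, ops = fact_counted(n)
    assert ops == 1 + 3 * n
print("operation count matches T(n) = 1 + 3n for n = 0..5")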

Space Complexity
The space complexity of an algorithm is the amount of space it uses for execution in relation to the size of the input.
Space complexity = input space + auxiliary space.
Input space is the size of the variables (according to their data types) to be stored in memory. Auxiliary space is the extra or temporary space used by the algorithm during its execution.
fact(n)
    if n == 0
        return 1
    else
        return n * fact(n - 1)
// 'n' is an integer variable, which has an input space of 4 bytes.
// The function call, the "if" condition, the "else" branch, and the return all come under auxiliary space; let's assume they jointly take 4 bytes of additional space.
Note: since we call the function recursively n times, the auxiliary space complexity becomes 4n bytes.

Thus, total space complexity = (4 + 4n) bytes. The standalone 4 bytes are constant, so we do not consider them; after removing all the constants (including the 4 in 4n) we finally get
Space complexity = O(n)
Memory/call stack: this is also known as the implicit stack, a background stack created by the compiler during execution; it holds the auxiliary space. For example, factorial(5) builds up the pending calls 5*factorial(4), 4*factorial(3), 3*factorial(2), 2*factorial(1), factorial(1) = 1, and unwinds as 2*1 = 2, 3*2 = 6, 4*6 = 24, 5*24 = 120.

4. Complexity Analysis of Binary Search
Binary search is a searching technique that can be applied if the items are in either ascending or descending order. Consider the sorted array
10 20 30 40 50 60 70 80 90
stored in A[0] .. A[8], with initial values low = 0 and high = n - 1.
Step 1: divide the array into two parts around mid = (low + high) / 2.
Step 2: compare the key with the middle element: if (key == A[mid]), the search is done.

Step 3: otherwise, if the key is smaller, search the left part of the array: if (key < A[mid]), set high = mid - 1. For example, key = 30 lies in A[low .. mid-1].

Step 4: otherwise, if the key is larger, search the right part of the array: if (key > A[mid]), set low = mid + 1. For example, key = 80 lies in A[mid+1 .. high].

Algorithm Binary_Search(key, A, N)
    low = 0, high = n - 1
    while (low <= high)
        mid = (low + high) / 2
        if (key == A[mid])
            return mid + 1
        else if (key < A[mid])
            high = mid - 1
        else
            low = mid + 1
        end if
    end while
    return -1

Analysis:
Best case: Ω(1).
Worst case:
T(n) = 1 if n = 1
T(n) = T(n/2) + 1 otherwise
Expanding:
T(n) = T(n/2) + 1
     = 1 + 1 + T(n/2²) = 2 + T(n/2²)
     = 2 + 1 + T(n/2³) = 3 + T(n/2³)
     = 4 + T(n/2⁴)
     ...
     = i + T(n/2^i)
Choosing 2^i = n gives T(n) = i + T(1) = i + 1. Taking logs of 2^i = n: i·log₂2 = log₂n, so i = log₂n.
Therefore T(n) = Θ(log₂n).

5. Complexity Analysis of Radix Sort
Radix sort works like sorting students' names alphabetically: with the 26 letters of the English alphabet, 26 radices are formed. In the first pass, the names are grouped according to the ascending order of the first letter; in the second pass, according to the second letter; and the process continues until we obtain the sorted list.
The key idea behind radix sort is to exploit place value: it sorts the numbers digit by digit, which eventually results in a fully sorted list. Radix sort can be performed in different variations, such as Least Significant Digit (LSD) radix sort or Most Significant Digit (MSD) radix sort.

radixSort(array)
    d <- number of digits in the largest element
    create d buckets of size 0-9
    for i <- 0 to d
        sort the elements according to the i-th place digits using countingSort

countingSort(array, d)
    max <- largest element among the d-th place digits
    initialize the count array with all zeros
    for j <- 0 to size
        find the total count of each unique digit in the d-th place of the elements
        and store the count at the j-th index in the count array
    for i <- 1 to max
        find the cumulative sum and store it in the count array itself
    for j <- size down to 1
        restore the elements to the array
        decrease the count of each element restored by 1

Working of the Radix Sort Algorithm
The steps used in radix sort are as follows:
- First, find the largest element (say max) in the given array. Let x be the number of digits in max; x is calculated because we need to go through the significant places of all elements.
- After that, go through the significant places one by one, using any stable sorting algorithm to sort the digits of each significant place.
Now let's see the working of radix sort in detail on an unsorted example array; this will make the explanation clearer.

In the given array, the largest element is 736, which has 3 digits. So the loop will run three times (i.e., up to the hundreds place): three passes are required to sort the array. First, sort the elements on the basis of the unit-place digits (i.e., x = 0), using the counting sort algorithm within each pass.

Pass 1: the list is sorted on the basis of the digits at the units (1s) place.

Pass 2: the list is sorted on the basis of the next significant digits, i.e., the digits at the tens (10s) place.

Pass 3: the list is sorted on the basis of the next significant digits, i.e., the digits at the hundreds (100s) place. After the third pass, the array is fully sorted.

Algorithm
Step 1: find the largest number in ARR, as LARGE
Step 2: [INITIALIZE] set NOP = number of digits in LARGE
Step 3: set PASS = 0
Step 4: repeat Step 5 while PASS <= NOP - 1
Step 5: set I = 0 and initialize the buckets
Step 6: repeat Steps 7 to 9 while I <= N - 1
Step 7: set DIGIT = digit at the PASS-th place in A[I]
Step 8: add A[I] to the bucket numbered DIGIT
Step 9: increment the bucket count for the bucket numbered DIGIT
[END OF LOOP]
Step 10: collect the numbers from the buckets
[END OF LOOP]
Step 11: END

int largest(int a[])
{
    int larger = a[0], i;
    for (i = 1; i < 10; i++)   /* the array is assumed to hold 10 elements */
    {
        if (a[i] > larger)
            larger = a[i];
    }
    return larger;
}

void radix_sort(int a[])
{
    int bucket[10][10], bucket_count[10];
    int i, j, k, remainder, NOP = 0, divisor = 1, larger, pass;
    larger = largest(a);
    while (larger > 0)   /* count the number of digits (NOP) in the largest element */
    {
        NOP++;
        larger /= 10;
    }

    for (pass = 0; pass < NOP; pass++)
    {
        for (i = 0; i < 10; i++)   /* initialize the buckets */
            bucket_count[i] = 0;
        for (i = 0; i < 10; i++)
        {
            /* distribute the numbers according to the digit at the pass-th place */
            remainder = (a[i] / divisor) % 10;
            bucket[remainder][bucket_count[remainder]] = a[i];
            bucket_count[remainder] += 1;
        }

        /* collect the numbers after this pass */
        i = 0;
        for (k = 0; k < 10; k++)
        {
            for (j = 0; j < bucket_count[k]; j++)
            {
                a[i] = bucket[k][j];
                i++;
            }
        }
        divisor *= 10;
    }
}

Time complexity of radix sort:
Best case: Ω(n + k)
Average case: Θ(nk)
Worst case: O(nk)

Computing time functions (table of common growth rates not reproduced here)

Values of some important functions as n → ∞ (table not reproduced here)

NP-Hard and NP-Complete
Polynomial time: searching and sorting problems can be completed within some stipulated time. Algorithms used:
- Linear search: n
- Binary search: log n
- Insertion sort: n²
- Merge sort: n log n
- Matrix multiplication
Exponential time: problems that take more time to solve. Examples:
- 0/1 Knapsack
- Graph coloring
- Hamiltonian cycle
- Sum of subsets
- Travelling salesman
Problems that cannot (as far as is known) be solved in polynomial time are called NP problems; that is, the problem cannot be solved in a given polynomial amount of time.

Classification of Problems
Group 1 (P type): consists of problems whose solutions are bounded by polynomials of small degree. Examples: binary search O(log n), sorting O(n log n), matrix multiplication O(n^2.81).
Group 2 (NP type): contains problems whose best known algorithms are non-polynomial. Examples: travelling salesperson problem O(n² · 2ⁿ), knapsack problem O(2^(n/2)), etc.
NP-Hard and NP-Complete:
- If an NP-Hard problem can be solved in polynomial time, then all NP-Complete problems can be solved in polynomial time.
- All NP-Complete problems are NP-Hard, but not all NP-Hard problems are NP-Complete.
- All NP problems can be solved in polynomial time if and only if all NP-Complete problems can be solved in polynomial time.

Difference between NP-Hard and NP-Complete
Meaning and definition:
- NP-Hard: a problem X is NP-Hard if there exists an NP-Complete problem Y that is reducible to X in polynomial time.
- NP-Complete: a problem X is NP-Complete if X is in NP and there exists an NP problem Y that is reducible to X in polynomial time.
Presence in NP:
- NP-Hard: the problem does not have to be in NP.
- NP-Complete: the problem must be both in NP and NP-Hard.
Decision problem:
- NP-Hard: need not be a decision problem.
- NP-Complete: is always (exclusively) a decision problem.
Examples:
- NP-Hard: circuit satisfiability, vertex cover, the halting problem, etc.
- NP-Complete: determining whether a graph has a Hamiltonian cycle, determining whether a Boolean formula is satisfiable, etc.

Deterministic and Non-Deterministic Algorithms
- Deterministic: the output can be predicted for the given inputs.
- Non-deterministic: the output cannot be properly predicted even though the input is known; the same input may give different outputs on different rounds of execution. Such an algorithm can take more than one path, where the next step of execution cannot be determined (e.g., an if-else condition in a flowchart), and will produce an approximate but not exact solution. Examples: Monte Carlo algorithms, genetic algorithms.

The End…… is the new Beginning