daa18d8d-d333-4398-94dd-a46802d88d79.pptx
39 slides, Jul 25, 2024

About This Presentation: DAA (Design and Analysis of Algorithms) ppt

Slide Content
PRESENTED BY Dr. O. Sri Nagesh, Professor, CSE, Vignan Institute of Technology, Hyderabad

INTRODUCTION TO ALGORITHMS The word "algorithm" has taken on a special significance in computer science, where it refers to a method that can be used by a computer to solve a problem. Def: An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In addition, all algorithms must satisfy the following criteria. Input: Zero or more quantities are externally supplied. Output: At least one quantity is produced. Definiteness: Each instruction is clear and unambiguous. Finiteness: If we trace out the instructions of an algorithm, then for all cases the algorithm terminates after a finite number of steps. Effectiveness: Each instruction must be not only definite but also feasible.

Design of Algorithm: understanding the problem; decision making on the capabilities of computational devices, selection of an exact/approximate method, data structures, and algorithmic strategies; specification of the algorithm/design of the algorithm; verification; analysis; coding.

UNDERSTANDING THE PROBLEM Algorithms must be created after understanding the problem statement clearly. For some problems algorithms are available, and for other problems algorithms must be created. After writing an algorithm we need to find its strengths and weaknesses. Once the algorithm is decided, find the inputs of the algorithm needed to get the desired output. An input of the algorithm is also called an instance of the problem. It is very important to find the range of inputs so that the boundary values of the algorithm get fixed; then the algorithm works correctly for all valid inputs.

DECISION MAKING After finding the required input set for a given problem we have to analyze the input and decide certain issues, as follows: capabilities of computational devices; choice of either an exact or an approximate problem-solving method; data structures; algorithmic strategies. Capabilities of computational devices: The selection of a computational device is an important decision for the smooth execution of an algorithm. It is necessary to know the computational capabilities of the devices on which the algorithm will run. We can run algorithms in two different ways: one is sequential and the other is parallel. A sequential algorithm runs on a machine line by line; a parallel algorithm's instructions are executed in parallel. Time and space requirements play a vital role in selecting computational devices.

DECISION MAKING contd.. Exact or Approximate Problem Solving Method: The next important decision is whether the problem is to be solved exactly or approximately. Accordingly we need to choose an exact or an approximation algorithm. EX: the Travelling Salesperson Problem and some approximation problems in mathematics. Data Structures: Data structures and algorithms work together and are interdependent, so the choice of a proper data structure is required before designing the actual algorithm. Algorithm Strategies: This is a general approach by which many problems can be solved algorithmically. Algorithm strategies are also called algorithmic techniques or algorithmic paradigms. Ex: what data structure to use for a particular algorithm, where and how to use conditional statements, looping and branching, etc.

Algorithm Design Techniques Brute Force: This is a straightforward technique with a naive approach. Divide and conquer: The problem is divided into smaller instances. Dynamic programming: The results of smaller recurring instances are obtained to solve the problem. Here an optimization problem is solved by breaking the complete problem into subproblems and finding the optimal solution of each subproblem. The overall solution of the problem depends on the optimal solutions of the subproblems, which are combined to get the best solution. Greedy technique: To solve the problem, locally optimal decisions are made. It is similar to dynamic programming, but optimal solutions are built with a different approach, and some problems that dynamic programming does not handle well are solved here. EX: Fractional Knapsack. Back Tracking: This method is based on trial and error; the desired solution is chosen from a finite set S. Branch and bound is an optimization technique to get an optimal solution to the problem. It looks for the best solution for a given problem in the entire space of solutions. The bounds on the function to be optimized are merged with the value of the latest best solution.
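The fractional knapsack mentioned above as a greedy example can be sketched in Python. The item list and capacity below are illustrative values, not from the slides:

```python
# Greedy sketch for fractional knapsack: repeatedly take the item with the
# highest value/weight ratio, splitting the last item if needed.
# Items are (value, weight) pairs; values below are illustrative.

def fractional_knapsack(items, capacity):
    total = 0.0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or the remaining fraction
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # → 240.0
```

The greedy choice (best ratio first) is provably optimal for the fractional variant, which is why the slides cite it as a greedy success case.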

Specification of Algorithm There are various ways in which we can specify an algorithm: using natural language, pseudocode, or a flowchart. Example in natural language, adding 2 numbers: Read a; Read b; Add a and b and store the result in c; Display c.

Pseudocode Example of pseudocode:
Algorithm sum(a, b)
  Read a and b
  c = a + b
  write c
End Algorithm
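The same pseudocode, written as a minimal runnable Python sketch:

```python
# Python version of the sum(a, b) pseudocode above.

def algorithm_sum(a, b):
    c = a + b
    print(c)   # "write c"
    return c

algorithm_sum(2, 3)  # prints 5
```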

Flow chart: start → Input a → Input b → c = a + b → Display c → stop

Algorithm Verification This is nothing but checking the correctness of an algorithm. Generally we check that the algorithm gives the correct result/output in a finite amount of time for every valid set of inputs. The proof of correctness of an algorithm is complex for some problems. Generally, by mathematical induction we can check the correctness of an algorithm. We can establish the incorrectness of an algorithm by finding at least one input from the set of inputs for which the output is wrong.

Analysis of Algorithm While analyzing an algorithm we should consider the following factors: time efficiency of the algorithm (time complexity); space efficiency of the algorithm (space complexity); simplicity of the algorithm (easy to understand); generality of the algorithm; range of input.

Coding or Implementation of an Algorithm Implementation can be done in a suitable programming language. Ex: generally we use object-oriented programming for implementing algorithms, i.e., C++, Java, or Python. While writing code in any programming language it is better for the programmer to write optimized code to reduce the burden on the compiler. We can decide the programming language only after the algorithm is written.

Testing a Program Testing a program is done by giving it sets of inputs and checking the correctness of the output. There are two phases of testing a program: debugging and profiling. Debugging: a technique in which a sample set of data is tested to see whether the result obtained is as expected. If the results produced are unexpected, the algorithm is refined and tested again. Profiling: hidden errors are not pointed out by debugging, so profiling is introduced. Profiling is a performance measurement: the process of executing a correct program on a sample set of data, calculating the time and space required to execute it. This is done for the optimization of an algorithm.
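Profiling as described above can be sketched with Python's standard timeit module; the linear_sum function and sample data are illustrative, not from the slides:

```python
# Profiling sketch: run a correct program on sample data and measure the
# time taken, as the slide describes.
import timeit

def linear_sum(xs):
    s = 0
    for x in xs:
        s += x
    return s

data = list(range(1000))
elapsed = timeit.timeit(lambda: linear_sum(data), number=100)
print(f"100 runs of linear_sum took {elapsed:.5f} s")
```

For statement-level detail, the standard cProfile module serves the same role.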

PERFORMANCE ANALYSIS The efficiency of an algorithm can be decided by measuring its performance. We can measure the performance of an algorithm by computing two factors: the amount of time required by the algorithm to execute, also called the time complexity of the algorithm; and the amount of storage space required by the algorithm, also called the space complexity of the algorithm.

PERFORMANCE ANALYSIS contd.. Analysis of algorithms covers: measuring input size; measuring time complexity (measuring running time, computing best-case, worst-case and average-case efficiencies, computing the order of growth of algorithms); and measuring space complexity.

Space Complexity Space complexity can be defined as the amount of memory required by an algorithm to run. To compute the space complexity we use two factors: a constant part and an instance-dependent part. The space requirement S(P) is given as S(P) = C + S_P. Here 'C' is a constant: the fixed part, denoting the space taken by inputs and outputs, instructions, variables and identifiers. 'S_P' is the variable part, whose space requirement depends on the particular problem instance. Important Note: The best algorithm/program should have the least space complexity. The less space used, the faster it executes.

CALCULATING SPACE COMPLEXITY EX-1: Compute the space complexity for addition of 3 numbers.
Algorithm add(a, b, c)
  return (a + b + c)
End add
The above algorithm has 3 variables a, b and c, so 3 memory spaces are needed. Here S_P is 0 and C = 3, so from the formula S(P) = C + S_P, S(P) = 3 + 0 = 3.
EX-2: Compute the sum of n elements stored in an array.
Algorithm Add(x, n)   // here x is an array
  sum = 0             // 1 unit of memory for sum
  for i = 1 to n
    sum = sum + x[i]
  return sum
End Add
For sum 1 unit of space is used; for n 1 unit of space is used; for the n elements of the array n units of memory are used; for i 1 unit of memory is used. Total memory used is n + 3.
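EX-2 can be mirrored as runnable Python, assuming 0-based indexing in place of the slide's 1-based loop; the comments mark the memory units counted above:

```python
# Runnable mirror of EX-2: sum of n elements stored in an array.

def array_sum(x, n):
    s = 0                 # 1 unit for s ("sum" on the slide)
    for i in range(n):    # 1 unit for i, 1 unit for n
        s = s + x[i]      # the n array elements account for n units
    return s              # total: n + 3 units, matching the slide

print(array_sum([13, 23, 5], 3))  # → 41
```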

CALCULATING SPACE COMPLEXITY contd.. Recursive algorithms:
Algorithm Add(x, n)
  if (n == 0)
    return 0
  else
    return Add(x, n-1) + x[n]
End Add
Here the memory requirements are n + 1 spaces for return addresses and n spaces for array elements. Total spaces: n + 1 + n = 2n + 1. The space complexity of the above algorithm is 2n + 1.
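The recursive Add above as a runnable Python sketch, assuming 0-based indexing (x[n-1] where the slide's 1-based pseudocode reads x[n]):

```python
# Recursive array sum. Each call pushes one frame; Add(x,n) ... Add(x,0)
# gives n+1 frames, the source of the 2n+1 space bound above.

def add(x, n):
    if n == 0:
        return 0
    return add(x, n - 1) + x[n - 1]

print(add([13, 23, 5], 3))  # → 41
```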

CALCULATING SPACE COMPLEXITY contd.. Call trace for n = 3: Add(x,3) calls Add(x,2), which calls Add(x,1), which calls Add(x,0). Add(x,0) returns 0; Add(x,1) returns 0 + x[1]; Add(x,2) returns 0 + x[1] + x[2]; Add(x,3) returns 0 + x[1] + x[2] + x[3] to the calling function.

CALCULATING SPACE COMPLEXITY contd.. Stack space for the same trace: frames for Sum(x,3), Sum(x,2), Sum(x,1), Sum(x,0) are pushed in turn; as they pop, the returns are 0, then 0 + x[1], then 0 + x[1] + x[2], and finally 0 + x[1] + x[2] + x[3] to the calling function.

Values and space for the 3 elements {13, 23, 5}: 1st step 0 + 13 = 13; 2nd step 13 + 23 = 36; 3rd step 36 + 5 = 41. Here 4 spaces are required for the stack and 3 spaces for return addresses, so the total memory needed in this example is 4 + 3 = 7. If there are n elements in the array then n + 1 spaces are required for the stack and n spaces for return addresses; the total space requirement is n + 1 + n = 2n + 1.

TIME COMPLEXITY The time T(P) taken by a program P is the sum of its compile time and its runtime. The compile time does not depend on the instance characteristics, and a compiled program can run several times without recompilation. Compile time depends on the hardware configuration, system load, number of other programs running, and instruction set used. So we consider only runtime instance characteristics. The time complexity is therefore given in terms of frequency count. Frequency count is the number of times each statement is executed. If the number is a constant, that is represented by O(1).

ORDER OF GROWTH OF TIME COMPLEXITY

n (linear) | log n (logarithmic) | n log n (linear logarithmic) | n^2 (quadratic) | 2^n (exponential)
1          | 0                   | 0                            | 1               | 2
2          | 1                   | 2                            | 4               | 4
4          | 2                   | 8                            | 16              | 16
8          | 3                   | 24                           | 64              | 256
16         | 4                   | 64                           | 256             | 65,536
32         | 5                   | 160                          | 1,024           | 4,294,967,296
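The table above can be regenerated with a few lines of Python, using base-2 logarithms:

```python
# Print the order-of-growth table: n, log n, n log n, n^2, 2^n.
import math

for n in (1, 2, 4, 8, 16, 32):
    lg = int(math.log2(n))
    print(n, lg, n * lg, n ** 2, 2 ** n)
```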

Time Complexity Calculation Measuring the performance of an algorithm with respect to time is called the time complexity of the algorithm. This is an important measure of the performance of an algorithm; its relation to the input size n is called the order of growth, as shown in the previous slide. EX (step counts shown on the right):
Algorithm sum()   0
  A = 20          1
  B = 30          1
  C = A + B       1
  write C         1
End sum           0
Here 4 units of time are needed for the above algorithm: 1 unit for A, 1 unit for B, 1 unit for executing the addition, and 1 unit for writing the result. Total 4 units, i.e., O(1).

Time Complexity Calculation contd.. EX: sum of n elements in an array (steps per execution × frequency = total steps, shown on the right):
Algorithm sum(a, n)    0
  s = 0                1 × 1 = 1
  for i = 1 to n do    1 × (n+1) = n+1
    s = s + a[i]       1 × n = n
  return s             1 × 1 = 1
End sum                0
Total: 2n + 3 is the time complexity (t_P) of the above algorithm, i.e., O(n).
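The count above can be checked with an instrumented Python sketch that tallies the same steps the slide counts (assignment 1, loop test n+1, body n, return 1):

```python
# Instrumented array sum: returns (sum, step count). For n elements the
# step count should be 2n + 3, matching the slide's table.

def sum_with_count(a, n):
    steps = 0
    s = 0; steps += 1                  # s = 0: 1 step
    i = 1
    while True:
        steps += 1                     # loop test: runs n + 1 times
        if i > n:
            break
        s = s + a[i - 1]; steps += 1   # body: runs n times
        i += 1
    steps += 1                         # return s: 1 step
    return s, steps

print(sum_with_count([13, 23, 5], 3))  # → (41, 9), since 2*3 + 3 = 9
```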

Time Complexity Calculation contd.. EX: sum of natural numbers from 1 to n:
Algorithm sum()      0
  s = 0              1
  for i = 1 to n do  n+1
    s = s + i        n
  return s           1
End sum              0
This algorithm's time complexity is the same as above; only the space changes. The time is the same: 2n + 3.

Time Complexity Calculation contd.. EX: calculate the nth Fibonacci number:
Algorithm fib(n)   // calculate nth Fibonacci number
  if (n <= 1) then
    write(n)
  else {
    f1 = 0
    f2 = 1
    for i = 2 to n do {
      f3 = f1 + f2
      f1 = f2
      f2 = f3
    }
    write(f3)
  } // end else
End fib

Time Complexity Calculation contd.. Step table for fib(n) (s/e = steps per execution):

Statement      | s/e | freq (n<=1) | freq (n>1) | total (n<=1) | total (n>1)
if (n <= 1)    | 1   | 1           | 1          | 1            | 1
write(n)       | 1   | 1           | 0          | 1            | 0
f1 = 0         | 1   | 0           | 1          | 0            | 1
f2 = 1         | 1   | 0           | 1          | 0            | 1
for i = 2 to n | 1   | 0           | n          | 0            | n
f3 = f1 + f2   | 1   | 0           | n-1        | 0            | n-1
f1 = f2        | 1   | 0           | n-1        | 0            | n-1
f2 = f3        | 1   | 0           | n-1        | 0            | n-1
write(f3)      | 1   | 0           | 1          | 0            | 1

Time Complexity Calculation contd.. From the above table the time complexity of the algorithm splits into two cases: if the condition n <= 1 is true, only a constant number of steps execute, namely 2; if the condition n <= 1 is false, 4n + 1 is the step count of the iterative part shown above.
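The iterative Fibonacci pseudocode above, as a runnable Python sketch:

```python
# Iterative nth Fibonacci number, mirroring the fib(n) pseudocode.

def fib(n):
    if n <= 1:
        return n
    f1, f2 = 0, 1
    for _ in range(2, n + 1):   # the 4n + 1 step count comes from this loop
        f1, f2 = f2, f1 + f2
    return f2

print([fib(n) for n in range(8)])  # → [0, 1, 1, 2, 3, 5, 8, 13]
```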

Time Complexity of Algorithm contd.. EX: matrix addition:
Algorithm Madd(a, b, c, m, n)
  for i = 1 to m do
    for j = 1 to n do
      c[i][j] = a[i][j] + b[i][j]
End Madd

ALGORITHM FOR MATRIX ADDITION In the above algorithm the first for loop takes m + 1 steps, the second for loop takes m(n + 1) steps, and the addition is done mn times. Total steps: (m + 1) + m(n + 1) + mn = 2mn + 2m + 1. For square matrices (m = n) the time complexity of matrix addition is O(n^2).

s/e | freq     | total steps
1   | m + 1    | m + 1
1   | m(n + 1) | m(n + 1)
1   | mn       | mn
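The Madd pseudocode above as a runnable sketch; the nested loops perform the mn additions that dominate the 2mn + 2m + 1 step count:

```python
# Element-wise addition of two m x n matrices, as in Madd.

def madd(a, b, m, n):
    c = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            c[i][j] = a[i][j] + b[i][j]
    return c

print(madd([[1, 2], [3, 4]], [[5, 6], [7, 8]], 2, 2))  # → [[6, 8], [10, 12]]
```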

RECURSIVE ALGORITHM contd..
Algorithm Factorial(n) {
  if (n == 0)
    return 1
  else
    return n * Factorial(n - 1)
}
T(0) = 2
T(n) = T(n-1) + 3, if n > 0
T(n-1) = T(n-2) + 3
T(n-2) = T(n-3) + 3
So T(n) = T(n-k) + 3k. Taking k = n, T(n) = T(0) + 3n, so T(n) = 3n + 2 is the time complexity of the above algorithm.
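The recursive factorial and its step-count recurrence can be checked together in Python, verifying the closed form 3n + 2 the derivation above reaches:

```python
# Recursive factorial plus its step-count recurrence T(0)=2, T(n)=T(n-1)+3.

def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

def t(n):
    return 2 if n == 0 else t(n - 1) + 3

print(factorial(5))                               # → 120
print(all(t(n) == 3 * n + 2 for n in range(50)))  # → True
```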

PLOT OF FUNCTION VALUES

Asymptotic Notations To choose the best algorithm we need to check the efficiency of each algorithm. Efficiency can be measured by computing the time complexity of each algorithm. Asymptotic notation is a shorthand way to represent time complexity. Using this notation we can characterize the time complexity as the fastest possible, slowest possible or average time. The symbols Big O (O), Omega (Ω) and Theta (Θ) are used as asymptotic notations.

Big Oh Notation It is denoted by 'O'. It is the method of representing the upper bound on an algorithm's running time. Using this notation we can give the longest amount of time taken by the algorithm to complete (worst case). DEFINITION: Let f(n) and g(n) be two non-negative functions, where n denotes the input size. f(n) is said to be O(g(n)) if there exist a constant C > 0 and an integer n0 such that f(n) <= C*g(n) for all n >= n0.
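The definition can be sanity-checked numerically (a finite check over a range, not a proof); the witnesses C and n0 below are illustrative:

```python
# Check f(n) <= C*g(n) for n0 <= n <= limit, per the Big-O definition.

def holds_big_o(f, g, C, n0, limit=1000):
    return all(f(n) <= C * g(n) for n in range(n0, limit + 1))

# f(n) = 2n + 3 is O(n): C = 5, n0 = 1 works on this range.
print(holds_big_o(lambda n: 2 * n + 3, lambda n: n, C=5, n0=1))  # → True
# n^2 is not O(n): this C fails quickly.
print(holds_big_o(lambda n: n * n, lambda n: n, C=5, n0=1))      # → False
```

The same helper with the inequality reversed would check the Ω definition in the next slide.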

OMEGA NOTATION It is denoted by Ω. This notation is used to represent the lower bound on an algorithm's running time. Using omega (Ω) notation we can calculate the shortest time taken by an algorithm (best case). Definition: A function f(n) is said to be in Ω(g(n)) if f(n) is bounded below by some positive constant multiple of g(n), i.e., f(n) >= C*g(n) for all n >= n0.

THETA NOTATION With the Theta (Θ) notation the running time lies between the upper bound and the lower bound (average case). Definition: Let f(n) and g(n) be two non-negative functions. If there are two positive constants C1 and C2 such that C1*g(n) <= f(n) <= C2*g(n) for all n >= n0, then we say f(n) ∈ Θ(g(n)).

LITTLE Oh NOTATION The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that doesn't depend on machine-specific constants, mainly because this analysis doesn't require algorithms to be implemented or the time taken by programs to be compared. Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is o(g(n)) (or f(n) ∈ o(g(n))) if for any real constant c > 0 there exists an integer constant n0 >= 1 such that 0 <= f(n) < c*g(n) for all n >= n0.