Dynamic Programming is a method used in mathematics and computer science to solve complex problems by breaking them down into simpler subproblems. By solving each subproblem only once and storing the results, it avoids redundant computations, leading to more efficient solutions for a wide range of problems. This article provides a detailed exploration of dynamic programming concepts, illustrated with examples.
What is Dynamic Programming (DP)?
Dynamic Programming (DP) is a method for solving complex problems by breaking them down into simpler, overlapping subproblems. Each subproblem is solved only once and its result is stored, so later computations reuse stored results instead of recomputing them; this avoids redundant work and yields efficient solutions for a wide range of problems.
How Does Dynamic Programming (DP) Work?
Identify Subproblems: Divide the main problem into smaller, overlapping subproblems.
Store Solutions: Solve each subproblem and store the solution in a table or array.
Build Up Solutions: Use the stored solutions to build up the solution to the main problem.
Avoid Redundancy: By storing solutions, DP ensures that each subproblem is solved only once, reducing computation time.
To find the nth Fibonacci number with a brute-force recursive approach, you would simply add the (n-1)th and (n-2)th Fibonacci numbers, computing each of them recursively. This works, but it is inefficient for large values of n, because the same Fibonacci numbers end up being recalculated many times.
Fibonacci Series using Dynamic Programming
Subproblems: F(0), F(1), F(2), F(3), …
Store Solutions: Create a table to store the values of F(n) as they are calculated.
Build Up Solutions: For F(n), look up F(n-1) and F(n-2) in the table and add them.
Avoid Redundancy: The table ensures that each subproblem (e.g., F(2)) is solved only once.
By using DP, we can efficiently calculate the Fibonacci sequence without having to recompute subproblems.
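As an illustration, here is a minimal bottom-up (tabulation) sketch in Java; the class and method names are our own, chosen for this example, not taken from the original text:

    public class FibonacciDP {
        // Returns the nth Fibonacci number using bottom-up dynamic programming.
        // A naive recursive solution recomputes subproblems and takes exponential
        // time; this version solves each subproblem once: O(n) time, O(n) space.
        static long fib(int n) {
            if (n <= 1) return n;
            long[] table = new long[n + 1]; // table[i] stores F(i)
            table[0] = 0;
            table[1] = 1;
            for (int i = 2; i <= n; i++) {
                // Look up F(i-1) and F(i-2) in the table and add them.
                table[i] = table[i - 1] + table[i - 2];
            }
            return table[n];
        }

        public static void main(String[] args) {
            System.out.println(fib(10)); // prints 55
        }
    }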
Example 2: Longest common subsequence (finding the longest subsequence that is common to two strings)
Example 3: Shortest path in a graph (finding the shortest path between two nodes in a graph)
Example 4: Knapsack problem (finding the maximum value of items that can be placed in a knapsack with a given capacity)
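To make the dynamic programming formulation of such problems concrete, the 0/1 knapsack problem, for instance, is usually expressed with a recurrence of the following form (the notation here is ours): let V[i][w] be the maximum value achievable using the first i items with capacity w, and let wi and vi be the weight and value of item i. Then V[i][w] = V[i-1][w] if item i does not fit (wi > w), and V[i][w] = max(V[i-1][w], V[i-1][w - wi] + vi) otherwise. The table of V values is filled bottom-up, exactly as in the Fibonacci example.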
When to Use Dynamic Programming (DP)?
Dynamic programming is an optimization technique used for problems that exhibit the following characteristics:
1. Optimal Substructure:
Optimal substructure means that the optimal solution to the overall problem can be composed from the optimal solutions of its subproblems.
Example:
Consider the problem of finding the minimum cost path in a weighted graph from a source node to a destination node. We can break this problem down into smaller subproblems:
Find the minimum cost path from the source node to each intermediate node.
Find the minimum cost path from each intermediate node to the destination node.
The solution to the larger problem (finding the minimum cost path from the source node to the destination node) can then be constructed from the optimal solutions to these two subproblems.
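In symbols (the notation is ours, not the original text's): if cost(u, v) denotes the minimum cost of a path from u to v, then for any intermediate node m lying on some optimal path,

    cost(source, destination) = cost(source, m) + cost(m, destination)

so an optimal overall path is assembled from optimal sub-paths. This is exactly the property that Floyd's algorithm, discussed below, exploits.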
What is Transitive Closure?
The concept of transitive closure can be thought of as constructing a data structure that makes it possible to answer reachability questions: can one get from node a to node d in one or more hops? A binary relation tells you only that node a is connected to node b, that node b is connected to node c, and so on; the transitive closure records, for every pair of nodes, whether one can be reached from the other through any chain of such hops. For example, from the edges a→b and b→c, the closure adds the pair a→c.
Warshall’s Algorithm: Transitive Closure
Warshall’s algorithm computes the transitive closure of a relation. Main idea: a path exists between two vertices i and j iff there is an edge from i to j; or there is a path from i to j going through vertex 1; or there is a path from i to j going through vertex 1 and/or 2; or there is a path from i to j going through vertex 1, 2, and/or 3; or ...; or there is a path from i to j going through any of the other vertices. In matrix form, letting R(k) be the reachability matrix that allows only vertices 1 through k as intermediates, R(0) is the adjacency matrix and R(k)[i][j] = R(k-1)[i][j] OR (R(k-1)[i][k] AND R(k-1)[k][j]).
Warshall’s Algorithm (matrix generation)
Ex1: The original slides step through the matrix sequence R(0), R(1), ..., R(n) for a sample graph; the graph and the intermediate matrices are not reproduced here.
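A compact Java sketch of the matrix-generation step (the class, method, and variable names are ours; the triple loop is the standard Warshall recurrence described above):

    public class WarshallClosure {
        // Computes the transitive closure of a directed graph given as a boolean
        // adjacency matrix: r[i][j] ends up true iff j is reachable from i.
        static boolean[][] transitiveClosure(boolean[][] adj) {
            int n = adj.length;
            boolean[][] r = new boolean[n][n];
            for (int i = 0; i < n; i++)
                r[i] = adj[i].clone();        // R(0) is the adjacency matrix itself
            for (int k = 0; k < n; k++)       // allow vertex k as an intermediate
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        r[i][j] = r[i][j] || (r[i][k] && r[k][j]);
            return r;
        }

        public static void main(String[] args) {
            // Small example: edges a->b and b->c; the closure adds a->c.
            boolean[][] adj = {
                { false, true,  false },
                { false, false, true  },
                { false, false, false }
            };
            System.out.println(transitiveClosure(adj)[0][2]); // prints true
        }
    }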
Ex2: Derive the transitive closure for the given graph. (The graph and the resulting closure matrix are not reproduced here.)
Floyd’s Algorithm: All-pairs shortest paths
The all-pairs shortest-path problem asks for the shortest path between every ordered pair of vertices. For V = 4 there are 4 × 4 = 16 pairs: (1,1), (1,2), (1,3), (1,4), (2,1), ..., (4,4). If Dijkstra’s algorithm were used, it would require O(n²) time (with an adjacency matrix) to compute the shortest paths from a single source, so computing shortest paths from all n vertices would take n · n² = O(n³). Floyd’s algorithm does not beat this O(n³) bound, but it computes all pairs directly with three simple nested loops, handles negative edge weights, and is considerably easier to implement.
Key difference between Floyd’s and Dijkstra’s: Floyd’s algorithm works for graphs with negative edges but not negative cycles. Normally the distance of any node from itself is zero; if, after running the algorithm, the distance from some node to itself comes out negative (for example, traversing from 3 back to 3 yields -2), the graph contains a negative cycle.
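This gives a simple detection test in code (a sketch, assuming dist is the distance matrix left behind by Floyd’s algorithm):

    // A negative entry on the diagonal means some node can reach itself
    // along a path of negative total weight, i.e. a negative cycle exists.
    static boolean hasNegativeCycle(int[][] dist) {
        for (int i = 0; i < dist.length; i++)
            if (dist[i][i] < 0)
                return true;
        return false;
    }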
Floyd’s Algorithm (matrix generation)
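The matrices are generated by the standard Floyd recurrence: letting D(k)[i][j] be the shortest i-to-j distance using only intermediate vertices from {1, ..., k}, D(0) is the weight matrix and

    D(k)[i][j] = min( D(k-1)[i][j], D(k-1)[i][k] + D(k-1)[k][j] )

The Floyds( ) method in Program – 10 below implements exactly this recurrence, updating a single matrix in place.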
Dijkstra’s v/s Floyd’s algorithm
Dijkstra’s: an example of a single-source shortest path (SSSP) algorithm, i.e., given a source vertex it finds the shortest paths from that source to all other vertices. Works on a greedy approach. Does not support graphs with negative edges. Time complexity is O(E log V).
Floyd’s: an example of an all-pairs shortest path algorithm, meaning it computes the shortest path between every pair of nodes. Uses a dynamic programming approach. Works even with negative edges (but not negative cycles). Time complexity is O(V³).
Example 2: Steps: First remove any self-loops and parallel edges from the graph (keeping the lowest-weight edge of each parallel pair). In the given graph there are neither self-loops nor parallel edges, so Floyd’s algorithm can be run directly. (The graph and the resulting distance matrices are not reproduced here.)
Time analysis of Floyd’s algorithm: the algorithm is three nested loops over the n vertices, each running n times, so it performs Θ(n³) comparisons; the distance matrix itself takes Θ(n²) space.
Program – 10: Write Java programs to (a) implement the All-Pairs Shortest Paths problem using Floyd’s algorithm, and (b) implement the Travelling Salesperson problem using dynamic programming.
import java.util.Scanner;

public class FloydDP {
    static final int MAX = 20;  // max. size of cost matrix
    static int a[][];           // cost matrix
    static int n;               // actual matrix size

    public static void main(String args[]) {
        a = new int[MAX][MAX];
        ReadMatrix();
        Floyds();       // find all pairs shortest path
        PrintMatrix();
    }

    static void ReadMatrix() {
        System.out.println("Enter the number of vertices\n");
        Scanner scanner = new Scanner(System.in);
        n = scanner.nextInt();
        System.out.println("Enter the Cost Matrix (999 for infinity) \n");
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                a[i][j] = scanner.nextInt();
            }
        }
        scanner.close();
    }

    static void Floyds() {
        for (int k = 1; k <= n; k++) {
            for (int i = 1; i <= n; i++)
                for (int j = 1; j <= n; j++)
                    if ((a[i][k] + a[k][j]) < a[i][j])
                        a[i][j] = a[i][k] + a[k][j];
        }
    }
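The listing above calls a PrintMatrix( ) method that is not shown in this section. A minimal sketch of what it could look like, assuming the same 1-indexed matrix convention used above, is:

    static void PrintMatrix() {
        // Prints the final distance matrix row by row (999 still marks
        // "unreachable" under the input convention used in ReadMatrix).
        System.out.println("All pairs shortest paths:");
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                System.out.print(a[i][j] + " ");
            }
            System.out.println();
        }
    }
} // closes class FloydDP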