RANDOMIZED ALGORITHMS
Randomized algorithms in Data Structures and Algorithms (DSA) are algorithms that use randomness in their computations to achieve a desired outcome.
1. These algorithms introduce randomness to improve efficiency or simplify the algorithm design.
2. Randomized algorithms can often provide faster solutions or better approximations than deterministic algorithms.
3. They are particularly useful when exact solutions are difficult to find or when a probabilistic approach is acceptable.
This randomness is used to reduce the time complexity or space complexity of standard algorithms.
What is a Randomized Algorithm? An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a Randomized Algorithm. For example, in Randomized Quick Sort, we use a random number to pick the next pivot (or we randomly shuffle the array).
How to analyse Randomized Algorithms? Some randomized algorithms have deterministic time complexity. For example, Karger's algorithm runs in O(E) time. Such algorithms are called Monte Carlo algorithms and are easier to analyse in the worst case. The time complexity of other randomized algorithms depends on the values of the random variables used. Such randomized algorithms are called Las Vegas algorithms. NOTE: Las Vegas algorithms are typically analysed for the expected worst case.
Randomized Algorithms (Classification and Applications)
Randomized algorithms can be used to solve a wide variety of problems, including optimization, search, and decision-making.
1. Monte Carlo methods: These are a class of randomized algorithms that use random sampling to solve problems that may be deterministic in principle, but are too complex to solve exactly. Examples include estimating pi, simulating physical systems, and solving optimization problems.
2. Randomized search algorithms: These are algorithms that use randomness to search for solutions to problems. Examples include genetic algorithms and simulated annealing.
Randomized Algorithms (Classification and Applications)
3. Randomized data structures: These are data structures that use randomness to improve their performance. Examples include skip lists and hash tables.
4. Randomized load balancing: These are algorithms used to distribute load across a network of computers, using randomness to avoid overloading any one computer.
5. Randomized encryption: These are algorithms used to encrypt and decrypt data, using randomness to make it difficult for an attacker to decrypt the data without the correct key.
Las Vegas
Relation with Monte Carlo algorithms: A Las Vegas algorithm can be contrasted with a Monte Carlo algorithm, in which the resources used to find the solution are bounded but there is no guarantee that the solution obtained is correct. In some applications, a Las Vegas algorithm can be converted into a Monte Carlo algorithm by allowing early termination.
Complexity analysis: A problem solved by a Las Vegas algorithm is expected to be solved with zero error probability and in expected polynomial time.
Monte Carlo
Computational algorithms that rely on repeated random sampling to compute their results are called Monte Carlo algorithms. A randomized algorithm is a Monte Carlo algorithm if it can sometimes give a wrong answer. Monte Carlo methods are used when existing deterministic algorithms fail or when it is impractical to compute an exact solution for the given problem. Because they are built on repeated computation with random numbers, these algorithms are well suited to simulating physical and mathematical systems.
Monte Carlo (contd..)
Monte Carlo algorithms are especially useful for modelling disordered materials, fluids, and cellular structures. In mathematics, these methods are used to evaluate definite integrals, particularly multidimensional integrals with complicated boundary conditions. Compared to other methods, Monte Carlo is also well suited to risk analysis.
Optimization of Monte Carlo algorithms: In numerical optimization, numerical simulation is an effective, efficient, and popular application of random numbers. The travelling salesman problem is one of the best-known examples of an optimization problem. Various optimization techniques are available for Monte Carlo algorithms, such as evolution strategies, genetic algorithms, and parallel tempering.
Applications and Scope (Monte Carlo)
Monte Carlo methods have a wide range of applications, in areas such as:
1. Physical science
2. Design and visuals
3. Finance and business
4. Telecommunications
In general, Monte Carlo methods are used in mathematics: by generating random numbers, we can solve problems that are complex in nature or otherwise difficult to solve. Monte Carlo integration is the most common application of Monte Carlo algorithms.
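As a concrete illustration of the Monte Carlo integration mentioned above, the classic pi-estimation example can be sketched in a few lines of Python (a minimal sketch; the function name and sample counts are illustrative choices):

```python
import random

def estimate_pi(samples, seed=None):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that lands inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The fraction inside approximates (area of quarter circle) / (area of square) = pi/4.
    return 4.0 * inside / samples
```

More samples give a tighter estimate, converging at the typical Monte Carlo rate of O(1/sqrt(samples)).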
Randomized version of QuickSort
A central pivot is a pivot that divides the array in such a way that each side has at least 1/4 of the elements.
1. If low >= high, then EXIT.
2. While pivot 'x' is not a central pivot:
   (i) Choose uniformly at random a number from [low..high]. Let the randomly picked number be x.
   (ii) Count the elements in arr[low..high] that are smaller than arr[x]. Let this count be sc.
Randomized version of QuickSort (contd..)
   (iii) Count the elements in arr[low..high] that are greater than arr[x]. Let this count be gc.
   (iv) Let n = (high-low+1). If sc >= n/4 and gc >= n/4, then x is a central pivot.
3. Partition arr[low..high] around the pivot x.
4. randQuickSort(arr, low, sc-1)       // Recur for smaller elements
5. randQuickSort(arr, high-gc+1, high) // Recur for greater elements
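The steps above can be sketched in Python as follows (a sketch under the same assumption the analysis makes, namely distinct elements; note that for a general low the recursion bounds become low+sc-1 and high-gc+1):

```python
import random

def rand_quick_sort(arr, low, high):
    """Randomized Quick Sort with a central pivot, following the pseudocode
    above. Sorts arr[low..high] in place; assumes distinct elements."""
    if low >= high:
        return
    n = high - low + 1
    # Step 2: keep sampling until the chosen pivot is a central pivot.
    while True:
        x = random.randint(low, high)
        sc = sum(1 for i in range(low, high + 1) if arr[i] < arr[x])
        gc = sum(1 for i in range(low, high + 1) if arr[i] > arr[x])
        if sc >= n // 4 and gc >= n // 4:
            break
    pivot = arr[x]
    # Step 3: partition arr[low..high] around the pivot value.
    smaller = [v for v in arr[low:high + 1] if v < pivot]
    equal = [v for v in arr[low:high + 1] if v == pivot]
    greater = [v for v in arr[low:high + 1] if v > pivot]
    arr[low:high + 1] = smaller + equal + greater
    # Steps 4-5: recur on the smaller and greater sides.
    rand_quick_sort(arr, low, low + sc - 1)
    rand_quick_sort(arr, high - gc + 1, high)
```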
ANALYSIS
The important thing in our analysis is the time taken by step 2, which is O(n) in expectation. At least half of the elements are central pivots, so the probability that a randomly chosen element is a central pivot is at least 1/2. Therefore, the expected number of times the while loop runs is at most 2, and the expected time complexity of step 2 is O(n).
What is the overall time complexity in the worst case? In the worst case, each partition divides the array so that one side has n/4 elements and the other side has 3n/4 elements. The worst-case height of the recursion tree is log_{4/3} n, which is O(log n).
The recurrence for this worst case is T(n) <= T(n/4) + T(3n/4) + O(n), whose solution is O(n log n). Note that the above randomized algorithm is not the best way to implement randomized Quick Sort; the idea here is that it is simple to analyse. Typically, randomized Quick Sort is implemented by randomly picking a pivot (with no loop), or by shuffling the array elements. The expected worst-case time complexity of that algorithm is also O(n log n).
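For contrast, the usual random-pivot version mentioned above can be sketched as follows (illustrative, and not in-place, for brevity):

```python
import random

def randomized_quick_sort(arr):
    """Randomized Quick Sort: pick the pivot uniformly at random and
    recur on the smaller and greater parts. Returns a new sorted list."""
    if len(arr) <= 1:
        return list(arr)
    pivot = random.choice(arr)
    smaller = [v for v in arr if v < pivot]
    equal = [v for v in arr if v == pivot]
    greater = [v for v in arr if v > pivot]
    return randomized_quick_sort(smaller) + equal + randomized_quick_sort(greater)
```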
Randomized Binary Search Randomized Binary Search is a variation of the classic binary search algorithm that introduces an element of randomness into the process. While the traditional binary search operates deterministically by repeatedly dividing the search interval in half, a randomized binary search adds a random step to the decision-making process.
How Does Randomized Binary Search Work? In a traditional binary search, you always choose the middle element of the current search range and compare it with the target value. The key difference in randomized binary search is that, instead of always choosing the middle element, you select a random element from the current search range to compare with the target.
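A minimal Python sketch of this idea (assuming a sorted input list; the function name is an illustrative choice, and it returns an index of the target or -1):

```python
import random

def randomized_binary_search(arr, target):
    """Binary search that probes a uniformly random index in the current
    range instead of the midpoint; arr must be sorted."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = random.randint(low, high)   # random probe, not the midpoint
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```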
Key Points:
Randomness: The randomness is introduced in how we select the element to compare in each iteration.
Time Complexity: The expected time complexity of randomized binary search is still O(log n), similar to classic binary search, because on average the algorithm still cuts the search space by a constant fraction in each step. The actual behavior in practice may vary slightly between runs due to the random choice of elements.
Expected Time Complexity of the Randomized Binary Search Algorithm
For n elements, let the expected time required be T(n). After we choose one random pivot, the array size reduces to some k. Since each pivot is chosen with equal probability, p = 1/n. T(n) is the sum, over all possible sizes after choosing the pivot, of the time for that size multiplied by the probability of choosing that pivot, plus the time taken to generate the random pivot index. Hence
T(n) = p*T(1) + p*T(2) + ..... + p*T(n) + 1
Putting p = 1/n:
T(n) = ( T(1) + T(2) + ..... + T(n) ) / n + 1
n*T(n) = T(1) + T(2) + .... + T(n) + n              .... eq(1)
Similarly, for n-1:
(n-1)*T(n-1) = T(1) + T(2) + ..... + T(n-1) + n-1   .... eq(2)
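Subtracting eq(2) from eq(1) completes the derivation, giving the harmonic-sum bound:

```latex
nT(n) - (n-1)T(n-1) = T(n) + 1
\;\Rightarrow\; (n-1)T(n) = (n-1)T(n-1) + 1
\;\Rightarrow\; T(n) = T(n-1) + \frac{1}{n-1}
\;\Rightarrow\; T(n) = T(1) + \sum_{i=1}^{n-1} \frac{1}{i} = T(1) + H_{n-1} = O(\log n)
```

So the expected number of probes is logarithmic, matching the deterministic version up to constant factors.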
Use Cases: When Binary Search Faces Worst-Case Scenarios: In deterministic binary search, the time complexity can degrade if the data is not well-distributed (for example, if all values are the same). Randomized binary search may help avoid some of these worst-case performance issues. Randomized Algorithms in Problem Solving: It can be used as part of a broader randomized approach to solving problems (e.g., in optimization or probabilistic methods).
Comparison with Deterministic Binary Search Deterministic Binary Search: Always selects the middle element, ensuring a direct and predictable approach to narrowing down the search space. Randomized Binary Search: Randomly selects an element, which could, in theory, lead to slightly different behavior across different runs, but still finds the target in an efficient manner.
Min-Cut Algorithm The Min-Cut Algorithm refers to a family of algorithms used to find the minimum cut in a flow network. A cut in a graph is a way of dividing the graph's vertices into two disjoint subsets, and the minimum cut is the cut that minimizes the total weight (capacity) of the edges crossing from one subset to the other.
Min-Cut Algorithm Problem Overview: Given an undirected graph (or directed graph with capacities on edges), the goal is to find a partition of the graph’s nodes into two disjoint sets such that the sum of the capacities of the edges crossing the partition is minimized. The min-cut problem is closely related to the maximum flow problem, due to the Max-Flow Min-Cut Theorem, which states that the maximum value of flow in a network is equal to the capacity of the minimum cut separating the source and sink.
Key Concepts Source (s) : A designated starting point in the graph. Sink (t) : A designated endpoint in the graph. Cut: A partition of the graph’s vertices into two disjoint sets, with one containing the source and the other containing the sink. Capacity of a cut: The sum of the capacities of the edges that cross the cut. Minimum cut: The cut that has the smallest possible capacity among all possible cuts
Algorithms to Find Min-Cut
1. Ford-Fulkerson Method (Edmonds-Karp)
2. Karger's Algorithm (Randomized)
1. Ford-Fulkerson Method (Edmonds-Karp): This is an augmenting-path-based algorithm used to solve the maximum flow problem. The Max-Flow Min-Cut Theorem implies that once the maximum flow is found (using Ford-Fulkerson or its more efficient variant Edmonds-Karp), the minimum cut can be easily identified. To find the min-cut:
- Perform a maximum flow computation.
- After the flow is computed, find the set of vertices reachable from the source in the residual graph.
- The edges that go from a reachable vertex to a non-reachable vertex in the residual graph form the minimum cut.
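The three steps above can be sketched in Python using BFS augmenting paths (Edmonds-Karp). The adjacency-matrix representation and the min_cut helper name are illustrative choices:

```python
from collections import deque

def min_cut(capacity, s, t):
    """Min cut via max flow (Edmonds-Karp). capacity is an n x n matrix;
    returns (max_flow_value, list_of_cut_edges)."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break  # no augmenting path left: flow is maximum
        # Find the bottleneck capacity along the path, then augment.
        bottleneck = float('inf')
        v = t
        while v != s:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
    # Vertices reachable from s in the final residual graph form the s-side.
    reachable = {s}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in reachable and residual[u][v] > 0:
                reachable.add(v)
                q.append(v)
    cut = [(u, v) for u in reachable for v in range(n)
           if v not in reachable and capacity[u][v] > 0]
    return flow, cut
```

By the Max-Flow Min-Cut Theorem, the capacity of the returned cut equals the returned flow value.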
2. Karger's Algorithm (Randomized): This is a randomized algorithm that finds the minimum cut in an undirected graph. The algorithm works by randomly contracting edges in the graph until only two vertices remain . The remaining cut is likely to be close to the minimum cut, but since it's randomized, multiple runs are necessary to ensure a correct result. Karger's algorithm is efficient in terms of expected time complexity but may require many repetitions to guarantee the minimum cut with high probability.
Steps:
1. Randomly pick an edge and contract it (combine its two endpoints into one vertex).
2. Repeat this process until only two vertices remain.
3. The cut between these two vertices is the candidate minimum cut for that run.
Time Complexity: The expected time complexity is O(n^2 log n), where n is the number of vertices.
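The contraction steps above can be sketched with a union-find structure standing in for explicit vertex merging (an illustrative simplification; the edge-list input and trial count are assumptions):

```python
import random

def karger_min_cut(edges, n, trials=100, seed=0):
    """Karger's randomized contraction. edges is a list of (u, v) pairs over
    vertices 0..n-1. Runs several independent trials and returns the
    smallest cut size found (any single trial may miss the true minimum)."""
    rng = random.Random(seed)
    best = float('inf')
    for _ in range(trials):
        # Union-find records which super-vertex each vertex was merged into.
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        remaining = n
        while remaining > 2:
            u, v = rng.choice(edges)
            ru, rv = find(u), find(v)
            if ru != rv:          # contract the edge (skip self-loops)
                parent[ru] = rv
                remaining -= 1
        # Count edges whose endpoints ended up in different super-vertices.
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```

Each trial finds a true minimum cut with probability at least 2/(n(n-1)), which is why on the order of n^2 log n repetitions are needed to succeed with high probability.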
Applications of Min-Cut Network design : Min-cut algorithms are used to design robust communication networks, ensuring that the network can withstand failure of some connections without splitting into disconnected components. Image segmentation : In computer vision, the min-cut algorithm is used for partitioning an image into segments that correspond to different regions. Clustering: Min-cut can be used in graph-based clustering methods to group similar objects together. Load balancing: In parallel computing, min-cut algorithms help distribute tasks efficiently across processors.
Game-Theoretic Techniques
Game theory and randomized algorithms can be combined in many interesting ways, particularly in situations where uncertainty and randomness are integral to decision-making and strategy. In these scenarios, players often choose strategies that are not deterministic but instead involve randomness (e.g., mixed strategies). Here are some key game-theoretic techniques that involve randomized algorithms:
1. Mixed Strategies
2. Randomized Algorithms in Auctions and Mechanism Design
3. Correlated Equilibria
4. Randomized Load Balancing
5. Randomized Social Choice and Voting
6. Randomized Approximation Algorithms
7. Stochastic Games
1. Mixed Strategies Definition: In game theory, a mixed strategy is one in which players randomize over possible actions, assigning probabilities to each of their available strategies. Application: Mixed strategies are commonly used in games where a pure strategy (one that always selects the same action) does not provide an optimal solution. For example, in rock-paper-scissors , players randomize their choices to avoid being predictable. Example: In a zero-sum game, a player might randomize their strategies (e.g., choosing between several possible actions with specific probabilities) to prevent the opponent from exploiting any patterns.
2. Randomized Algorithms in Auctions and Mechanism Design Definition: In auction theory and mechanism design, randomized algorithms are used to design mechanisms where randomness helps achieve certain desired outcomes, such as fairness or efficiency. Application: Randomized auctions (like Vickrey auctions) often use randomization to ensure that bidding strategies do not influence the final outcome in ways that are unfair or inefficient. Example: In the Vickrey-Clarke-Groves (VCG) auction, randomness can be used to prevent bidders from manipulating the auction outcome through false reporting of preferences.
3. Correlated Equilibria Definition : In a correlated equilibrium, players choose strategies based on some shared random signal. The correlation suggests that players will follow a recommendation, which may not be purely random but guided by the signal. Application: Randomized correlation can be useful in situations where players are better off cooperating or coordinating through some common external randomness, rather than choosing their strategies independently. Example: In a repeated game, players might use correlated signals to achieve better cooperation outcomes than they would in a standard Nash equilibrium, especially if the correlation helps them avoid the "prisoner's dilemma."
4. Randomized Load Balancing Definition: Randomized algorithms are often used in distributed systems, especially for load balancing in a way that minimizes the overall cost (or expected makespan) of task distribution. Application: In competitive environments where resources (e.g., processors, servers) need to be allocated to tasks, players can choose random strategies to balance their load. Example: In a game-theoretic model of load balancing, where multiple players (servers) are choosing how to allocate resources (tasks), randomized algorithms can minimize the overall cost (e.g., maximizing efficiency or minimizing delays).
5. Randomized Social Choice and Voting Definition: In social choice theory and voting systems, randomized algorithms can be used to resolve voting conflicts in a fair and transparent manner, especially when there are ties or ambiguous preferences. Application: Randomized algorithms can ensure fairness in elections or decision-making processes, particularly in situations where traditional deterministic rules could lead to unfair outcomes or deadlocks. Example: A randomized voting scheme could be used to break ties in elections, where each candidate has an equal probability of being selected if they tie in votes.
Monte Carlo Tree Search (MCTS) Definition: MCTS is a popular randomized algorithm used in decision-making processes for games that involve uncertainty, such as in chess, Go, or other strategic games. Application: The algorithm simulates possible game outcomes by randomly exploring the decision tree, selecting moves based on these simulated outcomes, and refining its strategy over time. Example: AlphaGo famously used MCTS combined with deep learning techniques to play Go, a game with a vast number of possible moves, using randomness to explore potential future states and refine its strategy.
Conclusion Incorporating randomness into game-theoretic models and algorithms allows players to handle uncertainty, prevent predictability, and sometimes achieve better outcomes than strictly deterministic strategies. These techniques, ranging from mixed strategies and randomized load balancing to Bayesian games and stochastic processes, are widely applied in economics, computer science, and beyond.
MIN-MAX and AND-OR Game Trees
A game tree is a full, balanced, binary tree in which internal nodes at an even distance from the root (including the root itself) are labeled MIN and internal nodes at an odd distance are labeled MAX. Each node of the tree is a record containing four fields: "type," "l-child," "r-child," and "value." "Type" is MIN, MAX, or LEAF; "l-child" and "r-child" are pointers to the node's left and right children respectively (which will be null in the case of a leaf); and "value" is the value returned by the node, either 0 or 1. The value returned by a MIN node is the lesser of the values of its two children. The value returned by a MAX node is the greater of the values of its two children. Initially only the leaves have values.
A min-max tree of depth 2k = 2 × 2 = 4 (for k = 2), with 2^{2k} − 1 = 4^2 − 1 = 15 internal nodes and 2^{2k} = 4^2 = 16 leaves.
A game tree in which each node has d children and there are k levels of MAX nodes and k levels of MIN nodes is denoted T_{d,k}. We will work exclusively with binary game trees, denoted T_{2,k}. Since node values are binary, a MIN node can be thought of as a logical AND and a MAX node as a logical OR.
(Figure: Input and output within a game tree)
Deterministic Evaluation
The following algorithm is a recursive deterministic algorithm for game tree evaluation; it can be called from the main program with the command Evaluate(root). Note that the algorithm does not necessarily evaluate every node. In the case of an OR node, if one child returns a 1, then evaluation of the other child is unnecessary: regardless of whether it is a 1 or a 0, the OR node will return a 1. Likewise, if an AND node is found to have one 0 child, we know immediately that the node will return the value 0.
Deterministic evaluation of an AND-OR game tree
1: Evaluate(node)
2: if (node.type = "LEAF") then
3:   return(node.value)
4: end if
5: if (node.type = "OR") then
6:   p1 = Evaluate(node.right)
7:   if (p1 = 1) then
8:     return p1
9:   else
10:    return(Evaluate(node.left))
11:  end if
12: end if
13: if (node.type = "AND") then
14:  p1 = Evaluate(node.left)
15:  if (p1 = 0) then
16:    return p1
17:  else
18:    return(Evaluate(node.right))
19:  end if
20: end if
Randomized Evaluation
The following algorithm is a randomized evaluation algorithm that uses coin tosses. Whether the algorithm evaluates the right child first or the left child first depends on the coin toss and is impossible for an adversary to know in advance. Note: the following algorithm is a Las Vegas algorithm in that it always returns the correct result, and in the worst case it is no less efficient than the deterministic algorithm.
1: RandEvaluate(node)
2: if (node.type = "LEAF") then
3:   return(node.value)
4: end if
5: if (node.type = "OR") then
6:   Toss a coin
7:   if (Heads) then
8:     p1 = RandEvaluate(node.left)
9:     if (p1 = 1) then
10:      return p1
11:    else
12:      return(RandEvaluate(node.right))
13:    end if
14:  end if
15:  if (Tails) then
16:    p2 = RandEvaluate(node.right)
17:    if (p2 = 1) then
18:      return p2
19:    else
20:      return(RandEvaluate(node.left))
21:    end if
22:  end if
23: end if
24: if (node.type = "AND") then
25:  Toss a coin
26:  if (Heads) then
27:    p1 = RandEvaluate(node.left)
28:    if (p1 = 0) then
29:      return p1
30:    else
31:      return(RandEvaluate(node.right))
32:    end if
33:  end if
34:  if (Tails) then
35:    p2 = RandEvaluate(node.right)
36:    if (p2 = 0) then
37:      return p2
38:    else
39:      return(RandEvaluate(node.left))
40:    end if
41:  end if
42: end if
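The pseudocode above can be condensed into a short Python sketch; representing nodes as nested tuples is an illustrative choice:

```python
import random

def rand_evaluate(node, rng=random):
    """Las Vegas evaluation of an AND-OR tree. A node is either
    ("LEAF", value) or (kind, left, right) with kind "AND" or "OR".
    The two children are evaluated in a random (coin-toss) order,
    short-circuiting whenever the first child decides the result."""
    if node[0] == "LEAF":
        return node[1]
    kind, left, right = node
    first, second = (left, right) if rng.random() < 0.5 else (right, left)
    p = rand_evaluate(first, rng)
    # OR short-circuits on 1, AND short-circuits on 0.
    if (kind == "OR" and p == 1) or (kind == "AND" and p == 0):
        return p
    return rand_evaluate(second, rng)
```

Whatever the coin tosses, the returned value is always correct; only the number of leaves inspected varies between runs.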
Cost Analysis of Randomized AND-OR Game Tree Evaluation
Given the randomized evaluation strategy, we would like to know the expected cost of evaluating a node. To begin, consider an OR node with two leaves. If the node is to return a 1, then at least one of the leaves must return a 1. With probability 1/2 this leaf is selected first and only one evaluation needs to be done. The other possibility (also occurring with probability 1/2) is that both leaves need to be evaluated. The expected number of evaluations for an OR node that returns a 1 is
E_OR = (1/2)*1 + (1/2)*2 = 3/2
A similar analysis with the 0s and 1s switched applies to AND nodes: the expected cost of evaluating an AND node that is to return a 0 is
E_AND = (1/2)*1 + (1/2)*2 = 3/2
In above cases in which the AND node returns a 1 or the OR node returns a 0, it may seem at first that the randomized algorithm has no advantage, since two evaluations will necessarily be required. 1. For an AND node to return a 1, however,both of its OR-node children must also return 1s — which is the “good” case for them. 2. For an OR node to return a 0, both of its AND-node children must also return a 0 — which is the “good” case for them. The algorithm benefits because what is bad for an AND node is good for an OR node and vice versa.
Theorem: The expected cost of the randomized algorithm for evaluating T_{2,k} is at most 3^k.
Proof: by induction on k. Base case: k = 1.
T_{2,1} has one AND node at the root, two OR nodes as children, and four leaves, as in the following figure. There are two cases to consider: those in which the root returns a 1, and those in which it returns a 0.
Case 1: If the root of T evaluates to 0, then at least one of the OR nodes must evaluate to 0. With probability 1/2 this OR node is chosen first, incurring a cost of 2 (an OR node that returns 0 must evaluate both of its leaves), and with probability 1/2 both sub-trees need to be evaluated, incurring an expected cost of 3/2 + 2. The expected cost of evaluating T is then at most
(1/2)*2 + (1/2)*(3/2 + 2) = 11/4
Case 2: If the root of T evaluates to 1, then both OR nodes have to be evaluated, and the expected cost is 2 × 3/2 = 3. The cost of evaluating the tree is therefore <= 3, and since k = 1 in this case and 3^1 = 3, the base case is established.
Inductive case: Assume that for every T_{2,k-1}, the expected evaluation cost is <= 3^{k-1}. Consider a tree T whose root is an OR node with two children, each of which is the root of a T_{2,k-1} game tree (following diagram, a & b). The evaluation of such a tree involves two cases: those in which the root evaluates to 1, and those in which it evaluates to 0.
The OR-node cases
Case 1: If the root of T evaluates to 1, then at least one of the two T_{2,k-1} sub-trees must evaluate to 1. With probability 1/2 this sub-tree is chosen first, incurring a cost <= 3^{k-1}, and with probability at most 1/2 both sub-trees need to be evaluated, incurring a cost of 2 × 3^{k-1}. The expected cost of evaluating T is then at most
(1/2)*3^{k-1} + (1/2)*2*3^{k-1} = (3/2)*3^{k-1}    .... eq(1)
Case 2: If the root of T evaluates to 0, then both sub-trees have to be evaluated, and the cost is 2 × 3^{k-1}.
The AND-node cases: Figure (a) shows an OR node with two T_{2,k-1} sub-trees, and figure (b) shows the full T_{2,k}, rooted at an AND node connecting two copies of the tree from (a). The root of T_{2,k} is, by definition (2.1), an AND node. The children of this root are sub-trees rooted at OR nodes, like the one just discussed (see figure). The evaluation of T_{2,k} can again be broken into two cases: those in which the root returns a 1, and those in which it returns a 0.
Case 1: If T_{2,k} evaluates to 1, then both sub-trees rooted at OR nodes return 1. By equation (1) and linearity of expectation, the expected cost of evaluating T_{2,k} is
2 * (3/2)*3^{k-1} = 3^k
Case 2: If T_{2,k} evaluates to 0, then at least one of its children returns 0. With probability 1/2 this child is chosen first, so the expected cost of evaluating T_{2,k} is at most
(1/2)*(2*3^{k-1}) + (1/2)*((3/2)*3^{k-1} + 2*3^{k-1}) = (11/4)*3^{k-1} <= 3^k
So the expected cost of evaluating T_{2,k} does not exceed 3^k in either case.
Minimax Algorithm in Game Theory
Minimax is a kind of backtracking algorithm used in decision making and game theory to find the optimal move for a player, assuming that the opponent also plays optimally. It is widely used in two-player turn-based games such as Tic-Tac-Toe, Chess, etc. In Minimax the two players are called the maximizer and the minimizer. The maximizer tries to get the highest score possible, while the minimizer tries to do the opposite and get the lowest score possible. Every board state has a value associated with it. In a given state, if the maximizer has the upper hand then the score of the board will tend to be some positive value; if the minimizer has the upper hand in that board state then it will tend to be some negative value. The values of the board are calculated by heuristics that are unique to every type of game.
Example: Consider a game which has 4 final states, where the paths to each final state run from the root to the 4 leaves of a perfect binary tree, as shown below. Assume you are the maximizing player and you get the first chance to move, i.e., you are at the root and your opponent is at the next level. Which move would you make as the maximizing player, considering that your opponent also plays optimally?
Maximizer goes LEFT: It is now the minimizer's turn. The minimizer has a choice between 3 and 5. Being the minimizer, it will certainly choose the lesser of the two, that is, 3.
Maximizer goes RIGHT: It is now the minimizer's turn. The minimizer has a choice between 2 and 9, and will choose 2 as it is the lesser of the two values.
Being the maximizer, you would choose the larger value, that is, 3. Hence the optimal move for the maximizer is to go LEFT and the optimal value is 3. Now the game tree looks like below:
The below tree shows two possible scores when maximizer makes left and right moves. Note: Even though there is a value of 9 on the right subtree, the minimizer will never pick that. We must always assume that our opponent plays optimally.
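The example above can be reproduced with a short minimax sketch; representing the perfect binary tree implicitly by its left-to-right leaf scores is a common convention:

```python
def minimax(depth, node_index, is_maximizer, scores, height):
    """Plain minimax over a perfect binary tree whose leaves hold the
    given scores; height is the depth at which the leaves sit."""
    if depth == height:                       # reached a leaf
        return scores[node_index]
    left = minimax(depth + 1, node_index * 2, not is_maximizer, scores, height)
    right = minimax(depth + 1, node_index * 2 + 1, not is_maximizer, scores, height)
    return max(left, right) if is_maximizer else min(left, right)
```

Here minimax(0, 0, True, [3, 5, 2, 9], 2) returns 3, the optimal value found above.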
Alpha-Beta Pruning
Alpha-Beta pruning is not actually a new algorithm, but rather an optimization technique for the minimax algorithm. It reduces the computation time by a huge factor. This allows us to search much faster and even go into deeper levels in the game tree. It cuts off branches in the game tree which need not be searched because a better move is already available. It is called Alpha-Beta pruning because it passes 2 extra parameters in the minimax function, namely alpha and beta.
Alpha is the best value that the maximizer currently can guarantee at that level or above . Beta is the best value that the minimizer currently can guarantee at that level or below .
The initial call starts from A. The value of alpha here is -INFINITY and the value of beta is +INFINITY. These values are passed down to subsequent nodes in the tree. At A the maximizer must choose the max of B and C, so A calls B first. At B the minimizer must choose the min of D and E, and hence B calls D first. At D, it looks at its left child, which is a leaf node. This node returns a value of 3. Now the value of alpha at D is max(-INF, 3), which is 3. To decide whether it is worth looking at its right node or not, it checks the condition beta <= alpha. This is false, since beta = +INF and alpha = 3, so the search continues. D now looks at its right child, which returns a value of 5. At D, alpha = max(3, 5), which is 5. Now the value of node D is 5.
D returns a value of 5 to B. At B, beta = min(+INF, 5), which is 5. The minimizer is now guaranteed a value of 5 or less. B now calls E to see if it can get a lower value than 5. At E the values of alpha and beta are not -INF and +INF but instead -INF and 5 respectively, because the value of beta was changed at B, and that is what B passed down to E. Now E looks at its left child, which is 6. At E, alpha = max(-INF, 6), which is 6. Here the condition beta <= alpha becomes true: beta is 5 and alpha is 6. Hence E breaks out and returns 6 to B. Note how it did not matter what the value of E's right child is. It could have been +INF or -INF; we never even had to look at it, because the minimizer was guaranteed a value of 5 or less. As soon as the maximizer at E saw the 6, it knew the minimizer would never come this way, because the minimizer can already get a 5 on the left side of B. This way we did not have to look at that 9, and hence saved computation time. E returns a value of 6 to B. At B, beta = min(5, 6), which is 5. The value of node B is also 5. So far this is how our game tree looks; the 9 is crossed out because it was never computed.
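The walkthrough above corresponds to the following alpha-beta sketch (same implicit-tree convention as before; the leading leaf scores [3, 5, 6, 9, ...] mirror the example, with the tail values assumed for illustration):

```python
import math

def alphabeta(depth, node_index, is_max, scores, height,
              alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a perfect binary tree of leaf
    scores; the remaining children are pruned as soon as beta <= alpha."""
    if depth == height:
        return scores[node_index]
    best = -math.inf if is_max else math.inf
    for child in (node_index * 2, node_index * 2 + 1):
        val = alphabeta(depth + 1, child, not is_max, scores, height, alpha, beta)
        if is_max:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if beta <= alpha:
            break                 # prune the remaining child
    return best
```

Here alphabeta(0, 0, True, [3, 5, 6, 9, 1, 2, 0, -1], 3) returns 5, and the leaf 9 is never visited, just as in the trace above.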
Binary planar partition A binary planar partition is a specific type of partition of a planar graph or set of points in the plane. The concept of "binary" in this context usually refers to dividing the structure into two parts (binary partition) while ensuring that the structure remains planar (i.e., it can be drawn on a flat plane without edges crossing). These types of partitions are commonly encountered in computational geometry, graph theory, and optimization. The idea of a binary partition often involves partitioning a graph, a set of points, or a geometric shape into two subsets, with each subset being a part of a planar structure.
Possible Contexts for Binary Planar Partition:
1. Graph Theory
2. Computational Geometry
3. Binary Space Partitioning (BSP) Trees
4. Planar Graph Cut Problems
1. Graph Theory: In graph theory, a binary planar partition might refer to partitioning the vertices of a planar graph into two subsets such that the edges of the graph are divided between these two subsets while maintaining the planarity of the graph (i.e., the partition does not introduce edge crossings). This could be useful in problems like graph coloring or network design where you're looking to split a graph into two parts with certain properties.
Possible Contexts for Binary Planar Partition: 2.Computational Geometry (Binary Partition of a Set of Points): In computational geometry, the term could be used to describe the partitioning of a set of points in the plane into two subsets in such a way that the points in each subset are still part of a planar arrangement. A common example of a geometric partition is binary space partitioning (BSP), which is used to recursively divide a space into two half-spaces (with a plane acting as the divider). While BSP is typically more about dividing space into regions, a similar approach could be applied to partitioning sets of points or graphs into two subsets.
Possible Contexts for Binary Planar Partition: 3.Binary Space Partitioning (BSP) Trees: A binary space partition (BSP) tree is a data structure used in computer graphics, computer-aided design (CAD), and computational geometry for recursive division of space. BSP involves dividing a space (or the plane) with a sequence of planes, where each plane partitions the space into two half-spaces. A binary partition of a planar object could be recursively subdivided using BSP trees.
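A minimal sketch of the recursive division described above, using alternating axis-aligned splits (a kd-tree, which is a simple special case of a BSP tree; the dictionary layout is an illustrative choice):

```python
def build_bsp(points, depth=0, leaf_size=2):
    """Recursively split 2-D points into two half-planes with axis-aligned
    lines, alternating between x- and y-splits at each level."""
    if len(points) <= leaf_size:
        return {"leaf": True, "points": list(points)}
    axis = depth % 2                      # 0: split on x, 1: split on y
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {
        "leaf": False,
        "axis": axis,
        "split": pts[mid][axis],          # coordinate of the splitting line
        "left": build_bsp(pts[:mid], depth + 1, leaf_size),
        "right": build_bsp(pts[mid:], depth + 1, leaf_size),
    }
```

A general BSP tree allows arbitrarily oriented splitting planes; restricting to axis-aligned median splits keeps the sketch short while still showing the recursive two-way division of the plane.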
Possible Contexts for Binary Planar Partition: 4. Planar Graph Cut Problems: A binary partition could also refer to partitioning a planar graph in such a way that it forms a cut (separating the graph into two disjoint subsets), possibly optimizing some cost function (like the minimum cut). This is related to the min-cut problem and can involve binary partitioning of the graph while maintaining planarity constraints.
Applications: Geometric Algorithms: Binary planar partitions are useful in geometric algorithms that involve separating space (e.g., in convex hulls or range searching). Graph Partitioning: In network design or circuit layout problems, where you may want to partition a graph into two parts while preserving planarity or minimizing certain edge costs (e.g., minimizing the cut in a planar graph). Computer Graphics and Visualization : BSP trees and similar partitioning strategies help in dividing spaces efficiently for rendering, ray tracing, and collision detection in 3D models.