Module_II_Problem_Solving_AI_MLgame.pptx

Uploaded by soundariya20, 18 slides, Oct 25, 2025

About This Presentation

AI game theory
Game Theory
Two-player zero-sum games
Optimal Decisions in Games
Heuristic Alpha–Beta Tree Search
Monte Carlo Tree Search
Stochastic Games
Partially Observable Games
Limitations of Game Search Algorithms
Constraint Satisfaction Problems- Defining Constraint Satisfaction Problem...


Slide Content

Module II – Problem Solving in AI
Essentials of Artificial Intelligence and Machine Learning
Presented by: [Your Name]
Institution: [Your College Name]

Introduction
Problem solving is the core of Artificial Intelligence. It involves searching for optimal decisions or actions in a given environment. Methods include Game Theory, Search Algorithms, and Constraint Satisfaction Problems (CSPs).

Game Theory Overview
Definition: the study of strategic interaction between rational decision-makers. Applications: AI agents, economics, robotics, negotiation systems. The focus here is on two-player zero-sum games for simplicity.

Two-Player Zero-Sum Games
Concept: one player's gain is the other's loss. Examples: Chess, Checkers, Tic-Tac-Toe. Representation: payoff matrices or game trees. Goal: maximize one's own outcome while minimizing the opponent's advantage.

Optimal Decisions in Games
Agents use the minimax strategy. The MAX player tries to maximize utility; the MIN player tries to minimize MAX's utility. A game tree is used to evaluate the possible moves.
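
The minimax evaluation described above can be sketched in a few lines of Python. This is a minimal illustration, not from the slides: the game tree is assumed to be a nested list whose leaves are integer utilities, with players alternating level by level.

```python
def minimax(node, is_max):
    """Return the minimax value of a game-tree node.

    A node is either a leaf utility (int) or a list of child nodes.
    MAX picks the child with the highest value, MIN the lowest."""
    if isinstance(node, int):          # terminal state: return its utility
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Depth-2 example tree: MAX to move at the root, MIN at the next level.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))  # MIN yields 3, 2, 2 at depth 1, so MAX picks 3
```

Each recursive call flips the player, so the same function handles both MAX and MIN levels.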

Heuristic Alpha–Beta Tree Search
An improvement over minimax that prunes branches which cannot affect the final decision. α (alpha): the best value MAX can guarantee so far. β (beta): the best value MIN can guarantee so far. Effect: reduces computation without changing the result. Example: chess programs use α–β pruning for faster move selection.
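
A sketch of the pruning rule, under the same nested-list tree assumption as the minimax example (this representation is mine, not the slides'): whenever α ≥ β, the current branch can no longer influence the ancestors' choice and is cut off.

```python
def alphabeta(node, is_max, alpha=float('-inf'), beta=float('inf')):
    """Minimax value with alpha-beta pruning; same result, fewer visits."""
    if isinstance(node, int):
        return node
    if is_max:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break          # beta cutoff: MIN would never allow this branch
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break              # alpha cutoff: MAX already has something better
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))   # same answer as plain minimax: 3
```

On this small tree the last subtree is cut short after its first leaf, illustrating why pruning saves work without changing the chosen move.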

Monte Carlo Tree Search (MCTS)
A probabilistic search algorithm based on random simulations. Steps: Selection, Expansion, Simulation, Backpropagation. Used in Go, Atari game playing, and reinforcement learning. Balances exploration vs. exploitation.
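
The four steps above can be sketched end to end on a toy game of my choosing (not from the slides): a Nim pile where each player removes 1 or 2 objects and whoever takes the last one wins. UCB1 balances exploration and exploitation during Selection.

```python
import math, random

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, to_move, parent=None, move=None):
        self.pile, self.to_move = pile, to_move
        self.parent, self.move = parent, move
        self.children, self.wins, self.visits = [], 0, 0
        self.untried = legal_moves(pile)

def ucb1(child, parent_visits, c=1.4):
    # exploitation (win rate) + exploration (uncertainty bonus)
    return (child.wins / child.visits
            + c * math.sqrt(math.log(parent_visits) / child.visits))

def mcts_best_move(pile, iterations=3000, seed=0):
    rng = random.Random(seed)
    root = Node(pile, to_move=0)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 until a node has untried moves
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ucb1(ch, node.visits))
        # 2. Expansion: add one unexplored child
        if node.untried:
            m = node.untried.pop()
            child = Node(node.pile - m, 1 - node.to_move, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout; the player taking the last object wins
        pile_left, current = node.pile, node.to_move
        winner = 1 - node.to_move      # if the game is already over here
        while pile_left > 0:
            pile_left -= rng.choice(legal_moves(pile_left))
            winner, current = current, 1 - current
        # 4. Backpropagation: credit each node from the perspective of the
        #    player who made the move into it
        while node is not None:
            node.visits += 1
            if winner == 1 - node.to_move:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts_best_move(4))   # taking 1 leaves the opponent a losing pile of 3
```

With a pile of 4, leaving 3 is a known losing position for the opponent, so the most-visited root child should be the take-1 move.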

Stochastic Games
Definition: games with probabilistic outcomes, i.e., randomness in state transitions or rewards. Examples: card games, dice games. Often modeled using Markov Decision Processes (MDPs).

Partially Observable Games
Players have incomplete information about the game state. Examples: Poker, hide-and-seek, security games. These require reasoning under uncertainty using belief states.

Limitations of Game Search Algorithms
High computational cost (the game tree grows exponentially). Memory-intensive for large game trees. Difficulty with stochastic or partially observable environments. Heuristic evaluation functions may be inaccurate or biased.

Constraint Satisfaction Problems (CSPs)
Definition: a problem defined by variables, domains, and constraints. Goal: find an assignment of values that satisfies all constraints. Examples: Sudoku, timetabling, scheduling, map coloring.

Defining a CSP
Components: variables X1, X2, ..., Xn; domains (the possible values for each variable); constraints (relations restricting which combinations of values are allowed). A solution is a complete assignment that satisfies all constraints.
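
The three components can be written down directly in Python. As an illustration I use the standard textbook map-coloring instance (the regions of Australia); the variable names and constraints below are that example's, not the slides'.

```python
# Variables: the regions to colour.
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
# Domains: each variable may take any of three colours.
domains = {v: ["red", "green", "blue"] for v in variables}
# Constraints: adjacent regions must receive different colours.
neighbors = {("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
             ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
             ("NSW", "V")}

def satisfies(assignment):
    """A complete assignment is a solution iff every constraint holds."""
    return all(assignment[a] != assignment[b] for a, b in neighbors)

solution = {"WA": "red", "NT": "green", "SA": "blue", "Q": "red",
            "NSW": "green", "V": "red", "T": "red"}
print(satisfies(solution))   # True: no two adjacent regions share a colour
```

Note that Tasmania ("T") has no constraints at all, so any colour works there; that kind of independent subproblem is picked up again under "Structure of Problems".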

Constraint Propagation & Inference
Constraint propagation simplifies a CSP by enforcing local consistency. Techniques: node consistency, arc consistency (the AC-3 algorithm), path consistency. Goal: reduce the search space before or during search.
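
A compact sketch of AC-3, specialized (as an assumption of mine) to binary not-equal constraints such as map coloring: whenever a domain shrinks, every arc pointing at that variable is re-queued.

```python
from collections import deque

def ac3(domains, arcs):
    """AC-3 for binary not-equal constraints: repeatedly remove values
    that have no consistent counterpart in a neighbouring domain."""
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        # values of x with no supporting value left in y's domain
        pruned = [vx for vx in domains[x]
                  if not any(vx != vy for vy in domains[y])]
        if pruned:
            domains[x] = [v for v in domains[x] if v not in pruned]
            if not domains[x]:
                return False           # a domain emptied: no solution exists
            # x changed, so arcs pointing at x must be re-checked
            queue.extend((z, w) for z, w in arcs if w == x)
    return True

# WA is fixed to red; arc consistency removes red from its neighbours.
domains = {"WA": ["red"], "NT": ["red", "green", "blue"],
           "SA": ["red", "green", "blue"]}
pairs = [("WA", "NT"), ("WA", "SA"), ("NT", "SA")]
arcs = {(a, b) for a, b in pairs} | {(b, a) for a, b in pairs}
consistent = ac3(domains, arcs)
print(consistent, domains["NT"], domains["SA"])
```

Propagation alone does not solve this instance, but it shrinks the search space: NT and SA each lose one of three values before any search begins.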

Backtracking Search for CSPs
Depth-first search with backtracking: assign variables one at a time; when a conflict occurs, backtrack and try a new value. Can be improved with the MRV (Minimum Remaining Values) heuristic for variable ordering.
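
The procedure above can be sketched as follows, again on a small map-coloring instance of my own choosing. MRV appears as the variable-selection rule; here all domains start equal, so its effect shows up once domains are pruned (e.g., after running AC-3 first).

```python
def backtrack(assignment, variables, domains, adj):
    """Depth-first search with backtracking and the MRV heuristic."""
    if len(assignment) == len(variables):
        return assignment
    unassigned = [v for v in variables if v not in assignment]
    var = min(unassigned, key=lambda v: len(domains[v]))   # MRV
    for value in domains[var]:
        # consistency check against already-assigned neighbours
        if all(assignment.get(n) != value for n in adj[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, adj)
            if result is not None:
                return result
            del assignment[var]        # dead end downstream: undo, try next
    return None

# Colouring the triangle WA-NT-SA plus Q (adjacent to NT and SA).
adj = {"WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
       "SA": {"WA", "NT", "Q"}, "Q": {"NT", "SA"}}
variables = list(adj)
domains = {v: ["red", "green", "blue"] for v in variables}
solution = backtrack({}, variables, domains, adj)
print(solution)
```

The triangle forces all three colours to appear, and Q then reuses whichever colour its two neighbours do not have.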

Local Search for CSPs
Start with an initial (possibly inconsistent) complete assignment and iteratively modify it to reduce the number of constraint violations. Example: the min-conflicts heuristic used for the N-Queens problem.
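
A minimal min-conflicts solver for N-Queens, assuming the usual one-queen-per-row encoding (the seed and step budget are arbitrary choices of mine):

```python
import random

def min_conflicts_nqueens(n, max_steps=10000, seed=0):
    """Local search for N-Queens with the min-conflicts heuristic."""
    rng = random.Random(seed)
    queens = [rng.randrange(n) for _ in range(n)]   # queens[row] = column

    def conflicts(row, col):
        # queens attacking square (row, col), ignoring the queen in `row`
        return sum(1 for r in range(n) if r != row and
                   (queens[r] == col or abs(queens[r] - col) == abs(r - row)))

    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(r, queens[r]) > 0]
        if not conflicted:
            return queens               # no queen is attacked: solved
        row = rng.choice(conflicted)
        # move the queen to a least-conflicted column (ties broken at random)
        best = min(conflicts(row, c) for c in range(n))
        queens[row] = rng.choice(
            [c for c in range(n) if conflicts(row, c) == best])
    return None                         # step budget exhausted

print(min_conflicts_nqueens(8))
```

The starting assignment is complete but usually inconsistent; each step repairs one conflicted queen, which is why min-conflicts scales to very large N where backtracking would struggle.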

Structure of Problems
Exploiting problem structure can improve efficiency: independent subproblems can be solved separately; tree-structured CSPs can be solved in linear time; decomposition and graph-based methods help scalability.
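
The tree-structured case can be sketched concretely. This is my illustration, again restricted to not-equal constraints: prune bottom-up so every parent value keeps a compatible child value, then assign top-down, with no backtracking needed.

```python
def solve_tree_csp(parent, order, domains):
    """Solve a tree-structured CSP with not-equal constraints in O(n*d^2):
    make arcs parent->child consistent bottom-up, then assign top-down."""
    doms = {v: list(domains[v]) for v in order}
    # bottom-up pass: drop parent values with no consistent child value
    for v in reversed(order[1:]):
        p = parent[v]
        doms[p] = [x for x in doms[p] if any(x != y for y in doms[v])]
        if not doms[p]:
            return None                 # inconsistent: no solution
    # top-down pass: any remaining choice extends to a full solution
    assignment = {order[0]: doms[order[0]][0]}
    for v in order[1:]:
        assignment[v] = next(y for y in doms[v] if y != assignment[parent[v]])
    return assignment

# A path A-B-C-D (a tree) coloured with only two colours.
parent = {"B": "A", "C": "B", "D": "C"}
order = ["A", "B", "C", "D"]
domains = {v: ["red", "green"] for v in order}
print(solve_tree_csp(parent, order, domains))
```

Two colours suffice here precisely because the constraint graph has no cycles; the same instance with a triangle would force a third colour and break the linear-time guarantee.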

Summary
Game theory and CSPs are key AI problem-solving models. Search algorithms (minimax, α–β pruning, MCTS) guide rational decision-making. CSP techniques combine inference, propagation, and search to find valid solutions.

References
Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th Edition. AI Course Material – Module II: Problem Solving.