Presentation on Artificial Intelligence (autosaved)

divya1587 · 14 views · 122 slides · May 28, 2024


Slide Content

Constraint satisfaction problems
CS171, Fall 2016
Introduction to Artificial Intelligence
Prof. Alexander Ihler

Constraint Satisfaction Problems
• What is a CSP?
  – Finite set of variables, X1, X2, …, Xn
  – Nonempty domain of possible values for each: D1, …, Dn
  – Finite set of constraints, C1, …, Cm
• Each constraint Ci limits the values that variables can take, e.g., X1 ≠ X2
  – Each constraint Ci is a pair: Ci = (scope, relation)
    • Scope = tuple of variables that participate in the constraint
    • Relation = list of allowed combinations of values
      May be an explicit list of allowed combinations
      May be an abstract relation allowing membership testing & listing
• CSP benefits
  – Standard representation pattern
  – Generic goal and successor functions
  – Generic heuristics (no domain-specific expertise required)
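The generic representation above (variables, domains, constraints) can be sketched directly as data. A minimal Python sketch, using the Australia map-coloring problem that appears later in these slides; the encoding details (sets for domains, a set of edges for the binary not-equal constraints) are my own choices, not part of the slides:

```python
# Generic CSP representation: variables, domains, constraints.
# Here the constraints are binary not-equal constraints between
# adjacent Australian regions (map coloring).
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}
neighbors = {
    ("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
    ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V"),
}

def consistent(assignment):
    """Generic goal-test helper: no constraint among assigned variables is violated."""
    return all(assignment[a] != assignment[b]
               for a, b in neighbors if a in assignment and b in assignment)

def is_solution(assignment):
    """A solution = complete and consistent assignment."""
    return len(assignment) == len(variables) and consistent(assignment)
```

Note how the goal test is generic: it only inspects the constraint set, with no domain-specific expertise.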

Example: Sudoku
• Problem specification
  Variables: {A1, A2, A3, …, I7, I8, I9}
  Domains: {1, 2, 3, …, 9}
  Constraints:
    each row, column “all different”: alldiff(A1,A2,A3,…,A9), …
    each 3×3 block “all different”: alldiff(G7,G8,G9,H7,…,I9), …
• Task: solve (complete a partial solution)
  check “well-posed”: exactly one solution?
[9×9 grid with rows A–I and columns 1–9]
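The 27 alldiff groups listed above can be generated programmatically. A small sketch (the row-letter/column-digit naming follows the slide; the grouping code itself is an assumption about how one might implement it):

```python
# Generate the Sudoku CSP's variables and its 27 alldiff constraint
# groups: 9 rows, 9 columns, 9 major 3x3 blocks.
rows = "ABCDEFGHI"
cols = "123456789"

variables = [r + c for r in rows for c in cols]          # 81 slots, A1..I9
row_groups = [[r + c for c in cols] for r in rows]
col_groups = [[r + c for r in rows] for c in cols]
block_groups = [[r + c for r in rs for c in cs]
                for rs in ("ABC", "DEF", "GHI")
                for cs in ("123", "456", "789")]
alldiff_groups = row_groups + col_groups + block_groups  # 27 constraints
```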

CSPs: What is a Solution?
• State: assignment of values to some or all variables
  – Assignment is complete when every variable has an assigned value
  – Assignment is partial when one or more variables have no assigned value
• Consistent assignment
  – An assignment that does not violate any constraint
• A solution to a CSP is a complete and consistent assignment
  – All variables are assigned, and no constraints are violated
• CSPs may require a solution that maximizes an objective
  – Linear objective ⇒ linear programming or integer linear programming
  – Ex: “Weighted” CSPs
• Examples of applications
  – Scheduling the time of observations on the Hubble Space Telescope
  – Airline schedules
  – Cryptography
  – Computer vision, image interpretation

A solution is any setting of the variables that satisfies all the constraints.

Example: Map Coloring
•Constraint graph
–Vertices: variables
–Edges: constraints
(connect involved variables)
•Graphical model
–Abstracts the problem to a canonical form
–Can reason about problem through graph connectivity
–Ex: Tasmania can be solved independently (more later)
•Binary CSP
–Constraints involve at most two variables
–Sometimes called “pairwise”

Aside: Graph coloring
• More general problem than map coloring
• Planar graph: a graph drawn in the 2D plane with no edge crossings
• Guthrie’s conjecture (1852):
  Every planar graph can be colored with ≤ 4 colors
• Proved (using a computer) in 1977 (Appel & Haken 1977)

Varieties of CSPs
• Discrete variables
  – Finite domains, size d ⇒ O(dⁿ) complete assignments
    • Ex: Boolean CSPs: Boolean satisfiability (NP-complete)
  – Infinite domains (integers, strings, etc.)
    • Ex: Job scheduling, variables are start/end days for each job
    • Need a constraint language, e.g., StartJob_1 + 5 ≤ StartJob_3
    • Infinitely many solutions
    • Linear constraints: solvable
    • Nonlinear: no general algorithm
• Continuous variables
  – Ex: Building an airline schedule or class schedule
  – Linear constraints: solvable in polynomial time by LP methods

Varieties of constraints
• Unary constraints involve a single variable,
  – e.g., SA ≠ green
• Binary constraints involve pairs of variables,
  – e.g., SA ≠ WA
• Higher-order constraints involve 3 or more variables,
  – Ex: jobs A, B, C cannot all be run at the same time
  – Can always be expressed using multiple binary constraints
• Preference (soft constraints)
  – Ex: “red is better than green” can often be represented by a cost for each variable assignment
  – Combines optimization with CSPs

Simplify…
•We restrict attention to:
•Discrete & finite domains
–Variables have a discrete, finite set of values
•No objective function
–Any complete & consistent solution is OK
•Solution
–Find a complete & consistent assignment
•Example: Sudoku puzzles

Binary CSPs
CSPs only need binary constraints!
• Unary constraints
  – Just delete values from the variable’s domain
• Higher order (3 or more variables): reduce to binary
  – Simple example: 3 variables X, Y, Z
  – Domains Dx={1,2,3}, Dy={1,2,3}, Dz={1,2,3}
  – Constraint C[X,Y,Z] = {X+Y=Z} = {(1,1,2),(1,2,3),(2,1,3)}
    (Plus other variables & constraints elsewhere in the CSP)
  – Create a new variable W, taking values as triples (3-tuples)
  – Domain of W is Dw = {(1,1,2),(1,2,3),(2,1,3)}
    • Dw is exactly the tuples that satisfy the higher-order constraint
  – Create three new constraints:
    • C[X,W] = { [1,(1,1,2)], [1,(1,2,3)], [2,(2,1,3)] }
    • C[Y,W] = { [1,(1,1,2)], [2,(1,2,3)], [1,(2,1,3)] }
    • C[Z,W] = { [2,(1,1,2)], [3,(1,2,3)], [3,(2,1,3)] }
  Other constraints elsewhere involving X, Y, Z are unaffected
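The reduction above can be computed mechanically. A minimal sketch that builds W's domain and the three binary "projection" constraints for the X+Y=Z example (representing each binary constraint as a set of allowed pairs is my own encoding choice):

```python
# Hidden-variable reduction of the ternary constraint X+Y=Z
# (domains {1,2,3}) to one new variable W plus three binary constraints.
triples = [(x, y, z)
           for x in (1, 2, 3) for y in (1, 2, 3) for z in (1, 2, 3)
           if x + y == z]

Dw = set(triples)                    # domain of W: satisfying triples only
C_XW = {(t[0], t) for t in triples}  # X must match W's first component
C_YW = {(t[1], t) for t in triples}  # Y must match W's second component
C_ZW = {(t[2], t) for t in triples}  # Z must match W's third component
```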

Example: Cryptarithmetic problems
• Find numeric substitutions that make an equation hold:
    T W O
  + T W O
  = F O U R
For example:
    7 3 4
  + 7 3 4
  = 1 4 6 8
i.e., O = 4, R = 8, W = 3, U = 6, T = 7, F = 1
Note: not unique – how many solutions?
[Constraint graph over T, W, O, F, U, R and carry variables C1, C2, C3]
all-different(T, W, O, F, U, R)
O + O = R + 10·C1
W + W + C1 = U + 10·C2
T + T + C2 = O + 10·C3
C3 = F
Non-pairwise CSP: C1 = {0,1}, C2 = {0,1}, C3 = {0,1}

Example: Cryptarithmetic problems
• Try it yourself at home:
  (a frequent request from college students to parents)
    S E N D
  + M O R E
  = M O N E Y
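For checking your answer, a plain brute-force sketch (not the CSP machinery from these slides) that enumerates digit assignments with distinct digits and no leading zeros; it takes a few seconds since it tries all 10P8 permutations:

```python
# Brute-force SEND + MORE = MONEY: assign distinct digits to the
# eight letters S,E,N,D,M,O,R,Y, with S and M nonzero.
from itertools import permutations

solutions = []
for s, e, n, d, m, o, r, y in permutations(range(10), 8):
    if s == 0 or m == 0:          # no leading zeros
        continue
    send = 1000*s + 100*e + 10*n + d
    more = 1000*m + 100*o + 10*r + e
    money = 10000*m + 1000*o + 100*n + 10*e + y
    if send + more == money:
        solutions.append((send, more, money))
```

This puzzle turns out to have a unique solution, unlike TWO + TWO = FOUR above.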

Random binary CSPs
• A random binary CSP is defined by a four-tuple (n, d, p1, p2)
  – n = the number of variables.
  – d = the domain size of each variable.
  – p1 = probability a constraint exists between two variables.
  – p2 = probability a pair of values in the domains of two variables connected by a constraint is incompatible.
    • Note that R&N lists compatible pairs of values instead.
    • Equivalent formulations; just take the set complement.
• (n, d, p1, p2) generate random binary constraints
• The so-called “model B” of Random CSP (n, d, n1, n2)
  – n1 = p1·n(n−1)/2 pairs of variables are randomly and uniformly selected and binary constraints are posted between them.
  – For each constraint, n2 = p2·d² randomly and uniformly selected pairs of values are picked as incompatible.
• The random CSP as an optimization problem (minCSP).
  – Goal is to minimize the total sum of values for all variables.
(adapted from http://www.unitime.org/csp.php)

CSP as a standard search problem
• A CSP can easily be expressed as a standard search problem.
• Incremental formulation
  – Initial State: the empty assignment {}
  – Actions: Assign a value to an unassigned variable provided that it does not violate a constraint
  – Goal test: the current assignment is complete
    (by construction it is consistent)
  – Path cost: constant cost for every step (not really relevant)
• Aside: can also use complete-state formulation
  – Local search techniques (Chapter 4) tend to work well
BUT: solution is at depth n (# of variables)
For BFS: branching factor at top level is nd; next level: (n−1)d; …
Total: n!·dⁿ leaves! But there are only dⁿ complete assignments!

Commutativity
• CSPs are commutative.
  – Order of any given set of actions has no effect on the outcome.
  – Example: choose colors for Australian territories, one at a time.
    • [WA=red then NT=green] same as [NT=green then WA=red]
• All CSP search algorithms can generate successors by considering assignments for only a single variable at each node in the search tree
  ⇒ there are dⁿ irredundant leaves
• (Figure out later which value to assign to which variable.)

Backtracking search
•Similar to depth-first search
–At each level, pick a single variable to expand
–Iterate over the domain values of that variable
•Generate children one at a time, one per value
–Backtrack when a variable has no legal values left
•Uninformed algorithm
–Poor general performance

Backtracking search
functionBACKTRACKING-SEARCH(csp) returna solution or failure
returnRECURSIVE-BACKTRACKING({} , csp)
functionRECURSIVE-BACKTRACKING(assignment, csp) returna solution or failure
ifassignmentis complete then return assignment
varSELECT-UNASSIGNED-VARIABLE(VARIABLES[csp],assignment,csp)
for each value in ORDER-DOMAIN-VALUES(var, assignment, csp)do
ifvalueis consistent with assignmentaccording to CONSTRAINTS[csp] then
add {var=value}to assignment
resultRECURSIVE-BACTRACKING(assignment, csp)
ifresult failure then returnresult
remove {var=value}from assignment
return failure
(R&N Fig. 6.5)
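The pseudocode above transcribes almost line-for-line into Python. A minimal sketch specialized to binary not-equal constraints (map coloring); variable and value ordering are deliberately naive here, since the ordering heuristics come later in the slides:

```python
# Recursive backtracking in the style of R&N Fig. 6.5, for CSPs whose
# constraints are binary not-equal constraints given as a set of edges.
def backtracking_search(variables, domains, neighbors):
    def consistent(var, value, assignment):
        # value is consistent if no assigned neighbor already has it
        return all(assignment.get(b if a == var else a) != value
                   for a, b in neighbors if var in (a, b))

    def recurse(assignment):
        if len(assignment) == len(variables):      # assignment is complete
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value, assignment):
                assignment[var] = value
                result = recurse(assignment)
                if result is not None:
                    return result
                del assignment[var]                # undo and backtrack
        return None                                # failure

    return recurse({})
```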

Backtracking search
• Expand deepest unexpanded node
• Generate only one child at a time.
• Goal-Test when inserted.
  – For CSP, Goal-test at bottom
[Search-tree animation legend: Future = green dotted circles;
Frontier = white nodes; Expanded/active = gray nodes;
Forgotten/reclaimed = black nodes]


Improving Backtracking O(exp(n))
•Make our search more “informed” (e.g. heuristics)
–General purpose methods can give large speed gains
–CSPs are a generic formulation; hence heuristics are more “generic” as well
•Before search:
–Reduce the search space
–Arc-consistency, path-consistency, i-consistency
–Variable ordering (fixed)
•During search:
–Look-ahead schemes:
•Detecting failure early; reduce the search space if possible
•Which variable should be assigned next?
•Which value should we explore first?
–Look-back schemes:
•Backjumping
•Constraint recording
•Dependency-directed backtracking

Look-ahead: Variable and value orderings
• Intuition:
  – Apply propagation at each node in the search tree (reduce future branching)
  – Choose a variable that will detect failures early (low branching factor)
  – Choose a value least likely to yield a dead-end (find a solution early if possible)
• Forward-checking
  – (check each unassigned variable separately)
• Maintaining arc-consistency (MAC)
  – (apply full arc-consistency)

Dependence on variable ordering
•Example: coloring
Color WA, Q, V first:
9 ways to color
none inconsistent (yet)
only 3 lead to solutions…
Color WA, SA, NT first:
6 ways to color
all lead to solutions
no backtracking

Dependence on variable ordering
•Another graph coloring example:

Minimum remaining values (MRV)
• A heuristic for selecting the next variable
  – a.k.a. most constrained variable (MCV) heuristic
  – choose the variable with the fewest legal values
  – will immediately detect failure if X has no legal values
  – (Related to forward checking, later)
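The MRV rule above is a one-liner once current domains are tracked. A minimal sketch (how `domains` is kept up to date, e.g. by forward checking, is assumed, not shown):

```python
# MRV: among unassigned variables, pick the one whose current
# domain is smallest (fewest legal values = least branching).
def select_mrv(domains, assignment):
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))
```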

Degree heuristic
• Another heuristic for selecting the next variable
  – a.k.a. most constraining variable heuristic
  – Select variable involved in the most constraints on other unassigned variables
  – Useful as a tie-breaker among most constrained variables
What about the order to try values?

Least Constraining Value
•Heuristic for selecting what value to try next
•Given a variable, choose the least constraining value:
–the one that rules out the fewest values in the remaining
variables
–Makes it more likely to find a solution early
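The LCV rule can be sketched as a sort over the variable's current domain, ranking each value by how many options it would rule out in unassigned neighbors (again specialized to binary not-equal constraints; the helper name is my own):

```python
# LCV: order var's values so that the value ruling out the fewest
# values in unassigned neighbors' domains comes first.
def order_lcv(var, domains, neighbors, assignment):
    def ruled_out(value):
        # count neighbor-domain entries this value would eliminate
        return sum(value in domains[other]
                   for a, b in neighbors if var in (a, b)
                   for other in [b if a == var else a]
                   if other not in assignment)
    return sorted(domains[var], key=ruled_out)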

Variable and value orderings
•Minimum remaining values for variable ordering
•Least constraining value for value ordering
–Why do we want these? Is there a contradiction?
•Intuition:
–Choose a variable that will detect failures early (low branching factor)
–Choose value least likely to yield a dead-end (find solution early if possible)
•MRV for variable selection reduces current branching factor
–Low branching factor throughout tree = fast search
–Hopefully, when we get to variables with currently many values, forward checking
or arc consistency will have reduced their domains & they’ll have low branching too
•LCV for value selection increases the chance of success
–If we’re going to fail at this node, we’ll have to examine every value anyway
–If we’re going to succeed, the earlier we do, the sooner we can stop searching

Summary
•CSPs
–special kind of problem: states defined by values of a fixed set of variables,
goal test defined by constraints on variable values
•Backtracking = depth-first search with one variable assigned per node
•Heuristics
–Variable ordering and value selection heuristics help significantly
•Variable ordering (selection) heuristics
–Choose variable with Minimum Remaining Values (MRV)
–Degree Heuristic –break ties after applying MRV
•Value ordering (selection) heuristic
–Choose Least Constraining Value

Constraint satisfaction problems
(continued)
CS171, Fall 2016
Introduction to Artificial Intelligence
Prof. Alexander Ihler

You Should Know
• Node consistency, arc consistency, path consistency, K-consistency (6.2)
•Forward checking (6.3.2)
•Local search for CSPs
–Min-Conflict Heuristic (6.4)
•The structure of problems (6.5)

Minimum remaining values (MRV)
• A heuristic for selecting the next variable
  – a.k.a. most constrained variable (MCV) heuristic
  – choose the variable with the fewest legal values
  – will immediately detect failure if X has no legal values
  – (Related to forward checking, later)
Idea: reduce the branching factor now
Smallest domain size = fewest # of children = least branching

Detailed MRV example
Initially, all regions have |Di| = 3
Choose one randomly, e.g. WA, & pick a value, e.g., red
(Better: tie-break with degree…)
WA = red
Do forward checking (next topic)
NT & SA cannot be red
Now NT & SA have 2 possible values – pick one randomly

Detailed MRV example
NT & SA have two possible values
Choose one randomly, e.g. NT
& pick value, e.g., green
(Better: tie-break with degree;
select value by least constraining)
NT=green
Do forward checking (next topic)
SA & Q cannot be green
Now SA has only 1 possible value;
Q has 2 values.

Detailed MRV example
SA has only one possible value
Assign it
SA=blue
Do forward checking (next topic)
Now Q, NSW, V cannot be blue
Now Q has only 1 possible value;
NSW, V have 2 values.

Degree heuristic
• Another heuristic for selecting the next variable
  – a.k.a. most constraining variable heuristic
  – Select variable involved in the most constraints on other unassigned variables
  – Useful as a tie-breaker among most constrained variables
Note: usually (& in picture above) we use the degree heuristic as a tie-breaker for MRV; however, in homeworks & exams we may use it without MRV to show how it works. Let’s see an example.

Ex: Degree heuristic (only)
•Select variable involved in largest # of constraints with other un-assigned vars
•Initially: degree(SA) = 5; assign (e.g., red)
–No neighbor can be red; we remove the edges to assist in counting degree
•Now, degree(NT) = degree(Q) = degree(NSW) = 2
–Select one at random, e.g. NT; assign to a value, e.g., blue
•Now, degree(NSW)=2
•Idea: reduce branching in the future
–The variable with the largest # of constraints will likely knock out the most values
from other variables, reducing the branching factor in the future
SA=red NT=blue NSW=blue
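The degree calculation walked through above can be sketched in a few lines: count constraints with *unassigned* variables only, so assigning SA effectively removes its edges from the count, exactly as in the example:

```python
# Degree heuristic: pick the unassigned variable involved in the most
# constraints with other unassigned variables (edges given as pairs).
def select_by_degree(variables, neighbors, assignment):
    def degree(v):
        return sum(1 for a, b in neighbors
                   if v in (a, b) and (b if a == v else a) not in assignment)
    unassigned = [v for v in variables if v not in assignment]
    return max(unassigned, key=degree)
```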

Ex: MRV + degree
•Initially, all variables have 3 values; tie-breaker degree => SA
–No neighbor can be red; we remove the edges to assist in counting degree
•Now, WA, NT, Q, NSW, V have 2 values each
–WA,V have degree 1; NT,Q,NSW all have degree 2
–Select one at random, e.g. NT; assign to a value, e.g., blue
•Now, WA and Q have only one possible value; degree(Q)=1 > degree(WA)=0
•Idea: reduce branching in the future
–The variable with the largest # of constraints will likely knock out the most values
from other variables, reducing the branching factor in the future
SA=red NT=blue NSW=blue

Least Constraining Value
•Heuristic for selecting what value to try next
•Given a variable, choose the least constraining value:
–the one that rules out the fewest values in the remaining
variables
–Makes it more likely to find a solution early

Look-ahead: Constraint propagation
• Intuition:
  – Apply propagation at each node in the search tree (reduce future branching)
  – Choose a variable that will detect failures early (low branching factor)
  – Choose a value least likely to yield a dead-end (find a solution early if possible)
• Forward-checking
  – (check each unassigned variable separately)
• Maintaining arc-consistency (MAC)
  – (apply full arc-consistency)

Forward checking
•Idea:
–Keep track of remaining legal values for unassigned variables
–Backtrack when any variable has no legal values

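The two "Idea" bullets above can be sketched directly: prune the assigned value from unassigned neighbors' domains, record the prunings so they can be undone on backtrack, and flag a dead end if any domain empties (again specialized to binary not-equal constraints; the return convention is my own choice):

```python
# Forward checking for binary not-equal constraints.
# domains maps each variable to a *set* of remaining legal values.
def forward_check(var, value, domains, neighbors, assignment):
    pruned = []                          # record deletions so we can undo them
    ok = True
    for a, b in neighbors:
        if var not in (a, b):
            continue
        other = b if a == var else a
        if other not in assignment and value in domains[other]:
            domains[other].discard(value)
            pruned.append((other, value))
            if not domains[other]:       # some variable has no legal values
                ok = False               # => backtrack
    return ok, pruned

def undo(pruned, domains):
    """Restore domains when backtracking past this assignment."""
    for v, value in pruned:
        domains[v].add(value)
```

Returning the pruning list even on failure lets the caller always restore domains before backtracking.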

Forward checking
• Idea:
  – Keep track of remaining legal values for unassigned variables
  – Backtrack when any variable has no legal values

Assign {WA = red}
Effect on other variables (neighbors of WA):
• NT can no longer be red
• SA can no longer be red
[Map state: WA = red; NT, SA: not red]

Forward checking
• Idea:
  – Keep track of remaining legal values for unassigned variables
  – Backtrack when any variable has no legal values

Assign {Q = green}
Effect on other variables (neighbors of Q):
• NT can no longer be green
• SA can no longer be green
• NSW can no longer be green
(We already have failure, but FC is too simple to detect it now)
[Map state: WA = red; Q = green; NT: not red, not green; SA: not red, not green; NSW: not green]

Forward checking
• Idea:
  – Keep track of remaining legal values for unassigned variables
  – Backtrack when any variable has no legal values

Assign {V = blue}
Effect on other variables (neighbors of V):
• NSW can no longer be blue
• SA can no longer be blue (no values possible!)
Forward checking has detected that this partial assignment is inconsistent with any complete assignment.
[Map state: WA = red; Q = green; V = blue; NT: not red, not green; SA: no values left; NSW: not green, not blue]

Ex: 4-Queens Problem
Backtracking search with forward checking
Bookkeeping is tricky & complicated
[4×4 board; columns X1–X4, rows 1–4]
X1 = {1,2,3,4}, X2 = {1,2,3,4}, X3 = {1,2,3,4}, X4 = {1,2,3,4}
Red = value is assigned to variable

Ex: 4-Queens Problem
•X1 Level:
–Deleted:
•{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) }
•(Please note: As always in computer science, there are many different ways to implement
anything. The book-keeping method shown here was chosen because it is easy to present and
understand visually. It is not necessarily the most efficient way to implement the book-keeping in
a computer. Your job as an algorithm designer is to think long and hard about your problem, then
devise an efficient implementation.)
•One possibly more efficient equivalent alternative (of many):
–Deleted:
•{ (X2:1,2) (X3:1,3) (X4:1,4) }

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = { , ,3,4}, X3 = { ,2, ,4}, X4 = { ,2,3, }
Red = value is assigned to variable

Ex: 4-Queens Problem
•X1 Level:
–Deleted:
•{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) }
•X2 Level:
–Deleted:
•{ (X3,2) (X3,4) (X4,3) }
•(Please note: Of course, we could have failed as soon as we deleted { (X3,2) (X3,4) }.
There was no need to continue to delete (X4,3), because we already had established
that the domain of X3 was null, and so we already knew that this branch was futile and
we were going to fail anyway. The book-keeping method shown here was chosen
because it is easy to present and understand visually. It is not necessarily the most
efficient way to implement the book-keeping in a computer. Your job as an algorithm
designer is to think long and hard about your problem, then devise an efficient
implementation.)

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = { , ,3,4}, X3 = { , , , }, X4 = { ,2, , }
Red = value is assigned to variable

Ex: 4-Queens Problem
•X1 Level:
–Deleted:
•{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) }
•X2 Level:
–FAIL at X2=3.
–Restore:
•{ (X3,2) (X3,4) (X4,3) }

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = { , ,3,4}, X3 = { ,2, ,4}, X4 = { ,2,3, }
Red = value is assigned to variable
X = value led to failure

Ex: 4-Queens Problem
•X1 Level:
–Deleted:
•{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) }
•X2 Level:
–Deleted:
•{ (X3,4) (X4,2) }

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = { , ,3,4}, X3 = { ,2, , }, X4 = { , ,3, }
Red = value is assigned to variable
X = value led to failure

Ex: 4-Queens Problem
•X1 Level:
–Deleted:
•{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) }
•X2 Level:
–Deleted:
•{ (X3,4) (X4,2) }
•X3 Level:
–Deleted:
•{ (X4,3) }

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = { , ,3,4}, X3 = { ,2, , }, X4 = { , , , }
Red = value is assigned to variable
X = value led to failure

Ex: 4-Queens Problem
•X1 Level:
–Deleted:
•{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) }
•X2 Level:
–Deleted:
•{ (X3,4) (X4,2) }
•X3 Level:
–Fail at X3=2.
–Restore:
•{ (X4,3) }

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = { , ,3,4}, X3 = { ,2, , }, X4 = { , ,3, }
Red = value is assigned to variable
X = value led to failure

Ex: 4-Queens Problem
•X1 Level:
–Deleted:
•{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) }
•X2 Level:
–Fail at X2=4.
–Restore:
•{ (X3,4) (X4,2) }

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = { , ,3,4}, X3 = { ,2, ,4}, X4 = { ,2,3, }
Red = value is assigned to variable
X = value led to failure

Ex: 4-Queens Problem
•X1 Level:
–Fail at X1=1.
–Restore:
•{ (X2,1) (X2,2) (X3,1) (X3,3) (X4,1) (X4,4) }

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = {1,2,3,4}, X3 = {1,2,3,4}, X4 = {1,2,3,4}
Red = value is assigned to variable
X = value led to failure

Ex: 4-Queens Problem
•X1 Level:
–Deleted:
•{ (X2,1) (X2,2) (X2,3) (X3,2) (X3,4) (X4,2) }

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = { , , ,4}, X3 = {1, ,3, }, X4 = {1, ,3,4}
Red = value is assigned to variable
X = value led to failure

Ex: 4-Queens Problem
•X1 Level:
–Deleted:
•{ (X2,1) (X2,2) (X2,3) (X3,2) (X3,4) (X4,2) }
•X2 Level:
–Deleted:
•{ (X3,3) (X4,4) }

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = { , , ,4}, X3 = {1, , , }, X4 = {1, ,3, }
Red = value is assigned to variable
X = value led to failure

Ex: 4-Queens Problem
•X1 Level:
–Deleted:
•{ (X2,1) (X2,2) (X2,3) (X3,2) (X3,4) (X4,2) }
•X2 Level:
–Deleted:
•{ (X3,3) (X4,4) }
•X3 Level:
–Deleted:
•{ (X4,1) }

Ex: 4-Queens Problem
[4×4 board; columns X1–X4]
X1 = {1,2,3,4}, X2 = { , , ,4}, X3 = {1, , , }, X4 = { , ,3, }
Red = value is assigned to variable
X = value led to failure

Constraint propagation
• Forward checking
  – propagates information from assigned to unassigned variables
  – But, doesn’t provide early detection for all failures:
  – NT and SA cannot both be blue!
• Constraint propagation repeatedly enforces constraints locally
  – Can detect failure earlier
  – But, takes more computation – is it worth the extra effort?

Arc consistency (AC-3)
• Simplest form of propagation makes each arc consistent
• X → Y is consistent iff
  for every value x of X there is some allowed value y for Y (note: directed!)
• Consider state after WA = red, Q = green
  – SA → NSW consistent if
    SA = blue and NSW = red

Arc consistency
• Simplest form of propagation makes each arc consistent
• X → Y is consistent iff
  for every value x of X there is some allowed value y for Y (note: directed!)
• Consider state after WA = red, Q = green
  – NSW → SA consistent if
    NSW = red and SA = blue
    NSW = blue and SA = ??? ⇒ NSW = blue can be pruned:
    no current domain value for SA is consistent

Arc consistency
• Simplest form of propagation makes each arc consistent
• X → Y is consistent iff
  for every value x of X there is some allowed value y for Y (note: directed!)
• Enforce arc consistency:
  – arc can be made consistent by removing blue from NSW
• Continue to propagate constraints
  – Check V → NSW: not consistent for V = red; remove red from V

Arc consistency
• Simplest form of propagation makes each arc consistent
• X → Y is consistent iff
  for every value x of X there is some allowed value y for Y (note: directed!)
• Continue to propagate constraints
• SA → NT not consistent:
  – And cannot be made consistent! Failure
• Arc consistency detects failure earlier than FC
  – But requires more computation: is it worth the effort?

Ex: Arc Consistency in Sudoku
Each row, column and major block must be alldifferent
“Well posed” if it has a unique solution: 27 constraints
[partially filled 9×9 grid]
• Variables: 81 slots
• Domains = {1,2,3,4,5,6,7,8,9}
• Constraints: 27 not-equal (alldiff)

Arc consistency checking
•Can be run as a preprocessor, or after each assignment
–As preprocessor before search: Removes obvious inconsistencies
–After each assignment: Reduces search cost but increases step cost
•AC is run repeatedly until no inconsistency remains
–Like Forward Checking, but exhaustive until quiescence
•Trade-off
–Requires overhead to do; but usually better than direct search
–In effect, it can successfully eliminate large (and inconsistent) parts of the state
space more effectively than can direct search alone
• Need a systematic method for arc-checking
  – If X loses a value, neighbors of X need to be rechecked:
    i.e., incoming arcs can become inconsistent again (outgoing arcs stay consistent).

Arc consistency algorithm (AC-3)
function AC-3(csp) returns false if an inconsistency is found, else true; may reduce csp domains
  inputs: csp, a binary CSP with variables {X1, X2, …, Xn}
  local variables: queue, a queue of arcs, initially all the arcs in csp
    /* initial queue must contain both (Xi, Xj) and (Xj, Xi) */
  while queue is not empty do
    (Xi, Xj) ← REMOVE-FIRST(queue)
    if REMOVE-INCONSISTENT-VALUES(Xi, Xj) then
      if size of Di = 0 then return false
      for each Xk in NEIGHBORS[Xi] − {Xj} do
        add (Xk, Xi) to queue if not already there
  return true

function REMOVE-INCONSISTENT-VALUES(Xi, Xj) returns true iff we delete a value from the domain of Xi
  removed ← false
  for each x in DOMAIN[Xi] do
    if no value y in DOMAIN[Xj] allows (x, y) to satisfy the constraints between Xi and Xj
      then delete x from DOMAIN[Xi]; removed ← true
  return removed

(from Mackworth, 1977)
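A direct Python rendering of the pseudocode above; this is a sketch assuming the binary CSP is given as domain sets, a neighbor list, and a single binary constraint predicate (the names are mine, not from the slides):

```python
from collections import deque

def revise(domains, xi, xj, constraint):
    """REMOVE-INCONSISTENT-VALUES: drop values of xi with no support in xj.
    Returns True iff a value was deleted."""
    removed = False
    for x in list(domains[xi]):
        if not any(constraint(x, y) for y in domains[xj]):
            domains[xi].remove(x)
            removed = True
    return removed

def ac3(domains, neighbors, constraint):
    """AC-3: returns False if some domain is wiped out, else True.
    neighbors[X] lists variables sharing a constraint with X;
    constraint(x, y) tests one binary constraint (here: not-equal)."""
    # Initial queue contains both directions of every arc.
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraint):
            if not domains[xi]:
                return False                     # domain wiped out: failure
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))       # recheck incoming arcs
    return True

# Tiny map-coloring example: WA, NT, SA mutually adjacent; WA fixed to red.
domains = {'WA': {'r'}, 'NT': {'r', 'g', 'b'}, 'SA': {'r', 'g', 'b'}}
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}
ok = ac3(domains, neighbors, lambda x, y: x != y)
# After propagation, red is removed from the domains of NT and SA.
```

Note that `revise` is directed, matching the slides: it only prunes the domain of the first variable of the arc.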

Complexity of AC-3
•A binary CSP has at most n² arcs
•Each arc can be inserted in the queue d times (worst case)
–(X, Y): only d values of X to delete
•Consistency of an arc can be checked in O(d²) time
•Complexity is O(n²d³)
•Although substantially more expensive than Forward Checking,
Arc Consistency is usually worthwhile.

K-consistency
•Arc consistency does not detect all inconsistencies:
–Partial assignment {WA=red, NSW=red} is inconsistent.
•Stronger forms of propagation can be defined using the notion of k-consistency.
•A CSP is k-consistent if for any set of k-1 variables and for any consistent
assignment to those variables, a consistent value can always be assigned to any
kth variable.
–E.g. 1-consistency = node-consistency
–E.g. 2-consistency = arc-consistency
–E.g. 3-consistency = path-consistency
•Strongly k-consistent:
–k-consistent for all values {k, k-1, …, 2, 1}

Trade-offs
•Running stronger consistency checks…
–Takes more time
–But will reduce branching factor and detect more inconsistent partial
assignments
–No “free lunch”
•In worst case n-consistency takes exponential time
•“Typically” in practice:
–Often helpful to enforce 2-Consistency (Arc Consistency)
–Sometimes helpful to enforce 3-Consistency
–Higher levels may take more time to enforce than they save.

Improving backtracking
•Before search: (reducing the search space)
–Arc-consistency, path-consistency, i-consistency
–Variable ordering (fixed)
•During search:
–Look-ahead schemes:
•Value ordering/pruning (choose a least restricting value)
•Variable ordering (choose the most constraining variable)
•Constraint propagation (take decision implications forward)
–Look-back schemes:
•Backjumping
•Constraint recording
•Dependency-directed backtracking

Further improvements
•Checking special constraints
–Checking Alldiff(…) constraint
•E.g. {WA=red, NSW=red}
–Checking Atmost(…) constraint
•Bounds propagation for larger value domains
•Intelligent backtracking
–Standard form is chronological backtracking, i.e., try a different value for the
preceding variable.
–More intelligent: backtrack to conflict set.
•Set of variables that caused the failure or set of previously assigned
variables that are connected to X by constraints.
•Backjumping moves back to most recent element of the conflict set.
•Forward checking can be used to determine conflict set.

Local search for CSPs
•Use complete-state representation
–Initial state = all variables assigned values
–Successor states = change 1 (or more) values
•For CSPs
–allow states with unsatisfied constraints (unlike backtracking)
–operators reassign variable values
–hill-climbing with n-queens is an example
•Variable selection: randomly select any conflicted variable
•Value selection: min-conflicts heuristic
–Select new value that results in a minimum number of conflicts with the
other variables

Local search for CSPs
function MIN-CONFLICTS(csp, max_steps) returns solution or failure
  inputs: csp, a constraint satisfaction problem
    max_steps, the number of steps allowed before giving up
  current ← an initial complete assignment for csp
  for i = 1 to max_steps do
    if current is a solution for csp then return current
    var ← a randomly chosen, conflicted variable from VARIABLES[csp]
    value ← the value v for var that minimizes CONFLICTS(var, v, current, csp)
    set var = value in current
  return failure
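The pseudocode above can be sketched in Python, specialized here to n-queens; the random-restart loop and all names are illustrative additions, not part of the slides' algorithm:

```python
import random

def conflict(i, ci, j, cj):
    """n-queens: queens in rows i and j conflict if they share a column
    or a diagonal (variable i = column of the queen in row i)."""
    return ci == cj or abs(ci - cj) == abs(i - j)

def num_conflicts(var, val, assignment, variables):
    """CONFLICTS(var, val, current, csp) for the n-queens CSP."""
    return sum(1 for other in variables
               if other != var and conflict(var, val, other, assignment[other]))

def min_conflicts(variables, domains, max_steps=10000):
    """MIN-CONFLICTS: start from a random complete assignment, then
    repeatedly move a conflicted variable to a minimum-conflict value."""
    assignment = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if num_conflicts(v, assignment[v], assignment, variables) > 0]
        if not conflicted:
            return assignment            # no conflicted variables: a solution
        var = random.choice(conflicted)  # variable selection: random conflicted var
        assignment[var] = min(           # value selection: min-conflicts heuristic
            domains[var],
            key=lambda val: num_conflicts(var, val, assignment, variables))
    return None                          # gave up (possibly stuck on a plateau)

# Solve 8-queens, restarting if the search plateaus.
n = 8
variables = list(range(n))
domains = {v: list(range(n)) for v in variables}

random.seed(0)
solution = None
for _ in range(20):                      # random restarts
    solution = min_conflicts(variables, domains)
    if solution is not None:
        break
```

As the slides note, in practice the new value is often picked at random among the minimum-conflict ties rather than deterministically.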

Number of conflicts
•Solving 4-queens with local search
[Board diagrams: a 4-queens configuration with 5 conflicts; the numbers beside
each board give the conflict count for each candidate move.]
Note: here I check all neighbors
& pick the best; typically in
practice pick one at random

Number of conflicts
•Solving 4-queens with local search
[Board diagrams: the search moves from a configuration with 5 conflicts, to one
with 2 conflicts, to one with 0 conflicts (a solution); the numbers beside each
board give the conflict count for each candidate move.]
Note: here I check all neighbors
& pick the best; typically in
practice pick one at random

Local optima
•Local search may get stuck at local optima
–Locations where no neighboring value is better
–Success depends on initialization quality & basins of attraction
•Can use multiple initializations to improve:
–Re-initialize randomly (“repeated” local search)
–Re-initialize by perturbing last optimum (“iterated” local search)
•Can also add sideways & random moves (e.g., WalkSAT)
[Figure: objective value vs. states, showing the current state, a local
maximum, a plateau of local optima, and the global maximum. (R&N Fig 7.18)]

Local optimum example
•Solving 4-queens with local search
[Board diagrams: a configuration with 1 conflict where every candidate move
also has at least 1 conflict.]
“Plateau” example:
no single move can decrease # of conflicts

Comparison of CSP algorithms
Evaluate methods on a number of problems
Median number of consistency checks over 5 runs to solve problem
Parentheses -> no solution found
USA: 4-coloring
n-queens: n = 2 to 50
Zebra: see exercise 6.7 (3rd ed.); exercise 5.13 (2nd ed.)
[Results table not reproduced here.]

Advantages of local search
•Local search can be particularly useful in an online setting
–Airline schedule example
•E.g., mechanical problems require that a plane be taken out of service
•Can locally search for another “close” solution in state-space
•Much better (and faster) in practice than finding an entirely new
schedule
•Runtime of min-conflicts is roughly independent of problem size.
–Can solve the million-queens problem in roughly 50 steps.
–Why?
•n-queens is easy for local search because of the relatively high density
of solutions in state-space

Hardness of CSPs
•x1, …, xn discrete, domain size d: O(d^n) configurations
•“SAT”: Boolean satisfiability: d=2
–One of the first known NP-complete problems
•“3-SAT”
–Conjunctive normal form (CNF)
–At most 3 variables in each clause
–Still NP-complete
•How hard are “typical” problems?
(A CNF clause rules out one configuration of its variables.)

Hardness of random CSPs
•Random 3-SAT problems:
–n variables, p clauses in CNF
–Choose any 3 variables, signs uniformly at random
–What’s the probability there is no solution to the CSP?
–Phase transition at (p/n) ≈ 4.25
–“Hard” instances fall in a very narrow regime around this point!
[Figures: average minisat solve time vs. ratio p/n, with easy-sat and
easy-unsat regions on either side of the transition; and Pr[unsat] vs. p/n,
crossing from satisfiable to unsatisfiable.]

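The phase-transition claim can be probed empirically. Below is a sketch that generates random 3-SAT instances and brute-force checks satisfiability for small n; the generator and checker are illustrative code, not from the slides, and a real experiment would use a SAT solver such as minisat instead of brute force:

```python
import random
from itertools import product

def random_3sat(n, m, rng):
    """m random 3-clauses over variables 1..n; a literal is +v or -v."""
    clauses = []
    for _ in range(m):
        vars3 = rng.sample(range(1, n + 1), 3)        # 3 distinct variables
        clauses.append([v if rng.random() < 0.5 else -v for v in vars3])
    return clauses

def satisfiable(n, clauses):
    """Brute-force satisfiability check; feasible only for small n."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# Sample 20 instances each at a ratio well below and well above ~4.25.
rng = random.Random(0)
n = 12
sat_counts = {}
for ratio in (2.0, 6.0):
    sat_counts[ratio] = sum(
        satisfiable(n, random_3sat(n, int(ratio * n), rng))
        for _ in range(20))
```

Far below the transition almost every instance is satisfiable; far above, almost none is, which is why the hard instances cluster near the critical ratio.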

Ex: Sudoku — Backtracking search + forward checking
•R = [number of initially filled cells] / [total number of cells]
•Success Rate = P(random puzzle is solvable)
•[total number of cells] = 9x9 = 81
•[number of initially filled cells] = variable
[Figures: Avg Time vs. R (R from 0.00 to 0.40) and Success Rate vs. R
(R from 0.00 to 0.50).]

Graph structure and complexity
•Disconnected subproblems
–Configuration of one subproblem
cannot affect the other: independent!
–Exploit: solve independently
•Suppose each subproblem has c variables out of n
–Worst-case cost: O((n/c) · d^c)
–Compare to O(d^n), exponential in n
–Ex: n=80, c=20, d=2 ⇒
•2^80 = 4 billion years at 1 million nodes per second
•4 × 2^20 = 0.4 seconds at 1 million nodes per second
[Figure: Australia constraint graph — WA, NT, SA, Q, NSW, V form one connected
component; T is disconnected.]
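Finding the independent subproblems is just computing connected components of the constraint graph; a minimal sketch (the function name and graph representation are illustrative):

```python
def connected_components(variables, neighbors):
    """Split a constraint graph into independent subproblems via DFS."""
    seen, components = set(), []
    for v in variables:
        if v in seen:
            continue
        stack, comp = [v], []
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.append(u)
            stack.extend(neighbors[u])
        components.append(comp)
    return components

# Australia map: Tasmania has no constraints, so it is its own subproblem.
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
             'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
             'Q': ['NT', 'SA', 'NSW'], 'NSW': ['Q', 'SA', 'V'],
             'V': ['SA', 'NSW'], 'T': []}
comps = connected_components(list(neighbors), neighbors)
```

Each component can then be handed to the CSP solver separately, giving the O((n/c)·d^c) total cost above instead of O(d^n).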

Tree-structured CSPs
•Theorem: If a constraint graph has no cycles, then
the CSP can be solved in O(n d²) time.
–Compare to general CSP: worst case O(d^n)
•Method: directed arc consistency (= dynamic programming)
–Select a root (e.g., A) & do arc consistency from leaves to root:
–D→F: remove values for D not consistent with any value for F, etc.
–D→E, B→D, … etc.
–Select a value for A
–There must be a value for B that is compatible; select it
–There must be values for C, and for D, compatible with B’s; select them
–There must be values for E, F compatible with D’s; select them.
–You’ve found a consistent solution!
[Figure: tree-structured constraint graph rooted at A with nodes B, C, D, E, F.]
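The leaves-to-root, then root-to-leaves procedure above can be sketched in Python; this is a minimal O(n d²) illustration, and the names and the tree representation as a children dict are my assumptions:

```python
def solve_tree_csp(root, children, domains, constraint):
    """Solve a tree-structured binary CSP: make each parent->child arc
    consistent from the leaves up, then assign values from the root down.
    constraint(parent_val, child_val) tests one tree edge."""
    domains = {v: set(d) for v, d in domains.items()}   # work on a copy

    def make_consistent(v):
        # Post-order: process children before pruning v's domain.
        for c in children.get(v, []):
            if not make_consistent(c):
                return False
            # Directed arc consistency: keep only values of v that have
            # some compatible value in c's (already pruned) domain.
            domains[v] = {x for x in domains[v]
                          if any(constraint(x, y) for y in domains[c])}
        return bool(domains[v])

    if not make_consistent(root):
        return None                     # a domain was wiped out: no solution

    assignment = {}
    def assign(v, parent_val):
        # Any remaining value compatible with the parent is guaranteed
        # to extend to a full solution below v.
        for x in domains[v]:
            if parent_val is None or constraint(parent_val, x):
                assignment[v] = x
                for c in children.get(v, []):
                    assign(c, x)
                return
    assign(root, None)
    return assignment

# Chain A - B - C with a not-equal constraint and 2 colors.
children = {'A': ['B'], 'B': ['C'], 'C': []}
domains = {'A': {0, 1}, 'B': {0, 1}, 'C': {0, 1}}
sol = solve_tree_csp('A', children, domains, lambda p, c: p != c)
```

The top-down phase never backtracks, which is exactly why the whole method stays in O(n d²).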

Exploiting structure
•How can we use the efficiency of trees?
•Cutset conditioning
–Exploit easy-to-solve subproblems during search
–Ex: the Australia graph becomes a tree once SA is assigned (e.g., SA=red)
•Tree decomposition
–Convert non-tree problems into (harder) trees
–Change “variables” to the colors of pairs of areas:
(WA,SA), (NT,SA), (Q,SA), (NSW,SA), (V,SA), and T
–Now: a “unary” WA–SA constraint within each pair-variable, and “binary”
constraints such as (WA,SA)–(NT,SA) that require all 3 areas to be
colored consistently

Summary
•CSPs
–special kind of problem: states defined by values of a fixed set of variables,
goal test defined by constraints on variable values
•Backtracking = depth-first search, one variable assigned per node
•Heuristics: variable order & value selection heuristics help a lot
•Constraint propagation
–does additional work to constrain values and detect inconsistencies
–Works effectively when combined with heuristics
•Iterative min-conflicts is often effective in practice.
•Graph structure of CSPs determines problem complexity
–e.g., tree structured CSPs can be solved in linear time.