By applying a k-SAT solver (and finding a solution) we can prove that a k-SAT problem is satisfiable. For random 3-SAT problems, for instance, this is expected to happen for m/n < 4.3 (m = number of clauses, n = number of variables). But what if m/n > 4.3? In that case we expect the problem to be unsatisfiable, but can we also prove it (so that we can stop trying to solve it)? There are algorithms for this too, although it seems to be a harder problem. Here is an easy one:
Take a variable X that appears in the largest number of clauses and set it to FALSE. Now all clauses that contain "NOT X" are satisfied, and all clauses that contain "X" have two variables left. Study the reduced problem consisting of only these two-variable clauses, which is a 2-SAT (sub)problem of the original problem. 2-SAT is solvable in polynomial time, so it is easy to find out whether solutions exist (see your homework Exercise 7.17). If no solutions exist, set X=TRUE and repeat. If again no solutions exist, then there can be no solution to the original 3-SAT problem. If you cannot prove unsatisfiability for X, move on to the next variable, and so on. If you cannot prove unsatisfiability for any variable, you fail...
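Below is a minimal sketch of this procedure, under assumptions of my own: clauses are DIMACS-style tuples of three distinct nonzero integers (3 means x3, -3 means "NOT x3"), and the 2-SAT test uses the standard implication-graph criterion. All function names are mine.

```python
# A sketch of the procedure above (all names are mine). Clauses are
# DIMACS-style tuples of three distinct nonzero ints.
from collections import Counter, defaultdict

def reachable(graph, source):
    """Set of literals reachable from source in the implication graph."""
    seen, stack = set(), [source]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def two_sat_satisfiable(clauses):
    """Polynomial 2-SAT test: a clause (a OR b) yields the implications
    NOT a -> b and NOT b -> a; the instance is unsatisfiable iff some
    literal and its negation can reach each other."""
    graph = defaultdict(list)
    variables = set()
    for a, b in clauses:
        graph[-a].append(b)
        graph[-b].append(a)
        variables.update((abs(a), abs(b)))
    return not any(v in reachable(graph, -v) and -v in reachable(graph, v)
                   for v in variables)

def proves_unsat(clauses):
    """True if the trick proves the 3-SAT instance unsatisfiable;
    False only means the test was inconclusive."""
    counts = Counter(abs(lit) for clause in clauses for lit in clause)
    for x, _ in counts.most_common():       # most frequent variable first
        for falsified in (x, -x):           # X=FALSE falsifies literal x, etc.
            # Clauses containing the falsified literal shrink to 2 literals;
            # all other clauses are ignored (this is a subproblem).
            sub = [tuple(l for l in c if l != falsified)
                   for c in clauses if falsified in c]
            if two_sat_satisfiable(sub):
                break                       # this setting of X is not refuted
        else:
            return True                     # both settings refuted: UNSAT
    return False
```

A real implementation would use Tarjan's or Kosaraju's linear-time SCC algorithm instead of repeated reachability searches, but the repeated-DFS version is still polynomial and keeps the sketch short. Note that proves_unsat returning False does not prove satisfiability; the test is simply inconclusive.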
Tuesday, November 2, 2010
Thursday, September 30, 2010
The Ambiguity of Defining Task Environments
Consider a taxi-driving agent. This agent is clearly in a multi-agent environment because of the other drivers on the road: they typically prevent the taxi driver from doing what would be optimal for it (especially when they create a traffic jam). But how about a puzzle-solving agent that gets distracted by a bird singing its song? We could argue for either a single-agent or a multi-agent environment, depending on how much the bird irritates the puzzle-solving agent.
Another example: consider a poker-playing agent. Poker involves picking cards from a deck. Since these cards are randomly shuffled, there seems to be a random element to the game. So is this environment stochastic? It depends. If you model the deck of cards as part of the state space, then the state plus the action deterministically determine the next state. However, if the state definition does not involve the order of the deck, then drawing a card can be viewed as a random event, which would argue for a stochastic environment.
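To make the contrast concrete, here is a toy sketch (the representation is my own invention, not a full poker model): with the shuffled deck inside the state, the transition is a deterministic function; with only the hand in the state, the same action becomes a random draw.

```python
import random

def draw_with_deck_in_state(state):
    """The shuffled deck is part of the state: fully deterministic."""
    hand, deck = state
    return hand + [deck[0]], deck[1:]

def draw_with_hand_only(hand, undrawn):
    """Only the hand is in the state: the draw is a random event."""
    card = random.choice(sorted(undrawn))
    return hand + [card], undrawn - {card}

deck = list(range(52))
random.shuffle(deck)
print(draw_with_deck_in_state(([], deck))[0])      # same deck -> same card
print(draw_with_hand_only([], set(range(52)))[0])  # varies from run to run
```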
Or this example: a part-picking robot confronted with a new problem every time can be classified as operating in an episodic environment. However, it may suffer wear in the process or use up battery power, in which case the environment really is sequential.
Chess has a discrete set of states. Or does it? If we define a state to be the exact physical position of every piece on the board, then the state space would be continuous.
Bottom line: the task environment crucially depends on one's definition of the states, actions and utility function.
Sunday, October 25, 2009
Q: Is the fixed point of constraint propagation unique?
The answer is yes (with thanks to Rina Dechter).
First, imagine there were multiple fixed points with different domain sets. Each of these domain sets would be arc-consistent by definition. Now simply take the union of all these domains, and you can easily verify that it is still arc-consistent (for each value in a source domain there must be a supporting value in the target domain, but this is guaranteed by the fact that each sub-solution admits a supporting value in its target; you can simply pick that one). We call this union the maximal arc-consistent set.
Now let's see whether constraint propagation will converge to that maximal set. First we establish that it converges at all. This follows because at each iteration we remove values from domains, and we stop when there are no more changes. Since the domains are finite, this must stop at some point.
To prove that we converge on the maximal set, we need to recall that if a particular domain is consistent for all its incoming messages, then a strictly larger domain will also be consistent for all its incoming messages. Let's assume that some message from Di to domain Dj is about to remove a value of the maximal set (at Di) for the first time. This means that the target, Dj, must have had a domain smaller than the one in the maximal set to begin with. If not, then by the definition of the maximal set the arc was consistent, and no value would have been removed from the source Di by constraint propagation. This is a contradiction, because we assumed this was the first message to remove a value from a domain in the maximal set. Thus constraint propagation will never remove values that are in the maximal set, but on the other hand it will not stop before every arc is consistent either. Therefore, irrespective of the order in which we send the messages, it always converges to the same unique maximal set.
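The order-independence is easy to play with in code. Here is a minimal AC-3-style sketch (representation and names are my own choice): domains map variables to sets of values, and each directed arc (i, j) carries a predicate that the allowed pairs satisfy. Running it with the arcs queued in different orders reaches the same fixed point.

```python
from collections import deque

def ac3(domains, constraints, order=None):
    domains = {v: set(d) for v, d in domains.items()}  # don't mutate input
    queue = deque(order if order is not None else list(constraints))
    while queue:
        i, j = queue.popleft()
        allowed = constraints[(i, j)]
        # Revise arc i -> j: drop values of i that have no support in j.
        dropped = {x for x in domains[i]
                   if not any(allowed(x, y) for y in domains[j])}
        if dropped:
            domains[i] -= dropped
            # Re-examine every arc whose target is i.
            queue.extend(arc for arc in constraints if arc[1] == i)
    return domains

# X < Y with both domains {1,2,3}: any arc order gives X={1,2}, Y={2,3}.
doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
cons = {("X", "Y"): lambda x, y: x < y,
        ("Y", "X"): lambda y, x: x < y}
print(ac3(doms, cons, order=[("X", "Y"), ("Y", "X")]))
print(ac3(doms, cons, order=[("Y", "X"), ("X", "Y")]))
```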
Wednesday, October 14, 2009
Q: Why domination implies faster search in A*
If we have two heuristics h1 and h2 with h1(n) <= h2(n) for all n, we say that h2 dominates h1. The question was why this always means that A* with h2 expands fewer nodes. Suppose, for instance, that h2 is constant within each level but everywhere bigger than h1, while h1 shows some interesting distribution over the nodes, revealing information about which branches look more promising.
The reason why domination translates into fewer nodes expanded is that A* is guaranteed to expand every node with a total cost smaller than the cost of the optimal goal node. Since g(n) does not depend on the heuristic and f(n) = g(n) + h(n), if A* with h2 would expand a node (because its f2(n) is smaller than f(G)), then A* with h1 would certainly also expand that node (because f1(n) <= f2(n)). Thus, in the scenario above, even though h1's distribution shows which branch is more promising, we would have to backtrack and expand all those nodes anyway.
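A small experiment makes this tangible. The sketch below (my own setup, not from the book) runs A* on a toy grid and counts expansions under two admissible heuristics, where Manhattan distance dominates the zero heuristic (which turns A* into UCS). Domination guarantees fewer or equal expansions up to tie-breaking.

```python
import heapq

def astar_expansions(start, goal, size, h):
    frontier = [(h(start), 0, start)]          # (f, g, state)
    best_g, expanded = {start: 0}, 0
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if g > best_g.get(node, float("inf")):
            continue                           # stale queue entry
        expanded += 1
        if node == goal:
            return expanded
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return expanded

goal = (9, 9)
h1 = lambda n: 0                                          # dominated: UCS
h2 = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # Manhattan
print(astar_expansions((0, 0), goal, 10, h1))  # many expansions
print(astar_expansions((0, 0), goal, 10, h2))  # far fewer
```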
Thursday, October 8, 2009
Q: What's the difference between time and space complexity?
Time complexity concerns the number of operations you have to perform to get to a goal state. We usually measure these in terms of
b: the maximal branching factor,
m: the maximum depth of the search tree
d: the depth of the solution.
Space complexity, on the other hand, concerns the number of "things" you have to keep in memory during your search. For instance, BFS keeps all nodes in memory, so the complexity is O(b^[d+1]), which is the number of nodes you generate (or O(b^d), see the blog post below). DFS only keeps the path to the current node in memory, plus the unexpanded successors of the nodes on that path, so it is O(bm).
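To get a feel for the difference, here are back-of-the-envelope numbers with made-up values b = 10, d = 5, m = 20:

```python
b, d, m = 10, 5, 20
print(b ** (d + 1))  # BFS space, O(b^[d+1]): 1,000,000 nodes
print(b * m)         # DFS space, O(bm): 200 nodes
```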
Q: How is graph search different from tree search?
This is actually a very important issue which is perhaps a little under-emphasized in the book. Graph search maintains, in addition to its fringe (or open list), a closed list that contains all nodes that have been expanded in the past. Note that for BFS and UCS we weren't allowed to forget those anyway. Every time we expand a node and put its successors into the fringe, we first check whether the states of these successors were expanded before, in which case they would be on the closed list. If they are on our closed list, we do *not* put them in our fringe. So we do more work per node, but we potentially cut down the search tree by a lot.
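As a minimal sketch of the idea (my own code, not the book's pseudocode), here is graph-search BFS; this version checks successors against everything seen so far (frontier plus expanded), a slightly stronger variant of the closed-list check:

```python
from collections import deque

def bfs_graph_search(start, is_goal, successors):
    """successors(state) should return an iterable of neighboring states."""
    frontier = deque([start])
    closed = {start}                 # every state ever generated
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            return state
        for nxt in successors(state):
            if nxt not in closed:    # the closed-list check described above
                closed.add(nxt)
                frontier.append(nxt)
    return None
```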
For UCS, when we discover that a state has already been expanded, we are guaranteed that the later-discovered path to that state must have a path cost at least as high. This is simply true because UCS visits nodes in order of increasing path cost. So we can safely discard the later path and still be optimal. A similar argument holds for BFS in the case of constant step costs. However, for DFS, and therefore for IDS (which uses DFS), the situation is different. There, nodes are not expanded in order of increasing path cost, and it may happen that a later-discovered path is actually better. So now we need to remember the path costs as well, and potentially delete the first node in favor of the later one. In that case you have to change not only the path cost of that node, but also the path costs of all its descendants (since you have found a better path to one of their ancestors, which is reflected in their total path costs as well). With this additional trick, IDS is optimal for constant step costs. DFS, however, need not be optimal even with this trick.
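Here is what the UCS case looks like as a minimal sketch of my own (states are assumed hashable, and comparable in case of cost ties): once a state is popped from the priority queue, any later entry for it can simply be skipped.

```python
import heapq

def ucs(start, is_goal, successors):
    """successors(state) yields (step_cost, next_state) pairs."""
    frontier = [(0, start)]
    explored = set()
    while frontier:
        cost, state = heapq.heappop(frontier)
        if state in explored:
            continue                 # a cheaper path got here earlier
        explored.add(state)
        if is_goal(state):
            return cost
        for step, nxt in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (cost + step, nxt))
    return None
```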
Note that doing graph search with DFS defeats the purpose of using DFS in the first place: you now have to keep all nodes in memory, which is exactly what we wanted to avoid. There is, however, a very useful trick even for DFS that doesn't require more memory: check newly expanded nodes only against the nodes that DFS has to keep in memory anyway, namely those on the path from the root to the current node. This requires no extra memory, but it avoids loops of the sort A-B-C-A-B-A etc. In fact, the search tree can no longer become infinitely deep!
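The trick fits in a few lines; a recursive sketch of my own:

```python
def dfs_path_check(state, is_goal, successors, path=()):
    """DFS that tests new states only against the current root-to-node
    path, which DFS stores anyway, cutting off loops like A-B-C-A."""
    path = path + (state,)           # path now includes the current node
    if is_goal(state):
        return path
    for nxt in successors(state):
        if nxt not in path:          # check against the current path only
            found = dfs_path_check(nxt, is_goal, successors, path)
            if found is not None:
                return found
    return None
```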
How will the search tree change in general? This depends on the strategy you employ (remember: search trees are generated dynamically, not before you start the search). For BFS you will never generate the same state in a lower layer if it already exists in a higher layer. This need not be the case for UCS if the step costs are not constant. For DFS that checks only against the nodes on its current path, you may generate the same node more than once in different branches, but you will not generate it twice on a single path. When you do DFS with full graph search (which I do not recommend, because of the memory issues), it is not so clear that you can even conveniently represent the search as traversing a single tree. (For instance, you may need to move entire branches from one side of the tree to another.)
As an exercise, I would try a few of the algorithms discussed above on the problem of Figure 3.18 (left and middle).
Q: Is BFS really O(b^[d+1])?
In the book (Russell & Norvig) it is argued that the time and space complexity of BFS (breadth-first search) is O(b^[d+1]), where b is the maximal branching factor and d the depth of the solution. The reason is that when we expand the nodes at level d, we generate all of their successors into the fringe. There are b successors for each of the O(b^d) nodes at level d, so if we are unlucky and the last of those O(b^d) nodes is the goal state, we generate O(b^[d+1]) states in total.
However, this is easily avoided by first checking all the nodes in each layer for whether they are a goal state, and generating successors only if the goal was not found in that layer. In effect we delay the generation of the successors, or equivalently apply the goal test when a node is generated rather than when it is expanded. This BFS* algorithm is O(b^d). So we shouldn't make a big fuss about BFS having an even worse time and space complexity than IDS (which is also O(b^d)).
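A minimal sketch of this early goal test (my own code; "BFS*" is just the nickname used above):

```python
from collections import deque

def bfs_early_goal_test(start, is_goal, successors):
    """BFS that tests states when they are generated rather than when
    they are expanded, avoiding the layer below the goal entirely."""
    if is_goal(start):
        return start
    frontier, seen = deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        for nxt in successors(state):
            if nxt in seen:
                continue
            if is_goal(nxt):         # goal test at generation: O(b^d) total
                return nxt
            seen.add(nxt)
            frontier.append(nxt)
    return None
```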