AI Planning & Logic: Algorithms and Search Techniques

Backward Chaining Algorithm Explained

Backward chaining is a reasoning technique used in Artificial Intelligence (AI) and expert systems. It starts from the goal (what we want to prove) and works backward to find facts or rules that support it. It is commonly used in logic programming and rule-based systems, most notably in Prolog.

Example: Simple Knowledge Base

Let’s consider a simple knowledge base (rule-based system):

Rules:

  • If it is raining, then the ground is wet. (Raining → WetGround)
  • If it is cloudy, then it is raining. (Cloudy → Raining)
  • It is cloudy. (Cloudy is a fact)

Goal: WetGround

We apply backward chaining:

  1. Is there a rule that concludes WetGround? ✔ Yes: Raining → WetGround
  2. Now we need to prove the new sub-goal: Raining
  3. Is there a rule that concludes Raining? ✔ Yes: Cloudy → Raining
  4. Now prove Cloudy: Is Cloudy a known fact? ✔ Yes!

✅ So, we’ve proven: Cloudy (fact) ⇒ Raining ⇒ WetGround

Conclusion: By backward chaining, we successfully proved the goal WetGround.

Backward Chaining Algorithm

In Python, with the knowledge base split into a set of known facts and a list of (premises, conclusion) rules:

def backward_chaining(goal, facts, rules):
    # Base case: the goal is already a known fact.
    if goal in facts:
        return True

    # Try every rule whose conclusion matches the goal.
    for premises, conclusion in rules:
        if conclusion == goal:
            # Recursively prove each premise as a sub-goal.
            if all(backward_chaining(p, facts, rules) for p in premises):
                return True

    return False
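
Running it on the knowledge base from the example above:

facts = {"Cloudy"}
rules = [(["Cloudy"], "Raining"), (["Raining"], "WetGround")]

print(backward_chaining("WetGround", facts, rules))  # True

Note that this minimal version can recurse forever if the rules contain cycles; practical implementations also track the set of goals currently being explored.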

Classical Planning & The Blocks World

Classical Planning is a type of automated planning in Artificial Intelligence (AI). It is used to find a sequence of actions (a plan) that leads from the initial state to the goal state.

The Blocks World Example

The Blocks World is a classic domain used in AI for testing planning algorithms.

You have a table and several blocks (say A, B, C). The blocks can be stacked on each other or placed on the table.

A Robotic Arm (Agent) Can Perform Actions Like:

  • Pick up a block.
  • Put down a block.
  • Stack a block on another.
  • Unstack a block from another.

Representation in Planning

States:

Described using logical predicates like:

  • On(A, B) → A is on B
  • OnTable(C) → C is on the table
  • Clear(A) → Nothing is on top of A
  • Holding(B) → Agent is holding B
  • HandEmpty → Agent’s hand is empty

Actions:

Each action has:

  • Preconditions
  • Effects
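
For example, the Stack action in the standard blocks-world formulation:

Stack(x, y)
    Preconditions: Holding(x), Clear(y)
    Effects: On(x, y), Clear(x), HandEmpty, ¬Holding(x), ¬Clear(y)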

Example: Initial and Goal States

Initial State:

  • On(A, B)
  • On(B, C)
  • OnTable(C)
  • Clear(A)
  • HandEmpty

(A is on B, B is on C, and C is on the table)

Goal State:

  • On(C, B)
  • On(B, A)
  • OnTable(A)
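
One plan that achieves this goal, traced by hand:

  1. Unstack(A, B): take A off B
  2. PutDown(A): A is now on the table and clear
  3. Unstack(B, C): take B off C
  4. Stack(B, A): B is now on A
  5. PickUp(C): lift C from the table
  6. Stack(C, B): C is now on B, completing the goal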

Quantifiers in Logic

Write appropriate quantifiers for the following statements:

  1. Some students read well
    ∃x(Student(x) ∧ ReadsWell(x))
  2. Some students like some books
    ∃x∃y(Student(x) ∧ Book(y) ∧ Likes(x,y))
  3. Some students like all books
    ∃x(Student(x) ∧ ∀y(Book(y) → Likes(x,y)))
  4. All students like some books
    ∀x(Student(x) → ∃y(Book(y) ∧ Likes(x,y)))
  5. All students like no books
    ∀x(Student(x) → ∀y(Book(y) → ¬Likes(x,y)))

Resolution Principle in AI

Resolution is an inference rule used for proof by refutation. To prove a goal from a knowledge base, we negate the goal, add it to the knowledge base, and show that the resulting set of clauses is unsatisfiable: if a contradiction (the empty clause) is derived, the original goal is proved.

Key Concepts:

  • Knowledge Base (KB): A set of known facts and rules, typically written in clausal form (CNF – Conjunctive Normal Form).
  • Goal: The statement to be proved. We negate the goal and add it to the KB.
  • Resolution Rule: Combines two clauses that contain complementary literals (e.g., P and ¬P) to produce a new clause by eliminating the complementary literals.
  • Refutation: If applying resolution repeatedly leads to an empty clause (⊥), the goal is proved true (i.e., a contradiction is found).

Resolution Rule (Propositional Logic):

Clause 1: A ∨ B
Clause 2: ¬B ∨ C
Resolution gives: A ∨ C (Because B and ¬B are complementary)

Example (Propositional Logic):

Given KB:

  • P → Q becomes ¬P ∨ Q
  • Q → R becomes ¬Q ∨ R

Goal: Prove P → R

Negated Goal:

¬(P → R) becomes P ∧ ¬R, which yields two clauses: P and ¬R

Add All Clauses to KB:

  • ¬P ∨ Q
  • ¬Q ∨ R
  • P
  • ¬R

Apply Resolution:

  1. (¬Q ∨ R) + (¬R) → ¬Q
  2. (¬P ∨ Q) + (¬Q) → ¬P
  3. (P) + (¬P) → empty clause (⊥) ✅

Conclusion: P → R is proved.
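
The same refutation can be carried out mechanically. Below is a minimal propositional resolution sketch in Python; the encoding (literals as strings, with ~ marking negation) and the helper names are choices made here purely for illustration:

from itertools import combinations

def resolve(c1, c2):
    # All resolvents of two clauses (frozensets of string literals).
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

def refutes(clauses):
    # Resolve until the empty clause appears (goal proved)
    # or no new clauses can be derived (goal not proved).
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True  # empty clause: contradiction found
                new.add(r)
        if new <= clauses:
            return False
        clauses |= new

# KB plus the negated goal: ¬P ∨ Q, ¬Q ∨ R, P, ¬R
print(refutes([{"~P", "Q"}, {"~Q", "R"}, {"P"}, {"~R"}]))  # True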

GraphPlan Algorithm Explained

GraphPlan is a planning algorithm introduced by Blum and Furst in 1995. It efficiently finds a sequence of actions (a plan) to reach a goal from a given initial state by constructing a planning graph.

It works for classical planning problems where the world is:

  • Fully observable
  • Deterministic
  • Static
  • Finite (a finite number of states)

Structure of a Planning Graph:

The planning graph is a level-based graph made of alternating layers:

  • Literal level L₀: the facts of the initial state
  • Action level A₀: the actions whose preconditions are all present in L₀
  • Literal level L₁: the facts that hold after applying those actions (plus every fact carried forward by persistence)
  • Action level A₁, literal level L₂, and so on…

The graph keeps expanding until:

  • all goal literals appear in the same literal level, and
  • no mutex (conflict) relations hold between them.

Steps of the GraphPlan Algorithm:

  1. Graph Expansion (Forward Phase): Extend the planning graph level by level until every goal literal appears in a literal level with no mutexes between the goals.
  2. Plan Extraction (Backward Phase): Search backward from that level for a mutex-free set of actions at each level that achieves the goals. If extraction fails, expand the graph one more level and retry; if the graph levels off without a plan, report failure.

Key Features of GraphPlan:

  • Compact Representation: Stores multiple possible paths efficiently.
  • Mutex Constraints: Helps prune invalid or conflicting actions.
  • Sound and Complete: any plan it returns is valid, and it finds a plan whenever one exists.
  • Efficient for STRIPS-style problems (the standard classical planning representation).

GraphPlan Algorithm:

GraphPlan(S₀, G, A):
    Initialize planning graph with L₀ = S₀
    i = 0

    while true:
        if G ⊆ Lᵢ and goals are not mutex:
            if ExtractPlan(G, i, Graph):
                return Plan
        if graph leveled off:
            return failure
        ExpandGraph(Graph, i)
        i = i + 1
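
The ExpandGraph step can be illustrated with a much-simplified Python sketch; the Action container is hypothetical, and mutex bookkeeping is omitted:

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset
    effects: frozenset

def expand_level(literals, actions):
    # Actions whose preconditions all appear at this literal level.
    applicable = [a for a in actions if a.preconditions <= literals]
    # Persistence ("no-op") actions carry every literal forward...
    next_literals = set(literals)
    # ...and each applicable action contributes its effects.
    for a in applicable:
        next_literals |= a.effects
    return applicable, next_literals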

Plan Search Approaches

This section details two primary approaches to searching for a plan in AI.

1. Forward State-Space Search

This approach starts from the initial state and applies actions to move forward through the state space until the goal state is reached.

How It Works:

  • Start with the initial state.
  • At each step, choose an action whose preconditions are satisfied in the current state.
  • Apply the action to generate a new state.
  • Repeat the process until a state that satisfies the goal conditions is found.

Search Methods Used:

  • Breadth-First Search (BFS)
  • Depth-First Search (DFS)
  • A* with heuristics

Advantages:

  • Intuitive and simple to implement.
  • Good for systems where the number of applicable actions is small at each state.

Disadvantages:

  • May explore many irrelevant or dead-end paths.
  • Can be inefficient if the state space is large.

Example:

In a robot planning domain:

  • Start with the robot at Room A.
  • Apply action Move(A → B) → robot is now in Room B.
  • Continue until the goal (e.g., robot delivers a package) is achieved.
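
A minimal Python sketch of forward search over STRIPS-style states, using BFS; the (preconditions, add-effects, delete-effects) action encoding is an assumption made here for illustration:

from collections import deque

def forward_search(initial, goal, actions):
    # BFS over states, where a state is a frozenset of facts.
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:          # all goal facts hold
            return plan
        for name, (pre, add, delete) in actions.items():
            if pre <= state:       # preconditions satisfied
                nxt = (state - delete) | add
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

# The robot example: Move(A,B) takes the robot from Room A to Room B.
actions = {"Move(A,B)": (frozenset({"At(A)"}),
                         frozenset({"At(B)"}),
                         frozenset({"At(A)"}))}
print(forward_search({"At(A)"}, frozenset({"At(B)"}), actions))  # ['Move(A,B)']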

2. Backward State-Space Search

This approach starts from the goal state and works backward, finding actions that could have produced the goal, and determining what preconditions must be met.

How It Works:

  • Start with the goal condition.
  • Find actions that could achieve parts of the goal.
  • Replace the goal with the preconditions of those actions.
  • Repeat until the initial state satisfies all required preconditions.

Search Methods Used:

Similar to forward search (DFS, BFS, A*, etc.), but in the reverse direction.

Advantages:

  • Focuses only on relevant actions that directly contribute to the goal.
  • Can be more goal-directed and efficient than forward search.

Disadvantages:

  • Requires computing regression of goals.
  • More complex if actions have many effects or preconditions.

Example:

To make coffee:

  • Goal: CoffeeReady
  • Find an action like BrewCoffee → CoffeeReady
  • Now set new sub-goals: HaveHotWater ∧ HaveCoffeePowder
  • Continue backward until the initial state can satisfy these sub-goals.
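
Under the same assumed action encoding as the forward sketch, here is a minimal regression-based sketch; each step replaces the goal facts an action achieves with that action's preconditions:

from collections import deque

def backward_search(initial, goal, actions):
    # BFS over goal sets, regressing through actions.
    start = frozenset(goal)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        g, plan = frontier.popleft()
        if g <= initial:           # initial state satisfies every sub-goal
            return plan
        for name, (pre, add, delete) in actions.items():
            # Relevant (achieves part of g) and consistent (deletes none of g).
            if (add & g) and not (delete & g):
                sub = (g - add) | pre   # regressed sub-goal
                if sub not in visited:
                    visited.add(sub)
                    frontier.append((sub, [name] + plan))
    return None

# The coffee example: BrewCoffee achieves CoffeeReady.
actions = {"BrewCoffee": (frozenset({"HaveHotWater", "HaveCoffeePowder"}),
                          frozenset({"CoffeeReady"}),
                          frozenset())}
print(backward_search(frozenset({"HaveHotWater", "HaveCoffeePowder"}),
                      {"CoffeeReady"}, actions))  # ['BrewCoffee']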

State Space Search in AI

State space search is a fundamental problem-solving technique in Artificial Intelligence (AI). It involves exploring a space of all possible states that can be reached from a given initial state by applying legal actions, in order to find a goal state.

Key Components of State Space Search:

  • Initial State: The starting point of the search.
  • Goal State(s): The desired outcome(s) the agent is trying to reach.
  • State Space: The set of all possible states reachable from the initial state through valid actions.
  • Actions / Operators: Actions that move the system from one state to another.
  • Transition Model: Describes which state results from applying a given action in a given state.
  • Path Cost (optional): Cost associated with a path (used in optimal search strategies).
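
For instance, in grid pathfinding: the initial state is the start cell; the actions are the four moves (up, down, left, right); the transition model maps a cell and a move to the adjacent cell; the goal state is the target cell; and the path cost is the number of moves taken.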

Types of State Space Search:

  • Breadth-First Search (BFS)
  • Depth-First Search (DFS)
  • Uniform Cost Search (UCS)
  • A* Search
  • Greedy Best-First Search
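
As a concrete instance of one strategy above, here is a minimal Uniform Cost Search sketch in Python; the successors callback, returning (action, next_state, step_cost) triples, is an interface assumed here for illustration:

import heapq
from itertools import count

def uniform_cost_search(start, is_goal, successors):
    tie = count()  # tie-breaker so the heap never compares states directly
    frontier = [(0, next(tie), start, [])]   # (cost, tie, state, plan)
    best = {start: 0}
    while frontier:
        cost, _, state, plan = heapq.heappop(frontier)  # cheapest node first
        if cost > best[state]:
            continue  # stale queue entry; a cheaper path was already found
        if is_goal(state):
            return plan, cost
        for action, nxt, step in successors(state):
            new_cost = cost + step
            if nxt not in best or new_cost < best[nxt]:
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, next(tie), nxt, plan + [action]))
    return None

# Toy graph: two routes from A to C; UCS finds the cheaper one.
graph = {"A": [("A→B", "B", 1), ("A→C", "C", 5)],
         "B": [("B→C", "C", 1)],
         "C": []}
print(uniform_cost_search("A", lambda s: s == "C", lambda s: graph[s]))
# (['A→B', 'B→C'], 2)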

Applications:

  • Pathfinding
  • Puzzle Solving
  • Automated Reasoning
  • Game Playing