Fundamentals of AI Planning, Probability, and Knowledge Representation

Core Concepts in AI Planning and Probability

Key Definitions in AI Systems

  • Plan/Action Sequence: A sequence of actions an agent follows to achieve a goal from the initial state.
  • State: A complete description of the environment at a specific time, representing all true conditions.
  • Mutex (Mutual Exclusion): A condition where two actions or states cannot occur simultaneously in a planning graph.
  • STRIPS: Stanford Research Institute Problem Solver; a formal language for defining actions in terms of their preconditions and effects.
  • Search Direction: Forward search starts from the initial state; backward search starts from the goal state.
  • Example Plan: Boil water → Add tea leaves → Add milk → Stir → Serve tea.
  • Uncertainty: Lack of complete, accurate information causing ambiguity in AI decision-making.
  • Conditional Probability: Probability of event A occurring given that event B has already occurred.
  • Bayes’ Rule: Formula to update the probability of a hypothesis based on prior knowledge and new evidence: P(A|B) = P(B|A) × P(A) / P(B).
  • Bayes’ Rule Calculation Example: Given P(A) = 0.2, P(B|A) = 0.7, and P(B) = 0.5, Bayes’ rule gives P(A|B) = (0.7 × 0.2) / 0.5 = 0.28 (see the sketch after this list).
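
As a quick check of the calculation above, here is a minimal Python sketch; the function name bayes_rule is ours, not part of any standard library.

```python
def bayes_rule(p_a: float, p_b_given_a: float, p_b: float) -> float:
    """Posterior P(A|B) = P(B|A) * P(A) / P(B)."""
    return (p_b_given_a * p_a) / p_b

# Values from the example above: P(A) = 0.2, P(B|A) = 0.7, P(B) = 0.5
print(bayes_rule(0.2, 0.7, 0.5))  # ≈ 0.28
```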

Understanding Probability in AI

Prior Probability

Prior Probability represents our belief about an event before any evidence is considered. It is a baseline estimate based on previous knowledge or historical data. For example, if meteorological data suggests a 30% chance of rain, that becomes the prior probability. Prior probability is independent of any new or current observation.

Conditional Probability

Conditional Probability is the probability of an event occurring given that another event has already occurred. It is denoted as P(A|B) and calculated using the formula P(A|B) = P(A ∩ B) / P(B).
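
A minimal sketch of this formula for a rain/wet-grass pair of events; the probability values are illustrative assumptions, not from the text.

```python
# Conditional probability from a joint and a marginal (illustrative values).
p_rain_and_wet = 0.25          # P(Rain ∩ Wet Grass)
p_wet = 0.40                   # P(Wet Grass)

p_rain_given_wet = p_rain_and_wet / p_wet   # P(Rain | Wet Grass)
print(p_rain_given_wet)        # ≈ 0.625
```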

Joint Probability

Joint Probability represents the probability of two or more events occurring simultaneously. For two events A and B, it is written as P(A ∩ B). For example, P(Rain ∩ Wet Grass) represents the probability that both rain occurred and the grass is wet. Joint probabilities are often calculated using the chain rule in Bayesian networks.
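
A minimal sketch of the chain rule for a two-node Rain → Wet Grass model; the probability values are assumed for illustration.

```python
# Chain rule for a two-node network Rain -> Wet Grass (illustrative values).
# P(Rain ∩ Wet Grass) = P(Rain) * P(Wet Grass | Rain)
p_rain = 0.3
p_wet_given_rain = 0.9

p_rain_and_wet = p_rain * p_wet_given_rain
print(p_rain_and_wet)   # ≈ 0.27
```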

Knowledge Representation Challenges

Critical Issues in Knowledge Representation (KR)

Knowledge Representation (KR) in AI involves organizing information so that machines can process and reason with it effectively. However, representing knowledge becomes challenging due to several critical issues:

  • Complexity

    Complexity arises because real-world scenarios involve numerous interconnected variables and rules. For instance, representing human language or environmental data requires handling a vast amount of interrelated detail.

  • Uncertainty

    Uncertainty is a major issue since not all information is complete or accurate. AI systems often rely on probabilistic models like Bayesian Networks to handle incomplete or noisy data, such as medical diagnoses where symptoms may not fully match any known disease.

  • Ambiguity

    Ambiguity occurs when information can be interpreted in multiple ways. For example, the word “bank” could mean a financial institution or the side of a river, depending on context.

  • Incompleteness

    Incompleteness arises when not all of the information needed for decision-making is available, so the system must still reason and act with partial knowledge.

  • Scalability

    Scalability becomes a problem when the knowledge base grows too large to process efficiently.

Effective KR often combines logic-based methods, probabilistic reasoning, and semantic networks to address these challenges and support robust AI reasoning under uncertainty.

Advanced AI Planning Techniques

Hierarchical Planning

Hierarchical Planning simplifies complex problem-solving by breaking down large tasks into smaller, manageable subtasks. This method reflects real-world problem-solving, where tasks are not performed all at once but are decomposed into layers of subgoals and primitive actions.

At the highest level, the system identifies the overall goal, which is then decomposed into subgoals using techniques such as Hierarchical Task Networks (HTNs). For instance, the goal “Organize a conference” may break into “Book venue,” “Invite speakers,” and “Arrange catering.” Each subtask can be further decomposed until reaching executable low-level actions.
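
A minimal HTN-style sketch of this decomposition; the method table and the extra primitive steps under “Book venue” are illustrative assumptions.

```python
# HTN-style decomposition sketch (the method table is an illustrative assumption).
methods = {
    "Organize a conference": ["Book venue", "Invite speakers", "Arrange catering"],
    "Book venue": ["Shortlist venues", "Sign contract"],
}

def decompose(task: str) -> list[str]:
    """Recursively expand a task until only primitive actions remain."""
    if task not in methods:            # no decomposition method: primitive action
        return [task]
    plan = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose("Organize a conference"))
# ['Shortlist venues', 'Sign contract', 'Invite speakers', 'Arrange catering']
```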

Hierarchical planning is advantageous because it allows abstraction, where the planner focuses on high-level objectives without initially worrying about detailed steps. It also promotes reusability since subplans can be reused for similar tasks across different planning problems. Furthermore, it manages complexity efficiently by handling smaller pieces of the problem separately, making planning computationally feasible even for large domains.

STRIPS (Stanford Research Institute Problem Solver)

STRIPS is one of the earliest AI planning languages. In STRIPS, each action is defined by its preconditions (conditions that must be true before the action can execute) and effects (how the state changes after the action, typically expressed as an add list of facts that become true and a delete list of facts that become false). STRIPS assumes a deterministic and fully observable environment. For example, an action “Pick up object” may have preconditions “Object is on the table” and “Hand is empty”, and the effects would update the state to “Holding object”.
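
A minimal sketch of the “Pick up object” action in this style; representing states as Python sets of facts and splitting effects into add/delete sets is our simplification for illustration.

```python
# STRIPS-style action applied to a state represented as a set of facts.
# The set-of-strings encoding is a simplification for illustration.
action = {
    "preconditions": {"Object is on the table", "Hand is empty"},
    "add": {"Holding object"},
    "delete": {"Object is on the table", "Hand is empty"},
}

def apply_action(state: set[str], action: dict) -> set[str]:
    """Apply the action only if all of its preconditions hold in the state."""
    if not action["preconditions"] <= state:
        raise ValueError("Preconditions not satisfied")
    return (state - action["delete"]) | action["add"]

state = {"Object is on the table", "Hand is empty"}
print(apply_action(state, action))   # {'Holding object'}
```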

ADL (Action Description Language)

ADL extends STRIPS to handle more complex planning problems. It introduces conditional effects, quantifiers, and logical connectives, making it more expressive. ADL can describe actions where certain effects occur only under specific conditions, making it suitable for real-world domains that are more complicated than STRIPS can handle.
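
One way to sketch a conditional effect on top of the set-based representation above; the facts and conditions used here are illustrative assumptions.

```python
# ADL-style conditional effects: each effect fires only when its condition
# holds in the current state. All facts below are illustrative.
def apply_conditional(state: set[str],
                      effects: list[tuple[set[str], set[str], set[str]]]) -> set[str]:
    """Each effect is a (condition, add set, delete set) triple."""
    new_state = set(state)
    for condition, add, delete in effects:
        if condition <= state:                  # condition checked against the old state
            new_state = (new_state - delete) | add
    return new_state

# "Move object outside": the object gets wet only if it is raining.
effects = [
    (set(), {"Object is outside"}, {"Object is inside"}),   # unconditional effect
    ({"It is raining"}, {"Object is wet"}, set()),           # conditional effect
]
state = {"Object is inside", "It is raining"}
print(apply_conditional(state, effects))
# contains: 'Object is outside', 'It is raining', 'Object is wet'
```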

Partial-Order Planning

Partial-Order Planning allows some actions to be unordered, meaning they can occur in any sequence as long as dependencies are respected. This flexibility enables the planner to exploit parallelism and generate more efficient plans, especially in domains where multiple agents or concurrent actions exist.
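
A minimal sketch using the tea-making steps from the example plan above: only the ordering constraints listed below are required, so “Add tea leaves” and “Add milk” remain unordered with respect to each other (the particular constraints are our assumption). Any topological order of the constraint graph is a valid linearization of the partial-order plan.

```python
# Partial-order sketch: each key maps a step to the steps that must precede it.
# The specific constraints chosen here are illustrative assumptions.
from graphlib import TopologicalSorter   # standard library, Python 3.9+

constraints = {
    "Add tea leaves": {"Boil water"},
    "Add milk": {"Boil water"},
    "Stir": {"Add tea leaves", "Add milk"},
    "Serve tea": {"Stir"},
}

# One valid linearization of the partial order:
print(list(TopologicalSorter(constraints).static_order()))
# e.g. ['Boil water', 'Add tea leaves', 'Add milk', 'Stir', 'Serve tea']
```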