AI Search Strategies, Logic, and Rational Agents

AI Perspectives and Rational Foundations

Question A: The Four Perspectives of AI

Question: As seen in class, AI can be framed using four perspectives. Recall and define them. Discuss how they complement or contradict each other. Which one provides the most practical foundation for designing intelligent systems and why?

Answer: The four AI perspectives are:

  • Thinking Humanly: AI systems model human cognitive processes.
  • Acting Humanly: AI behaves like humans (the Turing Test approach).
  • Thinking Rationally: AI uses logical reasoning based on the “laws of thought.”
  • Acting Rationally: AI chooses actions that maximize goal achievement.

Human-centric approaches focus on imitating people, while rational approaches focus on optimal decision-making. Because human behavior is not always rational, these perspectives may conflict. The most practical foundation is Acting Rationally, because intelligent agents are designed to select the best action to achieve goals under given constraints.

Question B: Rationality and Information

Question: A rational agent may choose an action that later seems mistaken. Explain how this contradiction may occur with an example.

Answer: A rational agent selects the best action based on the information available at the time. If new information later appears, the earlier action may seem wrong, but it was still rational given the agent’s knowledge.

Example: A self-driving car crosses an intersection because sensors detect no traffic. If another car suddenly appears unexpectedly, the earlier decision may seem mistaken, but it was rational based on the available data.

Search Algorithm Analysis and Comparison

Question C: UCS vs. Greedy Best-First Search

Question: Analyze the trade-offs between Uniform-Cost Search and Greedy Best-First Search, and explain why A* often provides a better balance.

Answer:

  • Uniform-Cost Search (UCS): Expands nodes with the lowest path cost g(n). It is complete and optimal but can expand many nodes.
  • Greedy Best-First Search: Expands the node with the smallest heuristic h(n). It is faster but not guaranteed to be optimal.
  • A* Search: Uses f(n) = g(n) + h(n). It balances real cost and estimated cost. With an admissible heuristic, it is both complete and optimal.

Thus, A* provides a better balance between efficiency and optimality.

Question D: Adversarial Search

Question: Explain why adversarial search is needed instead of standard search. Discuss implications for objectives, state evaluation, and action strategies.

Answer: Adversarial search is used when multiple agents compete, and one agent’s success depends on the opponent’s actions.

  • Search objective: Maximize utility while minimizing the opponent’s advantage.
  • State evaluation: Consider the opponent’s possible responses.
  • Strategy: Use algorithms like Minimax.

Search Algorithm Enumeration and Properties

Question A: Depth-First Search (DFS)

Question: Use DFS to enumerate all paths in the fringe/frontier visited before finding node G from S.

Answer: DFS explores the deepest branch first using a stack (LIFO frontier). Nodes are expanded along one path until a leaf or the goal is reached, then the search backtracks. Property: DFS uses little memory and may reach the goal quickly, but it is not optimal and is not complete in infinite-depth spaces.
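The stack-based expansion can be sketched in Python. The graph below (nodes S, A, B, G) is a hypothetical stand-in for the figure, not the actual exam tree:

```python
def dfs_path(graph, start, goal):
    """Return the first path to goal found by depth-first search."""
    stack = [[start]]                      # frontier holds whole paths (LIFO)
    while stack:
        path = stack.pop()                 # deepest path is expanded first
        node = path[-1]
        if node == goal:
            return path
        # reversed() keeps left-to-right expansion order on the stack
        for succ in reversed(graph.get(node, [])):
            if succ not in path:           # avoid cycles along the current path
                stack.append(path + [succ])
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
print(dfs_path(graph, "S", "G"))  # ['S', 'A', 'G']
```

Note the returned path is simply the first one found down the leftmost branch, which illustrates why DFS gives no optimality guarantee.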

Question B: Breadth-First Search (BFS)

Question: Use BFS to enumerate paths visited before reaching G from S.

Answer: BFS explores nodes level by level using a queue. All nodes at depth d are expanded before nodes at depth d+1. Property: BFS is complete and optimal when step costs are equal.

Question C: Uniform-Cost Search (UCS)

Question: Use UCS to enumerate paths visited before reaching G.

Answer: UCS expands the node with the lowest path cost g(n) first. Properties: Complete and optimal when costs are positive, though it may expand many nodes.
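A minimal UCS sketch with a priority queue ordered by g(n); the weighted graph is a hypothetical example chosen so that the cheaper path is not the one with fewer edges explored first:

```python
import heapq

def ucs(graph, start, goal):
    """Expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]       # (g, node, path), min-heap on g
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                 # first goal popped is cheapest
        for succ, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
print(ucs(graph, "S", "G"))  # (5, ['S', 'B', 'G']) — beats S→A→G at cost 6
```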

Question D: Greedy Best-First Search

Question: Use Greedy Best-First Search.

Answer: Greedy Best-First Search expands the node with the lowest heuristic value h(n). Properties: Often faster, but not guaranteed to be optimal, and may follow misleading heuristics.
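A sketch of greedy best-first search on a hypothetical graph whose heuristic deliberately misleads the search, to show the loss of optimality:

```python
import heapq

def greedy_bfs(graph, h, start, goal):
    """Expand the node with the smallest heuristic value h(n)."""
    frontier = [(h[start], start, [start])]   # min-heap ordered by h only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ, _step in graph.get(node, []):
            if succ not in visited:
                heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None

# h(A) < h(B) lures the search down the expensive branch.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 8)], "B": [("G", 1)]}
h = {"S": 5, "A": 1, "B": 4, "G": 0}
print(greedy_bfs(graph, h, "S", "G"))  # ['S', 'A', 'G'] — cost 9, not the optimal 5
```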

Question E: A* Algorithm

Question: Use the A* algorithm.

Answer: A* expands the node with the lowest evaluation function: f(n) = g(n) + h(n), where g(n) is the cost from the start to the node and h(n) is the estimated cost to the goal. With an admissible heuristic, A* is complete and optimal.
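The same misleading graph used for greedy search, but with an admissible heuristic and the f(n) = g(n) + h(n) ordering, recovers the optimal path (graph and heuristic values are illustrative assumptions):

```python
import heapq

def astar(graph, h, start, goal):
    """Expand by f(n) = g(n) + h(n); optimal when h never overestimates."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for succ, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(frontier,
                               (new_g + h[succ], new_g, succ, path + [succ]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 8)], "B": [("G", 1)]}
h = {"S": 5, "A": 1, "B": 1, "G": 0}   # admissible: h(n) ≤ true cost to G
print(astar(graph, h, "S", "G"))  # (5, ['S', 'B', 'G'])
```

Here f(B) = 4 + 1 = 5 beats the cost of finishing through A (f = 9), so A* corrects the greedy mistake while expanding far fewer nodes than UCS would on larger graphs.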

Question F: Performance Comparison

Question: Analyze the performance of the algorithms in terms of search effort, solution cost, and efficiency.

AlgorithmSearch EffortOptimalEfficiency
DFSLow memory; may explore deep pathsNot optimalFast in some cases
BFSExplores many nodesOptimal (equal costs)High memory usage
UCSExpands nodes by costOptimalSlower than heuristic search
Greedy BFSFew nodes expandedNot optimalVery fast
A*Balanced expansionOptimal (with admissible heuristic)Efficient with good heuristics

Intelligent Agent Lifecycle and Performance

Question 1: Agent Life Cycle Pseudo-code

Question: Use pseudo-code to write the life cycle of an intelligent agent.

For life do
  percept ← ReadSensors()
  BestAction ← argmax over actions a of Utility(percept, a)
  ExecuteAction(BestAction)
  UpdateInternalState(percept, BestAction)
  If goal reached then break
End
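One possible concretization of this percept–decide–act cycle in Python; the `ToyEnv` interface, its method names, and the 1-D world are assumptions for illustration, not part of the original answer:

```python
def agent_loop(env, utility, actions, max_steps=100):
    """Sense, pick the utility-maximizing action, act, repeat until the goal."""
    state = None
    for _ in range(max_steps):                           # "For life do"
        percept = env.read_sensors()                     # ReadSensors
        best = max(actions, key=lambda a: utility(percept, a))  # argmax
        env.execute(best)                                # ExecuteAction
        state = (percept, best)                          # UpdateInternalState
        if env.goal_reached():                           # terminal test
            break
    return state

class ToyEnv:
    """Hypothetical 1-D world: the agent must walk from position 0 to 3."""
    def __init__(self): self.pos = 0
    def read_sensors(self): return self.pos
    def execute(self, action): self.pos += action
    def goal_reached(self): return self.pos == 3

env = ToyEnv()
# The utility rewards actions that move the agent closer to position 3.
agent_loop(env, utility=lambda p, a: -abs((p + a) - 3), actions=[-1, 0, 1])
print(env.pos)  # 3
```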

Question 2: Measuring Performance

Question: How is performance measured for pathfinding agents, and which parameters affect it?

Answer: Performance is measured by time complexity and space complexity. The main parameters are the branching factor (b) and the search depth (d).

Question 3: Algorithm Identification

Question: Identify the search algorithm used for each tree (G1–G5) and whether it found the optimal path.

| Tree | Algorithm | Optimal Path                    |
|------|-----------|---------------------------------|
| G1   | BFS       | No                              |
| G2   | A*        | No                              |
| G3   | DFS       | No                              |
| G4   | A*        | Yes (A → D → C → G, cost = 5)   |
| G5   | UCS       | Yes (A → D → C → G, cost = 5)   |

Adversarial Search: Minimax and Pruning

Question 4: Game Value and Pruning

Question: What is the value of the game and which branches are pruned?

Answer: Game value = 8. Pruned branches = d4, c4, d10, c8.

Minimax Tree: Value of the Game

The value of the game is obtained by propagating values from the leaves to the root using Minimax rules:

  • MIN node: Choose the smallest child value.
  • MAX node: Choose the largest child value.

Repeat this process level by level until reaching the root. The value at the root is the value of the game.

Alpha–Beta Pruning

Pruning happens when the following condition is met: α ≥ β.

  • α (alpha): The best value found so far for MAX.
  • β (beta): The best value found so far for MIN.

When this condition occurs, the remaining unexplored branches of that node are pruned because they cannot affect the final decision.
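The cutoff can be sketched in a few lines of Python; the tree below is a small illustrative example, not the exam's game tree:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning; leaves are numbers,
    internal nodes are lists of children."""
    if not isinstance(node, list):       # leaf: return its utility
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # pruning condition from the text
                break                    # remaining siblings cannot matter
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# MAX root over two MIN nodes: the left MIN node yields 3, so once the
# right MIN node sees the leaf 2 (beta = 2 ≤ alpha = 3), leaves 4 and 6
# are pruned.
print(alphabeta([[3, 12, 8], [2, 4, 6]]))  # 3
```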

First-Order Logic and Knowledge Representation

Question 5: Logic Translations

  • (a) Every coin in my pocket is a dime: ∀x (Coin(x) ∧ In(x, Pocket) → Dime(x))
  • (b) Some coin on the table is a dime: ∃x (Coin(x) ∧ On(x, Table) ∧ Dime(x))
  • (c) Not all coins in my pocket are dimes: ∃x (Coin(x) ∧ In(x, Pocket) ∧ ¬Dime(x))
  • (d) None of the coins on the table are dimes: ∀x (Coin(x) ∧ On(x, Table) → ¬Dime(x))
  • (e) Nothing on my desk escapes my attention. There is a computer on my desk. Therefore there is a computer that does not escape my attention: ∀x (OnDesk(x) → ¬EscapesAttention(x)); ∃x (Computer(x) ∧ OnDesk(x)); therefore ∃x (Computer(x) ∧ ¬EscapesAttention(x))
  • (f) All of Karl’s friends like at least one of Imre’s neighbours: ∀x (Friend(Karl, x) → ∃y (Neighbor(Imre, y) ∧ Like(x, y)))
  • (g) Everyone who dances ballet is the child of someone who dances ballet: ∀x (BalletDancer(x) → ∃y (Parent(y, x) ∧ BalletDancer(y)))

Logical Entailment in Knowledge Bases

Question 4: Knowledge Base Analysis

Question: Let the Knowledge Base be: A → B, C → D, A ∨ C, ¬B ∨ ¬D. Does KB ⊨ (¬A ∧ C)?

Answer: No, KB ⊭ (¬A ∧ C). A sentence is entailed only if it is true in every model of the KB, and the valuation A = true, B = true, C = false, D = false satisfies all four KB sentences (A → B, C → D, A ∨ C, and ¬B ∨ ¬D, since D is false) while making ¬A ∧ C false. Note that A and C cannot both be true, because then B and D would both hold, contradicting ¬B ∨ ¬D; but A ∨ C only requires one of them, so the KB does not force ¬A ∧ C.
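With only four propositional symbols, the claim can be checked exhaustively by enumerating all 16 truth assignments. A minimal sketch (encoding p → q as ¬p ∨ q):

```python
from itertools import product

def kb(A, B, C, D):
    """KB: A→B, C→D, A∨C, ¬B∨¬D, with implication as ¬p∨q."""
    return (not A or B) and (not C or D) and (A or C) and (not B or not D)

# KB ⊨ φ iff φ holds in every model of the KB.
entailed = all(((not A) and C)
               for A, B, C, D in product([False, True], repeat=4)
               if kb(A, B, C, D))
print(entailed)  # False — e.g. A=T, B=T, C=F, D=F is a model falsifying ¬A ∧ C
```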