Artificial Intelligence Fundamentals: Search, Agents, and Logic
Heuristic Search Algorithms Explained
A* Search Algorithm (A-Star)
A* (A-Star) is an informed search algorithm that finds the shortest path from a start node to a goal node using heuristics. It combines the advantages of Uniform Cost Search and Greedy Best-First Search.
A* Formula
f(n) = g(n) + h(n)
- g(n): Cost from the start node to the current node.
- h(n): Estimated cost from the current node to the goal (heuristic).
- f(n): Total estimated cost of the path through the current node.
Example of A* Search
Suppose you are navigating a map from city A to city B. The algorithm considers both the actual distance traveled and the estimated distance left. If a path has g(n) = 10 and h(n) = 5, then f(n) = 15. The algorithm chooses the node with the lowest f(n).
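Below is a minimal Python sketch of A* on a small weighted graph; the graph, edge costs, heuristic values, and the function name a_star are illustrative assumptions rather than figures from the example above.

```python
import heapq

def a_star(graph, h, start, goal):
    """Minimal A*: graph maps node -> {neighbor: edge_cost}, h maps node -> heuristic estimate."""
    # Priority queue ordered by f(n) = g(n) + h(n)
    open_heap = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for neighbor, cost in graph[node].items():
            new_g = g + cost                      # g(n): cost from the start node
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                new_f = new_g + h[neighbor]       # f(n) = g(n) + h(n)
                heapq.heappush(open_heap, (new_f, new_g, neighbor, path + [neighbor]))
    return None, float("inf")

# Hypothetical map: edge costs plus admissible heuristic estimates to goal "B"
graph = {"A": {"C": 4, "D": 3}, "C": {"B": 5}, "D": {"B": 8}, "B": {}}
h = {"A": 8, "C": 5, "D": 7, "B": 0}
print(a_star(graph, h, "A", "B"))   # -> (['A', 'C', 'B'], 9)
```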
Advantages of A*
- Guarantees the shortest path if the heuristic is admissible (never overestimates the cost).
- Efficient: the heuristic focuses the search toward the goal, so fewer nodes are expanded than in uninformed search.
Hill Climbing Algorithm
Hill Climbing is a local search algorithm that starts from an initial state and moves in the direction of increasing value (uphill) to find the peak or goal.
Steps of Hill Climbing
- Evaluate the initial state.
- Choose the best neighbor with a higher value.
- Move to the neighbor.
- Repeat until no better neighbor exists.
Example of Hill Climbing
For the function f(x) = -x² + 4x + 6, the algorithm starts at some value of x, evaluates neighboring values, and moves to whichever neighbor increases the function’s result. It stops at the peak, x = 2, where f(2) = 10 and no neighbor is better.
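As a sketch of these steps, the following Python snippet hill-climbs the same function; the starting point and step size are illustrative assumptions.

```python
def f(x):
    # Objective from the example: a parabola with its peak at x = 2, f(2) = 10
    return -x**2 + 4*x + 6

def hill_climb(start_x, step=0.1):
    x = start_x
    while True:
        # Evaluate the two neighbors of the current state
        neighbors = [x - step, x + step]
        best = max(neighbors, key=f)
        if f(best) <= f(x):      # no better neighbor: stop at the (local) peak
            return x, f(x)
        x = best                 # move uphill

print(hill_climb(start_x=-3.0))  # climbs toward x = 2, f(x) = 10
```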
Limitation
Can get stuck at local maxima, plateaus, or ridges.
Various Heuristic Search Techniques
Heuristic Search uses knowledge or “rules of thumb” to find solutions more efficiently than uninformed methods.
Types of Heuristic Search
- Greedy Best-First Search:
  - Uses h(n) to estimate the cost from the current node to the goal.
  - Chooses the node with the lowest h(n) value.
  - Example: In a city map, it selects the next city that appears closest to the destination.
- A* Search:
  - Uses f(n) = g(n) + h(n) for a more accurate search.
  - Finds the optimal path.
- Hill Climbing:
  - Chooses the neighbor with the best value.
  - Can be Simple, Steepest-Ascent, or Stochastic.
- Simulated Annealing:
  - Allows occasional downhill moves to escape local maxima.
  - Decreases the probability of accepting worse solutions over time.
- Genetic Algorithms:
  - Inspired by natural evolution.
  - Uses selection, crossover, and mutation.
  - Example: Optimization problems like scheduling or route planning (see the sketch after this list).
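To make the Genetic Algorithms entry concrete, here is a minimal Python sketch that evolves a bit string toward an all-ones optimum; the population size, mutation rate, and fitness function are illustrative assumptions.

```python
import random

TARGET_LEN = 20

def fitness(individual):
    # Fitness = number of 1s; the optimum is the all-ones string
    return sum(individual)

def crossover(a, b):
    # Single-point crossover of two parents
    point = random.randint(1, TARGET_LEN - 1)
    return a[:point] + b[point:]

def mutate(individual, rate=0.02):
    # Flip each bit with a small probability
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def genetic_algorithm(pop_size=30, generations=100):
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover + mutation to refill the population
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = genetic_algorithm()
print(best, fitness(best))   # usually at or close to 20 ones
```

Selection keeps the fitter half, crossover mixes two parents, and mutation flips occasional bits, mirroring the three operators listed above.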
Heuristic Strategy Comparison (Conceptual Diagram)
```
Start
  |
[Current Node]
  |--- h(n)          (Greedy Best-First)
  |--- g(n) + h(n)   (A* Search)
  |--- random        (Simulated Annealing)
  |--- fittest       (Genetic Algorithms)
```
Each heuristic technique chooses a different path from the current node depending on its strategy.
AI Agents and PEAS Representation
Types of Agents in AI
- Simple Reflex Agent: Responds to current percepts only.
- Model-Based Reflex Agent: Uses internal memory of past percepts (internal state).
- Goal-Based Agent: Chooses actions based on achieving defined goals.
- Utility-Based Agent: Selects actions based on a utility function (maximizing benefit).
- Learning Agent: Learns from the environment and improves over time.
PEAS Representation
PEAS stands for Performance Measure, Environment, Actuators, and Sensors.
PEAS for a Driverless Vehicle
- P (Performance Measure): Safety, speed, fuel efficiency, reaching destination.
- E (Environment): Roads, traffic, weather, pedestrians.
- A (Actuators): Engine, brake, steering, horn, indicator lights.
- S (Sensors): Cameras, GPS, radar, LiDAR, speedometer.
PEAS for a Smart Washing Machine
- P (Performance Measure): Cleanliness of clothes, energy saving.
- E (Environment): Type of fabric, water level, detergent.
- A (Actuators): Water inlet valve, motor, heater, buzzer.
- S (Sensors): Water level sensor, temperature sensor, load sensor.
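A PEAS description can also be written down as a simple data structure. The following Python sketch is a hypothetical container (not a standard API), populated with the driverless-vehicle example above.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Hypothetical container for a PEAS task-environment description."""
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

# The driverless-vehicle example, written as data
driverless_vehicle = PEAS(
    performance=["safety", "speed", "fuel efficiency", "reaching destination"],
    environment=["roads", "traffic", "weather", "pedestrians"],
    actuators=["engine", "brake", "steering", "horn", "indicator lights"],
    sensors=["cameras", "GPS", "radar", "LiDAR", "speedometer"],
)
print(driverless_vehicle.sensors)
```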
Challenges and Solutions in Hill Climbing
Problems Encountered
- Local Maximum: A state that is better than all of its neighbors but still lower than the global maximum.
- Plateau: A flat area where all neighboring states have the same value, giving the search no direction to move.
- Ridge: An area higher than its surroundings whose slope cannot be climbed by single moves in one direction, so reaching the top requires indirect (often diagonal) moves.
Solutions to Hill Climbing Problems
- Random Restart Hill Climbing: Start the search from multiple random positions.
- Simulated Annealing: Accept some worse (downhill) moves to escape local traps (see the sketch after this list).
- Sideways Moves: Allow moves to equally valued neighbors so the search can cross plateaus.
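Here is a minimal Python sketch of the simulated-annealing idea, reusing the earlier example function; the temperature schedule, neighbor step, and starting point are illustrative assumptions.

```python
import math, random

def f(x):
    # Same illustrative objective: peak at x = 2, f(2) = 10
    return -x**2 + 4*x + 6

def simulated_annealing(start_x, temp=10.0, cooling=0.95, steps=500):
    x = start_x
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)      # random neighbor
        delta = f(candidate) - f(x)
        # Always accept improvements; accept downhill moves with
        # probability exp(delta / temp), which shrinks as temp decreases
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        temp *= cooling                            # cool down over time
    return x, f(x)

print(simulated_annealing(start_x=-5.0))           # typically ends near x = 2, f(x) ≈ 10
```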
Steps of the Hill Climbing Algorithm (Review)
- Start with an initial solution.
- Evaluate the neighbor states.
- Choose the one with the highest value.
- Repeat until the goal is reached or no improvement is possible.
Knowledge Representation: Instances and Chaining
Instance and Is-a Relationship
- Instance Representation: An instance is a specific object belonging to a class. Example: “Socrates” is an instance of the class “Man”.
- Is-a Relationship: Defines a class-subclass relationship (inheritance). Example: “Dog is-a Animal” means Dog inherits the properties of Animal.
Forward Chaining vs. Backward Chaining
| Forward Chaining | Backward Chaining |
|---|---|
| Data-driven. | Goal-driven. |
| Starts with facts and applies rules to reach conclusions. | Starts with a goal and works backward to see if it can be proven. |
| Example: Fact: Man(Socrates). Rule: IF Man(x) THEN Mortal(x) (all men are mortal). Result: Mortal(Socrates). | Example: Goal: Mortal(Socrates). Checks: Is Socrates a man? Are all men mortal? |
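The following Python sketch contrasts the two directions on the Socrates example; the fact and rule encodings and the function names are illustrative assumptions, not a standard inference-engine API.

```python
# Facts are tuples like ("Man", "Socrates"); each rule maps a premise
# predicate to a conclusion predicate, e.g. IF Man(x) THEN Mortal(x).
facts = {("Man", "Socrates")}
rules = [("Man", "Mortal")]

def forward_chain(facts, rules):
    # Data-driven: apply rules to known facts until nothing new is derived
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

def backward_chain(goal, facts, rules):
    # Goal-driven: the goal is proven if it is a fact, or if some rule
    # concludes it and that rule's premise can itself be proven
    if goal in facts:
        return True
    pred, arg = goal
    return any(conclusion == pred and backward_chain((premise, arg), facts, rules)
               for premise, conclusion in rules)

print(forward_chain(facts, rules))                           # {('Man', 'Socrates'), ('Mortal', 'Socrates')}
print(backward_chain(("Mortal", "Socrates"), facts, rules))  # True
```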
Inference Rules in First-Order Logic (FOL)
(Note: The difference between Predicate Logic and Propositional Logic is detailed in the Knowledge Representation Approaches section.)
- Modus Ponens: If A → B is true and A is true, then B is true.
- Universal Instantiation: From ∀x P(x) (P is true for all x), infer P(a) for any constant a.
- Existential Instantiation: From ∃x P(x) (there exists an x such that P(x) is true), infer P(a) for a new constant a that does not appear elsewhere in the knowledge base.
- Resolution: A powerful inference rule, used primarily in automated theorem proving, that derives a new clause from two clauses containing complementary literals.
Knowledge Representation Necessity and Unification
Why Knowledge Representation is Necessary
AI systems need to store and process information effectively to make informed decisions. Knowledge Representation (KR) allows for reasoning, learning, and understanding.
Example of Reasoning:
- Rule: All humans are mortal.
- Fact: Socrates is human.
- Conclusion: Socrates is mortal.
Unification
Unification is a process in logic programming (like Prolog) used to make two logical statements or expressions match by finding a substitution for variables.
Example:
The expressions Father(x, John) and Father(Bob, John) unify with the substitution x = Bob.
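A minimal Python sketch of unification for terms like Father(x, John) follows; the convention that lowercase names are variables and the tuple encoding of terms are illustrative assumptions, and the occurs check is omitted for brevity.

```python
def is_variable(term):
    # Convention for this sketch: lowercase names are variables, capitalized names are constants
    return isinstance(term, str) and term[:1].islower()

def unify(term1, term2, subst=None):
    """Return a substitution dict that makes term1 and term2 identical, or None if impossible."""
    if subst is None:
        subst = {}
    if term1 == term2:
        return subst
    if is_variable(term1):
        return unify(subst[term1], term2, subst) if term1 in subst else {**subst, term1: term2}
    if is_variable(term2):
        return unify(term1, subst[term2], subst) if term2 in subst else {**subst, term2: term1}
    # Compound terms such as ("Father", "x", "John"): same functor and arity, unify arguments
    if isinstance(term1, tuple) and isinstance(term2, tuple) and len(term1) == len(term2):
        for a, b in zip(term1, term2):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(("Father", "x", "John"), ("Father", "Bob", "John")))   # {'x': 'Bob'}
```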
Approaches to Knowledge Representation (KR)
Major KR Approaches
- Logical Representation: Based on formal logic (e.g., predicate logic, propositional logic).
- Semantic Networks: Graph-based representation using nodes (objects/concepts) and edges (relationships).
- Frames: Data structures used for representing stereotyped situations or objects (similar to object-oriented programming); a minimal sketch follows this list.
- Production Rules: Knowledge represented as IF-THEN rules (used in expert systems).
- Ontologies: Formal naming and definition of concepts and their relationships within a domain.
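As a small illustration of the Frames approach (and of the is-a inheritance also used in Semantic Networks), here is a Python sketch using plain dictionaries; the frame names and slots are illustrative assumptions.

```python
# Each frame is a dict of slots; the "is_a" slot points to a parent frame
frames = {
    "Animal": {"alive": True, "can_move": True},
    "Dog":    {"is_a": "Animal", "legs": 4, "sound": "bark"},
    "Rex":    {"is_a": "Dog", "name": "Rex"},        # an instance of the class Dog
}

def get_slot(frame_name, slot):
    """Look up a slot value, inheriting along the is-a chain if it is not filled locally."""
    frame = frames.get(frame_name)
    while frame is not None:
        if slot in frame:
            return frame[slot]
        frame = frames.get(frame.get("is_a"))
    return None

print(get_slot("Rex", "sound"))   # 'bark'  (inherited from Dog)
print(get_slot("Rex", "alive"))   # True    (inherited from Animal)
```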
Key Differences Between Approaches
- Logic is suitable for formal, rigorous reasoning. (Predicate Logic allows variables and quantification, unlike Propositional Logic.)
- Semantic Networks are more intuitive and human-friendly for visualization.
- Frames are object-oriented and efficient for structured data.
- Production Rules are easy to implement for rule-based systems.
Properties of an Efficient KR System
- Representational Adequacy: The ability to represent all required knowledge.
- Inferential Adequacy: The ability to draw sound and relevant conclusions.
- Inferential Efficiency: The speed and computational cost of the reasoning process.
- Acquisitional Efficiency: The ease with which new knowledge can be acquired and integrated.
