Artificial Intelligence: A Comprehensive Guide to AI Concepts and Algorithms

What is AI? Explain the intelligent behaviour involved in AI.
: Artificial Intelligence is a branch of Computer Science that pursues creating computers
or machines as intelligent as human beings. It can be described as an attempt to build machines
that, like humans, can think and act, and that are able to learn and use knowledge to solve
problems on their own.
Intelligence relates to tasks involving higher mental processes, e.g. creativity, solving
problems, pattern recognition, classification, learning, induction, deduction, building analogies,
optimization, language processing, knowledge and many more. Intelligence is the computational
part of the ability to achieve goals.
Intelligent behaviour is depicted by perceiving one’s environment, acting in complex
environments, learning and understanding from experience, reasoning to solve problems and
discover hidden knowledge, applying knowledge successfully in new situations, thinking
abstractly, using analogies, communicating with others and more.


A Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether
or not a computer is capable of thinking like a human being. AI here is viewed as the art of
creating machines that perform functions requiring intelligence when performed by people; it is
the study of how to make computers do things at which, at the moment, people are better. The
focus is on acting humanly, not on intelligent behaviour centred around a representation of the
world.
Example: Turing Test
 3 rooms contain a person, a computer and an interrogator.
 The interrogator can communicate with the other two only by teletype (so that the
machine cannot imitate the appearance or voice of the person).
 The interrogator tries to determine which is the person and which is the machine.
 The machine tries to fool the interrogator into believing that it is the human, and the
person also tries to convince the interrogator that he or she is the human.
 If the machine succeeds in fooling the interrogator, we conclude that the machine is
intelligent.


Differentiate Between Fully Observable Environment & Partially Observable
Environment.
Ans: When an agent's sensors can sense or access the complete state of the environment at each
point in time, the environment is said to be fully observable; otherwise it is partially observable.
 A fully observable environment is easy to deal with, as there is no need to keep track of
the history of the surroundings.
 An environment is called unobservable when the agent has no sensors at all.
 An environment might be partially observable because of noisy and inaccurate sensors, or
because parts of the state are simply missing from the sensor data.
 Examples:
o Chess – the board is fully observable, so are the opponent’s moves.
o Driving – the environment is partially observable because what’s around the
corner is not known


With the help of Diagram explain Learning Based Agents?
Ans: Artificial intelligence is defined as the study of rational agents. A rational agent could be
anything that makes decisions, such as a person, firm, machine, or software. It carries out the
action with the best outcome after considering past and current percepts (the agent's perceptual
inputs at a given instance). An AI system is composed of an agent and its environment. The
agents act in their environment, which may contain other agents. An agent is anything that can
perceive its environment through sensors and can act upon that environment through actuators.
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability :
 Simple Reflex Agents
 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
 Learning Agent
A learning agent in AI is the type of agent which can learn from its past experiences, i.e. it has
learning capabilities. It starts by acting with basic knowledge and is then able to act and adapt
automatically through learning. A learning agent has four main conceptual components, which
are:
1. Learning element: It is responsible for making improvements by learning from the
environment.
2. Critic: The learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting an external action.
4. Problem generator: This component is responsible for suggesting actions that will lead
to new and informative experiences.
Hence, learning agents are able to learn, analyze their performance, and look for new ways to
improve it.
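The four components above can be sketched in code. This is a minimal illustrative sketch: the toy task, the rewards, and all class and variable names are assumptions, not a standard API.

```python
import random

class Critic:
    """Gives feedback: how the last outcome compares to a fixed performance standard."""
    def __init__(self, standard):
        self.standard = standard
    def feedback(self, outcome):
        return outcome - self.standard          # positive means above the standard

class PerformanceElement:
    """Selects the external action with the highest learned weight."""
    def __init__(self, actions):
        self.weights = {a: 0.0 for a in actions}
    def select(self):
        return max(self.weights, key=self.weights.get)

class LearningElement:
    """Improves the performance element using the critic's feedback."""
    def improve(self, performance, action, feedback):
        performance.weights[action] += 0.1 * feedback

class ProblemGenerator:
    """Suggests exploratory actions that lead to new, informative experiences."""
    def suggest(self, actions):
        return random.choice(actions)

# Hypothetical environment: action "b" yields a better outcome than "a".
random.seed(1)
actions, rewards = ["a", "b"], {"a": 0.0, "b": 1.0}
perf, critic = PerformanceElement(actions), Critic(0.5)
learner, generator = LearningElement(), ProblemGenerator()
for step in range(50):
    # every 5th step the problem generator explores; otherwise exploit learned weights
    act = generator.suggest(actions) if step % 5 == 0 else perf.select()
    learner.improve(perf, act, critic.feedback(rewards[act]))
best = perf.select()
print(best)    # the agent learns to prefer "b"
```

The loop shows the cycle: the performance element acts, the critic scores the outcome, the learning element adjusts, and the problem generator occasionally forces exploration.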


Explain the Concept of Rationality.
Ans: An ideal rational agent is one which is capable of performing the expected actions to
maximize its performance measure, on the basis of its percept sequence and its built-in
knowledge base. The rationality of an agent depends on the following −
• Performance measure: When an agent is placed in an environment, it generates a
sequence of actions according to the percepts it receives. If the sequence is desirable,
then the agent has performed well.
• Prior knowledge and actions: the agent's prior knowledge of the environment, and the
actions that the agent can perform.
• Omniscience: an omniscient agent knows the actual outcome of its actions and can act
accordingly, but omniscience is impossible in reality.
• Learning and autonomy: a vacuum-cleaner agent that learns to predict where and when
additional dirt will appear will do better than one that does not.
A rational agent always performs the right action, where the right action means the action
that causes the agent to be most successful for the given percept sequence. The problem the
agent solves is characterized by Performance measure, Environment, Actuators, and
Sensors (PEAS).


Explain AI Problem solving methodology in detail.
Ans: Problem-solving is commonly known as the method of reaching a desired goal or finding a
solution to a given situation. In computer science, problem-solving refers to artificial intelligence
techniques, including forming efficient algorithms, using heuristics, and performing root cause
analysis to find desirable solutions. In Artificial Intelligence, problems can be solved by
performing logical algorithms, utilizing polynomial and differential equations, and executing
them using modelling paradigms.
 Problem: A question to be solved; defining it means specifying the start state, the goal
state, and the other intermediate states.
 Search Space: The complete set of states, including the start and goal states, in which the
answer to the problem is to be searched.
 Search: The process of finding the solution in the search space.
 Well-defined Problem: A problem in which the initial state or starting position and the
goal state are clearly specified, and a unique solution can be shown to exist.
 Solution of the Problem: A path from the initial state to the goal state, ideally an optimal
path.


The 8-puzzle problem belongs to the category of “sliding-block puzzle” types of
problems. It is described as follows:
It has a 3×3 board with 9 block spaces, of which 8 blocks have tiles bearing the numbers
1 to 8. One space is left blank. A tile adjacent to the blank space can move into it. The task
is to arrange the tiles in a given sequence. The start state is any arrangement of the tiles, and
the goal state is the tiles arranged in a specific sequence. The solution to this problem is the
order of "movements of tiles" needed to reach the goal state. The transition function, or legal
move, is moving any one tile by one space in any direction (left, right, up or down) if that
space is blank. The data structure representing a state can be a 9-element vector indicating
the tile in each board position. Hence, the starting state corresponding to the above
configuration is (1, blank, 4, 6, 5, 8, 2, 3, 7) (there can be various different start positions).
The goal state is (1, 2, 3, 4, 5, 6, 7, 8, blank). The possible outcomes after applying a move
can be many; they are represented as a tree, called the state space tree. The depth of the
tree depends upon the number of steps in the solution.
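The 9-element vector representation and the legal-move rule above can be sketched in code; representing the blank as 0 is an assumption made for convenience.

```python
def successors(state):
    """Yield the states reachable by sliding one adjacent tile into the blank (0)."""
    i = state.index(0)                      # position of the blank
    row, col = divmod(i, 3)
    moves = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]         # swap blank with the adjacent tile
            moves.append(tuple(s))
    return moves

start = (1, 0, 4, 6, 5, 8, 2, 3, 7)         # the start configuration above, blank as 0
print(len(successors(start)))                # blank on the top edge -> 3 legal moves
```

Each call generates one level of the state space tree; searching the tree means applying `successors` repeatedly from the start state.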



Describe the Production System with an example.
Ans: A production system is based on a set of rules about behavior. These rules are a basic
representation found helpful in expert systems, automated planning, and action selection. It also
provides some form of artificial intelligence. It helps in structuring the AI program in a way that
facilitates describing and performing the search process. A production system or production rule
system is a computer program typically used to provide some form of artificial intelligence, which
consists primarily of a set of rules about behavior but it also includes the mechanism necessary to
follow those rules as the system responds to states of the world.
• Production system consist of the following components:
 Set of rules: Rules of the form C --> A, where the LHS (C) states the condition under
which the rule is applicable and the RHS (A) describes the action or operation to be
performed.
 Knowledge Base: Contains all appropriate information of a particular task which may be
permanent or may pertain only to the solution of current problem.
 Control Strategy: Helps to decide which rule to apply next. If the wrong control strategy
is applied it may be possible that a solution is never obtained even if it exists in the
database.
 Rule applier: Checks the applicability of a rule (whether to apply it or not) by matching
the current state with the LHS, and finds the appropriate rule from the database.
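A minimal sketch of these components, with rules as condition-action pairs over a working state and a first-match control strategy; the toy task and all names are illustrative assumptions.

```python
def run(state, rules, goal, max_steps=100):
    """Apply production rules until the goal test succeeds or no rule matches."""
    for _ in range(max_steps):
        if goal(state):
            return state
        for condition, action in rules:      # control strategy: first matching rule wins
            if condition(state):
                state = action(state)        # rule applier fires the RHS action
                break
        else:
            return None                      # no rule applies: search fails
    return None

# Hypothetical toy task: count up to 5 by repeatedly applying an "increment" rule.
rules = [(lambda s: s < 5, lambda s: s + 1)]
print(run(0, rules, goal=lambda s: s == 5))   # -> 5
```

A different control strategy (e.g. choosing the most specific matching rule) can change which solution is found, which is why the choice of strategy matters.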


Explain the Nature of AI problems in detail.
Ans: Problems can be of several types, and the solution for a particular problem depends on the
nature of the problem. The following are the natures of AI problems:
 Path Finding Problems: Example is Traveling salesperson problem
• Decomposable Problems: Problem broken into small sub problems & the solution of the
main problem is obtained by joining the solution of sub problem.
Example: ∫(x³ + 3x² + sin²x + cos²x)dx, which splits into the sub-integrals ∫x³dx, ∫3x²dx,
and ∫(sin²x + cos²x)dx = ∫1 dx
• Recoverable Problems: Example is 8-Puzzle problem
• Predictable Problems: Example is 8-Puzzle problem
• Problems affecting the quality of Solution: Example is Traveling salesperson problem
• State Finding Problems: Natural Language understanding
• Problems requiring Interaction: Example: Computer is generating some advice for a
particular problem.
• Knowledge Intensive Problems: Example is chess game(large amount of knowledge)


8. Design the production rules for “Water Jug Problem”
Ans: As described above, a production system consists of a set of condition-action rules together
with a knowledge base, a control strategy, and a rule applier. The water jug problem is a classic
example of a problem modelled as a production system.
Example of production System is Water Jug Problem
Initial State: 4 liters of water in Jug 1 (Suppose x)
Initial State: 3 liters of water in Jug 2 (Suppose y)
Goal State: 2 Liters of water in Jug 1
Now applying or defining the set of rules:
Rule 1: (x,y)—->(4, y) if x < 4 (Fill Jug 1 completely)
Rule 2: (x,y)—->(x, 3) if y < 3 (Fill Jug 2 completely)
Rule 3: (x,y)—->(x-d , y) (Pour some water out from Jug 1)
Rule 4: (x,y)—->(x, y-d) (Pour some water out from Jug 2)
Rule 5: (x,y)—->(0, y) (Empty the Jug 1)
Rule 6: (x,y)—->(x,0) (Empty the Jug 2)
Rule 7: (x,y)—->(4, y – (4-x)) (Fill the Jug 1 by pouring out some water from Jug 2)
Rule 8: (x,y)—->(x- (3 – y), 3) (Fill the Jug 2 by pouring out some water from Jug 1)
Rule 9: (x,y)—->(x+y, 0) (Empty Jug 2 by pouring all its water into Jug 1)
Rule 10: (x,y)—->(0, x+y) (Empty Jug 1 by pouring all its water into Jug 2)
Rule 11: (0,2)—–>(2, 0) (Pour the 2 liters from Jug 2 in Jug 1)
Rule 12: (2,y)—–>(0, y) (Empty the 2 liter from Jug 1 on ground)
9. Create a solution for water Jug problem (5 Ltrs, 3 Ltrs to measure 4 Ltrs)
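The production rules above can be turned into a short breadth-first solver. The sketch below encodes the fill, empty, and pour rules generically for any two jug capacities, so it also covers question 9 (5 L and 3 L jugs measuring 4 L); the function name is an illustrative choice.

```python
from collections import deque

def water_jug(cap1, cap2, goal):
    """BFS over (jug1, jug2) states using the fill / empty / pour production rules."""
    start = (0, 0)
    parents = {start: None}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal or y == goal:
            path, node = [], (x, y)          # reconstruct the rule applications
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        nxt = [(cap1, y), (x, cap2),                              # fill a jug
               (0, y), (x, 0),                                    # empty a jug
               (x - min(x, cap2 - y), y + min(x, cap2 - y)),      # pour jug 1 -> jug 2
               (x + min(y, cap1 - x), y - min(y, cap1 - x))]      # pour jug 2 -> jug 1
        for s in nxt:
            if s not in parents:
                parents[s] = (x, y)
                queue.append(s)
    return None

print(water_jug(4, 3, 2))   # measure 2 litres with 4 L and 3 L jugs
print(water_jug(5, 3, 4))   # question 9: measure 4 litres with 5 L and 3 L jugs
```

Because BFS explores states level by level, the returned path uses the fewest rule applications.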


Describe Breadth first search algorithm with an example?
Ans: In uninformed search, the plans to reach the goal state from the start state differ only by
the order and/or length of actions. Uninformed search is also called blind search. These
algorithms can only generate the successors and differentiate between a goal state and a
non-goal state.
The following uninformed search algorithms are discussed in this section:
 Depth First Search
 Breadth First Search
 Uniform Cost Search
Breadth First Search:
Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data
structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as
a ‘search key’), and explores all of the neighbor nodes at the present depth prior to moving on
to the nodes at the next depth level. A Search strategy, in which the highest layer of a decision
tree is searched completely before proceeding to the next layer is called Breadth-first search
(BFS). In this strategy, no viable solutions are omitted and therefore it is guaranteed that an
optimal solution is found. This strategy is often not feasible when the search space is large.
It is implemented using a queue.

 In Data Structures, the complexity of BFS is measured as O(V+E), where 'V' is the number
of nodes and 'E' the number of edges.
 In Artificial Intelligence, the complexity is measured as O(b^d), where 'b' is the branching
factor, i.e. the number of children assigned to a node (which can be binary, ternary or
quad), and 'd' is the depth, or number of levels, in the tree.


The A* algorithm is a specialization of best-first search. It provides general
guidelines about how to estimate goal distances for general search graphs. At each
node along a path to the goal node, the A* algorithm generates all successor nodes
and computes an estimate of the distance from the start node to a goal node through
each of the successors. It then chooses the successor with the shortest estimated distance
for expansion. It calculates the heuristic function from the distance of the current node
from the start state and the estimated distance of the current node to the goal node. The
form of the heuristic
estimation function for A* is defined as follows:
f(n) = g(n) + h(n)
Where, f(n) = evaluation function
g(n) = cost (or distance) of the current node from the start node
h(n) = estimated cost (or distance) from the current node to the goal node
In the A* algorithm, the most promising node, based on the value of the evaluation
function, is chosen for expansion: normally the node having the lowest value of f(n).
The A* algorithm maintains two lists: one stores the open (not yet expanded) nodes,
and the other stores the already expanded (closed) nodes.
 A* algorithm:
1. Place the starting node 's' on the OPEN list.
2. If OPEN is empty, stop and return failure.
3. Remove from OPEN the node 'n' that has the smallest value of f(n). If node 'n' is the
goal node, then return success and stop; otherwise
4. Expand 'n', generating all of its successors n', and put 'n' on the CLOSED list. For
each successor n' that is not already on OPEN or CLOSED, attach a back pointer to
'n', compute f(n'), and place it on OPEN.
5. Each n' that is already on OPEN or CLOSED should keep the back pointer that
reflects the lowest-f(n') path. If such an n' was on CLOSED and its pointer was
changed, remove it from CLOSED and place it back on OPEN.
6. Return to Step 2.


Example:
In this example, we will traverse the given graph using the A* algorithm. The
heuristic value of all states is given in the below table so we will calculate the f(n) of
each state using the formula f(n)= g(n) + h(n), where g(n) is the cost to reach any
node from start state.
Here we will use OPEN and CLOSED list.

 Initialization: {(S, 5)}
Iteration1: {(S–> A, 4), (S–>G, 10)}
 Iteration2: {(S–> A–>C, 4), (S–> A–>B, 7), (S–>G, 10)}
 Iteration3: {(S–> A–>C—>G, 6), (S–> A–>C—>D, 11), (S–> A–>B, 7), (S–>G,
10)}
 Iteration 4 gives the final result: S—>A—>C—>G, which provides the optimal
path with cost 6.
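The trace above can be reproduced in code. The edge costs and heuristic table below are assumptions chosen to be consistent with the f-values in the iterations (the original table is not reproduced in this text):

```python
import heapq

# Assumed example graph: adjacency list of (neighbor, edge cost) and heuristic h(n).
graph = {"S": [("A", 1), ("G", 10)],
         "A": [("B", 2), ("C", 1)],
         "B": [],
         "C": [("D", 3), ("G", 4)],
         "D": [], "G": []}
h = {"S": 5, "A": 3, "B": 4, "C": 2, "D": 6, "G": 0}

def a_star(start, goal):
    """Expand the OPEN node with the smallest f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # smallest f first
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph[node]:
            if nbr not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("S", "G"))   # (['S', 'A', 'C', 'G'], 6)
```

Running it pops S, A, C and then G, matching the four iterations and the optimal cost of 6.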
5. Describe problem reduction algorithm for searching AND-OR graph
Answer:
Problem Reduction:
 When a problem can be divided into a set of sub-problems, where each sub-problem
can be solved separately and a combination of the solutions is a solution of the whole,
AND-OR graphs or AND-OR trees are used for representing the solution. The
decomposition of the problem, or problem reduction, generates AND arcs. One AND arc
may point to any number of successor nodes, all of which must be solved for the arc itself
to be solved; alternative (OR) arcs indicate several possible solutions. Hence the graph is
known as an AND-OR graph instead of an AND graph. The figure shows an AND-OR
graph.
 The AND-OR graph is also applicable to parse tree generation in the English language.
Consider the following simple English grammar:
i. S → Np Vp
ii. Np → N
iii. Np → art N
iv. Vp → V
v. Vp → V Np
vi. art → a
vii. art → the
viii. N → man
ix. N → dog
x. V → likes
xi. V → bites
Its AND-OR graph can be drawn from these rules.


Monte Carlo tree search (MCTS) is a general game-playing algorithm to find the best
move from any given game state of any game.
MCTS is a more powerful alternative for more complex games.
The above figure details the actual procedure.
In phase (1), existing information is used to repeatedly choose successive child nodes
down to the end of the search tree.
Next, in phase (2), the search tree is expanded by adding a node.
In phase (3), a simulation is run to the end to determine the winner.
Finally, in phase (4), all the nodes in the selected path are updated with new
information gained from the simulated game. This 4-phase algorithm is run repeatedly
until enough information is gathered to produce a good move.
In MCTS, nodes are the building blocks of the search tree. These nodes are
formed based on the outcome of a number of simulations. The process of Monte Carlo
Tree Search can be broken down into four distinct steps, viz., selection, expansion,
simulation and backpropagation. Each of these steps is explained in details below:
Selection: In this process, the MCTS algorithm traverses the current tree from the
root node using a specific strategy. The strategy uses an evaluation function to
optimally select nodes with the highest estimated value. MCTS uses the Upper
Confidence Bound (UCB) formula applied to trees as the strategy in the selection
process to traverse the tree. It balances the exploration-exploitation trade-off. During tree
traversal, a node is selected based on a value that combines its average result with an
exploration bonus. The formula typically used for this purpose is the UCB1 formula:
UCB(i) = wᵢ/nᵢ + c·√(ln N / nᵢ)
where wᵢ is the number of wins recorded at child i, nᵢ is its visit count, N is the parent's
visit count, and c is an exploration constant.
When traversing a tree during the selection process, the child node that returns
the greatest value from the above equation will be the one that gets selected. During
traversal, once a child node is found which is also a leaf node, the MCTS jumps into
the expansion step.
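The UCB value used during selection can be sketched directly; the symbols follow the usual convention (wins w, visits n, parent visits N, exploration constant c), and the chosen c is an illustrative assumption.

```python
import math

def ucb1(wins, visits, parent_visits, c=math.sqrt(2)):
    """UCB1: average win rate plus an exploration bonus for rarely visited nodes."""
    if visits == 0:
        return float("inf")                # unvisited children are always tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Two children with the same win rate (0.6): the less-visited one gets a larger
# exploration bonus, so it is selected.
print(ucb1(6, 10, 30) > ucb1(12, 20, 30))   # True
```

This is the exploration-exploitation balance in action: identical win rates, but the smaller sample earns the bigger bonus.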
Expansion: In this process, a new child node is added to the tree to that node which
was optimally reached during the selection process.
Simulation: In this process, a simulation is performed by choosing moves or
strategies until a result or predefined state is achieved.
Backpropagation: After determining the value of the newly added node, the
remaining tree must be updated. So, the backpropagation process is performed, where
it backpropagates from the new node up to the root node. During this process, the number
of simulations stored in each node is incremented. Also, if the new node's simulation
results in a win, then the number of wins is also incremented.
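The four phases can be sketched end-to-end for a toy game of Nim (take 1 or 2 stones; whoever takes the last stone wins). Everything here, from the game to the constants, is an illustrative assumption rather than a standard implementation.

```python
import math
import random

class Node:
    def __init__(self, pile, player, parent=None, move=None):
        self.pile, self.player = pile, player      # `player` moves next from this state
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = [m for m in (1, 2) if m <= pile]

    def ucb1(self, c=1.4):                         # selection value (UCB applied to trees)
        return self.wins / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def rollout(pile, player):
    """Phase 3: play random moves to the end and return the winner (1 or 2)."""
    while True:
        pile -= random.choice([m for m in (1, 2) if m <= pile])
        if pile == 0:
            return player
        player = 3 - player

def mcts(pile, player, iters=2000):
    root = Node(pile, player)
    for _ in range(iters):
        node = root
        while not node.untried and node.children:          # 1. selection
            node = max(node.children, key=Node.ucb1)
        if node.untried:                                   # 2. expansion
            m = node.untried.pop()
            child = Node(node.pile - m, 3 - node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        if node.pile == 0:                                 # 3. simulation (terminal)
            winner = 3 - node.player                       # previous mover took the last stone
        else:
            winner = rollout(node.pile, node.player)
        while node is not None:                            # 4. backpropagation
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1                             # credit the player who moved here
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

random.seed(0)
print(mcts(4, 1))    # with 4 stones, taking 1 leaves the opponent a losing pile of 3
```

The most-visited child is returned as the move, which is the usual way MCTS converts simulation counts into a decision.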