Introduction to Artificial Intelligence

What is Intelligence?

Intelligence is the ability to learn, understand, and adapt easily. It is synonymous with intellect.

EMI – Intelligence

Automatic symbolic machine learning involves simulating intelligent procedures in machines.

IdM – Intelligence

The intrinsic structure of a machine-to-machine system is responsible for coordinating its behavior and its interactions with the external environment. This can be studied mathematically.

Intelligent Processes in Humans

  • Mental processes
  • Memory
  • Motor activity
  • Sensory activity

Psychological Foundations of AI

  • Simulation of mental processes
  • Logic – logical formalism
  • Neuroscience – simulation of brain architecture
  • Philosophy
  • Mathematics
  • Economics
  • Computer Engineering
  • Linguistics
  • Cybernetics

Schools of Thought in AI

  • Structuralism: The structure of the machine should contain the principles of human behavior (neural networks).
  • Behaviorism: Intelligence does not reside in the machine's structure but in its simulation of human behavior. (The most widely used approach.)
  • Functionalism: The higher the adaptation of the system to the user, the greater the intelligence.

Differences Between Traditional AI and Conventional Computing

  Conventional Computing                 AI
  Algorithmic                            Non-algorithmic
  Numerical processing                   Symbolic processing
  Determinism                            Nondeterminism
  Impersonal programming method          Personal programming method
  Difficult to modify with new data      Easily modifiable program

What is an AI Technique?

An AI technique is a method that exploits knowledge, embodied in a way that:

  • Captures generalizations
  • Can be understood by people
  • Can be modified to correct errors
  • Can be used in many situations

The Turing Test

In the Turing Test, an interrogator asks questions to two hidden entities, one human and one machine. The interrogator communicates indirectly and attempts to determine which entity is the human through an extensive dialogue. If the interrogator cannot distinguish between the entities, then the computer is considered to be able to think, according to the Turing Test.

Types of Intelligent Systems

  • Traditional: Database, graphics programs, calculation programs, word processors.
  • Non-traditional: Manipulation of symbols, knowledge storage, knowledge acquisition, decision making, management of expert knowledge.

Nature and Types of Human Knowledge

  • Data: Pure elements measured from a particular event.
  • Information: Data analysis and interpretation of a data set.
  • Knowledge: The ability to create a mental model that describes an object and indicate the actions to implement.
  • Declarative Knowledge: Descriptive and generic facts and events (“what”).
  • Procedural Knowledge: Prescriptive and difficult to express and explain (“how”).
  • Common Sense Knowledge: The combination of declarative and procedural knowledge (“the trial of right and wrong”).
  • Heuristic Knowledge: Unique to each individual and acquired through experience rather than from external sources; involves the systematic evaluation and use of heuristic rules.

Types of Analysis

  • Logical Analysis: Based on data from reports, interviews, and other electronic means.
  • Heuristic Analysis: Data-based, heuristic, or intuitive.

Key Aspects of Intelligent Systems

  • Ability to use knowledge to perform tasks or solve problems.
  • Ability to make inferences and associations to work with complex problems resembling real-world problems.

Intelligent Skills

  • Store and retrieve large amounts of information efficiently.
  • Connect new thoughts and ideas in a non-linear way.
  • Adapt or modify behavior based on rationality.
  • Employ various skills simultaneously in a given situation.

Intelligent behavior is the result of multiple, chained decisions. The choice of decision or control of the decision is based on performance criteria, duration, and risk.

Decision control is the process by which the solutions of a problem and decision-making are sequenced, synchronized, interconnected, and aimed at producing the behavior of a goal-oriented system.

Problem Solving with Agents

Reactive agents do not work in environments where the number of condition-action rules is too large to store.

Search

An agent with several immediate options can decide what to do by comparing different sequences of possible actions. This process can be formulated as: formulate goal → search → execute.

State Space

The state space is the set of all states accessible from a given state. Accessible states are those defined by the successor function. The state space can be represented as a graph where nodes are states and arcs are actions.
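The graph view above can be sketched in a few lines. This is a minimal illustration, not part of the original notes: the state names and actions are hypothetical, and the successor function is encoded as a plain dictionary mapping each state to its outgoing (action, next state) arcs.

```python
# Hypothetical state space: nodes are states, arcs are actions.
# The successor function maps a state to its (action, next_state) arcs.
successors = {
    "A": [("go-B", "B"), ("go-C", "C")],
    "B": [("go-D", "D")],
    "C": [("go-D", "D")],
    "D": [],
}

def accessible_states(start):
    """Return all states reachable from `start` via the successor function."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for _action, nxt in successors[state]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(sorted(accessible_states("A")))  # ['A', 'B', 'C', 'D']
```

The set returned by `accessible_states` is exactly the state space accessible from the given state.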

Search Strategies

A search strategy is defined by the order in which nodes are expanded. Strategies are evaluated according to the following criteria:

  • Completeness: Does the algorithm always find a solution if it exists?
  • Time Complexity: Number of nodes generated.
  • Space Complexity: Maximum number of nodes in memory.
  • Optimization: Does the strategy find the optimal solution?

Time and space complexity are measured in terms of:

  • b: Maximum branching factor of the tree.
  • d: Depth of the shallowest goal.
  • m: The maximum length of any path in the state space.
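As a concrete illustration of these criteria, here is a breadth-first search sketch that also counts generated nodes; BFS is complete, and with a branching factor b and shallowest goal at depth d it generates on the order of b^d nodes. The small graph below is a hypothetical example with b = 2, not from the original notes.

```python
from collections import deque

def breadth_first_search(successors, start, goal):
    """BFS: complete, optimal for unit step costs; generates O(b^d) nodes."""
    frontier = deque([[start]])   # queue of paths, shallowest first
    visited = {start}
    generated = 1                 # node count illustrates time complexity
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path, generated
        for nxt in successors.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                generated += 1
                frontier.append(path + [nxt])
    return None, generated

# Hypothetical state space with branching factor b = 2.
graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": []}
path, n = breadth_first_search(graph, "S", "G")
print(path, n)  # ['S', 'B', 'G'] 5
```

The `generated` counter is what the time-complexity criterion measures; the maximum length of `frontier` would correspond to the space-complexity criterion.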

State-Space Search

State-space search characterizes the solution of a problem as a process of searching for a solution path from an initial state to the goal. It is represented by a quadruple [N, A, S, G] where:

  • N: Set of nodes or states of the graph. These correspond to states in the problem-solving process.
  • A: Set of arcs of a graph. These correspond to the steps to be taken.
  • S: Non-empty subset of N, which contains the initial states of the problem.
  • G: Non-empty subset of N, which contains the goal states of the problem.

Search Strategies for State-Space Search

Data-Driven Search

  • The algorithm starts with the provided data and a set of valid moves for changing states.
  • The search continues by applying rules to facts to produce new facts, which are used to generate more new facts.
  • This process continues until it generates a path that satisfies the goal condition.
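The rule-application loop described above is the forward-chaining pattern, and can be sketched as follows. The rules and facts here are hypothetical examples, and the sketch does no conflict resolution; it simply fires any rule whose premises hold until the goal appears or nothing new can be derived.

```python
# Hypothetical rule base: (set of premise facts, concluded fact).
rules = [
    ({"croaks", "eats flies"}, "frog"),
    ({"frog"}, "green"),
]

def forward_chain(facts, goal):
    """Apply rules to facts to produce new facts until the goal is derived."""
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return goal in facts

print(forward_chain({"croaks", "eats flies"}, "green"))  # True
```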

Goal-Driven Search

  • A goal is given in the problem or can be easily defined.
  • There are many rules that apply to the facts, producing a growing number of conclusions or goals.
  • For example, in a mathematical theorem prover, the total number of rules used to produce a given theorem is usually much smaller than the number of rules available.
  • Data for the problem is not given but must be acquired by the system to solve the problem.
  • For example, in a medical diagnosis program, a variety of diagnostic tests can be applied. Doctors ask only what is needed to confirm or deny a particular hypothesis.
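The diagnosis example above works backward from a hypothesis, asking only for the facts needed to confirm it. A minimal backward-chaining sketch, reusing the same hypothetical rules as before (and assuming the rule base has no cycles):

```python
# Hypothetical rule base: (set of premise facts, concluded fact).
rules = [
    ({"croaks", "eats flies"}, "frog"),
    ({"frog"}, "green"),
]

def backward_chain(goal, facts):
    """Work back from the goal, seeking rules whose conclusion matches it."""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(backward_chain(p, facts) for p in premises):
            return True
    return False

print(backward_chain("green", {"croaks", "eats flies"}))  # True
```

Only the premises actually needed for the current hypothesis are ever examined, which is why goal-driven search suits domains with many rules but few relevant ones.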

When Data-Driven Search Is Appropriate

  • All data is provided in the initial problem formulation.
  • There are many potential goals, but only a few ways to use the facts and information given in a particular instance.
  • It is difficult to formulate a hypothesis.

Implementation Variables

  • LE: List of States (the states on the path currently being explored)
  • LNE: List of New States (states generated but not yet evaluated)
  • BSS: Dead Ends (states from which no path to the goal was found)

If the current state S does not meet the goal requirements, generate its first descendant S1 and repeat the process on S1. This continues until some descendant is the goal node. If no child of a node leads to the goal, failure is returned to the parent, where regression (backtracking) is applied.
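This backtracking procedure can be sketched with the three variables above: LE holds the states on the current path, LNE the states awaiting evaluation, and BSS the dead ends. The state space used here is a hypothetical example, and the returned path lists states from the goal back to the start.

```python
def backtrack(start, goal, successors):
    """Depth-first search with explicit regression to the parent on dead ends."""
    LE = [start]        # states on the path currently being tried
    LNE = [start]       # new states, awaiting evaluation
    BSS = []            # dead ends: states with no path to the goal
    CS = start          # current state
    while LNE:
        if CS == goal:
            return LE   # solution path, goal first
        children = [s for s in successors.get(CS, [])
                    if s not in LE and s not in BSS and s not in LNE]
        if not children:
            # Dead end: return failure to the parent (regression).
            while LE and CS == LE[0]:
                BSS.append(CS)
                LE.pop(0)
                LNE.pop(0)
                CS = LNE[0] if LNE else None
            if CS is not None:
                LE.insert(0, CS)
        else:
            LNE = children + LNE
            CS = LNE[0]
            LE.insert(0, CS)
    return None         # no path from start to goal

print(backtrack("A", "D", {"A": ["B", "C"], "B": [], "C": ["D"], "D": []}))
# ['D', 'C', 'A']  (B is tried first, fails, and regression backs up to A)
```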