Artificial Intelligence - PTU (Module 1 & 2): Study Notes

Module 1: Introduction; foundations of artificial intelligence (AI); history of AI; problem solving: formulating problems, problem types, states and operators, state space, search strategies.
Module 2: Informed search strategies: best-first search, the A* algorithm, heuristic functions, iterative deepening A* (IDA*), simplified memory-bounded A* (SMA*). Game playing: perfect-decision games, imperfect-decision games, evaluation functions, alpha-beta pruning.

Study notes, 2016/2017. Uploaded on 01/12/2017 by arwinder_singh_malhi.

Module-I

What is artificial intelligence?

Artificial Intelligence is the branch of computer science concerned with making computers behave like humans.

Major AI textbooks define artificial intelligence as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines, especially intelligent computer programs."

The definitions of AI given in various textbooks can be grouped into four approaches, summarized in the table below:

Systems that think like humans: "The exciting new effort to make computers think ... machines with minds, in the full and literal sense." (Haugeland, 1985)

Systems that think rationally: "The study of mental faculties through the use of computer models." (Charniak and McDermott, 1985)

Systems that act like humans: "The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)

Systems that act rationally: "Computational intelligence is the study of the design of intelligent agents." (Poole et al., 1998)

The four approaches in more detail are as follows:


a. Acting humanly: The Turing Test approach

  • Test proposed by Alan Turing in 1950.
  • The computer is asked questions by a human interrogator.

The computer passes the test if the interrogator, after posing some written questions, cannot tell whether the written responses come from a person or not. To pass, the computer needs to possess the following capabilities:

  • Natural language processing to enable it to communicate successfully in English.
  • Knowledge representation to store what it knows or hears.
  • Automated reasoning to use the stored information to answer questions and to draw new conclusions.
  • Machine learning to adapt to new circumstances and to detect and extrapolate patterns.

To pass the complete Turing Test, the computer will also need:

  • Computer vision to perceive objects, and
  • Robotics to manipulate objects and move about.

b. Thinking humanly: The cognitive modeling approach

We need to get inside the actual working of the human mind:

  • through introspection - trying to capture our own thoughts as they go by;
  • through psychological experiments.

Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver," tried to compare the trace of its reasoning steps to traces of human subjects solving the same problems.

The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to try to construct precise and testable theories of the workings of the human mind.


                        Computer                      Human brain
Computational units     1 CPU, 10^8 gates             10^11 neurons
Storage units           10^10 bits RAM,               10^11 neurons,
                        10^11 bits disk               10^14 synapses
Cycle time              10^-9 sec                     10^-3 sec
Bandwidth               10^10 bits/sec                10^14 bits/sec
Memory updates/sec      10^9                          10^14

Table 1.1 A crude comparison of the raw computational resources available to computers (circa 2003) and brains. The computer's numbers have increased by at least a factor of 10 every few years. The brain's numbers have not changed for the last 10,000 years.

Brains and digital computers perform quite different tasks and have different properties. Table 1.1 shows that there are 1,000 times more neurons in the typical human brain than there are gates in the CPU of a typical high-end computer. Moore's Law predicts that the CPU's gate count will equal the brain's neuron count around 2020.

Psychology (1879 – present)

The origin of scientific psychology is traced back to the work of the German physiologist Hermann von Helmholtz (1821-1894) and his student Wilhelm Wundt (1832-1920). In 1879, Wundt opened the first laboratory of experimental psychology at the University of Leipzig.

In the US, the development of computer modeling led to the creation of the field of cognitive science. The field can be said to have started at a workshop in September 1956 at MIT.

Computer engineering (1940-present)


For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact of choice.

AI also owes a debt to the software side of computer science, which has supplied the operating systems, programming languages, and tools needed to write modern programs.

Control theory and Cybernetics (1948-present)

Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water clock with a regulator that kept the flow of water running through it at a constant, predictable pace.

Modern control theory, especially the branch known as stochastic optimal control, has as its goal the design of systems that maximize an objective function over time.

Linguistics (1957-present)

Modern linguistics and AI, then, were "born" at about the same time and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing.

The History of Artificial Intelligence

The gestation of artificial intelligence (1943-1955)

There were a number of early examples of work that can be characterized as AI, but it was Alan Turing who first articulated a complete vision of AI in his 1950 article "Computing Machinery and Intelligence." Therein, he introduced the Turing test, machine learning, genetic algorithms, and reinforcement learning.

The birth of artificial intelligence (1956)

McCarthy convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together U.S. researchers interested in automata theory, neural nets, and the study of intelligence for a two-month workshop at Dartmouth in the summer of 1956.


Figure 1.1 Tom Evans's ANALOGY program could solve geometric analogy problems like these.

A dose of reality (1966-1973)

From the beginning, AI researchers were not shy about making predictions of their coming successes. The following statement by Herbert Simon in 1957 is often quoted:

"It is not my aim to surprise or shock you - but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until - in a visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied."

Knowledge-based systems: The key to power? (1969-1979)

Dendral was an influential pioneer project in artificial intelligence (AI) of the 1960s, and the expert-system software that it produced. Its primary aim was to help organic chemists identify unknown organic molecules by analyzing their mass spectra and using knowledge of chemistry. It was done at Stanford University by Edward Feigenbaum, Bruce Buchanan, Joshua Lederberg, and Carl Djerassi.

AI becomes an industry (1980-present)

In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build intelligent computers running Prolog. Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988.

The return of neural networks (1986-present)

Psychologists including David Rumelhart and Geoff Hinton continued the study of neural-net models of memory.

AI becomes a science (1987-present)


In recent years, approaches based on hidden Markov models (HMMs) have come to dominate the area.

Speech technology and the related field of handwritten character recognition are already making the transition to widespread industrial and consumer applications.

The Bayesian network formalism was invented to allow efficient representation of, and rigorous reasoning with, uncertain knowledge.

The emergence of intelligent agents (1995-present)

One of the most important environments for intelligent agents is the Internet.

The state of the art

What can AI do today?

Autonomous planning and scheduling: A hundred million miles from Earth, NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft (Jonsson et al., 2000). Remote Agent generated plans from high-level goals specified from the ground, and it monitored the operation of the spacecraft as the plans were executed, detecting, diagnosing, and recovering from problems as they occurred.

Game playing: IBM's Deep Blue became the first computer program to defeat the world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match (Goodman and Keene, 1997).

Autonomous control: The ALVINN computer vision system was trained to steer a car to keep it following a lane. It was placed in CMU's NAVLAB computer-controlled minivan and used to navigate across the United States: for 2850 miles it was in control of steering the vehicle 98% of the time.

Diagnosis: Medical diagnosis programs based on probabilistic analysis have been able to perform at the level of an expert physician in several areas of medicine.

Logistics planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for transportation.


Our aim is to design agents. A rational agent is one that performs the actions that cause the agent to be most successful.

We use the term performance measure for the criteria that determine how successful an agent is. We will insist on an objective performance measure imposed by some authority.

Example: Consider the case of an agent that is supposed to vacuum a dirty floor. A plausible performance measure would be the amount of dirt cleaned in a certain period of time. A more sophisticated measure would also include the amount of electricity consumed and the amount of noise generated.
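Such a performance measure can be written as a function of what happened during the evaluation period. The weights below are invented for illustration and are not part of the notes:

```python
# A hypothetical performance measure for the vacuum agent: reward dirt
# cleaned, penalize electricity used and noise made during the period.
def performance(dirt_cleaned, electricity_used, noise_made):
    """Higher is better; the weights are arbitrary assumptions."""
    return 10 * dirt_cleaned - 2 * electricity_used - 1 * noise_made

# An agent that cleaned 5 units of dirt cheaply beats one that cleaned
# 6 units while burning far more power and making more noise.
tidy = performance(dirt_cleaned=5, electricity_used=2, noise_made=1)    # 45
greedy = performance(dirt_cleaned=6, electricity_used=12, noise_made=3) # 33
```

Note how the choice of weights is itself a design decision: it encodes the trade-offs the authority imposing the measure cares about.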

We need to be careful to distinguish between rationality and omniscience. If an agent is omniscient, it knows the actual outcomes of its actions. Rationality is concerned with expected success given what has been perceived. In other words, we cannot blame an agent for not taking into account something it could not perceive, or for failing to take an action that it is not capable of taking.

What is rational at any given time depends on four things:

    • The performance measure that defines degree of success
    • Everything that the agent has perceived so far (the percept sequence)
    • What the agent knows about the environment
    • The actions the agent can perform

The ideal rational agent:

For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Ideal mapping from percept sequences to actions

For an ideal agent, we can simply make a table of the action it should take in response to each possible percept sequence. (For most agents, this would be an infinite table.) This table is called a mapping from percept sequences to actions.

Specifying which action an agent ought to take in response to any given percept sequence provides a design for an ideal agent. It is, of course, possible to specify the mapping for an ideal agent without creating a table for every possible percept sequence.

Example : The sqrt agent

The percept sequence for this agent is a sequence of keystrokes representing a number, and an action is to display a number on a screen. The ideal mapping, when the percept is a positive number x, is to display a positive number z such that z^2 = x.

This specification does not require the designer to actually construct a table of square roots. Algorithms exist that make it possible to encode the ideal sqrt agent very compactly. It turns out that the same is true for much more general agents.
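As a sketch of such a compact encoding (an illustrative implementation, not taken from the notes), Newton's method computes the sqrt mapping without storing any table:

```python
def sqrt_agent(percept: float) -> float:
    """Map a percept (a positive number x) to the action of displaying z
    with z*z == x, encoded compactly by Newton's method instead of a table."""
    x = percept
    z = x if x > 1 else 1.0           # initial guess
    for _ in range(50):               # iterate z -> (z + x/z)/2 until stable
        z = 0.5 * (z + x / z)
    return z

print(sqrt_agent(9.0))  # close to 3.0
```

A few lines of code stand in for an infinite table of percept-action pairs, which is exactly the point of the passage above.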

One more requirement for agents: Autonomy

If an agent's actions are based completely on built-in knowledge, such that it need pay no attention to its percepts, then we say that the agent lacks autonomy. An agent's behaviour can depend both on its built-in knowledge and on its experience. A system is autonomous if its behaviour is determined by its own experience.

It seems likely that the most successful agents will have some built-in knowledge and will also have the ability to learn.

Structure of Intelligent Agents

Now we start talking about the insides of agents.

The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions. We assume this program will run on some sort of computing device called the architecture. The architecture makes the percepts from the sensors available to the agent program, runs the program, and feeds the program's action choices to the effectors as they are generated.

  • agent = architecture + program

Before we design an agent program, we must have a pretty good idea of the possible percepts and actions, what goals or performance measure the agent is supposed to achieve, and what sort of environment it will operate in.
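The architecture-plus-program split can be sketched as a run loop (the class and function names here are assumptions made for the example):

```python
class Agent:
    """An agent is its program: a mapping from percepts to actions."""
    def program(self, percept):
        raise NotImplementedError

def run(percepts, agent, effector):
    """The architecture: it feeds each percept to the agent program and
    passes the chosen action on to the effectors as it is generated."""
    for percept in percepts:
        action = agent.program(percept)
        effector(action)

class EchoAgent(Agent):
    # Trivial agent program: its action simply repeats the percept.
    def program(self, percept):
        return percept

actions = []
run(["ping", "pong"], EchoAgent(), actions.append)
print(actions)  # ['ping', 'pong']
```

The agent designs that follow (reflex, state-keeping, goal-based, utility-based) all fit this skeleton; they differ only in what `program` does.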

Example: A robot designed to inspect parts as they go by on a conveyer belt can make use of a number of simplifying assumptions, e.g. that the lighting will always be the same.

The full taxi-driver task, by contrast, is extremely open-ended: there is no limit to the novel situations that can arise.

We start with percepts, actions, and goals.

The taxi driver will need to know where it is, what else is on the road, and how fast it is going. This information can be obtained from the percepts provided by one or more controllable cameras, a speedometer, and an odometer. To control the vehicle properly it should have an accelerometer. It will need to know the state of the vehicle, so it will need the usual array of engine and electrical sensors. It might also have instruments such as a GPS to give its exact position with respect to an electronic map, and infrared or sonar sensors. Finally, it will need some way for the customer to communicate the destination.

The actions will include control over the engine through the accelerator pedal, and control over steering and braking; also some way of talking to passengers, and perhaps some way to communicate with other vehicles.

Performance measures? Getting to the correct destination; minimizing fuel consumption and wear and tear; minimizing trip time and cost; minimizing traffic violations and disturbance of other drivers; maximizing safety and passenger comfort. Some of these goals conflict, so there will be trade-offs.

Operating environment? City streets? Highways? Snow and other road hazards? Driving on the right or the left? The more controlled the environment, the easier the problem.

We will now consider four types of agent programs:

    • Simple reflex agents
    • Agents that keep track of the world
    • Goal-based agents
    • Utility-based agents

Simple reflex agents

Constructing a lookup table is out of the question. The visual input from a single camera comes in at the rate of 50 megabytes per second, so the lookup table for an hour would have 2^(60x60x50M) entries. However, we can summarize certain portions of the table by noting commonly occurring input/output associations. For example, if the car in front brakes, then the driver should also brake.

In other words, some processing is done on the visual input to establish the condition "brake lights in front are on," and this triggers some established connection to the action "start braking." Such a connection is called a condition-action rule, written as

If condition then action
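A small set of condition-action rules can be sketched as a simple reflex agent (the conditions, actions, and rule set are invented for illustration):

```python
def interpret_input(percept):
    """Extract a condition from the raw percept (trivially, here)."""
    return percept

# Condition-action rules: if condition then action.
RULES = {
    "brake-lights-in-front-on": "start-braking",
    "road-clear": "keep-driving",
}

def simple_reflex_agent(percept):
    """Choose an action from the current percept alone - no internal state."""
    condition = interpret_input(percept)
    return RULES.get(condition, "do-nothing")

print(simple_reflex_agent("brake-lights-in-front-on"))  # start-braking
```

The rule table is tiny compared with a percept-sequence lookup table because it keys on a summarized condition, not on the raw visual input.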

Agents that keep track of the world

Simple reflex agents only work if the correct action can be chosen based only on the current percept. Even for the simple braking rule above, we need some sort of internal description of the world state. (To determine whether the car in front is braking, we would probably need to compare the current image with the previous one to see if the brake light has come on.)

Another example: from time to time the driver looks in the rear-view mirror to check on the location of nearby vehicles. When the driver is not looking in the mirror, vehicles in the next lane are invisible. However, deciding on a lane change requires that the driver know the location of vehicles in the next lane.
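Keeping such an internal world state can be sketched as follows (the percept encoding and names are assumptions for the example):

```python
class ReflexAgentWithState:
    """Chooses actions using both the current percept and a remembered
    internal state (here: the previous camera image)."""
    def __init__(self):
        self.previous_image = None

    def program(self, current_image):
        # This condition needs history: brake lights on NOW but not before.
        braking_ahead = (current_image == "lights-on"
                         and self.previous_image == "lights-off")
        self.previous_image = current_image   # update the internal state
        return "start-braking" if braking_ahead else "keep-driving"

agent = ReflexAgentWithState()
print(agent.program("lights-off"))  # keep-driving
print(agent.program("lights-on"))   # start-braking
```

Without `previous_image`, the agent could not distinguish "lights just came on" from "lights have been on all along," which is exactly the point of the braking example above.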


Goal-based agents

To do so requires hitting the brakes. The goal-based agent is more flexible but takes longer to decide what to do.

Utility-based agents

Goals alone are not enough to generate high-quality behaviour. For example, there are many action sequences that will get the taxi to its destination, but some are quicker, safer, more reliable, or cheaper than others.

Goals just provide a crude distinction between "happy" and "unhappy" states, whereas a more general performance measure should allow a comparison of different world states. The "happiness" of an agent is called utility. Utility can be represented as a function that maps states into real numbers; the larger the number, the higher the utility of the state.

A complete specification of the utility function allows rational decisions in two kinds of cases where goals have trouble. First, when there are conflicting goals, only some of which can be achieved (e.g., speed vs. safety), the utility function specifies the appropriate trade-off. Second, when there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed against the importance of the goals.
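The second case - weighing likelihood of success against importance - can be sketched as an expected-utility comparison (the probabilities and utilities below are invented for illustration):

```python
# Utility maps world states to real numbers; with uncertain outcomes we
# compare actions by expected utility: sum of probability * utility.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical taxi choice: a fast risky route vs a slow safe one.
fast_route = [(0.7, 100.0), (0.3, -50.0)]   # may hit heavy traffic
safe_route = [(1.0, 60.0)]                  # certain, moderate utility

routes = {"fast": fast_route, "safe": safe_route}
best = max(routes, key=lambda r: expected_utility(routes[r]))
print(best)  # safe: 0.7*100 - 0.3*50 = 55, which is less than 60
```

A pure goal ("reach the destination") cannot express this trade-off, since both routes eventually satisfy it; the utility function can.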


Environments

Now we look at how agents couple with the environment.

Properties of environments

  • Accessible vs. inaccessible: If an agent's sensory apparatus gives it access to the complete state of the environment, we say that the environment is accessible. When the environment is accessible, less internal state is necessary.

  • Deterministic vs. non-deterministic: If the next state is completely determined by the current state and the actions selected by the agent, then the environment is deterministic. An agent need not worry about uncertainty in an accessible, deterministic environment. If the environment is inaccessible, it may appear to be non-deterministic.


Intelligent agents are supposed to act in such a way that the environment goes through a sequence of states that maximizes the performance measure. Unfortunately, this specification is difficult to translate into a successful agent design. The task is simplified if the agent can adopt a goal and aim to satisfy it.

Example: Suppose the agent is in Auckland and wishes to get to Wellington. There are a number of factors to consider, e.g. the cost, speed, and comfort of the journey.

Goals such as this help to organize behaviour by limiting the objectives that the agent is trying to achieve. Goal formulation, based on the current situation, is the first step in problem solving. In addition to formulating a goal, the agent may wish to decide on some other factors that affect the desirability of different ways of achieving the goal.

We will consider a goal to be a set of states - just those states in which the goal is satisfied. Actions can be viewed as causing transitions between states.

How can the agent decide on what types of actions to consider? Problem formulation is the process of deciding what actions and states to consider. For now, let us assume that the agent will consider actions at the level of driving from one city to another. The states will then correspond to being in particular towns along the way.

The agent has now adopted the goal of getting to Wellington, so unless it is already there, it must transform the current state into the desired one. Suppose that there are three roads leaving Auckland, but that none of them leads directly to Wellington. What should the agent do? If it does not know the geography, it can do no better than to pick one of the roads at random.

However, suppose the agent has a map of the area. The purpose of a map is to provide the agent with information about the states it might get itself into and the actions it can take. The agent can use the map to consider subsequent steps of a hypothetical journey that will eventually reach its goal. In general, an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best one.

This process is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out; this is called the execution phase. Hence, we have a simple "formulate, search, execute" design for the agent.
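The formulate-search-execute idea can be sketched with a breadth-first search over a road map (the map below is invented for illustration; it is not New Zealand's actual road network):

```python
from collections import deque

# Hypothetical road map: city -> directly reachable neighbouring cities.
ROADS = {
    "Auckland": ["Hamilton", "Tauranga", "Whangarei"],
    "Hamilton": ["Palmerston North"],
    "Tauranga": ["Napier"],
    "Whangarei": [],
    "Napier": ["Wellington"],
    "Palmerston North": ["Wellington"],
    "Wellington": [],
}

def search(start, goal):
    """Breadth-first search: returns a list of cities from start to goal,
    or None if the goal is unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                       # the solution: an action sequence
        for city in ROADS[path[-1]]:
            if city not in visited:
                visited.add(city)
                frontier.append(path + [city])
    return None

print(search("Auckland", "Wellington"))
# ['Auckland', 'Hamilton', 'Palmerston North', 'Wellington']
```

Formulation is the `ROADS` abstraction (states are towns, actions are drives), search is the loop above, and execution would be carrying out the returned sequence of drives.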


Formulating Problems

Formulating problems is an art. First, we look at the different amounts of knowledge that an agent can have concerning its actions and the state that it is in. This depends on how the agent is connected to its environment. There are four essentially different types of problems:

  • single-state
  • multiple-state
  • contingency
  • exploration

Knowledge and problem types

Let us consider the vacuum world: we need to clean the world using a vacuum cleaner. For the moment we will simplify it even further and suppose that the world has just two locations. In this case there are eight possible world states. There are three possible actions: left, right, and suck. The goal is to clean up all the dirt, i.e., the goal is equivalent to the set of states {7, 8}.
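The eight states can be enumerated directly (the numbering below is one plausible convention, chosen so that the two all-clean states are numbers 7 and 8, matching the goal set):

```python
from itertools import product

# A state is (agent_location, dirt_in_A, dirt_in_B): 2 * 2 * 2 = 8 states.
states = list(product(["A", "B"], [True, False], [True, False]))
assert len(states) == 8

# Number states 1..8 so that the two all-clean states come last (7 and 8).
dirty_first = sorted(states, key=lambda s: (not s[1] and not s[2]))
numbering = {i + 1: s for i, s in enumerate(dirty_first)}

# Goal: no dirt anywhere - exactly the states numbered 7 and 8 here.
goal = {n for n, (loc, a, b) in numbering.items() if not a and not b}
print(sorted(goal))  # [7, 8]
```

The three actions (left, right, suck) would then be transitions between these numbered states, which is the state-space view introduced above.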
