Agents and Problem Solving - Artificial Intelligence - Lecture Slides

Some concepts of Artificial Intelligence are Agents and Problem Solving, Autonomy, Programs, Classical and Modern Planning, First-Order Logic, Resolution Theorem Proving, Search Strategies, and Structure Learning. The main points of this lecture are: Agents and Problem Solving; Reactive, Goal-Based, and Utility-Based agents; State Space; Asymptotic Analysis; Essentials of Graph Theory; Perception; Sensors; and Action.


Lecture 2 of 41

Agents and Problem Solving

Lecture Outline

  • Intelligent Agent Frameworks
    • Reactive
    • With state
    • Goal-based
    • Utility-based
  • Thursday: Problem Solving and Search
    • Background in combinatorial algorithms
      • Asymptotic analysis
      • Essentials of graph theory (definitions)
    • State space search handout (Winston)
    • Search handout (Ginsberg)

Review: Agent Programs

  • Software Agents
    • Also known as ( aka ) software robots, softbots
    • Typically exist in very detailed, unlimited domains
    • Example
      • (Real-time) critiquing, automation of avionics, shipboard damage control
      • Indexing (spider), information retrieval (IR; e.g., web crawlers) agents
      • Plan recognition systems (computer security, fraud detection monitors)
    • See: Bradshaw ( Software Agents )
  • Focus of This Course: Building IAs
    • Generic skeleton agent: Figure 2.4, R&N
    • function SkeletonAgent ( percept ) returns action
      • static: memory , agent’s memory of the world
      • memory ← Update-Memory ( memory, percept )
      • action ← Choose-Best-Action ( memory )
      • memory ← Update-Memory ( memory, action )
      • return action
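
A minimal runnable sketch of this skeleton in Python. The dictionary-based memory and the bodies of Update-Memory and Choose-Best-Action are illustrative assumptions; the slide (and R&N) leave them abstract.

```python
# Minimal sketch of the generic skeleton agent loop (after Fig. 2.4, R&N).
# The dict-based memory and the two helper functions are illustrative assumptions.

def update_memory(memory, item):
    """Fold the latest percept or action into the agent's memory."""
    memory.setdefault("history", []).append(item)
    return memory

def choose_best_action(memory):
    """Placeholder policy: act on the most recent percept."""
    return "act-on({})".format(memory["history"][-1])

def skeleton_agent(memory, percept):
    memory = update_memory(memory, percept)
    action = choose_best_action(memory)
    memory = update_memory(memory, action)
    return action

memory = {}
for percept in ["dirt-at-A", "clean-at-A", "dirt-at-B"]:
    print(skeleton_agent(memory, percept))   # act-on(dirt-at-A), ...
```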

Agent Framework:

Simple Reflex Agents [1]

[Figure: simple reflex agent. Sensors observe the environment; condition-action rules determine "what action I should do now"; effectors act on the environment.]
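
To make the diagram concrete, here is a minimal Python sketch of a simple reflex agent. The two-square vacuum-world rule table and the (location, status) percept format are assumptions for illustration, not part of the slides.

```python
# Sketch of a simple reflex agent: the action depends only on the current percept.
# The vacuum-world rule table and percept format are assumed examples.

CONDITION_ACTION_RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    location, status = percept
    return CONDITION_ACTION_RULES[(location, status)]

print(simple_reflex_agent(("A", "Dirty")))   # Suck
print(simple_reflex_agent(("A", "Clean")))   # Right
```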

Agent Frameworks:

(Reflex) Agents with State [1]

[Figure: reflex agent with state. Sensors observe the environment; internal state plus models of "how the world evolves" and "what my actions do" determine "what the world is like now"; condition-action rules then determine "what action I should do now"; effectors act on the environment.]

Agent Frameworks:

(Reflex) Agents with State [2]

  • Implementation and Properties
    • Instantiation of generic skeleton agent: Figure 2.
    • function ReflexAgentWithState ( percept ) returns action
      • static: state , description of current world state; rules , set of condition-action rules
      • state ← Update-State ( state, percept )
      • rule ← Rule-Match ( state, rules )
      • action ← Rule-Action [ rule ]
      • return action
  • Advantages
    • Selection of best action based only on current state of world and rules
    • Able to reason over past states of world
    • Still efficient, somewhat more robust
  • Limitations and Disadvantages
    • No way to express goals and preferences relative to goals
    • Still limited range of applicability
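
A hedged Python sketch of ReflexAgentWithState follows. The internal state is just a dictionary of squares the agent has observed, and Rule-Match/Rule-Action are collapsed into one lookup over (condition, action) pairs; the vacuum-world rules are assumed examples.

```python
# Sketch of a reflex agent with state. The internal state records which squares
# the agent has seen; the rules and the state update are assumed examples.

def update_state(state, percept):
    location, status = percept
    state["at"] = location
    state[location] = status          # remember what this square looked like
    return state

def rule_match(state, rules):
    """Return the action of the first rule whose condition holds."""
    for condition, action in rules:
        if condition(state):
            return action
    return "NoOp"

RULES = [
    (lambda s: s.get(s["at"]) == "Dirty", "Suck"),
    (lambda s: s["at"] == "A",            "Right"),
    (lambda s: s["at"] == "B",            "Left"),
]

def reflex_agent_with_state(state, percept):
    state = update_state(state, percept)
    return rule_match(state, RULES)

state = {}
print(reflex_agent_with_state(state, ("A", "Dirty")))   # Suck
print(reflex_agent_with_state(state, ("A", "Clean")))   # Right
```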

Agent Frameworks:

Goal-Based Agents [2]

  • Implementation and Properties
    • Instantiation of generic skeleton agent: Figure 2.
    • Functional description
      • Chapter 13: classical planning
      • Requires more formal specification
  • Advantages
    • Able to reason over goal, intermediate, and initial states
    • Basis: automated reasoning
      • One implementation: theorem proving (first-order logic)
      • Powerful representation language and inference mechanism
  • Limitations and Disadvantages
    • Efficiency limitations: can’t feasibly solve many general problems
    • No way to express preferences
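
As a toy contrast with the rule-based frameworks (and a deliberate simplification of the theorem-proving and planning machinery named above), the sketch below chooses an action by one-step lookahead: predict the state each action would produce and prefer one whose result satisfies the goal test. The transition model and goal are assumptions for illustration.

```python
# Sketch of a goal-based agent: pick an action whose predicted result
# satisfies the goal test; otherwise fall back to a default action.
# The vacuum-world transition model and goal are assumed examples.

def predict(state, action):
    """Assumed model of 'what my actions do'."""
    state = dict(state)
    if action == "Suck":
        state[state["at"]] = "Clean"
    elif action in ("Left", "Right"):
        state["at"] = "A" if action == "Left" else "B"
    return state

def goal_test(state):
    """Goal: every known square is clean."""
    return all(v == "Clean" for k, v in state.items() if k != "at")

def goal_based_agent(state, actions=("Suck", "Left", "Right", "NoOp")):
    for action in actions:
        if goal_test(predict(state, action)):
            return action
    return "NoOp"

print(goal_based_agent({"at": "A", "A": "Dirty", "B": "Clean"}))   # Suck
```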

Agent Frameworks:

Utility-Based Agents [1]

[Figure: utility-based agent. Sensors observe the environment; internal state plus models of "how the world evolves" and "what my actions do" determine "what the world is like now" and "what it will be like if I do action A"; a utility function estimates "how happy will I be", which determines "what action I should do now"; effectors act on the environment.]
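
The corresponding sketch for a utility-based agent: instead of matching rules or testing a goal, the agent predicts the state each action would produce ("what it will be like if I do A") and picks the action whose predicted state has the highest utility ("how happy will I be"). The transition model and utility function are assumed toy examples.

```python
# Sketch of a utility-based agent: choose the action maximizing the utility
# of the predicted successor state. Model and utility are assumed examples.

ACTIONS = ["Left", "Right", "Suck", "NoOp"]

def predict(state, action):
    """Assumed transition model: 'what my actions do'."""
    state = dict(state)
    if action == "Suck":
        state[state["at"]] = "Clean"
    elif action == "Left":
        state["at"] = "A"
    elif action == "Right":
        state["at"] = "B"
    return state

def utility(state):
    """Assumed utility: +1 per clean square, small penalty for being at B."""
    score = sum(1 for sq in ("A", "B") if state.get(sq) == "Clean")
    return score - (0.1 if state["at"] == "B" else 0.0)

def utility_based_agent(state):
    return max(ACTIONS, key=lambda a: utility(predict(state, a)))

print(utility_based_agent({"at": "A", "A": "Dirty", "B": "Clean"}))   # Suck
```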

Looking Ahead: Search

  • Thursday’s Reading: Sections 3.1-3.4, Russell and Norvig
  • Thinking Exercises (Discussion in Next Class): 3.3 (a, b, e), 3.
  • Solving Problems by Searching
    • Problem solving agents: design, specification, implementation
    • Specification components
      • Problems – formulating well-defined ones
      • Solutions – requirements, constraints
    • Measuring performance
  • Formulating Problems as (State Space) Search
  • Example Search Problems
    • Toy problems: 8-puzzle, 8-queens, cryptarithmetic, toy robot worlds, constraints
    • Real-world problems: layout, scheduling
  • Data Structures Used in Search
  • Next Tuesday: Uninformed Search Strategies
    • State space search handout (Winston)
    • Search handouts (Ginsberg, Rich and Knight)
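
As a preview of what "formulating a problem as state-space search" can look like in code, the sketch below packages the standard components (initial state, actions, result, goal test, step cost) for an assumed toy route-finding map; the class layout and map are illustrative only.

```python
# Sketch of a state-space search problem formulation: initial state,
# ACTIONS, RESULT, GOAL-TEST, and step cost. The small route-finding
# map is an assumed toy example.

class RouteProblem:
    def __init__(self, graph, initial, goal):
        self.graph = graph          # city -> {neighbor: step cost}
        self.initial = initial
        self.goal = goal

    def actions(self, state):
        """Applicable actions = neighbors reachable from this city."""
        return list(self.graph[state])

    def result(self, state, action):
        """Moving to a neighbor puts the agent in that city."""
        return action

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        return self.graph[state][action]

GRAPH = {
    "S": {"A": 2, "B": 5},
    "A": {"G": 4},
    "B": {"G": 1},
    "G": {},
}
problem = RouteProblem(GRAPH, initial="S", goal="G")
print(problem.actions("S"), problem.goal_test("G"))   # ['A', 'B'] True
```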

Homework 1:

Machine Problem

  • Due: 10 Sep 2004
    • Submit using new script (procedure to be announced on class web board)
    • HW page: http://www.kddresearch.org/Courses/Fall-2004/CIS730/Homework
  • Machine Problem: Uninformed (Blind) vs. Informed (Heuristic) Search
    • Problem specification (see HW page for MP document)
      • Description: load, search graph
      • Algorithms: depth-first, breadth-first, branch-and-bound, A* search
      • Extra credit: hill-climbing, beam search
    • Languages: options
      • Imperative programming language of your choice (C/C++, Java preferred)
      • Functional PL or style (Haskell, Scheme, LISP, Standard ML)
      • Logic program (Prolog)
    • MP guidelines
      • Work individually
      • Generate standard output files and test against partial standard solution
    • See also: state space, constraint satisfaction problems
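
The MP’s exact input format and driver are defined in the MP document on the HW page; purely as a hedged illustration of the two required uninformed strategies, the sketch below runs breadth-first and depth-first search over an assumed adjacency-list graph standing in for the "load, search graph" step.

```python
# Hedged sketch of uninformed search: breadth-first vs. depth-first over an
# adjacency-list graph. The graph and I/O format are assumptions; the real
# specification is in the MP document on the HW page.

from collections import deque

def graph_search(graph, start, goal, frontier_pop):
    """Generic graph search; frontier_pop selects BFS (FIFO) or DFS (LIFO)."""
    frontier = deque([(start, [start])])
    explored = set()
    while frontier:
        node, path = frontier_pop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in explored:
                frontier.append((neighbor, path + [neighbor]))
    return None

def bfs(graph, start, goal):
    return graph_search(graph, start, goal, lambda f: f.popleft())

def dfs(graph, start, goal):
    return graph_search(graph, start, goal, lambda f: f.pop())

GRAPH = {"S": ["A", "B"], "A": ["C", "G"], "B": ["G"], "C": [], "G": []}
print("BFS:", bfs(GRAPH, "S", "G"))   # a fewest-edges path, e.g. ['S', 'A', 'G']
print("DFS:", dfs(GRAPH, "S", "G"))   # whichever branch is expanded last first
```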

Rational Agents

  • “Doing the Right Thing”
    • Committing actions
      • Limited to set of effectors
      • In context of what agent knows
    • Specification (cf. software specification)
      • Preconditions, postconditions of operators
      • Caveat: not always perfectly known (for given environment)
      • Agent may also have limited knowledge of specification
  • Agent Capabilities: Requirements
    • Choice: select actions (and carry them out)
    • Knowledge: represent knowledge about environment
    • Perception: capability to sense environment
    • Criterion: performance measure to define degree of success
  • Possible Additional Capabilities
    • Memory (internal model of state of the world)
    • Knowledge about effectors, reasoning process (reflexive reasoning)

Measuring Performance

  • Performance Measure: How to Determine Degree of Success
    • Definition: criteria that determine how successful the agent is
    • Clearly, varies over
      • Agents
      • Environments
    • Possible measures?
      • Subjective (agent may not have capability to give accurate answer!)
      • Objective: outside observation
    • Example: web crawling agent
      • Number of hits
      • Number of relevant hits
      • Ratio of relevant hits to pages explored, resources expended
      • Caveat: “you get what you ask for” (issues: redundancy, etc.)
  • When to Evaluate Success
    • Depends on objectives (short-term efficiency, consistency, etc.)
    • Is task episodic? Are there milestones? Reinforcements? (e.g., games)
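
For the web-crawler example, an objective, outside-observer performance measure can be as simple as the ratio below; the argument names are assumptions for illustration.

```python
# Tiny illustration of an objective performance measure for a crawling agent:
# relevant hits per page explored (argument names are assumed for illustration).

def crawler_performance(relevant_hits, pages_explored):
    return 0.0 if pages_explored == 0 else relevant_hits / pages_explored

print(crawler_performance(relevant_hits=120, pages_explored=1500))   # 0.08
```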

What Is Rational?

  • Criteria
    • Determines what is rational at any given time
    • Varies with agent, environment, situation
  • Performance Measure
    • Specified by outside observer or evaluator
    • Applied (consistently) to (one or more) IAs in given environment
  • Percept Sequence
    • Definition: entire history of percepts gathered by agent
    • NB: may or may not be retained fully by agent (issue: state and memory)
  • Agent Knowledge
    • Of environment – “required”
    • Of self (reflexive reasoning)
  • Feasible Action
    • What can be performed
    • What the agent believes it can attempt

Problem-Solving Agents [1]:

Preliminary Design

  • Justification
    • Rational IAs: act to reach environment that maximizes performance measure
    • Need to formalize, operationalize this definition
  • Practical Issues
    • Hard to find appropriate sequence of states
    • Difficult to translate into IA design
  • Goals
    • Chapter 2, R&N: simplifies task of translating agent specification to formal design
    • First step in problem solving: formulation of goal(s) – “accept no substitutes”
    • Chapters 3-4, R&N: goal ← {world states | goal test is satisfied}
  • Problem Formulation
    • Given: initial state, desired goal, specification of actions
    • Find: achievable sequence of states (actions) mapping from initial to goal state
  • Search
    • Actions: cause transitions between world states (e.g., applying effectors)
    • Typically specified in terms of finding sequence of states (operators)
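
Putting the pieces together, here is a hedged sketch of a problem-solving agent in the spirit of this design: formulate a goal from the percept, formulate a problem, search for an action sequence, then execute it one step at a time. The route map, goal formulation, and breadth-first search are assumed illustrations, not the course's required implementation.

```python
# Sketch of a problem-solving agent: formulate goal, formulate problem,
# search for an action sequence, then execute it step by step.
# The route map and goal formulation are assumed examples.

from collections import deque

MAP = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}

PLAN = []   # the agent's stored action sequence (cf. a 'static' variable)

def formulate_goal(percept):
    return "G"                        # assumed: every percept yields the same goal

def formulate_problem(state, goal):
    return {"initial": state, "goal": goal, "graph": MAP}

def search(problem):
    """Breadth-first search returning the sequence of cities to visit."""
    frontier = deque([(problem["initial"], [])])
    explored = set()
    while frontier:
        state, plan = frontier.popleft()
        if state == problem["goal"]:
            return plan
        explored.add(state)
        for nxt in problem["graph"][state]:
            if nxt not in explored:
                frontier.append((nxt, plan + [nxt]))
    return []

def problem_solving_agent(percept, state="S"):
    if not PLAN:                      # no plan yet: formulate goal and problem, then search
        goal = formulate_goal(percept)
        PLAN.extend(search(formulate_problem(state, goal)))
    return PLAN.pop(0) if PLAN else "NoOp"

print(problem_solving_agent("at S"))   # first step of the plan: 'A'
print(problem_solving_agent("at A"))   # next step: 'G'
```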