Understanding Learning and Behavior with Neural Networks, Slides of Artificial Intelligence

The concepts of connectionism, neurons, and neural networks, focusing on the power of neurons in producing complex and intelligent behavior through their huge number and connectivity. It covers various recurring concepts such as perceptron, learning automata, neural networks, collective learning, cybernetic loop, backpropagation, and credit assignment. The document also discusses the role of interaction and activity of the brain in shaping connectivity.

Typology: Slides

2012/2013, uploaded on 04/29/2013 by shantii

About

Learning and Behavior

  • Learning is any change in a system that produces a more or less permanent change in its capacity for adapting to its environment.
  • Human beings, viewed as behaving systems, are quite simple. The apparent complexity of our behavior over time is largely a reflection of the complexity of the environment in which we find ourselves.


Herbert Simon (1916-2001)

Connectionism

  • The concept of a Neuron has invited many lines of research.
  • Yet the power of living neurons lies in both:
    • Their huge number: about 10^10 in humans, 10^4 in a small bug
    • Their connectivity: about 10^5 connections per neuron in humans
  • It is therefore by the power of the synapses (about 10^15 in total) that we present complex and intelligent behavior.
  • Neurons can be modeled as digital or continuous systems.

Credit assignment:

A simple neuron model

  • Soma: n[i,j] = ∑ a[i,k]·d[k,j], summed over all k dendrites
  • Linear axon: a[i,j] = n[i,j]
  • Non-linear axon: a[i,j] = 1 if n[i,j] > Threshold[i,j], else a[i,j] = 0
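The soma and axon equations above can be sketched in Python; the function names and the example weights and threshold are illustrative assumptions, not part of the slides:

```python
# Minimal sketch of the slide's neuron model: a soma that sums
# weighted dendrite inputs, with a linear or a threshold (step) axon.

def soma(activations, dendrite_weights):
    """n[i,j] = sum over all k dendrites of a[i,k] * d[k,j]."""
    return sum(a * d for a, d in zip(activations, dendrite_weights))

def linear_axon(n):
    """Linear axon: a[i,j] = n[i,j]."""
    return n

def nonlinear_axon(n, threshold):
    """Step axon: fire (1) only when the soma sum exceeds the threshold."""
    return 1 if n > threshold else 0

# Example (assumed values): two dendrites with unit weights,
# threshold 1.5 -> the unit behaves like logical AND.
n = soma([1, 1], [1, 1])           # n = 2
print(nonlinear_axon(n, 1.5))      # 1
```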

A Linear

Neuronal Network

[Figure: a two-input network. Stimuli h and i feed axons h and i; dendrites (h,j) and (i,j) converge on output axon j.]

It can learn a linear function such as:

  • Logical OR: unit dendrite weights, output threshold Th = 1
  • Logical AND: unit dendrite weights, output threshold Th = 2

But how about a non-linear function such as Exclusive OR?

Exclusive OR
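A quick sketch can verify the claim: with the strict "n > Th" axon defined earlier (so the integer thresholds become 0 for OR and 1 for AND), unit weights realize OR and AND, while an exhaustive search over a small integer grid of weights and thresholds (the grid size is an assumption for illustration) finds no single unit that reproduces Exclusive OR:

```python
from itertools import product

def unit(h, i, wh, wi, th):
    """Two-input threshold unit: fire iff wh*h + wi*i > th."""
    return 1 if wh * h + wi * i > th else 0

cases = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Unit weights: threshold 0 gives OR, threshold 1 gives AND (strict '>').
assert [unit(h, i, 1, 1, 0) for h, i in cases] == [0, 1, 1, 1]
assert [unit(h, i, 1, 1, 1) for h, i in cases] == [0, 0, 0, 1]

# No weights/threshold in the search grid realize XOR = [0, 1, 1, 0],
# because XOR is not linearly separable.
xor = [0, 1, 1, 0]
found = any(
    [unit(h, i, wh, wi, th) for h, i in cases] == xor
    for wh, wi, th in product(range(-3, 4), repeat=3)
)
print(found)  # False
```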

Credit Assignment using the Gradient Descent method to change the dendritic weights

  • Change the firing function of neurons from a step function to a continuous one with similar behavior, e.g. a sigmoid axon: a(j) = 1/(1 + e^(−n(j)))
  • Calculate the derivative of the sigmoid: da(j)/dn(j) = a(j)·(1 − a(j))
  • Determine the output error: e(output) = TrueValue − a(j)
  • Determine the axon error: ea(j) = a(j)·(1 − a(j))·e(output)
  • Minimize the error E by gradient descent: change each weight by an amount proportional to the partial derivative: Δd = α·ea(j)·a(h)

If the slope is negative, increase n(j); if the slope is positive, decrease n(j). Local minima of the error are the places where the derivative equals zero.

[Figure: gradient descent, the error E plotted against the neuron sum n(j); the axon error ea gives the local slope.]
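The update rules above can be sketched as a training loop for a single sigmoid neuron; the dataset (logical OR), learning rate, and epoch count are assumed values for illustration:

```python
import math

def sigmoid(n):
    """Sigmoid axon: a(j) = 1/(1 + e^(-n(j)))."""
    return 1.0 / (1.0 + math.exp(-n))

# Train a single sigmoid neuron on logical OR using the slide's rules:
#   e = TrueValue - a(j);  ea(j) = a(j)*(1-a(j))*e;  delta_d = alpha*ea(j)*a(h)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0
alpha = 0.5                                 # learning rate (assumed value)

for epoch in range(5000):
    for (h, i), target in data:
        a_j = sigmoid(weights[0] * h + weights[1] * i + bias)
        ea = a_j * (1 - a_j) * (target - a_j)   # axon error
        weights[0] += alpha * ea * h            # change proportional to input
        weights[1] += alpha * ea * i
        bias += alpha * ea

outputs = [round(sigmoid(weights[0]*h + weights[1]*i + bias))
           for (h, i), _ in data]
print(outputs)  # [0, 1, 1, 1]
```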

Faster Convergence:

Momentum rule

  • Add a fraction (β, the momentum) of the last change to the current change:

Δd(t) = α·ea(j)·a(h) + β·Δd(t−1)

Study and run the accompanying Excel file.
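A minimal sketch of the momentum update, with assumed values for α and β:

```python
# Momentum rule: the new weight change blends the current gradient step
# with a fraction beta of the previous change.
alpha, beta = 0.5, 0.9            # learning rate and momentum (assumed values)

def momentum_step(ea_j, a_h, prev_delta):
    """delta_d(t) = alpha*ea(j)*a(h) + beta*delta_d(t-1)."""
    return alpha * ea_j * a_h + beta * prev_delta

d1 = momentum_step(0.2, 1.0, 0.0)   # plain gradient step: 0.1
d2 = momentum_step(0.2, 1.0, d1)    # same step plus momentum: 0.19
print(d1, d2)
```

Because past changes keep contributing, successive steps in a consistent direction accelerate, which is what speeds up convergence.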

A Lesson from Mother Nature:

Using the Scientific Method

Observation

Connectionism

[Figure: the cybernetic loop: Stimulus produces Response, and Feedback closes the loop.]

Hypothesis

"When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." (Donald O. Hebb, 1949)

[Figure: connectivity patterns under Low, Medium, and High interaction.]

Unsupervised Learning

So far we have talked about training with a purpose, i.e. showing some examples and giving some type of compensation; this is called supervised learning.

Yet following Donald O. Hebb's hypothesis, we do not even need to supervise the learning process: it will take place anyway!

Hebbian Credit assignment:

A simple neuron model

  • Orthogonality: f·h = 0
  • Normality: f·f = 1 {enforced by f[i] = nf[i]/sqrt(nf·nf)}
  • Soma: n[i,j] = ∑ a[i,k]·d[k,j], summed over all k dendrites
  • Linear axon: a[i,j] = n[i,j]
  • Non-linear axon: a[i,j] = 1 if n[i,j] > Threshold[i,j], else a[i,j] = 0
  • Linear compensation: Δd[k,j] = η·a[i,k]·a[i,j], with 0 < η < 1
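The Hebbian rule above can be sketched as follows; the threshold, the learning rate η, and the repeated stimulus are assumed values, and the weights are renormalized after every step to keep the normality condition f·f = 1:

```python
import math

# Hebbian credit assignment: strengthen a dendrite in proportion to the
# joint activity of its input and the neuron's own output.
eta = 0.1                                  # 0 < eta < 1 (assumed value)
weights = [0.5, 0.5]

def normalize(w):
    """Normality: rescale so that w . w = 1."""
    norm = math.sqrt(sum(x * x for x in w))
    return [x / norm for x in w]

def hebbian_step(inputs, weights):
    n = sum(a * d for a, d in zip(inputs, weights))   # soma
    out = 1 if n > 0.4 else 0                         # threshold axon (assumed Th)
    new_w = [d + eta * a * out for a, d in zip(inputs, weights)]
    return normalize(new_w)

# Repeatedly presenting the same stimulus strengthens the active dendrite
# at the expense of the inactive one; no supervision signal is needed.
for _ in range(10):
    weights = hebbian_step([1, 0], weights)
print([round(w, 2) for w in weights])
```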

Argumentation

In any case, the connectivity is due to at least the following three factors:

  • Time:
    • Born, Child, Young, Adult
  • Interaction:
    • Low, Medium, High
  • Activity of the brain:
    • Low, Medium, High

Linear Recurrent Networks

• Associative Memory

• Hopfield Network

• Kohonen SOM


How about storing Information?

[Figure: a two-input network with a hidden neuron k. Stimuli h and i feed axons h and i; dendrites (h,j), (i,j), (h,k), (i,k), and (k,j) lead to output axon j. Thresholds: Th = 1, Th = 1, Th = 0; the direct dendrites carry negative values and the (k,j) dendrite a positive value.]

A 2-bit net for = computes the equivalence column of the truth table below (INF = information, the h = i column; KNW = knowledge, the h => i column):

h  i | h = i | h => i
0  0 |   1   |   1
0  1 |   0   |   1
1  0 |   0   |   0
1  1 |   1   |   1
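Equivalence (h = i, i.e. XNOR) is not linearly separable, which is why the slide routes it through a hidden neuron k. One concrete weight assignment that works, chosen here purely as an illustration since the slide's values are partly unreadable:

```python
def fires(n, th):
    """Threshold axon from the slides: 1 if the soma sum exceeds th."""
    return 1 if n > th else 0

def equivalence_net(h, i):
    # Hidden neuron k detects h AND i (weights 1, 1; threshold 1.5).
    k = fires(1 * h + 1 * i, 1.5)
    # Output j: negative dendrites from h and i, a strong positive one from k,
    # so j fires exactly when h and i agree.
    return fires(-1 * h + -1 * i + 3 * k, -0.5)

table = [(h, i, equivalence_net(h, i)) for h in (0, 1) for i in (0, 1)]
print(table)  # [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```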

How about storing Knowledge?

[Figure: a two-input network. Stimuli h and i feed axons h and i; dendrites (h,j) and (i,j) converge on output axon j. Thresholds: Th = 1, Th = 0, Th = 1; dendrite values 1 and 1, with one negative value.]

A 2-bit net for => computes the implication column of the truth table:

h  i | h => i
0  0 |   1
0  1 |   1
1  0 |   0
1  1 |   1
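Unlike equivalence, implication (h => i) is linearly separable, so even a single threshold unit suffices; the weights and threshold below are an illustrative assumption, not the slide's exact values:

```python
def implies_unit(h, i):
    """Single threshold unit computing h => i: fire iff -h + i > -0.5."""
    return 1 if -1 * h + 1 * i > -0.5 else 0

# Implication is false only for (h, i) = (1, 0).
print([implies_unit(h, i) for h in (0, 1) for i in (0, 1)])  # [1, 1, 0, 1]
```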