














This document covers the concepts of connectionism, neurons, and neural networks, focusing on how the huge number and connectivity of neurons produce complex and intelligent behavior. It treats recurring concepts such as the perceptron, learning automata, neural networks, collective learning, the cybernetic loop, backpropagation, and credit assignment, and discusses how interaction and brain activity shape connectivity.
Typology: Slides
Herbert Simon (1916-2001)
A simple neuron model

n(j) = Σk a(k)·d(k,j), summed for all k dendrites

[Figure: stimuli h and i arrive on axons h and i and enter neuron j through dendrites (h,j) and (i,j); the output is axon j.]
It can learn a linear function such as:

Logical OR: dendritic weights 1 and 1, threshold Th = 1
Logical AND: dendritic weights 1 and 1, threshold Th = 2
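The OR/AND settings above can be sketched in a few lines of Python. This is our own minimal illustration (the function names are ours, not from the slides): with both dendritic weights at 1, only the threshold distinguishes OR from AND.

```python
def threshold_neuron(inputs, weights, th):
    """Fire (output 1) when the weighted dendritic sum reaches the threshold."""
    n = sum(a * d for a, d in zip(inputs, weights))
    return 1 if n >= th else 0

def logical_or(h, i):
    # Weights 1, 1 and threshold Th = 1
    return threshold_neuron((h, i), (1, 1), th=1)

def logical_and(h, i):
    # Same weights, threshold raised to Th = 2
    return threshold_neuron((h, i), (1, 1), th=2)
```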
But how about a non-linear function such as Exclusive OR? No single threshold neuron can separate its truth table.
Credit assignment: using the gradient descent method to change the dendritic weights

a(j) = 1 / (1 + e^(-n(j)))

If the slope of the error is negative, increase n(j); if the slope is positive, decrease n(j). The local minima of the error are the places where the derivative equals zero.

[Figure: gradient descent, plotting the error against the neuron sum n(j).]
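The slope rule above can be checked numerically. This is a sketch of ours, not code from the slides: we assume a squared error E = (a(j) - target)^2, so one gradient step moves n(j) against the slope, increasing n(j) exactly when the slope is negative.

```python
import math

def sigmoid(n_j):
    # a(j) = 1 / (1 + e^(-n(j)))
    return 1.0 / (1.0 + math.exp(-n_j))

def gradient_step(n_j, target, rate=0.5):
    """One step of gradient descent on E = (a(j) - target)^2.
    dE/dn(j) = 2*(a - target)*a*(1 - a); move n(j) opposite the slope."""
    a = sigmoid(n_j)
    slope = 2.0 * (a - target) * a * (1.0 - a)
    return n_j - rate * slope
```

Starting at n(j) = 0 with target 1, the slope is negative, so n(j) increases and the error shrinks, as the slide's rule predicts.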
Faster convergence: the momentum rule

Δd(t) = α·e·a(j)·a(h) + β·Δd(t-1), where e is the error at neuron j

Study and run the accompanying Excel file.
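As a minimal sketch of the momentum rule (the function name and argument order are our choice): each weight change blends the current error-driven term with a fraction β of the previous change.

```python
def momentum_update(error, a_j, a_h, prev_delta, alpha=0.1, beta=0.9):
    """Momentum rule: delta_d(t) = alpha*e*a(j)*a(h) + beta*delta_d(t-1)."""
    return alpha * error * a_j * a_h + beta * prev_delta
```

With a constant error signal the accumulated step grows toward alpha/(1-beta) times the plain gradient step, which is where the faster convergence comes from.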
A Lesson from Mother Nature:
Using the Scientific Method
Observation
Connectionism

[Figure: the cybernetic loop: stimulus, response, feedback.]
"When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." (Donald O. Hebb, 1949)
[Figure: connectivity patterns under low, medium, and high interaction.]
So far we have talked about training with a purpose, i.e. showing some examples and giving some type of compensation; this is called supervised learning. Yet following Donald O. Hebb's hypothesis, we do not even need to supervise the learning process: it will take place anyway!
A simple neuron model

Orthogonality: f·h = 0
Normality: f·f = 1 {enforced by f[i] = nf[i] / sqrt(nf·nf)}
Soma: n[i,j] = Σk a[i,k]·d[k,j], summed for all k dendrites
Linear axon: a[i,j] = n[i,j]
Non-linear axon: if n[i,j] > Threshold[i,j] then a[i,j] = 1, else a[i,j] = 0
Linear (Hebbian) compensation: Δd[k,j] = η·a[i,k]·a[i,j], with 0 < η < 1
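The Hebbian compensation rule is easy to sketch (our own illustration; names are not from the slides). Note that no target or teacher signal appears: the weight grows only when the pre- and postsynaptic activities are high together.

```python
def hebbian_update(d, a_pre, a_post, eta=0.1):
    """Hebb's rule: strengthen the dendritic weight d by eta * a_pre * a_post.
    Unsupervised: learning happens whenever the two neurons fire together."""
    assert 0 < eta < 1
    return d + eta * a_pre * a_post
```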
In any case, the connectivity is due to at least the following three factors:
How about storing information?

[Figure: a two-layer net. Stimuli h and i arrive on axons h and i; a hidden neuron k receives dendrites (h,k) and (i,k) and feeds dendrite (k,j) to the output neuron j, which also receives the direct dendrites (h,j) and (i,j). The direct dendrites carry negative values, the dendrite (k,j) a positive value; thresholds Th = 1, Th = 1, Th = 0. Output = axon j.]

Truth table, with INF marking the information column (h = i) and KNW the knowledge column (h => i):

h  i | h = i (INF) | h => i (KNW)
0  0 |      1      |      1
0  1 |      0      |      1
1  0 |      0      |      0
1  1 |      1      |      1

A 2-bit net for =.
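The equality net can be verified in code. Since the exact weights are not fully legible in the slide, the assignment below is one consistent choice of ours, not necessarily the original: the hidden unit k computes h AND i, and the output unit j combines negative direct dendrites with a positive dendrite from k.

```python
def fire(inputs, weights, th):
    """Threshold unit: output 1 when the weighted sum reaches the threshold."""
    return 1 if sum(a * w for a, w in zip(inputs, weights)) >= th else 0

def equals_net(h, i):
    """Two-layer net for h = i (XNOR): j fires only when h and i agree."""
    k = fire((h, i), (1, 1), th=2)             # hidden unit: k = h AND i
    return fire((h, i, k), (-1, -1, 2), th=0)  # negative direct, positive via k
```

The hidden unit is what makes this non-linear function learnable at all, echoing the Exclusive OR discussion earlier.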
How about storing knowledge?

[Figure: the same two-layer layout. Stimuli h and i arrive on axons h and i; the output neuron j receives dendrites (h,j) and (i,j); thresholds Th = 1, Th = 0, Th = 1; dendrite values 1, 1, and -. Output = axon j.]

Truth table, with INF marking the information column (h = i) and KNW the knowledge column (h => i):

h  i | h = i (INF) | h => i (KNW)
0  0 |      1      |      1
0  1 |      0      |      1
1  0 |      0      |      0
1  1 |      1      |      1

A 2-bit net for =>.
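Unlike equality, implication is linearly separable, so a single threshold unit suffices. The weight choice below is ours (one consistent assignment, not necessarily the slide's): weight -1 from h, +1 from i, threshold 0.

```python
def implies_net(h, i):
    """Single threshold unit for h => i: false only when h = 1 and i = 0."""
    n = -1 * h + 1 * i
    return 1 if n >= 0 else 0
```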