% Controlling the perceptron learning process by hand: learning the AND operation
P = [0 1 0 1 1; 1 1 1 0 0];
T = [0 1 0 0 0];
net = newp([0 1; 0 1], 1);
net = init(net);
w = net.iw{1,1};   % extract the initial weights: learnp expects them explicitly
b = net.b{1};      % extract the initial bias
y = sim(net, P);
e = T - y;
while (mae(e) > 0.0015)
    dw = learnp(w, P, [], [], [], [], e, [], [], [], [], []);
    db = learnp(b, ones(1,5), [], [], [], [], e, [], [], [], [], []);
    % Each learning pass returns the required weight and bias adjustments
    w = w + dw;
    b = b + db;
    net.iw{1,1} = w;   % write the updated weights back into the network
    net.b{1} = b;      % write the updated bias back into the network
    y = sim(net, P);
    e = T - y          % no semicolon: display the error after each pass
end
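Once the loop exits, the mean absolute error is zero and the perceptron reproduces the AND truth table. A minimal check, reusing the net, P, and T defined above:

% Verify the trained perceptron against the AND targets
y = sim(net, P)   % should equal T = [0 1 0 0 0]
net.iw{1,1}       % learned weights
net.b{1}          % learned bias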
learnp performs weight and bias learning for perceptron networks. Its learning rule adjusts the network's weights and biases so as to minimize the network's mean absolute error, thereby classifying the input vectors.
help learnp
LEARNP Perceptron weight/bias learning function.
Syntax
[dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
[db,LS] = learnp(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnp(code)
Description
LEARNP is the perceptron weight/bias learning function.
LEARNP(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - SxR weight matrix (or b, an Sx1 bias vector).
P - RxQ input vectors (or ones(1,Q)).
Z - SxQ weighted input vectors.
N - SxQ net input vectors.
A - SxQ output vectors.
T - SxQ layer target vectors.
E - SxQ layer error vectors.
gW - SxR gradient with respect to performance.
gA - SxQ output gradient with respect to performance.
D - SxS neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - SxR weight (or bias) change matrix.
LS - New learning state.
LEARNP(CODE) returns useful information for each CODE string:
'pnames' - Returns names of learning parameters.
'pdefaults' - Returns default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
Examples
Here we define a random input P and error E to a layer
with a 2-element input and 3 neurons.
p = rand(2,1);
e = rand(3,1);
Since LEARNP only needs these values to calculate a weight
change (see Algorithm below), we will use them to do so.
dW = learnp([],p,[],[],[],[],e,[],[],[],[],[])
Network Use
You can create a standard network that uses LEARNP with NEWP.
To prepare the weights and the bias of layer i of a custom network
to learn with LEARNP:
1) Set NET.trainFcn to 'trainb'.
(NET.trainParam will automatically become TRAINB's default parameters.)
2) Set NET.adaptFcn to 'trains'.
(NET.adaptParam will automatically become TRAINS's default parameters.)
3) Set each NET.inputWeights{i,j}.learnFcn to 'learnp'.
Set each NET.layerWeights{i,j}.learnFcn to 'learnp'.
Set NET.biases{i}.learnFcn to 'learnp'.
(Each weight and bias learning parameter property will automatically
become the empty matrix since LEARNP has no learning parameters.)
To train the network (or enable it to adapt):
1) Set NET.trainParam (NET.adaptParam) properties to desired values.
2) Call TRAIN (ADAPT).
See NEWP for adaption and training examples.
Algorithm
LEARNP calculates the weight change dW for a given neuron from the
neuron's input P and error E according to the perceptron learning rule:
dw = 0, if e = 0
= p', if e = 1
= -p', if e = -1
This can be summarized as:
dw = e*p'
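The Algorithm section says learnp reduces to dw = e*p'. As a sanity-check sketch, we can compute the change by hand and compare against learnp's output, reusing the random p and e from the help example above (dW_manual is our own name, not part of the toolbox):

% Sketch: the perceptron rule computed by hand should match learnp's output
p = rand(2,1);
e = rand(3,1);
dW = learnp([], p, [], [], [], [], e, [], [], [], [], []);
dW_manual = e * p';       % the rule dw = e*p'
isequal(dW, dW_manual)    % expect 1 if learnp follows the documented rule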
>> plotpv(P,T)                      % plot the input vectors, marked by target class
>> plotpc(net.iw{1,1},net.b{1})     % overlay the learned decision boundary
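As a closing sketch (the test point and marker style are our own choices), a new input can be classified with the trained net and marked on the same plot:

>> point = [1;1];
>> sim(net, point)                  % returns 1: (1,1) is the only AND-true input
>> hold on
>> plot(point(1), point(2), 'r*')   % mark the test point on the boundary plot
>> hold off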