MATLAB Neural Networks: Perceptron (6)

% Drive the perceptron's learning loop by hand, learning the AND operation
P=[0 1 0 1 1;1 1 1 0 0];
T=[0 1 0 0 0];
net = newp([0 1;0 1],1);
net=init(net);
w=net.iw{1,1};   % the original listing used w and b without initializing them,
b=net.b{1};      % so read the starting weights and bias from the network

y=sim(net,P);
e=T-y;
while (mae(e)>0.0015)
   dw=learnp(w,P,[],[],[],[],e,[],[],[],[],[])
   db=learnp(b,ones(1,5),[],[],[],[],e,[],[],[],[],[])
   % each pass returns the required weight and bias adjustments
   w=w+dw
   b=b+db
   net.iw{1,1}=w
   net.b{1}=b  
   y=sim(net,P);
   e=T-y
end
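For comparison, the same manual loop can usually be replaced by the toolbox's built-in training (a minimal sketch, assuming the classic Neural Network Toolbox where newp, train and sim are available):

```matlab
% Let train drive the perceptron rule instead of looping by hand
net = newp([0 1;0 1],1);   % same two-input, one-neuron perceptron
net = train(net,P,T);      % P and T as defined above
y = sim(net,P);            % once training converges, y matches T
```

train applies the same learnp updates internally, so the result is equivalent; the manual loop is mainly useful when you want to watch the intermediate weights and bias.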

learnp is the learning function for a perceptron network's weights and biases. Its rule adjusts the weights and biases to minimize the network's mean absolute error, so that the perceptron can classify its input vectors.

help learnp
 LEARNP Perceptron weight/bias learning function.
 
   Syntax
  
     [dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
     [db,LS] = learnp(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
     info = learnp(code)
 
   Description
 
     LEARNP is the perceptron weight/bias learning function.
 
     LEARNP(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
       W  - SxR weight matrix (or b, an Sx1 bias vector).
       P  - RxQ input vectors (or ones(1,Q)).
       Z  - SxQ weighted input vectors.
       N  - SxQ net input vectors.
       A  - SxQ output vectors.
       T  - SxQ layer target vectors.
       E  - SxQ layer error vectors.
       gW - SxR gradient with respect to performance.
       gA - SxQ output gradient with respect to performance.
       D  - SxS neuron distances.
       LP - Learning parameters, none, LP = [].
       LS - Learning state, initially should be = [].
     and returns,
       dW - SxR weight (or bias) change matrix.
       LS - New learning state.
 
     LEARNP(CODE) returns useful information for each CODE string:
       'pnames'    - Returns names of learning parameters.
       'pdefaults' - Returns default learning parameters.
       'needg'     - Returns 1 if this function uses gW or gA.
 
   Examples
 
     Here we define a random input P and error E to a layer
     with a 2-element input and 3 neurons.
 
       p = rand(2,1);
       e = rand(3,1);
 
     Since LEARNP only needs these values to calculate a weight
     change (see Algorithm below), we will use them to do so.
 
       dW = learnp([],p,[],[],[],[],e,[],[],[],[],[])
 
   Network Use
 
     You can create a standard network that uses LEARNP with NEWP.
 
     To prepare the weights and the bias of layer i of a custom network
     to learn with LEARNP:
     1) Set NET.trainFcn to 'trainb'.
        (NET.trainParam will automatically become TRAINB's default parameters.)
     2) Set NET.adaptFcn to 'trains'.
        (NET.adaptParam will automatically become TRAINS's default parameters.)
     3) Set each NET.inputWeights{i,j}.learnFcn to 'learnp'.
        Set each NET.layerWeights{i,j}.learnFcn to 'learnp'.
        Set NET.biases{i}.learnFcn to 'learnp'.
        (Each weight and bias learning parameter property will automatically
        become the empty matrix since LEARNP has no learning parameters.)
 
     To train the network (or enable it to adapt):
     1) Set NET.trainParam (NET.adaptParam) properties to desired values.
     2) Call TRAIN (ADAPT).
 
     See NEWP for adaption and training examples.
 
   Algorithm
 
     LEARNP calculates the weight change dW for a given neuron from the
     neuron's input P and error E according to the perceptron learning rule:
 
       dw =  0,  if e =  0
          =  p', if e =  1
          = -p', if e = -1
 
     This can be summarized as:
 
       dw = e*p'
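The rule above can be checked with a tiny numeric example (a sketch; the call uses the signature from the Syntax section, with the unused arguments left empty, as in the help text's own example):

```matlab
% One misclassified input: target 1, output 0, so e = +1
p = [1; 0];                            % a single 2-element input vector
e = 1;                                 % error for that input
dw = learnp([],p,[],[],[],[],e,[],[],[],[],[])
% dw = e*p' = [1 0]: the weight vector moves toward the misclassified input
```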

>> plotpv(P,T)
>> plotpc(net.iw{1,1},net.b{1})


[Figure: the training points plotted by plotpv, with the learned decision boundary drawn by plotpc]